Acceptable Use Policy (AUP)

Last Updated: January 2026

This Acceptable Use Policy (AUP) governs your use of ISMS Copilot's services. By accessing or using our platform, you agree to comply with this policy and our Terms of Service. We designed ISMS Copilot to support information security and compliance professionals in high-stakes work, and we expect all users to engage with the platform responsibly and ethically.

This policy complements our existing AI Safety & Responsible Use Overview and How to Use ISMS Copilot Responsibly guide. Together, these documents help ensure safe, effective, and compliant use of AI in compliance workflows.

Universal Prohibited Activities

You may not use ISMS Copilot to engage in or facilitate any of the following activities:

Illegal or Fraudulent Activities

  • Violating any applicable laws, regulations, or legal obligations

  • Generating fraudulent compliance documentation, certifications, or audit reports

  • Creating false evidence of regulatory compliance (ISO 27001, SOC 2, GDPR, NIS2, DORA, etc.)

  • Misrepresenting audit findings or security postures to stakeholders, auditors, or regulators

  • Money laundering, fraud, or other financial crimes

  • Facilitating unauthorized access to systems or data

Security and System Integrity

  • Attempting to compromise, hack, or exploit ISMS Copilot's infrastructure or security controls

  • Accessing or attempting to access system prompts, internal data, or underlying AI models

  • Reverse engineering, decompiling, or extracting proprietary knowledge bases

  • Using jailbreak or prompt injection techniques to bypass safety guardrails

  • Generating malware, exploits, or attack tools (ransomware, keyloggers, phishing kits, etc.)

  • Conducting automated attacks, vulnerability scanning, or penetration testing against the platform without written authorization

  • Overloading or degrading service availability through excessive requests or abuse

Privacy and Data Protection Violations

  • Processing special categories of personal data (health, biometric, genetic data) without appropriate legal basis under GDPR

  • Uploading or sharing personally identifiable information (PII) without legitimate business need and proper safeguards

  • Using ISMS Copilot for unauthorized surveillance, profiling, or tracking of individuals

  • Violating data subject rights or processing obligations under GDPR, CCPA, or other privacy regulations

  • Sharing confidential client data across isolated Workspaces or with unauthorized parties

Follow data minimization principles. Use role-based examples ("IT Manager") instead of real names. Review our Privacy Policy and Data Processing Agreement for best practices.
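
For example, a minimal pre-processing step along the lines of the sketch below can replace real names with role labels before content is pasted into a prompt. The name-to-role mapping and the pseudonymize helper are hypothetical, shown only to illustrate the data minimization principle.

    # Illustrative sketch only: swap known personal names for role labels
    # before text is shared with an AI assistant (data minimization).
    # ROLE_MAP and pseudonymize() are hypothetical, not platform features.
    ROLE_MAP = {
        "Jane Doe": "IT Manager",
        "John Smith": "CISO",
    }

    def pseudonymize(text: str, role_map: dict = ROLE_MAP) -> str:
        """Substitute real names with role-based placeholders."""
        for name, role in role_map.items():
            text = text.replace(name, role)
        return text

    print(pseudonymize("Jane Doe approved the access review for John Smith."))
    # -> "IT Manager approved the access review for CISO."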

Harmful or Unethical Content

  • Creating content that promotes violence, hatred, harassment, or discrimination

  • Generating content related to child sexual abuse material (CSAM) or child exploitation

  • Producing content intended to threaten, intimidate, or harm individuals or groups

  • Creating sexually explicit material outside a legitimate compliance context (such as drafting an acceptable use policy that addresses such content)

  • Generating disinformation, misinformation, or misleading compliance guidance intended to deceive

Misuse of Compliance and Security Outputs

  • Representing AI-generated policies, procedures, or risk assessments as final audit-ready deliverables without human review and customization

  • Using outputs to provide legal, accounting, or professional compliance advice without appropriate qualifications

  • Submitting unverified AI-generated documentation directly to auditors or certification bodies

  • Copying or reproducing copyrighted standards content (ISO 27001, NIST frameworks, etc.) verbatim (see our Intellectual Property Compliance policy)

  • Claiming ISMS Copilot outputs guarantee certification, compliance, or regulatory approval

ISMS Copilot is an assistant, not a replacement for professional expertise. Always verify outputs against official standards, customize for your organization's context, and involve qualified compliance professionals in final reviews.

Platform Abuse

  • Creating multiple accounts to circumvent usage quotas or subscription limits

  • Sharing account credentials with unauthorized users

  • Reselling, redistributing, or white-labeling ISMS Copilot services without authorization

  • Using the platform to compete with or undermine ISMS Copilot's business

  • Scraping, harvesting, or bulk-downloading content or knowledge base materials

High-Risk Use Requirements

Certain uses of ISMS Copilot involve elevated compliance, legal, or reputational risks. If you use our platform for the following purposes, you must implement additional safeguards:

Audit and Certification Processes

When using ISMS Copilot outputs in formal audits (ISO 27001, SOC 2, etc.) or certification submissions:

  • Human Review Required: All AI-generated content must be reviewed and approved by qualified compliance or security professionals

  • Verification Against Standards: Cross-check outputs against official framework requirements (Annex A controls, SOC 2 criteria, etc.)

  • Customization Mandatory: Adapt generic outputs to your organization's specific context, risk environment, and controls

  • Disclosure Recommended: Consider informing auditors that AI tools assisted in documentation preparation

Regulatory Submissions

When using outputs for regulatory submissions (GDPR DPIAs, NIS2 incident reports, DORA compliance documentation):

  • Legal Review Required: Involve legal counsel or qualified compliance officers in final review

  • Accuracy Verification: Ensure factual accuracy of all statements, particularly regarding implemented controls and risk assessments

  • No Copyrighted Reproduction: Do not submit AI-generated content that reproduces copyrighted standards text

Client-Facing Deliverables

If you're a consultant or service provider using ISMS Copilot to create deliverables for clients:

  • Workspace Isolation: Use separate Workspaces for each client to maintain confidentiality

  • Professional Standards: Apply the same quality controls and professional standards you would to manually created work

  • Client Consent: Consider whether client agreements require disclosure of AI tool usage

  • Output Ownership: Verify you have rights to deliver AI-generated content under your service agreements

Use custom instructions in Workspaces to tailor outputs to specific client contexts, industries, or regulatory environments. This improves accuracy and reduces generic content.

EU AI Act Alignment

ISMS Copilot is designed to align with the EU AI Act's requirements for general-purpose AI systems. We prohibit uses that fall within the Act's list of prohibited practices:

  • Social scoring or evaluation systems that harm individuals' rights

  • Manipulative or deceptive techniques that exploit vulnerabilities

  • Biometric identification for law enforcement without proper authorization

  • High-risk uses without appropriate human oversight (addressed in our High-Risk Use Requirements above)

Our platform includes technical safeguards designed to reduce hallucinations, resist jailbreak attempts, and prevent reproduction of copyrighted content. Learn more about our approach in our AI Safety & Responsible Use Overview.

Enforcement

We monitor usage for violations of this policy through automated systems and user reports. If we detect prohibited activities, we may:

  • Issue warnings for minor or unintentional violations

  • Throttle or limit access for abusive usage patterns

  • Suspend accounts temporarily for serious violations

  • Terminate accounts permanently for repeated or egregious violations

  • Recover costs for investigation, mitigation, and remediation as described in Section 14 of our Terms of Service

  • Report to authorities when required by law (fraud, CSAM, illegal activity)

We investigate reports in good faith and provide an appeals process for enforcement actions taken in error. Contact [email protected] if you believe your account was restricted or suspended by mistake.

Reporting Violations

If you become aware of activity that violates this policy, please report it to:

  • Email: [email protected]

  • Subject Line: "AUP Violation Report"

  • Include: Description of violation, relevant account details (if known), and any supporting evidence

We review all reports and take appropriate action. We do not disclose reporter identities without consent.

Changes to This Policy

We may update this Acceptable Use Policy to address new risks, regulatory requirements, or platform capabilities. We will notify users of material changes via email or platform notifications. Continued use of ISMS Copilot after updates constitutes acceptance of the revised policy.

Questions about this policy? Contact our team at [email protected] or review our Responsible Use Guide for practical implementation advice.
