
AI System Disclaimer

ISMS Copilot is an artificial intelligence (AI) system. This notice explains what that means for your interaction with the platform and your responsibilities when using AI-generated content.

You Are Interacting with AI

When you send messages, upload documents, or receive responses in ISMS Copilot, you are interacting with an AI system—not a human expert. The system uses large language models combined with a specialized compliance knowledge base to provide guidance on ISO 27001, SOC 2, GDPR, and other frameworks.

This disclosure is required under the EU AI Act (Regulation (EU) 2024/1689, Article 50). ISMS Copilot is classified as a limited-risk AI system because it interacts directly with users and generates text content.

AI-Generated Content

All responses, policy drafts, procedure documents, risk assessments, and other outputs you receive from ISMS Copilot are generated by AI. This means:

  • Not human expertise: Outputs are not written by compliance consultants, auditors, or legal professionals

  • Require verification: You must review and validate all content before use in audits, certifications, or regulatory submissions

  • May contain errors: AI can produce incorrect, incomplete, or outdated information despite safeguards

  • Generic by default: Outputs need customization to your specific organizational context, risk environment, and requirements

Never submit AI-generated policies, procedures, or assessments directly to auditors or certification bodies without thorough human review by qualified compliance professionals.

Your Responsibilities

When using ISMS Copilot, you are responsible for:

  1. Verification: Cross-check AI outputs against official standards (ISO 27001:2022, SOC 2 TSC, GDPR text, etc.)

  2. Customization: Adapt generic content to your organization's size, industry, risk profile, and compliance scope

  3. Professional judgment: Apply your expertise or consult qualified professionals for final decisions

  4. Implementation: Ensure AI-drafted controls and procedures are actually implemented and effective—documentation alone does not achieve compliance

  5. Transparency: If you share or publish AI-generated content externally, disclose that it was produced using AI where context requires

Treat ISMS Copilot as a research assistant and drafting tool, not a replacement for compliance expertise. Use it to accelerate workflows, generate starting points, and explore framework requirements—but always apply human oversight.

Limitations of AI

ISMS Copilot's AI has specific constraints:

  • Knowledge cutoff: Training data is current as of early 2025; recent regulatory changes or framework updates may not be reflected

  • Hallucinations: Despite safeguards, AI can generate confident-sounding but incorrect information (see AI Safety & Responsible Use for mitigation details)

  • No real-time data: Cannot access live databases, current threat intelligence, or your organization's live systems

  • Generic context: Lacks deep knowledge of your specific business model, operational environment, or unique risks unless you provide detailed prompts

  • Not legal advice: Cannot interpret laws, regulations, or contractual obligations specific to your jurisdiction or situation

No Certification Guarantee

Using ISMS Copilot does not guarantee you will achieve ISO 27001 certification, SOC 2 compliance, GDPR compliance, or any other regulatory outcome. Certification and compliance depend on:

  • Actual implementation and operation of controls (not just documentation)

  • Demonstrated effectiveness over time

  • Independent assessment by accredited certification bodies or auditors

  • Organizational maturity and commitment to continuous improvement

See Service Limitations and Disclaimers for complete details.

EU AI Act Compliance

ISMS Copilot complies with the EU AI Act (Regulation 2024/1689) transparency requirements for limited-risk AI systems:

  • Article 50(1): Users are informed they are interacting with AI through this disclaimer and in-app notices

  • Article 50(2): AI-generated text outputs are marked as artificially generated in metadata and user-facing disclaimers

  • Transparency: AI capabilities, limitations, and data handling practices are documented in the Help Center and Trust Center

ISMS Copilot is designed and operated in the EU (France) with full GDPR and AI Act compliance. See our Privacy Policy and Trust Center for detailed governance information.

How We Mitigate AI Risks

ISMS Copilot implements multiple safeguards to make AI interaction safer and more reliable:

  • Dynamic framework knowledge injection (v2.5): Detects framework mentions (ISO 27001, SOC 2, GDPR, etc.) and injects verified knowledge before generating responses, reducing hallucinations

  • Specialized training: AI is trained on a proprietary compliance knowledge base from real consulting projects, not generic internet data

  • No user data training: Your conversations and documents are never used to train or improve AI models

  • Uncertainty acknowledgment: AI explicitly states when information is uncertain and prompts you to verify

  • Scope limitation: AI is constrained to compliance topics and refuses off-topic or harmful requests

For complete details, see AI Safety & Responsible Use Overview.

Data Privacy & Security

When you interact with ISMS Copilot's AI:

  • Your prompts and uploaded documents are processed to generate responses

  • Conversation data is stored according to your retention settings (1 day to 7 years, or forever)

  • Data is hosted in the EU (Frankfurt, Germany) with end-to-end encryption

  • Workspace isolation ensures client/project data separation

  • AI providers (Mistral, OpenAI, xAI) operate under zero data retention agreements—they do not store or train on your data

See Privacy Policy and Your Rights Under GDPR for full details.

Reporting AI Issues

If you encounter AI-generated content that is incorrect or inappropriate, infringes copyright, or raises safety concerns, report it immediately:

  1. Click the user menu icon (top right) → Help Center → Contact Support

  2. Describe the issue, including your prompt, the AI response, and why it concerns you

  3. Support will investigate and respond within 48 hours

Your reports help improve AI safety and reliability for all users.
