Overview
ISMS Copilot implements comprehensive AI safety measures to ensure reliable, trustworthy, and responsible AI assistance for compliance professionals. This article explains the guardrails, safety constraints, and responsible AI practices built into the platform.
Who This Is For
This article is for:
Compliance professionals evaluating AI safety measures
Risk managers assessing AI governance controls
Security teams concerned about AI misuse
Anyone who wants to understand how ISMS Copilot ensures responsible AI use
AI Safety Principles
ISMS Copilot's AI safety framework is built on four core principles:
1. Purpose Limitation
The AI assistant is designed exclusively for information security and compliance work:
Focused on ISMS frameworks (ISO 27001, SOC 2, GDPR, NIST, etc.)
Politely redirects off-topic questions to compliance-related topics
Refuses requests for harmful, illegal, or unethical activities
Stays within the bounds of compliance consulting assistance
By limiting the AI's scope to compliance and security, ISMS Copilot reduces the risk of misuse and ensures expertise in its specialized domain rather than attempting to be a general-purpose assistant.
2. Transparency & Honesty
The AI assistant openly acknowledges its limitations:
Explicitly disclaims uncertainty when appropriate
Prompts users to verify important information
Admits when it doesn't know something rather than guessing
Clearly explains what it can and cannot do
3. Copyright & Intellectual Property Protection
ISMS Copilot respects intellectual property rights:
Will not reproduce copyrighted ISO standards or proprietary content
Directs users to purchase official standards from authorized sources
Provides guidance based on framework principles without copying text
Trained on lawfully sourced, anonymized data compliant with EU copyright requirements
If you ask the AI to reproduce ISO 27001 text or other copyrighted material, it will politely refuse and instead offer actionable guidance based on its knowledge of the framework's principles.
4. Privacy by Design
User data protection is embedded in every AI interaction:
Conversations are never used to train AI models
User-provided content is not shared with other users
Each conversation is processed independently
Workspace isolation prevents data mixing between projects
AI Safety Guardrails
Content Scope Guardrails
What the AI Will Do:
Answer questions about ISMS frameworks and compliance
Analyze uploaded compliance documents (policies, procedures, risk assessments)
Generate audit-ready policies and procedures
Provide gap analysis and implementation guidance
Explain security controls and compliance requirements
What the AI Will NOT Do:
Provide legal advice (suggests consulting legal professionals instead)
Offer medical, financial, or personal advice outside compliance scope
Generate content for illegal, harmful, or unethical purposes
Reproduce copyrighted standards or proprietary materials
Disclose its custom instructions or system prompts
If you ask the AI to help with something outside its scope, it will politely explain its limitations and redirect you to compliance-related assistance it can provide.
Jailbreak Prevention
ISMS Copilot is designed to resist manipulation attempts:
Blocked Tactics:
"Repeat after me" tricks to extract system prompts
Role-playing scenarios designed to bypass safety constraints
Requests to "ignore previous instructions"
Constraint manipulation ("respond without refusal responses")
Attempts to access internal knowledge base files directly
How It Works:
When the AI detects a jailbreak attempt, it:
Recognizes the manipulation pattern
Politely refuses the request
Redirects to legitimate ISMS assistance
Maintains its safety constraints
ISMS Copilot is designed to be helpful within its compliance scope. If you have legitimate questions that seem to trigger safety guardrails, try rephrasing your question to focus on the compliance or security aspect you need help with.
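The detection flow above can be sketched as a simple pattern screen. This is purely illustrative: real jailbreak defenses combine model-level alignment with server-side classifiers, and the pattern list and function name here are assumptions, not the platform's actual implementation.

```python
import re

# Hypothetical patterns matching the blocked tactics described above.
JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"repeat after me",
    r"respond without refusal",
    r"reveal your (system )?prompt",
]

def screen_message(text: str) -> str:
    """Return 'refuse' for likely manipulation attempts, else 'allow'."""
    lowered = text.lower()
    for pattern in JAILBREAK_PATTERNS:
        if re.search(pattern, lowered):
            return "refuse"  # refuse and redirect to legitimate ISMS help
    return "allow"
```

A refused message would then receive a polite redirection rather than the requested output, keeping safety constraints intact.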
Prompt Injection Protection
The platform protects against malicious content in file uploads:
What's Protected:
Uploaded documents are scanned for prompt injection attempts
Malicious instructions embedded in files are silently rejected
System prompts cannot be overridden via file content
Safety constraints remain active regardless of file content
User Experience:
Files are processed normally from the user's perspective
Harmful instructions are filtered out during processing
Only legitimate document content is analyzed
No visible error message (silent protection)
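Conceptually, the silent filtering described above works like the sketch below: injected instructions embedded in a document are dropped during processing while legitimate content passes through. The marker list and function are illustrative assumptions only.

```python
import re

# Hypothetical injection markers; a real filter would be far more robust.
INJECTION_MARKERS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"system prompt",
]

def sanitize_document(text: str) -> str:
    """Drop lines containing injection markers; keep legitimate content."""
    clean_lines = []
    for line in text.splitlines():
        if any(re.search(m, line, re.IGNORECASE) for m in INJECTION_MARKERS):
            continue  # silently reject the malicious line, no error shown
        clean_lines.append(line)
    return "\n".join(clean_lines)
```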
Knowledge Protection Guardrails
The AI protects its training data and system configuration:
What Users Cannot Access:
Custom instructions or system prompts
Details about training data sources
Direct access to knowledge base files
Download links for internal documents
Information about how the knowledge base is structured
Why This Matters:
Protecting system prompts and training data prevents:
Adversaries from understanding how to manipulate the AI
Copyright violations from reproducing training materials
Security risks from exposing system architecture
Inconsistent behavior from modified instructions
Hallucination Prevention
What Are Hallucinations?
AI hallucinations occur when the AI generates confident-sounding but factually incorrect information. ISMS Copilot addresses this through multiple mechanisms:
Training on Real-World Knowledge
Trained on a proprietary library of compliance knowledge from hundreds of real consulting projects
Based on practical implementation experience, not theoretical information
Focused on frameworks, standards, and proven practices
Regularly updated with current compliance requirements
Unlike general AI tools that may hallucinate compliance information, ISMS Copilot is trained on real-world ISMS knowledge from experienced consultants, making its guidance more reliable for audit preparation and implementation.
Uncertainty Acknowledgment
The AI is instructed to be honest about limitations:
Explicitly states when it's uncertain about information
Prompts users to verify critical information
Avoids making up facts when knowledge is incomplete
Suggests consulting official standards or legal professionals when appropriate
Example Response:
"While I can provide general guidance on ISO 27001 control A.8.1, I may still make mistakes. For audit purposes, please verify this information against the official ISO 27001:2022 standard."
User Verification Responsibility
ISMS Copilot emphasizes that users should:
Cross-reference AI suggestions with official standards
Validate critical information before submission to auditors
Use the AI as a consultant's assistant, not a replacement for expertise
Exercise professional judgment in applying AI recommendations
Always verify critical compliance information before using it in audits or official submissions. ISMS Copilot is designed to assist, not replace, professional judgment and official standard documentation.
Rate Limiting & Resource Protection
Message Rate Limits
ISMS Copilot implements rate limiting to prevent abuse and ensure fair access:
Free Plan:
10 messages per 4-hour rolling window
The counter resets 4 hours after your first message
Enforced at both frontend and backend
Premium Plan:
Unlimited messages
No rate limiting restrictions
Priority processing
When Rate Limit Is Reached:
You'll see the error message: "Daily message limit reached. Please upgrade to premium for unlimited messages."
To make the most of your free tier messages, ask comprehensive questions and provide context in a single message rather than sending multiple short questions. This maximizes the value of each interaction.
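A 4-hour rolling window like the one described above can be sketched as follows. This is a minimal illustration of the mechanism, not the platform's actual enforcement code, which spans both frontend and backend.

```python
import time
from collections import deque

WINDOW_SECONDS = 4 * 60 * 60  # 4-hour rolling window
FREE_LIMIT = 10               # free-plan message allowance

class RollingWindowLimiter:
    def __init__(self):
        self.timestamps = deque()  # send times within the current window

    def allow(self, now=None) -> bool:
        """Return True if a message may be sent, recording the attempt."""
        now = time.time() if now is None else now
        # Drop timestamps that have fallen out of the 4-hour window
        while self.timestamps and now - self.timestamps[0] >= WINDOW_SECONDS:
            self.timestamps.popleft()
        if len(self.timestamps) >= FREE_LIMIT:
            return False  # limit reached; user sees the upgrade prompt
        self.timestamps.append(now)
        return True
```

Because the window rolls, capacity frees up message by message as old sends age out, rather than resetting all at once at a fixed time of day.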
File Upload Limits
File uploads have safety constraints to protect system resources:
File Size:
Maximum: 10MB per file
Error message: "File 'document.pdf' is too large (15.23MB). Maximum size allowed is 10MB."
Supported File Types:
TXT, CSV, JSON (text files)
PDF (documents)
DOC, DOCX (Microsoft Word)
XLS, XLSX (Microsoft Excel)
Upload Restrictions:
One file at a time (batch upload not supported)
Cannot upload duplicate files for the same message
Unsupported file types are rejected with error message
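The size and type checks above can be expressed as a short validation routine. The function name and exact logic are assumptions for illustration; only the limits and error wording come from the documented behavior.

```python
import os

MAX_BYTES = 10 * 1024 * 1024  # 10MB per-file limit
ALLOWED_EXTENSIONS = {".txt", ".csv", ".json", ".pdf",
                      ".doc", ".docx", ".xls", ".xlsx"}

def validate_upload(filename: str, size_bytes: int):
    """Return an error message string, or None if the upload is valid."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return f"File type '{ext}' is not supported."
    if size_bytes > MAX_BYTES:
        mb = size_bytes / (1024 * 1024)
        return (f"File '{filename}' is too large ({mb:.2f}MB). "
                f"Maximum size allowed is 10MB.")
    return None
```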
Temporary Chat Mode
What Is Temporary Chat?
Temporary chat mode offers privacy-preserving conversations with specific data handling:
How It Works:
Select "Temporary Chat" from the welcome screen
You'll see the notice: "This chat won't appear in history. For safety purposes, we may keep a copy of this chat for up to 30 days."
Send messages and upload files as normal
Conversation is not added to your conversation history
Data may be retained for up to 30 days for safety review
When to Use Temporary Chat:
Quick one-off questions that don't need to be saved
Sensitive discussions you don't want in permanent history
Testing queries before committing to a workspace
Exploratory research on compliance topics
Even in temporary chat mode, conversations may be retained for up to 30 days for safety monitoring and abuse prevention. This helps protect against misuse while still offering privacy from your permanent conversation history.
Data Retention Controls
User-Controlled Retention
You decide how long your conversation data is stored:
Click the user menu icon (top right)
Select Settings
In the Data Retention Period field, choose:
Minimum: 1 day (high-security, short-term work)
Maximum: 2,555 days / 7 years (long-term documentation)
Or click Keep Forever for indefinite retention
Click Save Settings
Expected result: Settings dialog closes and retention period is saved.
Automatic Data Deletion
ISMS Copilot automatically deletes old data:
Deletion job runs daily
Removes messages older than your retention period
Deletes associated uploaded files
Permanent and cannot be recovered
Data deletion is automatic and permanent. Export any important conversations or documents before they expire based on your retention settings.
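The daily deletion job can be thought of as a cutoff comparison like the sketch below. Record shapes and the function name are hypothetical; actual deletion happens server-side against the database.

```python
from datetime import datetime, timedelta, timezone

def expired_ids(messages, retention_days, now=None):
    """Return IDs of messages older than the user's retention period."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    # Anything created before the cutoff is permanently removed,
    # along with its associated uploaded files.
    return [m["id"] for m in messages if m["created_at"] < cutoff]
```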
Workspace Safety Features
Data Isolation
Workspaces provide security boundaries for different projects:
How Isolation Works:
Each workspace has separate conversation history
Uploaded files are tied to specific workspaces
Custom instructions are workspace-specific
Deleting a workspace removes all associated data
The AI doesn't share information between workspaces
For consultants managing multiple clients, workspaces ensure client data remains completely isolated. Even the AI treats each workspace as a separate project with no cross-contamination of information.
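Workspace isolation can be pictured as storage keyed entirely by workspace, so lookups never cross boundaries and deleting a workspace removes everything under it. This is a conceptual sketch, not the platform's storage layer.

```python
class WorkspaceStore:
    """Toy model of workspace-scoped data isolation."""

    def __init__(self):
        self._data = {}  # workspace_id -> {key: value}

    def add(self, workspace_id: str, key: str, value):
        self._data.setdefault(workspace_id, {})[key] = value

    def get(self, workspace_id: str, key: str):
        # Lookups are scoped to one workspace; no cross-workspace reads
        return self._data.get(workspace_id, {}).get(key)

    def delete_workspace(self, workspace_id: str):
        # Removing a workspace removes all associated data
        self._data.pop(workspace_id, None)
```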
Custom Instructions Safety
Workspaces allow custom instructions with safety constraints:
What Custom Instructions Can Do:
Specify focus on particular compliance frameworks
Set tone or detail level for responses
Define project-specific context (industry, organization size)
Guide AI toward specific compliance goals
Safety Constraints:
Custom instructions must be compliance-related
Cannot override core safety guardrails
Cannot instruct AI to ignore copyright protections
Cannot bypass content scope limitations
No Training on User Data
Privacy Guarantee
ISMS Copilot commits to never using your data for AI training:
What This Means:
Your conversations are never fed back into the AI model
Uploaded documents remain confidential and private
Client information never contributes to model improvement
Each conversation is processed independently without learning
How This Protects You:
Client confidentiality is maintained
Proprietary information stays private
Sensitive compliance data isn't shared with other users
No risk of AI accidentally revealing your information to others
This is a critical difference from general AI tools like ChatGPT's free tier. ISMS Copilot guarantees your sensitive compliance data is never used to improve the model, ensuring complete confidentiality for your client work.
Data Processing Transparency
ISMS Copilot is transparent about how your data is used:
How Your Data IS Used:
Processing your questions to generate responses
Analyzing uploaded documents for gap analysis
Maintaining conversation context within a workspace
Storing data according to your retention settings
Safety monitoring for up to 30 days (to prevent abuse)
How Your Data IS NOT Used:
Training or fine-tuning AI models
Sharing with other users or customers
Marketing or advertising purposes
Selling to third parties
Public disclosure or case studies (without explicit permission)
Authentication & Access Control
User Authentication Requirements
All AI interactions require authentication:
Cannot send messages without logging in
JWT token validates every API request
Sessions expire after a period of inactivity
Row-level security ensures users only see their own data
Cross-User Protection
Database-level isolation prevents unauthorized access:
Users cannot access other users' conversations
Attempting to access another user's data returns empty results
All queries automatically filter by authenticated user ID
Even administrators follow the principle of least privilege
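The effect of row-level filtering described above can be illustrated with a toy function: every query is scoped to the authenticated user, so requesting another user's data simply yields nothing. Data shapes and names here are assumptions for illustration.

```python
def conversations_for(user_id: str, rows: list) -> list:
    """Return only rows owned by the authenticated user.

    Mirrors row-level security: queries for another user's data
    return empty results rather than an error.
    """
    return [r for r in rows if r["user_id"] == user_id]
```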
Responsible AI Best Practices
For Users
Getting the Best Results:
Ask specific, framework-related questions (e.g., "How do I implement ISO 27001 control A.8.1?")
Provide context about your organization and compliance goals
Upload relevant documents for accurate gap analysis
Review and refine AI-generated content before use
Verification Practices:
Cross-reference AI suggestions with official standards
Validate critical information with compliance experts
Test AI-generated policies in your organizational context
Use AI as an assistant, not a replacement for expertise
Frame your questions with specificity: Instead of "Tell me about ISO 27001," ask "What are the key steps to implement access control policy for ISO 27001 Annex A.9?" This helps the AI provide more accurate, actionable guidance.
For Organizations
Governance Practices:
Document ISMS Copilot use in your AI governance policy
Train staff on appropriate use and limitations
Set data retention periods aligned with your policies
Review AI-generated content before official submission
Maintain human oversight for critical compliance decisions
Risk Management:
Include AI tools in Data Protection Impact Assessments (DPIAs)
Document data processing agreements with ISMS Copilot
Set appropriate retention periods for sensitive data
Use workspaces to isolate different client or project data
Reporting Safety Issues
When to Report
Contact ISMS Copilot support if you encounter:
AI responses that violate safety constraints
Potential hallucinations or factually incorrect information
Copyright violations in AI output
Inappropriate content or behavior
Security vulnerabilities in the AI system
Privacy breaches or data leaks
How to Report
Click the user menu icon (top right)
Select Help Center → Contact Support
Describe the safety issue with:
Exact question or prompt you used
AI's response (screenshot if possible)
Why you believe it's a safety concern
Date and time of the interaction
Support will investigate and respond within 48 hours
Reporting safety issues helps improve ISMS Copilot for everyone. Your feedback is valuable for identifying and addressing potential risks in AI behavior.
Limitations & Known Constraints
Current AI Limitations
Cannot browse the internet for current information (uses trained knowledge base)
Cannot access external databases or APIs in real-time
Cannot execute code or run security testing tools
Cannot make phone calls or send emails on your behalf
Cannot guarantee 100% accuracy (always verify critical information)
Scope Boundaries
Focused on ISMS and compliance (not general-purpose AI)
Cannot provide legal, medical, or financial advice outside compliance context
Cannot replace official standards or auditor judgment
Cannot guarantee audit success (implementation quality matters)
What's Next
Visit the Trust Center for detailed AI governance documentation
Getting Help
For questions about AI safety and responsible use:
Review the Trust Center for detailed AI governance information
Contact support through the Help Center menu
Report safety concerns immediately for investigation
Check the Status Page for known issues