Overview
This article provides technical transparency into how ISMS Copilot's AI systems are built, tested, and operated. These details demonstrate our commitment to responsible AI development through verifiable implementation practices.
Who This Is For
This article is for:
Security and compliance teams evaluating AI governance controls
Auditors assessing AI system implementation against policies
Risk managers requiring technical transparency for AI systems
Technical users wanting to understand the AI architecture
RAG Architecture
ISMS Copilot uses Retrieval-Augmented Generation (RAG) to ground AI responses in verified compliance documentation rather than relying solely on model knowledge.
How RAG Works
Architecture Components:
Retrieval Layer: Semantic search over curated compliance knowledge bases (ISO 27001, NIS2, GDPR, SOC 2, and industry standards)
Generation Layer: Large language models (LLMs) from enterprise AI providers
Grounding Mechanism: Response validation against retrieved sources to reduce hallucinations
Attribution System: Source citation for all AI-generated content enabling user verification
RAG architecture prioritizes factual accuracy by requiring AI responses to reference retrieved compliance documentation. This reduces hallucinations compared to general-purpose AI tools that generate responses from model knowledge alone.
Why RAG Matters for Compliance:
Responses grounded in official compliance frameworks, not probabilistic guesses
Source attribution enables verification against standards
Context validation ensures responses match retrieved documentation
Hallucination mitigation through retrieval constraints
AI Providers & Data Protection
We use enterprise-grade AI providers with strict data protection agreements.
Current Providers
Supported AI Models:
Mistral AI (configurable models)
xAI (configurable models)
OpenAI (configurable models)
Model selection depends on task requirements such as context window size, response latency, and domain specialization.
Zero Data Retention Agreements
All AI providers operate under Zero Data Retention (ZDR) agreements:
Your data is NEVER used to train AI models. ZDR agreements ensure your conversations, uploaded documents, and workspace content remain confidential and are not retained by AI providers beyond processing your requests.
ZDR Agreement Terms:
No user data retention beyond request processing
No model training on customer content
GDPR-compliant data transfers with Standard Contractual Clauses (SCCs)
Enterprise security standards enforced
For detailed processor information and data flows, see our Register of Processing Activities.
Development Requirements
Every AI system component is developed against documented requirements defining expected behavior, safety constraints, and performance thresholds.
Functional Requirements
Scope Definition:
AI provides compliance assistance, not legal advice
Task boundaries: policy generation, gap analysis, audit preparation, document review
Constraint enforcement: no internet access, no code execution, no personal data processing beyond platform usage
Performance Requirements
Quality Targets:
Response accuracy grounded in retrieved sources with citation
Context window sufficient for multi-document compliance analysis
Response time optimized for interactive use (target: under 10 seconds)
Rate limits defined per user tier to ensure system stability
Safety Requirements
Hallucination Mitigation:
Source grounding: responses must reference retrieved documentation
Retrieval validation: responses checked against source content
Confidence scoring: uncertainty acknowledged when sources are ambiguous
User verification disclaimers: all outputs require human review
Content Filtering:
Inappropriate content detection and blocking
Scope boundaries: AI refuses out-of-scope requests (e.g., unrelated topics, medical/legal advice)
Jailbreak and prompt injection protection
See AI Safety & Responsible Use Overview for detailed safety guardrails.
Data Handling Requirements
Privacy by Design:
No user data for model training (ZDR agreements enforced)
Data minimization: only necessary data processed for retrieval and generation
Temporary processing: no long-term storage of prompts/responses beyond user session logs
Retention controls: user-configurable data retention periods (1 day to 7 years, or keep forever)
Transfer controls: GDPR-compliant data transfers with SCCs
For comprehensive data handling practices, see our Privacy Policy.
Verification & Validation Testing
AI systems undergo rigorous testing before deployment. No system goes live without passing requirements-based validation.
Regression Testing
Automated tests run on every code change to ensure existing functionality remains intact.
Test Coverage:
Retrieval accuracy: Precision and recall against ground truth datasets
Response grounding: Verification that outputs cite retrieved sources
Hallucination detection: Comparison against known incorrect responses
Performance benchmarks: Response time and context handling validation
Security Testing
AI systems undergo the same security validation as all platform components.
Testing Pipeline:
SAST (Static Application Security Testing): Code-level vulnerability scanning with Semgrep integration
DAST (Dynamic Application Security Testing): Runtime security validation
Penetration Testing: Annual third-party security assessments
Prompt Injection Testing: Validation against adversarial inputs attempting to bypass safety constraints
Our secure development lifecycle ensures AI systems meet the same security standards as all other platform components. See our Security Policies for detailed testing practices.
User Acceptance Testing
Real-world scenario validation with compliance professionals ensures:
Outputs meet professional quality standards
Responses are appropriate for compliance use cases
Limitations are clearly communicated
Feedback mechanisms are accessible and effective
Deployment Validation Checklist
AI systems are deployed only after meeting documented requirements:
Deployment requires 100% regression test success, cleared security scans (no critical/high-severity vulnerabilities), met performance benchmarks, updated user documentation with limitations, and configured monitoring/alerting for hallucination rate tracking.
Deployments that fail validation are rolled back until requirements are satisfied.
Monitoring & Continuous Improvement
Post-deployment, we monitor AI system behavior to detect degradation, emerging issues, or misuse.
Monitoring Metrics
What We Track:
Hallucination rate: Tracked through user reports and automated detection
Response accuracy: Sampled validation against ground truth compliance standards
Usage patterns: Detection of out-of-scope or inappropriate use
Performance metrics: Response time, retrieval precision, error rates
User feedback: Adverse impact reports, support tickets, feature requests
Continuous Improvement Cycle
Monitoring data informs iterative improvements:
Feedback Loops:
User feedback and adverse impact reports → model updates and retrieval tuning
Security testing results → safety enhancements and control updates
Regulatory changes and best practices → documentation and framework updates
Performance monitoring → accuracy improvements and response optimization
Incident Response
We notify users of AI-related incidents to maintain transparency and trust.
Notification Channels:
Email alerts for critical incidents affecting AI functionality
Slack notifications for subscribed teams
Status page updates with incident timelines and resolutions
NIS2-compliant early warning notifications (24-hour reporting for significant cybersecurity incidents)
Subscribe to our status page to receive real-time notifications about AI system incidents, maintenance, and updates.
Known Limitations
AI systems have inherent limitations that users must understand to use them responsibly.
Technical Limitations
AI outputs may contain inaccuracies (hallucinations) even with RAG grounding. Users must verify all outputs against official standards and regulations.
Current Constraints:
Probabilistic nature: AI generates responses based on statistical patterns, not deterministic logic
No internet access: AI cannot retrieve real-time information or access external websites
No code execution: AI cannot run calculations, execute scripts, or validate technical implementations
Knowledge cutoff: AI model knowledge is limited to training data cutoff dates (varies by provider)
Context limits: Maximum context window constrains the amount of information processed in a single request
Domain boundaries: AI is trained for compliance/security; performance in other domains is not guaranteed
For detailed limitations and workarounds, see our Known Issues page.
User Verification Responsibility
ISMS Copilot is designed to assist, not replace, professional judgment:
Cross-reference AI suggestions with official standards
Validate critical information before submission to auditors
Use the AI as a consultant's assistant, not a replacement for expertise
Exercise professional judgment in applying AI recommendations
See How to Use ISMS Copilot Responsibly for verification best practices.
Reporting & Feedback
User feedback is critical for AI system improvement. We provide multiple mechanisms for reporting issues, inaccuracies, or unexpected behavior.
How to Report Issues
Adverse Impacts or Hallucinations:
Navigate to user menu (top right) > Help Center > Contact Support
Include prompt, response, and screenshots in your report
Expect response within 48 hours
In-Platform Reporting:
Use "Report Issue" button available throughout the platform to flag specific AI responses
What Happens After You Report
Immediate review (within 48 hours): Support team assesses severity and impact
Investigation: Technical team analyzes the issue, reproduces the problem, and identifies root cause
Response: You receive an update on findings and planned actions
Remediation: Issues addressed through model updates, retrieval tuning, code fixes, or documentation improvements
Continuous improvement: Lessons learned integrated into testing and monitoring processes
High-severity issues (safety risks, data leaks, critical hallucinations) are escalated immediately for urgent remediation.
See AI Safety & Responsible Use Overview for detailed reporting instructions.
Documentation Updates
Technical specifications are updated when:
AI providers change (new models, deprecated APIs)
Architecture evolves (new components, validation methods)
Requirements are revised (new safety constraints, performance targets)
Testing practices expand (new validation techniques, security tools)
Updates are communicated through release notes and this documentation page. Subscribe to our status page for change notifications.
What's Next
Getting Help
For technical questions about AI system specifications or to request additional documentation:
Contact support through the Help Center menu
Report safety concerns immediately for investigation
Review the Trust Center for detailed AI governance information
Check the Status Page for known issues