AI System Technical Overview
Overview
This article provides technical transparency into how ISMS Copilot's AI systems are built, tested, and operated. These details demonstrate our commitment to responsible AI development through verifiable implementation practices.
Who This Is For
This article is for:
Security and compliance teams evaluating AI governance controls
Auditors assessing AI system implementation against policies
Risk managers requiring technical transparency for AI systems
Technical users wanting to understand the AI architecture
Dynamic Framework Knowledge Architecture
ISMS Copilot uses dynamic framework knowledge injection to ground AI responses in verified compliance knowledge. As of version 2.5 (February 2025), this replaces the previous RAG (Retrieval-Augmented Generation) architecture with a more reliable, token-efficient approach.
How Framework Knowledge Injection Works
Architecture Components:
Framework Detection Layer: Regex-based pattern matching detects framework mentions in user queries (ISO 27001, SOC 2, GDPR, HIPAA, CCPA, NIS 2, DORA, ISO 42001, ISO 27701)
Knowledge Injection Layer: Dynamically loads only relevant framework knowledge into AI context based on detected frameworks
Generation Layer: Large language models (LLMs) from enterprise AI providers receive framework knowledge before generating responses
Validation Mechanism: Framework knowledge provided to AI ensures responses are grounded in actual compliance requirements, not probabilistic guessing
Dynamic framework knowledge injection eliminates hallucinations by providing AI with actual framework knowledge before it answers. Detection happens before AI processing (not AI-based), ensuring 100% reliability when frameworks are mentioned.
Why Dynamic Injection Matters for Compliance:
Eliminates hallucination: AI receives verified framework knowledge before answering, preventing fabricated control numbers and requirements
Token efficiency: Only relevant frameworks loaded (~1-2K tokens) vs. sending all knowledge (~10K tokens) on every request
Reliable detection: Regex pattern matching (not AI-based) ensures framework mentions are never missed
Extensible architecture: New frameworks added with single object definition, no model retraining required
Multi-framework support: Handles queries mentioning multiple frameworks simultaneously (e.g., \"Map ISO 27001 to SOC 2\")
Technical Implementation
Detection Process:
User submits query (e.g., \"What is ISO 27001 Annex A.5.9?\")
Framework detection scans query for pattern matches (ISO 27001, GDPR, SOC 2, etc.)
Matched frameworks trigger knowledge injection
Relevant framework knowledge added to AI system prompt before generation
Supported Frameworks (v2.5):
ISO 27001:2022 — Information Security Management System
ISO 42001:2023 — Artificial Intelligence Management System
ISO 27701:2025 — Privacy Information Management System
SOC 2 — Service Organization Control (Trust Services Criteria)
HIPAA — Health Insurance Portability and Accountability Act
GDPR — General Data Protection Regulation
CCPA — California Consumer Privacy Act
NIS 2 — Network and Information Systems Directive
DORA — Digital Operational Resilience Act
More frameworks are continuously being added. Next priorities include NIST 800-53, PCI DSS, and additional regional regulations. Check the Product Changelog for updates.
Evolution from RAG to Dynamic Injection
Previous Approach (Pre-v2.5): RAG Architecture
Semantic search retrieved relevant documentation snippets
Retrieval quality varied based on query phrasing
All ~10K tokens of knowledge sent on many requests
Focused primarily on ISO 27001
Current Approach (v2.5+): Dynamic Framework Injection
Regex-based detection ensures reliable framework identification
Only relevant frameworks loaded (token-efficient)
Supports 9 frameworks simultaneously
Extensible design for rapid framework additions
If you see references to "RAG architecture" in older documentation or external sources, note that ISMS Copilot transitioned to dynamic framework knowledge injection in version 2.5 (February 2025). The new approach is more reliable and supports many more frameworks.
AI Providers & Data Protection
We use enterprise-grade AI providers with strict data protection agreements.
Current Providers
Backend AI Models:
OpenAI GPT-5.2 (default) — Advanced reasoning and compliance analysis
Anthropic Claude Opus — Backend integration for nuanced policy drafting
xAI Grok — Alternative provider for diverse use cases
Mistral AI — EU-based provider for Advanced Data Protection Mode
OpenAI GPT-5.2 is the current default provider powering all conversations. Additional AI providers are integrated on the backend, with model selection UI planned for 2026. All models access the same specialized compliance knowledge base through dynamic framework injection, ensuring consistent, reliable guidance.
Zero Data Retention Agreements
All AI providers operate under Zero Data Retention (ZDR) agreements:
Your data is NEVER used to train AI models. ZDR agreements ensure your conversations, uploaded documents, and workspace content remain confidential and are not retained by AI providers beyond processing your requests.
ZDR Agreement Terms:
No user data retention beyond request processing
No model training on customer content
GDPR-compliant data transfers with Standard Contractual Clauses (SCCs)
Enterprise security standards enforced
For detailed processor information and data flows, see our Register of Processing Activities.
Development Requirements
Every AI system component is developed against documented requirements defining expected behavior, safety constraints, and performance thresholds.
Functional Requirements
Scope Definition:
AI provides compliance assistance, not legal advice
Task boundaries: policy generation, gap analysis, audit preparation, document review
Constraint enforcement: no internet access, no code execution, no personal data processing beyond platform usage
Performance Requirements
Quality Targets:
Response accuracy grounded in retrieved sources with citation
Context window sufficient for multi-document compliance analysis
Response time optimized for interactive use (target: under 10 seconds)
Rate limits defined per user tier to ensure system stability
Safety Requirements
Hallucination Mitigation:
Source grounding: responses must reference retrieved documentation
Retrieval validation: responses checked against source content
Confidence scoring: uncertainty acknowledged when sources are ambiguous
User verification disclaimers: all outputs require human review
Content Filtering:
Inappropriate content detection and blocking
Scope boundaries: AI refuses out-of-scope requests (e.g., unrelated topics, medical/legal advice)
Jailbreak and prompt injection protection
See AI Safety & Responsible Use Overview for detailed safety guardrails.
Data Handling Requirements
Privacy by Design:
No user data for model training (ZDR agreements enforced)
Data minimization: only necessary data processed for retrieval and generation
Temporary processing: no long-term storage of prompts/responses beyond user session logs
Retention controls: user-configurable data retention periods (1 day to 7 years, or keep forever)
Transfer controls: GDPR-compliant data transfers with SCCs
For comprehensive data handling practices, see our Privacy Policy.
Verification & Validation Testing
AI systems undergo rigorous testing before deployment. No system goes live without passing requirements-based validation.
Regression Testing
Automated tests run on every code change to ensure existing functionality remains intact.
Test Coverage:
Retrieval accuracy: Precision and recall against ground truth datasets
Response grounding: Verification that outputs cite retrieved sources
Hallucination detection: Comparison against known incorrect responses
Performance benchmarks: Response time and context handling validation
Security Testing
AI systems undergo the same security validation as all platform components.
Testing Pipeline:
SAST (Static Application Security Testing): Code-level vulnerability scanning with Semgrep integration
DAST (Dynamic Application Security Testing): Runtime security validation
Penetration Testing: Annual third-party security assessments
Prompt Injection Testing: Validation against adversarial inputs attempting to bypass safety constraints
Our secure development lifecycle ensures AI systems meet the same security standards as all other platform components. See our Security Policies for detailed testing practices.
User Acceptance Testing
Real-world scenario validation with compliance professionals ensures:
Outputs meet professional quality standards
Responses are appropriate for compliance use cases
Limitations are clearly communicated
Feedback mechanisms are accessible and effective
Deployment Validation Checklist
AI systems are deployed only after meeting documented requirements:
Deployment requires 100% regression test success, cleared security scans (no critical/high-severity vulnerabilities), met performance benchmarks, updated user documentation with limitations, and configured monitoring/alerting for hallucination rate tracking.
Deployments that fail validation are rolled back until requirements are satisfied.
Monitoring & Continuous Improvement
Post-deployment, we monitor AI system behavior to detect degradation, emerging issues, or misuse.
Monitoring Metrics
What We Track:
Hallucination rate: Tracked through user reports and automated detection
Response accuracy: Sampled validation against ground truth compliance standards
Usage patterns: Detection of out-of-scope or inappropriate use
Performance metrics: Response time, retrieval precision, error rates
User feedback: Adverse impact reports, support tickets, feature requests
Continuous Improvement Cycle
Monitoring data informs iterative improvements:
Feedback Loops:
User feedback and adverse impact reports → model updates and retrieval tuning
Security testing results → safety enhancements and control updates
Regulatory changes and best practices → documentation and framework updates
Performance monitoring → accuracy improvements and response optimization
Incident Response
We notify users of AI-related incidents to maintain transparency and trust.
Notification Channels:
Email alerts for critical incidents affecting AI functionality
Slack notifications for subscribed teams
Status page updates with incident timelines and resolutions
NIS2-compliant early warning notifications (24-hour reporting for significant cybersecurity incidents)
Subscribe to our status page to receive real-time notifications about AI system incidents, maintenance, and updates.
Known Limitations
AI systems have inherent limitations that users must understand to use them responsibly.
Technical Limitations
AI outputs may contain inaccuracies (hallucinations) even with RAG grounding. Users must verify all outputs against official standards and regulations.
Current Constraints:
Probabilistic nature: AI generates responses based on statistical patterns, not deterministic logic
No internet access: AI cannot retrieve real-time information or access external websites
No code execution: AI cannot run calculations, execute scripts, or validate technical implementations
Knowledge cutoff: AI model knowledge is limited to training data cutoff dates (varies by provider)
Context limits: Maximum context window constrains the amount of information processed in a single request
Domain boundaries: AI is trained for compliance/security; performance in other domains is not guaranteed
For detailed limitations and workarounds, see our Known Issues page.
User Verification Responsibility
ISMS Copilot is designed to assist, not replace, professional judgment:
Cross-reference AI suggestions with official standards
Validate critical information before submission to auditors
Use the AI as a consultant's assistant, not a replacement for expertise
Exercise professional judgment in applying AI recommendations
See How to Use ISMS Copilot Responsibly for verification best practices.
Reporting & Feedback
User feedback is critical for AI system improvement. We provide multiple mechanisms for reporting issues, inaccuracies, or unexpected behavior.
How to Report Issues
Adverse Impacts or Hallucinations:
Navigate to user menu (top right) > Help Center > Contact Support
Include prompt, response, and screenshots in your report
Expect response within 48 hours
In-Platform Reporting:
Use "Report Issue" button available throughout the platform to flag specific AI responses
What Happens After You Report
Immediate review (within 48 hours): Support team assesses severity and impact
Investigation: Technical team analyzes the issue, reproduces the problem, and identifies root cause
Response: You receive an update on findings and planned actions
Remediation: Issues addressed through model updates, retrieval tuning, code fixes, or documentation improvements
Continuous improvement: Lessons learned integrated into testing and monitoring processes
High-severity issues (safety risks, data leaks, critical hallucinations) are escalated immediately for urgent remediation.
See AI Safety & Responsible Use Overview for detailed reporting instructions.
Documentation Updates
Technical specifications are updated when:
AI providers change (new models, deprecated APIs)
Architecture evolves (new components, validation methods)
Requirements are revised (new safety constraints, performance targets)
Testing practices expand (new validation techniques, security tools)
Updates are communicated through release notes and this documentation page. Subscribe to our status page for change notifications.
What's Next
Getting Help
For technical questions about AI system specifications or to request additional documentation:
Contact support through the Help Center menu
Report safety concerns immediately for investigation
Review the Trust Center for detailed AI governance information
Check the Status Page for known issues