System Prompts
Overview
This article provides an overview of ISMS Copilot's system prompt capabilities as of January 2, 2026. The system prompt defines how the AI assistant behaves, communicates, and helps users with compliance and security tasks. The article serves as a versioned reference point for tracking improvements over time.
For security and safety reasons, exact system prompt text is never disclosed. This article provides high-level overviews of functionality and recent improvements only.
Core Capabilities (January 2, 2026)
What the System Prompt Does
The system prompt is the underlying instruction set that guides ISMS Copilot's behavior. It enables the assistant to:
Provide compliance expertise – Deliver accurate guidance on ISO 27001, SOC 2, NIST, GDPR, DORA, NIS2, and other frameworks
Maintain professional tone – Communicate as a trusted advisor with warmth and clarity
Generate structured outputs – Create policies, risk assessments, gap analyses, and audit-ready documentation
Handle uncertainty transparently – Directly admit when information is incomplete rather than guessing
Apply context-aware guidance – Adjust disclaimers and legal notices based on query type
Stay focused on purpose – Redirect off-topic requests back to compliance and security
Dynamic Configuration
System prompts are dynamically constructed using workspace-level settings:
Persona – Defines the AI's role (ISO 27001 Expert, SOC 2 Consultant, Auditor, etc.)
Custom Instructions – User-provided guidance tailored to specific projects or clients
Answer Style – Controls response length (concise, normal, or detailed)
These configurations combine with framework knowledge injection to generate contextual, accurate responses.
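As a rough illustration of how these settings might combine, the sketch below assembles a prompt from workspace-level configuration. The WorkspaceSettings shape, its field names, and the buildSystemPrompt function are hypothetical simplifications; they do not reflect the actual prompt text or any internal API.

```typescript
// Hypothetical, simplified sketch of combining workspace settings into a
// system prompt. Names and structure are illustrative only.
type AnswerStyle = "concise" | "normal" | "detailed";

interface WorkspaceSettings {
  persona: string;              // e.g. "ISO 27001 Expert", "SOC 2 Consultant", "Auditor"
  customInstructions?: string;  // project- or client-specific guidance
  answerStyle: AnswerStyle;     // controls response length
}

function buildSystemPrompt(settings: WorkspaceSettings, frameworkContext: string): string {
  const sections: Array<string | null> = [
    `Role: ${settings.persona}`,
    `Answer style: ${settings.answerStyle}`,
    settings.customInstructions
      ? `Workspace instructions:\n${settings.customInstructions}`
      : null,
    `Framework knowledge:\n${frameworkContext}`, // injected per query (see Technical Foundation below)
  ];
  return sections.filter((s): s is string => s !== null).join("\n\n");
}
```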
Recent Improvements
January 2026: Assistant Personality Enhancements
The most significant system prompt update focused on making the assistant more helpful through:
Warmer Professional Tone – Responses now feel like guidance from a trusted advisor rather than a formal documentation tool. The assistant balances professionalism with approachability.
More Natural Communication – The assistant now uses natural prose instead of defaulting to bullet points, making responses feel less robotic and more conversational.
Proactive Editing Behavior – Instead of only suggesting changes, the assistant now provides direct fixes and actionable checklists when reviewing documents or policies.
Context-Aware Legal Disclaimers – Legal disclaimers and warnings now appear only when relevant (e.g., discussions of fines, contracts, legal obligations) rather than in every response (see the illustrative sketch below).
Clearer Uncertainty Handling – When the assistant doesn't have complete information, it directly admits gaps instead of using hedging language like "typically" or "usually."
These changes make ISMS Copilot feel less like a generic AI tool and more like a specialized compliance partner.
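As a purely illustrative aside (the real mechanism lives in the undisclosed system prompt and is more nuanced), context-aware disclaimers can be pictured as a conditional check on the query. The needsLegalDisclaimer function and its term list below are assumptions for illustration only.

```typescript
// Purely illustrative: a naive keyword check deciding whether a legal
// disclaimer should accompany a response. Not the actual mechanism.
const LEGAL_TRIGGERS = ["fine", "penalty", "contract", "liability", "legal obligation"];

function needsLegalDisclaimer(query: string): boolean {
  const normalized = query.toLowerCase();
  return LEGAL_TRIGGERS.some((term) => normalized.includes(term));
}

// needsLegalDisclaimer("What fines apply under GDPR Article 83?")  -> true
// needsLegalDisclaimer("Draft an access control policy outline")   -> false
```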
December 2025: Relaxed Guardrails
System prompt guardrails were refined to balance safety with flexibility:
Workspace instructions now accept legitimate custom context, client-specific guidance, and intellectual property
Reduced false positives where harmless compliance questions triggered overly restrictive responses
Maintained core safety boundaries: still refuses harmful, illegal, or unethical requests
Improved support for multi-client workspace configurations
This update made the assistant more adaptable to real-world consulting and audit scenarios.
October 2025: Response Refinements
Earlier system prompt adjustments focused on making responses more concise and natural, laying the groundwork for January 2026's personality improvements.
Check the Product Changelog for detailed release notes on these updates and future enhancements.
Technical Foundation
Framework Knowledge Injection
The system prompt works alongside dynamic framework knowledge injection (version 2.5, launched December 2025). This system enriches queries with specialized compliance information before processing, ensuring responses are grounded in actual framework requirements.
Learn more in the Dynamic Framework Knowledge Injection article.
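At a high level, and only as an assumed sketch, knowledge injection can be pictured as enriching a query with retrieved framework context before the model processes it. The retrieveFrameworkContext helper and EnrichedQuery shape below are hypothetical, not a documented API.

```typescript
// Hypothetical sketch of enriching a user query with framework context
// before model processing, so answers are grounded in framework requirements.
// retrieveFrameworkContext is an assumed helper, not a documented API.
interface EnrichedQuery {
  userQuery: string;
  frameworkContext: string; // e.g. relevant ISO 27001 / SOC 2 requirement summaries
}

async function enrichQuery(
  userQuery: string,
  retrieveFrameworkContext: (q: string) => Promise<string>
): Promise<EnrichedQuery> {
  const frameworkContext = await retrieveFrameworkContext(userQuery);
  return { userQuery, frameworkContext };
}
```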
Safety Mechanisms
Embedded safety guardrails ensure the assistant:
Refuses harmful, illegal, or unethical requests
Prevents reproduction of copyrighted framework standards
Blocks prompt disclosure attempts and jailbreak techniques
Maintains focus on compliance and security topics
Users cannot view or directly edit raw system prompts. This is an intentional security measure that protects the assistant's integrity and safety controls.
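Conceptually, and only as an illustration (the actual guardrails are embedded in the system prompt and are not user-visible), the checks listed above could be modeled as a pre-response verdict. The checkGuardrails function, its patterns, and its categories are assumptions.

```typescript
// Illustrative only: a coarse model of the guardrail categories listed above.
// The real checks live in the system prompt and supporting safeguards.
type GuardrailVerdict =
  | { allowed: true }
  | {
      allowed: false;
      reason: "harmful_request" | "copyright_reproduction" | "prompt_disclosure" | "off_topic";
    };

function checkGuardrails(query: string): GuardrailVerdict {
  if (/reveal.*system prompt|ignore.*previous instructions/i.test(query)) {
    return { allowed: false, reason: "prompt_disclosure" };
  }
  if (/full text of iso 27001|reproduce .* standard verbatim/i.test(query)) {
    return { allowed: false, reason: "copyright_reproduction" };
  }
  // Further checks for harmful or illegal requests and topic focus would follow here.
  return { allowed: true };
}
```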
How Users Experience System Prompts
Workspace Configuration
While you can't access the raw system prompt, you can influence AI behavior through workspace settings:
Navigate to Workspaces
Select a workspace and click Edit
Choose a Persona that matches your needs
Add Custom Instructions for project-specific guidance
Select your preferred Answer Style
Save changes to apply them to all conversations in that workspace
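As a concrete but hypothetical example of what such a configuration amounts to, a consultant's client workspace might carry settings along these lines; the field names echo the sketch earlier in this article and are not the product's actual schema.

```typescript
// Hypothetical workspace configuration; field names are illustrative,
// not the product's actual schema.
const clientWorkspace = {
  persona: "ISO 27001 Expert",
  customInstructions:
    "Client runs a SaaS platform hosted on AWS; scope excludes physical offices. " +
    "Reference the client's existing risk register IDs where possible.",
  answerStyle: "concise" as const,
};
```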
Visible Behavior Changes
The January 2026 improvements are most noticeable when you:
Ask for policy reviews – You'll receive direct edits instead of vague suggestions
Request document generation – Responses use natural language with contextual structure
Encounter knowledge gaps – The assistant clearly states what it doesn't know
Discuss complex scenarios – Legal disclaimers appear only when genuinely needed
Related Resources
Product Changelog – Detailed release notes and historical updates
AI System Technical Overview – Architecture and technical foundation
Prompt Engineering Overview – Tips for crafting effective user queries
AI Safety & Responsible Use Overview – Detailed guardrail policies
Future Tracking
This article will be updated when significant system prompt changes occur. Future versions may document:
Enhanced reasoning capabilities
Multi-agent collaboration features
Framework-specific behavior presets
Enterprise-grade safety enhancements
By maintaining this changelog-style overview, teams can understand how ISMS Copilot's AI capabilities evolve while protecting the security and intellectual property of the underlying system.