Conversation Too Long Error
If you see "Your conversation is too long. Please start a new conversation to continue." or "AI service ran into an error. Please try again." when sending a message, you've reached the maximum conversation length that the AI can process.
What Happened?
The combined size of your conversation history, uploaded files, and current message has exceeded the AI model's context window limit. This technical limit is 200,000 tokens (approximately 500 pages of text or 800,000 characters).
Think of it like the AI's "working memory"—once the conversation gets too long, the AI can no longer process all the information at once.
Why Did This Happen?
This error typically occurs when you have:
Long Conversation History
Conversations with 100+ messages, especially detailed back-and-forth discussions about complex compliance topics, can accumulate significant context.
Large File Uploads
Uploaded files can consume a large portion of the context limit, particularly:
Multi-sheet spreadsheets - Large Excel files with extensive control matrices or detailed requirements can split into 100+ parts when processed
Lengthy PDF documents - Full policy manuals, standards documentation, or audit reports
Multiple document uploads - Several files uploaded throughout the conversation
Combination of Both
Extended conversations with many messages combined with large file uploads can quickly exceed 200,000 tokens, even if each individual message seems reasonable.
Example: A comprehensive compliance spreadsheet (such as a detailed control matrix or multi-framework requirements document) can split into 100+ parts when processed. When combined with an extended conversation history and additional document uploads, this can easily push total token usage beyond the 200,000 limit.
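To see how the numbers add up, here is a rough sketch using the ~4 characters per token heuristic described later in this article. The message counts and file sizes below are illustrative, not measurements:

```python
# Rough sketch of how context usage accumulates. Uses the
# ~4 characters per token heuristic; all figures are illustrative.

CONTEXT_LIMIT = 200_000  # tokens

def estimate_tokens(char_count: int) -> int:
    """Approximate token count from a character count (1 token ~ 4 chars)."""
    return char_count // 4

# A 120-message conversation averaging ~2,000 characters per message
conversation_tokens = estimate_tokens(120 * 2_000)   # ~60,000 tokens

# A large multi-sheet spreadsheet extracted to ~600,000 characters of text
spreadsheet_tokens = estimate_tokens(600_000)        # ~150,000 tokens

total = conversation_tokens + spreadsheet_tokens     # ~210,000 tokens
print(total > CONTEXT_LIMIT)  # True: the combined context exceeds the limit
```

Neither the conversation nor the spreadsheet hits the limit on its own; it is the combination that pushes the total over 200,000 tokens.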
How to Fix It
1. Start a New Conversation (Recommended)
Your previous conversation is automatically saved and accessible anytime. Simply start a fresh conversation to continue your work.
Steps:
Click the ISMS Copilot logo (top left) to return to the welcome screen
Start typing your next question in a new conversation
Your old conversation remains in your history and workspace
2. Summarize Previous Context
If you need to reference earlier work, copy key findings from your long conversation and include them in your new chat.
Example:
"Based on our previous gap analysis, we identified 12 missing ISO 27001 controls in Annex A.8 (Asset Management). We're implementing a CMDB for A.8.1. Now I need help with A.8.2 (information classification)..."
This gives the AI the context it needs without loading the entire conversation history.
3. Upload Smaller or Fewer Files
For your new conversation, be strategic about file uploads:
Break large documents into sections - Upload only the relevant pages or tabs you need analyzed
Upload one file at a time - Analyze the first document, then start a new conversation for the next
Convert large spreadsheets - Extract specific worksheets to separate files instead of uploading entire workbooks
Remove unnecessary content - Delete cover pages, images, or appendices that aren't needed for analysis
For large compliance spreadsheets with multiple tabs or requirement domains, upload only the specific sections you're working on rather than the entire workbook.
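If you are comfortable with a little scripting, one way to extract a single worksheet is with pandas (assuming pandas and openpyxl are installed). The file and sheet names here are hypothetical examples; the setup step only exists to make the sketch self-contained:

```python
# Sketch: extract one worksheet from a large workbook into its own smaller
# file before uploading. Assumes pandas with openpyxl is installed.
# File and sheet names are hypothetical examples.
import pandas as pd

# --- setup: create a tiny two-sheet workbook just for demonstration ---
with pd.ExcelWriter("control_matrix.xlsx") as writer:
    pd.DataFrame({"Control": ["A.8.1", "A.8.2"]}).to_excel(
        writer, sheet_name="A.8 Asset Management", index=False)
    pd.DataFrame({"Control": ["A.5.1"]}).to_excel(
        writer, sheet_name="A.5 Access Control", index=False)

# Read only the sheet you are currently working on...
df = pd.read_excel("control_matrix.xlsx", sheet_name="A.8 Asset Management")

# ...and save it as a standalone file, far smaller than the full workbook
df.to_excel("a8_asset_management.xlsx", index=False)
```

The resulting single-sheet file is what you upload, keeping the rest of the workbook out of the conversation's context.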
4. Use Separate Conversations for Different Topics
Instead of one long conversation covering everything, create focused conversations:
One conversation per control domain - Separate chats for Access Control (A.5), Cryptography (A.8), Physical Security (A.7), etc.
One conversation per document - Analyze each policy or procedure in its own thread
One conversation per audit area - Keep pre-audit prep separate from post-audit remediation
This approach also makes it easier to find specific discussions later.
Best Practices to Avoid This Error
For Consultants Managing Client Projects
Create separate workspaces for each client
Within each workspace, use separate conversations for different phases (gap analysis, implementation, audit prep)
Export important findings to your own documentation regularly
For ISO 27001 Implementations
Create one conversation per Annex A control category
Generate policies in focused sessions rather than all at once
Keep risk assessments in a separate conversation from control implementation
For Document Analysis
Upload and analyze one policy at a time
For gap analysis of multiple documents, create separate conversations for each
Summarize findings from previous analyses rather than re-uploading files
Automatic Compaction for Think Mode
Good news: Automatic conversation compaction is now live for Think mode (Claude Opus 4.6). When your Think mode conversation approaches the context limit, the system automatically summarizes earlier messages in the background, allowing you to continue indefinitely without starting a new chat.
How it works:
Automatic Summarization: When approaching ~150,000 tokens in Think mode, the backend compacts older messages while preserving key context
Visual Indicator: You'll see a brief "Compacting our conversation..." progress message (amber indicator) during the process
Seamless Continuation: After a few seconds, your conversation resumes normally with full context preserved
Infinite Conversations: No more manual restarts for extended compliance discussions, gap analyses, or policy reviews
Think Mode Only: Compaction is currently available exclusively for Think mode (Claude Opus 4.6). Fast mode and other AI models still have standard conversation length limits. For long, complex compliance work requiring extended context, switch to Think mode.
For other modes: If you're using Fast mode or other AI models and hit the conversation length limit, starting a new conversation remains the recommended approach. Your previous conversation is automatically saved and accessible anytime.
Understanding Token Limits
Different AI models have different context window sizes:
Claude Opus 4.5: 200,000 tokens (~500 pages)
Mistral Medium: 128,000 tokens (~320 pages)
What counts toward the limit:
Every message you send
Every AI response
All uploaded file contents (text extracted from PDFs, spreadsheets, etc.)
System prompts and framework knowledge (minimal, but present)
Token estimation: As a rough guide, 1 token ≈ 4 characters of text. So 200,000 tokens ≈ 800,000 characters ≈ 500 pages of typical business documents.
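The heuristic above can be expressed as a quick back-of-the-envelope calculator. The characters-per-page figure is derived from this article's own numbers (800,000 characters ≈ 500 pages); actual tokenization varies by model:

```python
# Back-of-the-envelope token estimator based on the 1 token ~ 4 characters
# heuristic. Actual tokenization varies by model; treat as approximate.

CHARS_PER_TOKEN = 4
CHARS_PER_PAGE = 1_600  # 800,000 chars / 500 pages, per this article's figures

def estimate_tokens(text: str) -> int:
    """Approximate tokens in a piece of text."""
    return len(text) // CHARS_PER_TOKEN

def estimate_pages(tokens: int) -> float:
    """Approximate page count for a given token budget."""
    return tokens * CHARS_PER_TOKEN / CHARS_PER_PAGE

# 800,000 characters ~= 200,000 tokens ~= 500 pages
print(estimate_tokens("x" * 800_000))   # 200000
print(estimate_pages(200_000))          # 500.0
```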
Getting Help
If you're consistently hitting this limit or need help recovering context from a very long conversation:
Contact support through User Menu → Help Center → Contact Support
Include the conversation title or workspace name
Explain what you were working on and what context you need to preserve
We can help you extract key information and structure your work into manageable conversations
January 2026: We've implemented backend tracking to identify users affected by this error, allowing our support team to provide faster, more targeted assistance.
Related Resources
Known Issues - Token Limit Errors - Technical details and development status
Troubleshooting Common Issues - Other chat and messaging errors
Organizing Work with Workspaces - Best practices for managing multiple projects
Uploading and Analyzing Files - File upload guidelines and limits