Conversation Too Long Error

If you see "Your conversation is too long. Please start a new conversation to continue." or "AI service ran into an error. Please try again." when sending a message, you've reached the maximum conversation length that the AI can process.

What Happened?

The combined size of your conversation history, uploaded files, and current message has exceeded the AI model's context window limit. This technical limit is 200,000 tokens (approximately 500 pages of text or 800,000 characters).

Think of it like the AI's "working memory"—once the conversation gets too long, the AI can no longer process all the information at once.

Why Did This Happen?

This error typically occurs when you have:

Long Conversation History

Conversations with 100+ messages, especially detailed back-and-forth discussions about complex compliance topics, can accumulate significant context.

Large File Uploads

Files consume a large portion of the context limit, particularly:

  • Multi-sheet spreadsheets - Large Excel files with extensive control matrices or detailed requirements can split into 100+ parts when processed

  • Lengthy PDF documents - Full policy manuals, standards documentation, or audit reports

  • Multiple document uploads - Several files uploaded throughout the conversation

Combination of Both

Extended conversations with many messages combined with large file uploads can quickly exceed 200,000 tokens, even if each individual message seems reasonable.

Example: A comprehensive compliance spreadsheet (such as a detailed control matrix or multi-framework requirements document) can split into 100+ parts when processed. When combined with an extended conversation history and additional document uploads, this can easily push total token usage beyond the 200,000 limit.
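
To see how this adds up in practice, here is a rough back-of-the-envelope estimate in Python. All of the figures below (message counts, file sizes) are invented for illustration; they are not how the platform actually measures usage.

    CHARS_PER_TOKEN = 4            # rough rule of thumb: 1 token is about 4 characters
    CONTEXT_LIMIT = 200_000        # tokens available in the model's context window

    history_chars = 150 * 2_000        # 150 messages averaging ~2,000 characters each
    spreadsheet_chars = 120 * 3_000    # a workbook split into 120 parts of ~3,000 characters
    other_files_chars = 200_000        # two PDFs totalling ~200,000 characters

    total_tokens = (history_chars + spreadsheet_chars + other_files_chars) // CHARS_PER_TOKEN
    print(total_tokens)                   # roughly 215,000 tokens, already over the limit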

How to Fix It

1. Start a New Conversation

Your previous conversation is automatically saved and accessible anytime. Simply start a fresh conversation to continue your work.

Steps:

  1. Click the ISMS Copilot logo (top left) to return to the welcome screen

  2. Start typing your next question in a new conversation

  3. Your old conversation remains in your history and workspace

2. Summarize Previous Context

If you need to reference earlier work, copy key findings from your long conversation and include them in your new chat.

Example:

"Based on our previous gap analysis, we identified 12 missing ISO 27001 controls in Annex A.8 (Asset Management). We're implementing a CMDB for A.8.1. Now I need help with A.8.2 (information classification)..."

This gives the AI the context it needs without loading the entire conversation history.

3. Upload Smaller or Fewer Files

For your new conversation, be strategic about file uploads:

  • Break large documents into sections - Upload only the relevant pages or tabs you need analyzed

  • Upload one file at a time - Analyze the first document, then start a new conversation for the next

  • Convert large spreadsheets - Extract specific worksheets to separate files instead of uploading entire workbooks

  • Remove unnecessary content - Delete cover pages, images, or appendices that aren't needed for analysis

For large compliance spreadsheets with multiple tabs or requirement domains, upload only the specific sections you're working on rather than the entire workbook.
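
If you work with Python, one way to extract a single worksheet before uploading is sketched below. The file and sheet names are placeholders, and it assumes the pandas library (with openpyxl) is installed.

    import pandas as pd

    # Placeholder names - replace with your own workbook and worksheet.
    source_workbook = "control_matrix.xlsx"
    worksheet = "Asset Management"

    # Read only the worksheet you need and save it as its own, much smaller file.
    df = pd.read_excel(source_workbook, sheet_name=worksheet)
    df.to_excel("asset_management_only.xlsx", index=False)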

4. Use Separate Conversations for Different Topics

Instead of one long conversation covering everything, create focused conversations:

  • One conversation per control domain - Separate chats for Access Control (A.5), Cryptography (A.8), Physical Security (A.7), etc.

  • One conversation per document - Analyze each policy or procedure in its own thread

  • One conversation per audit area - Keep pre-audit prep separate from post-audit remediation

This approach also makes it easier to find specific discussions later.

Best Practices to Avoid This Error

For Consultants Managing Client Projects

  • Create separate workspaces for each client

  • Within each workspace, use separate conversations for different phases (gap analysis, implementation, audit prep)

  • Export important findings to your own documentation regularly

For ISO 27001 Implementations

  • Create one conversation per Annex A control category

  • Generate policies in focused sessions rather than all at once

  • Keep risk assessments in a separate conversation from control implementation

For Document Analysis

  • Upload and analyze one policy at a time

  • For gap analysis of multiple documents, create separate conversations for each

  • Summarize findings from previous analyses rather than re-uploading files

What's Coming Soon

We're actively developing an automatic rolling context window that will manage long conversations seamlessly—similar to how Claude.ai and ChatGPT handle extended discussions.

How it will work:

  • The system will automatically summarize older messages while preserving key information

  • Recent conversation context and uploaded files will always be available

  • You'll be able to continue conversations indefinitely without manual intervention

  • Important context like audit findings, control numbers, and decisions will be pinned

Until the rolling context window is implemented, starting a new conversation is the recommended solution. Your work is always saved, and you can reference previous conversations anytime.
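
For readers curious about the mechanics, the sketch below shows the general idea behind a rolling context window. It is purely illustrative: the message threshold, the summarize() placeholder, and the overall structure are assumptions, not ISMS Copilot's actual implementation.

    RECENT_MESSAGES_TO_KEEP = 20   # assumed threshold

    def summarize(messages):
        # Placeholder: in practice an AI model condenses older messages while
        # preserving key findings, control numbers, and decisions.
        return f"Summary of {len(messages)} earlier messages"

    def build_context(history, pinned_notes):
        older = history[:-RECENT_MESSAGES_TO_KEEP]
        recent = history[-RECENT_MESSAGES_TO_KEEP:]
        context = []
        if older:
            context.append(summarize(older))   # older messages collapsed into a summary
        context.extend(pinned_notes)           # audit findings and decisions stay verbatim
        context.extend(recent)                 # the latest exchange stays verbatim
        return context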

Understanding Token Limits

Different AI models have different context window sizes:

  • Claude Opus 4.5: 200,000 tokens (~500 pages)

  • Mistral Medium: 128,000 tokens (~320 pages)

What counts toward the limit:

  • Every message you send

  • Every AI response

  • All uploaded file contents (text extracted from PDFs, spreadsheets, etc.)

  • System prompts and framework knowledge (minimal, but present)

Token estimation: As a rough guide, 1 token ≈ 4 characters of text. So 200,000 tokens ≈ 800,000 characters ≈ 500 pages of typical business documents.
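
As a quick sanity check before uploading a document, you can apply the same rule of thumb yourself. The character count below is an assumed example:

    CHARS_PER_TOKEN = 4                # rough rule of thumb
    document_chars = 120_000           # e.g. a ~75-page policy manual (assumed size)

    estimated_tokens = document_chars // CHARS_PER_TOKEN
    print(estimated_tokens)            # 30000 - about 15% of a 200,000-token window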

Getting Help

If you're consistently hitting this limit or need help recovering context from a very long conversation:

  1. Contact support through User Menu → Help Center → Contact Support

  2. Include the conversation title or workspace name

  3. Explain what you were working on and what context you need to preserve

  4. We can help you extract key information and structure your work into manageable conversations

January 2026: We've implemented backend tracking to identify users affected by this error, allowing our support team to provide faster, more targeted assistance.
