
ISO 42001 AI management prompt library

About this prompt library

This prompt library helps organizations implement ISO/IEC 42001:2023, the first international standard for AI Management Systems (AIMS). Use these prompts with ISMS Copilot to build a responsible AI governance framework that addresses AI-specific risks such as bias, lack of transparency, and ethical harm.

ISO 42001 is designed for organizations developing, providing, or using AI systems. It complements AI regulations like the EU AI Act and provides a management system approach to responsible AI.

Scoping and planning

AI system inventory and classification

Create an inventory of our AI systems per ISO 42001:

AI systems we develop/use/provide:
[List AI applications: chatbots, recommendation engines, predictive models, computer vision, NLP, automated decision systems, etc.]

For each AI system, document:
- System name and description
- AI techniques used (machine learning, deep learning, NLP, computer vision, generative AI)
- Purpose and use case
- Development status (research, development, production, retired)
- Internal vs. external use (internal tooling, customer-facing, embedded in products)
- Risk classification (high-risk per EU AI Act, limited risk, minimal risk)
- Data sources and training data
- Stakeholders affected (employees, customers, public)
- Lifecycle stage (design, development, deployment, monitoring, retirement)

Create an AI system register template suitable for ongoing management and regulatory compliance.
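
If you prefer to keep the register in code rather than a spreadsheet, here is a minimal sketch of one register entry as a Python dataclass. The field names mirror the prompt above; the lifecycle enum values are illustrative assumptions, not ISO 42001 terminology.

from dataclasses import dataclass
from enum import Enum

class LifecycleStage(Enum):
    # Illustrative stages, mirroring the prompt above
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"
    RETIREMENT = "retirement"

@dataclass
class AISystemRecord:
    """One row of the AI system register described in the prompt."""
    name: str
    description: str
    ai_techniques: list[str]          # e.g. ["deep learning", "NLP"]
    purpose: str
    development_status: str           # research / development / production / retired
    exposure: str                     # internal tooling / customer-facing / embedded
    risk_classification: str          # e.g. EU AI Act: high / limited / minimal
    data_sources: list[str]
    affected_stakeholders: list[str]
    lifecycle_stage: LifecycleStage
    notes: str = ""

register: list[AISystemRecord] = []   # the register itself is just a list of records

Appending AISystemRecord instances to the register (or serializing them to JSON) gives you a machine-readable inventory you can diff and review over time.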

Responsible AI policy framework

Develop a comprehensive AI Management System policy per ISO 42001:

Organization: [name, industry, AI maturity]
AI use cases: [describe key AI applications]

Policy sections aligned with ISO 42001:

1. AI governance and oversight
- AI ethics principles (fairness, transparency, accountability, safety, privacy)
- Governance structure (AI oversight board, AI ethics committee, responsible AI roles)
- Management commitment to responsible AI

2. Risk management (ISO 42001 Clause 6)
- AI-specific risk assessment methodology
- Risk treatment and controls
- Integration with enterprise risk management

3. AI system lifecycle management
- Development lifecycle (design, data, training, testing, deployment, monitoring)
- Change control for AI models and data
- Version control and model registry

4. Transparency and explainability
- Documentation requirements for AI systems
- Explainability mechanisms (model cards, fact sheets, interpretability tools)
- Disclosure to affected parties (when AI makes decisions)

5. Fairness, bias, and discrimination prevention
- Bias detection and mitigation in data and models
- Fairness metrics and testing
- Protected attribute handling
- Diverse and representative training data

6. Privacy and data protection
- Data minimization for AI training and inference
- Consent and lawful basis (GDPR alignment)
- Data subject rights in AI context (right to explanation, right not to be subject to automated decisions)
- Privacy-preserving techniques (federated learning, differential privacy, synthetic data)

7. Safety, security, and robustness
- Adversarial robustness and attack prevention
- Model security (model theft, poisoning, evasion)
- Safety testing and validation
- Fail-safe mechanisms and human oversight for high-risk AI

8. Human oversight and control
- Human-in-the-loop for critical decisions
- Override mechanisms
- Monitoring of AI system performance and decisions

9. Competence and awareness (ISO 42001 Clauses 7.2-7.3)
- AI training for developers, users, and leadership
- Responsible AI awareness program

10. Incident management
- AI-specific incident types (bias incidents, model drift, adversarial attacks, unintended harms)
- Incident response and notification
- Post-incident review and model retraining

Address ISO 42001 Annex A controls relevant to your AI risk profile.

AI risk assessment

AI-specific risk assessment

Conduct an AI risk assessment per ISO 42001 Clause 6 for [AI system name]:

AI system details:
- Description: [what the AI does]
- AI technique: [ML model type, architecture]
- Data sources: [training data, inference data]
- Use case: [how it's used, who it affects]
- Deployment: [production, pilot, development]

AI-specific risks to assess:

1. Bias and fairness risks
- Training data bias (historical bias, sample bias, measurement bias)
- Algorithmic bias (model amplifies disparities)
- Impact on protected groups (discrimination based on race, gender, age, etc.)
- Fairness metrics: [demographic parity, equalized odds, disparate impact]

2. Transparency and explainability risks
- Black-box models (lack of interpretability)
- Inability to explain decisions to users/regulators
- Compliance with GDPR Article 22 rights around automated decision-making (the "right to explanation")

3. Privacy risks
- Re-identification from anonymized training data
- Model inversion attacks (extracting training data from model)
- Membership inference (determining if data was in training set)
- Data leakage or unintended memorization

4. Safety and robustness risks
- Model errors and incorrect predictions
- Adversarial attacks (evasion, poisoning, backdoors)
- Model drift and degradation over time
- Edge cases and distributional shift
- Safety-critical failures (in autonomous systems, medical AI, etc.)

5. Security risks
- Model theft or extraction
- Data poisoning during training
- Adversarial inputs at inference time
- Supply chain risks (poisoned datasets, compromised libraries)

6. Ethical and societal risks
- Unintended harms or negative consequences
- Misuse of AI system
- Environmental impact (carbon footprint of training large models)
- Manipulation or deception (deepfakes, disinformation)

7. Compliance and legal risks
- Regulatory non-compliance (EU AI Act, GDPR, sector regulations)
- Liability for AI decisions
- Intellectual property issues (training on copyrighted data)

For each risk, assess:
- Likelihood (based on data quality, model complexity, deployment context)
- Impact (severity of harm if materialized)
- Existing controls and mitigations
- Residual risk and treatment plan

Create an AI risk register and treatment plan aligned with ISO 42001 risk management requirements.
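
To make the likelihood and impact step concrete, here is a minimal risk-scoring sketch on an assumed 1-5 scale. The band thresholds are illustrative placeholders for your own risk criteria, not ISO 42001 requirements.

# Minimal risk-scoring sketch: likelihood x impact on an assumed 1-5 scale.
# The band thresholds below are illustrative, not ISO 42001 requirements.

def risk_level(likelihood: int, impact: int) -> tuple[int, str]:
    score = likelihood * impact   # 1 (rare/negligible) .. 25 (certain/severe)
    if score >= 15:
        band = "high"
    elif score >= 8:
        band = "medium"
    else:
        band = "low"
    return score, band

ai_risks = [
    {"risk": "Training data bias harms a protected group", "likelihood": 4, "impact": 5},
    {"risk": "Model drift degrades accuracy", "likelihood": 3, "impact": 3},
    {"risk": "Model extraction via public API", "likelihood": 2, "impact": 4},
]

for r in ai_risks:
    r["score"], r["band"] = risk_level(r["likelihood"], r["impact"])

# Treat highest residual risks first
for r in sorted(ai_risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["band"]:<6} ({r["score"]:>2}) {r["risk"]}')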

AI system development and lifecycle

Responsible AI development procedure

Create a responsible AI development procedure covering the full AI lifecycle:

1. Design and scoping (ISO 42001 clause 6.2, control A.2.1)
- Define AI system purpose and requirements
- Identify affected stakeholders
- Determine fairness and performance criteria
- Assess regulatory requirements (EU AI Act risk class, GDPR applicability)
- Document intended use and limitations

2. Data collection and preparation (controls A.3.x)
- Data requirements specification (volume, quality, representativeness)
- Data sourcing (internal, external, synthetic, licensed)
- Bias assessment in data collection
- Data quality assurance (completeness, accuracy, consistency)
- Data labeling and annotation (quality control, annotator training, inter-annotator agreement)
- Privacy-preserving techniques (anonymization, differential privacy)
- Data documentation (data cards, dataset sheets)

3. Model development and training (controls A.4.x)
- Model selection (algorithm choice, architecture design)
- Feature engineering and selection
- Training and validation methodology (train/val/test splits, cross-validation; see the evaluation sketch after this procedure)
- Hyperparameter tuning
- Bias detection and mitigation during training
- Performance evaluation (accuracy, precision, recall, F1, AUC, fairness metrics)
- Model documentation (model cards, fact sheets)

4. Testing and validation (controls A.5.x)
- Functional testing (does it work as intended?)
- Fairness testing (disparate impact analysis, subgroup performance)
- Robustness testing (adversarial examples, edge cases, out-of-distribution data)
- Safety testing (failure modes, unsafe outputs)
- Explainability validation (can we explain decisions?)
- Regulatory compliance testing (meets EU AI Act requirements if applicable)

5. Deployment (controls A.6.x)
- Deployment planning and approval
- User training and documentation
- Monitoring and alerting setup
- Human oversight mechanisms
- Rollback procedures
- Staged rollout (canary, A/B testing)

6. Monitoring and maintenance (controls A.7.x)
- Performance monitoring (accuracy, latency, uptime)
- Bias monitoring (ongoing fairness metrics)
- Model drift detection (data drift, concept drift)
- Incident detection (anomalous predictions, errors)
- Retraining triggers and procedures
- Model version management

7. Retirement and decommissioning
- End-of-life decision criteria
- Data retention or deletion
- Communication to users
- Transition planning (replacement system, manual processes)

For our AI development context: [describe teams, tools, platforms, methodologies]

Include checkpoints, approvals, and documentation requirements at each stage per ISO 42001.
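
As a concrete illustration of the training and validation methodology in step 3, here is a minimal scikit-learn sketch that keeps a held-out test set untouched during development. The synthetic dataset and model choice are assumptions for the example only.

# Minimal sketch of the train/validate/test discipline from step 3,
# using scikit-learn and synthetic data (assumptions for illustration).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Hold out a test set that is never touched during development.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = LogisticRegression(max_iter=1_000)

# Cross-validate on the development set to tune and compare models.
cv_scores = cross_val_score(model, X_dev, y_dev, cv=5)
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")

# Fit once on all development data, then evaluate on the untouched test set.
model.fit(X_dev, y_dev)
y_pred = model.predict(X_test)
print(f"test accuracy: {accuracy_score(y_test, y_pred):.3f}")
print(f"test F1:       {f1_score(y_test, y_pred):.3f}")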

AI model documentation (Model Cards)

Create a model card for [AI model name] per ISO 42001 transparency requirements:

Model card sections:

1. Model details
- Model name and version
- Model type and architecture: [e.g., neural network, random forest, transformer]
- Training algorithm and framework: [TensorFlow, PyTorch, scikit-learn]
- Model developers and contact
- Model date and lifecycle stage
- License and usage restrictions

2. Intended use
- Primary intended uses and users
- Out-of-scope uses (what it should NOT be used for)
- Ethical considerations and known limitations

3. Factors (relevant characteristics)
- Groups or factors considered (demographics, geographies, use contexts)
- Instrumentation and environment factors

4. Metrics
- Model performance metrics: [accuracy, precision, recall, F1, AUC, etc.]
- Fairness and bias metrics: [demographic parity, equalized odds, etc.]
- Decision thresholds and trade-offs

5. Training data
- Dataset description and source
- Data preprocessing and feature engineering
- Training data size and splits
- Known biases or limitations in data

6. Evaluation data
- Evaluation dataset(s) used
- Motivation for dataset choice
- Preprocessing applied

7. Quantitative analyses
- Overall performance results
- Performance across subgroups (fairness analysis)
- Confidence intervals or uncertainty quantification

8. Ethical considerations
- Potential harms and biases identified
- Mitigation strategies implemented
- Sensitive use cases and risk factors

9. Caveats and recommendations
- Known limitations and failure modes
- Recommendations for responsible use
- Monitoring and maintenance recommendations

Our model: [provide details for each section based on your AI system]

Use this model card template for all AI systems to meet ISO 42001 documentation and transparency requirements.
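
If you also want a machine-readable companion to the narrative model card, here is a minimal sketch that serializes the same sections to JSON. The field names follow the outline above rather than any formal schema, and all values are hypothetical placeholders.

# Minimal machine-readable model card, mirroring the sections above.
# Field names follow this outline, not a formal schema; values are hypothetical.
import json

model_card = {
    "model_details": {
        "name": "credit-risk-classifier",   # hypothetical example model
        "version": "1.2.0",
        "architecture": "gradient-boosted trees",
        "framework": "scikit-learn",
        "owners": ["ml-platform-team@example.com"],
    },
    "intended_use": {
        "primary_uses": ["pre-screening of consumer credit applications"],
        "out_of_scope": ["fully automated final credit decisions"],
    },
    "metrics": {
        "performance": {"auc": 0.87, "f1": 0.74},
        "fairness": {"demographic_parity_difference": 0.03},
    },
    "training_data": {"source": "internal applications 2019-2023", "size": 250_000},
    "caveats": ["performance degrades for applicants with thin credit files"],
}

print(json.dumps(model_card, indent=2))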

Bias detection and fairness

Fairness assessment and mitigation

Conduct a fairness assessment for [AI system name]:

System context:
- Decision type: [classification, ranking, recommendation, prediction]
- Impacted individuals: [customers, employees, loan applicants, students, etc.]
- Protected attributes: [race, gender, age, disability, etc. - per applicable anti-discrimination laws]
- Potential harms: [denial of service, unfair pricing, discrimination, reputational harm]

Fairness assessment methodology:

1. Identify protected groups
- Define sensitive attributes and their groups (e.g., gender categories, racial or ethnic categories, age bands)
- Determine if proxies for protected attributes exist in data (ZIP code → race, name → gender)

2. Select fairness metrics (choose appropriate for use case)
- Demographic parity: Equal positive outcome rates across groups
- Equalized odds: Equal true positive and false positive rates across groups
- Predictive parity: Equal precision (positive predictive value) across groups
- Individual fairness: Similar individuals receive similar outcomes
- Calibration: Predicted probabilities match actual outcomes across groups

3. Measure fairness
- Calculate chosen metrics for each protected group (a numpy sketch follows this prompt)
- Compare against fairness thresholds (e.g., 4/5ths rule for disparate impact)
- Identify disparities and bias patterns

4. Analyze root causes of bias
- Data bias: Underrepresentation, historical bias, measurement bias in training data
- Algorithmic bias: Model overfits to majority group, features correlate with protected attributes
- Deployment bias: Different usage patterns across groups

5. Mitigation strategies
- Pre-processing: Reweigh training data, remove bias from data, balance datasets
- In-processing: Fairness-aware training algorithms, adversarial debiasing, regularization
- Post-processing: Adjust decision thresholds per group, recalibrate predictions
- Structural: Remove or mask protected attributes, use fairness constraints

6. Trade-offs analysis
- Accuracy vs. fairness trade-offs (mitigating bias may reduce overall accuracy)
- Fairness metric conflicts (can't satisfy all fairness definitions simultaneously)
- Decision on acceptable trade-offs based on ethical principles and legal requirements

Document fairness assessment results, mitigation measures, and residual bias for ISO 42001 compliance and regulatory inquiries.

Our AI system: [describe system, decision context, and protected groups]
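
To ground step 3 of the methodology, here is a minimal numpy sketch that computes the demographic parity difference and the disparate impact ratio for a binary decision. The synthetic decisions and the 0.8 (4/5ths) threshold follow the prompt above.

# Minimal sketch of step 3: demographic parity difference and
# disparate impact ratio for a binary decision, using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)                    # protected attribute (synthetic)
y_pred = rng.binomial(1, np.where(group == "A", 0.30, 0.22))   # synthetic binary decisions

rate_a = y_pred[group == "A"].mean()   # positive-outcome rate, group A
rate_b = y_pred[group == "B"].mean()   # positive-outcome rate, group B

demographic_parity_diff = abs(rate_a - rate_b)
disparate_impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: A={rate_a:.3f}, B={rate_b:.3f}")
print(f"demographic parity difference: {demographic_parity_diff:.3f}")
print(f"disparate impact ratio: {disparate_impact_ratio:.3f}")

# 4/5ths rule from the prompt: a ratio below 0.8 flags potential disparate impact.
if disparate_impact_ratio < 0.8:
    print("FLAG: disparate impact below the 4/5ths threshold")

In practice you would compute these rates from logged predictions joined to protected-attribute data, and report them alongside the root-cause analysis in step 4.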

AI security and robustness

Adversarial robustness testing

Test AI model robustness against adversarial attacks:

AI model: [name and type]
Attack surface: [inference API, model file, training pipeline]

Adversarial threat scenarios:

1. Evasion attacks (at inference time)
- Adversarial examples: Carefully crafted inputs causing misclassification
- Test methods: FGSM, PGD, C&W attacks
- Defense: Adversarial training, input sanitization, certified robustness

2. Poisoning attacks (during training)
- Data poisoning: Inject malicious samples into training data
- Test methods: Label flipping, backdoor injection
- Defense: Data validation, anomaly detection, robust training algorithms

3. Model extraction/theft
- Query attacks to replicate model
- Test methods: Model stealing via API queries
- Defense: Query rate limiting, output perturbation, watermarking

4. Model inversion
- Reconstruct training data from model
- Test methods: Membership inference, attribute inference
- Defense: Differential privacy, output rounding, query restrictions

5. Prompt injection (for LLMs/generative AI)
- Malicious prompts to bypass safety filters or extract sensitive information
- Test methods: Jailbreaking prompts, indirect injection
- Defense: Prompt filtering, output guardrails, content moderation

Robustness testing procedure:
- Generate adversarial examples using attack libraries (Adversarial Robustness Toolbox, CleverHans, Foolbox); a hand-rolled FGSM sketch follows below
- Measure model accuracy on adversarial inputs
- Assess transferability of attacks across models
- Document vulnerabilities and mitigation measures
- Integrate into CI/CD (automated adversarial testing)

Create a robustness test suite and acceptance criteria: [e.g., model accuracy stays above X% on adversarial examples].

Address ISO 42001 control A.5.3 (robustness evaluation) and A.6.5 (security of AI system).
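
The attack libraries named above are the usual route for serious testing. As a self-contained illustration of the evasion idea only, here is a hand-rolled FGSM sketch against a logistic regression model, for which the loss gradient with respect to the input has a closed form.

# Self-contained FGSM sketch against a logistic regression model.
# Production testing would normally use ART/CleverHans/Foolbox instead.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X, y)

w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, y_true, eps):
    """x_adv = x + eps * sign(d loss / d x); for logistic regression the
    cross-entropy gradient w.r.t. the input is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad = (p - y_true)[:, None] * w         # gradient of loss w.r.t. x
    return x + eps * np.sign(grad)

clean_acc = model.score(X, y)
adv_acc = model.score(fgsm(X, y, eps=0.2), y)
print(f"clean accuracy:       {clean_acc:.3f}")
print(f"adversarial accuracy: {adv_acc:.3f}  (eps=0.2)")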

AI monitoring and incident management

AI system monitoring framework

Implement continuous monitoring for AI systems per ISO 42001 control A.7.3:

AI system: [name]
Deployment: [production environment]

Monitoring dimensions:

1. Performance monitoring
- Accuracy, precision, recall, F1 score (vs. baseline)
- Prediction latency and throughput
- Error rates and types
- User feedback and corrections

2. Data drift monitoring
- Input distribution changes (covariate shift)
- Statistical tests: KL divergence, Kolmogorov-Smirnov test, population stability index (PSI); a PSI sketch follows at the end of this section
- Feature drift detection
- Alerting when drift exceeds thresholds

3. Model drift monitoring (concept drift)
- Model performance degradation over time
- Changes in ground truth distribution
- Comparison of predictions vs. actual outcomes
- Triggers for model retraining

4. Fairness monitoring
- Ongoing fairness metrics by demographic group
- Disparate impact monitoring
- Alerting on fairness degradation

5. Safety and anomaly monitoring
- Out-of-distribution inputs (OOD detection)
- Anomalous predictions (confidence thresholds, uncertainty estimates)
- Unsafe or harmful outputs (content moderation, safety classifiers)

6. Security monitoring
- Adversarial attack detection
- Unusual query patterns (potential model extraction)
- Access control and authentication to model APIs

7. Explainability monitoring
- Track explanations provided to users
- Monitor low-confidence predictions requiring human review
- Feature importance drift

Monitoring implementation:
- Logging and telemetry (predictions, inputs, metadata)
- Dashboards and visualization (Grafana, custom dashboards)
- Alerting rules and thresholds
- Automated responses (circuit breakers, fallback to simpler model, human escalation)
- Regular review meetings (weekly/monthly model health reviews)

Create a monitoring playbook covering what to monitor, thresholds, alert recipients, and response procedures.

Integrate with incident management: route AI-specific incidents (discovered bias, model drift, adversarial attacks) into the incident response procedure.
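
As a concrete example of the data drift tests in dimension 2, here is a minimal population stability index (PSI) sketch plus a two-sample Kolmogorov-Smirnov test from scipy. The 10-bin scheme and the 0.2 alert threshold are common rules of thumb, not ISO 42001 requirements.

# Minimal data-drift sketch: PSI and a KS test for one numeric feature.
# The 10-bin scheme and 0.2 alert threshold are common rules of thumb.
import numpy as np
from scipy import stats

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between a baseline and a live sample."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    a = np.clip(actual, edges[0], edges[-1])   # keep out-of-range values in end bins
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(a, edges)[0] / len(a)
    e_pct = np.clip(e_pct, 1e-6, None)         # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 50_000)   # training-time distribution
live = rng.normal(0.3, 1.1, 5_000)        # shifted production inputs

score = psi(baseline, live)
ks_stat, ks_p = stats.ks_2samp(baseline, live)

print(f"PSI: {score:.3f}  (>0.2 commonly treated as significant drift)")
print(f"KS:  stat={ks_stat:.3f}, p={ks_p:.2e}")
if score > 0.2:
    print("ALERT: investigate feature drift / consider retraining")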

AI incident response procedure

Develop an AI incident response procedure per ISO 42001 control A.8.1:

AI-specific incident types:

1. Bias or fairness incidents
- Discriminatory outcomes discovered
- Protected group harmed
- Disparate impact exceeds thresholds

2. Model performance degradation
- Accuracy drops below acceptable level
- Model drift detected
- Systematic errors in predictions

3. Privacy incidents
- Training data leakage
- Model inversion or membership inference attack
- Re-identification of individuals

4. Safety incidents
- Unsafe or harmful AI outputs
- AI system causes physical or psychological harm
- AI failure in safety-critical application

5. Security incidents
- Adversarial attack successful
- Model theft or extraction
- Data poisoning detected

6. Ethical or reputational incidents
- AI misuse or unintended consequences
- Public backlash or media attention
- Regulatory inquiry

Incident response workflow:

1. Detection and reporting
- Automated detection (monitoring alerts)
- User reports (feedback mechanisms, hotlines)
- Internal discovery (audits, reviews, testing)

2. Initial assessment and triage
- Severity classification (critical/high/medium/low; a triage sketch follows this procedure)
- Scope and affected users
- Immediate containment needed? (disable AI system, revert to manual process)

3. Containment and mitigation
- Take AI system offline if necessary
- Roll back to previous model version
- Implement temporary compensating controls (human review, output filtering)

4. Investigation and root cause analysis
- Examine training data, model behavior, predictions
- Identify bias source, model flaw, or attack vector
- Document findings

5. Remediation
- Retrain model with corrected data or fairness constraints
- Apply patches or defenses
- Update procedures to prevent recurrence

6. Communication
- Notify affected individuals (transparency)
- Regulatory notification if required
- Internal communication (leadership, legal, PR)
- Public disclosure (if appropriate)

7. Recovery and lessons learned
- Redeploy remediated AI system
- Enhanced monitoring for recurrence
- Update risk assessments and controls
- Post-incident review and documentation

Create incident response playbooks for each AI incident type with roles, timelines, and communication templates.

Ensure alignment with organizational incident management and ISO 42001 requirements.
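
For the triage step in the workflow, here is a minimal severity-classification sketch. The incident types mirror the list above, while the severity rules are illustrative assumptions to be replaced by your own playbook criteria.

# Minimal triage sketch for step 2 of the workflow above.
# Severity rules here are illustrative; replace with your playbook criteria.

def triage(incident_type: str, users_affected: int, ongoing_harm: bool) -> str:
    """Map an AI incident to a severity band (critical/high/medium/low)."""
    safety_critical = incident_type in {"safety", "privacy", "security"}
    if ongoing_harm and safety_critical:
        return "critical"   # e.g. live harmful outputs -> contain immediately
    if ongoing_harm or users_affected > 1_000:
        return "high"
    if incident_type in {"bias", "performance"} and users_affected > 0:
        return "medium"
    return "low"

print(triage("bias", users_affected=250, ongoing_harm=False))   # medium
print(triage("safety", users_affected=10, ongoing_harm=True))   # critical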

Compliance and documentation

EU AI Act compliance mapping

Map our AI systems to EU AI Act requirements and align with ISO 42001:

AI system inventory: [list AI systems]

For each AI system, assess:

1. AI Act risk classification
- Prohibited AI: [social scoring, harmful manipulation, untargeted scraping of facial images, biometric categorization inferring sensitive attributes] → Must not deploy
- High-risk AI: [CV screening, credit scoring, law enforcement, critical infrastructure, medical devices, biometric ID, education/employment evaluation, essential services] → Strict requirements
- Limited risk: [chatbots, deepfakes, emotion recognition outside prohibited contexts] → Transparency obligations
- Minimal risk: [AI-enabled video games, spam filters] → Voluntary codes of conduct

2. High-risk AI requirements (if applicable)
- Risk management system: ISO 42001 Clauses 6 and 8.1 → AI risk assessment and treatment
- Data governance: ISO 42001 control A.3 → Training data quality, bias mitigation
- Technical documentation: ISO 42001 controls A.2.2, A.4.5 → System documentation, model cards
- Record keeping and logging: ISO 42001 control A.7.1 → Automated logs for auditability
- Transparency to users: ISO 42001 controls A.6.3, A.6.4 → User information, explainability
- Human oversight: ISO 42001 control A.6.7 → Human-in-the-loop mechanisms
- Accuracy, robustness, cybersecurity: ISO 42001 controls A.5.x, A.6.5 → Testing and security measures

3. General-purpose AI (GPAI) requirements
- If we provide foundation models or GPAI: Transparency, copyright compliance, energy consumption disclosure
- Systemic risk models (large-scale): Adversarial testing, serious incident reporting, model evaluation

4. Transparency obligations (all AI)
- Inform users when they are interacting with AI (AI Act Article 50; Article 52 in earlier drafts)
- Deepfake labeling
- Emotion recognition or biometric categorization disclosure

Create a compliance mapping table: AI system | AI Act classification | Applicable requirements | Relevant ISO 42001 controls | Compliance status | Gaps

Develop AI Act compliance roadmap aligned with ISO 42001 implementation.

Note: the EU AI Act entered into force in 2024, with obligations phasing in through 2026-2027. Track the implementation timeline for each of your AI systems.

ISO 42001 certification supports conformity with many EU AI Act requirements for high-risk AI systems, streamlining regulatory compliance and building trust with customers and regulators.
