Document AI Model Characteristics and Risks
Use specific documentation to detail AI models, their architecture, usage, and potential security risks.
Plain language
This control is about making sure that any artificial intelligence (AI) systems used in your organisation are well-documented. This includes knowing how they work, what they're used for, and what security risks they might pose. If this isn't done, your business could unknowingly face privacy breaches or make decisions based on flawed AI, leading to financial loss or reputational damage.
Framework
ASD Information Security Manual (ISM)
Control effect
Proactive
Classifications
NC, OS, P, S, TS
ISM last updated
Nov 2025
Control Stack last updated
19 Mar 2026
E8 maturity levels
N/A
Guideline
Guidelines for software development
Official control statement
Artificial intelligence-specific documentation, including model and system cards (or equivalent artefacts), is used to document model characteristics, system architectures, use cases and security risks.
Why it matters
Without model/system cards documenting AI characteristics, use cases and architecture, security risks can be missed, enabling misuse, data leakage and unsafe decisions.
Operational notes
Maintain model/system cards for each AI system and update after model, data or architecture changes; record intended use, limits, threats and security risks.
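The note above can be made concrete as a structured record per AI system. The sketch below is illustrative only: the field names (`intended_uses`, `security_risks`, etc.) and the 180-day review window are assumptions, not requirements of the ISM control, and real model cards (e.g. Hugging Face-style cards) are usually richer documents.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical model-card record; field names and the review window
# are illustrative, not mandated by ISM-2084.
@dataclass
class ModelCard:
    model_name: str
    version: str
    architecture: str            # e.g. "fine-tuned transformer LLM"
    intended_uses: list[str]
    known_limitations: list[str]
    security_risks: list[str]    # e.g. prompt injection, data leakage
    last_reviewed: date

    def is_stale(self, today: date, max_age_days: int = 180) -> bool:
        """Flag cards not reviewed within the assumed review window,
        as a prompt to re-check after model, data or architecture changes."""
        return (today - self.last_reviewed).days > max_age_days

card = ModelCard(
    model_name="support-chatbot",
    version="2.1",
    architecture="fine-tuned transformer LLM",
    intended_uses=["answer customer FAQs"],
    known_limitations=["no access to live account data"],
    security_risks=["prompt injection", "training-data leakage"],
    last_reviewed=date(2026, 1, 15),
)
print(card.is_stale(date(2026, 9, 1)))  # True: review is older than 180 days
```

A check like this could run in CI or a scheduled job so that stale cards surface automatically rather than relying on manual review calendars.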
Implementation tips
- AI team should document the model: The team responsible for AI should write down details about how the AI system is built and how it's meant to work. They can start by creating a system card that explains the model's architecture and intended uses clearly.
- Managers should assess use cases: Business managers should look at how AI is being used in their operations. They should sit with the AI developers and list all current and potential uses of the AI system, ensuring each use case is justified and documented.
- IT security personnel should identify risks: The IT security team should evaluate the AI system for potential security risks. They should conduct a risk assessment and list known risks, then document them alongside the AI model's system card.
- Legal advisors should review compliance: Legal officers should ensure the AI documentation complies with privacy and data protection laws. This involves reviewing the documented AI model and its uses against relevant legislation and industry standards.
- Train all staff on AI understanding: The HR department should organise training sessions for all staff members. These sessions should cover basic understanding of AI models in use, highlighting how to identify and report any unusual behaviour that could indicate a problem.
Audit / evidence tips
- Ask for: the AI model and system cards. Request documentation showing the AI model's architecture, usage, and risks.
  Good evidence: a detailed system card that clearly outlines these elements with supporting diagrams.
- Ask for: the risk assessment report. Seek the document detailing the security risks identified during the AI system evaluation.
  Good evidence: a report that is dated, with risks clearly prioritised and mitigation steps outlined.
- Ask for: records of training sessions. Request the list and content of training sessions conducted for staff about AI understanding.
  Good evidence: records showing training dates, topics covered, attendee numbers, and evaluation results.
- Ask for: compliance review outcomes. Request the findings from the legal team's compliance reviews of the AI model.
  Good evidence: a document that references specific laws and how each is being addressed.
- Ask for: use case documentation. Request detailed documentation on AI application within the organisation.
  Good evidence: justification for AI deployment, referencing specific business needs.
Cross-framework mappings
How ISM-2084 relates to controls across ISO/IEC 27001, Essential Eight, and ASD ISM.
ISO 27001
| Relationship | Control | Notes |
|---|---|---|
| Partially overlaps | Annex A 8.27 | ISM-2084 requires AI-specific documentation (e.g... |
| Supports | Annex A 8.9 | ISM-2084 requires organisations to document AI model characteristics, system architecture, intended use and security risks in AI-specific... |
| Related | Annex A 5.8 | Annex A 5.8 requires information security to be integrated into project management so project delivery considers security risks and controls. |
These mappings show relationships between controls across frameworks. They do not imply full equivalence or certification.