Monitor and Investigate AI Model Anomalies
Track AI model performance and investigate any unusual results.
Plain language
Think of AI models like your car's engine – they need regular check-ups to ensure they're running smoothly. This control is about keeping an eye on these models to spot any unusual behaviour early. If issues aren't detected and investigated, it can lead to faulty AI outcomes, which might misguide important decisions or harm your business reputation.
Framework
ASD Information Security Manual (ISM)
Control effect
Detective
Classifications
NC, OS, P, S, TS
ISM last updated
Nov 2025
Control Stack last updated
19 Mar 2026
E8 maturity levels
N/A
Guideline
Guidelines for software development
Topic
Unbounded Consumption
Official control statement
Artificial intelligence model performance metrics are monitored and anomalies are investigated.
Why it matters
Neglecting AI model anomalies can lead to flawed outputs that skew decision-making and damage organisational reputation.
Operational notes
Monitor model performance metrics (accuracy, drift, latency) and alert on deviations; triage anomalies, investigate root cause, and remediate (rollback/retrain) promptly.
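The monitoring-and-alerting loop above can be sketched as a simple threshold check over a window of recent metric readings. This is a minimal illustration, not part of the control: the metric names (`accuracy`, `latency_ms`), the threshold values, and the `check_metrics` function are all assumptions to be tuned per model.

```python
from statistics import mean

# Illustrative thresholds -- tune per model and risk appetite.
ACCURACY_FLOOR = 0.90    # alert if mean accuracy drops below this
DRIFT_TOLERANCE = 0.05   # alert if a metric drifts more than 5% from baseline

def check_metrics(baseline: dict, window: list[dict]) -> list[str]:
    """Return alert messages for any metric that breaches its limits.

    baseline: expected value per metric, e.g. {"accuracy": 0.95, "latency_ms": 120.0}
    window:   recent observations, each a dict with the same metric keys
    """
    alerts = []
    # Average each metric over the recent window.
    current = {k: mean(obs[k] for obs in window) for k in baseline}
    # Hard floor on accuracy.
    if current["accuracy"] < ACCURACY_FLOOR:
        alerts.append(f"accuracy {current['accuracy']:.3f} below floor {ACCURACY_FLOOR}")
    # Relative drift from baseline for every tracked metric.
    for metric, base in baseline.items():
        drift = abs(current[metric] - base) / base
        if drift > DRIFT_TOLERANCE:
            alerts.append(f"{metric} drifted {drift:.1%} from baseline {base}")
    return alerts
```

Any alert returned here would feed the triage step: investigate root cause, then remediate by rollback or retraining as the notes above describe.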
Implementation tips
- The IT team should set up tools to monitor AI models: Choose software that tracks the performance of AI models over time and can alert the team to any unusual behaviour. This ensures issues are identified promptly.
- Managers should schedule regular reviews of AI model performance: Organise monthly check-ins with data scientists or IT staff to go over these reports. This helps catch any emerging problems early before they escalate.
- Data scientists should document anomaly investigation processes: After receiving alerts of unusual behaviour, they should investigate and record the steps taken and findings. A clear record helps in understanding recurring problems and adjusting models accordingly.
- System owners need to maintain a clear log of model versions and changes: Document every update or tweak made to an AI model. This history aids in tracing back any anomalies to specific changes made in the past.
- Security teams must collaborate with IT and data teams: Create a communication protocol where these teams regularly share findings and concerns. This collaborative approach helps in quickly addressing potential security risks posed by AI anomalies.
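The model version log that system owners are asked to maintain can be as simple as an append-only JSON-lines file with dated entries. The sketch below is illustrative only; the file layout, field names, and `log_model_change` function are assumptions, not a prescribed format.

```python
import datetime
import json

def log_model_change(path: str, model: str, version: str, reason: str) -> dict:
    """Append a dated change record to a JSON-lines log (append-only).

    Each line is one record, so anomalies can later be traced back to the
    specific change (and its stated reason) that preceded them.
    """
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "version": version,
        "reason": reason,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

An append-only log (rather than an edited document) preserves the history intact, which is what makes tracing an anomaly back to a past change reliable.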
Audit / evidence tips
- Ask for the AI performance monitoring report: Request documentation showing the performance metrics tracked for each AI model. Good evidence includes frequent and consistent reports with action points highlighted for any detected anomalies.
- Ask for meeting records discussing AI performance: Request records or minutes from the regular review meetings, and check whether specific issues and resulting actions were documented. Good evidence is a detailed, dated log showing concerns raised and how they were addressed.
- Ask for incident reports related to AI anomalies: Good evidence includes a clear trail showing identification, investigation, action, and outcome of incidents.
- Ask for the AI model change log: Request the document recording all changes and updates to AI models, and verify whether changes are linked to any unusual model behaviours. Good evidence contains a comprehensive log covering all changes with dates and reasons.
- Ask for evidence of cross-team collaboration: Check communications or reports showing collaboration between IT, security, and data teams in managing AI models. Good evidence is a document or set of records showing regular, structured communication and collaborative problem-solving efforts.
Cross-framework mappings
How ISM-2089 relates to controls across ISO/IEC 27001, Essential Eight, and ASD ISM.
ISO 27001
| Control | Notes | Details |
|---|---|---|
| Partially meets (2) | | |
| Annex A 8.15 | ISM-2089 requires organisations to monitor AI model performance metrics and investigate anomalies | |
| Annex A 8.16 | ISM-2089 requires organisations to monitor AI model performance metrics and investigate anomalies in model behaviour or outputs | |
| Supports (1) | | |
| Annex A 5.28 | ISM-2089 requires organisations to monitor AI model performance metrics and investigate anomalies | |
E8
| Control | Notes | Details |
|---|---|---|
| Partially meets (2) | | |
| E8-MF-ML2.9 | ISM-2089 requires organisations to monitor AI model performance metrics and investigate anomalies | |
| E8-AH-ML2.15 | ISM-2089 requires organisations to monitor AI model performance metrics and investigate anomalies | |
These mappings show relationships between controls across frameworks. They do not imply full equivalence or certification.