ISM-2087 policy ASD Information Security Manual (ISM)

Ensuring Integrity of AI Model Training Data

Verify the source and integrity of data used to train AI models to prevent poisoning.


Plain language

This control is about making sure the data used to train AI models comes from a trustworthy source and hasn’t been tampered with. If someone tampers with this data, they can teach the AI to make bad decisions, which could harm your business, clients, or students through incorrect insights or faulty automation.

Framework

ASD Information Security Manual (ISM)

Control effect

Preventative

Classifications

NC, OS, P, S, TS

ISM last updated

Nov 2025

Control Stack last updated

19 Mar 2026

E8 maturity levels

N/A

Official control statement

The source and integrity of training data for artificial intelligence models is verified.

Why it matters

If training data sources or integrity aren’t verified, poisoned or altered data can skew model outputs, causing unsafe decisions and loss of trust.


Operational notes

Maintain a verified data provenance record, validate hashes/signatures on datasets, and periodically re-check sources to detect tampering before retraining.
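The hash-validation step above can be sketched in Python. This is a minimal illustration, not a definitive implementation: the provenance record layout, file names, and function names (`sha256_of`, `verify_dataset`) are assumptions for the example, and a real deployment would store the record separately from the data and protect it with signatures.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a file's SHA-256 digest, streaming in chunks so
    large training datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(path: Path, provenance: dict[str, str]) -> bool:
    """Return True only if the file appears in the provenance record
    and its current digest matches the recorded one.

    `provenance` is a hypothetical mapping of dataset file name to
    expected SHA-256 hex digest; tampering changes the digest and
    the check fails."""
    expected = provenance.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

Run `verify_dataset` before each retraining cycle; a `False` result means the dataset was altered (or was never recorded) and should be quarantined rather than fed to the training pipeline.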
