Skip to content
arrow_back
search
ISM-1924 policy ASD Information Security Manual (ISM)

Preventing Prompt Injection in AI Applications

AI apps must check user inputs to stop harmful or unintended content creation.

record_voice_over

Plain language

AI applications need to be careful about user inputs to avoid creating content that's harmful or goes against what the app is meant to do. If inputs aren't checked properly, the app could generate misleading or hazardous information, which can lead to serious consequences for the users and the reputation of the organisation.

Framework

ASD Information Security Manual (ISM)

Control effect

Preventative

Classifications

NC, OS, P, S, TS

ISM last updated

Aug 2025

Control Stack last updated

19 Mar 2026

E8 maturity levels

N/A

Official control statement

Generative artificial intelligence applications evaluate user prompts to detect and mitigate adversarial inputs or suffixes designed to elicit unintended behaviour or assist in the generation of sensitive or harmful content.
policy ASD Information Security Manual (ISM) ISM-1924
priority_high

Why it matters

Failure to evaluate prompts for injection may allow adversarial inputs to bypass safeguards, generating sensitive/harmful content and causing legal and reputational harm.

settings

Operational notes

Test prompts against known prompt-injection patterns (instruction overrides, jailbreak suffixes, data-exfiltration cues) and tune detectors/filters; log and review bypass attempts regularly.

Mapping detail

Mapping

Direction

Controls