
Ensure AI Models are Stored Securely

AI models must be kept in a format that prevents them from running unwanted code.


Plain language

This control is about ensuring your AI models can't be used, accidentally or maliciously, to run harmful code. If AI models aren't stored safely, someone might trick the system into doing something it's not supposed to do, which could compromise sensitive information or harm your organisation's reputation. It's a bit like making sure a recipe book can't accidentally catch fire just because it's near a hot stove.

Framework

ASD Information Security Manual (ISM)

Control effect

Preventative

Classifications

NC, OS, P, S, TS

ISM last updated

Nov 2025

Control Stack last updated

19 Mar 2026

E8 maturity levels

N/A

Official control statement

Artificial intelligence models are stored in a non-executable file format that does not allow arbitrary code execution.

Why it matters

Improper storage of AI models can allow malicious code to execute, risking data breaches and severe reputational damage.


Operational notes

Store model artefacts only in non-executable formats (e.g., weights/checkpoints). Block pickled/serialised objects that can run code and scan uploads for unsafe formats.
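The upload-scanning idea above can be sketched as a simple check for pickle-based serialisation. This is an illustrative assumption, not an exhaustive or authoritative detector: the extension list and magic bytes below are examples only (pickle streams at protocol 2+ begin with `0x80`; PyTorch `.pt` bundles are ZIP archives, beginning with `PK\x03\x04`, that wrap pickles).

```python
"""Sketch: flag model files that may contain code-executing serialised objects.

The extensions and magic bytes are illustrative assumptions, not a
complete list of unsafe formats.
"""
from pathlib import Path

# Pickle streams (protocol 2+) start with 0x80; ZIP archives (e.g. PyTorch
# .pt bundles, which wrap pickles) start with "PK\x03\x04".
UNSAFE_MAGIC = [b"\x80", b"PK\x03\x04"]
UNSAFE_EXTENSIONS = {".pkl", ".pickle", ".pt", ".pth", ".joblib"}

def looks_unsafe(path: Path) -> bool:
    """Return True if the extension or file header suggests a
    pickle-based (code-executing) serialisation format."""
    if path.suffix.lower() in UNSAFE_EXTENSIONS:
        return True
    with open(path, "rb") as f:
        header = f.read(4)
    return any(header.startswith(magic) for magic in UNSAFE_MAGIC)
```

A check like this would typically run as a pre-commit hook or an upload gate in front of the model registry; flagged files are quarantined for review rather than stored.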


Implementation tips

  • AI developers should save models in file formats that cannot directly execute code when loaded. In practice this means avoiding pickle-based serialisation (such as .pkl files, or legacy PyTorch .pt checkpoints, which embed pickles and can run arbitrary code on load) in favour of weights-only formats such as safetensors or HDF5 (.h5).
  • IT managers should regularly review the file storage methods used for AI models. This can be done by checking that models are saved on a secure server with limited access and that their formats comply with safety standards.
  • Security teams should implement regular checks to validate the integrity of AI models. Use checksum or hashing techniques to ensure that the stored model hasn't been tampered with or altered.
  • Ask suppliers about the file formats they use and how those formats prevent execution of arbitrary code.

  • System owners should organise training sessions to familiarise their team with safe storage practices for AI models. This training should cover the risks of improper storage and provide hands-on examples of securing model files correctly.
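The checksum tip above can be sketched as follows. This is a minimal illustration assuming SHA-256 and a simple manifest (filename mapped to hex digest); any tamper-evident record of hashes would serve the same purpose.

```python
"""Sketch: verify stored model files against a manifest of SHA-256 hashes.

The manifest format (filename -> hex digest) is an assumption for
illustration only.
"""
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest: dict[str, str], base_dir: Path) -> list[str]:
    """Return the names of files whose current hash differs from the manifest."""
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(base_dir / name) != expected
    ]
```

Running such a check on a schedule, and keeping the manifest somewhere the model storage location cannot write to, gives the "regular integrity checks" the tip describes and produces a log that doubles as audit evidence.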

Audit / evidence tips

  • Ask: the list of AI models stored. Request a record showing how and where AI models are saved.

    Good: a list with non-executable formats clearly marked and secure storage descriptions.

  • Ask: IT policy documents on AI model storage. Check if there's an outlined policy about saving models in non-executable formats.

    Good: includes specific mentions of approved file formats and storage locations.

  • Ask: a security check report on AI model integrity. Request evidence of regular security checks for model integrity.

    Good: would show consistent use of techniques like checksum or hash verification.

  • Ask: supplier compliance records. Ensure procurement policies require compliance checks of AI storage standards from suppliers.

    Good: will show documented compliance with secure storage practices.

  • Ask: staff training records around AI model storage. Request attendance sheets and training materials.

    Good: includes records showing sessions on secure storage practices with attendee lists.


Cross-framework mappings

How ISM-2072 relates to controls across ISO/IEC 27001, Essential Eight, and ASD ISM.

ISO 27001

Partially meets (2):

  • Annex A 7.10: ISM-2072 requires AI models to be stored in a non-executable file format that prevents arbitrary code execution.
  • Annex A 8.26: ISM-2072 requires AI model artefacts to be stored in a non-executable file format to prevent arbitrary code execution.

These mappings show relationships between controls across frameworks. They do not imply full equivalence or certification.

Mapping detail

Mapping

Direction

Controls