Restrict Access to AI Data with Role-Based Controls
Access to sensitive AI data is restricted using roles to ensure only authorised personnel can view it.
Plain language
This control ensures that only the right people can access sensitive data used in artificial intelligence systems by assigning different roles to different people. If not done, there’s a risk of unauthorised people gaining access, which could lead to data breaches, misuse of information, or even financial and reputational damage to your organisation.
Framework
ASD Information Security Manual (ISM)
Control effect
Preventative
Classifications
NC, OS, P, S, TS
ISM last updated
Nov 2025
Control Stack last updated
19 Mar 2026
E8 maturity levels
N/A
Guideline
Guidelines for software development
Topic
Excessive Agency
Official control statement
Role-based access controls are implemented for artificial intelligence applications to restrict access to sensitive data.
Why it matters
Without role-based access, unauthorised access to AI data can lead to breaches, data misuse, and significant financial or reputational harm.
Operational notes
Regularly review and update roles to ensure access aligns with current duties, preventing data exposure through outdated permissions.
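As a minimal sketch of the review described above, the check below flags users whose granted access no longer matches their current duties. The user names, role names, and in-code dictionaries are hypothetical; a real review would query the organisation's identity and HR systems.

```python
# Hypothetical sketch: flag users whose granted access role no longer
# matches their current duty role (stale permissions).

current_roles = {"alice": "data-scientist", "bob": "auditor"}

# Access as actually granted in the system; bob's entry is outdated.
granted_access = {"alice": "data-scientist", "bob": "ml-engineer"}

# Any mismatch between current duties and granted access needs review.
stale = {user for user, role in granted_access.items()
         if current_roles.get(user) != role}

print(stale)  # prints {'bob'}
```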
Implementation tips
- Business owners should identify the types of sensitive data that their AI systems use and establish a list of people who need access to this information. This involves considering who within the organisation relies on this data to perform their job effectively.
- Managers should define clear roles and responsibilities for team members who need access to AI data. They can do this by mapping out job functions and associating them with different levels of access according to the sensitivity of the data involved.
- The IT team should set up the technical system to match these roles with their respective access permissions. This means using software that supports role-based access controls and configuring it to allow or deny access based on each user's role.
- The Human Resources (HR) team should regularly review and update the access list as people's roles within the organisation change. This could be incorporated into routine checks such as staff performance reviews or changes in job responsibilities.
- Managers should provide training for all staff about the importance of protecting sensitive AI data and how role-based controls help with this. This can be done through workshops or e-learning modules that explain how data is managed and why certain people have different levels of access.
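The role-to-access mapping the steps above describe can be sketched in a few lines. The role names and data categories below are illustrative assumptions, and a real deployment would enforce this through the organisation's identity platform or access-control software rather than an in-code dictionary, but the deny-by-default shape is the same:

```python
# Minimal role-based access control sketch for sensitive AI data.
# Role names and data categories are hypothetical examples.

ROLE_PERMISSIONS = {
    "data-scientist": {"training-data", "model-outputs"},
    "ml-engineer": {"model-outputs"},
    "auditor": {"access-logs"},
}

def can_access(role: str, data_category: str) -> bool:
    """Deny by default: grant access only if the role explicitly
    includes the requested data category."""
    return data_category in ROLE_PERMISSIONS.get(role, set())

print(can_access("data-scientist", "training-data"))  # True
print(can_access("ml-engineer", "training-data"))     # False: not permitted
print(can_access("contractor", "model-outputs"))      # False: unknown role
```

Unknown roles fall through to an empty permission set, so the default is always to deny rather than to grant.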
Audit / evidence tips
- Ask: the list of roles and associated access levels. Ensure there is a document or system record specifying which roles have access to what data.
  Good: a clear, updated list showing role names, data access details, and rationale.
- Ask: an access control policy document. Request the formal policy that explains how access to AI data is managed within the organisation.
  Good: includes defined roles, data classification, and enforcement procedures, all documented and approved.
- Ask: access logs. Ensure there is a record of who accessed certain AI data and when. Check these logs to see if access is appropriately restricted to authorised roles.
  Good: consistent records showing access correlating with authorised job roles.
- Ask: evidence of staff training on role-based access. Request records showing that staff involved with AI data have received appropriate training.
- Ask: a review schedule and records. Request information on how often and how access permissions are reviewed and updated.
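The access-log check described above can be sketched as a simple cross-reference between log entries and the authorised role list. The log format, roles, and data categories here are hypothetical; real logs would come from the access-control system's audit trail.

```python
# Illustrative audit sketch: compare access log entries against the
# authorised roles for each data category (all values hypothetical).

authorised = {
    "training-data": {"data-scientist"},
    "model-outputs": {"data-scientist", "ml-engineer"},
}

access_log = [
    {"user_role": "data-scientist", "data": "training-data"},
    {"user_role": "ml-engineer", "data": "training-data"},  # not authorised
]

# Any entry whose role is not authorised for that data category is flagged.
violations = [entry for entry in access_log
              if entry["user_role"] not in authorised.get(entry["data"], set())]

for v in violations:
    print(f"Unauthorised access: {v['user_role']} -> {v['data']}")
```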
Cross-framework mappings
How ISM-2093 relates to controls across ISO/IEC 27001, Essential Eight, and ASD ISM.
ISO 27001
| Control | Notes | Details |
|---|---|---|
| Partially meets (2) | | |
| Annex A 5.15 | ISM-2093 requires organisations to implement RBAC in AI applications to prevent unauthorised access to sensitive AI data | |
| Annex A 8.3 | ISM-2093 requires RBAC for AI applications to restrict access to sensitive AI data | |
| Partially overlaps (1) | | |
| Annex A 5.18 | ISM-2093 requires RBAC enforcement in AI applications so only authorised roles can access sensitive AI data | |
| Supports (1) | | |
| Annex A 5.3 | Annex A 5.3 requires organisations to segregate conflicting responsibilities to reduce opportunities for misuse, fraud or error | |
E8
| Control | Notes | Details |
|---|---|---|
| Partially overlaps (1) | | |
| E8-RA-ML3.1 | ISM-2093 requires role-based access controls (RBAC) for AI applications to restrict access to sensitive AI data to authorised personnel | |
These mappings show relationships between controls across frameworks. They do not imply full equivalence or certification.