Implement Fine-Grained AI Application Permissions
Organisations set detailed access rules to control who can use artificial intelligence applications.
Plain language
This control is about making sure that only the right people in an organisation have access to artificial intelligence applications. It matters because if someone who shouldn’t be using these applications gets access, they might misuse sensitive information, leading to privacy breaches or damaging decisions.
Framework
ASD Information Security Manual (ISM)
Control effect
Preventative
Classifications
NC, OS, P, S, TS
ISM last updated
Nov 2025
Control Stack last updated
19 Mar 2026
E8 maturity levels
N/A
Guideline
Guidelines for software development
Topic
Excessive Agency
Official control statement
Access control policies are implemented to enforce fine-grained permissions for artificial intelligence applications.
Why it matters
Lax AI access controls could lead to data leaks, misuse of sensitive information, and unauthorised decisions, risking reputation and financial loss.
Operational notes
Regularly review AI app permissions by role, tool and dataset scope, and promptly update least-privilege rules when models, plugins or responsibilities change.
Implementation tips
- System owners should identify who needs access: Start by listing everyone who uses the AI applications in your organisation. Ensure you clearly understand what each person needs the AI for, so you can decide the level of access they require.
- IT teams should create access rules: With input from system owners, set up access permissions for each user. Use your systems' settings to limit what each person can see or do based on their role.
- Managers should review permissions regularly: Every few months, check the list of people with access to ensure it is still appropriate. Remove access for anyone who no longer needs it.
- HR should communicate access changes: If staff roles change, inform the system owner and IT team so they can adjust access permissions quickly.
- Security officers should monitor AI application use: Set up alerts for any unusual activity within AI applications. This helps catch unauthorised access early.
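The role-based rules described in the tips above can be sketched as a simple deny-by-default policy check. This is a minimal illustration only, not a production authorisation system; the role names, tool names, and dataset scopes below are hypothetical examples, not values prescribed by the control.

```python
# Minimal sketch of fine-grained, role-based permissions for AI applications.
# Roles, tools, and dataset scopes are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst": {"tools": {"chat", "summarise"}, "datasets": {"public", "internal"}},
    "engineer": {"tools": {"chat", "code-assist"}, "datasets": {"public"}},
    "admin": {
        "tools": {"chat", "summarise", "code-assist"},
        "datasets": {"public", "internal", "restricted"},
    },
}

def is_allowed(role: str, tool: str, dataset: str) -> bool:
    """Return True only if the role may use this tool against this dataset."""
    perms = ROLE_PERMISSIONS.get(role)
    if perms is None:
        return False  # unknown roles get no access (deny by default)
    return tool in perms["tools"] and dataset in perms["datasets"]

print(is_allowed("analyst", "chat", "internal"))    # True
print(is_allowed("engineer", "chat", "restricted"))  # False
print(is_allowed("contractor", "chat", "public"))    # False
```

Keeping the mapping explicit like this makes the periodic reviews described above easier: the list of roles and scopes is a single artefact that a manager can read and prune.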
Audit / evidence tips
- Ask for the current access list: Request a document showing everyone with permissions to use AI applications. Good evidence should show names, roles, and a recent review date.
- Ask for access review records: Request evidence of past access reviews.
- Ask for changes to permissions: Request logs detailing any changes made to access permissions. Good evidence will have justified, logged changes with timestamps.
- Ask about training for authorised users: Request records of any training conducted for users with access. Check for details on the training content and attendees. Good evidence includes recent training covering secure use of AI applications.
- Ask for incident logs related to AI applications: Request records of any security incidents involving AI applications.
Cross-framework mappings
How ISM-2092 relates to controls across ISO/IEC 27001, Essential Eight, and ASD ISM.
ISO 27001
| Control | Notes |
|---|---|
| Partially meets (3) | |
| Annex A 5.15 | ISM-2092 requires organisations to implement access control policies that enforce fine-grained permissions specifically for artificial intelligence applications |
| Annex A 5.18 | ISM-2092 requires fine-grained permissioning for AI applications, ensuring only authorised users can use AI capabilities in line with policy |
| Annex A 8.3 | ISM-2092 requires restricting AI application use through fine-grained permissions enforced by access control policies |
| Supports (1) | |
| Annex A 8.5 | ISM-2092 requires enforcing fine-grained permissions for AI applications, which relies on the ability to correctly identify and authenticate users |
These mappings show relationships between controls across frameworks. They do not imply full equivalence or certification.