AI Governance Roadmap: From Policy to Practice
By Daman David Pant
May 2026
Most organisations understand that they need AI governance. Fewer know where to start, what order to do things in, or how to tell whether their governance is actually working. This roadmap gives you a structured, phased approach to building AI governance from the ground up, grounded in the frameworks that appear in the AIGP exam and in real-world practice.
Who this is for: AI governance professionals, compliance leads, DPOs, risk managers, and anyone building or reviewing an AI governance programme. Also useful for AIGP exam candidates who want to understand how the frameworks connect in practice.
Why Most AI Governance Efforts Stall
Organisations typically stall at one of three points: they produce a policy but fail to operationalise it; they build a risk framework but lack the data to populate it; or they complete an assessment but have no mechanism to act on findings. Each of these failures has the same root cause: governance was designed as a document exercise rather than an operational system.
Effective AI governance is not a policy. It is a set of repeatable processes, accountable roles, and feedback loops that keep AI systems aligned with organisational values and regulatory requirements over time.
The Five-Phase Roadmap
Phase 1 · Foundation
Establish accountability and policy
Before assessing or classifying any AI system, establish who is responsible for AI governance and what the organisation's position on AI is.
- Appoint an AI governance lead or committee with clear accountability
- Define your AI Acceptable Use Policy: what is permitted, what is prohibited, and who approves exceptions
- Create an AI inventory: a register of all AI systems in use or development, including third-party tools
- Map roles under applicable regulations (provider, deployer, importer, distributor under the EU AI Act)
- Establish a governance review cadence: quarterly minimum
Phase 2 · Risk Assessment
Classify and prioritise AI systems by risk
Use the EU AI Act risk tiers as your primary classification framework, supplemented by the NIST AI RMF Map function for contextual risk identification.
- Classify each AI system under the EU AI Act tiers: prohibited, high-risk, limited risk (transparency obligations), or minimal risk
- Conduct an AI Impact Assessment (AIA) for high-risk systems
- Complete DPIAs for systems processing personal data at scale
- Document identified risks in a risk register with likelihood, severity, and ownership
- Prioritise remediation based on residual risk, not inherent risk alone
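The distinction between inherent and residual risk in the last step can be sketched numerically. The scoring below is a simple illustration (the 5×5 scales and the `controls` effectiveness factor are assumptions, not values prescribed by the EU AI Act or NIST AI RMF):

```python
def inherent_risk(likelihood: int, severity: int) -> int:
    """Raw risk score on an assumed 5x5 likelihood/severity scale."""
    return likelihood * severity

def residual_risk(likelihood: int, severity: int, control_effectiveness: float) -> float:
    """Discount inherent risk by how well existing controls mitigate it (0.0-1.0)."""
    return inherent_risk(likelihood, severity) * (1.0 - control_effectiveness)

# Illustrative risk-register entries
register = [
    {"system": "CV screening model", "likelihood": 4, "severity": 5, "controls": 0.3},
    {"system": "Support chatbot",    "likelihood": 3, "severity": 2, "controls": 0.7},
]

for entry in register:
    entry["residual"] = residual_risk(entry["likelihood"],
                                      entry["severity"],
                                      entry["controls"])

# Remediation queue: highest residual risk first
queue = sorted(register, key=lambda e: e["residual"], reverse=True)
```

Sorting on residual rather than inherent risk is the point: a high-inherent-risk system with strong controls may rank below a modest system with none.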
Phase 3 · Framework Implementation
Operationalise controls across the AI lifecycle
Governance controls must be embedded at each stage of the AI lifecycle: design, development, deployment, and operation. A control that only exists at deployment is too late to prevent many risks.
- Apply NIST AI RMF Govern and Manage functions to embed controls in development workflows
- Implement human oversight mechanisms for high-risk systems
- Define incident response procedures for AI failures or unexpected outputs
- Establish data governance: training data provenance, quality checks, and retention
- Prepare and maintain technical documentation per EU AI Act Article 11
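One way to make "embedded at each lifecycle stage" concrete is a deployment gate that blocks release until every control has evidence attached. A minimal sketch (the control names and the `deployment_gate` function are hypothetical, chosen to mirror the bullets above):

```python
# Controls that must have evidence before a high-risk system ships (assumed set)
REQUIRED_CONTROLS = {
    "technical_documentation",   # EU AI Act Article 11
    "human_oversight",
    "data_provenance",
    "incident_response_plan",
}

def deployment_gate(completed_controls: set[str]) -> tuple[bool, set[str]]:
    """Return (may_deploy, missing_controls) for a release candidate."""
    missing = REQUIRED_CONTROLS - completed_controls
    return (not missing, missing)

# A system with only two of the four controls evidenced is held back
ok, missing = deployment_gate({"technical_documentation", "human_oversight"})
```

Wiring a check like this into the CI/CD pipeline is what turns the control list from a policy document into an operational constraint.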
Phase 4 · Monitoring and Measurement
Track performance, drift, and compliance
Governance without monitoring is a policy, not a programme. The NIST AI RMF Measure function and EU AI Act post-market monitoring obligations both require ongoing tracking of AI system behaviour.
- Define key performance indicators for each high-risk AI system
- Monitor for model drift: degradation in accuracy or fairness over time
- Track bias indicators across demographic groups where relevant
- Log incidents, near-misses, and user complaints in a structured format
- Report to the governance committee on a defined schedule
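Drift monitoring can start very simply: compare recent performance against a validated baseline and alert past a tolerance. A minimal sketch (the threshold, the `drift_alert` helper, and the accuracy figures are illustrative assumptions):

```python
def drift_alert(baseline_accuracy: float, recent_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """Flag when accuracy has degraded by more than `tolerance` from baseline."""
    return (baseline_accuracy - recent_accuracy) > tolerance

# Weekly accuracy readings for a deployed model (illustrative numbers)
baseline = 0.91
weekly = [0.90, 0.89, 0.88, 0.84]

# One flag per reading; the final week breaches the tolerance
alerts = [drift_alert(baseline, acc) for acc in weekly]
```

The same pattern applies to fairness metrics: track the chosen bias indicator per demographic group against its baseline and alert on the same schedule, so degradation in either dimension reaches the governance committee.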
Phase 5 · Audit and Continuous Improvement
Test, verify, and iterate
Governance must be audited, not just maintained. Internal audits verify that controls are working as intended; external audits provide independent assurance. Under the EU AI Act, high-risk systems must undergo conformity assessment, which, depending on the system, may be carried out internally or by a notified body.
- Conduct internal audits of high-risk systems against documented controls
- Commission third-party conformity assessments where required by regulation
- Review and update risk assessments after significant model changes
- Feed audit findings back into Phase 1 policy review
- Build a culture of accountability: governance fails when it is treated as a compliance checkbox
How the Major Frameworks Map to This Roadmap
| Phase | NIST AI RMF | EU AI Act | ISO 42001 |
| --- | --- | --- | --- |
| Foundation | Govern | Roles and obligations | Context and leadership |
| Risk Assessment | Map | Risk classification, AIA, DPIA | Risk assessment |
| Implementation | Manage | Technical documentation, human oversight | Controls and treatment |
| Monitoring | Measure | Post-market monitoring | Performance evaluation |
| Audit | Govern (review) | Conformity assessment | Internal audit, improvement |
Common Mistakes to Avoid
- Starting with a policy document instead of an inventory. You cannot govern what you have not identified.
- Treating risk classification as a one-time exercise. AI systems evolve. Risk classification must be reviewed when systems change significantly.
- Assigning governance to a single team with no executive sponsorship. Governance without authority cannot enforce controls.
- Conflating compliance with governance. Compliance is the minimum required by law. Governance is the operational system that makes AI trustworthy beyond compliance.
- Skipping decommissioning. AI systems that are retired without proper data disposal and documentation create long-tail risk.