
The AIGP
Certification Playbook

Daman David Pant (AIGP)
Principal Consultant · Infosys  ·  Verify Certificate ↗

A comprehensive preparation guide for the IAPP AI Governance Professional Exam - built from intensive scenario-based study across all 12 domains.

This playbook was developed through intensive scenario-based study covering all AIGP domains. The principles, frameworks, and rules reflect the analytical approach needed to succeed on the exam - not memorization of specific questions, but the ability to think through governance scenarios systematically.

Good luck on your AIGP journey.

12 Domains
19 Master Rules
10 Trap Types
15 Practice Questions
475 Exam Score
The SLIDE Framework

A 5-step decision framework for evaluating any AIGP question in under 60 seconds. Apply it to every question, every time.

S Spot the Key Words
Identify signal words before reading options. "MOST," "FIRST," "EXCEPT," "NOT" change everything.
L Locate the Domain
Match keywords to the right AIGP domain so you apply the correct governance lens.
I Identify the Root
Pick the most upstream answer. Data beats design beats training beats deployment.
D Disqualify the Traps
Eliminate absolutes, responsibility shifting, single safeguards, wrong roles, wrong phases.
E Evaluate Principles
When stuck: people beat performance, proactive beats reactive, upstream beats downstream.
S - SPOT the Key Words

Before reading the options, identify the signal words in the question stem.

Question Type Signals
"MOST important" / "PRIMARY" / "BEST" The exam wants the root cause or foundational answer, not just a valid one
"FIRST" / "BEFORE" The exam wants sequencing - what comes earliest
"MOST likely" / "MOST accurate" Multiple options may be partially correct - pick the most complete one
"EXCEPT" / "NOT" Flip your thinking - three options are correct, find the odd one out
"LEAST relevant" / "LEAST likely" Find the weakest connection, not a wrong statement
Bold or CAPITALIZED words These change the entire meaning. Read them twice
Role words (provider, deployer, controller, processor, importer) The answer depends on who is who
Jurisdiction words (EU, GDPR, EU AI Act, NIST, Canada, OECD) Match the answer to that specific framework
The Golden Rule: If you miss the key word, you pick the wrong answer. Read the stem twice before looking at options.
L - LOCATE the Domain

Identify which AIGP knowledge domain the question is testing. This tells you what lens to apply.

If you see... → Think... → Lead with...
Training data, labels, features → Data Governance → Data quality and representativeness
Provider, deployer, high-risk, importer → EU AI Act → Role-based obligations
Controller, processor, legal basis → GDPR / Privacy → Purpose limitation and lawful basis
Fairness, bias, discrimination → AI Ethics / Fairness → Protected groups and root cause
Consent, transparency, disclosure → Transparency → Who must be told what
Copyright, licensing, training data, authorship → IP / Copyright → Who owns what and who is liable
Risk assessment, impact assessment → Risk Management → Timing and proportionality
Lifecycle, monitoring, drift, champion/challenger → AI Lifecycle → Continuous governance
Vendor, third-party, procurement → Supply Chain / Vendor Risk → Accountability is non-delegable
GPAI, foundation model, systemic risk → GPAI Obligations → Two-tier obligations
Govern, Map, Measure, Manage → NIST AI RMF → Four functions and their boundaries
Innovation, ethics, self-regulation → OECD AI Principles → Balance innovation with ethics
Generative AI, LLM, deepfakes → Generative AI → Accuracy, downstream harms, content responsibility
I - IDENTIFY the Root
Data (most upstream)
  → Model Design
    → Model Training
      → Testing & Validation
        → Deployment
          → Monitoring (most downstream)
"Most important factor" questions Pick the most UPSTREAM answer
"First step" questions Pick the EARLIEST in the lifecycle
"Primary concern" questions Pick the ROOT CAUSE, not the symptom
The "Most Important REASON" Exception
"Most important FACTOR in achieving X" Most upstream (root cause)
"Most important REASON for doing X" Most practical outcome (motivation)
"Most important STEP" Most upstream (foundational action)
"Primary PURPOSE of X" Most practical outcome (why organizations actually do this)
The Umbrella Rule
If one option is a broad category and others are specific examples, the broad category is usually correct.
The Independence Test
Ask: "If I fix Option X, does it fix the others?" - If yes, X is the root.
Ask: "If I fix Y, does X still exist?" - If yes, X is deeper.
D - DISQUALIFY the Traps
Trap 1: ABSOLUTES
Words like "always," "never," "all," "none," "only," "automatically," "any," "solely," "eliminate" - Absolute statements are almost always wrong in governance. Governance is contextual.
Trap 2: RESPONSIBILITY SHIFTING
Options that transfer all liability to one party: "the vendor is solely responsible," "the AI generated it autonomously" - Accountability is shared and non-delegable.
Trap 3: SINGLE SAFEGUARD
Options that claim one action resolves everything: "anonymize and all concerns are resolved," "add a disclaimer" - AI governance requires layered controls.
Trap 4: CORRECT CONCEPT, WRONG ROLE
The option describes a real obligation but assigns it to the wrong party. Always verify WHO the obligation belongs to, not just WHAT it is.
Trap 5: DISMISSING CONCERNS
Options that argue something isn't a problem: "public data has no privacy protections," "voluntary sharing equals consent" - If an option dismisses a governance concern entirely, it's almost certainly wrong.
Trap 6: DISPROPORTIONATE RESPONSE
Options suggesting extreme actions: "cancel the project entirely," "stop using AI," "delete everything" - Governance seeks proportionate responses. Mitigate and manage, don't abandon.
Trap 7: AGGREGATE METRICS
Options that rely on overall accuracy to justify deployment - Aggregate metrics mask subgroup failures.
Trap 8: CORRECT CONCEPT, WRONG PHASE
The option describes a valid activity but places it in the wrong lifecycle phase - Design activities don't belong in planning; monitoring activities don't belong in design.
Trap 9: CORRECT CONCEPT, WRONG FRAMEWORK
The option describes a real requirement but from a different regulation - NYC Local Law 144 requirements are not EU AI Act requirements. GDPR access rights are not GDPR transparency obligations.
Trap 10: CORRECT CONCEPT, WRONG RISK TIER
The option describes a real AI practice but assigns it to the wrong risk classification - Social scoring is prohibited, not high-risk. Resume screening is high-risk, not prohibited. Don't confuse tiers.
E - EVALUATE Against Core Principles

When stuck between two options, apply these tiebreaker principles:

# · Principle · What it means
1. People over performance → Fairness and rights beat accuracy and efficiency
2. Proactive over reactive → Prevent harm before deployment, don't wait and monitor
3. Upstream over downstream → Data and design fixes beat output-level patches
4. Specific over vague → Precise, actionable answers beat general statements
5. Layered over single → Multi-control answers beat single-safeguard answers
6. Context-dependent over absolute → "It depends" answers beat "always/never" answers
7. Accountability stays → You can outsource tasks but never accountability
8. Prevent over compensate → Prevention beats financial remedy
9. Principles over practice → Governance principles beat industry conventions
10. People-centered over org-centered → Impact on individuals beats organizational convenience
60-Second Per Question Routine
[10s] S - Read stem twice. Circle key words. What is it REALLY asking?
[10s] L - What domain? What lens should I apply?
[15s] I - What's the root? What comes first?
[15s] D - Scan for traps. Eliminate 1-2 options immediately.
[10s] E - Stuck between two? Apply tiebreaker principles.
The Knowledge Base

12 domains organized as principles and rules. Click any domain to expand the key concepts.

📌 Note: Content is organized by topic for study purposes and does not reflect the official AIGP exam domain structure.

D1
EU AI Act
Risk Classification · Roles · GPAI

Four Risk Tiers

Unacceptable (Prohibited): Social scoring, real-time biometric ID in public spaces, subliminal manipulation, emotion recognition in workplace/education (except medical/safety), predicting criminality solely from personality profiling, untargeted facial scraping.

High Risk: Biometric ID, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice (Annex III use cases; safety components of regulated products fall under Annex I).

Limited Risk: Transparency obligations apply regardless of risk tier - AI interaction disclosure, emotion recognition disclosure, AI-generated content labeling.

Minimal Risk: No specific obligations. Most AI systems fall here.

The word "solely" separates prohibited from high-risk. Predicting criminality solely from personality profiling = prohibited. Using factual data to assist human decision-makers = high-risk.

Roles

Provider: Creates compliance artifacts - CE marking, conformity assessment, technical documentation (Annex IV), risk management system, human oversight mechanisms, post-market monitoring.

Deployer: Verifies proper use - FRIA before deployment (public bodies/public service providers), logs/documentation retained 6+ months, human oversight during operation.

Importer: Verifies artifacts exist and are valid. Gatekeepers for non-EU providers.

Substantial modification of a system = becomes a provider and assumes all provider obligations.

GPAI & Systemic Risk

Tier 1 (all GPAI): Technical documentation (Annex XI, incl. training process and estimated energy consumption), information for downstream providers, copyright compliance policy incl. text/data mining opt-outs, and a publicly available summary of training content.

Tier 2 (systemic risk, in addition): Model evaluation incl. adversarial testing, assessment and mitigation of systemic risks, serious-incident reporting to the EU AI Office, and adequate cybersecurity protection.

Systemic risk threshold: >10²⁵ FLOPs OR Commission designation. GPAI obligations are parallel to - not overlapping with - high-risk system obligations.

Penalties

Prohibited practices: €35M or 7% global turnover. Provider breaches (high-risk): €15M or 3%. Misleading info: €7.5M or 1%.

D2
GDPR and Privacy
Purpose Limitation · DPIA · Controller

Seven Principles (Article 5)

Lawfulness/fairness/transparency · Purpose limitation · Data minimization · Accuracy · Storage limitation · Integrity & confidentiality · Accountability

Purpose Limitation - Most Tested GDPR Concept

Original consent for service delivery does NOT automatically cover AI model training. "Research & development" licenses may not cover commercial AI. Publicly available data ≠ blanket permission. Voluntary sharing in employment/insurance contexts ≠ valid GDPR consent.

Controller vs Processor

Controller determines purposes and means. Processor acts on behalf of controller. Roles are activity-specific, not entity-specific - same company can be both. When a processor uses data for its own purposes, it becomes a controller for that processing.

Controller obligations focus on HOW data is processed responsibly, not WHERE processors are geographically located. GDPR does not require processors to be in the same country.

DPIA Triggers (2+ = generally required)

Profiling and prediction · Systematic/extensive evaluation · Sensitive data · Automated decision-making with significant effects · Large-scale processing · Combining datasets

Automated Decision-Making Transparency

Controllers must proactively disclose: (1) existence of ADM, (2) meaningful info about logic, (3) significance and envisaged consequences. This is distinct from Article 15 access rights (reactive, upon request).

D3
AI Fairness and Bias
Data Root Cause · Disaggregated Metrics

Data Is the Root

Data attributes and variability are the most important factors for fairness. No amount of architectural sophistication can overcome fundamentally unfair training data.

What Counts as Bias

IS bias: Content of lower quality for a minority group · Stereotyped images of protected groups · Advertising that focuses on appearance over function for female audiences.

IS NOT bias: Directing ads to companies rather than individuals (business targeting decision) · Producing less content for a segment if determined by business scope, not demographics.

Diagnostic test: Does the content itself differ in quality, accuracy, or respectfulness based on a protected characteristic? Yes = bias. Deliberate business scope decision = not bias.

Prioritization Order

1. Discrimination against protected groups (fairness + legal compliance) → 2. Regulatory compliance gaps → 3. Technical performance issues → 4. Aggregate metrics (accuracy) - last.
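To see why aggregate accuracy sits last, here is a minimal, hypothetical sketch of disaggregated analysis (the numbers and group labels are invented for illustration): the overall metric looks healthy while the per-group breakdown exposes a disparity.

```python
# Hypothetical illustration: overall accuracy can hide a subgroup failure.
import pandas as pd

results = pd.DataFrame({
    "group":   ["A"] * 90 + ["B"] * 10,
    "correct": [1] * 85 + [0] * 5 + [1] * 4 + [0] * 6,
})

print("Overall accuracy:", results["correct"].mean())      # 0.89 - looks acceptable
print(results.groupby("group")["correct"].mean())          # A: ~0.94, B: 0.40 - disparity exposed
```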

Bias by Lifecycle Phase

Planning/Design: Stakeholder involvement, feature selection, data collection planning (proactive).

Operational: Human oversight, disparity testing, performance monitoring. Human oversight is a deployment activity, NOT a planning activity.

D4
NIST AI Risk Management Framework
Govern · Map · Measure · Manage

Four Core Functions

Govern: Organizational foundation - policies, accountability structures, risk tolerance, culture. Cross-cutting, underpins all others. "How is our org set up for AI risk?"

Map: System-specific risk identification - context, risks, stakeholders for a specific AI system. "What are the risks of THIS system?"

Measure: Quantitative/qualitative risk assessment. "How severe are these risks?"

Manage: Respond, mitigate, monitor. "What do we do about them?"

Risk tolerance and risk appetite = Govern decisions (organizational), not Map (system-specific).

Seven Trustworthy AI Characteristics

Valid & Reliable · Safe · Secure & Resilient · Accountable & Transparent · Explainable & Interpretable · Privacy Enhanced · Fair with Bias Managed

"Tested and Effective" and "Commercially viable" are NOT NIST characteristics. NIST uses "Valid and Reliable." Commercial viability is a business concern, not a trustworthiness property.

ARIA Program

Assessing Risks and Impacts of AI - NIST's primary program to provide organizational resources for managing AI-specific risks. ARIA ≠ standard-setting ≠ interoperability initiative ≠ regulatory sandbox ≠ red-teaming program.

D5
AI Lifecycle Governance
Phases · Drift · Champion/Challenger

Lifecycle Phases

Planning: Objectives, governance approach, operational context. Does NOT include architecture selection.

Design: Architecture choice, algorithm selection, risk estimation, data approach. Architecture belongs here - needs planning outputs as inputs.

Data Preparation: Collect, clean, label, augment, de-duplicate.

Development: Build and train.

Testing & Validation: Evaluate performance, explainability, bias testing, TEVV.

Deployment: Launch; conformity assessment happens BEFORE deployment.

Monitoring: Disparity testing, drift detection, champion/challenger testing.

Data Subsets

Training = textbook (learns patterns) · Validation = practice quiz (prevents overfitting during dev) · Testing = final exam (never seen before, robust evaluation)
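A minimal sketch of how these three subsets are typically carved out in practice (scikit-learn shown for illustration; the 70/15/15 proportions and toy dataset are assumptions, not an exam requirement):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)   # toy dataset

# Split off 30%, then divide that remainder half-and-half into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=42)

# Training set   = "textbook":      the model learns patterns here
# Validation set = "practice quiz": tunes hyperparameters, guards against overfitting
# Test set       = "final exam":    never seen during development, used once for final evaluation
```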

Model Maintenance

Accuracy deterioration → run champion/challenger testing first (it is both diagnostic and solution-oriented). Retraining addresses data drift and concept drift and can be paired with hyperparameter tuning. Retraining does NOT fix interpretability - that requires architecture changes.
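One common way teams operationalize champion/challenger testing is sketched below; the accuracy metric, promotion margin, and model objects are illustrative assumptions, not a prescribed method.

```python
from sklearn.metrics import accuracy_score

def champion_challenger(champion, challenger, X_recent, y_recent, margin=0.02):
    """Illustrative sketch: score the incumbent (champion) and a retrained
    candidate (challenger) on the same recent, labeled data."""
    champ_acc = accuracy_score(y_recent, champion.predict(X_recent))
    chall_acc = accuracy_score(y_recent, challenger.predict(X_recent))
    # Promote only if the challenger beats the champion by a meaningful margin;
    # otherwise keep the champion and investigate drift or data-quality issues.
    return challenger if chall_acc >= champ_acc + margin else champion
```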

Documentation Purpose by Phase

Development = preserves design decisions · Post-testing = verifiable audit trail · Post-deployment = captures current state · Post-incident = demonstrates due diligence

D6
Privacy-Enhancing Technologies
PETs · Federated Learning · Synthetic Data

Key PETs

Federated Learning: Multi-party training without sharing raw data. Each party shares only model updates (gradients/weights). Best for multi-institution collaboration.

Differential Privacy: Adds calibrated mathematical noise to released results, providing a mathematical guarantee against membership inference attacks and training data reconstruction. (Contrast with homomorphic encryption, which protects data during computation but does not prevent identification from outputs.)

Homomorphic Encryption: Computation on encrypted data without decryption. Powerful but computationally expensive.

Data Anonymization: Removes PII. True anonymization is difficult to achieve, especially for health data; residual re-identification risk remains.

Synthetic Data: Generated from scratch to preserve statistical properties without being derived from real personal data. No residual re-identification risk. Best when real data is insufficient, consent unavailable, or sensitive data (health records) cannot be used directly.

When the exam presents a scenario where real data is insufficient and consent cannot be obtained - synthetic data is the answer. It is the purpose-designed solution, not a compromise.
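To make "calibrated mathematical noise" concrete, here is a minimal differential-privacy sketch (the toy dataset, counting query, and epsilon value are illustrative assumptions): Laplace noise scaled to the query's sensitivity is added before a result is released.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
ages = np.array([34, 45, 29, 62, 51, 41, 38, 55])   # toy dataset

def dp_count_over_50(data, epsilon=1.0):
    """Release a count with Laplace noise calibrated to sensitivity/epsilon.
    A counting query has sensitivity 1: one person changes the count by at most 1."""
    true_count = int(np.sum(data > 50))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(dp_count_over_50(ages))   # true answer is 3; the released value is randomized around it
```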
D7
Vendor and Third-Party Risk
Procurement · AUP · Accountability

Core Principle

Accountability follows deployment, not development. Deploying a third-party AI system = accountable for outcomes regardless of who built it.

Policy Hierarchy (Operational Relevance)

1. Acceptable Use Policy (AUP) - defines what is permitted (most operationally relevant, review first) · 2. Privacy Policy - data handling · 3. Security Policy - access controls · 4. Code of Conduct - behavioral norms

The AUP comes first because it determines whether the intended use is even permissible. If AUP prohibits a use, privacy/security policies are irrelevant.

Procurement Governance

"Trust us, it's audited" is never sufficient. Review terms of use and license agreements before using external data - most important first step. Finance and Legal are the most critical additional procurement stakeholders.

IP indemnification = standard contractual mechanism for IP protection. Creates vendor obligation to defend against third-party IP claims.

Deployer Responsibilities

Deployers are responsible for: ethical testing (outputs for fairness), technical performance, regulatory compliance. NOT responsible for: ethical design (provider's domain), system documentation (provider's obligation).

D8
Copyright and Intellectual Property
AI Authorship · Model Disgorgement · Open Source

Key Principles

AI cannot bear legal responsibility - liability flows to humans and organizations. Fair use for AI training is unsettled law, not a guaranteed defense. The entity that commercially benefits from AI-generated output bears responsibility for it.

Copyright for AI-Generated Content

Purely AI-generated content is generally NOT eligible for copyright protection in the US - human authorship required. However, when a human takes AI output and modifies it with sufficient creative expression, the resulting work may qualify. Key test: demonstrating human creative input. Prompts alone are generally insufficient to establish authorship.

Model Disgorgement

When a model was trained on improperly obtained data, disgorgement removes the effects of that data. Simply deleting the original data is insufficient - the model retains learned patterns. May require full model destruction + retraining, machine unlearning, or partial retraining from a checkpoint.

Open Source

Open-source licensing does NOT exempt high-risk AI systems from the EU AI Act. Risk profile determines regulatory treatment, not the licensing model.

D9
Regulatory Landscape
OECD · US Sectoral · Canadian AIDA

OECD AI Principles

Self-regulation model - voluntary, not legally binding. Balances AI innovation with ethical considerations. Five principles: inclusive growth & sustainable development · human-centered values & fairness · transparency & explainability · robustness & security & safety · accountability.

OECD Assessment Tool Types: Procedural (codes of conduct, governance committees) · Technical (auditing software, bias tools) · Educational (training programs, guidelines) · Analytical (risk assessments, impact assessments). Codes of conduct = procedural, not technical.

US Regulatory Approach

Sectoral approach, not comprehensive legislation. Key laws: anti-discrimination (Title VII, ECOA, ADA, ADEA) · privacy (CCPA/CPRA) · consumer protection (FTC Act) · NYC Local Law 144 for AI hiring tools. Product liability applies to vendors/manufacturers - NOT deployers of third-party AI.

Canadian AIDA

Minister of Innovation must be notified when a high-impact AI system causes or is likely to cause material harm. Notification trigger is harm-based, not deployment-based.

Framework Comparison

EU AI Act: prescriptive, risk-based classification, legally binding · NIST AI RMF: structured methodology, voluntary · OECD: principles-based, balance-oriented, voluntary · IEEE 7000-21: system design methodology, voluntary · HUDERIA: impact assessment tool.

D10
Generative AI Governance
RAG · Expert Systems · Content Responsibility

When to Use What

RAG: Frequently changing information in non-regulated, non-deterministic contexts.

Expert Systems: Regulated, rule-based decisions requiring accuracy, consistency, explainability, auditability (financial offers, insurance quotes, compliance decisions). Frequently changing data ≠ automatically use RAG - if rules are structured, expert system with updated rule base is better.

Classic ML models: When customization and avoiding vendor lock-in are the priorities.

Content Responsibility

Organizations using generative AI to produce content are responsible for that content. Using AI as a tool doesn't transfer professional responsibility to the tool.

Deepfakes Risk

Most significant risk = downstream harms (disinformation, non-consensual deepfakes, erosion of media trust, societal destabilization). Downstream harms is the broadest concept, outweighing narrower risks like copyright infringement.

Paid vs Free GenAI

Paid tools: convenient, extra privacy/security controls, frequent updates. Do NOT provide transparency into model decision-making. Do NOT eliminate data concerns.

D11
Foundational AI Terminology
Definitions · Model Types · Narrow vs Strong AI

Critical Definitions

AI Model: Program trained on data to find patterns → then applies patterns to new inputs. NOT a rule-based system (explicitly programmed logic, no training).

Machine Learning: Systems automatically improve from experience through predictive patterns. Key signal: "automatically improve from experience."

Inference: Process of using a trained model on new data. A process, not a model type.

Taxonomy: XAI vs Trustworthy AI vs Responsible AI

Explainable AI (XAI): Processes allowing users to understand and trust AI outputs. When a question asks about "making AI outputs understandable to humans," the answer is XAI.

Interpretable AI: Degree to which internal model mechanics can be understood. Narrower and technically facing, whereas XAI is user-facing.

Trustworthy AI: Full set of properties - valid, reliable, safe, secure, explainable, fair, privacy-preserving. Broader than XAI.

Responsible AI: Organizational commitment and culture. Broader than both. Aspirational, not a technical property.

Narrow vs Strong AI

Narrow AI: Specific well-defined task only. Self-driving cars, LLMs, image recognition - all narrow AI.

Strong AI / AGI: Full generalized human cognitive ability across all domains. Does not exist in any commercially deployed system.

Model Taxonomy

Discriminative: Classifies or labels existing data (random forests, SVMs, logistic regression). A model that classifies inputs is discriminative.

Generative: Creates new data (GANs, VAEs, LLMs, diffusion models). LLMs generate text, so they are generative.

Symbolic: Explicit logic and rules (expert systems, knowledge graphs). NLP is a domain of application, not a model type.

System Properties

Robust = withstands adversarial conditions ("despite") · Reliable = consistent under normal conditions · Resilient = recovers after disruption · Brittle = opposite of robust

AI-Unique Characteristics

Unique to AI: Autonomy, Adaptability. NOT unique: Automation (decades old), Speed & scale (all modern computing - it's a risk amplifier).

D12
Impact Assessments
People-Centered · FRIA · Subjects Covered

What Impact Assessments Evaluate

Impact assessments are fundamentally people-centered. They evaluate:

  • Fundamental rights (dignity, non-discrimination, privacy, freedom of expression)
  • Data protection (lawful processing, purpose limitation, data minimization)
  • Safety (physical, psychological, societal harm prevention)
Impact assessments evaluate impact on PEOPLE - not impact on the ORGANIZATION (business risks) or the TECHNOLOGY (technical metrics like toxicity, accuracy).

What They Do NOT Focus On

Organizational risk categories (third-party risk, model risk, legal risk) · Technical components (datasets, behavior, tooling) · Model quality metrics (toxicity, accuracy, development).

FRIA (Fundamental Rights Impact Assessment)

A deployer obligation - only for public bodies or private entities providing public services (banks, schools, hospitals, insurers). Must be conducted before putting high-risk AI into use.

Opt-Out Rights Factors

Risk to users (primary driver) · Feasibility of opt-out mechanisms · Cost of alternative mechanisms. Industry practice = least relevant factor for rights decisions.

The Master Rules

19 cross-topic principles that connect all domains. These are the governing logic behind the correct answer on almost every question.

RULE 01
Accountability Never Transfers
You can outsource tasks but never accountability. "The vendor is solely responsible" and "the AI generated it autonomously" are always wrong.
RULE 02
Data Is the Root of Everything
Fairness, bias, and quality problems trace back to data. Having data doesn't mean you can use it for anything. Public data still has privacy protections.
RULE 03
Purpose Limitation Is Always Primary
Original consent doesn't cover AI training. Context matters more than visibility. Review terms of use before using licensed data.
RULE 04
Roles Determine Obligations
Same entity can hold different roles for different activities. Substantial modification makes you a provider. Providers create compliance artifacts; importers verify them.
RULE 05
Proactive Beats Reactive
Impact assessments happen before deployment. Risk estimation belongs in design. Testing before deployment beats disclaimers after. Report incidents first, investigate second.
RULE 06
Aggregate Metrics Mask Problems
Overall accuracy never justifies subgroup disparity. Fairness requires disaggregated analysis. Disparity testing reveals what aggregate metrics hide.
RULE 07
Transparency Is Independent of Risk
EU AI Act transparency obligations apply regardless of risk classification. Three triggers always apply: AI interaction disclosure, emotion recognition disclosure, AI-generated content labeling.
RULE 08
The Two-Tier GPAI Structure
All GPAI providers have baseline obligations, including the copyright policy and training content summary. Systemic risk adds adversarial testing, systemic-risk mitigation, incident reporting, and cybersecurity. Two pathways: >10²⁵ FLOPs or Commission designation.
RULE 09
The Lifecycle Sequence Matters
Planning is strategic. Design is technical. Architecture = design, not planning. Impact assessments = pre-deployment. Governance continues post-deployment.
RULE 10
Territorial Scope Is Effects-Based
EU AI Act applies based on where effects are felt. Output used within EU triggers applicability. No EU nexus = no applicability. Company HQ doesn't matter.
RULE 11
Documentation Is the Backbone
Documentation survives personnel changes and enables governance continuity. System documentation = provider obligation. Testing documentation = verifiable audit trail.
RULE 12
Rights Over Convenience
Individual rights outweigh organizational convenience. Industry practice is least relevant for rights decisions. Cost doesn't override legal requirements.
RULE 13
Prohibited Practices Are Narrow
"Solely" separates prohibited from high-risk. Social scoring = banned. Resume screening = regulated. Emotion recognition in workplace = banned, medical context = excepted.
RULE 14
Know the Glossary Distinctions
Training teaches. Testing evaluates. Validation tunes. Robust withstands. Reliable is consistent. Resilient recovers. Inference = process. Cognitive learning ≠ ML type. AI model = trained; Rule-based = programmed.
RULE 15
Deployer Responsibilities Are Bounded
Deployers own: ethical testing, technical performance, regulatory compliance. Deployers don't own: ethical design (provider), system documentation (provider).
RULE 16
Strategy Before Technology
People, principles, and stakeholders come before platforms. An integrated compliance strategy is built through ethical consultation - not by procuring a software platform.
RULE 17
Accountability Needs Named Owners
Cross-functional teams create accountability, not just coordination. Defining roles and responsibilities is the primary accountability mechanism. Without named owners, governance has no enforcement point.
RULE 18
Policy Hierarchy Starts with Acceptable Use
AUP defines what is permitted - review first. Privacy policy governs how permitted uses handle data. Security policy governs protection. AUP gates all other policies.
RULE 19
Retraining Fixes Bias at the Root
Auditing and feature deletion treat symptoms. Only retraining with demographically balanced data corrects root cause. Synthetic data = solution when real data unavailable.
For "MOST IMPORTANT" / "PRIMARY" / "BEST"
1. "Most important FACTOR" → Pick the most upstream answer
2. "Most important REASON" → Pick the most practical outcome/motivation
3. "First step" → Pick the earliest lifecycle action
4. Fairness question → Follow the data, always
5. "Who is responsible" → Check the role first
6. Two good options → Pick the broader umbrella answer
7. Stakeholder-specific risk → Match risk to stakeholder's core responsibility
For "EXCEPT" / "NOT" / "LEAST LIKELY"
1. "EXCEPT" → Find the odd one out by category
2. "Least relevant" → Apply the "so what" test
3. "Least likely" → Find the positive outcome among negatives
4. "NOT" → Look for wrong role, phase, or framework
Trap Detection - Almost Always WRONG
Absolute language ("always," "never," "eliminate")
"Solely responsible" - accountability is shared
Single safeguard claims - governance requires layered controls
Dismisses a governance concern entirely
Disproportionate response ("cancel the project entirely")
Wrong risk tier (prohibited vs high-risk confusion)
Wrong framework (NYC law ≠ EU AI Act)
Technology as strategy (buying platform = compliance)
Product liability applied to deployers of third-party AI
Industry practice cited as justification for rights decisions
Tiebreakers - When Stuck Between Two
Rights vs convenience → Rights win
Proactive vs reactive → Proactive wins
Aggregate vs disaggregated → Disaggregated wins
Prevent vs compensate → Prevention wins
Industry practice vs principles → Principles win
People-centered vs org-centered → People-centered wins
Upstream vs downstream → Upstream wins
Specific & actionable vs vague → Specific wins
Context-dependent vs absolute → Context-dependent wins
Deterministic vs probabilistic → Deterministic wins (in regulated, rule-based contexts)
Final Exam Day Advice

10 reminders to carry into the exam room. These are not about content - they are about execution.

01
Read every question twice. The first read for understanding, the second for key words.
02
Don't argue with the question. Accept the premise and work within it. If the question places you at a specific lifecycle phase, don't rewind the clock.
03
Eliminate before selecting. Removing two wrong answers makes the decision between the remaining two much clearer.
04
Watch for role mismatches. More than any other trap, the exam assigns correct obligations to wrong roles. Always verify WHO before WHAT.
05
Don't confuse prohibited with high-risk. Social scoring is banned. Resume screening is regulated. These are different tiers with different answers.
06
Don't confuse frameworks. NYC Local Law 144 is not EU AI Act. GDPR transparency is not GDPR access rights. OECD principles are not NIST methodology.
07
Trust the framework, not your instinct. When your gut says one thing and the SLIDE framework says another, trust the framework.
08
Don't waste time on suspected unscored questions. You cannot reliably identify them. Answer everything with full effort.
09
Bold and capitalized words change everything. "EXCEPT," "NOT," "FIRST," "MOST," "LEAST" - missing these words leads to picking the opposite of the correct answer.
10
Match risks to roles. When the question asks about risk for a specific stakeholder, evaluate from that stakeholder's primary responsibility - not from general governance principles.
Practice Questions

15 scenario-based questions across all AIGP domains. Timed, with detailed explanations for every answer.

Ready to Test Your Knowledge?

15 AIGP-style questions covering EU AI Act, GDPR, Fairness, NIST, Lifecycle, GenAI, and more. Each question includes a full explanation.

⏱ 90 seconds per question
📋 15 questions
💡 Full explanations
📊 Score & review
Get in Touch

Let's Connect

I'm a Principal Consultant at Infosys with 20+ years in IT, focused on AI governance, agentic AI accountability, and helping practitioners navigate the evolving regulatory landscape.

Whether you have questions about AIGP preparation, AI governance in practice, or want to collaborate on thought leadership - reach out.
