
AI Governance and Compliance

AI governance and compliance is about establishing the policies, procedures, and oversight mechanisms to ensure AI systems are developed and used responsibly. As AI adoption grows, so does regulatory scrutiny and the need for robust governance frameworks.

The AI regulatory landscape is evolving rapidly. Organizations using AI must navigate a complex patchwork of regulations that vary by jurisdiction, industry, and use case. Compliance is no longer optional—it’s becoming a legal requirement and a business imperative.


The EU AI Act is the world’s first comprehensive AI law, with phased application starting in 2024.

Risk-Based Classification:

| Risk Level | Description | Examples | Requirements |
|---|---|---|---|
| Unacceptable | Prohibited | Social scoring, real-time biometric ID in public spaces (narrow law-enforcement exceptions) | Prohibited |
| High | Strict obligations | Critical infrastructure, hiring, medical devices | Conformity assessment, risk management, data governance, logging, human oversight, accuracy/robustness/cybersecurity, CE marking (where applicable) |
| Limited | Transparency obligations | Chatbots, AI-generated content/deepfake labeling | Users must be informed they’re interacting with AI; AI-generated content must be labeled |
| Minimal | No restrictions | Spam filters, video games | No specific obligations (existing laws apply) |
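
As a rough sketch of how these tiers might be operationalized internally, the snippet below tags entries in a hypothetical AI inventory with a risk tier. The `RiskTier` and `AISystem` names are illustrative assumptions, and actual classification under the Act requires a documented legal assessment, not code.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers mirroring the EU AI Act's classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class AISystem:
    """One entry in an internal AI inventory (illustrative schema)."""
    name: str
    use_case: str
    risk_tier: RiskTier
    owner: str


# Example entries; the tier assigned here would come from a documented
# assessment, not from code.
inventory = [
    AISystem("resume-screener", "hiring support", RiskTier.HIGH, "HR Tech"),
    AISystem("support-chatbot", "customer Q&A", RiskTier.LIMITED, "Support"),
    AISystem("spam-filter", "email filtering", RiskTier.MINIMAL, "IT"),
]

high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print("Systems needing conformity assessment:", high_risk)
```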

Key deadlines:

  • Feb 2025: Prohibitions on unacceptable-risk practices apply
  • Aug 2025: Obligations for general-purpose AI (GPAI) models begin to apply
  • Aug 2026: Most high-risk obligations apply
  • Aug 2027: Extended deadline for high-risk AI embedded in regulated products (safety components)

The EU’s General Data Protection Regulation (GDPR) intersects with AI in several areas:

| Aspect | Requirement |
|---|---|
| Automated decision-making | Restrictions and safeguards on solely automated decisions with significant effects (Art. 22), including the ability to seek human intervention and contest the decision |
| Data minimization | Use only the data necessary for AI training |
| Purpose limitation | Train for specific, legitimate purposes |
| Individual rights | Data subject rights may apply to personal data used in training; operationalizing deletion/unlearning is complex |
| Data portability | Right to transfer data to another service |

The 2023 US Executive Order on Safe, Secure, and Trustworthy AI directs federal agencies to:

| Area | Action |
|---|---|
| Standards | NIST to develop AI safety and security standards |
| Reporting | Directs agencies to establish reporting requirements for certain advanced AI models/compute clusters (scope depends on final rules) |
| Guidance | Sector-specific guidance for AI use in critical infrastructure |
| Talent | Attract and retain AI talent in government |

Other major jurisdictions are taking varied approaches:

| Region | Status | Key Framework |
|---|---|---|
| UK | Pro-innovation approach | Voluntary codes of practice |
| China | Comprehensive regulations | Recommendation algorithm, deep synthesis, and generative AI measures |
| Canada | No federal AI Act yet | Voluntary code plus privacy and provincial rules |
| Singapore | Model AI Governance Framework | AI Verify Foundation |

AI governance is becoming a board-level responsibility.

| Consideration | Questions to Ask |
|---|---|
| Accountability | Who is ultimately responsible for AI outcomes? |
| Expertise | Does the board have sufficient AI literacy? |
| Oversight | How are AI decisions reviewed and challenged? |
| Risk tolerance | What level of AI risk is acceptable? |

Effective AI governance typically involves:

| Component | Description |
|---|---|
| Steering committee | Cross-functional group making AI policy decisions |
| Responsible AI office | Center of expertise for ethical AI practices |
| Review boards | Technical and ethical review of AI projects |
| Subject matter experts | Legal, ethical, technical advisors |

Clear roles and responsibilities support these structures:

| Role | Responsibilities |
|---|---|
| Executive sponsor | Overall accountability for AI initiatives |
| AI Ethics Committee | Review and approve high-risk projects |
| Data stewards | Ensure data quality and appropriate use |
| Model owners | Responsible for specific models in production |
| Compliance officer | Ensure adherence to regulations and policies |

The NIST AI Risk Management Framework (AI RMF) is organized as a four-part continuous cycle:

| Phase | Activities |
|---|---|
| Govern | Establish policies, procedures, and oversight |
| Map | Identify AI systems, categorize risks, set tolerance |
| Measure | Assess systems against mapped risks |
| Manage | Respond to risks with appropriate controls |

ISO/IEC 42001 is the first international standard for AI management systems.

| Focus Area | Key Requirements |
|---|---|
| Leadership and commitment | Executive buy-in, AI policy |
| Planning | AI impact assessment, risk management |
| Risk assessment | Systematic evaluation of AI risks |
| Controls | Implementing controls to address risks |
| Information sharing | Documenting and communicating AI risks |
| Continuous improvement | Monitoring and enhancing the AI management system |

Key categories of AI risk, with example mitigations:

| Risk Category | Examples | Mitigation |
|---|---|---|
| Reputational | AI makes offensive statements | Content filtering, review processes |
| Legal | Copyright infringement, privacy violations | Legal review, data governance |
| Operational | System failure, downtime | Monitoring, fallback mechanisms |
| Financial | Fines, penalties, remediation costs | Compliance programs, insurance |
| Ethical | Unfair treatment, discrimination | Bias testing, impact assessments |

A basic AI risk management process follows five steps:

  1. Identify AI systems in use
  2. Categorize by risk level and application domain
  3. Assess likelihood and impact of potential harms
  4. Mitigate with appropriate controls
  5. Monitor for new risks and effectiveness of controls
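
To make steps 2 and 3 concrete, here is a minimal sketch of a risk register that scores each identified harm by likelihood and impact. The `RiskEntry` schema, the 1–5 scales, and the multiplicative score are illustrative assumptions rather than a prescribed methodology.

```python
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    """A single row in an AI risk register (illustrative schema)."""
    system: str
    category: str          # e.g. reputational, legal, operational
    description: str
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    controls: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring; many organizations use a
        # risk matrix or weighted scheme instead.
        return self.likelihood * self.impact


register = [
    RiskEntry("support-chatbot", "reputational",
              "Model generates offensive content", 3, 4,
              ["content filtering", "human review of escalations"]),
    RiskEntry("resume-screener", "ethical",
              "Disparate impact on protected groups", 2, 5,
              ["quarterly bias audit", "impact assessment"]),
]

# Surface the highest-scoring risks first for review.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.system}: {entry.description} (score {entry.score})")
```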

Documentation underpins demonstrable compliance:

| Document Type | Purpose |
|---|---|
| AI inventory | Track all AI systems in use |
| Model cards | Document model capabilities, limitations, intended use |
| Impact assessments | Assess potential impacts before deployment |
| Risk assessments | Evaluate and document risk mitigation measures |
| Policies | Clear guidelines for acceptable AI use |
| Training records | Evidence of staff training on AI ethics |
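
As an illustration of what a lightweight model card might look like in practice, the sketch below stores capabilities, limitations, and intended use as a structured record kept alongside the model. The field names are assumptions loosely modeled on common model-card practice, not a formal schema.

```python
import json
from dataclasses import dataclass, asdict, field


@dataclass
class ModelCard:
    """Minimal model card capturing capabilities, limits, and intended use."""
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    known_limitations: list[str] = field(default_factory=list)
    evaluation_metrics: dict[str, float] = field(default_factory=dict)


card = ModelCard(
    model_name="resume-screener",
    version="2.1.0",
    intended_use="Rank applications for recruiter review; not for automated rejection.",
    out_of_scope_uses=["fully automated hiring decisions"],
    training_data_summary="Historical applications, 2019-2023, anonymized.",
    known_limitations=["Lower accuracy on non-English resumes"],
    evaluation_metrics={"auc": 0.87, "demographic_parity_diff": 0.03},
)

# Persist next to the model artifact so the documentation travels with it.
with open("resume-screener-card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```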

Documentation is complemented by ongoing compliance activities:

| Activity | Frequency |
|---|---|
| Model performance monitoring | Continuous |
| Bias audits | Quarterly or after significant updates |
| Security testing | Regular, especially for high-risk systems |
| Legal review | Before major deployments, when regulations change |
| Stakeholder feedback | Ongoing mechanisms for affected communities |
| Incident response | As needed, with post-incident reviews |
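
For a flavor of what a bias audit might automate, the sketch below compares selection rates across groups and flags the gap against a threshold. The demographic-parity metric and the 0.10 tolerance are illustrative choices; real audits should use metrics and thresholds defined in your impact assessment.

```python
from collections import defaultdict


def selection_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """Positive-outcome rate per group from (group, outcome) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}


def demographic_parity_gap(rates: dict[str, float]) -> float:
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())


# Toy audit data: (group, 1 if selected else 0)
audit_sample = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]

rates = selection_rates(audit_sample)
gap = demographic_parity_gap(rates)
THRESHOLD = 0.10  # illustrative tolerance; set via your risk assessment

if gap > THRESHOLD:
    print(f"ALERT: selection-rate gap {gap:.2f} exceeds {THRESHOLD}")
else:
    print(f"Selection-rate gap {gap:.2f} within tolerance")
```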

Common pitfalls to avoid:

| Pitfall | Consequence | Prevention |
|---|---|---|
| Waiting until regulations are final | Rushing compliance, catching up later | Start now, use frameworks as guides |
| Focusing only on technology | Neglecting process and people | Take a holistic approach |
| One-time compliance | Systems drift out of compliance | Continuous monitoring and updates |
| Ignoring international standards | Market access limitations | Consider global requirements from the start |

Key takeaways:

  • Regulatory landscape is evolving: the EU AI Act is the first comprehensive AI law, with other jurisdictions following
  • Risk-based approach: Most regulations use risk tiers (unacceptable, high, limited, minimal)
  • Corporate governance is essential: Board-level oversight, clear roles, accountability
  • Use established frameworks: NIST AI RMF, ISO/IEC 42001, and others provide structure
  • Compliance is ongoing: Not a one-time project but continuous process
  • Documentation is critical: Inventories, assessments, policies, and records are essential
  • Start now: Even if regulations aren’t final in your jurisdiction, begin preparing