
Enterprise AI Governance Framework

A practical, implementation-ready AI governance framework aligned to the NIST AI RMF — built for enterprise architects, not compliance teams.

Most AI governance frameworks are policy documents. This one is code — automated risk scoring, lineage tracking, model cards, and audit tooling you can actually deploy.


What This Is

This repository provides the scaffolding for an enterprise AI governance program. It translates the NIST AI Risk Management Framework (AI RMF) and EU AI Act requirements into concrete, executable artifacts: risk scoring scripts, model documentation templates, data lineage validators, bias detection utilities, and a governance dashboard.

Built from experience implementing AI governance at federal agencies and large enterprises — where "responsible AI" has to mean something specific enough to audit, not just something aspirational to say.


Why AI Governance Fails (and What This Fixes)

Most enterprise AI governance programs fail for one of three reasons:

  1. Too abstract — principles and policies with no implementation path
  2. Too late — governance applied after deployment instead of during design
  3. Too siloed — compliance owns it instead of architecture

This framework embeds governance into the development lifecycle at four control points: data sourcing, model development, pre-deployment review, and production monitoring.
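The four control points can be sketched as a simple gate check. The stage names and required-artifact lists below are illustrative assumptions for this sketch, not the framework's actual schema:

```python
# Hypothetical mapping of lifecycle control points to required governance
# artifacts. The real framework would source these from its policy templates.
REQUIRED_ARTIFACTS = {
    "data_sourcing": ["lineage_record", "consent_verification"],
    "model_development": ["use_case_registration", "risk_tier"],
    "pre_deployment": ["model_card", "bias_audit_report"],
    "production": ["drift_monitor_config", "incident_playbook"],
}

def gate_check(stage: str, artifacts: set) -> list:
    """Return the artifacts still missing before this stage can pass its gate."""
    return [a for a in REQUIRED_ARTIFACTS[stage] if a not in artifacts]
```

A use case only advances to the next lifecycle stage when `gate_check` returns an empty list for the current one.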


Framework Structure (Aligned to NIST AI RMF)

NIST AI RMF Function    │  This Framework
────────────────────────┼──────────────────────────────────────────────
GOVERN                  │  governance_policies/        — policy templates
                        │  risk_taxonomy/              — risk classification
                        │  roles_and_responsibilities/ — RACI matrix
────────────────────────┼──────────────────────────────────────────────
MAP                     │  risk_assessment/            — automated risk scorer
                        │  use_case_registry/          — AI use case catalog
                        │  impact_assessment/          — AIIA templates
────────────────────────┼──────────────────────────────────────────────
MEASURE                 │  model_evaluation/           — fairness + performance
                        │  bias_detection/             — bias audit toolkit
                        │  data_quality/               — quality scorecards
────────────────────────┼──────────────────────────────────────────────
MANAGE                  │  model_cards/                — documentation templates
                        │  incident_response/          — AI incident playbooks
                        │  monitoring/                 — production drift alerts

Core Components

1. AI Risk Scorer (risk_assessment/risk_scorer.py)

Automated risk classification for AI use cases based on:

  • Data sensitivity (PII, PHI, financial, classified)
  • Decision autonomy (human-in-the-loop vs. fully automated)
  • Impact scope (individual, organizational, societal)
  • Domain risk tier (healthcare, finance, defense = high; internal analytics = low)

Outputs a risk tier (Critical / High / Medium / Low) with a detailed justification trace — making governance review decisions auditable.
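A minimal sketch of how such a scorer might combine the four factors. The weight tables and tier thresholds here are illustrative stand-ins for whatever `risk_taxonomy.yaml` actually defines:

```python
from dataclasses import dataclass, field

# Illustrative weights only; the shipped risk_taxonomy.yaml defines the real ones.
SENSITIVITY = {"public": 0, "internal": 1, "pii": 2, "phi": 3, "classified": 4}
AUTONOMY = {"human_in_the_loop": 0, "human_on_the_loop": 1, "fully_automated": 2}
IMPACT = {"individual": 1, "organizational": 2, "societal": 3}
DOMAIN = {"internal_analytics": 0, "finance": 2, "healthcare": 2, "defense": 3}

@dataclass
class RiskResult:
    tier: str
    score: int
    trace: list = field(default_factory=list)  # justification trace for auditors

def score_use_case(sensitivity, autonomy, impact, domain) -> RiskResult:
    score, trace = 0, []
    for label, table, key in [
        ("data sensitivity", SENSITIVITY, sensitivity),
        ("decision autonomy", AUTONOMY, autonomy),
        ("impact scope", IMPACT, impact),
        ("domain risk", DOMAIN, domain),
    ]:
        points = table[key]
        score += points
        trace.append(f"{label}={key} (+{points})")
    tier = ("Low", "Medium", "High", "Critical")[min(score // 3, 3)]
    return RiskResult(tier, score, trace)
```

Because every contributing factor is recorded in `trace`, a reviewer can see exactly why a use case landed in a given tier.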

2. Data Lineage Validator (data_quality/lineage_validator.py)

Integrates with Azure Purview to validate that:

  • Training data sources are documented
  • Data transformations are traceable
  • Consent and licensing are verified for each source
  • Sensitive data flags are propagated through the lineage graph
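A dependency-free sketch of those checks over an in-memory lineage graph. The node fields (`documented`, `consent_verified`, `sensitive`) are assumptions for this sketch; the real validator would pull the graph from the Purview API rather than a dict:

```python
def validate_lineage(nodes: dict, edges: list) -> list:
    """Check a lineage graph for governance gaps.

    nodes: name -> {"kind", "documented", "consent_verified", "sensitive"}
    edges: (upstream, downstream) pairs describing data flow.
    Returns a list of human-readable findings (empty means clean).
    """
    findings = []
    for name, node in nodes.items():
        if node.get("kind") == "source":
            if not node.get("documented"):
                findings.append(f"{name}: source undocumented")
            if not node.get("consent_verified"):
                findings.append(f"{name}: consent/licensing unverified")
    # Sensitive-data flags must survive every hop in the lineage graph.
    for upstream, downstream in edges:
        if nodes[upstream].get("sensitive") and not nodes[downstream].get("sensitive"):
            findings.append(f"{upstream} -> {downstream}: sensitive flag not propagated")
    return findings
```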

3. Model Card Generator (model_cards/card_generator.py)

Generates structured model documentation (following Google's Model Card spec + EU AI Act Article 13 requirements) from:

  • MLflow experiment metadata
  • Evaluation results
  • Training data provenance
  • Intended use and out-of-scope use declarations
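A minimal renderer showing the shape of the output. It takes plain dicts rather than a live MLflow run (the field names mirror what `mlflow.get_run()` exposes as `run.data.params` and `run.data.metrics`), and the section headings are a simplified subset of the full Model Card spec:

```python
def render_model_card(name, params, metrics, intended_use, out_of_scope):
    """Render a markdown model card from already-extracted run metadata."""
    lines = [
        f"# Model Card: {name}", "",
        "## Intended Use", intended_use, "",
        "## Out-of-Scope Use", out_of_scope, "",
        "## Training Parameters",
    ]
    lines += [f"- {k}: {v}" for k, v in sorted(params.items())]
    lines += ["", "## Evaluation Metrics"]
    lines += [f"- {k}: {v:.3f}" for k, v in sorted(metrics.items())]
    return "\n".join(lines)
```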

4. Bias Detection Toolkit (bias_detection/)

Fairness metrics across demographic groups using the Fairlearn library:

  • Demographic parity
  • Equalized odds
  • Individual fairness

Outputs an HTML fairness report with visualizations suitable for governance review boards.
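For intuition, demographic parity difference can be computed without any dependencies (Fairlearn exposes an equivalent as `fairlearn.metrics.demographic_parity_difference`). This stand-alone version works directly from predictions and group labels:

```python
from collections import defaultdict

def demographic_parity_difference(y_pred, sensitive_features):
    """Largest gap in positive-prediction rate across demographic groups.

    0.0 means every group receives positive predictions at the same rate;
    1.0 means one group is always selected and another never is.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(y_pred, sensitive_features):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)
```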

5. Governance Dashboard (dashboard/)

A Streamlit dashboard showing:

  • AI use case registry with risk tiers
  • Model inventory with governance status (approved / under review / deprecated)
  • Data quality scores by source
  • Open compliance findings
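A skeletal version of the dashboard's data path. The rows below are toy data and the column names are assumptions; the real dashboard reads from the use case registry and model inventory:

```python
def registry_rows(use_cases):
    """Flatten registry entries into rows a dashboard table can display."""
    return [
        {
            "use_case": uc["name"],
            "risk_tier": uc["tier"],
            "status": uc.get("status", "under review"),  # default until approved
        }
        for uc in sorted(use_cases, key=lambda uc: uc["name"])
    ]

def run_dashboard(rows):
    # Launched via `streamlit run`; Streamlit is imported lazily so the data
    # helper above stays usable (and testable) without it installed.
    import streamlit as st
    st.title("AI Governance Dashboard")
    st.dataframe(rows)
```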

Project Structure

02-ai-governance-framework/
├── README.md
├── framework/
│   ├── nist-ai-rmf-mapping.md          # How this framework maps to NIST AI RMF
│   ├── eu-ai-act-mapping.md            # EU AI Act compliance crosswalk
│   └── governance-lifecycle.md         # Governance at each SDLC stage
├── src/
│   ├── risk_assessment/
│   │   ├── risk_scorer.py              # Automated risk classification
│   │   ├── risk_taxonomy.yaml          # Risk category definitions
│   │   └── use_case_registry.py        # AI use case catalog management
│   ├── data_quality/
│   │   ├── lineage_validator.py        # Azure Purview lineage checks
│   │   ├── quality_scorer.py           # Data quality dimension scoring
│   │   └── pii_classifier.py           # Sensitive data detection
│   ├── model_evaluation/
│   │   ├── bias_detector.py            # Fairness metrics (Fairlearn)
│   │   ├── drift_monitor.py            # Production model drift detection
│   │   └── evaluation_harness.py       # Standardized eval pipeline
│   ├── model_cards/
│   │   ├── card_generator.py           # Auto-generate from MLflow metadata
│   │   └── templates/
│   │       ├── model_card_template.md
│   │       └── system_card_template.md
│   ├── incident_response/
│   │   ├── incident_classifier.py      # AI incident triage
│   │   └── playbooks/
│   │       ├── bias-incident-playbook.md
│   │       ├── drift-incident-playbook.md
│   │       └── data-breach-playbook.md
│   └── dashboard/
│       └── governance_dashboard.py     # Streamlit governance dashboard
├── policies/
│   ├── acceptable-use-policy.md
│   ├── data-governance-policy.md
│   └── model-lifecycle-policy.md
├── templates/
│   ├── ai-impact-assessment.md         # AIIA template
│   ├── governance-review-checklist.md
│   └── raci-matrix.md
├── notebooks/
│   ├── 01-risk-scoring-demo.ipynb
│   ├── 02-bias-audit-demo.ipynb
│   └── 03-lineage-validation-demo.ipynb
└── requirements.txt

Quick Start

# Install dependencies
pip install -r requirements.txt

# Score an AI use case
python src/risk_assessment/risk_scorer.py \
  --use-case "Automated loan approval" \
  --data-sensitivity high \
  --autonomy automated \
  --domain finance

# Run bias audit on a model
python src/model_evaluation/bias_detector.py \
  --model-path ./models/loan_model.pkl \
  --test-data ./data/test_set.csv \
  --sensitive-features gender,race,age

# Generate a model card
python src/model_cards/card_generator.py \
  --mlflow-run-id abc123 \
  --output ./model-cards/loan-model-v2.md

# Launch governance dashboard
streamlit run src/dashboard/governance_dashboard.py

Key Design Principle

Governance documentation that only exists in a SharePoint folder is not governance — it's paperwork. Every artifact in this framework is either auto-generated from system metadata or validated programmatically. If a governance check can be automated, it is.

