The Algorithm Sees You.
You Should See the Algorithm.

When banks use algorithms to deny credit, federal law requires disclosure. When the government uses algorithms to make decisions about your life, no such requirement exists.

Learn More · See Examples

The Disclosure Gap

For credit decisions, the Fair Credit Reporting Act and the Equal Credit Opportunity Act require:

  • Notice that an adverse action was taken
  • The specific principal reasons for the decision
  • The source of the credit information relied on
  • The right to dispute inaccurate information

This applies even when AI makes the decision. Companies can't hide behind "the algorithm is too complex to explain."

For government algorithmic decisions, there is no equivalent requirement. AI is already being used in:

Public Benefits

Medicaid eligibility, SNAP approvals, unemployment fraud detection, housing assistance screening

Public Safety

Predictive policing, bail/parole risk scores, 911 call prioritization, emergency response routing

Licensing & Permits

Business license approvals, building permits, professional certification, inspection prioritization

Education

School discipline algorithms, special education placement, college admissions screening

Healthcare

Hospital bed allocation, prior authorization, organ transplant priority, Medicaid service approvals

Immigration

Visa screening, asylum claim evaluation, enforcement prioritization

For most of these decisions: No disclosure that AI was involved. No explanation of what data was used. No clear path to challenge algorithmic errors. No accountability when the system is wrong.

Why This Matters Now

Compliance pressure is forcing rapid AI deployment. HR1 requires states to cut Medicaid error rates or face federal funding clawbacks—states are turning to AI to meet impossible timelines. Similar dynamics are playing out across unemployment insurance, SNAP, child welfare, and housing.

The risk: Agencies will deploy what vendors promise will work. Caseworkers will defend decisions they didn't make and don't understand. People will lose benefits, housing, or custody due to system errors that are never identified.

The gap: We're importing private sector AI tools without building public sector accountability infrastructure.

What's missing: A lightweight standard for disclosure—something agencies can actually implement under pressure. Not aspirational principles. Not academic frameworks. Just: here's the template, fill it out, publish it, make authority visible.

Three Tools to Close the Gap

AI Enablement Statement

A public-facing disclosure, like a nutrition label for algorithmic authority. It tells people:

  • What the AI does (recommendation vs. automatic decision)
  • Where humans are in the loop
  • How to challenge a decision
  • How the system is monitored for errors

Use when: AI participates in any decision affecting benefits, services, rights, or enforcement. A machine-readable sketch of the statement follows below.

View Framework →
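
One way to make the statement concrete is to publish it as structured data alongside the human-readable page. Below is a minimal sketch in TypeScript of what a machine-readable Enablement Statement might look like; the field names and types are illustrative assumptions, not part of a published standard.

```typescript
// Hypothetical schema for a machine-readable AI Enablement Statement.
// Field names are illustrative assumptions, not a published standard.

type AuthorityLevel = "advisory" | "assistive" | "determinative";

interface AIEnablementStatement {
  systemName: string;             // e.g., "SNAP Eligibility Screener"
  agency: string;                 // the agency operating the system
  authorityLevel: AuthorityLevel; // recommendation vs. automatic decision
  decisionsAffected: string[];    // benefits, services, rights, or enforcement touched
  humanInTheLoop: string;         // where humans review or override AI output
  appealProcess: string;          // how to challenge a decision
  errorMonitoring: string;        // how the system is monitored for errors
  lastReviewed: string;           // ISO 8601 date of the most recent review
  publicUrl: string;              // where the statement is published, e.g., "/ai"
}
```

Publishing the same content as both a human-readable page and structured data lets advocates and auditors compare disclosures across agencies without scraping prose.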

Decision-Grade AI Taxonomy

A classification system for AI authority levels:

  • Advisory: AI recommends, human decides everything
  • Assistive: AI handles routine cases, human handles complexity
  • Determinative: AI decides, human reviews only on appeal (high risk)

Use when: Evaluating whether your AI system has appropriate safeguards for its authority level (see the sketch below)

View Framework →
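
To show how the taxonomy could drive safeguards rather than serve as a label, here is a sketch that maps each authority level to a minimum safeguard set and checks a system against it. The specific safeguard lists are assumptions for the sake of the example.

```typescript
// Illustrative mapping from taxonomy level to minimum safeguards.
// The three levels come from the taxonomy above; the safeguard lists
// are assumptions for the sake of the example.

type AuthorityLevel = "advisory" | "assistive" | "determinative";

const minimumSafeguards: Record<AuthorityLevel, string[]> = {
  advisory: [
    "staff trained to treat output as a recommendation only",
  ],
  assistive: [
    "documented routing rules for which cases humans must handle",
    "sampling audits of AI-handled routine cases",
  ],
  determinative: [
    "published AI Enablement Statement",
    "human-reviewed appeal path",
    "error-rate monitoring with defined pause thresholds",
  ],
};

// A system should have at least every safeguard its level requires.
function meetsBaseline(level: AuthorityLevel, inPlace: string[]): boolean {
  return minimumSafeguards[level].every((s) => inPlace.includes(s));
}
```

Encoding the mapping makes escalation automatic: reclassifying a system from assistive to determinative immediately raises the safeguards it must show.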

AI Practice Standards

Internal checklists for responsible deployment:

  • Pre-deployment: What to verify before launch
  • Vendor requirements: What to demand in contracts
  • Ongoing monitoring: What to track, when to pause
  • Red flags: When to pull back or shut down

Use when: Procuring AI systems, evaluating vendors, or monitoring deployed systems. A checklist-as-data sketch follows below.

View Framework →
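
Checklists are easiest to enforce when they live in version control next to the system they govern. A minimal sketch of a pre-deployment checklist as data, with hypothetical item names drawn from the categories above:

```typescript
// Hypothetical pre-deployment checklist as data, so it can be versioned
// and audited like any other config. Items are illustrative, drawn from
// the categories above rather than an official list.

interface ChecklistItem {
  id: string;
  question: string;
  blocking: boolean; // if true, launch must wait until this is resolved
}

const preDeployment: ChecklistItem[] = [
  { id: "taxonomy", question: "Has the system been classified under the Taxonomy?", blocking: true },
  { id: "statement", question: "Is an Enablement Statement drafted and ready to publish?", blocking: true },
  { id: "vendor", question: "Does the contract require vendor error reporting?", blocking: true },
  { id: "appeal", question: "Is there a documented path to challenge a decision?", blocking: true },
  { id: "metrics", question: "Are error metrics and pause thresholds defined?", blocking: false },
];

// Launch readiness: every blocking item must be resolved.
function readyToLaunch(resolvedIds: Set<string>): boolean {
  return preDeployment
    .filter((item) => item.blocking)
    .every((item) => resolvedIds.has(item.id));
}
```

The blocking flag is what turns the checklist from aspiration into a launch gate.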

The Credit Standard, Applied Everywhere

We already know disclosure works. FCRA/ECOA require it for credit decisions—even algorithmic ones. The Consumer Financial Protection Bureau explicitly said: creditors can't hide behind algorithmic complexity.

This project extends that principle: If an algorithm affects you, you have the right to know.

Not just for credit. For everything government does with AI:

Make authority visible. Make decisions contestable. Make systems worthy of trust.

Who This Is For

  • State/Local CIOs & CDOs: Adopt this as a lightweight policy standard; require Enablement Statements for decision-grade systems
  • Agency Digital Teams: Use the templates to design transparency into AI systems before launch
  • Procurement Staff: Use the Practice Standards as contract requirements; ask vendors for completed Enablement Statements
  • Program Leadership: Use the Taxonomy to classify systems; ensure staff understand the AI's role versus their own authority
  • Advocates & Researchers: Use the frameworks to evaluate government AI; push for disclosure adoption
  • Vendors: Complete Enablement Statements for your products; show how you meet these standards

Get Started

  1. Classify your systems using the Taxonomy (what authority does your AI actually have?)
  2. Draft Enablement Statements using templates and real-world examples
  3. Publish them where people use your service (/ai or /transparency); a minimal publishing sketch follows this list
  4. Share back so others can learn from your implementation
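
As a sketch of step 3, here is one way to serve a statement at a predictable path, assuming a Node/Express stack; the statement values are invented for illustration, and a static JSON file at the same path works just as well.

```typescript
// Minimal sketch: serve an Enablement Statement at /ai and /transparency.
// Uses Express as an assumed stack; values below are illustrative only.
import express from "express";

const app = express();

const statement = {
  systemName: "Unemployment Claim Triage",  // hypothetical system
  authorityLevel: "assistive",
  humanInTheLoop: "Caseworkers review every flagged claim before any denial",
  appealProcess: "File a written appeal within 30 days",
  errorMonitoring: "Monthly audit of a random sample of automated outcomes",
  lastReviewed: "2025-01-15",
};

// Serve the disclosure where people already use the service.
app.get(["/ai", "/transparency"], (_req, res) => {
  res.json(statement);
});

app.listen(3000);
```

Once published, anyone can fetch the disclosure, e.g. curl https://agency.example.gov/ai (hypothetical domain).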

View Frameworks · See Examples · Contribute