When banks use algorithms to deny credit, federal law requires disclosure. When government uses algorithms to make decisions about your life, no such requirement exists.
For credit decisions, the Fair Credit Reporting Act requires:
- Notice when credit is denied based on information in a consumer report (an adverse action notice)
- Disclosure of the source of that information, plus the right to see and dispute it
- Under the companion Equal Credit Opportunity Act, the specific principal reasons for the denial
This applies even when AI makes the decision. Companies can't hide behind "the algorithm is too complex to explain."
For government algorithmic decisions, there is no equivalent requirement. AI is already being used for:
- **Public benefits:** Medicaid eligibility, SNAP approvals, unemployment fraud detection, housing assistance screening
- **Public safety & justice:** predictive policing, bail/parole risk scores, 911 call prioritization, emergency response routing
- **Licensing & permitting:** business license approvals, building permits, professional certification, inspection prioritization
- **Education:** school discipline algorithms, special education placement, college admissions screening
- **Healthcare:** hospital bed allocation, prior authorization, organ transplant priority, Medicaid service approvals
- **Immigration:** visa screening, asylum claim evaluation, enforcement prioritization
For most of these decisions: No disclosure that AI was involved. No explanation of what data was used. No clear path to challenge algorithmic errors. No accountability when the system is wrong.
Compliance pressure is forcing rapid AI deployment. HR1 requires states to cut Medicaid error rates or face federal funding clawbacks—states are turning to AI to meet impossible timelines. Similar dynamics are playing out across unemployment insurance, SNAP, child welfare, and housing.
The risk: Agencies will deploy what vendors promise will work. Caseworkers will defend decisions they didn't make and don't understand. People will lose benefits, housing, or custody due to system errors that are never identified.
The gap: We're importing private sector AI tools without building public sector accountability infrastructure.
What's missing: A lightweight standard for disclosure—something agencies can actually implement under pressure. Not aspirational principles. Not academic frameworks. Just: here's the template, fill it out, publish it, make authority visible.
**The Enablement Statement.** A public-facing disclosure, like a nutrition label for algorithmic authority. It tells people:
- That an AI system was involved, and what role it played
- What data the system used
- Who holds final authority over the decision
- How to challenge an error

Use when: AI participates in any decision affecting benefits, services, rights, or enforcement
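To make "here's the template, fill it out, publish it" concrete, here is a minimal sketch of what a machine-readable Enablement Statement might contain. Every field name below is a hypothetical illustration, not a published schema:

```python
from dataclasses import dataclass

@dataclass
class EnablementStatement:
    """Hypothetical machine-readable disclosure for one AI-assisted decision type."""
    system_name: str         # what the public sees on the label
    agency: str              # who operates the system
    decision_affected: str   # the benefit, service, right, or enforcement action
    ai_role: str             # what the AI actually does: flag, score, recommend, or decide
    data_sources: list[str]  # inputs the system draws on
    human_review: bool       # does a person review before the decision takes effect?
    appeal_process: str      # how to contest the outcome
    last_audited: str        # date of the most recent accuracy review, if any

# Filled-out example (all values invented for illustration)
statement = EnablementStatement(
    system_name="Benefits Screening Model (illustrative)",
    agency="Example State Health Department",
    decision_affected="Medicaid eligibility determination",
    ai_role="Flags applications for caseworker review; does not issue denials",
    data_sources=["application form", "wage records", "prior case history"],
    human_review=True,
    appeal_process="Request a fair hearing within 90 days, citing the system name above",
    last_audited="2025-01",
)
```

Published as a web page or JSON file, a record like this is something an agency can produce under deadline pressure, and something an advocate can audit.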
**The Taxonomy.** A classification system for AI authority levels, from tools that merely inform a human to systems whose output effectively becomes the decision.

Use when: Evaluating whether your AI system has appropriate safeguards for its authority level
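A sketch of what such a classification could look like. The level names below are assumptions for illustration (only "decision-grade" appears elsewhere in this document); the point is that each step up in authority should trigger stronger safeguards:

```python
from enum import Enum

class AuthorityLevel(Enum):
    """Illustrative authority levels; the project's actual taxonomy may differ."""
    INFORMATIONAL = 1   # surfaces data; a human reasons independently
    ADVISORY = 2        # recommends; a human decides and can freely override
    GATEKEEPING = 3     # filters or ranks which cases a human ever sees
    DECISION_GRADE = 4  # output stands as the decision unless someone intervenes

def required_safeguards(level: AuthorityLevel) -> list[str]:
    """Map authority to obligations: more authority, more safeguards (illustrative)."""
    safeguards = ["public Enablement Statement", "documented data sources"]
    if level.value >= AuthorityLevel.GATEKEEPING.value:
        safeguards += ["error-rate monitoring", "audit trail for every output"]
    if level is AuthorityLevel.DECISION_GRADE:
        safeguards += ["mandatory human review", "formal appeal path"]
    return safeguards
```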
**The Practice Standards.** Internal checklists for responsible deployment, covering procurement, pre-launch review, and ongoing monitoring.

Use when: Procuring AI systems, evaluating vendors, or monitoring deployed systems
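Checklists of this kind can even be enforced mechanically at procurement or launch time. A toy sketch, reusing the hypothetical EnablementStatement and example record from above:

```python
def deployment_checklist(s: EnablementStatement) -> list[str]:
    """Return unmet checklist items; an empty list means the disclosure is complete."""
    problems = []
    if not s.data_sources:
        problems.append("No data sources disclosed")
    if not s.appeal_process:
        problems.append("No appeal path published")
    if "decide" in s.ai_role.lower() and not s.human_review:
        problems.append("System decides outcomes without human review")
    if not s.last_audited:
        problems.append("No audit on record")
    return problems

# Usage: treat an incomplete disclosure as a launch blocker.
issues = deployment_checklist(statement)
assert not issues, f"Fix before deployment: {issues}"
```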
We already know disclosure works. FCRA/ECOA require it for credit decisions—even algorithmic ones. The Consumer Financial Protection Bureau explicitly said: creditors can't hide behind algorithmic complexity.
This project extends that principle: If an algorithm affects you, you have the right to know.
Not just for credit. For everything government does with AI:
Make authority visible. Make decisions contestable. Make systems worthy of trust.
| Role | How to Use This |
|---|---|
| State/Local CIOs & CDOs | Adopt as lightweight policy standard; require Enablement Statements for decision-grade systems |
| Agency Digital Teams | Use templates to design transparency into AI systems before launch |
| Procurement Staff | Use Practice Standards as contract requirements; ask vendors for completed Enablement Statements |
| Program Leadership | Use Taxonomy to classify systems; ensure staff understand AI's role vs. their authority |
| Advocates & Researchers | Use frameworks to evaluate government AI; push for disclosure adoption |
| Vendors | Complete Enablement Statements for your products; show how you meet these standards |