
Making Institutional Trust Verifiable for Humanity
 

The Sarbanes–Oxley Layer for AI Decision-Making

The next AI alignment challenge isn’t between machines and humans. It’s between institutions and the values they claim to uphold.

Today, organizations make high-impact decisions about technology, labor, public policy, and social systems inside governance environments that are fragmented, contradictory, and difficult to audit. Even well-intentioned leaders drift when incentives conflict and decision processes are opaque.

History shows what happens next. After the Enron and WorldCom scandals, the Sarbanes–Oxley Act forced corporations to build systems that made financial accountability verifiable. Companies didn’t just promise integrity—they built infrastructure that records, verifies, and audits decisions.

AI governance may be approaching a similar moment.

The Institutional Coherence Initiative (ICI) explores how institutions can build the next layer of governance infrastructure: tools that translate ethical commitments and regulatory frameworks into operational decision systems.

Our work focuses on public governance architecture that can:

• flag high-risk AI uses
• trigger structured review
• record decision authority and reasoning
• make governance processes auditable and contestable
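
To make that list concrete, here is a minimal sketch in Python of how the four capabilities might compose. The risk tiers, names, and data model are our illustrative assumptions, not ICI's actual architecture.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Illustrative risk tiers; a real deployment would map these to
    # regulation (for example, the EU AI Act's high-risk categories).
    HIGH_RISK_USES = {"hiring", "lending", "medical triage", "policing"}

    @dataclass
    class DecisionRecord:
        use_case: str
        authority: str        # who holds decision authority
        reasoning: str        # the recorded rationale
        high_risk: bool = False
        review_required: bool = False
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def govern(use_case, authority, reasoning, audit_log):
        record = DecisionRecord(use_case, authority, reasoning)
        record.high_risk = use_case in HIGH_RISK_USES   # 1. flag high-risk AI uses
        record.review_required = record.high_risk       # 2. trigger structured review
        audit_log.append(record)                        # 3 and 4. authority and reasoning land in an auditable trail
        return record

    audit_log = []
    decision = govern("hiring", "VP of People", "automate resume screening", audit_log)
    print(decision.review_required)  # True: routed to structured human review

In production the audit log would live in durable, tamper-evident storage rather than an in-memory list; the point is that risk flags, review triggers, authority, and reasoning all become explicit, inspectable fields.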

Much of the AI race is framed as a race for more compute and more data centers. But engineers who work with large language models know that a significant share of their effort goes into reconciling contradictory instructions from the human systems that govern those models.

AI systems often inherit the incoherence of the institutions that build them.

The institutions that succeed in the AI era will not only scale technology. They will scale coherent governance.

Institutional coherence may become the decisive competitive advantage of the AI era.


Coherent governance acts like compression, reducing contradiction so models can represent reality more efficiently.
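
One toy way to see the analogy (our illustration, with invented ledger contents): a ledger of decisions made under a single coherent rule is highly regular and compresses well, while a ledger of ad-hoc, contradictory rationales carries more entropy and compresses far less.

    import random
    import zlib

    random.seed(0)

    # Two toy decision ledgers (invented contents): one governed by a single
    # coherent rule, one by ad-hoc, contradictory rationales.
    RULE = "high-risk use -> structured review"
    ADHOC = ["exec override", "deadline pressure", "pilot exemption",
             "legacy carve-out", "revenue exception", "one-time waiver"]

    coherent = "\n".join(f"case {i}: {RULE}" for i in range(500))
    ad_hoc = "\n".join(f"case {i}: {random.choice(ADHOC)} #{random.randint(0, 9999)}"
                       for i in range(500))

    for name, ledger in (("coherent", coherent), ("ad-hoc", ad_hoc)):
        raw = ledger.encode()
        print(f"{name}: {len(raw)} bytes raw -> {len(zlib.compress(raw))} compressed")

Run it and the coherent ledger shrinks to a small fraction of its raw size while the ad-hoc ledger compresses far less; in the same spirit, a model operating under contradictory governance spends capacity memorizing exceptions instead of representing the rule.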

Click here to apply to become a part of ICI's founding pilot cohort
Upcoming Events

What’s Next: AI’s Impact on Workers & the Law

Founding Humanity Partner Andi Mazingo will be speaking at the National Employment Lawyers Association's Spring Seminar in Chicago, IL.

March 21–22, 2026

Recent Blog Posts

Coherent Computer Code

What We Are Building

  • Open-access report and governance toolkit

    The report will describe present systemic realities and delineate a post-extractive stewardship framework for AI-integrated organizations.

  • The Coherence Checker

    A prototype for auditing high-impact institutional decisions. The tool pauses high-risk decisions, autonomous or human, and routes them securely through an open-source middleware layer. That layer scans for linguistic markers of the neutralizations people use to rationalize poor-quality decisions (e.g., retaliation, assumption stacking), recognizes when a decision is shaped by optimization anxiety, and reviews historical decision ledgers for signs of moral drift. It then returns authority to humans: the decision advances to a review dashboard that names the specific fear pattern or assumption identified, and every step is logged cryptographically and automatically (a rough sketch of this flow follows the list). Crucially, we are building this as open-source, non-proprietary public infrastructure. We want no one to own it, because trust infrastructure cannot be proprietary.

  • Field-building

    An initiative advancing structurally enforceable AI governance.
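
As promised above, here is a rough sketch of the Coherence Checker flow in Python. The marker lexicon, function names, and hash-chained log format are our illustrative assumptions, not the prototype's actual interfaces.

    import hashlib
    import json
    from datetime import datetime, timezone

    # Illustrative marker lexicon; the real tool would need a far richer
    # model of neutralization language than a phrase list.
    NEUTRALIZATION_MARKERS = {
        "retaliation": ["they started it", "had it coming"],
        "assumption stacking": ["obviously", "everyone knows", "clearly"],
        "optimization anxiety": ["or we fall behind", "competitors will"],
    }

    def scan(rationale):
        """Return the marker categories detected in a decision rationale."""
        text = rationale.lower()
        return [category for category, phrases in NEUTRALIZATION_MARKERS.items()
                if any(phrase in text for phrase in phrases)]

    def log_entry(payload, prev_hash):
        """Append-only logging: each entry commits to the hash of the last."""
        entry = dict(payload, prev_hash=prev_hash,
                     ts=datetime.now(timezone.utc).isoformat())
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return entry

    rationale = "Ship it now, obviously, or we fall behind."
    flags = scan(rationale)
    status = "paused for human review" if flags else "cleared"
    entry = log_entry({"rationale": rationale, "flags": flags, "status": status},
                      prev_hash="0" * 64)
    print(status, flags)
    # -> paused for human review ['assumption stacking', 'optimization anxiety']

The chained hash is what makes the log auditable: altering any past entry invalidates every hash that follows, so reviewers can verify the ledger has not been quietly rewritten.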

Frequently Asked Questions
