Why We Built It — and How It Works
- Andrea Mazingo
- 5 days ago
The Trump administration has made its position clear: there will be no public regulatory agency for artificial intelligence. No equivalent of the Public Company Accounting Oversight Board. No Sarbanes-Oxley moment for the AI era — at least not from this government, and not now.
We didn't wait.
The Institutional Coherence Initiative exists because governance vacuums don't stay empty. They fill, either with public infrastructure built in the interest of everyone, or with proprietary products built in the interest of those who can afford them. We chose to build the former before the latter became the only option.
What a public regulator would actually mean
Before Sarbanes-Oxley, corporate financial accountability was largely voluntary. Companies could claim sound governance while their books told a different story. Enron happened. WorldCom happened. The resulting legislation didn't ask corporations to be ethical. It required them to demonstrate that they were, through auditable, standardized, public reporting.
A public AI regulator would do the same thing for institutional decision-making. It would establish a floor: minimum standards for transparency, contestability, and accountability in AI-integrated decisions — hiring, termination, resource allocation, benefits, public-benefit administration. It would require that those standards be independently verifiable, not self-reported. And it would make the results public, so that the people most affected by those decisions could see, challenge, and contest them.
That regulator does not currently exist. ICI is building the infrastructure it would have created — openly, non-proprietarily, and without waiting for political permission.
How ICI works without legal enforcement authority
This is the question we hear most often, and it deserves a direct answer.
ICI cannot fine anyone. We cannot compel disclosure. We have no subpoena power. What we have is something that, over time, may be more durable: the architecture of trust.
Here is the mechanism:
Institutions that commit to ICI's framework publish standardized audit reports. Those reports are generated by the Coherence Checker — our open-source governance middleware — and cryptographically logged so they cannot be altered after the fact. They document how high-impact decisions were made, what reasoning was applied, what assumptions were surfaced, and whether the process was contestable by the people it affected.
Institutions that don't commit publish nothing verifiable. They can still claim to be ethical. They simply cannot prove it.
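To make "cryptographically logged" concrete: one common way to achieve this kind of tamper evidence is a hash chain, where each new report entry commits to the hash of the one before it, so altering any past record breaks every hash after it. The sketch below illustrates the general technique only; it is not the Coherence Checker's actual record format, and the field names are invented for the example.

```python
import hashlib
import json

def hash_entry(report: dict, prev_hash: str) -> str:
    """Deterministically hash a report plus its predecessor's hash
    (sorted keys give a stable serialization)."""
    payload = json.dumps({"report": report, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_report(log: list, report: dict) -> None:
    """Append a report entry, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    log.append({
        "report": report,
        "prev_hash": prev_hash,
        "entry_hash": hash_entry(report, prev_hash),
    })

def verify_log(log: list) -> bool:
    """Recompute every hash in order; any after-the-fact edit
    to an earlier entry invalidates the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["entry_hash"] != hash_entry(entry["report"], prev_hash):
            return False
        prev_hash = entry["entry_hash"]
    return True

log = []
append_report(log, {"decision": "termination review", "contestable": True})
append_report(log, {"decision": "benefits allocation", "contestable": True})
assert verify_log(log)

# Quietly rewriting a past report is detectable:
log[0]["report"]["contestable"] = False
assert not verify_log(log)
```

The key property is that verification requires nothing but the log itself: anyone holding a published copy can recompute the chain and detect retroactive edits, which is what makes the reports verifiable rather than merely self-reported.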
And this is where you come in.
For this to work, society must choose. Workers who have options must choose to work for institutions that publish coherent audit reports over those that don't. Consumers and clients must factor institutional accountability into their decisions. Philanthropic funders, impact investors, and institutional partners must make ICI compliance a condition of their relationships. Journalists must ask, routinely, whether an institution's AI governance is auditable, and report when it isn't.
Supply and demand becomes governance when legal enforcement is absent. Trust becomes the currency that coherent institutions earn and incoherent ones spend down. This is not a utopian theory. It is how every major accountability norm in modern institutional life began: as voluntary, then expected, then required.
The existing AI governance industry: a necessary conversation
There is a growing industry of AI governance professionals: consultants, compliance officers, ethics researchers, policy advocates, and nonprofit watchdogs. Many of them are doing genuinely important work. Many are also operating within a structural conflict of interest that deserves to be named honestly.
Proprietary AI governance products are, by definition, owned by someone. That someone has a financial interest in the continued complexity and opacity of AI governance, because complexity and opacity are what their product promises to manage. When governance infrastructure is proprietary, the institution paying for it controls what gets surfaced, what gets buried, and what gets reported publicly. The people most affected by AI-mediated decisions rarely have access to those reports at all.
Nonprofit advocacy and research organizations face a different but related constraint: their value lies in identifying problems and pushing for policy solutions. That work is essential. It is not the same as building the operational infrastructure that makes accountability enforceable at the level of individual institutional decisions.
ICI is not a competitor to these organizations. We are a different category of thing: public infrastructure, in the same way that roads are public infrastructure. Roads don't compete with car companies. They make car travel possible for everyone, not just those who can afford private transportation.
What changes when public infrastructure exists? The expertise of compliance professionals, ethics researchers, and governance consultants doesn't disappear. It finds a new application: implementing, adapting, and improving a shared public standard rather than building proprietary variations of it. The question shifts from "whose governance product should we buy?" to "how do we implement ICI's framework well in our specific context?" That is a legitimate, skilled, and compensable body of work.
The realization required is simply this: the goal was never to build a governance industry. The goal was always to make institutions accountable. Public infrastructure serves that goal more completely than any proprietary product can, and it creates more equitable access to the expertise needed to use it well.
What we're asking of you
If you lead an institution: commit publicly to ICI's framework and publish your audit reports. Not because you are required to, but because your workers, clients, and partners deserve to see that your stated values and your operational decisions are the same thing.
If you work inside an institution: ask whether your organization's AI-mediated decisions are auditable and contestable. Ask who can challenge them and how. Ask what gets documented. Those are not radical questions. They are the minimum standard of institutional accountability.
If you fund institutions: make ICI compliance a condition of your relationships. Not as a punitive measure, but as a signal that you take seriously the gap between stated values and operational reality.
If you work in AI governance professionally: we need you. Not to sell your product alongside ours, but to help build shared infrastructure that serves everyone. The expertise you've developed is exactly what implementation requires. The question is whether that expertise flows toward public accountability or toward private advantage.
And if you are simply a person navigating institutions that shape your livelihood, your access to resources, your dignity: know that this infrastructure is being built for you. You are the reason the window has to stay open long enough to get this right.
The Institutional Coherence Initiative is a public-interest project building open-source, non-proprietary governance infrastructure for AI-integrated institutions. No one owns it -- including us. Learn more at InstitutionalCoherenceInitiative.com.