Algorithmic Dissonance
- Andrea Mazingo
The whopper of a contradiction at the heart of AI governance
Mar 10, 2026

The biggest incoherence in the modern debate about AI is actually surprisingly simple. Publicly, AI is discussed as a safety problem. Institutionally, it is treated as an economic growth technology. Most of the confusion surrounding AI governance flows from this whopping contradiction.
If you listen to the public language of AI governance, the tone is cautious. Governments talk about AI safety, alignment, model evaluations, and responsible deployment. Conferences convene panels of ethicists and researchers to discuss bias, hallucinations, misuse, and long-term existential risks. National governments establish AI safety institutes. Regulators publish principles about transparency, accountability, fairness, and human-in-the-loop. Yada yada.
The vocabulary of the debate resembles that of other high-risk technological domains. Aviation regulators talk about safety. Nuclear engineers talk about containment and fail-safes. Pharmaceutical companies must demonstrate that drugs are safe before releasing them to the public. When policymakers and the commentariat discuss AI in this register, the implication is clear: this is a powerful and potentially dangerous technology that must be handled with care.
And yet, if that framing genuinely governed institutional behaviour, then surely we would expect to see a cautious and incremental approach to development. Deployment would be slow, capabilities would be properly tested before release, and governments would limit the scaling of large models until reliable safeguards were proven.
But, of course, that is precisely what is not happening.
If you look at what institutions are actually doing, it’s a very different story. Governments across the world are investing massive amounts of dough into AI development and adoption. The US is pouring billions into semiconductor manufacturing and advanced compute. China has declared AI leadership a national priority. The EU talks about technological sovereignty (not sure what that means, tbh, but it sounds impressive) and the UK wants to position itself as a ‘global AI hub’.
This is not the behaviour of institutions treating AI primarily as a safety risk at all. It is the behaviour of institutions treating AI as industrial policy.
Corporations are behaving in much the same way. Tech firms are in a perpetual race to release more powerful models. VC continues to pour cash into AI startups at extraordinary rates. Corporate leaders speak openly about productivity gains and competitive advantage. The dominant logic here is acceleration.
So we end up with two incompatible narratives operating simultaneously.
One narrative says AI must be handled cautiously because it introduces serious risks to society. The other says AI must be developed rapidly because it represents a once-in-a-generation economic opportunity.
What’s worse is that these two narratives coexist within the very same institutions. Governments that fund AI safety research also subsidise AI infrastructure. Technology companies that publish responsible AI principles are also competing intensely to release the most capable systems as quickly as possible.
When these two logics collide, one of them almost always wins.
Economic competition will always dominate abstract safety concerns. A government that slows development risks falling behind rivals. A company that delays a release risks losing market share to competitors. In a competitive system, the incentives reward those who move the fastest.
The result is something that yer sociologists call performative governance.
Institutions construct a layer of responsible language and ethical frameworks that signal caution and legitimacy. Advisory boards are formed, principles are published and institutes are created. These structures communicate that risks are being taken seriously.
But underneath that window dressing, the operational system continues to push toward rapid capability growth.
A useful analogy can be seen in the UK’s energy policy around North Sea oil and gas. A popular topic around these parts in the North East. Successive governments have presented the winding down of domestic extraction as evidence of their climate leadership. Hurrah. Reducing production within UK territory allows governments to signal their environmental responsibility and commitment to decarbonisation.
Yet the earth’s atmosphere does not give a shit where fossil fuels are extracted. If domestic production declines while demand remains broadly unchanged, the gap is simply filled by imports. Oil and gas are purchased from Norway, the United States, Qatar, or elsewhere. The emissions associated with burning the fuel remain largely the same; only the geography of extraction changes. Howzat for sleight of hand, Ed Miliband.
In other words, the policy can create the APPEARANCE of decarbonisation without reducing the underlying system’s dependence on fossil fuels at all.
This example illustrates something fundamental about institutional behaviour. Governments operate not only in the domain of material outcomes but also in the domain of signals. Policies serve communicative purposes as well as practical ones (more so, even). They demonstrate values, signal responsibility, and reassure us dimwitted voters that action is being taken.
AI governance follows a similar pattern.
Institutions publish safety principles, establish ethics boards, and convene international summits discussing responsible development. These activities signal caution and responsibility and ‘reassure’ the public that powerful technologies are being handled carefully.
But at the same time, those same institutions invest heavily in expanding AI capabilities, scaling compute infrastructure, and accelerating adoption across industry. The underlying system continues to move toward more powerful models and broader deployment.
The symbolic layer emphasises restraint. The operational layer emphasises acceleration.
This is the fundamental incoherence in the governance debate. We are trying to regulate a technology using the same institutions that are simultaneously committed to accelerating its expansion.
Until those incentives align, the discussion about AI risks will continue to oscillate between two competing claims: that AI is too powerful to release without careful safeguards, and that AI is too important to slow down.
Both claims can be defended. Both are sincerely believed by many of the people making them.
But they cannot both serve as the guiding principle of institutional behaviour at the same time. And until that contradiction is resolved, the gap between principles and behaviour will remain deep and wide.

