
STOP MANAGING PERCEPTION. START SHOWING YOUR WORK.

By Eaon Pritchard

April 1, 2026


For years, institutions, corporations and governments have been able to get away with the assumption that no one would ever really look too closely at how their decisions were made. As long as outcomes were delivered, the process could remain fairly opaque. Part policy, part judgment, part ‘don’t ask too many questions about how decisions get made, and we’ll keep the machine moving’. That era is clearly ending. Not because institutions have suddenly become less competent, but because the systems now acting on their behalf are exposing just how little of that decision-making was ever fully understood in the first place.


Then along comes Donald Trump. A man who bullshits with abandon, but who also, in the process, has ripped the veil off how much of institutional decision-making was always a mix of narrative, power, and selective transparency. He’s both the distortion and the x-ray - people might not trust him, but then they trust the system even less. And once that doubt sets in, it hangs around like a bad smell, waiting for something else to expose what’s really going on underneath.


Now, of course, we have AI starting to make important decisions faster than organisations can explain them. Hiring, firing, access, funding and more and more of the choices that affect people’s lives are being automated, but the logic (if there is any method) behind them is unclear or impossible to reconstruct, ‘known’ only to the black box. It’s not just that decisions are happening quickly, it’s that they’re happening without leaving a trail anyone can follow.


So, when people can’t see how decisions are made, or challenge them, why should they trust the outcome? This isn’t really about AI going wrong. It’s about trust breaking down.


What this also reveals is that - just like our friend the Donald - AI isn’t always creating the problem, it’s exposing it. These systems learn from the environments they’re built in. If an organisation says one thing but rewards another, if its decision-making is messy or contradictory, the AI absorbs that. So at ICI we’re starting to understand that the real alignment problem isn’t between humans and machines. It’s between institutions and the values they claim to stand for.


We’ve seen this before. After financial scandals like Enron, companies weren’t just told to behave better, they had to prove it. Systems were built to track, verify, and audit decisions. AI adoption should be pushing us toward a similar point. The organisations that succeed won’t just be those that build smarter technology. They’ll be the ones that build systems that make their decisions clear, consistent, transparent and accountable.

The answer isn’t more complexity. It’s better discipline.


Just the other day we couldn’t help having a chuckle at Karl Bode coining "CEO Said A Thing!" journalism. It’s basically a takedown of a very specific media habit. A (usually tech) CEO says something that has just popped into their head and it instantly becomes news. No context, no scrutiny, no memory of the last ten times they said something equally grand that never happened. Just a straight line from ‘a man with a load of money had a thought’ to ‘this is now a headline’. Wild claims about AI, space, disruption, whatever’s fashionable this week, get treated like weather forecasts rather than marketing. No one ‘circles back’ to see if it came true.


What’s needed is a clear, consistent way of making decisions visible, understandable, and testable. At the moment, too many decisions, especially those involving AI, are effectively stuck inside those black boxes. They produce outcomes, but not explanations. And when explanations do exist, they’re often stitched together after the fact, shaped to justify rather than reveal anything. 


The fix is surprisingly straightforward in principle, but it needs commitment in practice. Every decision should clearly show what it’s based on. What is actual evidence? What is assumption? What is interpretation? And what ‘kind’ of knowledge is being used? Is it data, experience, logic, belief? Not everything has to be empirical, but everything has to be honest about what it is.
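To make that concrete, here’s a rough sketch of what such labelling might look like if you actually wrote it down - the categories, field names and example claims are ours for illustration, not any established standard:

```python
from dataclasses import dataclass
from enum import Enum

class Grounding(Enum):
    """Illustrative kinds of knowledge a claim can rest on."""
    DATA = "data"              # empirical measurement or records
    EXPERIENCE = "experience"  # practitioner or domain judgment
    LOGIC = "logic"            # inference from other accepted claims
    BELIEF = "belief"          # value position or assumption, stated as such

@dataclass
class Claim:
    statement: str
    grounding: Grounding   # what kind of knowledge this is
    source: str            # where it came from
    is_assumption: bool = False

# A decision's basis, with each input honest about what it is.
# (Example claims are hypothetical.)
decision_basis = [
    Claim("Churn rose 12% last quarter", Grounding.DATA, "Q3 dashboard"),
    Claim("Customers mostly leave over pricing", Grounding.EXPERIENCE,
          "support team", is_assumption=True),
]
```

Nothing clever going on there, which is rather the point. The discipline is in the labelling, not the technology.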


This sounds a bit basic, but it forces a level of clarity most organisations don’t currently operate with. It removes the ability to blur the lines between ‘we know this’, ‘we think this’ and ‘we’d like this to be true’. That distinction matters more than ever in environments where decisions are being scaled and automated. AI doesn’t just amplify intelligence, it amplifies whatever structure (or lack of structure) it’s fed. If the underlying decision-making is inconsistent or opaque, the system will inherit that inconsistency at scale. And don’t get me started on model collapse.


From there, the second requirement is traceability. It’s not enough to say a decision was made ‘based on the data’ or ‘following policy’. We need a reconstructable path. Some record that shows how a conclusion was reached, step by step. What inputs were considered, how they were weighted, who had authority, and what alternatives were rejected. Crucially, this record needs to be understandable to humans, not just technically logged somewhere no one can interpret. If a decision affects people, those people should be able to see, in understandable terms, how it came to be.
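As a sketch only (the field names and the example are invented, not a formal schema), a minimal version of that reconstructable path might look like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A reconstructable trail for one decision. Illustrative, not a standard."""
    decision: str
    inputs: dict[str, float]          # each input and the weight it was given
    decided_by: str                   # who held the authority
    alternatives_rejected: list[str]  # options considered and dropped
    rationale: str                    # the path, in plain language
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# A hypothetical example, readable by the people it affects.
record = DecisionRecord(
    decision="Decline vendor renewal",
    inputs={"delivery record": 0.5, "cost trend": 0.3, "risk score": 0.2},
    decided_by="procurement lead",
    alternatives_rejected=["renew at lower tier", "extend three months"],
    rationale="Weighted inputs fell below the renewal threshold.",
)
```

The point is that every field answers a question a person affected by the decision might reasonably ask.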


And just as importantly, decisions need to remain open to challenge. That means building in the expectation that disagreement will happen and making space for it. Instead of rubbing out dissent to present a unified front, organisations should record it. Who disagreed and why? This isn’t weakness at all, it’s more like a guardrail. When dissent is visible, it becomes PART OF THE SYSTEM’S INTELLIGENCE. It shows where uncertainty exists, where assumptions might be fragile, where further evidence might be needed. When dissent is hidden, those same weaknesses don’t disappear; they just go underground, where they’re harder to detect until something breaks.
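Continuing the same hypothetical sketch from above, dissent can be a first-class part of the record rather than something scrubbed from the minutes:

```python
from dataclasses import dataclass, field

@dataclass
class Dissent:
    who: str
    why: str
    would_change_mind: str   # evidence that would resolve the disagreement

@dataclass
class ReviewedDecision:
    record: DecisionRecord   # the trail from the earlier sketch
    dissents: list[Dissent] = field(default_factory=list)

    def open_questions(self) -> list[str]:
        # Visible disagreement doubles as a map of fragile assumptions.
        return [d.would_change_mind for d in self.dissents]
```

Asking dissenters what would change their mind turns disagreement into a to-do list for evidence, rather than a threat to the unified front.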


What is intelligence, anyway? From an evolutionary standpoint, it’s not about always getting the ‘right’ answer. It’s about coping with uncertainty. Our ancestors had to make constant decisions with incomplete information (is that danger? can I trust this person? is this worth the risk?) and intelligence evolved as a way to make good enough calls under pressure, not flawless ones. Which is why the much-fabled AGI is a complete misnomer, but that’s something for another article…


A brain relies on shortcuts, pattern recognition, and gut judgments. It’s less about certainty, more about survival, about acting even when you don’t fully know what’s going on. Intelligence is what evolution built to help organisms act when they don’t have enough information and never will. The same goes for organisations.

At the centre of all this is a simple but demanding standard for truth (long established in science, to be fair). ANY CLAIM IS ONLY WORTH TRUSTING IF IT BOTH FITS THE AVAILABLE EVIDENCE AND COULD, IN PRINCIPLE, BE PROVEN WRONG. That second part is critical. If nothing could ever count against a claim then it isn’t tracking reality, it’s protecting itself. That’s how institutions end up convincing themselves of things that aren’t true, and then acting on them at scale. A system that enforces this standard across different types of knowledge - data, experience, philosophy, even intuition - doesn’t eliminate those perspectives, it disciplines them. It asks each to be clear about its grounding and its limits.
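Here’s one way that standard could be enforced mechanically - again a sketch under our own assumptions, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class TestableClaim:
    statement: str
    supporting_evidence: list[str]   # what the claim rests on
    would_be_wrong_if: str           # an observation that would falsify it

def admissible(claim: TestableClaim) -> bool:
    """A claim earns trust only if it is both evidenced and falsifiable."""
    return bool(claim.supporting_evidence) and bool(claim.would_be_wrong_if.strip())

# A claim with no failure condition is protecting itself, not tracking reality.
assert not admissible(TestableClaim("Our culture is strong", [], ""))
```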


This is where many governance efforts go wrong (and what we at ICI are trying to avoid). They assume the solution is to prioritise one type of knowledge over all others, usually hard data, or formal analysis. But real-world decisions are rarely that simple. Experience matters, context totally matters and human judgment matters. The goal isn’t to exclude those inputs, but to make them accountable. An experiential insight can be valuable, but it should be presented as experiential, not smuggled in as evidence. A philosophical position can guide a decision, but it should be recognised as a framework to be worked with, not a fact. When each type of input is clearly labelled and evaluated on its own terms, you get the benefits of pluralism without the confusion.


Instead of concentrating authority in a single mode of thinking, whether that’s technical expertise, executive judgment, or democratic input, this approach distributes responsibility across them. Different perspectives interact, challenge each other, and balance each other out. No single lens gets to define reality on its own, reducing the risk that blind spots in one area quietly dominate the outcome.


Then you can start to change the environment decisions are made in. Organisations become more internally coherent. The gap between what they say and what they do narrows, because contradictions are harder to hide. Incentives become more aligned with stated values, because inconsistencies are easier to spot and challenge.


We don’t just audit decisions, we begin to improve the conditions that produce them.

That matters enormously for AI going forward because these systems don’t operate in isolation. They are trained on, embedded in, and shaped by the institutional environments around them. If those environments are contradictory, opaque, or driven by unexamined assumptions, the systems will be like that too. But if those environments become more coherent then the systems built within them inherit that same coherence. They become easier to understand, easier to govern, and ultimately more trustworthy.


None of this works if it’s hidden. If the tools that enable this kind of accountability are locked away, proprietary, or only accessible to a few, they won’t rebuild trust; they’ll simply shift the opacity somewhere else. For this to have real impact, it has to be open, auditable, and shared. People need to be able to see not just the outputs, but the processes behind them.


A few years ago I saw Harvard Business School professor Michael Norton call it operational transparency. This is the idea that people don’t just want outcomes, they want to see how those outcomes were arrived at. Not a polished explanation after the fact, but visibility into the process itself. Because when you’re dealing with uncertainty - and that’s what intelligence is for in the first place - you’re constantly making bets. And people are far more willing to accept those bets, even imperfect ones, if they can see the logic behind them.


Which gets to the key point. Trust doesn’t come from being told the system is fair. It comes from being able to verify that claim.


So the shift we are calling for isn’t toward more sophisticated AI in isolation. It’s toward more coherent governance. Toward systems where decisions can be seen, understood, and challenged in real time. Where power is exercised in ways that are demonstrably aligned with stated values. And where the gap between intention and action is no longer something that we worry about later.


Get that right, and we don’t just make better decisions. We rebuild institutions into something that people can trust again.

