External Resources
The Institutional Coherence Initiative (ICI) draws on research and practice spanning AI safety and reliability, causal/generalization science, psychology, and institutional governance. We curate external resources here to (1) ground our framework in prior art, (2) clarify where our approach aligns or differs, and (3) provide auditable sources for key concepts such as robustness under shift, accountability mechanisms, and evaluation standards.
**Sources have not yet undergone internal verification for reliability.**
NIST AI Risk Management Framework (AI RMF 1.0) is a widely adopted, risk-based framework for governing AI across the lifecycle (govern, map, measure, manage).
EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the clearest example of a major jurisdiction operationalizing risk tiers, obligations, and accountability mechanisms for AI systems. Although ICI is non-regulatory, this is the baseline regulatory reality institutions will be navigating.
OECD AI Principles are a globally referenced set of principles and policy recommendations that help frame “public interest” and interoperability across jurisdictions without being tied to one country’s law.
Model Cards for Model Reporting (Mitchell et al.) proposes standardized "model cards" documenting intended use, performance across conditions and groups, and evaluation context: exactly the kind of institutional transparency primitive that supports coherence and reduces misuse.
"The Information-Theoretic Imperative and Compression Efficiency" examines predictive compression under distribution shift, arguing that systems pressured to remain accurate across changing conditions tend to develop more shift-stable, reality-tracking representations, and that this can be measured via excess predictive codelength (an "exception tax") under specified shift families.
This lens is relevant to ICI because it supports governance requirements that are measurable (report shift penalties and cross-environment regret), context-sensitive (define the relevant shift family for a deployment), and institutionally actionable (identify when organizations are relying on brittle patching rather than robust mechanisms).
Julia Haas et al., “A roadmap for evaluating moral competence in large language models” (Nature, published online Feb 18, 2026) argues that institutions need to move beyond testing whether LLMs output “morally appropriate” answers (moral performance) and instead test whether they do so for the right reasons—that is, by correctly identifying and integrating morally relevant considerations (moral competence).
The paper highlights three governance-critical obstacles: (1) the facsimile problem (models can imitate moral reasoning without genuine understanding), (2) moral multidimensionality (moral judgments depend on many context-sensitive moral and non-moral factors), and (3) moral pluralism (globally deployed systems must navigate legitimate value disagreement).
For ICI, the practical relevance is its recommended evaluation posture: use a suite of adversarial and confirmatory evaluations to justify—or limit—claims about moral capability, calibrate public trust, and avoid over-attributing "moral agency" to systems that may be pattern-matching.
- NIST AI Risk Management Framework (AI RMF 1.0)
- EU Artificial Intelligence Act (Regulation (EU) 2024/1689)
- OECD AI Principles (updated May 2024)
- Model Cards for Model Reporting, Mitchell et al.
- The Information-Theoretic Imperative and Compression Efficiency: Why Brains and Deep Networks Converge, Christian Dittrich, PhD, and Jennifer Kinne
- A roadmap for evaluating moral competence in large language models, Julia Haas, Sophie Bridgers, Arianna Manzini, Benjamin Henke, Joshua May, Sydney Levine, Laura Weidinger, Murray Shanahan, Kristian Lum, Iason Gabriel & William Isaac, Nature: https://www.nature.com/articles/s41586-025-10021-1
AI Governance Authorities

A.C. Ping’s “Why Good People Do Bad Things in Business…” argues that many unethical outcomes in organizations are not best explained by “bad people,” but by situational and systemic pressures, self-serving biases, and rationalizations that allow well-intentioned people to justify harmful actions. The paper emphasizes the gap between ethics in theory and ethics in action, and proposes a “moral intention” approach: institutions should focus less on teaching abstract ethical reasoning and more on helping people set clear moral intent (bounded by intrinsic values) and protect that intent from being neutralized by excuses, pressure, ambiguity, and post-hoc justification.
This supports ICI’s work by grounding a key institutional design premise:

- Transparency and conflict policies are not “nice-to-haves”—they are anti-rationalization infrastructure. When people and organizations face pressure, they can rationalize exceptions; governance needs explicit boundaries, disclosures, and correction mechanisms to keep intent aligned with outcomes.
- The paper also reinforces a systems lens (similar to Zimbardo-style thinking): unethical behavior often emerges from context, so governance architecture should address incentives, ambiguity, accountability, and decision processes—not just individual virtue.
“The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (Brundage et al., 2018) is a widely cited international report that synthesizes how AI can be misused across digital, physical, and political threat domains and proposes governance-oriented recommendations to forecast and reduce dual-use risks (e.g., integrating misuse considerations into research norms and proactively coordinating with relevant stakeholders). For ICI, it supports treating dual-use risk as a core institutional coherence issue—requiring disclosure, red-teaming, and cross-sector coordination mechanisms that don’t rely on individual virtue alone.
The ICRC’s work on autonomous weapon systems frames AI governance through international humanitarian law and civilian protection, emphasizing limits where effects cannot be sufficiently understood, predicted, or explained, and arguing for internationally agreed constraints to maintain legal and ethical acceptability. For ICI, it’s a clean reference for hard accountability boundaries (where “institutional coherence” demands clear prohibitions/limits, not just risk scoring) and for the principle that human responsibility cannot be outsourced to opaque autonomy in high-stakes contexts.
- Why Good People Do Bad Things in Business and the application of existentialism to minimise unethical outcomes, A.C. Ping, PhD, University Gold Coast, Australia
- The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Brundage et al. (2018)
- ICRC position and analysis on Autonomous Weapon Systems and IHL (International Committee of the Red Cross)
International Research

“Emotional Posture™: Unifying Positive States of Emotion and Energetic Laws to Increase Consciousness and Neurobiological Health” presents a holistic training approach that pairs specific positive emotional states (e.g., gratitude, acceptance, forgiveness, compassion, love) with “universal spiritual concepts,” aiming to improve participants’ well-being and functional life domains over a short training period. The study reports pre/post improvements in life satisfaction and self-reported functioning (notably cognition, mood, health, and social/relational domains), and describes results from a subset using biofeedback, including increased time spent in “high coherence” states and reduced time in “low coherence” states.
For ICI, the relevance isn’t “energetic laws” per se—it’s the institutional-design implication that governance capacity depends on human nervous-system and emotion regulation, especially under uncertainty, conflict, and high-stakes coordination. This resource can be cited as an example of a skills-and-practices orientation: cultivating stable pro-social emotional states and self-regulation may support more consistent ethical posture, better deliberation, and reduced reactive decision-making—important ingredients for institutional coherence work.
Ann E. Tenbrunsel & David M. Messick, Ethical Fading: The Role of Self-Deception in Unethical Behavior (2004), describes how unethical outcomes can arise when the moral aspects of decisions become obscured through self-deception and rationalization, allowing people to act against their values without fully recognizing it. For ICI, the paper is a foundational behavioral-ethics reference supporting governance design that keeps moral stakes visible—through transparency practices, conflict disclosure, decision discipline, and correction pathways that reduce “ethical fading” in high-pressure institutional environments.
L. Jason Anastasopoulos's article argues that modern information systems increasingly reward engagement and amplification rather than truth, expertise, or careful reasoning, weakening the institutions that help societies distinguish credible knowledge from noise. This aligns closely with ICI’s premise that system incentives can quietly distort institutional behavior even when individuals have good intentions. Both perspectives point to the need for structural mechanisms that preserve epistemic integrity and accountability, rather than relying on good judgment alone. In that sense, Anastasopoulos's analysis of the information ecosystem supports ICI's effort to build institutional infrastructure that helps organizations act coherently with their stated values and responsibilities.
The Four Agreements draws from the ancient Toltec tradition of Mesoamerica, an Indigenous philosophy concerned with cultivating personal coherence between intention, speech, and action. ICI is inspired by the spirit of this framework in asking what it would mean for institutions to pursue similar coherence structurally—making commitments verifiable, surfacing assumptions in decision-making, welcoming challenge as a path to improvement, and continuously learning from governance outcomes.
- Emotional Posture™: Unifying Positive States of Emotion and Energetic Laws to Increase Consciousness and Neurobiological Health, Colette Sinclair, PhD, Emotional Posture (2024)
- Ethical Fading: The Role of Self-Deception in Unethical Behavior, Ann E. Tenbrunsel & David M. Messick, Social Justice Research (2004)
- How Many Followers Would Plato Have?, L. Jason Anastasopoulos, Journal of Democracy (March 2026): https://www.journalofdemocracy.org/online-exclusive/how-many-followers-would-plato-have/
- The Four Agreements: A Toltec Wisdom Book, Don Miguel Ruiz (1997)
Psychology & Political Economy Research
