Personal reflection on what it really takes for an organization to be accountable for its artificial intelligence.
A note on this piece. The four-gap framework discussed below is drawn from the broader AI ethics and philosophy-of-technology literature, including work by Andreas Matthias and subsequent scholars who have extended and refined these distinctions. The AAISM Review Manual published by ISACA synthesizes this framework for AI security management practitioners, and the manual is what brought the framework to my attention in its current form. This piece is my own reflection, written in my own words, and is intended as professional commentary rather than as a reproduction, summary, or substitute for any certification material. Specific definitions, examples, and framings in the Review Manual remain the copyrighted expression of ISACA; nothing here reproduces that expression. This is personal commentary and not legal, compliance, or investment advice, nor does it represent the views of any employer or client.
Why This Matters
We are responsible entities. Our products and services shape citizen behavior, directly and indirectly. We owe compliance to the law of the land, and we owe the world an explanation when compliance fails. That enterprise framing is the right starting point — but it only becomes actionable when we understand that “responsibility” is not a single thing that AI disrupts. It is four distinct things, and AI disrupts each of them differently.
Every model card, bias test, explainability tool, and risk assessment your organization has deployed is trying to close some gap. If you cannot name which gap each control addresses, you cannot tell whether your program is coherent or whether it is a collection of fashionable artifacts leaving the real fractures untouched.
The Four Gaps
The AI ethics literature, developed over the last two decades by scholars including Andreas Matthias and those who followed, distinguishes four separate responsibility gaps that AI opens in the moral and governance machinery our societies have built over centuries.
Active responsibility is forward-looking — the duty to actively pursue right outcomes, not merely to avoid harm. The gap arises because the actors involved in designing or using AI may not be sufficiently aware of their own responsibility, or may lack the motivation to exercise it. The engineer optimizing for accuracy, the manager optimizing for deployment velocity — neither is malicious, but moral attention diffuses across a division of labor. Philosophers call this the problem of many hands.
Public accountability is the duty of public agents to explain their actions to the public. AI disrupts this by shifting discretionary power toward technical specialists, often employed by private vendors whose models are protected as trade secrets. When a public agency procures an algorithmic decision tool and the model is proprietary, discretionary public authority migrates into a system the public cannot inspect. This pattern has emerged across benefits administration, criminal justice, and immigration in multiple jurisdictions.
Moral accountability is the interpersonal duty to give reasons to specific affected individuals. Consider a credit officer recommending a loan denial whose rationale lives inside a model she cannot interrogate, or a case manager acting on a risk score she cannot defend when the affected person asks her why. This is the most epistemically intimate of the four gaps, where automation bias collides with the duty to give reasons.
Culpability is blameworthiness, grounded in the classical conditions of intention, knowledge, or control. AI produces cases of real harm that no individual could have predicted or prevented: a trading algorithm that destabilizes a market in ways no single quant foresaw, a content moderation system that silences a community through emergent behavior nobody designed. The victim is real; the moral apparatus for assigning blame returns an empty seat.
Four different fractures. Four different governance problems. Four different classes of control.
The Claim and Its Refinement
It is tempting to think that closing the active responsibility gap, because it is upstream of the others, automatically closes them. An organization that genuinely cares, the argument runs, will naturally produce accountability downstream.
The claim is half right. Without active responsibility, the other closures lose their substance; an organization without the will to be responsible produces only theater. But active responsibility alone is insufficient. A well-intentioned public agency still fails public accountability if its vendor contract shields the model. A morally serious doctor still cannot explain a black-box diagnosis without explainability infrastructure. A culturally responsible enterprise still faces the culpability gap when harm emerges from interactions no individual could have predicted.
The precise claim worth carrying: active responsibility is the necessary animating condition for closing the other three gaps, but each gap requires its own specific structural infrastructure. Active responsibility is the soil; the structures are the crops. Both are necessary; neither is sufficient on its own.
The Four Animating Factors
Active responsibility, treated as a single thing, is too coarse to be useful. Decomposed properly, it has four components.
Intention is the motivational commitment to right outcomes. Knowledge is the understanding of what the system does, what harms it could produce, and what duties are owed. Power to prevent is the organizational capacity — authority, decision rights, technical controls — to act on intention informed by knowledge. And courage is the willingness to bear the personal cost of doing what the other three indicate.
Moral philosophers since Aristotle have singled out courage as the virtue that enables the exercise of all the others under pressure. The same logic applies to organizations. Intention, knowledge, and power are easy in fair weather; it is under duress, when doing the right thing is costly, that you discover whether these capacities are real. In post-incident reports, the phrase that appears repeatedly is: “concerns had been raised but were not escalated.” The concerns existed. The knowledge existed. The authority existed. What was missing was the willingness to bear the cost of speaking against momentum.
Courage is what mature governance tries to make cheaper — through independent oversight, escalation channels that bypass pressure points, documentation requirements that distribute the burden of dissent, and incentive structures that align executive interests with governance integrity.
Transparency as Manifestation
Transparency, in trustworthy AI frameworks, is often treated as a technical achievement — model cards, disclosure notices, audit logs. Those artifacts matter. But they are the outward surface of something deeper.
Transparency is the manifestation of the four factors. An organization that does not intend to be transparent will not be. An organization that does not know what to disclose cannot be. An organization without the power to disclose is prevented from being so. An organization without courage offers only sanitized transparency. The transparency we see in the world is the visible expression of an invisible moral and organizational substrate.
This is why transparency theater is so recognizable: beautifully formatted artifacts revealing nothing material, public AI principles contradicting internal incentives. The artifacts exist; the factors behind them do not.
The Honest Limit of Measurement
A natural board question follows: how do we know these controls are actually working? The conventional answer proposes metrics — leading and lagging indicators. Useful, and I do not dismiss them.
But the honest answer is that no measure can truly capture the four factors at work behind the scenes. Intention, knowledge, power, and courage are the inputs that pass through every control, every metric, every artifact. They give measurements their meaning. Without them, the same metrics can indicate either substantive governance or sophisticated theater, and you cannot tell which from the numbers alone.
This is not an argument against measurement. It is an argument that measurement is a proxy, and the proxy is only as good as the underlying factors. The day you believe your dashboard has fully captured your program’s health is the day the program begins to drift, because the dashboard continues showing green while the underlying factors quietly erode.
The Synthesis
We arrive at an integrated picture that is neither purely virtue-ethical nor purely structural. The four responsibility gaps are the fractures AI opens in inherited governance machinery. Each gap requires its own structural closure: impact assessments for active responsibility, external transparency regimes for public accountability, explainability combined with workflow design for moral accountability, organizational accountability doctrines for culpability.
Beneath all of that, animating it, sit intention, knowledge, power, and courage. These factors cannot themselves be fully measured, because they are what gives measurement meaning. A mature AI security program builds the structural infrastructure each gap requires and cultivates the factors that make the infrastructure real rather than performative.
Everything begins in moral seriousness. But moral seriousness must be encoded into structure to survive organizational reality, and the structure must be designed with enough understanding of human and organizational behavior to actually work.
Programs survive the departure of well-intentioned individuals. Individuals animate programs in ways structure alone cannot. Both are true. Neither is enough.
This piece is personal reflection drawn from the public AI ethics literature and my own practitioner experience preparing for the AAISM certification. Readers interested in the original academic sources should consult the literature on the responsibility gap, beginning with Matthias (2004) and the subsequent expansions by Santoni de Sio, Mecacci, and others. Readers interested in AI security management as a discipline should consult ISACA’s own published resources and certification materials directly. Nothing in this piece constitutes legal, compliance, or investment advice, nor does it represent the views of any employer or client.
