When AI Pulls the Lever – Why Leadership Still Matters More Than Code
- Elizabeth Gilbert

Artificial intelligence has moved from labs into the heart of everyday life. It decides which patients get seen first, how freight moves across the country, which transactions are flagged as fraud, and which citizens get extra scrutiny. Algorithms now sit in the room where consequential decisions happen.
Because of this, regulators and standards bodies are building increasingly sophisticated AI rules: risk management frameworks, safety guidelines, bias tests, transparency requirements. These are essential. But they all share a blind spot: they focus mainly on what the system does, not on who is leading it.
When AI “pulls the lever” in a high‑stakes situation, the deeper question is not just whether the model behaved as designed. It is whether leaders created the conditions for ethical behavior in the first place.
The AI Trolley Problem Is Really a Governance Problem
The trolley problem is a familiar thought experiment: a runaway trolley is headed toward five people; you can pull a lever to divert it onto a track where it will kill one person instead. It’s not meant to have a comfortable answer. It’s meant to expose how we think about harm, responsibility and trade‑offs.
In AI, we face a version of this every day. An automated vehicle has milliseconds to choose how to brake or swerve. A clinical triage model must prioritize scarce resources. A fraud‑detection system may block legitimate users in the name of risk reduction.
The physics are straightforward: speed, mass, friction, probabilities of harm. The models can optimize against those variables. But physics can’t tell us whose risk “counts” more, or what level of trade‑off is morally acceptable. That is a governance question, and governance is a leadership function.
From Algorithms to the Humans Behind Them
Most current AI governance focuses on:
- Data quality and provenance
- Bias detection and mitigation
- Model robustness and security
- Documentation and explainability
All of this matters. But many high‑profile AI failures did not occur because there were no technical controls available. They occurred because leadership chose speed over safety, optics over honesty, or revenue over dignity.
Examples include:
- Deploying models in communities that were never meaningfully consulted
- Ignoring internal warnings about bias or unfair outcomes
- Treating explainability as a public‑relations problem, not as a duty to those impacted
In other words: the problem is not just flawed algorithms; it is flawed ethical leadership.
A Simple Lens: Compliance, Compassion, Culture
One way to bring leadership back into the center of AI governance is to ask three simple questions before any system goes live:
Compliance – Can we defend this?
- Are decisions traceable and documented?
- Can we reconstruct why the system acted as it did?
- Would our records stand up to regulatory, legal and public scrutiny?
Compassion – Does this respect people?
- How are those affected by the system treated before, during and after deployment?
- Do we prioritize harm reduction, psychological safety and clear communication?
- Would we accept this treatment if we or our families were on the receiving end?
Culture – Will we learn from this?
- Do people feel safe raising concerns or dissenting?
- Are incidents treated as opportunities to improve, or to punish and silence?
- Is there a habit of revisiting decisions as new information emerges?
These questions are simple, but they are not soft. They surface whether leadership is prepared to accept responsibility for AI decisions, not just the benefits.
Why Ethical Leadership Has to Be Measured
If leadership ethics remain “nice to have” values on a poster, they will be the first thing sacrificed under pressure. To matter, they need to be:
- Visible in how decisions are made
- Embedded in formal roles and responsibilities
- Measured with the same seriousness as safety, quality or revenue
That is the idea behind tools like the PillarMetric™ Ethical Leadership Scorecard: give leaders a structured way to assess their own readiness along dimensions such as ethical competence, transparency and long‑term accountability. The goal is not perfection. The goal is honest visibility.
When AI pulls the lever, the public will not accept “the algorithm did it” as an answer. They will look for the humans who designed, approved and governed that system. The organizations that prosper will be the ones whose leaders can show not just how the system worked, but why it reflected a thoughtful, accountable moral stance.