The Right to Be Wrong
We usually treat being wrong as a personal failure or a moral stain. That reaction is understandable; errors can hurt people. But if we start from the fact that fallibility is non-optional, then error isn’t an exception to the human condition; it’s a constant. The meaningful distinction is not “right vs. wrong,” but how we behave after we discover we’re wrong.
This essay argues for a qualified right to be wrong: a cultural, organizational, and personal commitment to tolerate honest error so that we can surface it quickly and correct it. The right is paired with duties: to justify, to revise, to repair harm, and to improve our judgment. Without those duties, “the right to be wrong” decays into license. With them, it becomes a driver of progress.
Why a “right” at all?
Rights protect preconditions for other goods. Free inquiry requires the ability to state hypotheses that may be false. Scientific progress requires experiments that may fail. Entrepreneurship entails bets that may miss. Music and writing depend on drafts that don’t work until they do. If the penalty for wrongness is social exile, people will hide uncertainty, distort evidence, and double down on bad judgments. That undermines truth-seeking, safety, and ethics.
Psychology and organizational research back this up. Teams with psychological safety (where members can admit mistakes without humiliation) tend to detect problems earlier and learn faster. Error-based learning is real: recall often increases when feedback is immediate and specific. None of this implies that errors are costless; it implies that suppressing them is usually costlier.
The ethical limit: rights come with duties
“The right to be wrong” is not a blank check. It comes with responsibilities:
- Duty to justify: Offer reasons and evidence. A claim isn’t shielded merely because it’s sincerely held.
- Duty to revise: Update beliefs and policies when the evidence changes. Stubbornness isn’t integrity.
- Duty to repair: Acknowledge harm, apologize, and compensate when feasible. Good intentions don’t erase bad outcomes.
- Duty to learn: Extract process changes so the same mistake is less likely next time.
Kant’s three maxims of sound judgment
Immanuel Kant famously distilled everyday rationality into three maxims (often paraphrased):
- Think for yourself. Resist outsourcing judgment to authority or habit. Blind deference breeds prejudice.
- Think from the standpoint of everyone else. Practice perspective-taking (“enlarged mentality”). Ethical action requires imagining the other.
- Think consistently. Apply the same standards to friends and foes, and to yourself.
These aren’t abstract ornaments. They operationalize the duties above. Independent thinking makes revision possible. Perspective-taking uncovers the real effects of our errors on others. Consistency keeps us from special pleading when our tribe misfires.
Intention is not outcome
Conflating intent with result is a reliable way to never learn. In complex systems, well-intended actions produce side effects. A “friendly” policy can backfire; a harsh critique can prevent a larger failure. Learning requires separating the moral appraisal of motives from the empirical appraisal of consequences. You can keep the first and still update on the second.
Culture shapes judgment (and error)
Different cultures weight unity, difference, authority, and doubt in different ways. Where truth is treated as inward and self-authenticating, criticism feels like a personal attack; where truth is treated as intersubjective, disagreement is expected and structured. Dualistic purity ethics (“good people do good things”) discourages nuanced evaluation. A tragic view (good and bad entwined) makes room for unintended consequences. You don’t have to buy any single civilizational story to see the pattern: systems that tolerate articulated error generally achieve better safety reporting and often better outcomes than systems that punish it into silence.
Learning loops: from blame to improvement
Blame is about past deserts; learning is about future changes. We need both, but most institutions are oversupplied with the first and underbuilt for the second. Replace ad hoc finger-pointing with explicit loops:
- Pre-mortems: Before launching, list ways things could fail and how you’d detect them.
- Incident postmortems (blameless but accountable): Describe what happened, why it made sense at the time, contributing factors, and specific action items. Keep names only where necessary for responsibility, not humiliation.
- Error budgets: Allocate a quantifiable tolerance for failure (common in SRE). Force trade-offs to be explicit.
- Red-teaming & steelmanning: Attack your own proposal; also articulate the strongest version of the opposing view.
- Bayesian updates, explicitly: Write down your prior, the evidence, and your posterior. If you can’t say how new data changed your view, you probably didn’t update.
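The last loop above can be made concrete. Here is a minimal sketch of writing down a prior, the evidence’s likelihoods, and the resulting posterior; the hypothesis and all numbers are invented for illustration, not taken from any real incident.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given a written-down prior and the likelihood
    of the evidence under the hypothesis and under its negation."""
    joint_h = prior * p_e_given_h                  # P(H and E)
    joint_not_h = (1 - prior) * p_e_given_not_h    # P(not-H and E)
    return joint_h / (joint_h + joint_not_h)

# "Write down your prior, the evidence, and your posterior":
# H = "the outage was caused by the config change", E = "rollback fixed it".
prior = 0.30                 # belief before the postmortem evidence
posterior = bayes_update(prior, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(f"prior={prior:.2f} -> posterior={posterior:.2f}")  # prints prior=0.30 -> posterior=0.63
```

If you cannot fill in the two likelihoods, that itself is diagnostic: you have not yet said what the evidence would look like if you were wrong.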
Common failure modes (and counters)
- Identity capture: Beliefs fused with belonging make revision feel like betrayal. Counter: Praise updates; treat changed minds as strength.
- Moral licensing: Past virtue excusing present sloppiness. Counter: Separate track record from the claim at hand.
- Motivated reasoning: We search for confirming evidence. Counter: Pre-commit to disconfirming tests and decision criteria.
- Overconfidence: Calibration drift grows with status. Counter: Keep score; compare predicted probabilities with outcomes.
- Scope neglect: Big numbers numb empathy. Counter: Normalize by person-level or unit-level effects before judging policies.
Personal practice: earn your right to be wrong
- Write predictions. Even rough probabilities beat vibes. Review quarterly.
- Keep a decision log. What did you believe, why, and what would change your mind?
- Adopt reversible defaults. Prefer choices you can roll back cheaply while learning.
- Use checklists for high-stakes routines. They can reduce preventable errors, though not in every setting.
- Practice perspective-taking. Before acting, state how the affected party would describe the impact.
- Rehearse apologies. Acknowledge, own, repair, and specify what will change. No “ifs,” no “buts.”
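The first two practices above (written predictions and a decision log) can be combined into one habit: log a probability for each claim, record the outcome, and score yourself. A sketch using the Brier score follows; the log entries are invented for illustration.

```python
# A hypothetical prediction log: (claim, stated probability, did it happen?)
predictions = [
    ("Launch slips past Q3", 0.70, True),
    ("Churn stays under 5%", 0.60, False),
    ("Hire closes this month", 0.90, True),
]

def brier_score(log) -> float:
    """Mean squared gap between forecast and outcome.
    0.0 is perfect; always answering 0.5 scores 0.25."""
    return sum((p - float(happened)) ** 2 for _, p, happened in log) / len(log)

print(f"Brier score: {brier_score(predictions):.3f}")  # prints Brier score: 0.153
```

Reviewed quarterly, a falling score is evidence of learning; a flat one tells you which duty you are neglecting.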
Institutional practice: design for corrigibility
- Separate exploration from exploitation. Sandbox new ideas; guardrails for production.
- Rotate roles and expose dissent. Prevent monocultures; make disagreement visible and safe.
- Publish assumptions. Document models, thresholds, and kill-switches.
- Reward updates. Promotion and prestige should track demonstrated learning, not just initial success.
What about harmful speech and reckless action?
A predictable objection: Won’t a “right to be wrong” excuse negligence or enable harm? No. The right protects honest error made under reasonable care, coupled with the duties above. It does not shield willful ignorance, fraud, or repeated negligence. The boundary condition is due diligence: did the actor gather salient evidence, consult stakeholders, and choose a proportionate risk? If not, we’re not discussing tolerated fallibility; we’re discussing breach of duty.
Leadership’s paradox
Leaders fear that admitting error will undermine authority. The evidence suggests the reverse: when done competently and paired with corrections, admitting error often increases trust and team learning. Effects vary by context, but the performance you’re graded on is not “never wrong,” it’s “finds and fixes wrongs fast.” If your culture punishes the messenger, you will buy temporary comfort and pay with systemic failure later.
A closing stance
To grant each other the right to be wrong is not to lower standards. It is to raise them, by moving accountability from theatrics (denial, defensiveness, scapegoating) to mechanisms (revision, repair, and learning). Pair Kant’s maxims with modern error science and you get an ethic of corrigibility: independent thought, empathetic perspective, consistent rules, applied to a world where outcomes often surprise us.
We won’t eliminate mistakes. We can decide what they purchase: shame and stagnation; or knowledge and progress.