Evaluation primitives assess options, plans, beliefs, and risks. They provide structured ways to judge quality before committing.
Trigger: Before any evaluation of a contested claim, plan, or position
Input: A position you are about to evaluate or disagree with
Operation: Construct the strongest possible version of the opposing view
Output: A more accurate model of the opposing position
Most people engage in straw-manning, reducing an opposing view to its weakest version so that it is easily defeated and dismissed. Steel-manning does the opposite: It requires you to construct the best possible version of the view you are about to disagree with.
The test: Could a sophisticated proponent of the opposing view read your steel-man and say “yes, that’s a fair representation of my position”? If not, you’re not done yet.
Two benefits: (1) prevents dismissing positions that are actually stronger than you realized; (2) forces genuine understanding rather than mere recognition and categorization.
Known limits: Can be weaponized by constructing a strong version of a bad position and then attributing it to opponents who don't hold it. The discipline is representing what opponents actually believe, not the strongest possible position in that space.
Trigger: Before committing to any significant plan
Input: A plan or decision
Operation: Assume it is one year later and the plan has failed. Ask: What caused the failure?
Output: A specific, actionable risk inventory
The pre-mortem is inversion applied to plans. By assuming failure as a given (rather than asking “could this fail?”), it bypasses optimism bias and generates more honest failure analysis.
Research by Mitchell, Russo, and Pennington, cited by Gary Klein in popularizing the pre-mortem, found that prospective hindsight (imagining an event has already occurred) increases the ability to identify reasons for future outcomes by 30%.
Protocol: Present the plan to a group. Ask everyone to imagine it is one year later and the project has failed spectacularly. Have each person independently write down the causes of failure. Share and compile. Use the list to redesign the plan.
Known limits: Can generate excessive caution if not balanced with upside analysis. Pair with an equivalent “pre-parade” exercise (assume spectacular success; what caused it?).
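The compile-and-rank step of the protocol can be sketched in a few lines. The participant responses and cause strings below are hypothetical, and the function name is illustrative:

```python
from collections import Counter

def compile_premortem(responses):
    """Compile independently written failure causes into a ranked risk inventory.

    responses: one list of cause strings per participant, written separately
    so no one anchors on the first speaker's answers.
    Returns causes sorted by how many participants named them.
    """
    counts = Counter()
    for causes in responses:
        # Count each cause once per participant, even if they repeated it.
        for cause in set(c.strip().lower() for c in causes):
            counts[cause] += 1
    return counts.most_common()

# Hypothetical responses from a three-person pre-mortem:
responses = [
    ["Key hire fell through", "Scope crept past the deadline"],
    ["scope crept past the deadline", "Vendor API was deprecated"],
    ["Scope crept past the deadline", "Key hire fell through"],
]
ranked = compile_premortem(responses)
# The top-ranked cause is the one named by all three participants.
```

Ranking by how many people independently named a cause is the point of the independent-writing step: convergence from separate lists is stronger evidence than agreement in open discussion.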
Trigger: Any consequential decision
Input: A decision under consideration
Operation: Ask: Is this a one-way door (irreversible) or two-way door (reversible)?
Output: A recommendation for how much deliberation process to apply
Most people apply the same level of scrutiny to all decisions regardless of reversibility, which is inefficient in both directions: reversible decisions get over-deliberated and irreversible ones get rushed. Two-way-door decisions should be made quickly, with permission to be wrong. One-way doors deserve slow, careful deliberation.
The hidden irreversibilities: Some decisions that feel reversible are one-way doors in disguise. Time is the most common: You can change jobs, but you cannot recover the five years spent. Reputation is another. Compounding is a third: early decisions constrain the entire subsequent option space in ways not obvious at the time.
Full protocol: (1) Is this reversible in principle? (2) Am I accounting for all dimensions of irreversibility, including time, compounding, and reputational effects?
Known limits: Not all irreversibility is bad. Some of the best decisions are irreversible commitments. The check determines process depth, not whether to proceed.
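The full protocol can be folded into a single check. This is a sketch, not a canonical rule: the one-year threshold and the flag names are illustrative assumptions.

```python
def deliberation_depth(reversible_in_principle, years_committed=0.0,
                       compounding_effects=False, reputational_stakes=False):
    """Recommend process depth from the two-part reversibility check.

    A decision that is irreversible in principle, or reversible only on
    paper (significant time, compounding, or reputational exposure),
    gets the slow track. Thresholds are illustrative.
    """
    hidden_irreversibility = (years_committed >= 1.0
                              or compounding_effects
                              or reputational_stakes)
    if not reversible_in_principle or hidden_irreversibility:
        return "one-way door: deliberate slowly and carefully"
    return "two-way door: decide quickly, with permission to be wrong"

# A job change is reversible on paper but sinks years you cannot recover:
deliberation_depth(True, years_committed=5.0)  # classified as a one-way door
```

The output is a process recommendation, not a verdict on the decision itself, matching the known limit above: the check determines deliberation depth, not whether to proceed.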
Trigger: Bets, investments, opportunities, commitments
Input: A proposed action with uncertain outcomes
Operation: Map the upside range and downside range; ask if they are proportionate
Output: A risk/reward profile; basis for expected value intuition
Look for situations where the upside is large and the downside is capped, or the reverse. The most dangerous decisions are those with small upside and catastrophic downside, where the asymmetry is hidden or ignored.
Nassim Taleb’s formulation: Seek optionality, situations where you have more upside than downside. Avoid situations where you bear the downside of variance without the upside.
Known limits: Asymmetry analysis requires honest probability estimation, which is hard. Avoid using “big upside” thinking to justify ignoring concrete, near-term downside.
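The asymmetry check reduces to expected-value arithmetic. A minimal sketch, with made-up probabilities and payoffs:

```python
def expected_value(outcomes):
    """Expected value of a bet given (probability, payoff) pairs.

    Probabilities must sum to 1. All numbers here are illustrative,
    and honest probability estimation is the hard part, per the
    known limit above.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# Optionality: capped downside, large upside. Usually lose 1 unit,
# occasionally win 50. Positive expected value despite frequent losses.
option_bet = [(0.9, -1), (0.1, 50)]

# The dangerous shape: small, frequent upside with rare catastrophic
# downside. Negative expected value despite winning 99% of the time.
hidden_risk = [(0.99, 1), (0.01, -500)]
```

The second bet illustrates why win rate alone is misleading: a 99% success rate still loses on average when the rare downside is large enough.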
Trigger: Time, capital, and attention allocation decisions
Input: A proposed commitment
Operation: Ask: What am I not doing, and not able to do, by doing this?
Output: The true cost of the commitment, including foregone alternatives
Every commitment carries a cost beyond its direct cost: the value of the next-best alternative foregone. Most people calculate direct costs carefully and ignore opportunity costs entirely.
The most important opportunity costs are often invisible: The project you didn’t start, the relationship you didn’t cultivate, the skill you didn’t develop, because something else was occupying that slot.
Known limits: Can produce paralysis if every option is evaluated against an idealized alternative. Set a decision threshold: Is the opportunity cost of this choice clearly worse than the realistic alternatives, not the ideal ones?
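The true-cost calculation can be written out directly. The dollar figures and the function name are illustrative:

```python
def true_cost(direct_cost, alternative_values):
    """True cost = direct cost + opportunity cost.

    alternative_values: the values of realistic alternatives foregone
    (not idealized ones, per the known limit above). The opportunity
    cost is the value of the next-best alternative.
    """
    opportunity_cost = max(alternative_values, default=0)
    return direct_cost + opportunity_cost

# A $10k commitment whose best realistic alternative was worth $15k
# actually costs $25k, even though the invoice says $10k.
true_cost(10_000, [15_000, 8_000])  # returns 25000
```

Passing only realistic alternatives is the paralysis guard: comparing against an idealized alternative inflates the opportunity cost of every choice.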
A belief that cannot, even in principle, be falsified is not a belief about the world; it is a closed system that protects itself from evidence. Popper's falsifiability criterion identifies this as the dividing line between science and non-science; it applies equally to everyday beliefs.
Unfalsifiable beliefs are common and dangerous: “everything happens for a reason,” “the market always recovers,” “they would never do that.” Each can absorb any contrary evidence without updating.
The test: State the belief. Now describe, concretely, what you would need to observe for you to conclude the belief is false. If you can’t, the belief is doing something other than tracking reality.
Known limits: Some valuable commitments (values, long-term relationships) are deliberately not falsifiable in the short run. Distinguish beliefs-about-the-world from commitments-about-how-to-act.