Belief formation heuristics provide structured processes for building accurate beliefs and updating them appropriately as new evidence arrives.
Trigger: All belief updating — whenever new evidence arrives
Composed From: Assumption Surfacing + Falsifiability Check + Confidence Calibration
Bayesian updating is the theoretically optimal belief-updating procedure. In practice it requires two disciplines most people never develop:
First: forming explicit prior probabilities. Most people hold beliefs without quantifying their confidence. Bayesian updating requires knowing your starting point.
Second: updating in proportion to evidence quality. The common errors are anchoring (updating too little) and overreacting (updating too much). Strong evidence from reliable sources warrants large updates. Weak evidence from unreliable sources warrants tiny updates. Most people update based on how much the evidence surprised them, or how much they wanted to believe it — neither of which tracks evidence quality.
A practical protocol: for any belief you hold with high confidence, ask what evidence would move you from 90% confident to 70%, and from 70% to 50%. If you can't answer, your updating process may be broken.
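The protocol above can be made concrete in odds form: convert the prior to odds, multiply by the likelihood ratio of the evidence, and convert back. A minimal sketch; the function names are illustrative, not from any library:

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after a Bayesian update in odds form.

    likelihood_ratio = P(evidence | belief true) / P(evidence | belief false).
    """
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def required_likelihood_ratio(prior: float, posterior: float) -> float:
    """How strong would evidence need to be to move you from prior to posterior?"""
    return (posterior / (1 - posterior)) / (prior / (1 - prior))

# Evidence needed to drop from 90% to 70% confident:
lr = required_likelihood_ratio(0.90, 0.70)
# lr is about 0.26 -- the evidence must be roughly 4x likelier
# under "belief false" than under "belief true" to warrant that drop.
```

Note the design choice: odds form makes "update in proportion to evidence quality" literal — the likelihood ratio is the evidence quality, and weak evidence (ratio near 1) barely moves the posterior.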
Known limits: Full Bayesian calculation is computationally impractical for most real-world decisions. Use as a thinking frame rather than a calculation procedure.
Trigger: Any prediction, forecast, or high-confidence belief
Composed From: Base Rates + Historical Precedent
Related Modules: Bayesian Updating, Overconfidence (bias), Reference Class Forecasting
Calibration is the alignment between stated confidence and actual accuracy. A well-calibrated person who says they are 80% confident is right about 80% of the time. Most people are systematically overconfident; their 80% confidence intervals contain the true answer only 50–60% of the time.
The practice: Make explicit predictions with stated confidence levels. Record them before the outcome is known. Track your accuracy rate at each confidence level. Adjust your stated confidence to match your actual accuracy rate.
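The tracking step above is simple enough to automate. A minimal sketch, assuming a hypothetical prediction log of (stated confidence, outcome) pairs:

```python
from collections import defaultdict

def calibration_report(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs.

    Returns {confidence_level: actual_accuracy}, so you can compare
    what you said against how often you were actually right.
    """
    buckets = defaultdict(list)
    for confidence, correct in predictions:
        buckets[confidence].append(correct)
    return {c: sum(results) / len(results) for c, results in sorted(buckets.items())}

# Hypothetical log: at "80% confident" this person was right only 60% of the time.
log = [(0.8, True), (0.8, True), (0.8, True), (0.8, False), (0.8, False),
       (0.6, True), (0.6, False)]
report = calibration_report(log)  # {0.6: 0.5, 0.8: 0.6}
```

If the 0.8 bucket shows 0.6 accuracy, the adjustment rule is mechanical: what you experience as "80% confident" should be stated as roughly 60%.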
Superforecasters (Tetlock’s research) consistently outperform experts not because they have better information, but because their confidence is better calibrated — they are more uncertain when uncertainty is warranted.
Known limits: Requires disciplined record-keeping. Calibration in one domain does not transfer automatically to others.
Trigger: Persistent debates; disagreements that won’t resolve
Composed From: Falsifiability Check + Distinctions + Variable Isolation
A crux is the single claim that, if false, would change your conclusion. Finding the crux of a disagreement identifies exactly where two positions diverge: the specific question that, if resolved, resolves the larger dispute.
1. State your conclusion.
2. Ask: what is the single most important claim this depends on?
3. Ask the person you're speaking with the same question about their position.
4. Find the crux: the claim where you disagree most fundamentally.
5. Direct the entire subsequent conversation at that crux.
The discipline: once you’ve named a crux, you must genuinely be willing to update if evidence shows the crux claim is wrong. “Moving the crux” (shifting what the “real” crux is after losing an argument about the first one) is a bad-faith move that destroys the process.
Known limits: Some disagreements don’t have a single crux; they involve genuinely different values or frameworks where no amount of factual resolution will produce agreement.
Trigger: Evaluating plans, forecasts, or beliefs you are personally invested in
Composed From: Base Rates + Role Shifting + Assumption Surfacing
The outside view asks: What do informed observers with no skin in the game think? It counteracts the inside view, the tendency to reason from the specific details of your own situation, which activates motivated reasoning and optimism bias.
The inside view feels more accurate because it uses more specific information. The research consistently shows it is less accurate because specificity enables rationalization.
Protocol: Before finalizing any significant forecast or plan, seek the outside view explicitly. Ask people with no investment in the outcome. Ask them to give you the base rate for plans like yours. Weight their view heavily.
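One way to make "weight their view heavily" concrete is to blend your inside-view estimate with the reference-class base rate. A sketch under stated assumptions: the function name and the default weight of 0.7 are illustrative choices, not an established formula.

```python
def blend_forecast(inside_estimate: float, base_rate: float,
                   outside_weight: float = 0.7) -> float:
    """Blend an inside-view probability estimate with the base rate
    for the reference class of similar plans.

    outside_weight is how heavily to weight the outside view; 0.7 is an
    assumption here -- tune it against your own calibration record.
    """
    return outside_weight * base_rate + (1 - outside_weight) * inside_estimate

# Inside view: "our project is 90% likely to ship on time."
# Base rate for similar projects: 30%.
p = blend_forecast(0.90, 0.30)  # 0.48 with the default weight
```

The point of the weighting is visible in the example: the specific, optimistic inside estimate is pulled most of the way toward the base rate, which is what "use as a strong prior" cashes out to.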
Known limits: Outside views can be wrong; genuinely novel situations may have no valid comparison class, and outside observers lack context that is genuinely relevant. Use as a strong prior, not a final answer.
Trigger: Before finalizing any strongly-held belief or dismissing an opposing view
Composed From: Steel-Manning + Role Shifting
Ask: What would I have to believe for the opposing view to be correct? Then evaluate whether those conditions are plausible.
This is subtly different from steel-manning. Steel-manning constructs the best version of the opposing view. The epistemic humility check asks whether the conditions under which that view is correct might actually obtain.
Known limits: Can be performed dishonestly; e.g., identifying implausible conditions as the only way the opposing view could be right, then using that to dismiss it. The check requires genuine engagement.