Clarification primitives sharpen vague inputs into precise ones before reasoning begins. They prevent the common failure of reasoning brilliantly from the wrong starting point.
Trigger: Vague goals, abstract plans, fuzzy success criteria
Input: An abstract concept, goal, or plan
Operation: Ask: what would this actually look like in practice, in specific observable terms?
Output: A concrete, actionable specification
Converts abstractions into specifics. “We want to improve customer satisfaction” is not a plan. “We want NPS to increase from 42 to 55 within 6 months by reducing first-contact resolution time” is. The operationalization reveals whether the goal is achievable, measurable, and agreed-upon, or whether it was a placeholder for agreement that doesn’t yet exist.
Known limits: Over-operationalizing can destroy nuance, forcing quantification on things that resist it. Apply judgment about which concepts genuinely need this treatment.
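The operationalization move can be sketched as a tiny data structure: a goal is "operationalized" when its observable fields are filled in. This is an illustrative sketch only; the field names (metric, baseline, target, deadline, mechanism) are assumptions drawn from the NPS example above, not a canonical schema.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class OperationalizedGoal:
    """A vague goal restated in specific, observable terms."""
    statement: str                     # the original abstract goal
    metric: Optional[str] = None       # what will be measured (e.g. "NPS")
    baseline: Optional[float] = None   # where the metric is now
    target: Optional[float] = None     # where it should be
    deadline: Optional[str] = None     # by when
    mechanism: Optional[str] = None    # how the change is expected to happen

    def missing_fields(self) -> list[str]:
        """Return the names of fields still left unspecified."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

vague = OperationalizedGoal("We want to improve customer satisfaction")
print(vague.missing_fields())
# ['metric', 'baseline', 'target', 'deadline', 'mechanism']

concrete = OperationalizedGoal(
    "Improve customer satisfaction",
    metric="NPS", baseline=42, target=55,
    deadline="6 months", mechanism="reduce first-contact resolution time",
)
print(concrete.missing_fields())  # []
```

Each missing field is a question to put back to whoever stated the goal; an empty list is what "achievable, measurable, and agreed-upon" looks like in this toy model.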
Trigger: Debates that keep going in circles; persistent disagreements
Input: A contested or ambiguous term
Operation: Nail down exactly what the key term means in this context, before proceeding
Output: A shared, precise definition that all parties can work from
Much of what looks like substantive disagreement turns out, on inspection, to be terminological. Two parties using the same word to mean different things will argue indefinitely and never converge.
High-value terms to pin: “success,” “fair,” “trust,” “strategy,” “commitment,” “culture.” These words carry enormous weight in conversations and almost never mean the same thing to both parties.
Known limits: Premature definitional closure can suppress legitimate ambiguity. Some productive conversations require holding multiple meanings simultaneously.
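The pinning move amounts to maintaining one shared definition per contested term and refusing to let a term be used undefined or silently redefined. A minimal sketch, with class and method names that are purely illustrative:

```python
class SharedGlossary:
    """Shared, pinned definitions for contested terms in one conversation."""

    def __init__(self) -> None:
        self._defs: dict[str, str] = {}

    def pin(self, term: str, definition: str) -> None:
        """Record the agreed definition; reject silent redefinition."""
        if term in self._defs and self._defs[term] != definition:
            raise ValueError(
                f"'{term}' already pinned as {self._defs[term]!r}; "
                "renegotiate explicitly instead of silently redefining"
            )
        self._defs[term] = definition

    def meaning(self, term: str) -> str:
        """Look up a pinned term; fail loudly if it was never pinned."""
        if term not in self._defs:
            raise KeyError(f"'{term}' is contested and has not been pinned yet")
        return self._defs[term]

g = SharedGlossary()
g.pin("success", "NPS reaches 55 within 6 months")
print(g.meaning("success"))  # NPS reaches 55 within 6 months
```

The design choice worth noting is that redefinition raises an error rather than overwriting: changing a pinned meaning mid-conversation is exactly the failure this primitive exists to prevent.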
Trigger: When stated objections don’t quite make sense; when persuasion keeps failing
Input: A stated objection or resistance
Operation: Ask: What is the real concern behind the stated one?
Output: The actual objection, which can then be addressed directly
People rarely state their real objection directly, often because they don’t fully know it themselves, or because the real concern feels less legitimate than the stated one. “This will be too expensive” often means “I don’t trust you to execute this.” “The timing isn’t right” often means “I’m not convinced this is the right direction.”
Addressing the stated objection when the real one is different produces no movement. Finding the real objection and addressing that directly unlocks the conversation.
Protocol: When you’ve addressed an objection but the person remains unconvinced and moves to a new objection, you’re probably not at the real concern yet. Keep going.
Known limits: Requires care: telling someone their stated objection is not their real one can feel presumptuous or dismissive. Frame as curiosity, not diagnosis.
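The probing protocol above can be sketched as a loop: keep asking what sits behind the current concern until the answer stops changing, which signals you have likely reached the real objection. The function and the simulated objection chain are hypothetical stand-ins for a real conversation:

```python
def find_real_objection(ask, stated: str, max_depth: int = 5) -> str:
    """Follow a chain of stated concerns until one no longer deepens.

    `ask` stands in for the conversational move "what's the concern
    behind that?"; it maps a concern to the one underneath it.
    """
    concern = stated
    for _ in range(max_depth):
        deeper = ask(concern)
        if deeper == concern:   # the answer stopped changing
            return concern
        concern = deeper
    return concern              # best guess within the depth budget

# Simulated chain of objections, for illustration only:
chain = {
    "This will be too expensive": "I'm not sure the plan will work",
    "I'm not sure the plan will work": "I don't trust the team to execute",
    "I don't trust the team to execute": "I don't trust the team to execute",
}
print(find_real_objection(chain.get, "This will be too expensive"))
# I don't trust the team to execute
```

The `max_depth` budget encodes the known limit: probing has to stop somewhere, or curiosity turns into interrogation.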
Trigger: Before acting on any belief; when a situation surprises you
Input: A belief you are about to act on
Operation: Ask: Am I reasoning about reality, or about my model of reality?
Output: A flag when you need to go gather actual data vs. when to trust your model
“The map is not the territory,” in Alfred Korzybski’s formulation. Every mental model is a simplification of reality, and every simplification is wrong in some ways. The question is whether it is wrong in ways that matter for the decision at hand.
Most errors in complex situations are not logic errors; the reasoning from the model is often sound. They are map errors: the model itself is wrong, incomplete, or outdated. When a situation’s outcome surprises you, the right question is usually “what was wrong with my map?” rather than “what was wrong with my logic?”
Known limits: Can produce infinite regress; how do you know your check of the map is accurate? At some point you must act on your best available model. The check is a prompt to verify, not a reason to never act.
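The map-versus-territory check can be sketched as a simple staleness test on a belief: was it directly observed, and how long ago? The `Belief` fields, the source labels, and the 30-day threshold are all assumptions invented for the example, not a claim about how often any real model goes stale:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    source: str              # "observed" (territory) or "inferred" (map)
    days_since_checked: int  # time since it was last verified directly

def needs_fresh_data(b: Belief, max_age_days: int = 30) -> bool:
    """Flag beliefs resting on the map rather than recent observation."""
    return b.source != "observed" or b.days_since_checked > max_age_days

b = Belief("Users prefer the old layout", source="inferred",
           days_since_checked=120)
print(needs_fresh_data(b))  # True: go gather actual data before acting
```

Note that the check only ever returns a flag, never a refusal to act; as the known limit says, at some point you act on your best available model.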