Mental Models You Actually Use
I've read about mental models for years. Munger's latticework. Farnam Street's catalog. Shane Parrish's book. I could name twenty models at a dinner party and explain each one clearly.
I couldn't tell you the last time I used one to make a decision.
That's the gap. Not knowledge — application. I had a library of frameworks in my head and a set of instincts in my gut, and the two rarely met. The models lived in books. My decisions happened in real time, under pressure, driven by emotion and habit. The latticework was decoration.
Collecting isn't thinking
There's a pattern I recognize in myself now: consuming frameworks as a substitute for doing anything with them. Reading about mental models feels productive. You're learning. You're sharpening your thinking. Except you're not — you're sharpening your vocabulary. The thinking happens when you're standing at a fork and you reach for a model instead of reaching for your default reaction.
Munger didn't catalog models for fun. He used them to make investment decisions with real money on the line. The list was a tool, not a trophy case. Somewhere between his approach and the internet's approach, the point got lost. We turned a practice into a genre of content.
I was a perfect consumer of that content. I bookmarked articles, highlighted Kindle passages, nodded along to podcasts. Meanwhile, every actual decision I made — whether to push back at work, how to handle a parenting conflict, when to say no to a commitment — ran on raw instinct. The models stayed on the shelf.
What changed
When I built my Life OS, I gave the system a catalog of models. Standard Munger stuff — inversion, second-order thinking, incentives, loss aversion, opportunity cost, feedback loops. About thirty in all, organized by discipline.
Then I added a rule: don't explain them.
The system doesn't teach me what inversion means. It uses inversion when I'm stuck on a decision and asks me what would guarantee failure. It doesn't define the sunk cost fallacy. It notices when I'm arguing to continue something based on what I've already put in rather than what I'll get out.
The difference matters. A definition is something you read. An intervention is something that changes what you do. I wanted the models applied, not displayed.
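That constraint is concrete enough to show. Here's a rough sketch of what a catalog entry could look like, in Python purely for illustration. Every name and field below is invented, not the actual system; the point is the shape. An entry carries triggers and questions, and nowhere does it carry a definition.

```python
# A hypothetical catalog entry. The design rule from above: a model is
# stored as cues plus questions, never as a definition to be recited.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    discipline: str
    # Cues in a live decision that suggest this model applies.
    triggers: list[str] = field(default_factory=list)
    # What the system asks when a trigger fires. Questions only.
    interventions: list[str] = field(default_factory=list)

CATALOG = [
    Model(
        name="inversion",
        discipline="mathematics",
        triggers=["stuck on a decision", "success criteria unclear"],
        interventions=["What would guarantee failure here?"],
    ),
    Model(
        name="sunk cost",
        discipline="economics",
        triggers=["arguing from what's already been invested"],
        interventions=["Ignoring what you've put in, what will you get out?"],
    ),
]
```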
The practice log
Here's the part I care about most. Every time a model gets used in a real decision, the system logs it. Date, situation, which model, what happened.
The log is empty right now. I just built this. But the design is what matters: over time, patterns will emerge. Some models will keep showing up — the ones that match how I actually think and the situations I actually face. Others will sit unused, intellectually interesting but practically irrelevant.
The goal is a short list. Not thirty models I can recite. Five or six that I reach for under pressure because they've proven useful in my life, with my patterns, in my specific situations. Munger's latticework, narrowed to the beams that bear weight.
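The log is the simplest part to sketch. Same caveat: the storage format and names below are illustrative, not the real system. Each use appends one record with the four fields above, and the short list falls out of a frequency count.

```python
# A hypothetical practice log: append-only JSON lines, one record per
# real use of a model. The short list is just the most-used entries.
import json
from collections import Counter
from datetime import date
from pathlib import Path

LOG_PATH = Path("practice_log.jsonl")  # invented filename

def log_use(situation: str, model: str, outcome: str) -> None:
    """Record one real use of a model in a real decision."""
    entry = {
        "date": date.today().isoformat(),
        "situation": situation,
        "model": model,
        "outcome": outcome,
    }
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def short_list(top_n: int = 6) -> list[tuple[str, int]]:
    """The models that actually get reached for, ranked by use."""
    if not LOG_PATH.exists():
        return []  # today's state: nothing logged yet
    counts = Counter(
        json.loads(line)["model"]
        for line in LOG_PATH.read_text().splitlines()
        if line.strip()
    )
    return counts.most_common(top_n)
```

Append-only plain text is deliberate: the log should be cheap to write in the moment and hard to over-engineer.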
The guardrails
I built two guardrails because I know myself.
The first: if I start reading about mental models instead of using them, the system names it. "You're consuming, not applying." This is the quietest way I avoid doing hard things — I replace action with input. I learn about the thing instead of doing the thing. A catalog of models is the most seductive version of this trap, because it looks like thinking.
The second: when a model surfaces, it doesn't force a conclusion. It opens a question. "Have you considered what this looks like inverted?" is useful. "Inversion says you should do X" is not. The model is a lens, not an answer. It makes me think differently about the decision. It doesn't make the decision for me.
This matters because I have a long history of outsourcing my judgment to frameworks. If someone authoritative says "the data suggests X," I'll follow it even when my gut says otherwise. The system needs to sharpen my thinking, not replace it. The moment a mental model becomes a rule I follow blindly, it's just another authority I'm deferring to.
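Both guardrails are mechanical enough to sketch, with the same disclaimer: the thresholds and checks here are invented stand-ins for whatever the real system does.

```python
# Guardrail one: compare input (reading about models) against output
# (logged uses) and name the pattern when consumption replaces action.
def consumption_check(articles_read: int, models_used: int) -> str | None:
    if articles_read > 0 and models_used == 0:
        return "You're consuming, not applying."
    return None

# Guardrail two: a surfaced prompt must open a question, never state a
# conclusion. Checking for a question mark is crude, but the constraint
# it enforces is the real one.
def valid_intervention(prompt: str) -> bool:
    return prompt.rstrip().endswith("?")
```

Wiring the second guardrail into the catalog is one assertion at load time: every intervention on every entry has to pass `valid_intervention` before the system is allowed to surface it.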
What this looks like in practice
I'm planning a sabbatical this summer. Three months. The easy version of this decision is: take the time, rest, come back refreshed. The hard version is everything underneath — what I'm afraid of when the work identity drops away, what I'll try to fill the space with, whether I'll let myself actually stop.
A mental model approach doesn't hand me a sabbatical plan. It asks better questions.
Inversion: what would guarantee I waste the sabbatical? Over-scheduling it. Turning it into a productivity project. Grading myself on output.
Second-order thinking: if I protect empty space in the first month, what happens in the second? If I fill every week, what doesn't emerge?
Opportunity cost: every structured activity I add is unstructured time I lose. The structured stuff is comfortable. The unstructured stuff is where the interesting things happen.
None of these tell me what to do. They change the frame. And the frame changes the decision.
The honest version
I don't know if this will work. The practice log is empty. The short list doesn't exist yet. I might look back in six months and find that I still run on instinct and the models are still decoration.
But the design feels right. Models that surface at forks, not in lectures. A log that tracks what actually helps. Guardrails against the specific ways I avoid using them. And a system that knows me well enough to apply the right model at the right moment — not because it's the most famous one, but because it fits the situation I'm actually in.
Munger spent decades building his latticework through thousands of real decisions. I'm not going to shortcut that. But I can stop pretending that reading about models is the same as using them. The catalog is built. Now the practice starts.