
Mental Models I Use

Mental models are thinking tools. They don't give you answers — they give you better questions. This is a working collection, not an exhaustive list. These are the ones I actually reach for when making decisions, diagnosing problems, or evaluating opportunities.

None of these are original. They come from Munger, Feynman, Taleb, and others. The value isn't in knowing them — it's in using them consistently.

First Principles Thinking

What it is: Break a problem down to its fundamental truths, then reason up from there. Strip away assumptions and conventions.
When I use it: Pricing strategy, architecture decisions, entering new markets. Anytime the answer "because that's how it's always been done" shows up.
Example: When building our stock valuation framework, I didn't start from "what do other screeners do." I started from "what determines a company's value?" — cash flows, growth, risk. Everything else is derived.

Inversion

What it is: Instead of asking "how do I succeed?", ask "how would I guarantee failure?" Then avoid those things.
When I use it: Risk assessment, product launches, hiring decisions. Especially useful when you're stuck on the positive framing.
Example: Before launching a new product: "What would make this fail catastrophically?" Usually surfaces risks that optimistic planning misses — dependency on a single vendor, no rollback plan, unclear ownership.

The Feynman Technique

What it is: Explain a concept in simple language as if teaching a child. Wherever you stumble, that's where you don't truly understand it.
When I use it: Learning new domains, preparing presentations, writing specs. If you can't explain it simply, you haven't understood it deeply enough.
Example: I use this constantly when writing /thinking articles. If I can't explain AI-DLC to someone outside tech, my framework has gaps.

Second-Order Thinking

What it is: Think beyond the immediate consequence. "And then what?" First-order: the obvious effect. Second-order: the effect of the effect.
When I use it: Policy changes, incentive design, pricing decisions. Most bad decisions come from stopping at first-order effects.
Example: Cutting prices to gain market share (first-order: more customers). Second-order: competitors match, margins compress industry-wide, you're worse off. Third-order: only the lowest-cost operator survives.

The Map Is Not the Territory

What it is: Models, dashboards, and reports are simplifications of reality. Don't confuse the representation with the thing itself.
When I use it: Data-driven decisions, financial models, org charts. The stock valuation dashboard I built is a model — useful, but not truth.
Example: A DCF model says a stock is worth $150. That model is built on assumptions about growth rates, discount rates, and margins. Move a single assumption by one percentage point and the output can swing 10–20%. The map is useful. It's not the territory.
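That sensitivity is easy to demonstrate. A minimal sketch (illustrative numbers, not the actual dashboard model — the function name, inputs, and rates here are all made up for the example):

```python
def dcf_value(cf, growth, discount, years=10, terminal_growth=0.02):
    """Toy DCF: grow a starting cash flow, discount each year,
    then add a Gordon-growth terminal value."""
    value = 0.0
    for t in range(1, years + 1):
        cf *= 1 + growth
        value += cf / (1 + discount) ** t
    terminal = cf * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

# Base case: 8% growth, 10% discount rate.
base = dcf_value(100, growth=0.08, discount=0.10)

# Bump the discount rate by one percentage point. Nothing about the
# business changed -- only one assumption in the map.
bumped = dcf_value(100, growth=0.08, discount=0.11)

print(f"base {base:,.0f}, bumped {bumped:,.0f}, "
      f"change {(bumped - base) / base:+.1%}")
```

With these toy inputs, the one-point bump cuts the valuation by more than 10% — the terminal value, which dominates the total, is exquisitely sensitive to the spread between the discount rate and the terminal growth rate.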

Occam's Razor

What it is: Among competing explanations, the one that requires the fewest assumptions is usually the best starting point. Don't multiply complexity without necessity.
When I use it: Debugging, root cause analysis, system design. When a system breaks, check the simple things first — permissions, typos, config — before suspecting complex failures.
Example: Production incident: "Is it a distributed systems race condition?" No. Someone deployed the wrong branch. Check the simple explanation first.

Circle of Competence

What it is: Know what you know, know what you don't know, and stay honest about the boundary. Operate inside your circle; learn at the edges.
When I use it: Investment decisions, career moves, delegation. The most dangerous decisions happen when you think you understand something you don't.
Example: I know product, operations, and AI integration. I don't know deep ML research. So I hire ML engineers and trust their technical judgment while I focus on product-market fit and go-to-market.

Leverage

What it is: Not all effort is equal. Find the inputs that produce disproportionate outputs. Code, media, capital, and people are the four forms of leverage.
When I use it: Prioritization, resource allocation, career strategy. Ask: "What's the one thing that makes everything else easier or unnecessary?"
Example: Building this website is leverage — it works 24/7 while I sleep. Writing one article reaches thousands. That's media leverage. The stock dashboard is code leverage — runs 503 analyses in minutes.

Hanlon's Razor

What it is: Never attribute to malice that which is adequately explained by ignorance, miscommunication, or incompetence.
When I use it: Cross-team conflicts, customer complaints, partner negotiations. Most friction comes from misalignment, not bad intent.
Example: A partner team ships a breaking change without telling you. Your instinct: "They don't respect us." Reality: they forgot, or their notification process is broken. Start with that assumption and you'll resolve it faster.

This list evolves. I add models when I find myself reaching for the same pattern repeatedly. I remove them when they stop being useful. The goal isn't to collect models — it's to internalize them until they become reflexive.