I’ve been following Shane Parrish and his site Farnam Street for a while now. Parrish does a fantastic job of collecting ideas and lessons from books and thinkers across the world. This book is no exception.
Some of these concepts are simple, some are not. Some I’d read about elsewhere, and others I’d never heard of.
But one thing this book absolutely was: mind-blowing.
Read below to broaden your concept of what a map is, learn to think backward to solve any problem, and understand how a fat-tail distribution explains why COVID-19 should never have been compared to the flu.
Some of these entries are longer than my usual, more concise descriptions, but I felt that clearer definitions and a couple of examples would go a long way for this book. Most examples are paraphrased from the book.
10 LESSONS FROM “THE GREAT MENTAL MODELS, VOLUME 1”
- First Principles: seek to understand the most basic underlying principles when you attempt to solve a problem. To do this, keep asking “why” until you arrive at the underlying framework, regardless of the traditions, cultures, or frameworks that have developed around the object in question.
- For example: Elon Musk wanted to build a great rocket, so instead of starting where other companies were, he started from the beginning, avoiding the baggage of accumulated tradition.
- Likewise, the first principles of “meat” as a food are its taste, texture, and nutrition, not the fact that it comes from animals. Following first principles therefore encourages us to put more resources toward developing quality lab-grown meat, rather than working from the assumption that meat must come from animals.
- The Map is not the Territory: A map is a representation of something larger or more complex, created by leaving out details. A 100% accurate map of Colorado would be the size of Colorado and therefore useless.
- We construct maps for almost everything in our lives: names to represent people, books to convey ideas, movies to tell stories, sheet music to share instructions for music, words to describe emotions, numbers to describe quantities, etc. All of these are ways to understand something more complex by leaving out some details. When working with a map of any kind, it’s critical to understand what was left out in its creation.
- Understand the difference between a bell-curve distribution and a fat-tail distribution: A bell curve is the familiar representation of data clustering around an average, with a few insignificant outliers on either side. A fat tail looks similar, but its outliers have a much larger impact on the overall spread.
- For example: A bell curve represents the heights of people. Its range is predictable, as you won’t encounter somebody 10x taller or 10x shorter than the average, so it is not affected by exponential outliers. A fat tail represents the wealth of people. Its range is unpredictable, as you can very easily encounter somebody with 10x the wealth of others, so it is rampant with exponential outliers.
- Similarly, compare deaths by car accident to deaths by terrorist attack (or deaths by the flu to deaths by COVID-19). We can reasonably predict yearly deaths in car accidents or by the flu, as we’re probably not going to encounter a year that suddenly has 10x of either. But fat-tail events like terrorist attacks and pandemics are not predictable, so they cannot be compared to bell-curve phenomena.
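The height-versus-wealth contrast above can be sketched numerically. The following is a minimal illustration using Python’s standard library, not anything from the book: the specific parameters (a mean height of 170 cm with a 10 cm standard deviation, and a Pareto shape of 1.1 for wealth) are assumptions chosen only to make the contrast visible.

```python
import random

random.seed(42)

# Bell curve: heights in cm (illustrative mean 170, standard deviation 10).
heights = [random.gauss(170, 10) for _ in range(100_000)]

# Fat tail: "wealth" drawn from a Pareto distribution with shape alpha=1.1,
# a common illustrative choice for wealth-like data (not from the book).
wealth = [random.paretovariate(1.1) for _ in range(100_000)]

# In the bell curve, even the tallest person is close to the average...
print(max(heights) / (sum(heights) / len(heights)))

# ...while in the fat tail, a single outlier can dwarf the average.
print(max(wealth) / (sum(wealth) / len(wealth)))
```

Run repeatedly with different seeds, the first ratio stays near 1 (nobody is 10x the average height), while the second swings wildly into the hundreds or thousands — exactly the unpredictability the fat-tail model describes.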
- Use Inversion to gain a new perspective on problems. Inverting a problem or question works in two ways: First, we can ask the opposite question, and second, we can work backwards from the desired solution.
- Inverting the question: If we only ask, “how do I make more money,” then we only focus on actions we can take to move forward, and our line of questioning tends to end there. But if we also invert the question to “how do I avoid losing money,” then we have a whole new set of perspectives and ideas to work with. Similarly, compare “how do I become happy” with “how do I avoid being sad.” New, and critical, answers are easily generated to achieve the same result.
- Inverting the solution: We can also work backwards from the solution that we want by asking what factors make that possible. The author uses the example of a tobacco marketer who was tasked with marketing tobacco to women. Instead of asking “how can I sell cigarettes to women,” he started from “what does a world where every woman smokes look like,” and directed his efforts towards (very successfully) normalizing smoking.
- Understand your Circle of Competence: your circle of competence is the collection of things that you know about and understand, largely gathered from study or interaction. One of the hallmarks of having a large circle of competence is that you know what you do and do not know in an area.
- For example: Somebody with competence in orchestral music will know the details of their field, but also that they don’t know the intricacies of fingerings for wind instruments or the factors for a percussionist selecting different mallets. Somebody without competence in music doesn’t even know that they don’t know these things.
- It is critical to be aware of the danger of false confidence. In a field where you are not competent, not knowing what you don’t know can lead you to believe you know all there is to know.
- Use Thought Experiments to evaluate possibilities and run experiments that can’t actually be run, such as physically impossible scenarios or unethical experiments.
- An example of this is the famous Trolley Problem: a trolley is running down a track with 3 people tied to it. You have the chance to pull a lever and divert it onto a track with only 1 person tied to it. Is it more ethical to let 3 people die, or to divert the trolley and kill 1 person? For obvious reasons, it’s easier (and much more ethical) to run this experiment as a hypothetical than in real life.
- Necessity vs. Sufficiency: Understand the difference between what is necessary for success and what is sufficient for success. For any goal, the set of conditions sufficient to actually achieve it is far larger than the set of conditions strictly required for it.
- For example, if you want to start a successful YouTube channel, then certain things are necessary: a computer or phone, videos to post, perhaps a microphone or animation software. Without these, failure is guaranteed. But with them, success is not guaranteed. These elements are necessary, but they are not sufficient.
- Hanlon’s Razor: do not attribute to malice that which can be reasonably attributed to stupidity. Most of the time people are not out to get you or anybody else. But we, as egotistical-by-default creatures, tend to interpret the actions of others as having more intention than they do.
- There are 3 reasons that we avoid change, learning, or growth:
- It is difficult to gain a clear perspective on the realities of our situation because we know no other context, just as fish have no concept of the properties of water, or as passengers feel little motion on a plane travelling 500 miles per hour.
- Our egos resist asking or accepting questions or answers that force us to admit that we are wrong about something. When we are invested in a belief, we have difficulty seeing outside of it and facing the fact that something that we believed to be one way is actually another.
- Often, the ripples of our decisions play out at too far of a distance to have any significant impact on us. If the CEO of a shoe company starts contracting a factory that engages in child labor, then the effect that decision has on her is virtually zero, while the effect on the children is strong. Her behavior will only change when the ripples hit close to home, such as a media frenzy or being called to court.
- Thinking better is about accurately understanding reality and its many factors. From considering the distant, second- and third-order effects of a decision to simply understanding that almost everything is a map, we can think better and make better decisions when we see the world through multiple lenses and within varied frameworks. This is the benefit mental models give us: a set of simple frameworks with which to see and consider as many sides of reality as we can.
- “Contrary to what we’re led to believe, thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do.”
- “Being able to accurately describe the full scope of a situation is the first step to understanding it.”
- “Better models mean better thinking. The degree to which our models accurately explain reality is the degree to which they improve our thinking. Understanding reality is the name of the game.”
- Referring to maps: “The only way we can navigate the complexity of reality is through some sort of abstraction…we run into problems when our knowledge becomes of the map rather than the actual underlying territory it describes.”
- “When it comes down to it, everything that is not a law of nature is just a shared belief.”
- “The events that have happened in history are but one realization of the historical process – one possible outcome among a large variety of possible outcomes. They’re like a deck of cards that has been dealt out only one time. All the things that didn’t happen but could have if some little thing went another way, are invisible to us.”
- “Avoiding stupidity is easier than seeking brilliance.”
TOP QUOTES FROM OTHERS:
- “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful.” – George Box
- “A man who has committed a mistake and doesn’t correct it, is committing another mistake.” -Confucius
- “Ignorance more often begets confidence than knowledge.” – Charles Darwin
- “Whenever possible, try to create scenarios where randomness and uncertainty are your friends, not your enemies.” – Nassim Taleb
- “Anybody can make the simple complicated. Creativity is making the complicated simple.” – Charles Mingus
- “I need to listen well so that I hear what is not said.” – Thuli Madonsela