Tyranny of small decisions describes what happens when individuals make decisions that seem rational for themselves in isolation, without regard to how those decisions may affect a shared interest.
This is particularly relevant to climate change. Each small decision a person makes to reuse a plastic bag or take public transit has no appreciable effect on the state of our climate. In sum, however, these decisions have an enormous effect.
Formal definition:
Also known as the "agency dilemma" or the principal-agent problem. An agent is motivated to act in their own interests, which often compete with, or directly contradict, the interests of the principal.
How it applies
This is one of the most important models to understand, because it has broad-sweeping effects. We'll list a handful of examples here.
- Realtor and home seller - The realtor's upside is much smaller than the home seller's. For the realtor, leaving a house on the market for an extra few weeks is less attractive than taking a sub-optimal deal, because the guarantee of most of the upside is often a stronger incentive than the small delta for the optimal deal.
- Contractor and consumer - Especially when the contractor's work is billed at hourly rates, the contractor is incentivized to spend more time on a project than is necessary to complete the work, while the customer is incentivized to shorten the project as much as possible. This often leads the contractor to provide insurance by guaranteeing a price ceiling. (Note: assuming the ceiling isn't based on an artificially high estimate, this extra risk for the agent can be compensated for by charging a higher hourly rate.)
- Pharmaceutical companies and patients - At a macro level, the pharmaceutical industry is incentivized to promote the effectiveness of a given drug while never eliminating the need for it (by entirely curing the given disease). (Note: with that said, pharmaceuticals have drastically improved life expectancy and quality of life for the majority of people who have access to them.)
Coined by James Clear on Twitter, the Overreaction Paradox states that sufficient preparation for a potential disaster might prevent that disaster altogether, making the preparation seem like overkill.
Formal definition:
From the Greek legend of the Hydra: each time one head was cut off, the Hydra grew two more. This counter-intuitive result came from a well-intentioned effort to fix the problem that instead multiplied it.
How it applies
Software engineers may experience this problem when they "eliminate a monolith" by moving to microservices: the issues caused by the monolithic codebase may return, or mutate to match the new structure.
Similarly, some management styles try to silence dissent by removing individuals from a team; this may backfire when others take note of the silencing act, further increasing dissent among the remaining members of the team.
Related
Streisand effect - efforts to censor information can lead to that same information being highlighted further, increasing awareness. This could be considered a specific variant of the Hydra effect.
The concept of headwind comes from aviation. A headwind is a prevailing wind blowing against the direction of travel; airspeed stays the same, but groundspeed drops.
This model is a useful way of thinking about a team's progress. A team may be performing as usual, but a force outside of its control is making progress much slower than it would be if the force were not present.
Similarly, a tailwind is the opposite; an external force that allows lower effort for equivalent progress.
Finally, a crosswind is a force that creates a discrepancy between your heading and your direction of travel. In order to maintain a particular course, you may have to "correct" for the crosswind.
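For those who want the metaphor grounded in the underlying aviation math, here is a minimal sketch in Python (the function name and numbers are purely illustrative):

```python
import math

def progress(airspeed, headwind, crosswind):
    """Groundspeed along the intended course, plus the "crab" angle
    needed to cancel a crosswind. Units just need to be consistent."""
    crab_angle = math.asin(crosswind / airspeed)     # point the nose into the wind
    along_course = airspeed * math.cos(crab_angle)   # airspeed left for forward progress
    return along_course - headwind, math.degrees(crab_angle)

print(progress(120, 20, 0))   # headwind: same effort, slower progress -> (100.0, 0.0)
print(progress(120, -20, 0))  # tailwind: same effort, faster progress -> (140.0, 0.0)
print(progress(120, 0, 30))   # crosswind: effort spent holding course -> (~116.2, ~14.5 degrees)
```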
All of these models are useful when thinking about where energy is being spent, what prevailing forces are present, and how they are managed.
Goodhart's Law is very important for managers to understand. At its core, Goodhart's Law can colloquially be summed up with this:
Give someone a score, and they will try to make it go up.
The more formal version:
Once a measure becomes a target, it no longer is a reliable measure.
The main thesis here is that a measure is likely to be gamed when it becomes a strategic target. This is especially true if there are consequences tied to changes in that measure.
Imagine that you are measuring "number of sales calls completed" as a proxy for your sales team's overall effort, and you have a stated interest in increasing that number to some particular target. In this scenario, your sales associates may be incentivized to game the measure by simply rushing their sales calls. The measure is no longer useful, and in fact has unintended negative consequences.
This model deserves (and has) its own entire industry, because it is so powerful.
Think about this model with this in mind: "Systems that gain from themselves."
Compounding and exponential growth models are particularly relevant for financial decisions, but the same kind of model can be applied when trying to build new habits, cultivate better relationships, or create value for customers.
Compounding and exponential growth happens when the growth rate is determined by some basis, and that basis is in turn affected by the growth.
In other words:
```python
basis, investment, interest_factor = 1000.0, 100.0, 0.05  # arbitrary starting values

for period in range(30):  # "for i in infinity", truncated to 30 periods
    # Earn interest on the basis plus the new investment, then roll
    # the result back into the basis for the next period.
    basis = basis + investment + (basis + investment) * interest_factor
```
This is a very rough understanding of the idea, but you can quickly see that the interest factor compounds (combines) with the basis to increase that same basis.
So, the more you grow, the faster you grow. The faster you grow, the more you grow.
(Note: "Basis" may also be called "principal" in this rough model of thinking.)
Related concepts include network effects, levers, and collaboration models.
Blowback, or unintended consequences, is an important mental model to understand. We can't foresee every interaction our decisions will have with future states, and this effect becomes even more pronounced the further into the future we try to project.
Understanding that blowback exists can change the way we think about decisions. Trying to determine what blowback might occur is a higher-order type of thinking. When making decisions, what second-order or third-order effects might occur as a result? Which ones are important?
A "black swan" event refers to an unexpected or rare event that has an outsized effect on the system the event occurs in. Because of their rare occurrence, a learning system may not prepare for the event, making the effect even more pronounced.
Example: How do you prepare for an earthquake? Looking at all previous earthquakes and preparing for the strongest of that set ignores the highly improbable (but still possible) event of an earthquake that exceeds previous records, rendering the preparation moot.
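As a toy illustration of why "prepare for the worst on record" fails for heavy-tailed events, here is a minimal simulation (the Pareto distribution is an arbitrary stand-in for earthquake magnitudes):

```python
import random

draw = lambda: random.paretovariate(1.5)  # heavy-tailed "magnitude"

historical = [draw() for _ in range(100)]
design_limit = max(historical)            # prepare for the strongest on record

future = [draw() for _ in range(1000)]
record_breakers = sum(m > design_limit for m in future)
# On average, about 10 of the 1000 future draws break the old record:
# each new draw has a 1/101 chance of exceeding the max of 100 prior draws.
print(record_breakers)
```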
First explained by Nassim Taleb in his book of the same name, Black Swans pose a threat primarily because they are so unpredictable. Therefore, Taleb recommends practicing counterfactual reasoning.
Big and small number biases are fundamentally difficult to grasp. To gain an intuition, try to visualize a stack of 1,000,000,000,000 quarters. Now visualize a stack of 1,000,000,000,000,000 quarters. If you are like most people, you initially picture the two stacks as quite similar in height. In reality, the second stack is 1000x taller than the first. (Of course it is! But our brains don't perceive those numbers accurately.)
Similarly, imagine a cup of water. Now try to imagine pouring all but 0.00003% of the water out. How much exactly would be left? Trying to visualize this, most people would say a few drops, or just enough to keep the bottom of the glass wet. In a 16 oz cup, the answer is 0.0000048 oz, or about 136 micrograms. That's less than the weight of 3 grains of salt.
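A quick back-of-the-envelope check of both visualizations, assuming the standard figures of about 1.75 mm per US quarter and roughly 28.35 g per ounce:

```python
quarter_thickness_mm = 1.75

stack_a_km = 1_000_000_000_000 * quarter_thickness_mm / 1e6      # ~1.75 million km
stack_b_km = 1_000_000_000_000_000 * quarter_thickness_mm / 1e6
print(stack_b_km / stack_a_km)               # 1000.0

remaining_oz = 16 * (0.00003 / 100)          # all but 0.00003% of a 16 oz cup
print(remaining_oz)                          # ~4.8e-06 oz
print(remaining_oz * 28.35 * 1_000_000)      # ~136 micrograms
```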
These biases extend further into our perception of probabilities. For example, when we see a "99.999% chance", we perceive it as further from 100% than it really is. We're less likely to spend resources improving a life-saving treatment from 93% to 96% effectiveness than from 99% to 100%. This "cure" effect is similar to the "possibility" effect: we buy lottery tickets because our chances of winning the lottery are non-zero ("possible").
Benefits of scale are illustrated simply by the phrase "you have to have money to make money." Multiple mechanics are at play in benefits of scale, and they are often multiplicative.
For example, with large enough scale, you may no longer have to spend as much of your resources on recruiting talent, advertising your brand, or establishing a production pipeline.
Scale is also particularly important in litigation: because a single lawyer or small team of lawyers can represent a huge corporation in a given legal case, the resources the corporation needs to fight an individual suit are often minuscule in comparison to its overall scale, and to the burden the suit places on the opposing party.
The mental picture of the benefits of scale is an exponential graph, where opportunity is exponentially correlated with capital scale.
A base rate is how often something happens within some sample. For example, the base rate of hours of sleep for a given population may be 8 hours. You can have base rates for your own behaviors, too.
When evaluating frequencies of events and other statistical representations of information, it's important to do so in light of the base rate. For example, if I say that "someone will die of a lightning strike in the US in the next 30 minutes", am I predicting a likely future? The only way we can predict (given no specific weather information) is by using base rates.
In the lightning strike example, we can use the base rate of yearly lightning fatalities: in the US, about 51 occur per year (a little less than one per week), so any given 30-minute window has roughly a 1/340 chance of containing one. How about the likelihood of dying if you are hit by a lightning strike? How can we figure out this base rate? The annual injury rate from lightning strikes is around 240k globally, and around 6k of those are fatalities.
6k / 240k is 2.5% - in other words, being struck by lightning is safer than some surgeries.
If we didn't have base rates, we wouldn't know that roughly 98 of 100 people survive lightning strikes.
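A quick sanity check of that arithmetic, using only the figures above:

```python
fatalities_per_year = 51            # US lightning deaths, from the text
half_hours_per_year = 365 * 48
print(round(half_hours_per_year / fatalities_per_year))  # 344 -> the "about 1/340" above

injuries, deaths = 240_000, 6_000   # global annual figures, from the text
print(deaths / injuries)            # 0.025 -> a 2.5% fatality rate
```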
We tend to entrench ourselves in our existing beliefs (see confirmation bias), and when presented with contradictory evidence, we tend to reject it and entrench ourselves even further.
Also known as the availability heuristic. Whatever is most recently remembered, or within easy reach of our cognition or spatial awareness, is likely to influence our thinking; we weigh our judgments most heavily based on that availability.
Interesting research has been done on this topic, specifically as it relates to memory science. For example, studies show that someone will remember encountering something they are familiar with far more often than something they are not familiar with.
Additionally, the availability heuristic can overlap with framing biases. For example, wording a question like "how fast were the cars going when they smashed into each other" has been shown to affect the respondent's recall and resulting answer. When the question is worded differently, e.g. "how fast were the cars going when they bumped into each other", the reported speed was lower, despite the same information being provided about the event.
Asymmetric information occurs when one agent in a transaction has more information than another.
Generally, information can create advantageous positions for the person with the additional information.
However, there are some rare instances where less information may be better. For example, a person with less information may avoid the overwhelm that comes with too much of it, creating a "beginner's luck" situation.
A term that describes something that gets better with stress.
Generally, living beings are considered antifragile.
In direct contrast, something fragile breaks easily under stress.
Of note, this designation is not the same as resilience; resilient means remaining close to an equilibrium under stress.
Antifragility is marked specifically by improvement under stress.
Something that is merely resistant to wearing out is not antifragile.
An effect where the initial frame of reference has a major effect on later perspectives.
This effect is often surprisingly inescapable, even when the anchor is known to be a facade.
For example, a car lot using artificially high prices is not as silly a tactic as it may immediately seem. Despite cognitively understanding that the prices are artificial, humans still have a hard time discarding the "anchor" price.
Formal definition:
When a transaction has asymmetric information, the party with less information is subject to adverse selection.
How it applies
Adverse selection doesn't have to be limited to scenarios where the decision-maker has less information. The concept can also be applied to someone who is disadvantaged in another way. Consider asymmetric opportunity as an example: decision-making frameworks apply differently when the decision-maker faces adversity, because there is generally a looser coupling between the decision and the outcome.
See also: Asymmetry of Information
Formal definition:
In chemistry, the minimum energy required to change from one chemical state to another. You might also encounter a similar phenomenon in physical change called "latent heat" - the energy needed to complete a phase change from one physical state to another.
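For reference, the chemistry version is quantified by the Arrhenius equation, where the reaction rate k falls off exponentially with the activation energy E_a:

$$k = A \, e^{-E_a / (RT)}$$

Here A is the pre-exponential (frequency) factor, R is the gas constant, and T is the temperature; the larger the activation energy, the more energy it takes before the change proceeds at a meaningful rate.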
How it applies
It may seem at first glance that everything we experience is linear. For example, when we add heat to water, we see the temperature increase.
Activation energy, however, is a non-linear threshold: an up-front input of energy required before some changes can take place at all. This energy isn't lost or wasted, but it is required for the change to happen.
This applies to habit formation, for example: once you take the first step towards exercising, you are more likely to continue. This is also where the phrase "the first step is always the hardest" comes from.
This could apply to career planning as well; getting your first job in an industry is likely harder than getting your second job.
See also: [[Conservation-of-resources]]