During the winter break I had a chance to read The Black Swan by Nassim Nicholas Taleb. In a world full of uncertainty, “The Black Swan” shines a light on new ideas, shaking up what we think we know about randomness and unpredictability. I would recommend this book to anyone working in a field that deals with randomness and complex events.

At its core, “The Black Swan” introduces the concept of highly improbable events that defy our expectations, yet wield immense impact on our lives and societies. These “black swan” events, characterized by their rarity and extreme consequences, disrupt the illusion that the world is stable and that such events could never happen to us.

In this blog, I’ll share six concepts (out of 24) that I found especially useful and look at how they affect everything from money to history to disasters. “The Black Swan” warns us against ignoring rare events and relying too heavily on plans that don’t hold up in the long run. I have added the page numbers for anyone interested in reading about the concepts themselves (paperback second edition).

1. Problem of Induction (Pg 40):

The “problem of induction” challenges the reliability of making predictions based on past experiences or data. Traditional induction, which assumes that the future will resemble the past, often fails to account for rare and unpredictable events, or “black swans.” This skepticism about induction highlights how hard it is to predict the future, especially in complex systems.

He explains this from the point of view of a turkey that is fed every day. Each feeding strengthens the bird’s belief that it is a general rule of life to be fed daily. On Thanksgiving Day, the bird undergoes a revision of belief. Taleb’s point is that as our feeling of safety increases over time, the risk of failure is also quietly growing, and we tend to ignore it.
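The turkey’s misplaced confidence can be sketched in a few lines of code. This is my own illustration, not something from the book: it uses Laplace’s rule of succession as a stand-in for the turkey’s inductive reasoning, estimating the probability of being fed tomorrow from the number of consecutive feedings so far.

```python
# Illustrative sketch (not from the book): the turkey's confidence grows
# with every feeding, right up to the day it is proven catastrophically wrong.

def confidence_after(days_fed: int) -> float:
    """Laplace's rule of succession: estimated probability of being
    fed tomorrow, given `days_fed` consecutive days of feeding."""
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 100, 1000):
    print(f"day {day:4}: P(fed tomorrow) ~ {confidence_after(day):.3f}")
```

The estimate climbs toward 1 as the feedings pile up, yet it says nothing about Thanksgiving: the one observation that matters most is exactly the one induction cannot see coming.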

2. Confirmation Bias (Pg 57):

Confirmation bias refers to our tendency to seek out and interpret information in a way that confirms our pre-existing beliefs or hypotheses. Taleb argues that confirmation bias can lead us to overlook evidence that contradicts our views and can blind us to the possibility of rare and unexpected events, such as black swan events.

In hypothesis testing, we tend to look only for instances that confirm the hypothesis we set out to prove. Sometimes a lot of data can be meaningless; at other times a single piece of information can be very meaningful.

3. Narrative Fallacy (Pg 63):

Narrative fallacy refers to our inclination to create explanations for random or unpredictable events after they have happened. Taleb suggests that humans have a natural tendency to seek meaning and causality in events, even when none truly exists. This tendency can lead us to oversimplify complex phenomena, ignore randomness, and underestimate the role of chance in shaping outcomes.

We like stories, and we like to summarize and simplify things, i.e., reduce dimensionality.

4. Distortion Bias (Silent evidence problem) (Pg 101):

Distortion bias is when we distort or misrepresent information to fit our existing beliefs or preferences. For example, if we think a certain diet pill works, we might only pay attention to stories that support that idea and ignore the ones that say it doesn’t. The contradicting evidence is often “silent”: the people for whom the pill failed rarely get heard. This can make us stick to our beliefs even when there is evidence against them.

5. Reference point argument (Pg 119):

The reference point argument means that our perceptions and judgments are heavily influenced by the reference points we use to evaluate outcomes. Humans tend to compare outcomes to a specific reference point or baseline, which can distort their perceptions of risk and reward.

For example, if someone invests in the stock market and their portfolio increases by 10%, they might perceive this as a positive outcome if their reference point is the initial investment amount. However, if their reference point is a higher level of return, such as 15%, they might perceive the 10% gain as a disappointment.

6. Anchoring (Pg 158):

Anchoring (researched by Daniel Kahneman and Amos Tversky) is a cognitive bias where individuals rely heavily on the first piece of information they encounter (the “anchor”) when making decisions or judgments. This initial piece of information then influences subsequent decisions, even if it’s unrelated or irrelevant to the decision at hand.

For example, imagine you’re negotiating the price of a used car. The seller suggests a price of $15,000, which becomes your anchor point. Even if you know the car is overpriced based on market research, the $15,000 figure may still influence your perception of what constitutes a fair price, leading you to make an offer that’s closer to the anchor than you otherwise would have.