(Image: Zen Buddha Silence by Marilyn Barbone.)
January 1, 2017
Humans have a very strong tendency to be overconfident, especially when it comes to probabilistic decisions about the future. Overconfidence is a default mental setting.
Even being keenly aware of overconfidence does not mean that one can automatically avoid it. Few people are more aware of overconfidence than Nobel laureate Daniel Kahneman, who has studied cognitive biases for decades. Yet Kahneman admits that his default mental setting is “wildly overconfident.”
Another expert on cognitive biases is Charlie Munger, as demonstrated in “The Psychology of Human Misjudgment.” (See Poor Charlie’s Almanack, Expanded Third Edition.) Munger points out that knowing about cognitive biases does not mean that one can overcome them in one’s own decision-making; he argues that it is essential to develop a system in order to overcome them consistently.
Kahneman and Munger are two of the best sources for learning about cognitive biases.
- My list of cognitive biases: http://boolefund.com/cognitive-biases/
- Munger’s (more comprehensive) list: http://boolefund.com/the-psychology-of-misjudgment/
In addition to Kahneman and Munger, there are many experts in psychology and decision-making. I’ve written about a few:
- Richard Thaler, who wrote an excellent book, Misbehaving: The Making of Behavioral Economics (W. W. Norton, 2015): http://boolefund.com/the-making-of-behavioral-economics/
- Howard Marks, a great value investor and the author of The Most Important Thing: http://boolefund.com/howard-marks-the-most-important-thing/
There are many others I could mention, but two great researchers who have written quite a bit about psychology, decision-making, and value investing are:
- James Montier, who has written Behavioural Investing: A Practitioner’s Guide to Applying Behavioural Finance (Wiley, 2007), a treasure trove of insight into decision-making and value investing. Montier also wrote Value Investing: Tools and Techniques for Intelligent Investment (Wiley, 2009).
- Michael Mauboussin has written several books, including More Than You Know: Finding Financial Wisdom in Unconventional Places (Columbia University Press, 2013), Think Twice: Harnessing the Power of Counterintuition (Harvard Business Review Press, 2012), and The Success Equation: Untangling Skill and Luck in Business, Sports, and Investing (Harvard Business Review Press, 2012).
In today’s blog post, I review some lessons from Mauboussin’s book Think Twice. Each chapter is based on a common mistake in decision-making.
RQ vs IQ
Given a proper investment framework or system, obviously IQ can help a great deal over time. Warren Buffett and Charlie Munger are seriously smart. But they wouldn’t have become great investors without a lifelong process of learning and improvement, including how to be rational.
An investor dedicated to lifelong improvements in knowledge and rationality can do well in value investing even without being brilliant.
Quite often, the best approach for a value investor is to follow a fully automated (computerized), quantitative approach. Lifelong improvements are still essential in this case. (Many value investors, including the father of value investing, Ben Graham, have advocated and used a fully quantitative approach.)
THE OUTSIDE VIEW
Mauboussin discusses why Big Brown was a bad bet to win the Triple Crown in 2008. Big Brown had won the Kentucky Derby by four-and-three-quarters lengths, and he won the Preakness by five-and-one-quarter lengths. The horse’s trainer, Rick Dutrow, said, “He looks as good as he can possibly look. I can’t find any flaws whatsoever in Big Brown. I see the prettiest picture. I’m so confident, it’s unbelievable.” UPS (after whom Big Brown was named) signed a marketing deal. And enthusiasm for Big Brown’s chances in the Belmont Stakes grew.
What happened? Big Brown trailed the field during the race, and his jockey eased him out of it. The result shocked many observers. But a loss could have been far more widely anticipated if people had used the outside view.
The outside view means identifying similar situations and finding the statistics on how things worked out. Renowned handicapper Steven Crist developed an outside view, as Mauboussin summarizes:
“Of the twenty-nine horses with a chance to capture the Triple Crown after winning the Kentucky Derby and the Preakness Stakes, only eleven triumphed, a success rate less than 40 percent. But a closer examination of those statistics yielded a stark difference before and after 1950. Before 1950, eight of the nine horses attempting to win the Triple Crown succeeded. After 1950, only three of twenty horses won. It’s hard to know why the achievement rate dropped from nearly 90 percent to just 15 percent, but logical factors include better breeding (leading to more quality foals) and bigger starting fields.” (page 2)
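Crist’s outside view is just base-rate arithmetic. A quick sketch, using only the figures from the passage above:

```python
# Base rates for Triple Crown attempts, figures quoted above.
attempts_total, wins_total = 29, 11
attempts_pre1950, wins_pre1950 = 9, 8
attempts_post1950, wins_post1950 = 20, 3

overall = wins_total / attempts_total      # just under 40 percent
pre = wins_pre1950 / attempts_pre1950      # nearly 90 percent
post = wins_post1950 / attempts_post1950   # 15 percent

print(f"Overall: {overall:.0%}, pre-1950: {pre:.0%}, post-1950: {post:.0%}")
```

The relevant base rate for Big Brown was the post-1950 figure of 15 percent, not the rosy inside view.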
Most people naturally use the inside view. This essentially means looking at more subjective factors that are close at hand, like how tall and strong the horse looks and the fact that Big Brown had handily won the Kentucky Derby and the Preakness.
Why do people naturally adopt the inside view? Mauboussin gives three reasons: the illusion of superiority, the illusion of optimism, and the illusion of control.
First is the illusion of superiority. Most people say they are above average in many areas, such as driving, judging humor, investing, etc. Most people have an unrealistically positive view of themselves. In many areas of life, this does not cause problems. In fact, unrealistic positivity may often be an advantage that helps people to persevere, etc. But in zero-sum games where winning requires clearly being above average, the illusion of superiority is harmful.
Munger calls it the Excessive Self-Regard Tendency. Munger also notes that humans tend to way overvalue the things they possess. This often causes someone already overconfident about a bet he is considering to become even more overconfident after making the bet, due to the endowment effect.
The illusion of optimism, which is similar to the illusion of superiority, causes most people to see their future as brighter than that of others.
The illusion of control causes people to behave as if chance events are somehow subject to their control. People throwing dice throw softly when they want low numbers and hard for high numbers. A similar phenomenon is seen when people choose which lottery card to take, as opposed to getting one by chance.
Mauboussin notes that a vast range of professionals tend to use the inside view to make important decisions, with predictably poor results. “Encouraged by the three illusions, most believe they are making the right decision and have faith that the outcomes will be satisfactory.” The world of investing offers many examples of these types of errors. Mergers and acquisitions are another rich source of examples: two-thirds of acquisitions fail to create value, yet most executives, relying on the inside view, believe that they can beat the odds.
The planning fallacy is another area that demonstrates how most people rely on the inside view instead of the outside view. Mauboussin gives one common example of students estimating when they’d finish an assignment:
“… when the deadline arrived for which the students had given themselves a 50 percent chance of finishing, only 13 percent actually turned in their work. At the point when the students thought there was 75 percent chance they’d be done, just 19 percent had completed the project. All the students were virtually sure they’d be done by the final date. But only 45 percent turned out to be right.” (pages 10-11)
Daniel Kahneman gives his own example of the planning fallacy. He was part of a group assembled to write a curriculum to teach judgment and decision making to high school students. Kahneman asked everyone in the group to write down their opinion of when they thought the group would complete the task. Kahneman found that the average was around two years, and everyone, including the dean, estimated between eighteen and thirty months.
Kahneman then realized that the dean had participated in similar projects in the past. Kahneman asked the dean how long it took them to finish. “The dean blushed and then answered that 40 percent of the groups that had started similar programs had never finished, and that none of the groups completed it in less than seven years. Kahneman then asked how good this group was compared to past groups. The dean thought and then replied: ‘Below average, but not by much.’” (pages 12-13)
OPEN TO OPTIONS
In making decisions, people often fail to consider a wide enough range of alternatives. People tend to have “tunnel vision.”
Anchoring is an important example of this mistake:
Kahneman and Amos Tversky asked people what percentage of UN member countries are African nations. A wheel of fortune with the numbers 1 to 100 was spun in front of the participants before they answered. The wheel was rigged to stop at either 10 or 65. The subjects were then asked – before giving their specific estimate – whether the answer was higher or lower than the number on the wheel. The median response from the group that saw the wheel stop at 10 was 25%, while the median response from the group that saw 65 was 45%.
In another experiment, people are asked to write down the last four digits of their phone number. Next, they are asked whether the number of doctors in their capital city is higher or lower than the last four digits of their phone number. Then they are asked to give a specific guess about the number of doctors in their capital city. Results from one such experiment: Those whose last four digits were greater than 7000 guessed 6762 doctors on average, while those with telephone numbers below 2000 guessed 2270 doctors.
Stock prices often have a large component of randomness, but investors tend to anchor on various past stock prices. The rational way to avoid such anchoring is to carefully develop different possible scenarios for the intrinsic value of a stock based on net asset value or earnings power. Ideally, one would not want to know about past stock prices (or even the current stock price) before developing the intrinsic value scenarios.
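One simple way to structure such a valuation is a probability-weighted average over the scenarios. The scenario names, probabilities, and per-share values below are entirely hypothetical, just to show the shape of the exercise:

```python
# Hypothetical intrinsic-value scenarios for a stock, developed
# *before* looking at past or current market prices, to avoid anchoring.
scenarios = [
    ("bear", 0.25, 20.0),   # (name, probability, value per share)
    ("base", 0.50, 35.0),
    ("bull", 0.25, 55.0),
]
assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1

expected_value = sum(p * v for _, p, v in scenarios)
print(f"Probability-weighted intrinsic value: ${expected_value:.2f}")
# Only now compare against the quoted market price to gauge any margin of safety.
```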
The Representativeness Heuristic
The representativeness heuristic is another bias that leads many people not to consider a wide range of possibilities. Here is an example Kahneman and Tversky have often used: “Steve is very shy and withdrawn, invariably helpful but with very little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail. Question: Is Steve more likely to be a librarian or a farmer?” Most people say librarian. But the fact that the description seems more representative of librarians than of farmers does not mean that it is more likely that Steve is a librarian. Instead, one must look at the base rate: there are twenty times as many farmers as librarians, so it is far more likely that Steve is a farmer.
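The base-rate logic can be made explicit with Bayes’ rule. The 20:1 farmer-to-librarian ratio comes from the text; the two likelihoods below are illustrative guesses for how well Steve’s description fits each group:

```python
# Base-rate check for the librarian-vs-farmer question.
p_librarian = 1 / 21            # prior: 1 librarian per 20 farmers
p_farmer = 20 / 21
p_desc_given_librarian = 0.40   # assumed: the description fits many librarians
p_desc_given_farmer = 0.10      # assumed: it fits fewer farmers

# Bayes' rule: P(librarian | description)
num = p_desc_given_librarian * p_librarian
den = num + p_desc_given_farmer * p_farmer
posterior = num / den
print(f"P(librarian | description) = {posterior:.2f}")
```

Even when the description fits librarians four times better, the 20:1 base rate dominates, and Steve is still far more likely to be a farmer.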
Another example Kahneman gives: “Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Question: Which is more probable?
- Linda is a bank teller.
- Linda is a bank teller and is active in the feminist movement.”
Most people say the second option is more likely. But just using simple logic, we know that the second option is a subset of the first option, so the first option is more likely. Most people get this wrong because they use the representativeness heuristic.
Availability Bias, Vividness Bias, Recency Bias
If a fact is easily available, vivid, or recent, people tend to greatly overestimate its probability.
A good example is a recent and vivid plane crash. The odds of dying in a plane crash are one in 11 million, astronomically low. The odds of dying in a car crash are one in five thousand. But many people, after seeing recent and vivid photos of a plane crash, decide that taking a car is much safer than taking a plane.
Extrapolating the Recent Past
Most people automatically extrapolate the recent past into the future without considering various alternative scenarios. Recall Kahneman’s definitions of System 1 and System 2:
System 1: Operates automatically and quickly; makes instinctual decisions based on heuristics.
System 2: Allocates attention (which has a limited budget) to the effortful mental activities that demand it, including logic, statistics, and complex computations.
Kahneman writes that System 1 and System 2 work quite well generally: “The division of labor between System 1 and System 2 is highly efficient: it minimizes effort and optimizes performance. The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate. System 1 has biases, however, systematic errors that it is prone to make in specified circumstances. As we shall see, it sometimes answers easier questions than the one it was asked, and it has little understanding of logic and statistics. One further limitation of System 1 is that it cannot be turned off. If you are shown a word on the screen in a language you know, you will read it – unless your attention is totally focused elsewhere.”
Our System 1 operates automatically and very quickly, and it usually works incredibly well. In our evolutionary history, the amygdala would cause a human to begin running away even before the rustling in the grass had been consciously registered. System 1 was thus almost always correct, and it helped keep people alive. Throughout most of our evolutionary history, the recent past was almost always the best guide to the future.
When it comes to probabilistic areas such as investing, however, extrapolating the recent past is one of the biggest mistakes that investors make. In a highly random environment, over time and in general, one should expect reversion to the mean, rather than a continuation of the recent past.
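Reversion to the mean falls out of any model where results mix skill and luck: a group selected for extreme results was lucky on average, so its next-period results fall back toward its true skill. A toy simulation of this (the skill and noise distributions are assumed, purely for illustration):

```python
import random

random.seed(42)

# Skill-plus-luck model: each period's "result" is a fixed skill component
# plus independent noise. Noise dominates (sd 2 vs sd 1 for skill).
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
period1 = [s + random.gauss(0, 2) for s in skill]
period2 = [s + random.gauss(0, 2) for s in skill]

# Select the top decile by period-1 results, then follow the SAME group.
ranked = sorted(range(n), key=lambda i: period1[i], reverse=True)
top = ranked[: n // 10]
avg1 = sum(period1[i] for i in top) / len(top)
avg2 = sum(period2[i] for i in top) / len(top)
print(f"Top decile, period 1: {avg1:.2f}; same group, period 2: {avg2:.2f}")
```

The top group stays above average in period 2 (it does have above-average skill), but far less so than in period 1: extrapolating its recent results would be a large error.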
Charlie Munger explains what he calls Doubt-Avoidance Tendency as follows: “The brain of man is programmed with a tendency to quickly remove doubt by reaching some decision.” (See Poor Charlie’s Almanack, Expanded Third Edition.) Again, we can think about the operation of System 1 throughout most of evolutionary history: if there was a potential threat, the human brain was programmed to react immediately.
Munger: “What triggers Doubt-Avoidance Tendency? Well, an unthreatened man, thinking of nothing in particular, is not being prompted to remove doubt through rushing to some decision. As we shall see later when we get to Social-Proof Tendency and Stress-Influence Tendency, what usually triggers Doubt-Avoidance Tendency is some combination of (1) puzzlement and (2) stress…”
Inconsistency Avoidance and Confirmation Bias
Munger on the Inconsistency-Avoidance Tendency: “The brain of man conserves programming space by being reluctant to change, which is a form of inconsistency avoidance. We see this in all human habits, constructive and destructive. Few people can list a lot of bad habits that they have eliminated, and some people cannot identify even one of these. Instead, practically everyone has a great many bad habits he has long maintained despite their being known as bad…. chains of habit that were too light to be felt before they became too heavy to be broken.”
“The rare life that is wisely lived has in it many good habits maintained and many bad habits avoided or cured.”
“It is easy to see that a quickly reached conclusion, triggered by Doubt-Avoidance Tendency, when combined with a tendency to resist any change in that conclusion, will naturally cause a lot of errors in cognition for modern man. And so it observably works out…”
“And so, people tend to accumulate large mental holdings of fixed conclusions and attitudes that are not often reexamined or changed, even though there is plenty of good evidence that they are wrong.”
“One of the most successful users of an antidote to first conclusion bias was Charles Darwin. He trained himself, early, to intensively consider any evidence tending to disconfirm any hypothesis of his, more so if he thought his hypothesis was a particularly good one. The opposite of what Darwin did is now called confirmation bias, a term of opprobrium. Darwin’s practice came from his acute recognition of man’s natural cognitive faults arising from Inconsistency-Avoidance Tendency. He provides a great example of psychological insight correctly used to advance some of the finest mental work ever done.” (my emphasis)
Selective Attention and Inattentional Blindness
People tend to be very selective about what they hear and see, and this is partly a function of what they already believe. People often see and hear only what they want, and tune out everything else.
On a purely visual level, there is a phenomenon called inattentional blindness: when people focus on certain aspects of their environment, they often miss other aspects that are plainly visible. In a well-known experiment, people watch a thirty-second video that shows two teams, one wearing white and the other wearing black, passing a basketball back and forth. In the middle of the video, a woman wearing a gorilla suit walks into the middle of the scene, thumps her chest, and walks off. Roughly half of the people watching the video have no recollection of the gorilla.
Struggles and Stresses
Stress or fatigue causes many people to make poorer decisions than they otherwise would. Thus, one must be careful. With the right attitude, however, stress can slowly be turned into an advantage over a long period of time.
As Ray Dalio and Charlie Munger have pointed out, mental strength is one of life’s greatest gifts. With a high degree of focus and discipline, a human being can become surprisingly strong. But this only happens very slowly, over the course of many years or decades, as the result of an endless series of struggles, stresses, and problems.
A part of strength that can be learned over time is inner peace or total calm in the face of seemingly overwhelming difficulties. Ray Dalio holds that this can be learned gradually through the regular practice of transcendental meditation. (I agree that this is part of the answer.)
Other wise men, such as Munger or Lincoln, are of the view that total calm in the face of any challenge is simply an aspect of mental strength that can be developed over time. One thinks of Rudyard Kipling’s poem “If”:
“…So, maybe in that sense I think a tougher hand has been good for us. My answer to that question reminds me of my old Harvard law professor who used to say, ‘Charlie, let me know what your problem is and I’ll try to make it harder for you.’ I’m afraid that’s what I’ve done to you.
As for how do I understand a new industry: the answer is barely. I just barely have enough cognitive ability to do what I do. And that’s because the world promoted me to the place where I’m stressed. And you’re lucky if it happens to you, because that’s what you want to end up: stressed. You want to have your full powers called for. Believe you me, I’ve had that happen all my life. I’ve just barely been able to think through to the right answer, time after time. And sometimes I’ve failed…”
Link to 2016 Daily Journal Meeting Notes (recorded courtesy of Whitney Tilson): https://www.scribd.com/doc/308879985/MungerDJ-2-16
Mauboussin writes about the credit crisis of 2007-2008. People with poor credit could buy nice homes. Lenders earned fees, and usually did not hold on to the mortgages. Investment banks bought mortgages and bundled them for resale, earning a fee. Rating agencies were paid to rate the mortgage-backed securities, and they rated many of them AAA (based partly on the fact that home prices had never declined nationwide). Investors worldwide in AAA-rated mortgage-backed securities earned higher returns than they did on other AAA issues. Some of these investors were paid based on portfolio performance, and thus earned higher fees this way.
Incentives are extremely important: “Never, ever think about something else when you should be thinking about incentives.” – Charlie Munger
Under a certain set of incentives, many people who normally are very good people will behave badly. Often this bad behavior is not only due to the incentives at play, but also involves other psychological pressures like social-proof, stress, and doubt-avoidance.
Finally, Mauboussin offers advice about how to deal with “tunnel vision,” or the insufficient consideration of alternatives:
- Explicitly consider alternatives.
- Seek dissent. (This is very difficult, but highly effective. Think of Lincoln’s team of rivals.)
- Keep track of previous decisions. (A decision journal does not cost much, but it can help one over time to make better decisions.)
- Avoid making decisions while at emotional extremes. (Total calm and rationality – the ability to think clearly in the face of any difficulties – can be learned over time. Also, one can learn to postpone a decision if one senses that one is overly emotional.)
- Understand incentives.
THE EXPERT SQUEEZE
In business today, there are many areas where one can get better insights or predictions than what traditional experts can offer.
Mauboussin gives the example of Best Buy forecasting holiday sales. In the past, Best Buy depended on specialists to make these forecasts. James Surowiecki, author of The Wisdom of Crowds, went to Best Buy’s headquarters and told them that a crowd could predict better than their specialists could.
Jeff Severts, a Best Buy executive, decided to test Surowiecki’s suggestion. Late in 2005, Severts set up a location for employees to submit and update their estimates of sales from Thanksgiving to year-end. In early 2006, Severts revealed that the internal experts had been 93 percent accurate, while the “amateur crowd” was off only one-tenth of one percent. Best Buy then allocated more resources to its prediction market, and benefitted.
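The statistical intuition behind such prediction markets is that independent errors tend to cancel when averaged. A toy simulation (the numbers are invented, not Best Buy’s data):

```python
import random

random.seed(0)

# 500 employees each estimate true holiday sales with independent,
# unbiased noise (invented figures for illustration).
true_sales = 100.0
estimates = [random.gauss(true_sales, 20) for _ in range(500)]

crowd = sum(estimates) / len(estimates)
worst = max(estimates, key=lambda e: abs(e - true_sales))
print(f"Crowd average: {crowd:.1f} (truth: {true_sales})")
print(f"Worst individual estimate: {worst:.1f}")
```

The crowd average lands within a fraction of a percent of the truth even though individual guesses are off by large margins; the key assumptions are that the errors are independent and not systematically biased in one direction.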
Another example of traditional experts being supplanted: Orley Ashenfelter, wine lover and economist, figured out a simple regression equation that predicts the quality of red wines from France’s Bordeaux region better than most wine experts. “With the equation in hand, the computer can deliver appraisals that are quicker, cheaper, more reliable, and without a whiff of snobbishness.” (page 39)
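Ashenfelter’s model is a linear regression on weather and vintage age. The predictor variables below match his approach, but the coefficients are hypothetical, chosen only to show the mechanics:

```python
# A sketch in the spirit of Ashenfelter's Bordeaux equation.
# All coefficients are hypothetical, for illustration only.
def predicted_log_price(temp_c, harvest_rain_mm, winter_rain_mm, age_years):
    return (-12.0                      # intercept (hypothetical)
            + 0.60 * temp_c            # warmer growing season -> better wine
            - 0.004 * harvest_rain_mm  # rain at harvest -> worse
            + 0.001 * winter_rain_mm   # winter rain -> better
            + 0.02 * age_years)        # older vintages -> pricier

# A warm vintage with a dry harvest scores higher than a cool, wet one.
good = predicted_log_price(17.5, 100, 600, 20)
poor = predicted_log_price(15.0, 300, 400, 20)
print(f"good vintage: {good:.2f}, poor vintage: {poor:.2f}")
```

The point is not the specific numbers but the mechanism: a fixed equation applies the same weights every time, free of mood, snobbery, and inconsistency.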
Mauboussin mentions four categories over which one can judge experts versus computers:
- Rule based; limited range of outcomes – experts are generally worse than computers. Examples include credit scoring and simple medical diagnosis.
- Rule based; wide range of outcomes – experts are generally better than computers. Examples include chess and Go. (Eventually computers will be better in this category. Many computers can already beat any human in chess.)
- Probabilistic; limited range of outcomes – experts are equal to or worse than collectives. Examples include admissions officers and poker.
- Probabilistic; wide range of outcomes – experts are worse than collectives. Examples include forecasting any of the following: stock prices, the stock market, interest rates, or the economy.
Regarding areas that are probabilistic, with a wide range of outcomes (the fourth category), Mauboussin highlights economic and political forecasts:
“The evidence shows that collectives outperform experts in solving these problems. For instance, economists are extremely poor forecasters of interest rates, often failing to accurately guess the direction of rate moves, much less their correct level. Note, too, that not only are experts poor at predicting actual outcomes, they rarely agree with one another. Two equally credentialed experts may make opposite predictions and, hence, decisions from one another.” (page 41)
Mauboussin notes that experts do relatively well on rule-based problems with a wide range of outcomes because they can be better than computers at eliminating bad choices and making creative connections between bits of information. A fascinating example: Eric Bonabeau, a physicist, has developed programs that generate alternative designs for packaging using the principles of evolution (recombination and mutation). But experts select the best designs at the end of the process, since the computers have no taste. Still, computers will continue to make big improvements in this category. For instance, many chess programs today can beat any human, whereas in the late 1990s only one program (IBM’s Deep Blue) could do so.
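A minimal sketch of such an evolutionary-design loop (recombination plus mutation) is below. Everything here is invented for illustration; in particular, the toy fitness function stands in for the human experts, who in Bonabeau’s process do the final selecting:

```python
import random

random.seed(1)

# Toy evolutionary search over 20-bit "designs".
def fitness(design):
    return sum(design)  # stand-in for expert taste: prefer designs with more 1s

def crossover(a, b):
    cut = random.randrange(1, len(a))  # recombination: splice two parents
    return a[:cut] + b[cut:]

def mutate(design, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in design]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # selection: keep the best third
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(f"Best fitness after 40 generations: {fitness(best)}/20")
```

The computer cheaply generates and refines candidates; human judgment enters only at the selection step, which is exactly the division of labor Mauboussin describes.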
Note: We still need experts to make the systems that replace them. (Severts had to set up the prediction market. Ashenfelter had to find the regression equation. And experts need to stay on top of the systems, making improvements when needed.)
Also, experts are still needed for many areas in strategy, including innovation. And people are needed to deal with people. (Although many jobs will soon be done by robots.)
I’ve written before about how simple quant models outperform experts in a wide variety of areas: http://boolefund.com/simple-quant-models-beat-experts-in-a-wide-variety-of-areas/
Mauboussin writes about the famous experiment by Solomon Asch. The subject is shown lines of obviously different lengths. But in the same room with the subject are shills, who unbeknownst to the subject have already been instructed to say that two lines of obviously different lengths actually have the same length. So the subject of the experiment has to decide between the obvious evidence of his eyes – the two lines are clearly different lengths – and the opinion of the crowd. A significant number (36.8 percent) ignored their own eyes and went with the crowd, saying that the two lines had equal length, despite the obvious fact that they didn’t.
Mauboussin notes that the interesting question about the Solomon Asch experiment is: what’s going on in the heads of people who conform? Asch himself suggested three possibilities:
- Distortion of judgment. The subjects conclude that their perceptions are wrong and that the group is right.
- Distortion of action. These individuals suppress their own knowledge in order to go with the majority.
- Distortion of perception. This group is not aware that the majority opinion distorts their estimates.
Unfortunately, Asch didn’t have the tools to try to test these possibilities. Gregory Berns, a neuroscientist, five decades after Asch, used functional magnetic resonance imaging (fMRI) in the lab at Emory University.
For the conforming subjects, the scientists found activity in the areas of the brain that were related to perception of the object. Also, the scientists did not find a meaningful change in activity in the frontal lobe – an area associated with activities like judgment. Thus, for conforming subjects, it is a distortion of perception: what the majority claims to see, the subject actually does see. Remarkable.
What about the people who remained independent when faced with the group’s wrong responses? Those subjects showed increased activity in the amygdala, a region that signals to prepare for immediate action (fight or flight). Mauboussin comments: “… while standing alone is commendable, it is unpleasant.” (page 57)
Mauboussin: “How do you feel when you read the word ‘treasure’? … If you are like most people, just ruminating on ‘treasure’ gives you a little lift. Our minds naturally make connections and associate ideas. So if someone introduces a cue to you – a word, a smell, a symbol – your mind often starts down an associative path. And you can be sure the initial cue will color a decision that waits at the path’s end. All this happens outside your perception.” (page 58)
Scientists did the following experiment:
“In this test, the researchers placed the French and German wines next to each other, along with small national flags. Over two weeks, the scientists alternated playing French accordion music and German Bierkeller pieces and watched the results. When French music played, French wines represented 77 percent of the sales. When German music played, consumers selected German wines 73 percent of the time… The music made a huge difference in shaping purchases. But that’s not what the shoppers thought…
While the customers acknowledged that the music made them think of either France or Germany, 86 percent denied that the tunes had any influence on their choice. This experiment is an example of priming, which psychologists formally define as ‘the incidental activation of knowledge structures by the current situational context.’ In other words, what comes in through our senses influences how we make decisions, even when it seems completely irrelevant in a logical sense. Priming is by no means limited to music. Researchers have manipulated behavior through exposure to words, smells, and visual backgrounds. For example, studies show:
- Immediately after being exposed to words associated with the elderly, primed subjects walked 13 percent slower than subjects seeing neutral words.
- Exposure to the scent of an all-purpose cleaner prompted study participants to keep their environment tidier while eating a crumbly biscuit.
- Subjects reviewing Web pages describing two sofa models preferred the more comfortable model when they saw a background with puffy clouds, and favored the cheaper sofa when they saw a background with coins.” (pages 61-62)
The Fault of the Default
While virtually 100 percent of Austrians have consented to be organ donors, only 12 percent of Germans have. The difference is due entirely to how the choice is presented. In Austria, being an organ donor is the default, and you must opt out. In Germany, not being a donor is the default, and you must opt in. This difference translates directly into many more saved lives in Austria than in Germany.
Mauboussin makes an important larger point. We tend to assume that people decide what is best for them independent of how the choice is framed, but in reality, “many people simply go with the default options.” (page 63) This includes consequential areas (in addition to organ donation) like savings, educational choice, medical alternatives, etc.
The Power of Inertia
To overcome inertia, Peter Drucker suggested asking: “If we did not do this already, would we, knowing what we now know, go into it?”
Dr. Atul Gawande, author of The Checklist Manifesto, tells the story of Dr. Peter Pronovost, an anesthesiologist and critical-care specialist at the Johns Hopkins Hospital. Pronovost’s father died due to a medical error, which led Pronovost to dedicate his career to ensuring the safety of patients. Mauboussin explains: “In the United States, medical professionals put roughly 5 million lines into patients each year, and about 4 percent of those patients become infected within a week and a half. The added cost of treating those patients is roughly $3 billion per year, and the complications result in twenty to thirty thousand annual preventable deaths.” (page 69)
Pronovost came up with a simple checklist, because he observed that physicians in a hurry would often overlook some simple routine that is normally done as a part of safety. It saved numerous lives and millions of dollars in the first few years at Johns Hopkins Hospital, so Pronovost got the Michigan Health & Hospital Association to try the checklist. After just three months, the rate of infection dropped by two-thirds. After eighteen months, the checklist saved 1,500 lives and nearly $200 million.
MORE IS DIFFERENT
Mauboussin covers complex adaptive systems such as the stock market or the economy. His advice, when dealing with a complex adaptive system, is:
Consider the system at the correct level. An individual agent in the system can be very different from one outside the system.
Watch for tightly coupled systems. A system is tightly coupled when there is no slack between items, allowing a process to go from one stage to the next without any opportunity to intervene. (Examples include space missions and nuclear power plants.) Most complex adaptive systems are loosely coupled, where removing or incapacitating one or a few agents has little impact on the system’s performance.
Use simulations to create virtual worlds. Simulation is a tool that can help our learning process. Simulations are low cost, provide feedback, and have proved their value in other domains like military planning and pilot training.
Mauboussin notes that complex adaptive systems often perform well at the system level, despite dumb agents (consider ants or bees). Moreover, there are often unintended consequences that can lead to failure when well-meaning humans try to manage a complex system towards a particular goal.
EVIDENCE OF CIRCUMSTANCE
Decisions that work well in one context can often fail miserably in a different context. The right answer to many questions that professionals face is: “It depends.”
Mauboussin writes about how most people make decisions based on a theory, even though often they are not aware of it. Two business professors, Paul Carlile and Clayton Christensen, describe three stages of theory building:
The first stage is observation, which includes carefully measuring a phenomenon and documenting the results. The goal is to set common standards so that subsequent researchers can agree on the subject and the terms to describe it.
The second stage is classification, where researchers simplify and organize the world into categories to clarify the differences among phenomena. Early in theory development, these categories are based predominantly on attributes.
The final stage is definition, or describing the relationship between the categories and the outcomes. Often, these relationships start as simple correlations.
What’s especially important:
“Theories improve when researchers test predictions against real-world data, identify anomalies, and subsequently reshape the theory. Two crucial improvements occur during this refining process. In the classification stage, researchers evolve the categories to reflect circumstances, not just attributes. In other words, the categories go beyond what works to when it works. In the definition stage, the theory advances beyond simple correlations and sharpens to define causes – why it works. This pair of improvements allows people to go beyond crude estimates and to tailor their choices to the situation they face.” (page 88)
Here is what often happens: someone observes a few successes, identifies some common attributes, and proclaims that those attributes will lead others to success. This doesn’t work.
By the same logic, a company should not adopt a strategy without understanding the conditions under which it succeeds or fails. Mauboussin gives the example of Boeing outsourcing both the design and the building of sections of the Dreamliner to its suppliers. This was a disaster. Boeing had to pull the design work back in house.
The Colonel Blotto Game
Each player gets a hundred soldiers (resources) to distribute across three battlefields (dimensions). Whoever allocates more soldiers to a given battlefield wins that battlefield, and the player who wins the most battlefields wins the game. What’s interesting is how the game changes as one adjusts either of the two parameters.
Mauboussin observes that it’s not intuitive how much advantage additional points gives to one side in a three-battlefield game: “In a three-battlefield game, a player with 25 percent more resources has a 60 percent expected payoff (the proportion of battles the player wins), and a player with twice the resources has a 78 percent expected payoff. So some randomness exists, even in contests with fairly asymmetric resources, but the resource-rich side has a decisive advantage. Further, with low dimensions, the game is largely transitive: if A can beat B and B can beat C, then A can beat C. Colonel Blotto helps us to understand games with few dimensions, such as tennis.” (page 92)
“But to get the whole picture of the payoffs, we must introduce the second parameter, the number of dimensions or battlefields. The more dimensions the game has, the less certain the outcome (unless the players have identical resources). For example, a weak player’s expected payoff is nearly three times higher in a game with fifteen dimensions than in a nine-dimension game. For this reason, the outcome is harder to predict in a high-dimension game than in a low-dimension game, and as a result there are more upsets. Baseball is a good example of a high-dimension game…”
“The Colonel Blotto game is also highly nontransitive in all but largely asymmetric, low-dimension situations. For this reason, tournaments often fail to reveal a best team…” (see page 93, with an example where A beats B, B beats C, C beats A, and all of them beat D; there is no best player; the winner of a tournament is simply “the player who got to play D first”).
“Because of nontransitivity and randomness, the attribute of resources does not always prevail over the circumstance of dimensionality.”
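A quick Monte Carlo sketch (my construction, not Mauboussin’s) makes the resource effect easy to explore. Both players here allocate soldiers at random rather than strategically, so the numbers will not match the 60 and 78 percent payoffs cited above for skilled play, but the resource-rich side’s advantage is just as visible:

```python
import random

def allocate(total, fields):
    """Randomly split `total` soldiers across `fields` battlefields."""
    cuts = sorted(random.randint(0, total) for _ in range(fields - 1))
    bounds = [0] + cuts + [total]
    return [bounds[i + 1] - bounds[i] for i in range(fields)]

def expected_payoff(res_a, res_b, fields, trials=20000):
    """Estimate player A's expected payoff (fraction of battlefields won)
    when both players allocate their soldiers at random."""
    total = 0.0
    for _ in range(trials):
        a, b = allocate(res_a, fields), allocate(res_b, fields)
        total += sum(x > y for x, y in zip(a, b)) / fields
    return total / trials
```

Comparing, say, `expected_payoff(200, 100, 3)` against `expected_payoff(100, 100, 3)` shows how a resource edge raises the expected payoff; raising `fields` lets you explore how higher dimensionality changes the spread of outcomes.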
Bottom Line on Attributes vs. Circumstances
“Most of us look forward to leveraging our favorable experiences by applying the same approach to the next situation. We also have a thirst for success formulas – key steps to enrich ourselves. Sometimes our experience and nostrums work, but more often they fail us. The reason usually boils down to the simple reality that the theories guiding our decisions are based on attributes, not circumstances. Attribute-based theories come very naturally to us and often appear compelling… However, once you realize the answer to most questions is, ‘It depends,’ you are ready to embark on the quest to figure out what it depends on.” (page 99)
PHASE TRANSITIONS – “GRAND AH-WHOOMS”
Consider water freezing: just a small incremental change in temperature leads to a change from liquid to solid. Philip Ball, a physicist and author of Critical Mass: How One Thing Leads to Another, calls it a grand ah-whoom.
Critical Points, Extremes, and Surprise
In part due to the writings of Nassim Taleb, people are more aware of black swans, or extreme outcomes within a power law distribution. According to Mauboussin, however, what most people do not yet appreciate is how black swans are caused:
“Here’s where critical points and phase transitions come in. Positive feedback leads to outcomes that are outliers. And critical points help explain our perpetual surprise at black swan events because we have a hard time understanding how such small incremental perturbations can lead to such large outcomes.” (page 106)
Mauboussin explains critical points in social systems. Consider the wisdom of crowds: Crowds tend to make accurate predictions when three conditions prevail – diversity, aggregation, and incentives. “Diversity is about people having different ideas and different views of things. Aggregation means you can bring the group’s information together. Incentives are rewards for being right and penalties for being wrong that are often, but not necessarily, monetary.”
“For a host of psychological and sociological reasons, diversity is the most likely condition to fail when humans are involved. But what’s essential is that the crowd doesn’t go from smart to dumb gradually. As you slowly remove diversity, nothing happens initially. Additional reductions may also have no effect. But at a certain critical point, a small incremental reduction causes the system to change qualitatively.” (page 106)
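A hypothetical toy model (my construction, not Mauboussin’s) illustrates why diversity matters: when agents’ errors are independent, averaging cancels them out, but as errors become shared, the crowd’s mean estimate inherits the common bias. This sketch shows the benefit of diversity, though not the sharp critical point described above:

```python
import random
import statistics

def crowd_error(n_agents, shared_frac, trials=2000, truth=100.0):
    """Average absolute error of the crowd's mean estimate.

    Each agent's guess = truth + shared noise (common to the whole crowd)
    + private noise. `shared_frac` controls how much of the noise is
    shared, i.e. how little diversity the crowd has."""
    errs = []
    for _ in range(trials):
        shared = random.gauss(0, 10)
        guesses = [truth + shared_frac * shared
                   + (1 - shared_frac) * random.gauss(0, 10)
                   for _ in range(n_agents)]
        errs.append(abs(statistics.mean(guesses) - truth))
    return statistics.mean(errs)
```

With `shared_frac=0` (full diversity), a crowd of 100 is far more accurate than any individual; as `shared_frac` rises toward 1, averaging stops helping because everyone is making the same mistake.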
Blake LeBaron, an economist at Brandeis University, has done an experiment. LeBaron created a thousand investors within the computer and gave them money, guidelines on allocating their portfolios, and diverse trading rules. Then he let the system play out. As Mauboussin describes:
“His model was able to replicate many of the empirical features we see in the real world, including cycles of booms and crashes. But perhaps his most important finding is that a stock price can continue to rise even while the diversity of decision rules falls. Invisible vulnerability grows. But then, ah-whoom, the stock price tumbles as diversity rises again. Writes LeBaron, ‘During the run-up to a crash, population diversity falls. Agents begin using very similar trading strategies as their common good performance is reinforced. This makes the population very brittle, in that a small reduction in the demand for shares could have a strong destabilizing impact on the market.’” (page 107)
The Problem of Induction, Reductive Bias, and Bad Predictions
Extrapolating from what we see or have seen, to what will happen next, is a common decision-making mistake. Nassim Taleb retells Bertrand Russell’s story of a turkey (Taleb said turkey instead of chicken to suit his American audience). The turkey is fed a thousand days in a row. The turkey feels increasingly good until the day before Thanksgiving, when an unexpected event occurs. None of the previous one thousand days has given the turkey any clue about what’s next. Mauboussin explains:
“The equivalent of the turkey’s plight – sharp losses following a period of prosperity – has occurred repeatedly in business. For example, Merrill Lynch (which was acquired by Bank of America) suffered losses over a two-year period from 2007 to 2008 that were in excess of one-third of the profits it had earned cumulatively in its thirty-six years as a public company….”
“The term black swan reflects the criticism of induction by the philosopher Karl Popper. Popper argued that seeing lots of white swans doesn’t prove the theory that all swans are white, but seeing one black swan does disprove it. So Popper’s point is that to understand a phenomenon, we’re better off focusing on falsification than on verification. But we’re not naturally inclined to falsify something.” (page 108)
Not only does System 1 naturally look for confirming evidence, but even System 2 uses a positive test strategy, looking for confirming evidence for any hypothesis rather than looking for disconfirming evidence. See: http://boolefund.com/seeking-wisdom-from-darwin-to-munger/
People have a tendency to stick to whatever they currently believe. Most people rarely examine or test their beliefs (hypotheses). As Bertrand Russell says:
“Most people would rather die than think; many do.”
Munger explains: “And so, people tend to accumulate large mental holdings of fixed conclusions and attitudes that are not often reexamined or changed, even though there is plenty of good evidence that they are wrong.” Unfortunately, assuming that everything makes sense and that everything is fine leads very often to negative surprises.
When people deal with complex phenomena, they tend to oversimplify things. Instead of properly addressing the real questions – however complex and difficult – the human mind naturally substitutes an easier question. Oversimplifying a complex system is just another way for the brain to feel that everything makes sense and that everything is fine. But complex systems eventually undergo phase transitions, and some of these new phases have sharply negative consequences, especially when people are completely unprepared.
Even very smart people tend to oversimplify, or to trust mathematical models too much. Mauboussin argues that Long-Term Capital Management, which blew up in the late 1990s, had oversimplified reality by relying too heavily on its financial models. According to those models, the odds of LTCM blowing up – as it did – were astronomically low (1 out of billions). Clearly the models were very wrong. And obviously overconfidence is central both to the problem of induction and to oversimplification (including the reductive bias).
Mauboussin was able to speak with Benoit Mandelbrot, the French mathematician and father of fractal geometry. Mauboussin asked about the reductive bias. Mandelbrot replied that the wild randomness of stock markets was clearly visible for all to see, but economists continued to assume mild randomness, largely because it simplified reality and made the math more tractable. “Mandelbrot emphasized that while he didn’t know what extreme event was going to happen in the future, he was sure that the simple models of the economists would not anticipate it.” (page 110)
Mauboussin gives the example of David Li’s formula, which measures the correlation of default between assets. (The formula is known as a Gaussian copula function.) Li’s equation could measure the likelihood that two or more assets within a portfolio would default at the same time. This “opened the floodgates” for financial engineers to create new products, including collateralized debt obligations (bundles of corporate bonds), and summarize the default correlation using Li’s equation “rather than worry about the details of how each corporate bond within the pool would behave.” (page 111)
Unfortunately, Li’s equation oversimplified a complex world: Li’s equation did not make any adjustments for the fact that many correlations can change significantly.
“The failure of Long-Term Capital Management illustrates how changing correlations can wreak havoc. LTCM observed that the correlation between its diverse investments was less than 10 percent over the prior five years. To stress test its portfolio, LTCM assumed that correlations could rise to 30 percent, well in excess of anything the historical data showed. But when the financial crisis hit in 1998, the correlations soared to 70 percent. Diversification went out the window, and the fund suffered mortal losses. ‘Anything that relies on correlation is charlatanism,’ scoffed Taleb. Or, as I’ve heard traders say, ‘The only thing that goes up in a bear market is correlation.’” (page 111)
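The damage that rising correlation does to diversification follows from the standard formula for the variance of an equal-weighted portfolio of n identical assets with uniform pairwise correlation ρ: σ_p² = σ²(1/n + (1 − 1/n)ρ). A minimal sketch (the numbers below are illustrative, not LTCM’s actual figures):

```python
import math

def portfolio_vol(n_assets, asset_vol, corr):
    """Volatility of an equal-weighted portfolio of identical assets
    with uniform pairwise correlation `corr`. As corr rises toward 1,
    diversification stops reducing risk."""
    var = asset_vol ** 2 * (1 / n_assets + (1 - 1 / n_assets) * corr)
    return math.sqrt(var)
```

For example, with 20 assets of 30 percent volatility each, portfolio volatility is roughly 11 percent at ρ = 0.1 but jumps to roughly 25 percent at ρ = 0.7 – more than doubling, which is why soaring correlations in 1998 wiped out the benefit LTCM expected from diversification.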
Duncan Watts, a sociologist, led a trio of researchers at Columbia University in a social experiment. Subjects went to a web site – Music Lab – and were invited to participate in a survey. Upon entering the site, 20 percent of the subjects were assigned to an independent world and 10 percent each to eight worlds where people could see what other people were doing.
In the independent world, subjects were free to listen to songs, rate them, and download them, but they had no information about what other subjects were doing. In each of the other eight worlds, the subjects could see how many times other people had downloaded each song.
The subjects in the independent world collectively gave a reasonable indication of the quality of each of the songs. Thus, one could see for the other eight worlds whether social influence made a difference or not.
Song quality did play a role in the ranking, writes Mauboussin. A top-five song in the independent world had about a 50 percent chance of finishing in the top five in a social influence world. And the worst songs rarely topped the charts. But how would you guess an average song fared in the social worlds?
“The scientists found that social influence played a huge part in success and failure. One song, ‘Lockdown’ by the band 52metro, ranked twenty-sixth in the independent world, effectively average. Yet it was the number one song in one of the social influence worlds, and number forty in another. Social influence catapulted an average song to hit status in one world – ah-whoom – and relegated it to the cellar in another. Call it Lockdown’s lesson.
In the eight social worlds, the songs the subjects downloaded early in the experiment had a huge influence on the songs subjects downloaded later. Since the patterns of download were different in each social world, so were the outcomes.” (page 113)
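A toy preferential-attachment model (my construction, not the actual Music Lab design) captures the flavor of the result: each listener’s choice is weighted by song quality plus the downloads accumulated so far, so early random downloads snowball, and different worlds crown different hits even with identical songs:

```python
import random

def run_world(qualities, n_listeners=2000, social_weight=1.0, seed=None):
    """Toy social-influence world: each listener downloads one song with
    probability proportional to quality + social_weight * prior downloads.
    With social_weight=0, this reduces to an 'independent world'."""
    rng = random.Random(seed)
    downloads = [0] * len(qualities)
    for _ in range(n_listeners):
        weights = [q + social_weight * d
                   for q, d in zip(qualities, downloads)]
        song = rng.choices(range(len(qualities)), weights=weights)[0]
        downloads[song] += 1
    return downloads
```

Running several worlds with the same `qualities` but different seeds typically produces very different download rankings, while setting `social_weight=0` makes the rankings track quality alone.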
Mauboussin summarizes the lessons:
Study the distribution of outcomes for the system you are dealing with. Taleb defines gray swans as “modelable extreme events,” which are events one can at least prepare for, as opposed to black swans, which are by definition exceedingly difficult to prepare for.
Look for ah-whoom moments. In social systems, one must be mindful of the level of diversity.
Beware of forecasters. Especially for phase transitions, forecasts are generally dismal. (My note: What Didier Sornette is doing – trying to predict booms and crashes using rigorous mathematics – is interesting to follow, even though it may take a long time for it to be reliable. But if it does become truly reliable, then many investors will start using it, which might change the nature of the entire system, potentially rendering any such discovery useless.)
Mitigate the downside, capture the upside. One of the Kelly criterion’s central lessons is that betting too much in a system with extreme outcomes leads to ruin.
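For a simple binary bet, the Kelly fraction is f* = (pb − q)/b, where p is the win probability, q = 1 − p, and b the payoff odds. A minimal sketch of why over-betting is ruinous: expected log growth peaks at f* and falls, eventually below zero, as the stake grows (illustrative numbers, not from the book):

```python
import math

def kelly_fraction(p, b):
    """Kelly-optimal fraction of bankroll for a bet paying b:1
    with win probability p: f* = (p*b - (1 - p)) / b."""
    return (p * b - (1 - p)) / b

def growth_rate(f, p, b):
    """Expected log-growth per bet when staking fraction f of bankroll."""
    return p * math.log(1 + f * b) + (1 - p) * math.log(1 - f)
```

For a 60/40 even-money bet, f* = 0.2; staking twice that amount actually lowers the expected log growth, and staking still more guarantees eventual ruin despite the favorable odds.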
SORTING LUCK FROM SKILL
In areas such as business, investing, and sports, people make predictable and natural mistakes when it comes to distinguishing skill from luck. Consider reversion to the mean:
“The idea is that for many types of systems, an outcome that is not average will be followed by an outcome that has an expected value closer to the average. While most people recognize the idea of reversion to the mean, they often ignore or misunderstand the concept, leading to a slew of mistakes in their analysis.” (page 120)
Reversion to the mean was discovered by the Victorian polymath Francis Galton, a cousin of Charles Darwin. For instance, Galton found that tall parents tend to have children that are tall, but not as tall – their heights are closer to the mean. Similarly, short parents tend to have children that are short, but not as short – their heights are closer to the mean.
Yet it’s equally true that tall people have parents that are tall, but not as tall – the parents’ heights are closer to the mean. Similarly, short people have parents that are short, but not as short – their heights are closer to the mean. Thus, Galton’s crucial insight was that the overall distribution of heights remains stable over time: the proportions of the population in every height category were stable as one looks forward or backward in time.
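Galton’s observation can be sketched with a simple linear model (illustrative parameters, not Galton’s actual data): child = mean + r·(parent − mean) + noise, with the noise scaled so the height distribution stays stable across generations:

```python
import math
import random

def simulate_heights(n=50000, mean=175.0, sd=7.0, r=0.5, seed=42):
    """Toy Galton model: child = mean + r*(parent - mean) + noise.
    Noise is scaled so the children's s.d. matches the parents' s.d.,
    keeping the overall height distribution stable across generations."""
    rng = random.Random(seed)
    noise_sd = sd * math.sqrt(1 - r * r)
    parents = [rng.gauss(mean, sd) for _ in range(n)]
    children = [mean + r * (p - mean) + rng.gauss(0, noise_sd)
                for p in parents]
    return parents, children
```

Filtering for tall parents shows their children averaging between the population mean and the parents’ mean – reversion to the mean – while the overall spread of heights is unchanged from one generation to the next.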
Skill, Luck, and Outcomes
Mauboussin writes that Daniel Kahneman was asked to offer a formula for the twenty-first century. Kahneman gave two formulas:
Success = Some talent + luck
Great success = Some talent + a lot of luck
Consider an excellent golfer who scores well below her handicap during the first round. What do you predict will happen in the second round? We expect the golfer to have a score closer to her handicap for the second round because we expect there to be less luck compared to the first round.
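A quick simulation (hypothetical handicap and luck figures, mine rather than Mauboussin’s) makes the mechanics concrete: if score = skill + luck and luck is independent across rounds, then conditioning on a great first round still leaves the expected second-round score at the handicap, i.e. closer to the mean:

```python
import random
import statistics

def conditional_second_round(handicap=72.0, luck_sd=3.0,
                             threshold=4.0, n=200000, seed=1):
    """Simulate score = handicap + luck (lower is better in golf).
    Among first rounds at least `threshold` strokes better than the
    handicap, return the mean first- and second-round scores."""
    rng = random.Random(seed)
    firsts, seconds = [], []
    for _ in range(n):
        r1 = handicap + rng.gauss(0, luck_sd)
        if r1 <= handicap - threshold:  # an unusually good first round
            firsts.append(r1)
            seconds.append(handicap + rng.gauss(0, luck_sd))
    return statistics.mean(firsts), statistics.mean(seconds)
```

The first-round average among the selected rounds sits well below (better than) the handicap, while the second-round average lands right at the handicap: the luck that produced the great first round simply does not carry over.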
When one thinks about great streaks in sports like baseball, the record streak always belongs to a very talented player. So a record streak is a lot of talent plus a lot of luck.
TIME TO THINK TWICE
You don’t need to think twice before every decision. The stakes for most decisions are low. And even when the stakes are high, the best decision is often obvious enough.
The value of Think Twice is in situations with high stakes where your natural decision-making process will typically lead to a suboptimal choice. Some final thoughts:
Raise Your Awareness
As Kahneman has written, it is much easier to notice decision-making mistakes in others than in ourselves. So pay careful attention not only to others but also to yourself.
It is difficult to think clearly about many problems. Furthermore, after outcomes have occurred, hindsight bias causes many people to erroneously recall that they assigned the outcome a much higher probability than they actually did ex ante.
Put Yourself in the Shoes of Others
Embracing the outside view is typically essential when making an important probabilistic decision. Although the situation may be new for us, there are many others who have gone through similar things.
When it comes to understanding the behavior of individuals, often the situation – or specific, powerful incentives – can overwhelm otherwise good people.
Also, be careful when trying to understand or to manage a complex adaptive system, whether an ecosystem or the economy.
Finally, leaders must develop empathy for people.
Recognize the Role of Skill and Luck
When luck plays a significant role, anticipate reversion to the mean: extreme outcomes are followed by more average outcomes.
Short-term investment results reflect a great deal of randomness.
Timely, accurate, and clear feedback is central to deliberate practice, which is the path to gaining expertise. The challenge is that in some fields, like long-term investing, most of the feedback comes with a fairly large time lag.
For investors, it is very helpful to keep a journal detailing the reasons for every investment decision. (If you have the time, you can also write down how you feel physically and mentally at the time of each decision.)
A well-kept journal allows you to clearly audit your investment decisions. Otherwise, most of us will lose any ability to recall accurately why we made the decisions we did. This predictable memory loss – in the absence of careful written records – is often associated with hindsight bias.
It’s essential to identify – regardless of the outcome – when one has made a good decision and when one has made a bad decision. A good decision means that one faithfully followed a solid, proven process.
Another benefit of a well-kept investment journal is that you will start to notice other factors or patterns associated with bad investment decisions. For instance, too much stress or too much fatigue is often associated with poorer decisions. On the other hand, a good mood is often associated with overconfident decisions.
Mauboussin mentions a story told by Josh Waitzkin about Tigran Petrosian, a former World Chess Champion: “When playing matches lasting days or weeks, Petrosian would wake up and sit quietly in his room, carefully assessing his own mood. He then built his game plan for the day based on that mood, with great success. A journal can provide a structured tool for similar introspection.”
Create a Checklist
“When you face a tough decision, you want to be able to think clearly about what you might inadvertently overlook. That’s where a decision checklist can be beneficial.”
“A good checklist balances two opposing objectives. It should be general enough to allow for varying conditions, yet specific enough to guide action. Finding this balance means a checklist should not be too long; ideally, you should be able to fit it on one or two pages.
If you have yet to create a checklist, try it and see which issues surface. Concentrate on steps or procedures, and ask where decisions have gone off track before. And recognize that errors are often the result of neglecting a step, not from executing the other steps poorly.” (page 142, my emphasis)
Perform a Premortem
“You assume you are in the future and the decision you made has failed. You then provide plausible reasons for that failure. In effect, you try to identify why your decision might lead to a poor outcome before you make the decision. Klein’s research shows that premortems help people identify a greater number of potential problems than other techniques and encourage more open exchange, because no one individual or group has invested in a decision yet.”
“You can track your individual or group premortems in your decision journal. Watching for the possible sources of failure may also reveal early signs of trouble.”
Know What You Can’t Know
“… in decisions that involve systems with many interacting parts, causal links are frequently unclear…. Remember what Warren Buffett said: ‘Virtually all surprises are unpleasant.’ So considering the worst-case scenarios is vital and generally overlooked in prosperous times.”
“Also resist the temptation to treat a complex system as if it’s simpler than it is…. We can trace most of the large financial disasters to a model that failed to capture the richness of outcomes inherent in a complex system like the stock market.”
“There’s a funny paradox with decision making. Almost everyone realizes how important it is, yet very few people practice. [and few keep a journal…]”
“There are common and identifiable mistakes that you can understand, see in your daily affairs, and manage effectively. In those cases, the correct approach to deciding well often conflicts with what your mind naturally does. But now that you know when to think twice, better decisions will follow. So prepare your mind, recognize the context, apply the right technique – and practice.” (page 143)
BOOLE MICROCAP FUND
An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: http://boolefund.com/best-performers-microcap-stocks/
This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.
There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
The goal of the Boole Microcap Fund is to outperform the Russell Microcap Index over time, net of fees. The Boole Fund has low fees.
If you are interested in finding out more, please e-mail me or leave a comment.
My e-mail: email@example.com
Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.