Emotions and Biases

September 19, 2021

Meir Statman, an expert in behavioral finance, has written a good book, What Investors Really Want (McGraw-Hill, 2011).

Here is my brief summary of the important points:

 

UTILITY AND EMOTIONS

Statman argues that investments bring utilitarian benefits, expressive benefits, and emotional benefits.  The utilitarian benefits relate to being able to achieve financial goals, such as financial freedom or the ability to pay for the education of grandchildren.

Expressive benefits convey our values and tastes to ourselves and to others.  For instance, an investor is, in effect, saying, ‘I’m smart and can pick winning investments.’  Emotional benefits relate to how the activity makes you feel.  As Statman notes, Christopher Tsai said about his father Gerald Tsai, Jr. – a pioneer of the go-go funds of the 1960s:  “He loved doing transactions.  He loved the excitement of it.”

Statman tells the story of an engineer who learned that Statman is a professor of finance.  The engineer asked where he could buy the Japanese yen.  Statman asked him why, and the engineer said that the yen would zoom past the dollar based on macroeconomic fundamentals.  Statman replied:

Buying and selling Japanese yen, American stocks, French bonds, and all other investments is not like playing tennis against a practice wall, where you can watch the ball hit the wall and place yourself at just the right spot to hit it back when it bounces.  It is like playing tennis against an opponent you’ve never met before.  Are you faster than your opponent?  Will your opponent fool you by pretending to hit the ball to the left side, only to hit it to the right?  (page ix)

Later, Statman continues:

I tried to dissuade my fellow dinner guest from trading Japanese yen but I have probably failed.  Perhaps I failed to help my fellow dinner guest overcome his cognitive error, learn that trading should be framed as playing tennis against a possibly better player, and refrain from trading.  Or I might have succeeded in helping my fellow guest overcome his cognitive error and yet failed to dissuade him from trading because he wanted the expressive and emotional benefits of the trading game, the fun of playing and the thrill of winning.  (page xiii)

Statman explains that, in many fields of life, emotions are helpful in good decision-making.  Yet when it comes to areas such as investing, emotions tend to be harmful.

There is often a tension between what we should do and what we want to do.  And if we are stressed or fatigued, then it becomes even harder to do what we should do instead of what we want to do.

Moreover, our emotional reactions to changing stock prices generally mislead us.  When stocks are going up, we typically feel more confident and want to own more stocks.  When stocks are going down, we tend to feel less confident and want to own fewer stocks.  But this is exactly the opposite of what we should do if we want to maximize our long-term investment results.

 

WE WANT PROFITS HIGHER THAN RISKS

Beat-the-market investors have always been searching for investments with returns higher than risks.  But such investments are much rarer than is commonly supposed.  For every investor who beats the market, another must trail the market.  And that is before fees and expenses.  After fees and expenses, there are very few investors who beat the market over the course of several decades.

Statman mentions a study of stock traders.  Those who traded the most trailed the index by more than 7 percent per year on average.  Those who traded the least trailed the index by only one-quarter of 1 percent.  Furthermore, a study of Swedish investors showed that heavy traders lose, on average, nearly 4 percent of their total financial wealth each year.
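
To see what a drag like the 7 percentage points per year from the first study compounds to, here is a quick back-of-the-envelope calculation in Python.  The 10% annual index return and 20-year horizon are my own hypothetical inputs, not figures from the study:

```python
# Illustrative arithmetic only: assumes a 10% annual index return
# and a 7-percentage-point annual drag from heavy trading.
index_return, drag, years = 0.10, 0.07, 20

index_growth = (1 + index_return) ** years
trader_growth = (1 + index_return - drag) ** years

print(f"Index: {index_growth:.1f}x  Heavy trader: {trader_growth:.1f}x")
# Index: 6.7x  Heavy trader: 1.8x -- the drag consumes most of the gain.
```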

 

FRAMING

Framing means that people can react differently to a particular choice based on how it is presented.  Framing is everywhere in the world of investments.  Statman explains:

Some frames are quick and intuitive, but frames that come to mind quickly and intuitively are not always correct… The beat-the-market frame that comes to mind quickly and intuitively is that of tennis played against a practice wall, but the correct frame is tennis played against a possibly better player.  Incorrect framing of the beat-the-market game is one cognitive error that fools us into believing that beating the market is easy.  (page 18)

Statman has some advice for overcoming the framing error:

It is not difficult to overcome the framing error.  All we need to do is install an app in our minds as we install apps on our iPhones.  When we are ready to trade it would pipe in, asking, ‘Who is the idiot on the other side of the trade?  Have you considered the likelihood that the idiot is you?’  (page 21)

The broader issue (discussed below) is that most of us, by nature, are overconfident in many areas of life, including investing.  Overconfidence is the most widespread cognitive bias we have.  Procedures such as checklists can help reduce errors from overconfidence.  So can keeping a journal of every investment decision – what the hypothesis was, what the evidence was, and what ended up happening – which can help you improve over time, hopefully reducing cognitive errors such as overconfidence.
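
As one concrete way to structure such a journal, here is a minimal Python sketch.  The fields and example entry are my own suggestion, not Statman’s:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class JournalEntry:
    """One investment decision, recorded before and after the fact."""
    decision_date: date
    ticker: str
    hypothesis: str       # why you expect the investment to work out
    evidence: str         # the facts supporting the hypothesis
    disconfirming: str    # evidence against -- forces you to look for it
    outcome: str = ""     # filled in later, once the result is known

journal: list[JournalEntry] = []
journal.append(JournalEntry(
    decision_date=date(2021, 9, 19),
    ticker="XYZ",         # hypothetical position
    hypothesis="Earnings will recover as the inventory glut clears.",
    evidence="Inventory down 30% quarter over quarter; insider buying.",
    disconfirming="Two competitors have begun cutting prices.",
))
```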

 

REPRESENTATIVENESS HEURISTIC

Heuristics are mental shortcuts that often work, but sometimes don’t.  There is a good discussion of the representativeness heuristic on Wikipedia: https://en.wikipedia.org/wiki/Representativeness_heuristic

Daniel Kahneman and Amos Tversky defined representativeness as:

the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated.

When people rely on representativeness to make judgments, they are likely to judge wrongly because the fact that something is more representative does not actually make it more likely.  The key issue is sample size versus base rate.

Many people mistakenly assume that a small sample – even as small as a single example – is representative of the relevant population.  This mistaken belief is called the law of small numbers.

If you have a small sample, you cannot take it as representative of the entire population.  In other words, a small sample may differ significantly from the base rate.  If you have a large enough sample, then by the law of large numbers, you can conclude that the large sample approximates the base rate (the entire population).

For instance, if you flip a coin ten times and get 8 heads, you cannot conclude that flipping the same coin thousands of times will yield approximately 80% heads.  But if you flip a coin ten thousand times and get 5,003 heads, you can conclude that the base rate for heads is approximately 50%.
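
A quick simulation makes the contrast concrete.  This Python sketch (illustrative only) draws several 10-flip samples and one 10,000-flip sample from a fair coin:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def fraction_heads(n_flips: int) -> float:
    """Fraction of heads in n_flips tosses of a fair coin."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# Small samples: results scatter widely around the true base rate of 0.5,
# so runs like 8 heads in 10 flips are not rare.
print([fraction_heads(10) for _ in range(5)])

# Large sample: the law of large numbers pulls the result toward 0.5.
print(fraction_heads(10_000))
```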

If a mutual fund manager beats the market five years out of six, we conclude that it must be due to skill even though that is far too short a period for such a conclusion.  By randomness alone, there will be many mutual fund managers who beat the market five years out of six.
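
The base-rate arithmetic here is easy to check.  Assuming each manager has a pure 50/50 chance of beating the market in any given year (luck alone, no skill), the binomial distribution gives:

```python
from math import comb

# P(beating the market in at least 5 of 6 years) under pure chance (p = 0.5)
p = sum(comb(6, k) for k in (5, 6)) / 2**6
print(f"{p:.1%}")        # 10.9%

# So among 1,000 coin-flipping managers, roughly 109 would show
# a five-out-of-six "track record" by luck alone.
print(round(1000 * p))   # 109
```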

 

FINDING PATTERNS

Our brains are good at finding patterns.  But when the data are highly random, our brains often find patterns that don’t really exist.
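
One way to see this is to generate purely random up/down sequences and then look for ‘trends.’  The following Python sketch (illustrative) shows that long streaks – which the eye readily reads as patterns – arise routinely in pure noise:

```python
import random

random.seed(7)

# Simulate 250 random daily moves (about one trading year), each up or down.
moves = [random.choice([+1, -1]) for _ in range(250)]

# Find the longest streak of consecutive moves in the same direction.
longest = current = 1
for prev, cur in zip(moves, moves[1:]):
    current = current + 1 if cur == prev else 1
    longest = max(longest, current)

print("Longest streak in pure noise:", longest)
# Typically around 7-9 in a row -- easy to mistake for a real trend.
```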

Market timing is a case in point: there is no reliable way to time the market.  Yet many investors try, jumping in and out of stocks.  Nearly everyone who attempts market timing ends up trailing a simple index fund over time.

Part of the problem is that the brain notices and remembers only the handful of investors who were able to time the market successfully.  What investors should examine is the base rate:  Out of all investors who have tried market timing, how many have succeeded?  A very tiny percentage.

 

WE HAVE EMOTIONS, SOME MISLEADING

When our sentiment is positive, we expect our investments to bring returns higher than risk.  When our sentiment is negative, we expect our investments to bring returns lower than risk.

People expect the stocks of admired companies to do better than the stocks of spurned companies, but the opposite is true.  That’s a key reason deep value investing works:  on average, people are overly negative on out-of-favor or struggling companies, and people are overly positive on companies currently doing well.

People even expect higher returns if the name of a stock is easier to pronounce!

Finally, many investors think they can get rich from a new technological innovation.  In the vast majority of cases, this is not true.  For every Ford, for every Microsoft, for every Google, for every Amazon, there are many companies in the same industry that failed.

 

THE ILLUSION OF CONTROL

A sense of control, like optimism, is generally beneficial, helping us to overcome challenges and feel happier.  A sense of control is good in most areas of life, but – like overconfidence – it is generally harmful in areas that involve much randomness, such as investing.

Statman explains:

A sense of control gained through lucky charms or rituals can be useful.  In a golfing experiment, some people were told they were receiving a lucky ball; others received the same ball and were told nothing.  Everyone was instructed to take ten putts.  Players who were told that their ball was lucky made 6.42 putts on average while those with the ordinary ball made only 4.75.  People in another experiment were asked to bring a personal lucky charm to a memory test.  Half of them kept the charm with them, but the charms of the other half were kept in another room.  People who had the charms with them reported that they had greater confidence that they would do well on the test than the people whose charms were kept away, and people who had the charms with them indeed did better on the memory test.

The outcomes of golf and memory tasks are not random; they are tasks that can be improved by concentration and effort.  A sense of control brought about by lucky charms or lucky balls can help improve performance if a sense of control brings real control.  But no concentration or effort can improve performance when outcomes are random, not susceptible to control, as is often true in much of investing and trading.  (page 50)

Statman describes one experiment involving traders who watched an index move up or down.  The task was to raise the index as much as possible by the end of each of four rounds.  Traders were also told that three keys on their keyboard had special effects.

In truth, movements in the index were random and the three keys had no effect on outcomes.  Any sense of control was illusory.  Still, some traders believed that they had much control while others believed that they had little.  It turned out that the traders with the highest sense of control displayed the lowest level of performance.  (page 51)

 

COGNITIVE BIASES

Statman also discusses cognitive biases.  He remarks that cognitive biases affect each of us slightly differently.  Some may fall prey to hindsight bias more often.  Some have more trouble with availability bias.  Others may be more overconfident, and so forth.

Before examining some cognitive biases, it’s worth briefly reviewing Daniel Kahneman’s definition of two different mental systems that we have:

System 1:   Operates automatically and quickly;  makes instinctual decisions based on heuristics.

System 2:   Allocates attention (which has a limited budget) to the effortful mental activities that demand it, including complex computations involving logic, math, or statistics.

Kahneman writes – in Thinking, Fast and Slow – that System 1 and System 2 usually work quite well together:

The division of labor between System 1 and System 2 is highly efficient:  it minimizes effort and optimizes performance.   The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate.

Yet in some circumstances – especially if a good judgment requires complex computations such as logic, math, or statistics – System 1 has cognitive biases, or systematic errors that it tends to make.

The systematic errors of System 1 happen predictably in areas such as investing or forecasting.  These areas involve so much randomness that the intuitive statistics of System 1 lead predictably and consistently to errors.

 

AVAILABILITY BIAS

availability bias:   we tend to overweight evidence that comes easily to mind.

Related to the availability bias are vividness bias and recency bias.  We typically overweight facts that are vivid (e.g., plane crashes or shark attacks).   We also overweight facts that are recent (partly because they are more vivid).

Statman comments on the availability bias and on the near-miss effect:

Availability errors compound representativeness errors, misleading us further into the belief that beating the market is easy.  Casinos exploit availability errors.  Slot machines are quiet when players lose, but they jingle cascading coins when players win.  We exaggerate the likelihood of winning because the loud voice of winning is available to our minds more readily than the quiet voice of losing… Scans of the brains of gamblers who experience near-misses show activation of a reward-related brain circuitry, suggesting that near-misses increase the transmission of dopamine.  This makes gambling addiction similar to drug addiction.  (page 29)

Statman pens the following about mutual fund marketing:

Mutual fund companies employ availability errors to persuade us to buy their funds.  Morningstar, a company that rates mutual funds, assigns to each fund a number of stars that indicate its relative performance, one star for the bottom group, three stars for the average group, and five stars for the top group.  Have you ever seen an advertisement for a fund with one or two stars?  But we’ve all seen advertisements for four- and five-star funds.  Availability errors lead us to judge the likelihood of finding winning funds by the proportion of four- and five-star funds available to our minds.  (pages 29-30)

 

CONFIRMATION BIAS

confirmation bias:   we tend to search for, remember, and interpret information in a way that confirms our pre-existing beliefs or hypotheses.

Confirmation bias makes it quite difficult for many of us to improve upon or supplant our existing beliefs or hypotheses.  This bias also tends to make most of us overconfident about our existing beliefs or hypotheses, since all we can see are supporting data.

It’s clear that System 1 (intuition) often errs when it comes to forming and testing hypotheses.  First of all, System 1 always forms a coherent story (including causality), irrespective of whether there are truly any logical connections at all among the various things in our experience.  Furthermore, when System 1 faces a hypothesis, it automatically looks for confirming evidence.

But even System 2 – the logical and mathematical system that we possess and can develop – by nature uses a positive test strategy:

A deliberate search for confirming evidence, known as positive test strategy, is also how System 2 tests a hypothesis.  Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.  (page 81, Thinking, Fast and Slow)

Thus, the habit of always looking for disconfirming evidence of our hypotheses – especially our best-loved hypotheses (Charlie Munger’s term) – is arguably the most important intellectual habit we could develop in the never-ending search for wisdom and knowledge.

Charles Darwin is a wonderful model in this regard.  Darwin was far from being a genius in terms of IQ.  Yet Darwin trained himself always to search for facts and evidence that would contradict his hypotheses.  Charlie Munger explains in “The Psychology of Human Misjudgment” (see Poor Charlie’s Almanack, expanded 3rd edition):

One of the most successful users of an antidote to first conclusion bias was Charles Darwin.  He trained himself, early, to intensively consider any evidence tending to disconfirm any hypothesis of his, more so if he thought his hypothesis was a particularly good one… He provides a great example of psychological insight correctly used to advance some of the finest mental work ever done.  (my emphasis)

As Statman states:

Confirmation errors contribute their share to the perception that winning the beat-the-market game is easy.  We commit the confirmation error when we look for evidence that confirms our intuition, beliefs, claims, and hypotheses, but overlook evidence that disconfirms them… The remedy for confirmation errors is a structure that forces us to consider all the evidence, confirming and disconfirming alike, and guides us to tests that tell us whether our intuition, beliefs, claims, or hypotheses are confirmed by the evidence or disconfirmed by it.

One manifestation of confirmation errors is the tendency to trim disconfirming evidence from stories… The fact that a forecast of an imminent stock market crash was made years before its coming is unappetizing, so we tend to trim it off our stock market stories.  (page 31)

 

HINDSIGHT BIAS

hindsight bias:   the tendency, after an event has occurred, to see the event as having been predictable, despite little or no objective basis for predicting the event prior to its occurrence.

This is a very powerful bias.  Because we view the past as much more predictable than it actually was, we also view the future as much more predictable than it actually is.

Hindsight bias is also called the knew-it-all-along effect or creeping determinism.  (See: http://en.wikipedia.org/wiki/Hindsight_bias)

Kahneman writes about hindsight bias as follows:

Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events.   Baruch Fischhoff first demonstrated this ‘I-knew-it-all-along’ effect, or hindsight bias, when he was a student in Jerusalem.  Together with Ruth Beyth (another of our students), Fischhoff conducted a survey before President Richard Nixon visited China and Russia in 1972.   The respondents assigned probabilities to fifteen possible outcomes of Nixon’s diplomatic initiatives.   Would Mao Zedong agree to meet with Nixon?   Might the United States grant diplomatic recognition to China?   After decades of enmity, could the United States and the Soviet Union agree on anything significant?

After Nixon’s return from his travels, Fischhoff and Beyth asked the same people to recall the probability that they had originally assigned to each of the fifteen possible outcomes.   The results were clear.   If an event had actually occurred, people exaggerated the probability that they had assigned to it earlier.   If the possible event had not come to pass, the participants erroneously recalled that they had always considered it unlikely.   Further experiments showed that people were driven to overstate the accuracy not only of their original predictions but also of those made by others.   Similar results have been found for other events that gripped public attention, such as the O.J. Simpson murder trial and the impeachment of President Bill Clinton.  The tendency to revise the history of one’s beliefs in light of what actually happened produces a robust cognitive illusion.  (pages 202-3, my emphasis)

Concludes Kahneman:

The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is.  The illusion that one has understood the past feeds the further illusion that one can predict and control the future.  These illusions are comforting.   They reduce the anxiety we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence.  (pages 204-5, my emphasis)

Statman elucidates:

So, if an introverted man marries a shy woman, it must be because, as we have known all along, ‘birds of a feather flock together’ and if he marries an outgoing woman, it must be because, as we have known all along, ‘opposites attract.’  Similarly, if stock prices decline after a prolonged rise, it must be, as we have known all along, that ‘trees don’t grow to the sky’ and if stock prices continue to rise, it must be, as we have equally known all along, that ‘the trend is your friend.’  Hindsight errors are a serious problem for all historians, including stock market historians.  Once an event is part of history, there is a tendency to see the sequence that led to it as inevitable.  In hindsight, poor choices with happy endings are described as brilliant choices, and unhappy endings of well-considered choices are attributed to horrendous choices.  (page 33)

Statman later writes about Warren Buffett’s understanding of hindsight bias:

Warren Buffett understands well the distinction between hindsight and foresight and the temptation of hindsight.  Roger Lowenstein mentioned in his biography of Buffett the events surrounding the increase in the Dow Jones Industrial Average beyond 1,000 in early 1966 and its subsequent decline by spring.  Some of Buffett’s partners called to warn him that the market might decline further.  Such calls, said Buffett, raised two questions:

If they knew in February that the Dow was going to 865 in May, why didn’t they let me in on it then; and

If they didn’t know what was going to happen during the ensuing three months back in February, how do they know in May?

Statman concludes that we will always be normal, never rational, but that we can increase the ratio of smart normal behavior to stupid normal behavior by recognizing our cognitive errors and devising methods to overcome them.

One of the best ways to minimize errors from cognitive bias is to use a fully automated investment strategy.  A low-cost broad-market index fund will likely allow you to outperform roughly 90% of all investors over several decades.  A quantitative value approach may do better still.

 

OVERCONFIDENCE

Overconfidence is such a widespread cognitive bias among people that Kahneman devotes Part 3 of his book, Thinking, Fast and Slow, entirely to this topic.  Kahneman says in his introduction:

The difficulties of statistical thinking contribute to the main theme of Part 3, which describes a puzzling limitation of our mind:  our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in.   We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events.   Overconfidence is fed by the illusory certainty of hindsight.   My views on this topic have been influenced by Nassim Taleb, the author of The Black Swan.  (pages 14-5)

As Statman describes:

Investors overestimate the future returns of their investments relative to the returns of the average investor.  Investors even overestimate their past returns relative to the returns of the average investor.  Members of the American Association of Individual Investors overestimated their own investment returns by an average of 3.4 percentage points relative to their actual returns, and they overestimated their own returns relative to those of the average investor by 5.1 percentage points.  The unrealistic optimism we display in the investment arena is similar to the unrealistic optimism we display in other arenas.  (page 45)

Statman also warns that stockbrokers and stock exchanges have good reasons to promote overconfidence because unrealistically optimistic investors trade far more often.

 

SELF-ATTRIBUTION BIAS

self-attribution bias:   we tend to attribute good outcomes to our own skill, while blaming bad outcomes on bad luck.

This ego-protective bias prevents us from recognizing and learning from our mistakes.  This bias also contributes to overconfidence.

As with the other cognitive biases, self-attribution bias often makes us happier and stronger.  But we have to learn to slow ourselves down and take extra care in areas – like investing – where overconfidence will hurt us.

 

INFORMATION AND OVERCONFIDENCE

In Behavioural Investing (Wiley, 2007), James Montier explains a study done by Paul Slovic (1973).  Eight experienced bookmakers were shown a list of 88 variables found on a typical past-performance chart for a horse.  Each bookmaker was asked to rank the pieces of information by importance.

Then the bookmakers were given data for 40 past races and asked to rank the top five horses in each race.  Montier:

Each bookmaker was given the past data in increments of the 5, 10, 20, and 40 variables he had selected as most important.  Hence each bookmaker predicted the outcome of each race four times – once for each of the information sets.  For each prediction the bookmakers were asked to give a degree of confidence ranking in their forecast.  (page 136)

Here are the results:

Accuracy was virtually unchanged, regardless of the number of pieces of information the bookmaker was given (5, 10, 20, then 40).

But confidence skyrocketed as the number of pieces of information increased (5, 10, 20, then 40).

This same result has been found in a variety of areas.  As people get more information, the accuracy of their judgments or forecasts typically does not change at all, while their confidence in the accuracy of their judgments or forecasts tends to increase dramatically.

 

NARRATIVE FALLACY

In The Black Swan, Nassim Taleb writes the following about the narrative fallacy:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them.  Explanations bind facts together.  They make them all the more easily remembered;  they help them make more sense.  Where this propensity can go wrong is when it increases our impression of understanding.  (pages 63-4)

The narrative fallacy is central to many of the biases and misjudgments mentioned by Daniel Kahneman and Charlie Munger.  The human brain, whether using System 1 (intuition) or System 2 (logic), always looks for – or creates – logical coherence, even among random data.

Thanks to evolution, System 1 is usually right when it assumes causality.  For example, there was movement in the grass, probably caused by a predator, so run.  And even in the modern world, as long as cause-and-effect is straightforward and not statistical, System 1 is amazingly good at what it does:  its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate.  (Kahneman)

Furthermore, System 2, by searching for underlying causes or coherence, has, through careful application of the scientific method over centuries, developed a highly useful set of scientific laws by which to explain and predict various phenomena.

The trouble comes when the data or phenomena in question are ‘highly random’ – or inherently unpredictable (based on current knowledge).  In these areas, System 1 is often very wrong when it creates coherent stories or makes predictions.  And even System 2 assumes necessary logical connections when there may not be any – at least, none that can be discovered for some time.

Note:  The eighteenth-century Scottish philosopher (and psychologist) David Hume was one of the first to recognize clearly the human brain’s insistence on assuming necessary logical connections in any set of data or phenomena.

 

ANCHORING

anchoring effect:   we tend to use any number at hand as a baseline for estimating an unknown quantity, even when the number is totally unrelated to the quantity being estimated.

Kahneman and Tversky did one experiment where they spun a wheel of fortune, but they had secretly programmed the wheel so that it would stop on 10 or 65.   After the wheel stopped, participants were asked to estimate the percentage of African countries in the UN.   Participants who saw “10” on the wheel guessed 25% on average, while participants who saw “65” on the wheel guessed 45% on average, a huge difference.

Behavioral finance expert James Montier ran his own experiment on anchoring.   People were asked to write down the last four digits of their phone number.   Then they were asked whether the number of doctors in their capital city is higher or lower than the last four digits of their phone number.   Results:  Those whose last four digits were greater than 7000 on average reported 6762 doctors, while those with telephone numbers below 2000 arrived at an average 2270 doctors.  (Behavioural Investing, page 120)

Those are just two experiments out of many.  The anchoring effect is “one of the most reliable and robust results of experimental psychology” (page 119, Kahneman).  Furthermore, Montier observes that the anchoring effect is one reason why people cling to financial forecasts, despite the fact that most financial forecasts are either wrong, useless, or impossible to time.

When faced with the unknown, people will grasp onto almost anything. So it is little wonder that an investor will cling to forecasts, despite their uselessness.  (Montier, page 120)

 

BOOLE MICROCAP FUND

An equal-weighted group of micro caps generally far outperforms an equal-weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
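
As a loose illustration only – the fund’s actual ranking criteria and sizing rules are its own – rank-proportional position sizing might look something like the following Python sketch, where the scores are hypothetical ranking scores:

```python
def rank_weights(scores: list[float], cap: float = 0.20) -> list[float]:
    """Hypothetical sizing rule: weights proportional to ranking score,
    with the largest position capped and the excess spread pro rata.
    (Single pass; a production version would iterate until no weight
    exceeds the cap.)"""
    total = sum(scores)
    weights = [s / total for s in scores]
    excess = sum(max(0.0, w - cap) for w in weights)
    capped = [min(w, cap) for w in weights]
    rest = sum(capped)
    return [w + excess * (w / rest) for w in capped]

# 15 hypothetical positions, best-ranked first:
scores = list(range(15, 0, -1))       # 15, 14, ..., 1
weights = rank_weights(scores)
print(f"largest: {weights[0]:.1%}")   # 12.5% in this example
```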

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.