Deep Value: Profiting from Mean Reversion


(Image: Zen Buddha Silence by Marilyn Barbone.)

November 12, 2017

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion. Yet there are some common misconceptions about deep value investing.

  • First, deep value stocks have on occasion been called cheap relative to future growth. But it’s often more accurate to say that deep value stocks are cheap relative to normalized earnings or cash flows.
  • Second, the cheapness of deep value stocks has often been said to be relative to “net tangible assets.” However, in many cases, even for stocks trading at a discount to tangible assets, mean reversion relates to the future normalized earnings or cash flows that the assets can produce.
  • Third, typically more than half of deep value stocks underperform the market. And deep value stocks are more likely to be distressed than average stocks. Do these facts imply that a deep value investment strategy is riskier than average? No…

Have you noticed these misconceptions? I’m curious to hear your take. Please let me know.

Here are the sections in this blog post:

  • Introduction
  • Mean Reversion as “Return to Normal” instead of “Growth”
  • Revenues, Earnings, Cash Flows, NOT Asset Values
  • Is Deep Value Riskier?
  • A Long Series of Favorable Bets
  • “Cigar Butts” vs. See’s Candies
  • Microcap Cigar Butts

 

INTRODUCTION

Deep value stocks tend to fit two criteria:

  • Deep value stocks trade at depressed multiples.
  • Deep value stocks have depressed fundamentals – they have generally been doing terribly in terms of revenues, earnings, or cash flows, and often the entire industry is doing poorly.

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.

  • Low multiples include low P/E (price-to-earnings), low P/B (price-to-book), low P/CF (price-to-cash flow), and low EV/EBIT (enterprise value-to-earnings before interest and taxes).
  • Mean reversion implies that, in general, deep value stocks are underperforming their economic potential. On the whole, deep value stocks will experience better future economic performance than is implied by their current stock prices.

If you look at deep value stocks as a group, it’s a statistical fact that many will experience better revenues, earnings, or cash flows in the future than what is implied by their stock prices. This is due largely to mean reversion. The future economic performance of these deep value stocks will be closer to normal levels than their current economic performance.

Moreover, the stock price increases of the good future performers will outweigh the languishing stock prices of the poor future performers. This causes deep value stocks, as a group, to outperform the market over time.

Two important notes:

  1. Generally, for deep value stocks, mean reversion implies a return to more normal levels of revenues, earnings, or cash flows. It does not often imply growth above and beyond normal levels.
  2. For most deep value stocks, mean reversion relates to future economic performance and not to tangible asset value per se.

(1) Mean Reversion as Return to More Normal Levels

One of the best papers on deep value investing is by Josef Lakonishok, Andrei Shleifer, and Robert Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.” Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

LSV (Lakonishok, Shleifer, and Vishny) correctly point out that deep value stocks are better identified by using more than one multiple. LSV Asset Management currently manages $105 billion using deep value strategies that rely simultaneously on several metrics for cheapness, including low P/E and low P/CF.

  • In Quantitative Value (Wiley, 2012), Tobias Carlisle and Wesley Gray find that low EV/EBIT outperformed every other measure of cheapness, including composite measures.
  • However, James O’Shaughnessy, in What Works on Wall Street (McGraw-Hill, 2011), demonstrates – with great thoroughness – that, since the mid-1920s, composite approaches (low P/S, P/E, P/B, EV/EBITDA, P/FCF) have been the best performers. (A minimal sketch of a composite ranking follows this list.)
  • Any single metric may be more easily arbitraged away by a powerful computerized approach. Walter Schloss once commented that low P/B was working less well because many more investors were using it. (In recent years, low P/B hasn’t worked.)
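To make the composite idea concrete, here is a minimal sketch in Python that ranks stocks on several cheapness metrics at once and averages the ranks. The column names (pe, pb, ps, ev_ebit, p_fcf) and the toy data are placeholders for whatever data source you use; this illustrates the general approach, not LSV’s or O’Shaughnessy’s exact methodology.

```python
import pandas as pd

def composite_value_rank(df):
    """Rank stocks on several cheapness metrics and average the ranks.

    df: DataFrame indexed by ticker with columns 'pe', 'pb', 'ps',
        'ev_ebit', 'p_fcf' (lower = cheaper). Column names are
        hypothetical placeholders for your own data source.
    """
    metrics = ["pe", "pb", "ps", "ev_ebit", "p_fcf"]
    ranks = pd.DataFrame(index=df.index)
    for m in metrics:
        # Rank each metric separately; a lower multiple gets a better (smaller) rank.
        ranks[m] = df[m].rank(ascending=True)
    out = df.copy()
    # The composite score is simply the average rank across all metrics.
    out["composite_rank"] = ranks.mean(axis=1)
    # Keep the cheapest decile by composite rank.
    cutoff = out["composite_rank"].quantile(0.10)
    return out[out["composite_rank"] <= cutoff].sort_values("composite_rank")

# Toy example:
data = pd.DataFrame(
    {"pe": [5, 25, 8], "pb": [0.7, 4.0, 1.1], "ps": [0.4, 3.0, 0.9],
     "ev_ebit": [4, 18, 6], "p_fcf": [6, 30, 9]},
    index=["AAA", "BBB", "CCC"])
print(composite_value_rank(data))
```

Using several metrics together makes the screen less dependent on any single accounting number and harder to arbitrage away.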

LSV explain why mean reversion is the essence of deep value investing. Investors, on average, are overly pessimistic about stocks at low multiples. Investors underestimate the mean reversion in future economic performance for these out-of-favor stocks.

However, in my view, the paper would be clearer if it used (in some but not all places) “return to more normal levels of economic performance” in place of “growth.” Often it’s a return to more normal levels of economic performance – rather than growth above and beyond normal levels – that defines mean reversion for deep value stocks.

(2) Revenues, Earnings, Cash Flows NOT Net Asset Values

Buying at a low price relative to tangible asset value is one way to implement a deep value investing strategy. Many value investors have successfully used this approach. Examples include Ben Graham, Walter Schloss, Peter Cundill, John Neff, and Marty Whitman.

Warren Buffett used this approach in the early part of his career. Buffett learned this method from his teacher and mentor, Ben Graham. Graham called this the “net-net” approach. You take net working capital (current assets minus ALL liabilities). If the stock price is below that level, and if you buy a basket of such “net-nets,” you can’t help but do well over time. These are extremely cheap stocks, on average. (The only catch is that there must be enough net-nets in existence to form a basket, which is not always the case.)

Buffett on “cigar butts”:

…I call it the cigar butt approach to investing. You walk down the street and you look around for a cigar butt someplace. Finally you see one and it is soggy and kind of repulsive, but there is one puff left in it. So you pick it up and the puff is free – it is a cigar butt stock. You get one free puff on it and then you throw it away and try another one. It is not elegant. But it works. Those are low return businesses.

Link: http://intelligentinvestorclub.com/downloads/Warren-Buffett-Florida-Speech.pdf

But most net-nets are NOT liquidated. Rather, there is mean reversion in their future economic performance – whether revenues, earnings, or cash flows. That’s not to say there aren’t some bad businesses in this group. For net-nets, when economic performance returns to more normal levels, typically you sell the stock. You don’t (usually) buy and hold net-nets.

Sometimes net-nets are acquired. But in many of these cases, the acquirer is focused mainly on the earnings potential of the assets. (Non-essential assets may be sold, though.)

In sum, the specific deep value method of buying at a discount to net tangible assets has worked well in general ever since Graham started doing it. And net tangible assets do offer additional safety. That said, when these particular cheap stocks experience mean reversion, often it’s because revenues, earnings, or cash flows return to “more normal” levels. Actual liquidation is rare.

 

IS DEEP VALUE RISKIER?

According to a study by Joseph Piotroski covering 1976 to 1996 – discussed below – although a basket of deep value stocks clearly beats the market over time, only 43% of deep value stocks outperform the market, while 57% underperform. By comparison, an average stock has a 50% chance of outperforming the market and a 50% chance of underperforming.

Let’s assume that the average deep value stock has a 57% chance of underperforming the market, while an average stock has only a 50% chance of underperforming. This is a realistic assumption not only because of Piotroski’s findings, but also because the average deep value stock is more likely to be distressed (or to have problems) than the average stock.

Does it follow that the reason deep value investing does better than the market over time is that deep value stocks are riskier than average stocks?

It is widely accepted that deep value investing does better than the market over time. But there is still disagreement about how risky deep value investing is. Strict believers in the EMH (Efficient Markets Hypothesis) – such as Eugene Fama and Kenneth French – argue that value investing must be unambiguously riskier than simply buying an S&P 500 Index fund. On this view, the only way to do better than the market over time is by taking more risk.

Now, it is generally true that the average deep value stock is more likely to underperform the market than the average stock. And the average deep value stock is more likely to be distressed than the average stock.

But LSV show that a deep value portfolio does better than an average portfolio, especially during down markets. This means that a basket of deep value stocks is less risky than a basket of average stocks.

  • A “portfolio” or “basket” of stocks refers to a group of stocks. Statistically speaking, there must be at least 30 stocks in the group. In the case of LSV’s study – like most academic studies of value investing – there are hundreds of stocks in the deep value portfolio. (The results are similar over time whether you have 30 stocks or hundreds.)

Moreover, a deep value portfolio only has slightly more volatility than an average portfolio, not nearly enough to explain the significant outperformance. In fact, when looked at more closely, deep value stocks as a group have slightly more volatility mainly because of upside volatility – relative to the broad market – rather than because of downside volatility. This is captured not only by the clear outperformance of deep value stocks as a group over time, but also by the fact that deep value stocks do much better than average stocks in down markets.

Deep value stocks, as a group, not only outperform the market, but are less risky. Ben Graham, Warren Buffett, and other value investors have been saying this for a long time. After all, the lower the stock price relative to the value of the business, the less risky the purchase, on average. Less downside implies more upside.

 

A LONG SERIES OF FAVORABLE BETS

Let’s continue to assume that the average deep value stock has a 57% chance of underperforming the market. And the average deep value stock has a greater chance of being distressed than the average stock. Does that mean that the average individual deep value stock is riskier than the average stock?

No, because the expected return on the average deep value stock is higher than the expected return on the average stock. In other words, on average, a deep value stock has more upside than downside.

Put very crudely, in terms of expected value:

[(43% x upside) – (57% x downside)] > [avg. return]

43% times the upside, minus 57% times the downside, is greater than the return from the average stock (or from the S&P 500 Index).

The crucial issue relates to making a long series of favorable bets. Since we’re talking about a long series of bets, let’s again consider a portfolio of stocks.

  • Recall that a “portfolio” or “basket” of stocks refers to a group of at least 30 stocks.

A portfolio of average stocks will simply match the market over time. That’s an excellent result for most investors, which is why most investors should just invest in index funds: https://boolefund.com/warren-buffett-jack-bogle/

A portfolio of deep value stocks will, over time, do noticeably better than the market. Year in and year out, approximately 57% of the deep value stocks will underperform the market, while 43% will outperform. But the overall outperformance of the 43% will outweigh the underperformance of the 57%, especially over longer periods of time. (57% and 43% are used for illustrative purposes here. The actual percentages vary.)

Say that you have an opportunity to make the same bet 1,000 times in a row, and that the bet is as follows: You bet $1. You have a 60% chance of losing $1, and a 40% chance of winning $2. This is a favorable bet because the expected value is positive: 40% x $2 = $0.80, while 60% x $1 = $0.60. If you made this bet repeatedly over time, you would average $0.20 profit on each bet, since $0.80 – $0.60 = $0.20.

If you make this bet 1,000 times in a row, then roughly speaking, you will lose 60% of them (600 bets) and win 40% of them (400 bets). But your profit will be about $200. That’s because 400 x $2 = $800, while 600 x $1 = $600. $800 – $600 = $200.
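Here is a minimal simulation of this bet in Python, using the illustrative 60/40 odds and $1/$2 payoffs from the example above, to show that the profit over 1,000 bets comes out near $200:

```python
import random

def simulate_bets(n_bets=1000, p_win=0.40, win_amount=2.0, loss_amount=1.0, seed=42):
    """Simulate a series of independent favorable bets and return total profit."""
    rng = random.Random(seed)
    profit = 0.0
    for _ in range(n_bets):
        if rng.random() < p_win:
            profit += win_amount   # 40% of the time: win $2
        else:
            profit -= loss_amount  # 60% of the time: lose $1
    return profit

# Expected profit per bet: 0.40 * $2 - 0.60 * $1 = $0.20, or roughly $200 over 1,000 bets.
print(simulate_bets())
```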

Systematically investing in deep value stocks is similar to the bet just described. You may lose 57% of the bets and win 43% of the bets. But over time, you will almost certainly profit because the average upside is greater than the average downside. Your expected return is also higher than the market return over the long term.

 

“CIGAR BUTTS” vs. SEE’S CANDIES

In his 1989 Letter to Shareholders, Buffett writes about his “Mistakes of the First Twenty-Five Years,” including a discussion of “cigar butt” (deep value) investing:

My first mistake, of course, was in buying control of Berkshire. Though I knew its business – textile manufacturing – to be unpromising, I was enticed to buy because the price looked cheap. Stock purchases of that kind had proved reasonably rewarding in my early years, though by the time Berkshire came along in 1965 I was becoming aware that the strategy was not ideal.

If you buy a stock at a sufficiently low price, there will usually be some hiccup in the fortunes of the business that gives you a chance to unload at a decent profit, even though the long-term performance of the business may be terrible. I call this the ‘cigar butt’ approach to investing. A cigar butt found on the street that has only one puff left in it may not offer much of a smoke, but the ‘bargain purchase’ will make that puff all profit.

Link: http://www.berkshirehathaway.com/letters/1989.html

Buffett has made it clear that cigar butt (deep value) investing does work. In fact, fairly recently, Buffett bought a basket of cigar butts in South Korea. The results were excellent. But he did this in his personal portfolio.

This highlights a major reason why Buffett evolved from investing in cigar butts to investing in higher quality businesses: size of investable assets. When Buffett was managing a few hundred million dollars or less, which includes when he managed an investment partnership, Buffett achieved outstanding results in part by investing in cigar butts. But when investable assets swelled into the billions of dollars at Berkshire Hathaway, Buffett began investing in higher quality companies.

  • Cigar butt investing works best for micro caps. But micro caps won’t move the needle if you’re investing many billions of dollars.

The idea of investing in higher quality companies is simple: If you can find a business with a sustainably high ROE – based on a sustainable competitive advantage – and if you can hold that stock for a long time, then your returns as an investor will approximate the ROE (return on equity). This assumes that the company can continue to reinvest all of its earnings at the same ROE, which is extremely rare when you look at multi-decade periods.

  • The quintessential high-quality business that Buffett and Munger purchased for Berkshire Hathaway is See’s Candies. They paid $25 million for $8 million in tangible assets in 1972. Since then, See’s Candies has produced over $2 billion in (pre-tax) earnings, while only requiring a bit over $40 million in reinvestment.
  • See’s turns out more than $80 million in profits each year. That’s over 100% ROE (return on equity), which is extraordinary. But that’s based mostly on assets in place. The company has not been able to reinvest most of its earnings. Instead, Buffett and Munger have invested the massive excess cash flows in other good opportunities – averaging over 20% annual returns on these other investments (for most of the period from 1972 to present).
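As a rough illustration of the earlier claim that a buy-and-hold investor’s return approximates the business’s ROE when all earnings are reinvested at that ROE, here is a minimal sketch. The 20% ROE, 30-year horizon, and constant price-to-book multiple are hypothetical assumptions, not figures from the post:

```python
def buy_and_hold_cagr(roe=0.20, years=30, entry_multiple=3.0, exit_multiple=3.0):
    """Investor CAGR when a business reinvests all earnings at a constant ROE
    and the stock is bought and sold at the same price-to-book multiple."""
    book_start = 1.0
    book_end = book_start * (1 + roe) ** years   # book value compounds at the ROE
    price_start = entry_multiple * book_start
    price_end = exit_multiple * book_end
    return (price_end / price_start) ** (1 / years) - 1

# With a constant 20% ROE and an unchanged multiple, the investor compounds at ~20%.
print(f"{buy_and_hold_cagr():.1%}")
```

If the exit multiple differs from the entry multiple, the investor’s return drifts away from the ROE, which is one reason overpaying for quality can disappoint.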

Furthermore, buying and holding stock in a high-quality business brings enormous tax advantages over time because you never have to pay taxes until you sell. Thus, as a high-quality business – with sustainably high ROE – compounds value over many years, a shareholder who never sells receives the maximum benefit of this compounding.
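The tax advantage of deferral can be illustrated with a minimal sketch. The 20% pre-tax growth rate and 20% capital-gains tax rate are hypothetical assumptions chosen only to show the mechanics:

```python
def deferred_vs_annual_tax(growth=0.20, tax_rate=0.20, years=30, start=1.0):
    """Compare terminal wealth when gains are taxed once at the end (buy and hold)
    versus taxed every year (frequent trading). Rates are illustrative assumptions."""
    # Buy and hold: compound untaxed, then pay capital-gains tax once on the total gain.
    hold = start * (1 + growth) ** years
    hold_after_tax = start + (hold - start) * (1 - tax_rate)
    # Annual trading: pay tax on each year's gain, so compound at the after-tax rate.
    trade_after_tax = start * (1 + growth * (1 - tax_rate)) ** years
    return hold_after_tax, trade_after_tax

hold, trade = deferred_vs_annual_tax()
print(f"Deferred: {hold:.1f}x vs taxed annually: {trade:.1f}x")
```

Under these assumptions, deferring the tax for thirty years leaves the investor with roughly twice the terminal wealth of paying tax every year.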

Yet it’s extraordinarily difficult to find a business that can sustain ROE at over 20% – including reinvested earnings – for decades. Buffett has argued that cigar butt (deep value) investing produces more dependable results than investing exclusively in high-quality businesses. Very often investors buy what they think is a higher-quality business, only to find out later that they overpaid because the future performance does not match the high expectations that were implicit in the purchase price. Indeed, this is what LSV show in their famous paper (discussed above) in the case of “glamour” (or “growth”) stocks.

 

MICROCAP CIGAR BUTTS

Buffett has said that you can do quite well as an investor, if you’re investing smaller amounts, by focusing on cheap micro caps. In fact, Buffett has maintained that he could get 50% per year if he could invest only in cheap micro caps.

Investing systematically in cheap micro caps can often lead to higher long-term results than the majority of approaches that invest in high-quality stocks.

First, micro caps, as a group, far outperform every other category. See the historical performance here: https://boolefund.com/best-performers-microcap-stocks/

Second, cheap micro caps do even better. Systematically buying at low multiples works over the course of time, as clearly shown by LSV and many others.

Finally, if you apply the Piotroski F-Score to screen cheap micro caps for improving fundamentals, performance is further boosted: The biggest improvements in performance are concentrated in cheap micro caps with no analyst coverage. See: https://boolefund.com/joseph-piotroski-value-investing/
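For reference, here is a minimal sketch of the nine F-Score signals as they are commonly described (one point per test). The field names are hypothetical placeholders for your own fundamental data, and this is an illustration of the scoring idea rather than Piotroski’s exact implementation:

```python
def piotroski_f_score(cur, prev):
    """Compute the 9-point Piotroski F-Score from two dicts of fundamentals
    (current year and prior year). Field names are illustrative placeholders."""
    roa = cur["net_income"] / cur["total_assets"]
    roa_prev = prev["net_income"] / prev["total_assets"]
    score = 0
    # Profitability signals
    score += roa > 0
    score += cur["cfo"] > 0
    score += roa > roa_prev
    score += cur["cfo"] / cur["total_assets"] > roa          # accruals check
    # Leverage, liquidity, and dilution signals
    score += (cur["lt_debt"] / cur["total_assets"]
              < prev["lt_debt"] / prev["total_assets"])
    score += cur["current_ratio"] > prev["current_ratio"]
    score += cur["shares_out"] <= prev["shares_out"]
    # Operating efficiency signals
    score += cur["gross_margin"] > prev["gross_margin"]
    score += cur["asset_turnover"] > prev["asset_turnover"]
    return int(score)
```

A score of 8 or 9 flags improving fundamentals; combining a high score with a cheap multiple is the basic screen described above.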

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Ben Graham Was a Quant


(Image: Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011). In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors. Having a clearly defined quantitative investment strategy that you stick with over the long term–both when the strategy is in favor and when it’s not–is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches. Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors. See: https://boolefund.com/warren-buffett-jack-bogle/

An index fund tries to copy an index, which is itself typically based on companies of a certain size. By contrast, quantitative value investing is based on metrics that indicate undervaluation.

 

QUANTITATIVE VALUE INVESTING

Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years. I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession. I think we can do it successfully with a few techniques and simple principles. The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches. The thing that I have been emphasizing in my own work for the last few years has been the group approach. To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company.

I am just finishing a 50-year study – the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group. I found the results were very good for 50 years. They certainly did twice as well as the Dow Jones. And so my enthusiasm has been transferred from the selective to the group approach. What I want is an earnings ratio twice as good as the bond interest ratio typically for most years. One can also apply a dividend criterion or an asset value criterion and get good results. My research indicates the best results come from simple earnings criterions.

Imagine–there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work. It seems too good to be true. But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up.

See: http://www.cfapubs.org/doi/pdf/10.2470/rf.v1977.n1.4731
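As an illustration of the simple group approach Graham describes (an earnings yield at least twice the bond yield), here is a minimal sketch. The tickers, earnings yields, and bond yield are made-up inputs:

```python
def graham_group_screen(stocks, aaa_bond_yield):
    """Select stocks whose earnings yield (E/P) is at least twice the AAA bond yield.

    stocks: dict of ticker -> earnings yield (earnings per share / price).
    This sketches only the earnings criterion Graham mentions; he notes that a
    dividend criterion or an asset value criterion can also be used.
    """
    hurdle = 2 * aaa_bond_yield
    return {ticker: ey for ticker, ey in stocks.items() if ey >= hurdle}

# Example: with a 4% AAA bond yield, the hurdle is an 8% earnings yield (P/E <= 12.5).
candidates = graham_group_screen({"AAA": 0.11, "BBB": 0.05, "CCC": 0.09}, 0.04)
print(candidates)  # {'AAA': 0.11, 'CCC': 0.09}
```

The point of the group approach is that the screen is applied mechanically to a whole basket, with very little attention to any individual company.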

Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.

Greiner says there are three groups of quants:

  • The first group is focused on EMH, and on creating ETFs and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors. They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not. This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others. Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct. – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that. – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well. Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient. The difference between these propositions is night and day.

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.

 

DESPERATELY SEEKING ALPHA

Greiner refers to the July 12, 2010 issue of Barron’s. Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December of 2009. 87 of the 248 funds were gone completely, while the other funds had been downgraded. Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less risk-free return) to calculate a slope and an intercept, which are called beta and alpha. Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return. Alpha, on the other hand, is the intercept, and this implies how much return the portfolio is obtaining over and above the market, separate from any other factors. (page 16)
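In code, the regression described above looks roughly like this. It is a minimal sketch using numpy; the return series are stand-ins for your own data:

```python
import numpy as np

def capm_alpha_beta(portfolio_returns, market_returns, risk_free=0.0):
    """Regress portfolio excess returns on market excess returns.
    Returns (alpha, beta): beta is the slope, alpha is the intercept."""
    y = np.asarray(portfolio_returns) - risk_free
    x = np.asarray(market_returns) - risk_free
    beta, alpha = np.polyfit(x, y, 1)  # slope first, then intercept
    return alpha, beta

# Toy monthly returns: the portfolio roughly tracks the market plus a small extra return.
market = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
portfolio = 0.9 * market + 0.002
alpha, beta = capm_alpha_beta(portfolio, market)
print(f"alpha per period: {alpha:.4f}, beta: {beta:.2f}")  # ~0.0020 and ~0.90
```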

But risk involves not just individual company risk. It also involves how one company’s stock is correlated with the stocks of other companies. If you can properly estimate the correlations among various stocks, then using Markowitz’ approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks. Graham was not concerned about any correlations among the cheap stocks. As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha. (Beta as a concept has some obvious problems.) But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17). Moreover, alpha is excess return relative to an index or benchmark. We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors. Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data). (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return. Greiner explains:

The stock market is not a repeatable event. Every day is an out-of-sample environment. The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets. (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable. It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is coined factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)

 

RISKY BUSINESS

Risk means more things can happen than will happen. – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed. The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed. Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management. Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk. Clearly there have been countless situations (e.g., very overvalued stocks) when stock prices had not been volatile, but risk was clearly high. Similarly, there have been many situations (e.g., very undervalued stocks) when volatility had been high, but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear. In these situations, a spike in volatility is a symptom of risk. At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values. So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear. Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich. Close the doors. Be fearful when others are greedy. Be greedy when others are fearful. – Warren Buffett

So how do you figure out risk exposures? It is often a difficult thing to do. Greiner defines ELE events as extinction-level events, or extreme-extreme events. If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.” (page 33)

But even financial and economic history, taken as a whole, is not a certain guide to the future. Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing. Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value – whether net asset value or earnings power. This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience. When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences. This is what quants do. They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen. We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety. Do we not wear seat belts because of the odd chance of the tractor-trailer collision? Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan. Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism. The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes. In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow. The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks. Eventually, the interconnectedness of the associations among sticks results in an avalanche. Likewise, so behaves the market. (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so. Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation. Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out. Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful. Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty. Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event. When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before. This is not really very helpful. The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards. That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner (a sketch in code follows the list):

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor. If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe. Do the same for all other factors. The numerical B/P for a stock is then termed exposure. Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta. The variance and covariance of the beta time series act as proxies for the variance of the stocks. These are the components of the covariance matrix. On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, to calculate expected returns, multiply the stock weight vector by the exposure matrix times the beta vector for a given date. The exposure matrix is N x M, where N is the number of stocks and M is the number of factors. The covariance matrix is M x M, and the exposed risks, predicted through the model, are derived from it.
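Here is a minimal sketch of steps 3 through 8 in code. It uses numpy only, with simulated data and an equal-weighted toy portfolio; the dimensions and factor setup are placeholders rather than Greiner’s actual implementation, and stock-specific risk is omitted for brevity:

```python
import numpy as np

def estimate_factor_risk_model(exposures, returns):
    """Cross-sectional factor risk model sketch.

    exposures: array of shape (T, N, M) -- per period, N stocks x M factor exposures
               (e.g., z-scored B/P, E/P, FCF/P).
    returns:   array of shape (T, N)    -- next-period returns for the N stocks.

    Returns the (T, M) time series of factor returns and the (M, M) factor covariance.
    """
    T, N, M = exposures.shape
    factor_returns = np.zeros((T, M))
    for t in range(T):
        # Steps 4-5: cross-sectional regression of returns on exposures for date t.
        X, y = exposures[t], returns[t]
        factor_returns[t], *_ = np.linalg.lstsq(X, y, rcond=None)
    # Step 7: variance/covariance of the factor-return time series.
    factor_cov = np.cov(factor_returns, rowvar=False)
    return factor_returns, factor_cov

def predicted_portfolio_variance(weights, exposure_matrix, factor_cov):
    """Step 8 (risk side): portfolio factor exposures times the factor covariance."""
    port_exposures = weights @ exposure_matrix          # 1 x M vector
    return float(port_exposures @ factor_cov @ port_exposures)

# Toy example: 60 months, 50 stocks, 3 factors.
rng = np.random.default_rng(0)
expo = rng.standard_normal((60, 50, 3))
rets = expo @ np.array([0.01, 0.005, -0.002]) + 0.02 * rng.standard_normal((60, 50))
f_rets, f_cov = estimate_factor_risk_model(expo, rets)
w = np.full(50, 1 / 50)
print(predicted_portfolio_variance(w, expo[-1], f_cov))
```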

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry. Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in. This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 one industry and 0.31 another. Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries. However, FactSet’s MAC model does include this operation. (page 45)

 

BETA IS NOT “SHARPE” ENOUGH

Value investors like Ben Graham know that price variability is not risk. Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power). Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good. Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky. In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power. Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value. (pages 55-56)

Saying volatility represents risk is to mistake the often emotional opinions of Mr. Market with the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, if the variables are fully random and nonserially correlated, independent, and identically distributed, then we have a normal distribution. The problem in modeling stock returns is that the return mean varies with time and the errors are not random:

Of course there are many, many types of distributions. For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete. Some of these are asymmetric about the mean (first moment) and some are not. Some have fat tails and some do not. You can even have distributions with infinite second moments (infinite variance). There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance. Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause. Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly. (page 58)

Even though market prices have been known to have non-normal distributions and to maintain statistical dependence, most people modeling market prices have downplayed this information. It’s just been much easier to assume that market returns follow a random walk resulting in random errors, which can be easily modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave. Market returns tend to have fatter tails. But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of more accurate distributions are not fully clear.

You can make an analogy with behavioral finance. It’s now very well-established by thousands of experiments how many people behave less than fully rationally, especially when making various decisions under uncertainty. However, the most useful economic models in many situations are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH? It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful. In a world of Econs, I believe that the EMH would be true. And it would not have been possible to do research in behavioral finance without the rational model as a starting point. Without the rational framework, there are no anomalies from which we can detect misbehavior. Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research. We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have. (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed. Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’ There are definitely anomalies: sometimes the market overreacts, and sometimes it underreacts. But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right. The price is often wrong, and sometimes very wrong, says Thaler. However, that doesn’t mean that you can beat the market. It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution. (page 60)

One problem with beta is that it has covariance in the numerator. And if two variables are linearly independent, then their covariance is always zero. But if beta is very low or zero, that does not tell you whether the portfolio is truly independent of the benchmark. Greiner explains that you can determine linear dependence as follows: When the benchmark has moved X in the past, has the portfolio consistently moved 0.92*X? If yes, then the portfolio and the benchmark are linearly dependent. Then we could express the return of the portfolio as a simple multiple of the benchmark, which seems to give beta some validity. However, the converse does not hold: a low or even zero measured beta does not mean the portfolio is independent of the benchmark, since the dependence may be nonlinear or unstable over time. In that case beta is misleading.

Another example would be a market sell-off in which most stocks become highly correlated. Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund. The distribution of returns is more like a Frechet distribution than a normal distribution in two ways: it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74). The Magellan Fund returns also have a fat tail on the right side of the distribution. This is where the Frechet misses. But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution. Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.
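To run this kind of comparison yourself, here is a minimal sketch that fits both a normal distribution and a generalized extreme value (GEV) distribution, which includes the Frechet family, to a series of returns and compares their log-likelihoods. The return series here is simulated fat-tailed data standing in for real fund returns, not Magellan’s actual history:

```python
import numpy as np
from scipy import stats

def compare_fits(returns):
    """Fit a normal and a generalized extreme value (GEV) distribution to returns
    and report each fit's log-likelihood (the GEV family includes the Frechet case)."""
    norm_params = stats.norm.fit(returns)
    gev_params = stats.genextreme.fit(returns)
    ll_norm = np.sum(stats.norm.logpdf(returns, *norm_params))
    ll_gev = np.sum(stats.genextreme.logpdf(returns, *gev_params))
    return {"normal_loglik": ll_norm, "gev_loglik": ll_gev}

# Simulated fat-tailed monthly "returns" (Student's t), standing in for real fund data.
rng = np.random.default_rng(1)
fake_returns = 0.01 + 0.04 * rng.standard_t(df=4, size=120)
print(compare_fits(fake_returns))
```

A higher log-likelihood for the GEV fit is one simple indication that a peaked, fat-tailed distribution describes the data better than the normal does.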

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Future of the Mind


(Image: Zen Buddha Silence by Marilyn Barbone.)

August 20, 2017

This week’s blog post covers another book by the theoretical physicist Michio Kaku–The Future of the Mind (First Anchor Books, 2015).

Most of the wealth we humans have created is a result of technological progress (in the context of some form of capitalism plus the rule of law). Most future wealth will result directly from breakthroughs in physics, artificial intelligence, genetics, and other sciences. This is why AI is fascinating in general (not just for investing). AI–in combination with other technologies–may eventually turn out to be the most transformative technology of all time.

 

A PHYSICIST’S DEFINITION OF CONSCIOUSNESS

Physicists have been quite successful historically because of their ability to gather data, to measure ever more precisely, and to construct testable, falsifiable mathematical models to predict the future based on the past. Kaku explains:

When a physicist first tries to understand something, first he collects data and then he proposes a “model,” a simplified version of the object he is studying that captures its essential features. In physics, the model is described by a series of parameters (e.g., temperature, energy, time). Then the physicist uses the model to predict its future evolution by simulating its motions. In fact, some of the world’s largest supercomputers are used to simulate the evolution of models, which can describe protons, nuclear explosions, weather patterns, the big bang, and the center of black holes. Then you create a better model, using more sophisticated parameters, and simulate it in time as well. (page 42)

Kaku then writes that he’s taken bits and pieces from fields such as neurology and biology in order to come up with a definition of consciousness:

Consciousness is a process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., find mates, food, shelter).

Kaku emphasizes that humans use the past to predict the future, whereas most animals are focused only on the present or the immediate future.

Kaku writes that one can rate different levels of consciousness based on the definition. The lowest level of consciousness is Level 0, where an organism has limited mobility and creates a model using feedback loops in only a few parameters (e.g., temperature). Kaku gives the thermostat as an example. If the temperature gets too hot or too cold, the thermostat registers that fact and then adjusts the temperature accordingly using an air conditioner or heater. Kaku says each feedback loop is “one unit of consciousness,” so the thermostat – with only one feedback loop – would have consciousness of Level 0:1.
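In Kaku’s counting scheme, the thermostat’s single feedback loop can be sketched in a few lines of code. This is only an illustration of the idea of one feedback loop, not anything from the book:

```python
def thermostat_step(current_temp, setpoint, tolerance=1.0):
    """One pass through a thermostat's single feedback loop:
    measure the temperature, compare it to the setpoint, act."""
    if current_temp > setpoint + tolerance:
        return "cool"   # too hot: run the air conditioner
    if current_temp < setpoint - tolerance:
        return "heat"   # too cold: run the heater
    return "idle"       # within tolerance: do nothing

print(thermostat_step(74.5, setpoint=70))  # 'cool'
```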

Organisms that are mobile and have a central nervous system have Level I consciousness. There’s a new set of parameters–relative to Level 0–based on changing locations. Reptiles are an example of Level I consciousness. The reptilian brain may have a hundred feedback loops based on its senses. The totality of these feedback loops gives the reptile a “mental picture” of where it is in relation to various objects (including prey), notes Kaku.

Social animals, such as mammals, exemplify Level II consciousness. The number of feedback loops jumps exponentially, says Kaku. Many animals have complex social structures. Kaku explains that the limbic system includes the hippocampus (for memories), the amygdala (for emotions), and the thalamus (for sensory information).

You could rank the specific level of Level II consciousness of an animal by listing the total number of distinct emotions and social behaviors. So, writes Kaku, if there are ten wolves in the wolf pack, and each wolf interacts with all the others with fifteen distinct emotions and gestures, then a first approximation would be that wolves have Level II:150 consciousness. (Of course, there are caveats, since evolution is never clean and precise, says Kaku.)

 

LEVEL III CONSCIOUSNESS: SIMULATING THE FUTURE

Kaku observes that there is a continuum of consciousness from the most basic organisms up to humans. Kaku quotes Charles Darwin:

The difference between man and the higher animals, great as it is, is certainly one of degree and not of kind.

Kaku defines human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future. This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

Kaku explains that we as humans have so many feedback loops that we need a “CEO”–an expanded prefrontal cortex that can analyze all the data logically and make decisions. More precisely, Kaku writes that neurologist Michael Gazzaniga has identified area 10, in the lateral prefrontal cortex, which is twice as big in humans as in apes. Area 10 is where memory, planning, abstract thinking, learning rules, and picking out what is relevant take place. Kaku says he will refer to this region as the dorsolateral prefrontal cortex, roughly speaking.

Most animals, by contrast, do not think and plan, but rely on instinct. For instance, notes Kaku, animals do not plan to hibernate, but react instinctually when the temperature drops. Predators plan, but only for the immediate future. Primates plan a few hours ahead.

Humans, too, rely on instinct and emotion. But humans also analyze and evaluate information, and run mental simulations of the future–even hundreds or thousands of years into the future. This, writes Kaku, is how we as humans try to make the best decision in pursuit of a goal. Of course, the ability to simulate various future scenarios gives humans a great evolutionary advantage for things like evading predators and finding food and mates.

As humans, we have so many feedback loops, says Kaku, that it would be a chaotic sensory overload if we didn’t have the “CEO” in the dorsolateral prefrontal cortex. We think in terms of chains of causality in order to predict future scenarios. Kaku explains that the essence of humor is simulating the future but then having an unexpected punch line.

Children play games largely in order to simulate specific adult situations. When adults play various games like chess, bridge, or poker, they mentally simulate various scenarios.

Kaku explains the mystery of self-awareness:

Self-awareness is creating a model of the world and simulating the future in which you appear.

As humans, we constantly imagine ourselves in various future scenarios. In a sense, we are continuously running “thought experiments” about our lives in the future.

Kaku writes that the medial prefrontal cortex appears to be responsible for creating a coherent sense of self out of the various sensations and thoughts bombarding our brains. Furthermore, the left brain fits everything together in a coherent story even when the data don’t make sense. Dr. Michael Gazzaniga was able to show this by running experiments on split-brain patients.

Kaku speculates that humans can reach better conclusions if the brain receives a great deal of competing data. With enough data and with practice and experience, the brain can often reach correct conclusions.

At the beginning of the next section–Mind Over Matter–Kaku quotes Harvard psychologist Steven Pinker:

The brain, like it or not, is a machine. Scientists have come to that conclusion, not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain.

 

DARPA

DARPA is the Pentagon’s Defense Advanced Research Projects Agency. Kaku writes that DARPA has been central to some of the most important technological breakthroughs of the twentieth century.

President Dwight Eisenhower set up DARPA originally as a way to compete with the Russians after they launched Sputnik into orbit in 1957. Over the years, some of DARPA’s projects became so large that they spun them off as separate entities, including NASA.

DARPA’s “only charter is radical innovation.” DARPA scientists have always pushed the limits of what is physically possible. One of DARPA’s early projects was Arpanet, a telecommunications network to connect scientists during and after World War III. After the breakup of the Soviet bloc, the National Science Foundation decided to declassify Arpanet and give away the codes and blueprints for free. This would eventually become the internet.

DARPA helped create Project 57, which was a top-secret project for guiding ballistic missiles to specific targets. This technology later became the foundation for the Global Positioning System (GPS).

DARPA has also been a key player in other technologies, including cell phones, night-vision goggles, telecommunications advances, and weather satellites, says Kaku.

Kaku writes that, with a budget over $3 billion, DARPA has recently focused on the brain-machine interface. Kaku quotes former DARPA official Michael Goldblatt:

Imagine if soldiers could communicate by thought alone… Imagine the threat of biological attack being inconsequential. And contemplate, for a moment, a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-through. As impossible as these visions sound or as difficult as you might think the task would be, these visions are the everyday work of the Defense Sciences Office [a branch of DARPA]. (page 74)

Goldblatt, notes Kaku, thinks the long-term legacy of DARPA will be human enhancement. Goldblatt’s daughter has cerebral palsy and has been confined to a wheelchair all her life. Goldblatt is highly motivated not only to help millions of people in the future and create a legacy, but also to help his own daughter.

 

TELEKINESIS

Cathy Hutchinson became a quadriplegic after suffering a massive stroke. But in May 2012, scientists from Brown University placed a tiny chip on top of her brain–called Braingate–which is connected by wires to a computer. (The chip has ninety-six electrodes for picking up brain impulses.) Her brain could then send signals through the computer to control a mechanical robotic arm. She reported her great excitement and said she knows she will get robotic legs eventually, too. This might happen soon, says Kaku, since the field of cyber prosthetics is advancing fast.

Scientists at Northwestern placed a chip with 100 electrodes on the brain of a monkey. The signals were carefully recorded while the monkey performed various tasks involving the arms. Each task would involve a specific firing of neurons, which the scientists eventually were able to decipher.

Next, the scientists took the signal sequences from the chip and, instead of sending them to a mechanical arm, sent the signals to the monkey’s own arm. Eventually the monkey learned to control its own arm via the computer chip. (The reason 100 electrodes are enough is that they were placed on the output neurons, so the monkey’s brain had already done the complex processing involving millions of neurons by the time the signals reached the electrodes.)
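
To make the decoding step a bit more concrete, here is a minimal Python sketch of the kind of linear decoder commonly used in brain-machine-interface research. Everything in it (the simulated firing rates, the dimensions, the least-squares fit) is an assumption for illustration, not the Northwestern team’s actual method.

```python
# Illustrative sketch only: a simple linear decoder of the sort commonly
# used in brain-machine-interface research. Firing rates from 100 simulated
# "electrodes" are mapped to 2-D arm velocities with least squares.
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 100   # electrodes recording from output neurons
n_samples = 2000     # time bins of recorded activity

# Hidden "true" mapping from neural activity to arm velocity (unknown to the decoder)
true_weights = rng.normal(size=(n_electrodes, 2))

# Simulated firing rates and the noisy 2-D arm velocities they encode
firing_rates = rng.poisson(lam=5.0, size=(n_samples, n_electrodes)).astype(float)
arm_velocity = firing_rates @ true_weights + rng.normal(scale=5.0, size=(n_samples, 2))

# Fit the decoder on the first half of the session, then test it on the second half
split = n_samples // 2
W, *_ = np.linalg.lstsq(firing_rates[:split], arm_velocity[:split], rcond=None)

predicted = firing_rates[split:] @ W
corr = np.corrcoef(predicted[:, 0], arm_velocity[split:, 0])[0, 1]
print(f"Correlation between decoded and actual x-velocity: {corr:.2f}")
```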

This device is one of many that Northwestern scientists are testing. These devices, which continue to be developed, can help people with spinal cord injuries.

Kaku observes that much of the funding for these developments comes from a DARPA project called Revolutionizing Prosthetics, a $150 million effort since 2006. Retired U.S. Army colonel Geoffrey Ling, a neurologist with several tours of duty in Iraq and Afghanistan, is a central figure behind Revolutionizing Prosthetics. Dr. Ling was appalled by the suffering caused by roadside bombs. In the past, many of these brave soldiers would have died. Today, many more can be saved. However, more than 1,300 of them have returned from the Middle East missing limbs.

Dr. Ling, with funding from the Pentagon, instructed his staff to figure out how to replace lost limbs within five years. Ling:

They thought we were crazy. But it’s in insanity that things happen.

Kaku continues:

Spurred into action by Dr. Ling’s boundless enthusiasm, his crew has created miracles in the laboratory. For example, Revolutionary Prosthetics funded scientists at the Johns Hopkins Applied Physics Laboratory, who have created the most advanced mechanical arm on Earth, which can duplicate nearly all the delicate motions of the fingers, hand, and arm in three dimensions. It is the same size and has the same strength and agility as a real arm. Although it is made of steel, if you covered it up with flesh-colored plastic, it would be nearly indistinguishable from a real arm.

This arm was attached to Jan Sherman, a quadriplegic who had suffered from a genetic disease that damaged the connection between her brain and her body, leaving her completely paralyzed from the neck down. At the University of Pittsburgh, electrodes were placed directly on top of her brain, which were then connected to a computer and then to a mechanical arm. Five months after surgery to attach the arm, she appeared on 60 Minutes. Before a national audience, she cheerfully used her new arm to wave, greet the host, and shake his hand. She even gave him a fist bump to show how sophisticated the arm was.

Dr. Ling says, ‘In my dream, we will be able to take this into all sorts of patients, patients with strokes, cerebral palsy, and the elderly.‘ (page 84)

Dr. Miguel Nicolelis of Duke University is pursuing novel applications of the brain-machine interface (BMI). Dr. Nicolelis has demonstrated that BMI can be done across continents. He put a chip on a monkey’s brain. The chip was connected to the internet. When the monkey was walking on a treadmill in North Carolina, the signals were sent to a robot in Kyoto, Japan, which performed the same walking motions.

Dr. Nicolelis is also working on the problem that today’s prosthetic hands lack a sense of touch. Dr. Nicolelis is trying to create a direct brain-to-brain interface to overcome this challenge. Messages would go from the brain to the mechanical arm, and then directly back to the brain, bypassing the stem altogether. This is a brain-machine-brain interface (BMBI).

Dr. Nicolelis connected the motor cortex of rhesus monkeys to mechanical arms. The mechanical arms have sensors, and send signals back to the brain via electrodes connected to the somatosensory cortex (which registers the sensation of touch). Dr. Nicolelis invented a new code to represent different surfaces. After a month of practice, the brain learns the new code and can thus distinguish among different surfaces.

Dr. Nicolelis told Kaku that something like the holodeck from Star Trek–where you wander in a virtual world, but feel sensations when you bump into virtual objects–will be possible in the future. Kaku writes:

The holodeck of the future might use a combination of two technologies. First, people in the holodeck would wear internet contact lenses, so that they would see an entirely new virtual world everywhere they looked. The scenery in your contact lens would change instantly with the push of a button. And if you touched any object in this world, signals sent into the brain would simulate the sensation of touch, using BMBI technology. In this way, objects in the virtual world you see inside your contact lens would feel solid. (page 87)

Scientists have begun to explore an “Internet of the mind,” or brain-net. In 2013, scientists went beyond animal studies and demonstrated the first human brain-to-brain communication.

This milestone was achieved at the University of Washington, with one scientist sending a brain signal (move your right arm) to another scientist. The first scientist wore an EEG helmet and played a video game. He fired a cannon by imagining moving his right arm, but was careful not to move it physically.

The signal from the EEG helmet was sent over the Internet to another scientist, who was wearing a transcranial magnetic helmet carefully placed over the part of his brain that controlled his right arm. When the signal reached the second scientist, the helmet would send a magnetic pulse into his brain, which made his right arm move involuntarily, all by itself. Thus, by remote control, one human brain could control the movement of another.

This breakthrough opens up a number of possibilities, such as exchanging nonverbal messages via the Internet. You might one day be able to send the experience of dancing the tango, bungee jumping, or skydiving to the people on your e-mail list. Not just physical activity, but emotions and feelings as well might be sent via brain-to-brain communication.

Nicolelis envisions a day when people all over the world could participate in social networks not via keyboards, but directly through their minds. Instead of just sending e-mails, people on the brain-net would be able to telepathically exchange thoughts, emotions, and ideas in real time. Today a phone call conveys only the information of the conversation and the tone of voice, nothing more. Video conferencing is a bit better, since you can read the body language of the person on the other end. But a brain-net would be the ultimate in communications, making it possible to share the totality of mental information in a conversation, including emotions, nuances, and reservations. Minds would be able to share their most intimate thoughts and feelings. (pages 87-88)

Kaku gives more details of what would be needed to create a brain-net:

Creating a brain-net that can transmit such information would have to be done in stages. The first step would be inserting nanoprobes into important parts of the brain, such as the left temporal lobe, which governs speech, and the occipital lobe, which governs vision. Then computers would analyze these signals and decode them. This information in turn could be sent over the Internet by fiber-optic cables.

More difficult would be to insert these signals into another person’s brain, where they could be processed by the receiver. So far, progress in this area has focused only on the hippocampus, but in the future it should be possible to insert messages directly into other parts of the brain corresponding to our sense of hearing, light, touch, etc. So there is plenty of work to be done as scientists try to map the cortices of the brain involved in these senses. Once these cortices have been mapped… it should be possible to insert words, thoughts, memories, and experiences into another brain. (page 89)

Dr. Nicolelis’ next goal is the Walk Again Project. He and his team are creating a complete exoskeleton that can be controlled by the mind. Nicolelis calls it a “wearable robot.” The aim is to allow the paralyzed to walk just by thinking. There are several challenges to overcome:

First, a new generation of microchips must be created that can be placed in the brain safely and reliably for years at a time. Second, wireless sensors must be created so the exoskeleton can roam freely. The signals from the brain would be received wirelessly by a computer the size of a cell phone that would probably be attached to your belt. Third, new advances must be made in deciphering and interpreting signals from the brain via computers. For the monkeys, a few hundred neurons were necessary to control the mechanical arms. For a human, you need, at minimum, several thousand neurons to control an arm or leg. And fourth, a power supply must be found that is portable and powerful enough to energize the entire exoskeleton. (page 92)

 

MEMORIES AND THOUGHTS

One interesting possibility is that long-term memory evolved in humans because it was useful for simulating and predicting future scenarios.

Indeed, brain scans done by scientists at Washington University in St. Louis indicate that areas used to recall memories are the same as those involved in simulating the future. In particular, the link between the dorsolateral prefrontal cortex and the hippocampus lights up when a person is engaged in planning for the future and remembering the past. In some sense, the brain is trying to ‘recall the future,’ drawing upon memories of the past in order to determine how something will evolve into the future. This may also explain the curious fact that people who suffer from amnesia… are often unable to visualize what they will be doing in the future or even the very next day. (page 113)

Some claim that Alzheimer’s disease may be the disease of the century. As of Kaku’s writing, there were 5.3 million Americans with Alzheimer’s, and that number is expected to quadruple by 2050. Five percent of people aged sixty-five to seventy-four have it, but more than 50 percent of those over eighty-five have it, even if they have no obvious risk factors.

One possible way to try to combat Alzheimer’s is to create antibodies or a vaccine that might specifically target misshapen protein molecules associated with the disease. Another approach might be to create an artificial hippocampus. Yet another approach is to see if specific genes can be found that improve memory. Experiments on mice and fruit flies have been underway.

If the genetic fix works, it could be administered by a simple shot in the arm. If it doesn’t work, another possible approach is to insert the proper proteins into the body. Instead of a shot, it would be a pill. But scientists are still trying to understand the process of memory formation.

Eventually, writes Kaku, it will be possible to record the totality of stimulation entering into a brain. In this scenario, the Internet may become a giant library not only for the details of human lives, but also for the actual consciousness of various individuals. If you want to see how your favorite hero or historical figure felt as they confronted the major crises of their lives, you’ll be able to do so. Or you could share the memories and thoughts of a Nobel Prize-winning scientist, perhaps gleaning clues about how great discoveries are made.

 

ENHANCING OUR INTELLIGENCE

What made Einstein Einstein? It’s very difficult to say, of course. Partly, it may be that he was the right person at the right time. Also, it wasn’t just raw intelligence, but perhaps more a powerful imagination and an ability to stick with problems for a very long time. Kaku:

The point here is that genius is perhaps a combination of being born with certain mental abilities and also the determination and drive to achieve great things. The essence of Einstein’s genius was probably his extraordinary ability to simulate the future through thought experiments, creating new physical principles via pictures. As Einstein himself once said, ‘The true sign of intelligence is not knowledge, but imagination.’ And to Einstein, imagination meant shattering the boundaries of the known and entering the domain of the unknown. (page 133)

The brain remains “plastic” even into adult life. People can always learn new skills. Kaku notes that the Canadian psychologist Dr. Donald Hebb made an important discovery about the brain:

the more we exercise certain skills, the more certain pathways in our brains become reinforced, so the task becomes easier. Unlike a digital computer, which is just as dumb today as it was yesterday, the brain is a learning machine with the ability to rewire its neural pathways every time it learns something. This is a fundamental difference between the digital computer and the brain. (page 134)
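
To make Hebb’s idea concrete, here is a toy Python sketch of a Hebbian weight update: a pathway that gets exercised repeatedly ends up with much stronger connections. The network size, noise level, and learning rate are illustrative assumptions, not a model of any real brain circuit.

```python
# Toy illustration of Hebb's rule ("cells that fire together wire together"):
# each time two units are active at the same time, the connection between
# them is strengthened.
import numpy as np

rng = np.random.default_rng(1)

n_units = 5
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

# Repeatedly "practice" a skill in which units 0, 1, and 2 fire together.
practiced_pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

for _ in range(50):
    noise = (rng.random(n_units) < 0.1).astype(float)        # occasional random firing
    activity = np.clip(practiced_pattern + noise, 0.0, 1.0)
    weights += learning_rate * np.outer(activity, activity)  # Hebbian update

np.fill_diagonal(weights, 0.0)
print(np.round(weights, 1))  # the practiced pathway among units 0, 1, 2 is strongly reinforced
```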

Scientists also believe that the ability to delay gratification and the ability to focus attention may be more important than IQ for success in life.

Furthermore, traditional IQ tests only measure “convergent” intelligence related to the left brain and not “divergent” intelligence related to the right brain. Kaku quotes Dr. Ulrich Kraft:

‘The left hemisphere is responsible for convergent thinking and the right hemisphere for divergent thinking. The left side examines details and processes them logically and analytically but lacks a sense of overriding, abstract connections. The right side is more imaginative and intuitive and tends to work holistically, integrating pieces of an informational puzzle into a whole.’ (page 138)

Kaku suggests that a better test of intelligence might measure a person’s ability to imagine different scenarios related to a specific future challenge.

Another avenue of intelligence research is genes. We are 98.5 percent identical genetically to chimpanzees. But we live twice as long and our mental abilities have exploded in the past six million years. Scientists have even isolated just a handful of genes that may be responsible for our intelligence. This is intriguing, to say the least.

In addition to having a larger cerebral cortex, our brains have many folds in them, vastly increasing their surface area. (The brain of Carl Friedrich Gauss was found to be especially folded and wrinkled.)

Scientists have also focused on the ASPM gene. It has mutated fifteen times in the last five or six million years. Kaku:

Because these mutations coincide with periods of rapid growth in intellect, it is tantalizing to speculate that ASPM is among the handful of genes responsible for our increased intelligence. If this is true, then perhaps we can determine whether these genes are still active today, and whether they will continue to shape human evolution in the future. (page 154)

Scientists have also learned that nature takes numerous shortcuts in creating the brain. Many neurons are connected randomly, so a detailed blueprint isn’t needed. Neurons organize themselves in a baby’s brain in reaction to various specific experiences. Also, nature uses modules that repeat over and over again.

It is possible that we will be able to boost our intelligence in the future, which will increase the wealth of society (probably significantly). Kaku:

It may be possible in the coming decades to use a combination of gene therapy, drugs, and magnetic devices to increase our intelligence. (page 162)

…raising our intelligence may help speed up technological innovation. Increased intelligence would mean a greater ability to simulate the future, which would be invaluable in making scientific discoveries. Often, science stagnates in certain areas because of a lack of fresh new ideas to stimulate new avenues of research. Having an ability to simulate different possible futures would vastly increase the rate of scientific breakthroughs.

These scientific discoveries, in turn, could generate new industries, which would enrich all of society, creating new markets, new jobs, and new opportunities. History is full of technological breakthroughs creating entirely new industries that benefited not just the few, but all of society (think of the transistor and the laser, which today form the foundation of the world economy). (page 164)

 

DREAMS

Kaku explains that the brain, as a neural network, may need to dream in order to function well:

The brain, as we have seen, is not a digital computer, but rather a neural network of some sort that constantly rewires itself after learning new tasks. Scientists who work with neural networks noticed something interesting, though. Often these systems would become saturated after learning too much, and instead of processing more information they would enter a “dream” state, whereby random memories would sometimes drift and join together as the neural networks tried to digest all the new material. Dreams, then, might reflect “house cleaning,” in which the brain tries to organize its memories in a more coherent way. (If this is true, then possibly all neural networks, including all organisms that can learn, might enter a dream state in order to sort out their memories. So dreams probably serve a purpose. Some scientists have speculated that this might imply that robots that learn from experience might also eventually dream as well.)

Neurological studies seem to back up this conclusion. Studies have shown that retaining memories can be improved by getting sufficient sleep between the time of activity and a test. Neuroimaging shows that the areas of the brain that are activated during sleep are the same as those involved in learning a new task. Dreaming is perhaps useful in consolidating this new information. (page 172)

In 1977, Dr. Allan Hobson and Dr. Robert McCarley made history–seriously challenging Freud’s theory of dreams–by proposing the “activation synthesis theory” of dreams:

The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert. As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state.

As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves. These waves travel up the brain stem into the visual cortex, stimulating it to create dreams. Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams. (pages 174-175)

 

ALTERED STATE OF CONSCIOUSNESS

There seem to be certain parts of the brain that are associated with religious experiences and also with spirituality. Dr. Mario Beauregard of the University of Montreal commented:

If you are an atheist and you live a certain kind of experience, you will relate it to the magnificence of the universe. If you are a Christian, you will associate it with God. Who knows. Perhaps they are the same thing.

Kaku explains how human consciousness involves delicate checks and balances similar to the competing points of view that a good CEO considers:

We have proposed that a key function of human consciousness is to simulate the future, but this is not a trivial task. The brain accomplishes it by having these feedback loops check and balance one another. For example, a skillful CEO at a board meeting tries to draw out the disagreement among staff members and to sharpen competing points of view in order to sift through the various arguments and then make a final decision. In the same way, various regions of the brain make diverging assessments of the future, which are given to the dorsolateral prefrontal cortex, the CEO of the brain. These competing assessments are then evaluated and weighted until a balanced final decision is made. (page 205)

The most common mental disorder is depression, afflicting twenty million people in the United States. One way scientists are trying to cure depression is deep brain stimulation (DBS)–inserting small probes into the brain and applying an electrical shock. Kaku:

In the past decade, DBS has been used on forty thousand patients for motor-related diseases, such as Parkinson’s and epilepsy, which cause uncontrolled movements of the body. Between 60 and 100 percent of the patients report significant improvement in controlling their shaking hands. More than 250 hospitals in the United States now perform DBS treatment. (page 208)

Dr. Helen Mayberg and colleagues at Washington University School of Medicine have discovered an important clue to depression:

Using brain scans, they identified an area of the brain, called Brodmann area 25 (also called the subcallosal cingulate region), in the cerebral cortex that is consistently hyperactive in depressed individuals for whom all other forms of treatment have been unsuccessful.

…Dr. Mayberg had the idea of applying DBS directly to Brodmann area 25… her team took twelve patients who were clinically depressed and had shown no improvement after exhaustive use of drugs, psychotherapy, and electroshock therapy.

They found that eight of these chronically depressed individuals showed immediate progress. Their success was so astonishing, in fact, that other groups raced to duplicate these results and apply DBS to other mental disorders…

Dr. Mayberg says, ‘Depression 1.0 was psychotherapy… Depression 2.0 was the idea that it’s a chemical imbalance. This is Depression 3.0. What has captured everyone’s imagination is that, by dissecting a complex behavior disorder into its component systems, you have a new way of thinking about it.’

Although the success of DBS in treating depressed individuals is remarkable, much more research needs to be done…

 

THE ARTIFICIAL MIND AND SILICON CONSCIOUSNESS

Kaku introduces the potential challenge of handling artificial intelligence as it evolves:

Given the fact that computer power has been doubling every two years for the past fifty years under Moore’s law, some say it is only a matter of time before machines eventually acquire self-awareness that rivals human intelligence. No one knows when this will happen, but humanity should be prepared for the moment when machine consciousness leaves the laboratory and enters the real world. How we deal with robot consciousness could decide the future of the human race. (page 216)

Kaku observes that AI has gone through three cycles of boom and bust. In the 1950s, machines were built that could play checkers and solve algebra problems. Robot arms could recognize and pick up blocks. In 1965, Dr. Herbert Simon, one of the founders of AI, made a prediction:

Machines will be capable, within 20 years, of doing any work a man can do.

In 1967, another founder of AI, Dr. Marvin Minsky, remarked:

…within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved.

But in the 1970s, not much progress in AI had been made. In 1974, both the U.S. and British governments significantly cut back their funding for AI. This was the beginning of the first AI winter.

But as computer power steadily increased in the 1980s, a new gold rush occurred in AI, fueled mainly by Pentagon planners hoping to put robot soldiers on the battlefield. Funding for AI hit a billion dollars by 1985, with hundreds of millions of dollars spent on projects like the Smart Truck, which was supposed to be an intelligent, autonomous truck that could enter enemy lines, do reconnaissance by itself, perform missions (such as rescuing prisoners), and then return to friendly territory. Unfortunately, the only thing that the Smart Truck did was get lost. The visible failures of these costly projects created yet another AI winter in the 1990s. (page 217)

Kaku continues:

But now, with the relentless march of computer power, a new AI renaissance has begun, and slow but substantial progress has been made. In 1997, IBM’s Deep Blue computer beat world chess champion Garry Kasparov. In 2005, a robot car from Stanford won the DARPA Grand Challenge for a driverless car. Milestones continue to be reached.

This question remains: Is the third try a charm?

Scientists now realize that they vastly underestimated the problem, because most human thought is actually subconscious. The conscious part of our thoughts, in fact, represents only the tiniest portion of our computations.

Dr. Steve Pinker says, ‘I would pay a lot for a robot that would put away the dishes or run simple errands, but I can’t, because all of the little problems that you’d need to solve to build a robot to do that, like recognizing objects, reasoning about the world, and controlling hands and feet, are unsolved engineering problems.’ (pages 217-218)

Kaku asked Dr. Minsky when he thought machines would equal and then surpass human intelligence. Minsky replied that he’s confident it will happen, but that he doesn’t make predictions about specific dates any more.

If you remove a single transistor from a Pentium chip, the computer will immediately crash, writes Kaku. But the human brain can perform quite well even with half of it missing:

This is because the brain is not a digital computer at all, but a highly sophisticated neural network of some sort. Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task. The brain has no programming, no operating system, no Windows, no central processor. Instead, its neural networks are massively parallel, with one hundred billion neurons firing at the same time in order to accomplish a single goal: to learn.

In light of this, AI researchers are beginning to reexamine the ‘top-down approach’ they have followed for the past fifty years (e.g., putting all the rules of common sense on a CD). Now AI researchers are giving the ‘bottom-up approach’ a second look. This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones. Neural networks must learn the hard way, by bumping into things and making mistakes. (page 220)

Dr. Rodney Brooks, former director of the MIT Artificial Intelligence Laboratory, introduced a totally new approach to AI. Why not build small, insectlike robots that learn how to walk by trial and error, just as nature learns? Brooks told Kaku that he used to marvel at the mosquito, with a microscopic brain of a few neurons, which can, nevertheless, maneuver in space better than any robot airplane. Brooks built a series of tiny robots called ‘insectoids’ or ‘bugbots,’ which learn by bumping into things. Kaku comments:

At first, it may seem that this requires a lot of programming. The irony, however, is that neural networks require no programming at all. The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision. So programming is nothing; changing the network is everything. (page 221)

The Mars Curiosity rover is one result of this bottom-up approach.

Scientists have realized that emotions are central to human cognition. Humans usually need some emotional input, in addition to logic and reason, in order to make good decisions. Robots are now being programmed to recognize various human emotions and also to exhibit emotions themselves. Robots also need a sense of danger and some feeling of pain in order to avoid injuring themselves. Eventually, as robots become ever more conscious, there will be many ethical questions to answer.

Biologists used to debate the question, “What is life?” But, writes Kaku, the physicist and Nobel Laureate Francis Crick observed that the question is no longer well-defined now that we are advancing in our understanding of DNA. There are many layers and complexities to the question, “What is life?” Similarly, there are likely to be many layers and complexities to the question of what constitutes “emotion” or “consciousness.”

Moreover, as Rodney Brooks argues, we humans are machines. Eventually the robot machines we are building will be just as alive as we are. Kaku summarizes a conversation he had with Brooks:

This evolution in human perspective started with Nicolaus Copernicus when he realized that the Earth is not the center of the universe, but rather goes around the sun. It continued with Darwin, who showed that we were similar to the animals in our evolution. And it will continue into the future… when we realize that we are machines, except that we are made of wetware and not hardware. (page 248)

Kaku then quotes Brooks directly:

We don’t like to give up our specialness, so you know, having the idea that robots could really have emotions, or that robots could be living creatures–I think is going to be hard for us to accept. But we’re going to come to accept it over the next fifty years.

Brooks also thinks we will successfully create robots that are safe for humans:

The robots are coming, but we don’t have to worry too much about that. It’s going to be a lot of fun.

Furthermore, Brooks argues that we are likely to merge with robots. After all, we’ve already done this to an extent. Over twenty thousand people have cochlear implants, giving them the ability to hear.

Similarly, at the University of Southern California and elsewhere, it is possible to take a patient who is blind and implant an artificial retina. One method places a mini video camera in eyeglasses, which converts an image into digital signals. These are sent wirelessly to a chip placed in the person’s retina. The chip activates the retina’s nerves, which then send messages down the optic nerve to the occipital lobe of the brain. In this way, a person who is totally blind can see a rough image of familiar objects. Another design has a light-sensitive chip placed on the retina itself, which then sends signals directly to the optic nerve. This design does not need an external camera. (page 249)

This means, says Kaku, that eventually we’ll be able to enhance our ordinary senses and abilities. We’ll merge with our robot creations.

 

REVERSE ENGINEERING THE BRAIN

Kaku highlights three approaches to the brain:

Because the brain is so complex, there are at least three distinct ways in which it can be taken apart, neuron by neuron. The first is to simulate the brain electronically with supercomputers, which is the approach being taken by the Europeans. The second is to map out the neural pathways of living brains, as in BRAIN [Brain Research Through Advancing Innovative Neurotechnologies Initiative]. (This task, in turn, can be further subdivided, depending on how these neurons are analyzed – either anatomically, neuron by neuron, or by function and activity.) And third, one can decipher the genes that control the development of the brain, which is an approach pioneered by billionaire Paul Allen of Microsoft. (page 253)

Dr. Henry Markram is a central figure in the Human Brain Project. Kaku quotes Dr. Markram:

To build this–the supercomputers, the software, the research–we need around one billion dollars. This is not expensive when one considers that the global burden of brain disease will exceed twenty percent of the world gross domestic product very soon.

Dr. Markram also said:

It’s essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.

How does the human genome go from twenty-three thousand genes to one hundred billion neurons?

The answer, Dr. Markram believes, is that nature uses shortcuts. The key to his approach is that certain modules of neurons are repeated over and over again once Mother Nature finds a good template. If you look at microscopic slices of the brain, at first you see nothing but a random tangle of neurons. But upon closer examination, patterns of modules that are repeated over and over appear.

(Modules, in fact, are one reason why it is possible to assemble large skyscrapers so rapidly. Once a single module is designed, it is possible to repeat it endlessly on the assembly line. Then you can rapidly stack them on top of one another to create the skyscraper. Once the paperwork is all signed, an apartment building can be assembled using modules in a few months.)

The key to Dr. Markram’s Blue Brain project is the “neocortical column,” a module that is repeated over and over in the brain. In humans, each column is about two millimeters tall, with a diameter of half a millimeter, and contains sixty thousand neurons. (As a point of comparison, rat neural modules contain about ten thousand neurons each.) It took ten years, from 1995 to 2005, for Dr. Markram to map the neurons in such a column and to figure out how it worked. Once that was deciphered, he then went to IBM to create massive iterations of these columns. (page 257)
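
A quick back-of-the-envelope calculation using the figures quoted above gives a sense of how much repetition of this one module is involved (illustrative only, since not all of the brain’s neurons sit in cortical columns):

```python
# Back-of-the-envelope calculation using the figures quoted above.
neurons_total = 100_000_000_000   # roughly one hundred billion neurons
neurons_per_column = 60_000       # neurons in one human neocortical column

columns_needed = neurons_total / neurons_per_column
print(f"About {columns_needed:,.0f} columns")   # roughly 1.7 million repeated modules
```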

Kaku quotes Dr. Markram again:

…I think, quite honestly, that if the planet understood how the brain functions, we would resolve conflicts everywhere. Because people would understand how trivial and how deterministic and how controlled conflicts and reactions and misunderstandings are.

The slice-and-dice approach:

The anatomical approach is to take apart the cells of an animal brain, neuron by neuron, using the “slice-and-dice” method. In this way, the full complexity of the environment, the body, and memories are already encoded in the model. Instead of approximating a human brain by assembling a huge number of transistors, these scientists want to identify each neuron of the brain. After that, perhaps each neuron can be simulated by a collection of transistors so that you’d have an exact replica of the human brain, complete with memory, personality, and connection to the senses. Once someone’s brain is fully reverse engineered in this way, you should be able to have an informative conversation with that person, complete with memories and a personality. (page 259)

There is a parallel project called the Human Connectome Project.

Most likely, this effort will be folded into the BRAIN project, which will vastly accelerate this work. The goal is to produce a neuronal map of the human brain’s pathways that will elucidate brain disorders such as autism and schizophrenia. (pages 260-261)

Kaku notes that one day automated microscopes will continuously take the photographs, while AI machines continuously analyze them.

The third approach:

Finally, there is a third approach to map the brain. Instead of analyzing the brain by using computer simulations or by identifying all the neural pathways, yet another approach was taken with a generous grant of $100 million from Microsoft billionaire Paul Allen. The goal was to construct a map or atlas of the mouse brain, with the emphasis on identifying the genes responsible for creating the brain.

…A follow-up project, the Allen Human Brain Atlas, was announced… with the hope of creating an anatomically and genetically complete 3-D map of the human brain. In 2011, the Allen Institute announced that it had mapped the biochemistry of two human brains, finding one thousand anatomical sites with one hundred million data points detailing how genes are expressed in the underlying biochemistry. The data confirmed that 82 percent of our genes are expressed in the brain. (pages 261-262)

Kaku says the Human Genome Project was very successful in sequencing all the genes in the human genome. But it’s just the first step in a long journey to understand how these genes work. Similarly, once scientists have reverse engineered the brain, that will likely be only the first step in understanding how the brain works.

Once the brain is reverse-engineered, this will help scientists understand and cure various diseases. Kaku observes that, with human DNA, a single misspelling out of three billion base pairs can cause uncontrolled flailing of your limbs and convulsions, as in Huntington’s disease. Similarly, perhaps just a few disrupted connections in the brain can cause certain illnesses.

Successfully reverse engineering the brain also will help with AI research. For instance, writes Kaku, humans can recognize a familiar face from different angles in 0.1 seconds. But a computer has trouble with this. There’s also the question of how long-term memories are stored.

Finally, if human consciousness can be transferred to a computer, does that mean that immortality is possible?

 

THE FUTURE

Kaku talked with Dr. Ray Kurzweil, who told him it’s important for an inventor to anticipate changes. Kurzweil has made a number of predictions, at least some of which have been roughly accurate. Kurzweil predicts that the “singularity” will occur around the year 2045. Machines will have reached the point when they not only have surpassed humans in intelligence; machines also will have created next-generation robots even smarter than themselves.

Kurzweil holds that this process of self-improvement can be repeated indefinitely, leading to an explosion–thus the term “singularity”–of ever-smarter and ever more capable robots. Moreover, humans will have merged with their robot creations and will, at some point, become immortal.

Robots of ever-increasing intelligence and ability will require more power. Of course, there will be breakthroughs in energy technology, likely including nuclear fusion and perhaps even antimatter and/or black holes. So the cost to produce prodigious amounts of energy will keep coming down. At the same time, because Moore’s law cannot continue forever, super robots eventually will need ever-increasing amounts of energy. At some point, this will probably require traveling–or sending nanobot probes–to numerous other stars or to other areas where the energy of antimatter and/or black holes can be harnessed.

Kaku notes that most people in AI agree that a “singularity” will occur at some point. But it’s extremely difficult to predict the exact timing. It could happen sooner than Kurzweil predicts or it could end up taking much longer.

Kurzweil wants to bring his father back to life. Eventually something like this will be possible. Kaku:

…I once asked Dr. Robert Lanza of the company Advanced Cell Technology how he was able to bring a long-dead creature “back to life,” making history in the process. He told me that the San Diego Zoo asked him to create a clone of a banteng, an oxlike creature that had died out about twenty-five years earlier. The hard part was extracting a usable cell for the purpose of cloning. However, he was successful, and then he FedExed the cell to a farm, where it was implanted into a female cow, which then gave birth to this animal. Although no primate has ever been cloned, let alone a human, Lanza feels it’s a technical problem, and that it’s only a matter of time before someone clones a human. (page 273)

The hard part of cloning a human would be bringing back their memories and personality, says Kaku. One possibility would be creating a large data file containing all known information about a person’s habits and life. Such a file could be remarkably accurate. Even for people dead today, scores of questions could be asked to friends, relatives, and associates. This could be turned into hundreds of numbers, each representing a different trait that could be ranked from 0 to 10, writes Kaku.

When technology has advanced enough, it will become possible–perhaps via the Connectome Project–to recreate a person’s brain, neuron for neuron. If it becomes possible for you to have your connectome completed, then your doctor–or robodoc–would have all your neural connections on a hard drive. Then, says Kaku, at some point, you could be brought back to life, using either a clone or a network of digital transistors (inside an exoskeleton or surrogate of some sort).

Dr. Hans Moravec, former director of the Artificial Intelligence Laboratory at Carnegie Mellon University, has pioneered an intriguing idea: transferring your mind into an immortal robotic body while you’re still alive. Kaku explains what Moravec told him:

First, you lie on a stretcher, next to a robot lacking a brain. Next, a robotic surgeon extracts a few neurons from your brain, and then duplicates these neurons with some transistors located in the robot. Wires connect your brain to the transistors in the robot’s empty head. The neurons are then thrown away and replaced by the transistor circuit. Since your brain remains connected to these transistors via wires, it functions normally and you are fully conscious during this process. Then the super surgeon removes more and more neurons from your brain, each time duplicating these neurons with transistors in the robot. Midway through the operation, half your brain is empty; the other half is connected by wires to a large collection of transistors inside the robot’s head. Eventually all the neurons in your brain have been removed, leaving a robot brain that is an exact duplicate of your original brain, neuron for neuron. (page 280)

When you wake up, you are likely to have a few superhuman powers, perhaps including a form of immortality. This technology is likely far in the future, of course.

Kaku then observes that there is another possible path to immortality that does not involve reverse engineering the brain. Instead, super smart nanobots could periodically repair your cells. Kaku:

…Basically, aging is the buildup of errors, at the genetic and cellular level. As cells get older, errors begin to build up in their DNA and cellular debris also starts to accumulate, which makes the cells sluggish. As cells begin slowly to malfunction, skin begins to sag, bones become frail, hair falls out, and our immune system deteriorates. Eventually, we die.

But cells also have error-correcting mechanisms. Over time, however, even these error-correcting mechanisms begin to fail, and aging accelerates. The goal, therefore, is to strengthen natural cell-repair mechanisms, which can be done via gene therapy and the creation of new enzymes. But there is also another way: using “nanobot” assemblers.

One of the linchpins of this futuristic technology is something called the “nanobot,” or an atomic machine, which patrols the bloodstream, zapping cancer cells, repairing the damage from the aging process, and keeping us forever young and healthy. Nature has already created some nanobots in the form of immune cells that patrol the body in the blood. But these immune cells attack viruses and foreign bodies, not the aging process.

Immortality is within reach if these nanobots can reverse the ravages of the aging process at the molecular and cellular level. In this vision, nanobots are like immune cells, tiny police patrolling your bloodstream. They attack any cancer cells, neutralize viruses, and clean out the debris and mutations. Then the possibility of immortality would be within reach using our own bodies, not some robot or clone. (pages 281-282)

Kaku writes that his personal philosophy is simple: If something is possible based on the laws of physics, then it becomes an engineering and economics problem to build it. A nanobot is an atomic machine with arms and clippers that grabs molecules, cuts them at specific points, and then splices them back together. Such a nanobot would be able to create almost any known molecule. It may also be able to self-reproduce.

The late Richard Smalley, a Nobel Laureate in chemistry, argued that quantum forces would prevent nanobots from being able to function. Eric Drexler, a founder of nanotechnology, pointed out that ribosomes in our own bodies cut and splice DNA molecules at specific points, enabling the creation of new DNA strands. Eventually Drexler admitted that quantum forces do get in the way sometimes, while Smalley acknowledged that if ribosomes can cut and splice molecules, perhaps there are other ways, too.

Ray Kurzweil is convinced that nanobots will shape society itself. Kaku quotes Kurzweil:

…I see it, ultimately, as an awakening of the whole universe. I think the whole universe right now is basically made up of dumb matter and energy and I think it will wake up. But if it becomes transformed into this sublimely intelligent matter and energy, I hope to be a part of that.

 

THE MIND AS PURE ENERGY

Kaku writes that it’s well within the laws of physics for the mind to be in the form of pure energy, able to explore the cosmos. Isaac Asimov said his favorite science-fiction short story was “The Last Question.” In this story, humans have placed their physical bodies in pods, while their minds roam as pure energy. But they cannot keep the universe itself from dying in the Big Freeze. So they create a supercomputer to figure out if the Big Freeze can be avoided. The supercomputer responds that there is not enough data. Eons later, when stars are darkening, the supercomputer finds a solution: It takes all the dead stars and combines them, producing an explosion. The supercomputer says, “Let there be light!”

And there was light. Humanity, with its supercomputer, had become capable of creating a new universe.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Physics of the Future


(Image: Zen Buddha Silence by Marilyn Barbone.)

August 13, 2017

Science and technology are moving forward faster than ever before:

…this is just the beginning. Science is not static. Science is exploding exponentially all around us. (page 12)

Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future. His book, Physics of the Future (Anchor Books, 2012), is a result.

Kaku explains why his predictions may carry more weight than those of other futurists:

  • His book is based on interviews with more than 300 top scientists.
  • Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
  • Prototypes of all the technologies mentioned in the book already exist.
  • As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.

The ancients had little understanding of the forces of nature, so they invented the gods of mythology. Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.

We are on the verge of becoming a planetary, or Type I, civilization. This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.

But there are still some things, like face-to-face meetings, that appear not to have changed much. Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years. People still like to see tourist attractions in person. People still like live performances. Many people still prefer taking courses in person rather than online. (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)

Here are the chapters from Kaku’s book that I cover:

  • Future of the Computer
  • Future of Artificial Intelligence
  • Future of Medicine
  • Nanotechnology
  • Future of Energy
  • Future of Space Travel
  • Future of Humanity

 

FUTURE OF THE COMPUTER

Kaku quotes Helen Keller:

No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.

According to Moore’s law, computer power doubles every eighteen months. Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly. Also, exponential growth is often not noticeable for the first few decades. But eventually things can change dramatically.
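
As a simple illustration of how quickly such doubling compounds, the short sketch below works out the growth implied by a doubling every eighteen months over ten, twenty, and thirty years (the time spans are arbitrary choices for illustration):

```python
# Simple illustration of how a doubling every eighteen months compounds.
for years in (10, 20, 30):
    doublings = years * 12 / 18   # number of doublings in that span
    growth = 2 ** doublings
    print(f"{years} years -> {doublings:.1f} doublings -> about {growth:,.0f}x the computing power")
```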

Even the near future may be quite different, writes Kaku:

…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control. They will, to a degree, recognize the human voice and face and converse in a formal language. They will be able to create entire virtual worlds that we can only dream of today. Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper. Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders. (pages 25-26)

In order to discuss the future of science and technology, Kaku has divided each chapter into three parts: the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).

In the near future, we can surf the internet via special glasses or contact lenses. We can navigate with a handheld device or just by moving our hands. We can connect to our office via the lens. It’s likely that when we encounter a person, we will see their biography on our lens.

Also, we will be able to travel by driverless car. This will allow us to use commute time to access the internet via our lenses or to do other work. Kaku notes that the term car accident may disappear from the language once driverless cars become advanced and ubiquitous enough. Instead of nearly 40,000 people dying in car accidents in the United States each year, there may be zero deaths from car accidents. Moreover, most traffic jams will be avoided when driverless cars can work together to keep traffic flowing freely.

At home, you will have a room with screens on every wall. If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.

You won’t need to carry a computer with you. Computers will be embedded nearly everywhere. You’ll have constant access to computers and the internet via your glasses or contact lenses.

As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person. This includes the moon, Mars, and other currently exotic locations.

Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland. Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill. Kaku found that he could run, hide, sprint, or lie down. Everything he saw was very realistic. In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.

Your doctor – likely a human face appearing on your wall – will have all your genetic information. Also, you’ll be able to pass a tiny probe over your body and diagnose any illness. (MRI machines will be as small as a phone.) As well, tiny chips or sensors will be embedded throughout your environment. Most forms of cancer will be identified and destroyed before a tumor ever forms. Kaku says the word tumor will disappear from the human language.

Furthermore, we’ll probably be able to slow down and even reverse the aging process. We’ll be able to regrow organs based on computerized access to our genes. We’ll likely be able to reengineer our genes.

In the medium term (2030 to 2070):

  • Moore’s law may reach an end. Computing power will still continue to grow exponentially, however, just not as fast as before.
  • When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail. You’ll be able to download informative lectures about anything you see. In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
  • If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers. You’ll be able to see through hills and other obstacles.
  • If you’re a surgeon, you’ll see in great detail everything inside the body. You’ll have access to all medical records, etc.
  • Universal translators will allow any two people to converse.
  • True 3-D images will surround us when we watch a movie. 3-D holograms will become a reality.

In the far future (2070 to 2100):

We will be able to control computers directly with our minds.

John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain. Through trial and error, the paralyzed person learns to move the cursor on a computer screen. Eventually they can read and write e-mails, and play computer games. Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.

Similarly, paralyzed people will be able to control mechanical arms and legs from their brains. Experiments with monkeys have already achieved this.

Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain. MRI machines themselves will go from weighing several tons to being smaller than phones and as thin as a dime.

Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy. In this way, we’ll be able to control objects just by thinking. Astronauts on earth will be able to control superhuman robotic bodies on the moon.

 

FUTURE OF ARTIFICIAL INTELLIGENCE

AI pioneer Herbert Simon, in 1965, said:

Machines will be capable, in twenty years, of doing any work a man can do.

Unfortunately not much progress was made. In 1974, the first AI winter began as the U.S. and British governments cut off funding.

Progress was again made in the 1980s. But because the field was overhyped, another backlash occurred and a second AI winter began. Many people left the field as funding disappeared.

The human brain is a type of neural network. Neural networks follow Hebb’s rule: every time a correct decision is made, those neural pathways are reinforced. Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.

Furthermore, the neural network of a human brain is a massive parallel processor, which makes it different from most computers. Thus, even though digital computers send signals at the speed of light, whereas neuron signals only travel about 200 miles per hour, the human brain is still faster (on many tasks) due to its massive parallel processing.

Finally, neurons are not purely digital: in addition to firing or not firing, they can transmit continuous signals (values between 0 and 1), not just discrete signals (only 0 or 1).
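To make Hebb’s rule concrete, here is a minimal sketch (my own illustration, not from Kaku’s book) of Hebbian learning in Python: connections between inputs and an output that are active together get strengthened, while unused connections are left alone.

```python
import numpy as np

# Minimal sketch of Hebb's rule: "neurons that fire together wire together."
# A linear output neuron repeatedly sees the same input pattern; the weights
# on the co-active inputs are reinforced, while the others are left unchanged.

rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 0.1, size=4)      # small random starting connections
learning_rate = 0.05
pattern = np.array([1.0, 0.0, 1.0, 0.0])     # inputs 0 and 2 fire together

for _ in range(50):
    y = weights @ pattern                    # output activity
    weights += learning_rate * pattern * y   # Hebbian update: delta_w = eta * x * y

print(np.round(weights, 3))  # weights on inputs 0 and 2 have grown enormously; the others are unchanged
```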

Interestingly, robots are superfast at the kinds of calculations humans do mentally. But robots are still not good at visual pattern recognition, movement, and common sense. Robots can see far more detail than humans, but they have trouble making sense of what they see. Also, many things that we as humans know by common sense, robots don’t understand.

There have been massive projects to try to give robots common sense by brute force – by programming in thousands of common sense things. But so far, these projects haven’t worked.

There are two ways to give a robot the ability to learn: top-down and bottom-up. An example of the top-down approach is STAIR (Stanford artificial intelligence robot). Everything is programmed into STAIR from the beginning. For STAIR to understand an image, it must compare the image to all the images already programmed into it.

The LAGR (learning applied to ground robots) uses the bottom-up approach. It learns everything from scratch, by bumping into things. LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
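To make the bottom-up idea concrete, here is a hypothetical sketch (my own illustration, not LAGR’s actual software) of a robot that starts with a blank map and marks a cell as blocked each time it bumps into something, refining its map with each pass:

```python
# Hypothetical sketch in the spirit of LAGR's bottom-up approach (not its real code):
# the robot starts with no knowledge of the world and builds an occupancy map
# purely from collisions, refining the map on every pass.

GRID = 5
true_obstacles = {(1, 2), (2, 2), (3, 1)}          # the real world, unknown to the robot
learned_map = [[0] * GRID for _ in range(GRID)]    # 0 = assumed free, 1 = learned blocked

def try_move(x, y):
    """Attempt to enter cell (x, y); learn from the bump if it is blocked."""
    if (x, y) in true_obstacles:
        learned_map[y][x] = 1        # bump! remember this cell as blocked
        return False
    return True

# Several passes over the grid: each pass can only add knowledge, never lose it.
for _pass in range(3):
    for y in range(GRID):
        for x in range(GRID):
            try_move(x, y)

for row in learned_map:
    print(row)    # the learned map now mirrors the true obstacle layout
```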

Robots will become ever more helpful in medicine:

For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia. Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar. But the da Vinci robotic system can vastly decrease all these. The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery. Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body. There are 800 hospitals in Europe and North and South America that use this system; 48,000 operations were performed in 2006 alone using this robot. Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.

In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today. In fact, in the future, only rarely will the surgeon slice the skin at all. Noninvasive surgery will become the norm.

Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread. Micromachines smaller than the period at the end of this sentence will do much of the mechanical work. (pages 93-94)

But to make robots intelligent, scientists must learn more about how the human brain works.

The human brain has roughly three levels. The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc. At the next level, there is the monkey brain, or the limbic system, located at the center of our brain. Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.

The third level of the human brain is the front and outer part – the cerebral cortex. This level defines humanity and is responsible for the ability to think logically and rationally.

Scientists still have a way to go in understanding in sufficient detail how the human brain works.

By midcentury, scientists will be able to reverse engineer the brain. In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer. Kaku quotes Fred Hapgood from MIT:

Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.

By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku. However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.

For example, says Kaku, the Human Genome Project is like a dictionary with no definitions. We can spell out each gene in the human body. But we still don’t know what each gene does exactly. Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans. But scientists still can’t fully translate this map into the worm’s behavior.

Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.

When will machines become conscious? Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future. If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they do become conscious. On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.

When will robots exceed humans? Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.

What if superintelligent robots can make even smarter copies of themselves? They might thereby gain the ability to evolve exponentially. Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.

The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially. The inventor Ray Kurzweil has become a spokesman for the singularity. But he thinks humans will merge with this digital superintelligence. Kaku quotes Kurzweil:

It’s not going to be an invasion of intelligent machines coming over the horizon. We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.

Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us. The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).

One problem is that the military is the largest funder of AI research. On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).

Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.

One option invented by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is for an army of “bugbots” with minimal programming that would learn from experience. Such an army might turn into a practical way to explore the solar system and beyond. One by-product of Brooks’ idea is the Mars Rover.

Some researchers including Brooks and Marvin Minsky have lamented the fact that AI scientists have often followed too closely the current dominant AI paradigm. AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.

Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics. Thus, they have sought a single, unifying equation underlying all intelligence. But, says Minsky, there is no such thing:

Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness. Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task. He calls this ‘the society of minds’: that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years. (page 123)

Brooks predicts that, by 2100, there will be very intelligent robots. But we will be part robot and part connected with robots.

He sees this progressing in stages. Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions. For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf. These artificial cochlea work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…

Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain. One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons. Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision. These groups, for the first time in history, have been able to restore a degree of sight to the blind… (pages 124-125)

Scientists have also successfully created a robotic hand. One patient, Robin Ekenstam, had his right hand amputated. Scientists have given him a robotic hand with four motors and forty sensors. The doctors connected Ekenstam’s nerves to the chips in the artificial hand. As a result, Ekenstam is able to use the artificial hand as if it were his own hand. He feels sensations in the artificial fingers when he picks stuff up. In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.

Furthermore, the brain is extremely plastic because it is a neural network. So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.

And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities. Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats. Similarly, genetic engineering will become possible. As Brooks commented:

We will no longer find ourselves confined by Darwinian evolution.

Another way people will merge with robots is with surrogates and avatars. For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.

Robot pioneer Hans Moravec has described one way this could happen:

…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot. The operation starts when we lie beside a robot without a brain. A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull. As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot. Not only do we have a robotic body, we have also the benefits of a robot: immortality in superhuman bodies that are perfect in appearance. (pages 130-131)

 

FUTURE OF MEDICINE

Kaku quotes Nobel Laureate James Watson:

No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?

Nobel Laureate David Baltimore:

I don’t really think our bodies are going to have any secrets left within this century. And so, anything that we can manage to think about will probably have a reality.

Kaku mentions biologist Robert Lanza:

Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit. In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before. Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah. There, the fertilized cell was implanted into a female cow. Ten months later he got the news that his latest creation had just been born. On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out. Another day, he could be working on cloning human embryo cells. He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells. (page 138)

Austrian physicist and philosopher Erwin Schrodinger, one of the founders of quantum theory, wrote an influential book, What is Life? He speculated that all life was based on a code of some sort, and that this was encoded on a molecule.

Physicist Francis Crick, inspired by Schrodinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule. In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix. When unraveled, a single strand of DNA stretches about 6 feet long. On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code. By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life. (page 140)

Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form. David Baltimore:

Biology is today an information science.

Kaku writes:

The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule. Atom for atom, we know how to build the molecules of life from scratch. And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.

Welcome to bioinformatics:

…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms. For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA. In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes. (page 143)

You’ll talk to your doctor – likely a software program – on the wall screen. Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form. If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.

If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed. (There are over 91,000 people in the United States waiting for an organ transplant.)

…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells. The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells. (page 144)

Eventually cloning will be possible for humans.

The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep. By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original. (page 150)

Successes in animal studies will be translated to human studies. First, diseases caused by a single mutated gene will be cured. Then, diseases caused by multiple mutated genes will be cured.

At some point, there will be “designer children.” Kaku quotes Harvard biologist E. O. Wilson:

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become.

The “smart mouse” gene was isolated in 1999. Mice that have it are better able to navigate mazes and remember things. Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn. This supports Hebb’s rule: learning occurs when certain neural pathways are reinforced.

It will take decades to iron out side effects and unwanted consequences of genetic engineering. For instance, scientists now believe that there is a healthy balance between forgetting and remembering. It’s important to remember key lessons and specific skills. But it’s also important not to remember too much. People need a certain optimism in order to make progress and evolve.

Scientists now know what aging is: Aging is the accumulation of errors at the genetic and cellular level. These errors have various causes. For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku. Errors can also accumulate as ‘junk’ molecular debris.

The buildup of genetic errors is a by-product of the second law of thermodynamics: entropy always increases. However, there’s an important loophole, notes Kaku. Entropy can be reduced in one place as long as it is increased at least as much somewhere else. This means that aging is reversible. Kaku quotes Richard Feynman:

There is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

Kaku continues:

…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding. His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals. In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…

…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM. By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers. Scientists will be able to scan millions of genomes of two groups of people, the young and the old. By comparing the two groups, one can identify where aging takes place at the genetic level. A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated. (pages 168-169)

Scientists think aging is only 35 percent determined by genes. Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria. This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging. Soon we could live to 150. By 2100, we could live well beyond that.

If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent. This is called calorie restriction. Every organism studied so far exhibits this phenomenon.

…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging. In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time. Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long. (page 170)

Now scientists have shown that caloric restriction also works for primates: less diabetes, less cancer, less heart disease, and better health and longer life.

In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells. SIR2 is activated when it detects that the energy reserves of a cell are low. The SIR2 gene has a counterpart in mice and people called the SIRT genes, which produce proteins called sirtuins. Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.

Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair. If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine. (page 171)

Kaku reports what William Haseltine, biotech pioneer, told him:

The nature of life is not mortality. It’s immortality. DNA is an immortal molecule. That molecule first appeared perhaps 3.5 billion years ago. That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that. First to extend our lives two- or three-fold. And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely. And I don’t think that will be an unnatural process. (page 173)

Kaku concludes that extending life span in the future will likely result from a combination of activities:

  • growing new organs as they wear out or become diseased, via tissue engineering and stem cells
  • ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
  • using gene therapy to alter genes that may slow down the aging process
  • maintaining a healthy lifestyle (exercise and a good diet)
  • using nanosensors to detect diseases like cancer years before they become a problem

Kaku quotes Richard Dawkins:

I believe that by 2050, we shall be able to read the language [of life]. We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.

Dawkins believes, writes Kaku, that once the missing gene has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor. After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.

 

NANOTECHNOLOGY

Kaku:

For the most part, nanotechnology is still a very young science. But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes. MEMS are tiny machines so small they can easily fit on the tip of a needle. They are created using the same etching technology used in the computer business. Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them. (pages 207-208)

Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that can detect the sudden braking of your car. This has already saved thousands of lives.

One day nanomachines may be able to replace surgery entirely. Cutting the skin may become completely obsolete. Nanomachines will also be able to find and kill cancer cells in many cases. These nanomachines can be guided by magnets.

DNA fragments can be embedded on a tiny chip using transistor etching technology. The DNA fragments can bind to specific gene sequences. Then, using a laser, thousands of genes can be read at one time, rather than one by one. Prices for these DNA chips continue to plummet due to Moore’s law.

Small electronic chips will be able to do the work that is now done by an entire laboratory. These chips will be embedded in our bathrooms. Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks. In the future, they may cost pennies and take just a few minutes.

In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite. They won the Nobel Prize for their work. Graphene is a single sheet of carbon, no more than one atom thick. And it can conduct electricity. It’s also the strongest material ever tested. (Kaku notes that an elephant balanced on a pencil – on graphene – would not tear it.)

Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor: one atom thick and ten atoms across. (The smallest transistors currently are about 30 nanometers. Novoselov’s transistors are 30 times smaller.)

The real challenge now is how to connect molecular transistors.

The most ambitious proposal is to use quantum computers, which actually compute on individual atoms. Quantum computers are extremely powerful. The CIA has looked at them for their code-breaking potential.

Quantum computers actually exist. Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.” When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
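As a toy illustration of that idea (my own sketch, not a description of any real quantum hardware), a single spin can be written as a two-component state, and an idealized pulse acts as a rotation that can flip “0” into “1” or leave the spin in a superposition of both:

```python
import numpy as np

# Toy model of a single qubit (spin): [1, 0] = spin "down" read as 0,
# [0, 1] = spin "up" read as 1. An idealized pulse rotates the state; a
# full "pi pulse" flips 0 into 1, which is the elementary step described above.

ket0 = np.array([1.0, 0.0])

def pulse(theta):
    """Rotation produced by a pulse of a given strength/duration (angle theta)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

flipped = pulse(np.pi) @ ket0            # a pi pulse flips the spin
print(np.round(flipped, 3))              # ~[0, 1]: the qubit now reads "1"

half = pulse(np.pi / 2) @ ket0           # a half pulse creates a superposition
print(np.round(np.abs(half) ** 2, 3))    # ~[0.5, 0.5]: equal odds of reading 0 or 1
```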

The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations. (When atoms are “coherent,” they vibrate in phase with one another.) Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.

Scientists are working on programmable matter the size of grains of sand. These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object. In fact, many common consumer products may be replaced by software programs sent over the internet. If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.

In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything. This would be the crowning achievement of engineering, says Kaku. One problem is the sheer number of atoms that would need to be re-arranged. But this could be solved by self-replicating nanobots.

A version of this “replicator” already exists. Mother Nature can take the food we eat and create a baby in nine months. DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku. Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms. (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)

 

FUTURE OF ENERGY

Kaku writes that in this century, we will harness the power of the stars. In the short term, this means solar and hydrogen will replace fossil fuels. In the long term, it means we’ll tap the power of fusion and even solar energy from outer space. Also, cars and trains will be able to float using magnetism. This can drastically reduce our use of energy, since most energy today is used to overcome friction.

Currently, fossil fuels meet about 80 percent of the world’s energy needs. Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.

Electric vehicles will reduce the use of fossil fuels. But we also have to transform the way electricity is generated. Solar power will keep getting cheaper. But much more clean energy will be required in order to gradually replace fossil fuels.

Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases. However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.

Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve. This increases the odds that terrorists could acquire nuclear weapons.

Within a few decades, global warming will become even more obvious. The signs are already clear, notes Kaku:

  • The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
  • Greenland’s ice sheet continues to shrink. (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
  • Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off. (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
  • For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
  • Temperatures started to be reliably recorded in the late 1700s; 1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded. Levels of carbon dioxide are rising dramatically.
  • As the earth heats up, tropical diseases are gradually migrating northward.

It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide. But we must be careful about unintended side effects on ecosystems.

Eventually fusion power may solve most of our energy needs. Fusion powers the sun and lights up all the stars.

Anyone who can successfully master fusion power will have unleashed unlimited eternal energy. And the fuel for these fusion plants comes from ordinary seawater. Pound for pound, fusion power releases 10 million times more power than gasoline. An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum. (page 272)

It’s extremely difficult to heat hydrogen gas to tens of millions of degrees. But scientists will probably master fusion power within the next few decades. And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.

One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point. If the resulting shock waves are powerful enough, they can compress and heat fuel to the point of creating nuclear fusion. This approach is called inertial confinement fusion.

The other main approach used by scientists to try to create fusion is magnetic confinement fusion. A huge, hollow doughnut-shaped device made of steel and surrounded by magnetic coils is used to attempt to squeeze hydrogen gas enough to heat it to millions of degrees.

What is most difficult in this approach is squeezing the hydrogen gas uniformly. Otherwise, it bulges out in complex ways. Scientists are using supercomputers to try to control this process. (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion. So stars form easily.)

Most of the energy we burn is used to overcome friction. Kaku observes that a frictionless surface – like a layer of ice – between major cities would drastically cut the energy needed for transportation.

In 1911, scientists discovered that cooling mercury to four degrees (Kelvin) above absolute zero causes it to lose all electrical resistance. Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy. The disadvantage is that you have to cool it to near absolute zero using liquid helium, which is very expensive.

But in 1986, scientists learned that ceramics become superconductors at 92 degrees (Kelvin) above absolute zero. Some ceramic superconductors have been created at 138 degrees (Kelvin) above absolute zero. This is important because liquid nitrogen forms at 77 degrees (Kelvin). Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.

Remember that most energy is used to overcome friction. Even for electricity, up to 30 percent can be lost during transmission. But experimental evidence suggests that electricity in a superconducting loop can last 100,000 years or perhaps billions of years. Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating friction.

Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.

The reason the magnet floats is simple. Magnetic lines of force cannot penetrate a superconductor. This is the Meissner effect. (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.) When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic. This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float. (page 289)

Room temperature superconductors will allow trains and cars to move without any friction. This will revolutionize transportation. Compressed air could get a car going. Then the car could float almost forever as long as the surface is flat.

Even without room temperature superconductors, some countries have built magnetically levitated (maglev) trains. A maglev train does lose energy to air friction. In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.

Later this century, because there is 8 times more sunlight in space than on the surface of the earth, space solar power will be possible. A reduced cost of space travel may make it feasible to send hundreds of solar satellites into space. One challenge is that these solar satellites would have to be 22,000 miles in space, much farther than satellites in near-earth orbits of 300 miles. But the main problem is the cost of booster rockets. (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)
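The 22,000-mile figure corresponds to geostationary orbit, where a satellite circles the earth exactly once per day and so hovers over the same ground station. Here is a quick back-of-envelope check (my own sketch, using standard textbook constants):

```python
import math

# Back-of-envelope check of the ~22,000-mile figure: a power-beaming satellite
# must sit in geostationary orbit so it stays above its receiving station.
# Kepler's third law gives the orbital radius: r^3 = GM * T^2 / (4 * pi^2).

GM_EARTH = 3.986e14        # gravitational parameter of the earth, m^3/s^2
T_SIDEREAL = 86164.0       # one sidereal day, seconds
R_EARTH = 6.371e6          # mean radius of the earth, m
METERS_PER_MILE = 1609.34

r = (GM_EARTH * T_SIDEREAL ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_miles = (r - R_EARTH) / METERS_PER_MILE

print(round(altitude_miles))   # ~22,200 miles above the surface
```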

 

FUTURE OF SPACE TRAVEL

Kaku quotes Carl Sagan:

We have lingered long enough on the shores of the cosmic ocean. We are ready at last to set sail for the stars.

Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:

So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition. This, in turn, will generate more interest in one day sending a probe to these distant planets. There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms. (page 297)

Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars. But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.

For instance, the ocean under the ice of the moon Europa is estimated to be twice the total volume of the earth’s oceans. And Europa is continually heated by tidal forces caused by Jupiter’s gravity, which helps keep that ocean liquid.

It had been thought that life required sunlight. But in 1977, life was found on earth, deep under water in the Galapagos Rift. Energy from undersea volcano vents provided enough energy for life. Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcano vents. Some of the most primitive forms of DNA have been found on the bottom of the ocean.

In the future, new types of space satellite may be able to detect not only radiation from colliding black holes, but also even new information about the Big Bang – a singularity involving extreme density and temperature. Kaku:

At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty. In one scenario, our universe is a huge bubble of some sort that is continually expanding. We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper). But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath. Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation). Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion. (page 301)

Space travel is very expensive. It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon. It costs much more to send a person to Mars.

Robotic missions are far cheaper than manned missions. And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.

Kaku next describes a mission to Mars:

Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission. But getting to Mars will be much more difficult than reaching the moon. In contrast to the moon, Mars represents a quantum leap in difficulty. It takes only three days to reach the moon. It takes six months to a year to reach Mars.

In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like. Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.

Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station. To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars. With no air, soil, or water, everything must be brought from earth. It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars. The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth. Any rip in a space suit would create rapid depressurization and death. (page 312)

Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth. The temperature never goes above the melting point of ice. And the dust storms are ferocious and often engulf the entire planet.

Eventually astronauts may be able to terraform Mars to make it more hospitable for life. The simplest approach would be to inject methane gas into the atmosphere, which might be able to trap sunlight, thereby raising the temperature of Mars above the melting point of ice. (Methane gas is an even more potent greenhouse gas than carbon dioxide.) Once the temperature rises, the underground permafrost may begin to thaw. Riverbeds would fill with water, and lakes and oceans might form again. This would release more carbon dioxide, leading to a positive feedback loop.

Another possible way to terraform Mars would be to deflect a comet towards the planet. Comets are made mostly of water ice. A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.

The polar regions of Mars are made of frozen carbon dioxide and ice. It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps. This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.

Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars. They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide. They could also be genetically modified to maximize their growth on Mars. These algae pools could accelerate terraforming in several ways. First, they could convert carbon dioxide into oxygen. Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun. Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet. Fourth, the algae can be harvested for food. Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen. (page 315)

Scientists have also considered the possibility of building solar satellites around Mars, causing the temperature to rise and the permafrost to begin melting, setting off a positive feedback loop.

2070 to 2100: A Space Elevator and Interstellar Travel

Near the end of the century, scientists may finally be able to construct a space elevator. With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky. Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.

One challenge is to create a carbon nanotube cable that is 50,000 miles long. Another challenge is that space satellites in orbit travel at 18,000 miles per hour. If a satellite collided with the space elevator, it would be catastrophic. So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.

Another challenge is turbulent weather on earth. The space elevator must be flexible enough to withstand violent storms, perhaps by being anchored to a movable platform such as an aircraft carrier or oil platform. Moreover, there must be an escape pod in case the cable breaks.

Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt. The next goal would be travelling to a star. A conventional chemical rocket would take 70,000 years to reach the nearest star. But there are several proposals for an interstellar craft:

  • solar sail
  • nuclear rocket
  • ramjet fusion
  • nanoships

Although light has no mass, it has momentum and so can exert pressure. The pressure is super tiny. But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft. The solar sail would likely be miles wide. The craft would have to circle the sun for a few years, gaining more and more momentum. Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
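To get a feel for how tiny the pressure is, here is a rough sketch (my own numbers, using the standard solar intensity near the earth) of the force on a perfectly reflecting sail:

```python
# Rough sketch of solar radiation pressure on a perfectly reflecting sail near
# the earth's distance from the sun (illustrative numbers, not from the book).

SOLAR_INTENSITY = 1361.0     # sunlight intensity near earth, W/m^2
C = 2.998e8                  # speed of light, m/s

pressure = 2 * SOLAR_INTENSITY / C       # N/m^2 for a perfect reflector
sail_area = 1000.0 ** 2                  # a sail one kilometer on a side, in m^2
force = pressure * sail_area             # newtons

print(f"pressure ~ {pressure * 1e6:.1f} micronewtons per square meter")
print(f"force on a 1 km x 1 km sail ~ {force:.1f} newtons")
```

A force of roughly 9 newtons on a craft weighing tons produces only a minuscule acceleration, which is why the craft would have to circle the sun for years to build up speed.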

Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could generate enough power. One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters. It would have been powered by 1,000 hydrogen bombs. (This also would have been a good way to get rid of atomic bombs meant only for warfare.) Unfortunately, the Nuclear Test Ban Treaty in 1963 meant the scientists couldn’t test Orion. So the project was set aside.

A ramjet engine scoops in air in the front, mixes it with fuel, which then ignites and creates thrust. In 1960, Robert Bussard had the idea of scooping not air but hydrogen gas, which is everywhere in outer space. The hydrogen gas would be squeezed and heated by electric and magnetic fields until the hydrogen fused into helium, releasing enormous amounts of energy via nuclear fusion. With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.

Bussard calculated that a 1,000-ton ramjet fusion engine could reach 77 percent of the speed of light after one year. This would allow it to reach the Andromeda galaxy, which is 2,000,000 light-years away, in just 23 years as measured by the astronauts on the starship. (We know from Einstein’s theory of relativity that time slows down significantly for those traveling at such a high percentage of the speed of light. But meanwhile, on earth, millions of years will have passed.)
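The 77 percent figure can be checked with the standard formula for motion under constant proper acceleration; here is a small sketch (my own back-of-envelope, not Bussard’s calculation):

```python
import math

# Checking the "77 percent of light speed after one year at 1g" figure.
# For constant proper acceleration g, after shipboard time tau the speed is
#   v = c * tanh(g * tau / c).

G = 9.8                   # 1g acceleration, m/s^2
C = 2.998e8               # speed of light, m/s
YEAR = 3.156e7            # one year, seconds

v_fraction = math.tanh(G * 1.0 * YEAR / C)
gamma = 1.0 / math.sqrt(1.0 - v_fraction ** 2)

print(f"speed after one ship-year at 1g: {v_fraction:.2f} c")   # ~0.77 c
print(f"time dilation factor at that speed: {gamma:.2f}")       # ~1.6
```

The 23-year Andromeda figure assumes the ship keeps accelerating after that first year: under sustained 1g, the crew’s elapsed time grows only logarithmically with distance, which is why even intergalactic trips shrink to decades of ship time while millions of years pass on earth.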

Note that there are still engineering questions about the ramjet fusion engine. For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space. Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.

Another possibility is antimatter rocket ships. If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel. Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.

Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars. These nanoships might become cheap enough to produce and to fuel. They might even be self-replicating.

Millions of nanoships could gather intelligence like a “swarm” does. For instance, a single ant is super simple. But a colony of ants can create a complex ant hill. A similar concept is the “smart dust” considered by the Pentagon. Billions of particles, each a sensor, could be used to gather a great deal of information.

Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light. Moreover, scientists may be able to create one or a few self-replicating nanoprobes. Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.

 

FUTURE OF HUMANITY

Kaku writes:

All the technological revolutions described here are leading to a single point: the creation of a planetary civilization. This transition is perhaps the greatest in human history. In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos. Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate. (pages 378-379)

In 1964, Russian astrophysicist Nikolai Kardashev was interested in probing outer space for signals sent from advanced civilizations. So he proposed three types of civilization:

  • A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
  • A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
  • A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).
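As a rough way to make these categories continuous, Carl Sagan proposed the interpolation formula K = (log10 P − 6) / 10, where P is the civilization’s power use in watts. Here is a quick sketch (my addition; note that Sagan’s benchmark of 10^16 watts for Type I is slightly lower than the rounded 10^17 figure listed above). It also reproduces the “Type 0.7” figure cited below.

```python
import math

# Sagan's interpolation formula for a continuous Kardashev rating:
#   K = (log10(P) - 6) / 10, with P the civilization's power use in watts.

def kardashev_type(power_watts):
    return (math.log10(power_watts) - 6.0) / 10.0

print(round(kardashev_type(1e13), 2))   # humanity's ~10 trillion watts -> ~0.7
print(round(kardashev_type(1e16), 2))   # Sagan's Type I benchmark -> 1.0
print(round(kardashev_type(1e26), 2))   # Type II -> 2.0
print(round(kardashev_type(1e36), 2))   # Type III -> 3.0
```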

Kaku explains:

The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations. Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies. (page 381)

Carl Sagan has calculated that we are a Type 0.7 civilization, not quite Type I yet. There are signs, says Kaku, that humanity will reach Type I in a matter of decades.

  • The internet allows a person to connect with virtually anyone else on the planet effortlessly.
  • Many families around the world have middle-class ambitions: a suburban house and two cars.
  • The criterion for being a superpower is not weapons, but economic strength.
  • Entertainers increasingly consider the global appeal of their products.
  • People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
  • The news is becoming planetary.
  • Soccer and the Olympics are emerging to dominate planetary sports.
  • The environment is debated on a planetary scale. People realize they must work together to control global warming and pollution.
  • Tourism is one of the fastest-growing industries on the planet.
  • War has rarely occurred between two democracies. A vibrant press, opposition parties, and a solid middle class tend to ensure that.
  • Diseases will be controlled on a planetary basis.

A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun goes supernova. Or we may be able to keep the sun from exploding. (Or we might be able to change the orbit of our planet.) Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere. Also, we probably will have colonized not just the entire solar system, but nearby stars.

By the time we become a Type III civilization, we will have explored most of the galaxy. We may have done this using self-replicating robot probes. Or we may have mastered Planck energy (10^19 billion electron volts). At this energy, space-time itself becomes unstable. The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time. By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time. As a result, a Type III civilization might be able to colonize the entire galaxy.

It’s possible that a more advanced civilization has already visited or detected us. For instance, they may have used tiny self-replicating probes that we haven’t noticed yet. It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type 1.

Kaku writes that many people are not aware of the historic transition humanity is now making. But this could change if we discover evidence of intelligent life somewhere in outer space. Then we would consider our level of technological evolution relative to theirs.

Consider the SETI Institute. This is from their website (www.seti.org):

SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.

Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets. Whether evolution will give rise to intelligent, technological civilizations is open to speculation. However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.

Finding evidence of other technological civilizations however, requires significant effort. Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.

Work at the Center is divided into two areas: Research and Development (R&D) and Projects. R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects. The algorithms and technology developed in the lab are first field-tested and then implemented during observing. The observing results are used to guide the development of new hardware, software, and observing facilities. The improved SETI observing Projects in turn provide new ideas for Research and Development. This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.

Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is. A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible. If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Warren Buffett on Jack Bogle


(Image: Zen Buddha Silence by Marilyn Barbone.)

July 23, 2017

Warren Buffett has long maintained that most investors–large and small–would be best off by simply investing in ultra-low-cost index funds. Buffett explains his reasoning again in the 2016 Letter to Berkshire Shareholders (see pages 21-25): http://berkshirehathaway.com/letters/2016ltr.pdf

Passive investors will essentially match the market over time. So, argues Buffett, active investors will match the market over time before costs (including fees and expenses). After costs, active investors will, in aggregate, trail the market by the total amount of costs. Thus, the net returns of most active investors will trail the market over time. Buffett:

There are, of course, some skilled individuals who are highly likely to out-perform the S&P over long stretches. In my lifetime, though, I’ve identified–early on–only ten or so professionals that I expected would accomplish this feat.

There are no doubt many hundreds of people–perhaps thousands–whom I have never met and whose abilities would equal those of the people I’ve identified. The job, after all, is not impossible. The problem simply is that the great majority of managers who attempt to over-perform will fail. The probability is also very high that the person soliciting your funds will not be the exception who does well.

As for those active managers who produce a solid record over 5-10 years, many of them will have had a fair amount of luck. Moreover, good records attract assets under management. But large sums are always a drag on performance.
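The logic of that cost argument can be laid out in a stylized sketch (my own illustrative numbers, not Buffett’s):

```python
# Stylized sketch of the arithmetic behind Buffett's argument (illustrative only):
# active investors, in aggregate, hold the market, so before costs they earn the
# market return; after costs they must trail it by roughly the costs they pay.

market_return = 0.07          # assume the market returns 7% in a given year
passive_costs = 0.0005        # ~0.05% for an ultra-low-cost index fund
active_costs = 0.02           # ~2% in fees and trading costs (illustrative)

passive_net = market_return - passive_costs
active_net_aggregate = market_return - active_costs

print(f"passive net return:            {passive_net:.2%}")
print(f"active net return (aggregate): {active_net_aggregate:.2%}")
```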

 

BUFFETT’S BET AGAINST PROTÉGÉ PARTNERS

Long Bets is a non-profit seeded by Jeff Bezos. As Buffett describes in his 2016 Letter to Shareholders, “proposers” can post a proposition at www.Longbets.org that will be proved right or wrong at some date in the future. They wait for someone to take the other side of the bet. Each side names a charity that will be the beneficiary if its side wins and writes a brief essay defending its position.

Buffett:

Subsequently, I publicly offered to wager $500,000 that no investment pro could select a set of at least five hedge funds–wildly-popular and high-fee investing vehicles–that would over an extended period match the performance of an unmanaged S&P-500 index fund charging only token fees. I suggested a ten-year bet and named a low-cost Vanguard S&P fund as my contender. I then sat back and waited expectantly for a parade of fund managers–who could include their own fund as one of the five–to come forth and defend their occupation. After all, these managers urged others to bet billions on their abilities. Why should they fear putting a little of their own money on the line?

What followed was the sound of silence. Though there are thousands of professional investment managers who have amassed staggering fortunes by touting their stock-selecting prowess, only one man–Ted Seides–stepped up to my challenge. Ted was a co-manager of Protégé Partners, an asset manager that had raised money from limited partners to form a fund-of-funds–in other words, a fund that invests in multiple hedge funds.

I hadn’t known Ted before our wager, but I like him and admire his willingness to put his money where his mouth was…

For Protégé Partners’ side of our ten-year bet, Ted picked five funds-of-funds whose results were to be averaged and compared against my Vanguard S&P index fund. The five he selected had invested their money in more than 100 hedge funds, which meant that the overall performance of the funds-of-funds would not be distorted by the good or poor results of a single manager.

Here are the results so far after nine years (from 2008 through 2016):

Net return after 9 years:

  • Fund of Funds A: 8.7%
  • Fund of Funds B: 28.3%
  • Fund of Funds C: 62.8%
  • Fund of Funds D: 2.9%
  • Fund of Funds E: 7.5%
  • S&P 500 Index Fund: 85.4%

Compound annual return:

  • All five funds-of-funds (average): 2.2%
  • S&P 500 Index Fund: 7.1%

To see a more detailed table of the results, go to page 22 of the Berkshire 2016 Letter: http://berkshirehathaway.com/letters/2016ltr.pdf

Buffett continues:

The compounded annual increase to date for the index fund is 7.1%, which is a return that could easily prove typical for the stock market over time. That’s an important fact: A particularly weak nine years for the market over the lifetime of this bet would have probably helped the relative performance of the hedge funds, because many hold large ‘short’ positions. Conversely, nine years of exceptionally high returns from stocks would have provided a tailwind for index funds.

Instead we operated in what I would call a ‘neutral’ environment. In it, the five funds-of-funds delivered, through 2016, an average of only 2.2%, compounded annually. That means $1 million invested in those funds would have gained $220,000. The index fund would meanwhile have gained $854,000.

Bear in mind that every one of the 100-plus managers of the underlying hedge funds had a huge financial incentive to do his or her best. Moreover, the five funds-of-funds managers that Ted selected were similarly incentivized to select the best hedge-fund managers possible because the five were entitled to performance fees based on the results of the underlying funds.

I’m certain that in almost all cases the managers at both levels were honest and intelligent people. But the results for their investors were dismal–really dismal. And, alas, the huge fixed fees charged by all of the funds and funds-of-funds involved–fees that were totally unwarranted by performance–were such that their managers were showered with compensation over the nine years that have passed. As Gordon Gekko might have put it: ‘Fees never sleep.’

The underlying hedge-fund managers in our bet received payments from their limited partners that likely averaged a bit under the prevailing hedge-fund standard of ‘2 and 20,’ meaning a 2% annual fixed fee, payable even when losses are huge, and 20% of profits with no clawback (if good years were followed by bad ones). Under this lopsided arrangement, a hedge fund operator’s ability to simply pile up assets under management has made many of these managers extraordinarily rich, even as their investments have performed poorly.

Still, we’re not through with fees. Remember, there were the fund-of-funds managers to be fed as well. These managers received an additional fixed amount that was usually set at 1% of assets. Then, despite the terrible overall record of the five funds-of-funds, some experienced a few good years and collected ‘performance’ fees. Consequently, I estimate that over the nine-year period roughly 60%–gulp!–of all gains achieved by the five funds-of-funds were diverted to the two levels of managers. That was their misbegotten reward for accomplishing something far short of what their many hundreds of limited partners could have effortlessly–and with virtually no cost–achieved on their own.

In my opinion, the disappointing results for hedge-fund investors that this bet exposed are almost certain to recur in the future. I laid out my reasons for that belief in a statement that was posted on the Long Bets website when the bet commenced (and that is still posted there)…

Even if you take the smartest 10% of all active investors, most of them will trail the market, net of costs, over the course of a decade or two. Most investors (even the smartest) who think they can beat the market are wrong. Buffett’s bet against Protégé Partners is yet another example of this.
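As a quick sanity check on the arithmetic, the cumulative dollar figures Buffett cites follow directly from the compound annual returns in the table above. A minimal sketch in Python; only the 2.2% and 7.1% figures come from the letter, the rest is just compounding:

```python
# Convert a compound annual return into a cumulative gain over n years.
def cumulative_gain(annual_return, years):
    return (1 + annual_return) ** years - 1

initial = 1_000_000  # $1 million invested, as in Buffett's example

for label, cagr in [("Funds-of-funds (average)", 0.022), ("S&P 500 index fund", 0.071)]:
    gain = cumulative_gain(cagr, 9)
    print(f"{label}: {gain:.1%} cumulative, about ${initial * gain:,.0f} gained")

# Prints roughly 21.6% (~$216,000) for the funds-of-funds and 85.4% (~$854,000)
# for the index fund; the small gap versus Buffett's $220,000 reflects rounding
# of the 2.2% figure.
```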

 

BUFFETT PRAISES BOGLE

If a statue is ever erected to honor the person who has done the most for American investors, the hands-down choice should be Jack Bogle. For decades, Jack has urged investors to invest in ultra-low-cost index funds. In his crusade, he amassed only a tiny percentage of the wealth that has typically flowed to managers who have promised their investors large rewards while delivering them nothing–or, as in our bet, less than nothing–of added value.

In his early years, Jack was frequently mocked by the investment-management industry. Today, however, he has the satisfaction of knowing that he helped millions of investors realize far better returns on their savings than they otherwise would have earned. He is a hero to them and to me.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Why You Shouldn’t Try Market Timing


(Image: Zen Buddha Silence by Marilyn Barbone.)

July 2, 2017

In Investing: The Last Liberal Art (Columbia University Press, 2nd edition, 2013), Robert Hagstrom has an excellent chapter on decision making. Hagstrom examines Philip Tetlock’s discussion of foxes versus hedgehogs.

 

PHILIP TETLOCK’S STUDY OF POLITICAL FORECASTING

Philip Tetlock, professor of psychology at the University of Pennsylvania, spent fifteen years (1988-2003) studying the political forecasts made by 284 experts. As Hagstrom writes:

All of them were asked about the state of the world; all gave their prediction of what would happen next. Collectively, they made over 27,450 forecasts. Tetlock kept track of each one and calculated the results. How accurate were the forecasts? Sadly, but perhaps not surprisingly, the predictions of experts are no better than ‘dart-throwing chimpanzees.’ (page 149)

In other words, one could have rolled a six-sided die 27,450 times over the course of fifteen years and achieved the same level of predictive accuracy as this group of top experts. (The predictions were in the form of: more of X, no change in X, or less of X. Rolling a six-sided die would be one way to generate random outcomes among three equally likely scenarios.)
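To make the die-rolling analogy concrete, here is a minimal simulation (my own illustration, not from Hagstrom or Tetlock) of the accuracy a purely random guesser achieves when each question has three outcomes, assuming for simplicity that the three outcomes are equally likely:

```python
import random

random.seed(0)

n_forecasts = 27_450  # the number of forecasts in Tetlock's study
outcomes = ["more of X", "no change in X", "less of X"]

# For each forecast, draw a random true outcome and a random guess.
hits = sum(random.choice(outcomes) == random.choice(outcomes)
           for _ in range(n_forecasts))

print(f"Random-guess accuracy: {hits / n_forecasts:.1%}")  # close to 33%, the chance baseline
```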

In a nutshell, political experts generally achieve high levels of knowledge (about history, politics, etc.), but most of this knowledge does not help in making predictions. When it comes to predicting the future, political experts suffer from overconfidence, hindsight bias, belief system defenses, and lack of Bayesian process, says Hagstrom.

Although the overall record of political forecasting is dismal, Tetlock was still able to identify a few key differences:

The aggregate success of the forecasters who behaved most like foxes was significantly greater than those who behaved like hedgehogs. (page 150)

The distinction between foxes and hedgehogs goes back to an essay by Sir Isaiah Berlin entitled, ‘The Hedgehog and the Fox: An Essay on Tolstoy’s View of History.’ Berlin defined hedgehogs as thinkers who viewed the world through the lens of a single defining idea, and foxes as thinkers who were skeptical of grand theories and instead drew on a wide variety of ideas and experiences before making a decision.

 

FOXES VERSUS HEDGEHOGS

Hagstrom clearly explains key differences between Foxes and Hedgehogs:

Why are hedgehogs penalized? First, because they have a tendency to fall in love with pet theories, which gives them too much confidence in forecasting events. More troubling, hedgehogs were too slow to change their viewpoint in response to disconfirming evidence. In his study, Tetlock said Foxes moved 59 percent of the prescribed amount toward alternate hypotheses, while Hedgehogs moved only 19 percent. In other words, Foxes were much better at updating their Bayesian inferences than Hedgehogs.

Unlike Hedgehogs, Foxes appreciate the limits of their own knowledge. They have better calibration and discrimination scores than Hedgehogs. (Calibration, which can be thought of as intellectual humility, measures how much your subjective probabilities correspond to objective probabilities. Discrimination, sometimes called justified decisiveness, measures whether you assign higher probabilities to things that occur than to things that do not.) Hedgehogs have a stubborn belief in how the world works, and they are more likely to assign probabilities to things that have not occurred than to things that actually occur.

Tetlock tells us Foxes have three distinct cognitive advantages.

  1. They begin with ‘reasonable starter’ probability estimates. They have better ‘inertial-guidance’ systems that keep their initial guesses closer to short-term base rates.
  2. They are willing to acknowledge their mistakes and update their views in response to new information. They have a healthy Bayesian process.
  3. They can see the pull of contradictory forces, and, most importantly, they can appreciate relevant analogies.

Hedgehogs start with one big idea and follow through – no matter the logical implications of doing so. Foxes stitch together a collection of big ideas. They see and understand the analogies and then create an aggregate hypothesis. I think we can say the fox is the perfect mascot for the College of Liberal Arts Investing. (pages 150-151)

 

KNOWING WHAT YOU DON’T KNOW

We have two classes of forecasters: Those who don’t know – and those who don’t know they don’t know. – John Kenneth Galbraith

Last year, I wrote about The Most Important Thing, a terrific book by the great value investor Howard Marks. See: https://boolefund.com/howard-marks-the-most-important-thing/

One of the sections from that blog post, ‘Knowing What You Don’t Know,’ is directly relevant to the discussion of foxes versus hedgehogs. We can often ‘take the temperature’ of the stock market. Thus, we can have some idea that the market is high and may fall after an extended period of increases.

But we can never know for sure that the market will fall, and if so, when precisely. In fact, the market does not even have to fall much at all. It could move sideways for a decade or two, and still end up at more normal levels. Thus, we should always focus our energy and time on finding individual securities that are undervalued.

There could always be a normal bear market, meaning a drop of 15-25%. But that doesn’t conflict with a decade or two of a sideways market. If we own stocks that are cheap enough, we could still be fully invested. Even when the market is quite high, there are usually cheap micro-cap stocks, for instance. Buffett made a comment indicating that he would have been fully invested in 1999 if he were managing a small enough sum to be able to focus on micro caps:

If I was running $1 million, or $10 million for that matter, I’d be fully invested.

There are a few cheap micro-cap stocks today. Moreover, some oil-related stocks are cheap from a 5-year point of view.

Warren Buffett, when he was running the Buffett Partnership, knew for a period of almost ten years (roughly 1960 to 1969) that the stock market was high (and getting higher) and would either fall or move sideways for many years. Yet he was smart enough never to predict precisely when the correction would occur. Because he stayed focused on finding individual companies that were undervalued, the Buffett Partnership produced an outstanding track record. Had he pulled back from cheap stocks simply because he knew the market was high, he would not have produced such an excellent record. (For more about the Buffett Partnership, see: https://boolefund.com/warren-buffetts-ground-rules/)

Buffett on forecasting:

We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.

Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.

Here is what Ben Graham, the father of value investing, said about forecasting the stock market:

…if I have noticed anything over these 60 years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

Howard Marks has tracked (in a limited way) many macro predictions, including U.S. interest rates, the U.S. stock market, and the yen/dollar exchange rate. He found quite clearly that most forecasts were not correct.

I can elaborate on two examples that I spent much time on (when I should have stayed focused on finding individual companies available at cheap prices):

  • the U.S. stock market
  • the yen/dollar exchange

The U.S. stock market

A secular bear market for U.S. stocks began (arguably) in the year 2000 when the 10-year Graham-Shiller P/E – also called the CAPE (cyclically adjusted P/E) – was over 30, its highest level in U.S. history. The long-term average CAPE is around 16. Based on over one hundred years of history, the pattern for U.S. stocks in a secular bear market would be relatively flat or lower until the CAPE approached 10.
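For readers who want the mechanics, the CAPE is just the current price divided by the average of the past ten years of real (inflation-adjusted) earnings. A rough sketch, assuming you already have annual price, earnings, and CPI series; the function and variable names are illustrative:

```python
def cape(prices, earnings, cpi):
    """Cyclically adjusted P/E: latest price over 10-year average real earnings.

    prices, earnings, cpi: equal-length lists of annual values, oldest first.
    """
    base = cpi[-1]  # express past earnings in the latest year's dollars
    real_earnings = [e * base / c for e, c in zip(earnings, cpi)]
    avg_real_earnings_10yr = sum(real_earnings[-10:]) / 10
    return prices[-1] / avg_real_earnings_10yr
```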

However, ever since Greenspan started running the Fed in the 1980s, the Fed has usually had a policy of stimulating the economy and stocks by lowering rates or keeping rates as low as possible. This has caused U.S. stocks to be much higher than they otherwise would be. For instance, with rates today staying near zero, U.S. stocks could easily remain at least twice as high as ‘normal’ indefinitely, assuming the Fed decides to keep rates low for many more years. Furthermore, as Buffett has noted, very low rates for many decades would eventually mean price/earnings ratios on stocks of 100.

In addition to the current Fed regime, there are several additional reasons why rates may stay low. As Jeremy Grantham recently wrote:

  • We could be between waves of innovation, which suppresses growth and the demand for capital.
  • Population in the developed world and in China is rapidly aging. With more middle-aged savers and fewer high-consuming young workers, the result could be excess savings that depress all returns on capital.
  • Nearly 100% of all the recovery in total income since 2009 has gone to the top 0.1%.

Grantham discusses all of these possible reasons for low rates in the Q3 2016 GMO Letter: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/gmo-quarterly-letters/hellish-choices-what’s-an-asset-owner-to-do-and-not-with-a-bang-but-a-whimper.pdf?sfvrsn=8

Grantham gives more detail on income inequality in the Q4 2016 GMO Letter: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/gmo-quarterly-letters/is-trump-a-get-out-of-hell-free-card-and-the-road-to-trumpsville-the-long-long-mistreatment-of-the-american-working-class.pdf?sfvrsn=6

(In order to see GMO commentaries, you may have to register but it’s free.)

Around the year 2012 (or even earlier), some of the smartest market historians – including Russell Napier, author of Anatomy of the Bear – started predicting that the S&P 500 Index would fall towards a CAPE of 10 or lower, which is how every previous U.S. secular bear market concluded. It didn’t happen in 2012, or in 2013, or in 2014, or in 2015, or in 2016. Moreover, it may not happen in 2017 or even 2018.

Again, there could always be a normal bear market involving a drop of 15-25%. But that doesn’t conflict with a sideways market for a decade or two. Grantham suggests total returns of about 2.8% per year for the next 20 years.

Grantham, an expert on bubbles, also pointed out that the usual ingredients for a bubble do not exist today. Normally in a bubble, there are excellent economic fundamentals combined with a euphoric extrapolation of those fundamentals into the future. Grantham in Q3 2016 GMO Letter:

  • Current fundamentals are way below optimal – trend line growth and productivity are at such low levels that the usually confident economic establishment is at an obvious loss to explain why. Capacity utilization is well below peak and has been falling. There is plenty of available labor hiding in the current low participation rate (at a price). House building is also far below normal.
  • Classic bubbles have always required that the geopolitical world is at least acceptable, more usually well above average. Today’s, in contrast, you can easily agree is unusually nerve-wracking.
  • Far from euphoric extrapolations, the current market has been for a long while and remains extremely nervous. Investor trepidation is so great that many are willing to tie up money in ultra-safe long-term government bonds that guarantee zero real return rather than buy the marginal share of stock! Cash reserves are high and traditional measures of speculative confidence are low. Most leading commentators are extremely bearish. The net effect of this nervousness is shown in the last two and a half years of the struggling U.S. market…so utterly unlike the end of the classic bubbles.
  • …They – the bubbles in stocks and houses – all coincided with bubbles in credit…Credit is, needless to say, complex…What is important here is the enormous contrast between the credit conditions that previously have been coincident with investment bubbles and the lack of a similarly consistent and broad-based credit boom today.

The yen/dollar exchange

As for the yen/dollar exchange rate, some of the smartest macro folks around predicted (in 2010 and later) that shorting the yen vs. the U.S. dollar would be the ‘trade of the decade,’ and that the yen/dollar exchange rate would exceed 200. In 2007, the yen/dollar rate was over 120. By 2011-2012, it had fallen to around 76. In late 2014 and for most of 2015, the yen/dollar rate again exceeded 120. However, in late 2015, the BOJ decided not to try to weaken its currency further by printing even larger amounts of money. The yen/dollar rate declined from over 120 to about 106. Since then, it has remained below 120.

The ‘trade of the decade’ argument was the following: total debt-to-GDP in Japan has reached stratospheric levels (over 400-500%, including over 250% for government debt-to-GDP), government deficits have continued to widen, and the Japanese population is actually shrinking. Since long-term GDP growth is essentially population growth plus productivity growth, it should become mathematically impossible for the Japanese government to pay back its debt without a significant devaluation of its currency. If the BOJ could devalue the yen by 67% – which would imply a yen/dollar exchange rate of well over 200 – then Japan could repay the government debt in seriously devalued currency. In this scenario – a yen devaluation of 67% – Japan effectively would only have to repay 33% of the government debt in real terms. Currency devaluation – inflating away the debts – is what most major economies throughout history have done.

Although the U.S. dollar may be stronger than the yen or the euro, all three governments want to devalue their currencies over time. Therefore, even if the yen eventually loses value against the dollar, it’s not at all clear how long that will take. The yen ‘collapse’ could be delayed by many years. So if you compare a yen/dollar short position with a micro-cap value investment strategy, it’s likely that the micro-cap value strategy will produce higher returns with less risk.

  • Similar logic applies to market timing. You may get lucky once or twice trying to time the market. But simply buying cheap stocks – and holding them for at least 3 to 5 years before buying cheaper stocks – is likely to do much better over the course of decades. Countless extremely intelligent investors throughout history have gone mostly to cash based on a market prediction, only to see the market continue to move higher for many years or even decades. Again: Even if the market is high, it can go sideways for a decade or two. If you buy baskets of cheap micro-cap stocks for a decade or two, there is virtually no chance of losing money, and there’s an excellent chance of doing well.

Also, the total human economy is likely to be much larger in the future, and there may be some way to help the Japanese government with its debts. The situation wouldn’t seem so insurmountable if Japan could grow its population. But this might happen in some indirect way if the total economy becomes more open in the future, perhaps involving the creation of a new universal currency.

TWO SCHOOLS: ‘I KNOW’ vs. ‘I DON’T KNOW’

Financial forecasting cannot be done with any sort of consistency. Every year, there are many people making financial forecasts, and so purely as a matter of chance, a few will be correct in a given year. But the ones correct this year are almost never the ones correct the next time around, because what they’re trying to predict can’t be predicted with any consistency. Howard Marks writes:

I am not going to try to prove my contention that the future is unknowable. You can’t prove a negative, and that certainly includes this one. However, I have yet to meet anyone who consistently knows what lies ahead macro-wise…

One way to get to be right sometimes is to always be bullish or always be bearish; if you hold a fixed view long enough, you may be right sooner or later. And if you’re always an outlier, you’re likely to eventually be applauded for an extremely unconventional forecast that correctly foresaw what no one else did. But that doesn’t mean your forecasts are regularly of any value…

It’s possible to be right about the macro-future once in a while, but not on a regular basis. It doesn’t do any good to possess a survey of sixty-four forecasts that includes a few that are accurate; you have to know which ones they are. And if the accurate forecasts each six months are made by different economists, it’s hard to believe there’s much value in the collective forecasts.

Marks gives one more example: How many predicted the crisis of 2007-2008? Of those who did predict it – there were bound to be some from pure chance alone – how many then predicted the recovery starting in 2009 and continuing until today (early 2017)? The answer is ‘very few.’ The reason, observes Marks, is that those who got 2007-2008 right “did so at least in part because of a tendency toward negative views.” They probably were negative well before 2007-2008, and more importantly, they probably stayed negative afterward. And yet, from a close of 676.53 on March 9, 2009, the S&P 500 Index has increased more than 240% to a close of 2316.10 on February 10, 2017.

Marks has a description for investors who believe in the value of forecasts. They belong to the ‘I know’ school, and it’s easy to identify them:

  • They think knowledge of the future direction of economies, interest rates, markets and widely followed mainstream stocks is essential for investment success.
  • They’re confident it can be achieved.
  • They know they can do it.
  • They’re aware that lots of other people are trying to do it too, but they figure either (a) everyone can be successful at the same time, or (b) only a few can be, but they’re among them.
  • They’re comfortable investing based on their opinions regarding the future.
  • They’re also glad to share their views with others, even though correct forecasts should be of such great value that no one would give them away gratis.
  • They rarely look back to rigorously assess their record as forecasters. (page 121)

Marks contrasts the confident ‘I know’ folks with the guarded ‘I don’t know’ folks. The latter believe you can’t predict the macro-future, and thus the proper goal for investing is to do the best possible job analyzing individual securities. If you belong to the ‘I don’t know’ school, eventually everyone will stop asking you where you think the market’s going.

You’ll never get to enjoy that one-in-a-thousand moment when your forecast comes true and the Wall Street Journal runs your picture. On the other hand, you’ll be spared all those times when forecasts miss the mark, as well as the losses that can result from investing based on overrated knowledge of the future.

No one likes investing on the assumption that the future is unknowable, observes Marks. But if the future IS largely unknowable, then it’s far better as an investor to acknowledge that fact than to pretend otherwise.

Furthermore, says Marks, the biggest problems for investors tend to happen when investors forget the difference between probability and outcome (i.e., the limits of foreknowledge):

  • when they believe the shape of the probability distribution is knowable with certainty (and that they know it),
  • when they assume the most likely outcome is the one that will happen,
  • when they assume the expected result accurately represents the actual result, or
  • perhaps most important, when they ignore the possibility of improbable outcomes.

Marks sums it up:

Overestimating what you’re capable of knowing or doing can be extremely dangerous – in brain surgery, transocean racing or investing. Acknowledging the boundaries of what you can know – and working within those limits rather than venturing beyond – can give you a great advantage. (page 123)

Or as Warren Buffett wrote in the 2014 Berkshire Hathaway Letter to Shareholders:

Anything can happen anytime in markets. And no advisor, economist, or TV commentator – and definitely not Charlie nor I – can tell you when chaos will occur. Market forecasters will fill your ear but will never fill your wallet.

Link: http://berkshirehathaway.com/letters/2014ltr.pdf

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Does the Stock Market Overreact?


(Image: Zen Buddha Silence by Marilyn Barbone.)

June 18, 2017

Richard H. Thaler recently published a book entitled Misbehaving: The Making of Behavioral Economics. It’s an excellent book. According to Nobel Laureate Daniel Kahneman, Richard Thaler is “the creative genius who invented the field of behavioral economics.”

Thaler defines “Econs” as the fully rational human beings that traditional economists have always assumed for their models. “Humans” are often less than fully rational, as demonstrated not only by decades of experiments, but also by the history of various asset prices.

For this blog post, I will focus on Part VI (Finance, pages 203-253). But first a quotation Thaler has at the beginning of his book:

The foundation of political economy and, in general, of every social science, is evidently psychology. A day may come when we shall be able to deduce the laws of social science from the principles of psychology.

– Vilfredo Pareto, 1906

 

THE BEAUTY CONTEST

Chicago economist Eugene Fama coined the term “efficient market hypothesis,” or EMH for short. Thaler writes that the EMH has two (related) components:

  • the price is right – the idea is that any asset will sell for its “intrinsic value.” “If the rational valuation of a company is $100 million, then its stock will trade such that the market cap of the firm is $100 million.”
  • no free lunch – EMH holds that all publicly available information is already reflected in current stock prices, thus there is no reliable way to “beat the market” over time.

NOTE: If prices are always right, then there can never be bubbles in asset prices. It also implies that there are no undervalued stocks, at least none that an investor could consistently identify. On this view, there is no way to “beat the market” over a long period of time except by luck; Warren Buffett was simply lucky.

Thaler observes that finance did not become a mainstream topic in economics departments before the advent of cheap computer power and great data. The University of Chicago was the first to develop a comprehensive database of stock prices going back to 1926. After that, research took off, and by 1970 EMH was well-established.

Thaler also points out that the famous economist J. M. Keynes was “a true forerunner of behavioral finance.” Keynes, who was a great value investor, thought that “animal spirits” play an important role in financial markets.

Keynes also observed that professional investors are playing an intricate guessing game, similar to picking out the prettiest faces from a set of photographs:

…It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be. And there are some, I believe, who practice the fourth, fifth, and higher degrees.

 

DOES THE STOCK MARKET OVERREACT?

On average, investors overreact to recent poor performance for low P/E stocks, which is why the P/E’s are low. And, on average, investors overreact to recent good performance for high P/E stocks, which is why the P/E’s are high.

Having said that, Thaler is quick to quote a warning by Ben Graham about timing: ‘Undervaluations caused by neglect or prejudice may persist for an inconveniently long time, and the same applies to inflated prices caused by overenthusiasm or artificial stimulus.’ Thaler gives the example of the late 1990s: for years, Internet stocks just kept going up, while value stocks just kept massively underperforming.

According to Thaler, most academic financial economists overlooked Graham’s work:

It was not so much that anyone had refuted Graham’s claim that value investing worked; it was more that the efficient market theory of the 1970s said that value investing couldn’t work. But it did. Late that decade, accounting professor Sanjoy Basu published a thoroughly competent study of value investing that fully supported Graham’s strategy. However, in order to get such papers published at the time, one had to offer abject apologies for the results. (page 221)

Thaler and his research partner Werner De Bondt came up with the following hypothesis. Suppose that investors are overreacting. Suppose that investors are overly optimistic about the future growth of high P/E stocks, thus driving the P/E’s “too high.” And suppose that investors are excessively pessimistic about low P/E stocks, thus driving the P/E’s “too low.” Then subsequent high returns from value stocks and low returns from growth stocks represent simple reversion to the mean. But EMH says that:

  • The price is right: Stock prices cannot diverge from intrinsic value.
  • No free lunch: Because all information is already in the stock price, it is not possible to beat the market. Past stock prices and the P/E cannot predict future price changes.

Thaler and De Bondt took all the stocks listed on the New York Stock Exchange, and ranked their performance over three to five years. They isolated the worst performing stocks, which they called “Losers.” And they isolated the best performing stocks, which they called “Winners.” Writes Thaler:

If markets were efficient, we should expect the two portfolios to do equally well. After all, according to the EMH, the past cannot predict the future. But if our overreaction hypothesis were correct, Losers would outperform Winners. (page 223)

Results:

The results strongly supported our hypothesis. We tested for overreaction in various ways, but as long as the period we looked back at to create the portfolios was long enough, say three years, then the Loser portfolio did better than the Winner portfolio. Much better. For example, in one test we used five years of performance to form the Winner and Loser portfolios and then calculated the returns of each portfolio over the following five years, compared to the overall market. Over the five-year period after we formed our portfolios, the Losers outperformed the market by about 30% while the Winners did worse than the market by about 10%.
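Here is a rough sketch of the kind of test Thaler and De Bondt describe, assuming a table of annual returns with one column per stock. The parameter values and the equal-weighted market proxy are my simplifications, not their exact procedure:

```python
import pandas as pd

def winner_loser_test(returns, formation_years=5, holding_years=5, n=35):
    """Form 'Loser' and 'Winner' portfolios from past returns and compare
    their subsequent performance against an equal-weighted market proxy.

    returns: DataFrame of annual returns indexed by year, one column per stock.
    """
    formation = returns.iloc[:formation_years]
    holding = returns.iloc[formation_years:formation_years + holding_years]

    past = (1 + formation).prod() - 1           # cumulative formation-period return per stock
    losers = past.nsmallest(n).index
    winners = past.nlargest(n).index

    market = (1 + holding.mean(axis=1)).prod()  # equal-weighted market, cumulative growth
    cum = lambda cols: (1 + holding[cols].mean(axis=1)).prod() - market
    return {"losers_vs_market": cum(losers), "winners_vs_market": cum(winners)}
```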

 

THE REACTION TO OVERREACTION

In response to widespread evidence that ‘Loser’ stocks (low P/E) – as a group – outperform ‘Winner’ stocks, defenders of EMH were forced to argue that ‘Loser’ stocks are riskier as a group.

NOTE: On an individual stock basis, a low P/E stock may be riskier. But a basket of low P/E stocks generally far outperforms a basket of high P/E stocks. The question is whether a basket of low P/E stocks is riskier than a basket of high P/E stocks.

According to the CAPM (Capital Asset Pricing Model), the measure of the riskiness of a stock is its sensitivity to movements in the overall market, or “beta” – the covariance of the stock’s returns with the market’s returns, divided by the variance of the market’s returns. If a stock has a beta of 1.0, then its volatility is similar to the volatility of the whole market. If a stock has a beta of 2.0, then it tends to move twice as much as the whole market (e.g., if the whole market goes up or down by 10%, then this individual stock will, on average, go up or down by 20%).
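A minimal sketch of how beta is computed from return series; this is just the standard definition, not anything specific to Thaler and De Bondt’s study:

```python
import numpy as np

def beta(stock_returns, market_returns):
    """Beta = covariance(stock, market) / variance(market)."""
    stock = np.asarray(stock_returns, dtype=float)
    market = np.asarray(market_returns, dtype=float)
    return np.cov(stock, market)[0, 1] / market.var(ddof=1)

# Illustrative only: a stock that always moves twice as much as the market has beta 2.
market = np.array([0.10, -0.05, 0.03, 0.07, -0.02])
print(round(beta(2 * market, market), 2))  # 2.0
```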

According to CAPM, if the basket of Loser stocks subsequently outperforms the market while the basket of Winner stocks underperforms, then the Loser stocks must have high betas and the Winner stocks must have low betas. But Thaler and De Bondt found the opposite. Loser stocks (value stocks) were much less risky as measured by beta.

Eventually Eugene Fama himself, along with research partner Kenneth French, published a series of papers documenting that, indeed, both value stocks and small stocks earn higher returns than predicted by CAPM. In short, “the high priest of efficient markets” (as Thaler calls Fama) had declared that CAPM was dead.

But Fama and French were not ready to abandon the EMH (Efficient Market Hypothesis). They came up with the Fama-French Three Factor Model. They showed that value stocks are correlated – a value stock will tend to do well when other value stocks are doing well. And they showed that small-cap stocks are similarly correlated.

The problem, again, is that there is no evidence that a basket of value stocks is riskier than a basket of growth stocks. And there is no theoretical reason to believe that value stocks, as a group, are riskier.

Thaler asserts that the debate was settled by the paper ‘Contrarian Investment, Extrapolation, and Risk’ published in 1994 by Josef Lakonishok, Andrei Shleifer, and Robert Vishny. This paper shows clearly that value stocks outperform, and value stocks are, if anything, less risky than growth stocks. Link to paper: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

Lakonishok, Shleifer, and Vishny launched the highly successful LSV Asset Management based on their research: http://lsvasset.com/

(Recently Fama and French have introduced a five-factor model, which includes profitability. Profitability was one of Ben Graham’s criteria.)

 

THE PRICE IS NOT RIGHT

If you held a stock forever, it would be worth all of its future dividends discounted back to the present. Even if you eventually sold the stock, as long as you held it for a very long time, the distant sale price (discounted back to the present) would be a negligible part of the stock’s intrinsic value. So the rational stock price is the present value of all expected future dividend payments.
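As a simple illustration of “the present value of all expected future dividends,” here is a minimal sketch that discounts a projected dividend stream and adds a Gordon-growth terminal value. All the numbers are hypothetical:

```python
def present_value_of_dividends(dividends, discount_rate, terminal_growth):
    """Discount a finite list of projected annual dividends, then add a
    Gordon-growth terminal value for all dividends beyond the last year."""
    pv = sum(d / (1 + discount_rate) ** t for t, d in enumerate(dividends, start=1))
    terminal = dividends[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    return pv + terminal / (1 + discount_rate) ** len(dividends)

# Hypothetical: a $2.00 dividend growing 5% a year for five years, then 2% forever,
# discounted at 9% per year.
divs = [2.00 * 1.05 ** t for t in range(5)]
print(round(present_value_of_dividends(divs, 0.09, 0.02), 2))
```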

Bob Shiller collected historical data on stock prices and dividends.

Then, starting in 1871, for each year he computed what he called the ‘ex post rational’ forecast of the stream of future dividends that would accrue to someone who bought a portfolio of the stocks that existed at that time. He did this by observing the actual dividends that got paid out and discounting them back to the year in question. After adjusting for the well-established trend that stock prices go up over long periods of time, Shiller found that the present value of dividends was… highly stable. But stock prices, which we should interpret as attempts to forecast the present value of dividends, are highly variable….(231-232, my emphasis)

Shiller demonstrated that a stock price typically moves around much more than the intrinsic value of the underlying business.

October 1987 provides yet another example of stock prices moving much more than fundamental values. The U.S. stock market dropped more than 25% from Thursday, October 15, 1987 to Monday, October 19, 1987. This happened in the absence of any important news, financial or otherwise. Writes Thaler:

If prices are too variable, then they are in some sense ‘wrong.’ It is hard to argue that the price at the close of trading on Thursday, October 15, and the price at the close of trading the following Monday – which was more than 25% lower – can both be rational measures of intrinsic value, given the absence of news.

THE BATTLE OF CLOSED-END FUNDS

It’s important to note that although the assumption of rationality and the EMH have been demonstrated not to be true – at least strictly speaking – behavioral economists have not yet invented a model of human behavior that can supplant rationalist economics. Therefore, rationalist economics, and not behavioral economics, is still the chief basis on which economists attempt to predict human behavior.

Neuroscientists, psychologists, biologists, and other scientists will undoubtedly learn much more about human behavior in the coming decades. But even then, human behavior, due to its complexity, may remain partly unpredictable for some time. Thus, rationalist economic models may continue to be useful.

  • Rationalist models, including game theory, may also be central to understanding and predicting artificially intelligent agents.
  • It’s also possible (as hard as it may be to believe) that human beings will evolve – perhaps partly with genetic engineering and/or with help from AI – and become more rational overall.

The Law of One Price

In an efficient market, the same asset cannot sell simultaneously for two different prices. Thaler gives the standard example of gold selling for $1,000 an ounce in New York and $1,010 an ounce in London. If transaction costs were small enough, a smart trader could buy gold in New York and sell it in London. This would eventually cause the two prices to converge.

But there is one obvious example that violates this law of one price: closed-end funds, which had already been written about by Ben Graham.

For an open-end fund, all trades take place at NAV (net asset value). Investors buy shares from the fund itself, and redeem shares back to the fund, at NAV – there does not have to be a seller on the other side of a purchase. So the total amount invested in an open-end fund varies depending on what investors do.

But for a closed-end fund, there is an initial amount invested in the fund, say $100 million, and then there can be no further investments and no withdrawals. A closed-end fund is traded on an exchange. So an investor can buy partial ownership of a closed-end fund, but this means that a previous owner must sell that stake to the buyer.

According to EMH, closed-end funds should trade at NAV. But in the real world, many closed-end funds trade at prices different from NAV (sometimes a premium and sometimes a discount). This is an obvious violation of the law of one price.

Charles Lee, Andrei Shleifer, and Richard Thaler wrote a paper on closed-end funds in which they identified four puzzles:

  • Closed-end funds are often sold by brokers with a sales commission of 7%. But within six months, the funds typically sell at a discount of more than 10%. Why do people repeatedly pay $107 for an asset that in six months is worth $90?
  • More generally, why do closed-end funds so often trade at prices that differ from the NAV of their holdings?
  • The discounts and premia vary noticeably across time and across funds. This rules out many simple explanations.
  • When a closed-end fund, often under pressure from shareholders, changes its structure to an open-end fund, its price often converges to NAV.

The various premia and discounts on closed-end funds simply make no sense. These mispricings would not exist if investors were rational because the only rational price for a closed-end fund is NAV.
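To pin down the numbers in the first puzzle above: the premium or discount is simply price relative to NAV. A tiny illustration using the hypothetical $107 / $90 figures:

```python
def premium_to_nav(price, nav):
    """Positive values are premiums to NAV; negative values are discounts."""
    return price / nav - 1

print(f"{premium_to_nav(107, 100):+.0%}")  # +7% at issue (the sales commission)
print(f"{premium_to_nav(90, 100):+.0%}")   # -10% discount six months later
```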

Lee, Shleifer, and Thaler discovered that individual investors are the primary owners of closed-end funds. So Thaler et al. hypothesized that individual investors have more noticeably shifting moods of optimism and pessimism. Says Thaler:

We conjectured that when individual investors are feeling perky, discounts on closed-end funds shrink, but when they get depressed or scared, the discounts get bigger. This approach was very much in the spirit of Shiller’s take on social dynamics, and investor sentiment was clearly one example of ‘animal spirits.’ (pages 241-242)

In order to measure investor sentiment, Thaler et al. used the fact that individual investors are more likely than institutional investors to own shares of small companies. Thaler et al. reasoned that if the investor sentiment of individual investors changes, it would be apparent both in the discounts of closed-end funds and in the relative performance of small companies (vs. big companies). And this is exactly what Thaler et al. found upon doing the research. The greater the discounts to NAV for closed-end funds, the larger the difference was in returns between small stocks and large stocks.

 

NEGATIVE STOCK PRICES

Years later, Thaler revisited the law of one price with a Chicago colleague, Owen Lamont. Owen had spotted a blatant violation of the law of one price involving the company 3Com. 3Com’s main business was networking computers using Ethernet technology, but through a merger it had acquired Palm, maker of the Palm Pilot, a very popular (at the time) handheld computer.

In the summer of 1999, as most tech stocks seemed to double almost monthly, 3Com stock seemed to be neglected. So management came up with the plan to divest itself of Palm. 3Com sold about 4% of its stake in Palm to the general public and 1% to a consortium of firms. As for the remaining 95% of Palm, each 3Com shareholder would receive 1.5 shares of Palm for each share of 3Com they owned.

Once this information was public, one could infer the following: As soon as the initial shares of Palm were sold and started trading, 3Com shareholders would in a sense have two separate investments. A single share of 3Com included 1.5 shares of Palm plus an interest in the remaining parts of 3Com – what’s called the “stub value” of 3Com. Note that the remaining parts of 3Com formed a profitable business in its own right. So the bottom line is that one share of 3Com should equal the “stub value” of 3Com plus 1.5 times the price of Palm.

When Palm started trading, it ended the day at $95 per share. So what should one share of 3Com be worth? It should be worth the “stub value” of 3Com – the remaining profitable businesses of 3Com (Ethernet tech, etc.) – PLUS 1.5 times the price of Palm, or 1.5 x $95, which is $143.

Again, because the “stub value” of 3Com involves a profitable business in its own right, this means that 3Com should trade at X (the stub value) plus $143, so some price over $143.

But what actually happened? The same day Palm started trading, ending the day at $95, 3Com stock fell to $82 per share. Thaler writes:

That means that the market was valuing the stub value of 3Com at minus $61 per share, which adds up to minus $23 billion! You read that correctly. The stock market was saying that the remaining 3Com business, a profitable business, was worth minus $23 billion. (page 246)

Thaler continues:

Think of it another way. Suppose an Econ is interested in investing in Palm. He could pay $95 and get one share of Palm, or he could pay $82 and get one share of 3Com that includes 1.5 shares of Palm plus an interest in 3Com.
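To restate the arithmetic in the passages above, here is a quick worked calculation of the implied stub value (my own illustration; Thaler’s minus $61 figure uses the exact closing prices, so rounded inputs give roughly minus $60.50):

```python
palm_price = 95.0        # Palm's closing price on its first trading day
com3_price = 82.0        # where 3Com actually closed the same day
palm_per_3com = 1.5      # Palm shares promised for each 3Com share

embedded_palm_value = palm_per_3com * palm_price      # about $142.50 per 3Com share
implied_stub = com3_price - embedded_palm_value       # about -$60.50 per 3Com share
print(f"Embedded Palm value per 3Com share: ${embedded_palm_value:.2f}")
print(f"Implied 3Com stub value: ${implied_stub:.2f} per share")
```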

Thaler observes that two things are needed for such a blatant violation of the law of one price to emerge and persist:

  • You need some traders who want to own shares of the now publicly traded Palm, traders who appear not to realize the basic math of the situation. These traders are called “noise traders,” because they are trading not based on real information (or real news), but based purely on “noise.” (The term “noise traders” was invented by Fischer Black. See: http://www.e-m-h.org/Blac86.pdf)
  • There also must be something preventing smart traders from driving prices back to where they are supposed to be. After all, the sensible investor can buy a share of 3Com for $82, and get 1.5 shares of Palm (worth $143) PLUS an interest in remaining profitable businesses of 3Com. Actually, the rational investor would go one step further: buy 3Com shares (at $82) and then short an appropriate number of Palm shares (at $95). When the deal is completed and the rational investor gets 1.5 shares of Palm for each share of 3Com owned, he can then use those shares of Palm to repay the shares he borrowed earlier when shorting the publicly traded Palm stock. This was a CAN’T LOSE investment. Then why wasn’t everyone trying to do it?

The problem was that there were very few shares of Palm being publicly traded. Some smart traders made tens of thousands of dollars. But there wasn’t enough publicly traded Palm stock available for any rational investor to make a huge amount of money. So the irrational prices of 3Com and Palm were not corrected.

Thaler also tells a story about a young Benjamin Graham. In 1923, DuPont owned a large number of shares of General Motors. But the market value of DuPont was about the same as its stake in GM. DuPont was a highly profitable firm. So this meant that the stock market was putting the “stub value” of DuPont’s highly profitable business at zero. Graham bought DuPont and sold GM short. He made a lot of money when the price of DuPont went up to more rational levels.

In mid-2014, says Thaler, there was a point when Yahoo’s holdings of Alibaba were calculated to be worth more than the whole of Yahoo.

Sometimes, as with the closed-end funds, obvious mispricings can last for a long time, even decades. Andrei Shleifer and Robert Vishny refer to this as the “limits of arbitrage.”

 

THALER’S CONCLUSIONS ABOUT BEHAVIORAL FINANCE

What are the implications of these examples? If the law of one price can be violated in such transparently obvious cases such as these, then it is abundantly clear that even greater disparities can occur at the level of the overall market. Recall the debate about whether there was a bubble going on in Internet stocks in the late 1990s…. (page 250)

So where do I come down on the EMH? It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful. In a world of Econs, I believe that the EMH would be true. And it would not have been possible to do research in behavioral finance without the rational model as a starting point. Without the rational framework, there are no anomalies from which we can detect misbehavior. Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research. We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed. Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’ There are definitely anomalies: sometimes the market overreacts, and sometimes it underreacts. But it remains the case that most active money managers fail to beat the market…

I have a much lower opinion about the price-is-right component of the EMH, and for many important questions, this is the more important component…

My conclusion: the price is often wrong, and sometimes very wrong. Furthermore, when prices diverge from fundamental value by such wide margins, the misallocation of resources can be quite big. For example, in the United States, where home prices were rising at a national level, some regions experienced especially rapid price increases and historically high price-to-rental ratios. Had both homeowners and lenders been Econs, they would have noticed these warning signals and realized that a fall in home prices was becoming increasingly likely. Instead, surveys by Shiller showed that these were the regions in which expectations about the future appreciation of home prices were the most optimistic. Instead of expecting mean reversion, people were acting as if what goes up must go up even more. (my emphasis)

Thaler adds that policy-makers should realize that asset prices are often wrong, and sometimes very wrong, instead of assuming that prices are always right.

 

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Investment Checklist


(Image: Zen Buddha Silence by Marilyn Barbone.)

April 23, 2017

Michael Shearn is the author of The Investment Checklist (Wiley, 2012), a very good book about how to research stocks.

For investors who have a long-term investment time horizon, micro-cap value stocks should be a major focus. I launched the Boole Microcap Fund to create a very low-cost way for investors to invest in undervalued micro-cap stocks. Boole currently uses a fully quantitative investment strategy. (Ultimately Boole will use an early form of artificial intelligence, which is a natural extension of a fully quantitative strategy.)

For investors who use a fully quantitative strategy, it’s worthwhile to review good investment checklists like Shearn’s. Although a quantitative micro-cap strategy can, in practice, rely primarily on a few simple metrics – for example, a high EBIT/EV and a high Piotroski F-Score – one should regularly look for ways to improve the formula.

 

SHEARN’S CHECKLIST

Shearn writes that he came up with his checklist by studying his own mistakes, and also by studying mistakes other investors and executives had made. Shearn says the checklist helps an investor to focus on what’s important. Shearn argues that a checklist also helps one to fight against the strong human tendency to seek confirming evidence while ignoring disconfirming evidence.

Shearn explains how the book is organized:

The three most common investing mistakes relate to the price you pay, the management team you essentially join when you invest in a company, and your failure to understand the future economics of the business you’re considering investing in. (page xv)

 

FINDING IDEAS

There are many ways to find investment ideas. One of the best ways to generate potential investment ideas is to look at micro-cap stocks trading at a high EBIT/EV (or, equivalently, a low EV/EBIT) and with a high Piotroski F-Score.

Micro-cap stocks perform best over time. See: https://boolefund.com/best-performers-microcap-stocks/

Low EV/EBIT – equivalently, high EBIT/EV – does better than the other standard measures of cheapness such as low P/E, low P/S, and low P/B. See:

  • Quantitative Value (Wiley, 2013), by Wesley Gray and Tobias Carlisle
  • Deep Value (Wiley, 2014), by Tobias Carlisle

A high Piotroski F-Score is most effective when applied to cheap micro-cap stocks. See: https://boolefund.com/joseph-piotroski-value-investing/

In sum, if you focus on micro-cap stocks trading at a high EBIT/EV and with a high Piotroski F-Score, you should regularly find many potentially good investment ideas. This is essentially the process used by the Boole Microcap Fund.
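
To make the screening step concrete, here is a minimal sketch in Python of how such a ranking might work. The data fields, the F-Score cutoff, and the portfolio size are illustrative assumptions for this sketch, not the Boole Fund’s actual parameters.

```python
# Minimal sketch of a cheapness-plus-quality screen (illustrative assumptions only).

def ebit_to_ev(ebit, market_cap, total_debt, cash):
    """EBIT divided by enterprise value; higher means cheaper."""
    enterprise_value = market_cap + total_debt - cash
    return ebit / enterprise_value if enterprise_value > 0 else float("-inf")

def screen(stocks, min_f_score=7, top_n=30):
    """Keep stocks with a high Piotroski F-Score, then rank by EBIT/EV."""
    candidates = [s for s in stocks if s["f_score"] >= min_f_score]
    candidates.sort(
        key=lambda s: ebit_to_ev(s["ebit"], s["market_cap"], s["debt"], s["cash"]),
        reverse=True,
    )
    return candidates[:top_n]

# Hypothetical universe with made-up numbers:
universe = [
    {"ticker": "AAA", "ebit": 12, "market_cap": 60, "debt": 10, "cash": 5, "f_score": 8},
    {"ticker": "BBB", "ebit": 8,  "market_cap": 90, "debt": 30, "cash": 2, "f_score": 6},
    {"ticker": "CCC", "ebit": 5,  "market_cap": 25, "debt": 0,  "cash": 8, "f_score": 7},
]
for s in screen(universe):
    print(s["ticker"], round(ebit_to_ev(s["ebit"], s["market_cap"], s["debt"], s["cash"]), 3))
```

In practice the ranking would run over the full micro-cap universe with audited fundamental data, but the two-step logic – filter on quality, rank on cheapness – is the same.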

There are, of course, many other good ways to find ideas. Shearn mentions forced selling, such as when a stock is dropped from an index. Also, spin-offs typically involve some forced selling. Moreover, the 52-week low list and other new-low lists often present interesting ideas.

Looking for the areas of greatest distress can lead to good investment opportunities. For instance, some offshore oil drillers appear to be quite cheap from a three- to five-year point of view assuming oil returns to a market clearing price of $60-70.

 

CONCENTRATED VS. QUANTITATIVE

A fully quantitative approach can work quite well. Ben Graham, the father of value investing, often used a fully quantitative approach. Graham constructed a portfolio of the statistically cheapest stocks, according to various metrics like low P/E or low P/B.

I’ve already noted that the Boole Microcap Fund uses a fully quantitative approach: micro-cap stocks with a high EBIT/EV and a high Piotroski F-Score. This particular quantitative strategy has the potential to beat both the Russell Microcap Index and the S&P 500 Index by solid margins over time.

But there are a few ways that you can possibly do better than the fully quantitative micro-cap approach I’ve outlined. One way is using the same quantitative approach as a screen, doing in-depth research on several hundred candidates, and then building a very concentrated portfolio of the best 5 to 8 ideas.

In practice, it is extremely difficult to make the concentrated approach work. The vast majority of investors are better off using a fully quantitative approach (which selects the best 20 to 30 ideas, instead of the best 5 to 8 ideas).

The key ingredient to make the concentrated strategy work is passion. Some investors truly love learning everything possible about hundreds of companies. If you develop such a passion, and then apply it for many years, it’s possible to do better than a purely quantitative approach, especially if you’re focusing on micro-cap stocks. Micro-cap stocks are the most inefficiently priced part of the market because most professional investors never look there. Moreover, many micro-cap companies are relatively simple businesses that are easier for the investor to understand.

I’m quite passionate about value investing, including micro-cap value investing. But I’m also passionate about fully automated investing, whether via index funds or quantitative value funds. I know that low-cost broad market index funds are the best long-term investment for most investors. Low-cost quantitative value funds – especially if focused on micro caps – can do much better than low-cost broad market index funds.

I am more passionate about perfecting a fully quantitative investment strategy – ultimately by using an early form of artificial intelligence – than I am about studying hundreds of micro-cap companies in great detail. I know that a fully quantitative approach that picks the best 20 to 30 micro-cap ideas is very likely to perform better than my best 5 to 8 micro-cap ideas over time.

Also, once value investing can be done well by artificial intelligence, it won’t be long before the best AI value investor will be better than the best human value investor. Very few people thought that a computer could beat Garry Kasparov at chess, but IBM’s Deep Blue achieved this feat in 1997. Similarly, few people thought that a computer could beat human Jeopardy! champions. But IBM’s Watson trounced Ken Jennings and Brad Rutter at Jeopardy! in 2011.

Although investing is far more complex than chess or Jeopardy!, there is no reason to think that a form of artificial intelligence will not someday be better than the best human investors. This might not happen for many decades. But that it eventually will happen is virtually inevitable. Scientists will figure out, in ever more detail, exactly how the human brain functions. And scientists will eventually design a digital brain that can do everything the best human brain can do.

The digital brain will get more and more powerful, and faster and faster. And at some point, the digital brain is likely to gain the ability to accelerate its own evolution (perhaps by re-writing its source code). Some have referred to such an event – a literal explosion in the capabilities of digital superintelligence, leading to an explosion in technological progress – as the singularity.

 

UNDERSTANDING THE BUSINESS

If you’re going to try to pick stocks, then, notes Shearn, a good question to ask is: How would you evaluate this business if you were to become its CEO?

If you were to become CEO of a given business, then you’d want to learn everything you could about the industry and about the company. To really understand a business can easily take 6-12 months or even longer, depending on your prior experience and prior knowledge, and also depending upon the size and complexity of the business. (Micro-cap companies tend to be much easier to understand.)

You should read at least ten years’ worth of annual reports (if available). If you’re having difficulty understanding the business, Shearn recommends asking yourself what the customer’s world would look like if the business (or industry) did not exist.

You should understand exactly how the business makes money. You’d also want to understand how the business has evolved over time. (Many businesses include their corporate history on their website.)

 

UNDERSTANDING THE BUSINESS – FROM THE CUSTOMER PERSPECTIVE

Shearn writes:

The more you can understand a business from the customer’s perspective, the better position you will be in to value that business, because satisfied customers are the best predictor of future earnings for a business. As Dave and Sherry Gold, co-founders of dollar store retailer 99 Cent Only Stores, often say, ‘The customer is CEO.’ (page 39)

To gain an understanding of the customers, Shearn recommends that you interview some customers. Most investors never interview customers. So if you’re willing to spend the time interviewing customers, you can often gain good insight into the business that many other investors won’t have.

Shearn says it’s important to identify the core customers, since often a relatively small percentage of customers will represent a large chunk of the company’s revenues. Core customers may also reveal how the business caters to them specifically. Shearn gives an example:

Paccar is a manufacturer of heavy trucks that is a great example of a company that has built its product around its core customer, the owner operator. Owner operators buy the truck they drive and spend most of their time in it. They work for themselves, either contracting directly with shippers or subcontracting with big truck companies. Owner operators care about quality first, and want amenities, such as noise-proofed sleeper cabins with luxury-grade bedding and interiors. They also want the truck to look sharp, and Paccar makes its Peterbilt and Kenworth brand trucks with exterior features to please this customer. Paccar also backs up the driver with service features, such as roadside assistance and a quick spare parts network. Because owner operators want this level of quality and service, they are less price sensitive, and they will pay 10 percent more for these brands. (page 42)

Shearn writes that you want to find out how easy or difficult it is to convince customers to buy the products or services. Obviously a business with a product or service that customers love is preferable as an investment, other things being equal.

A related question is: what is the customer retention rate? The longer the business retains a customer, the more profitable the business is. Also, loyal customers make future revenues more predictable, which in itself can lead to higher profits. Businesses that carefully build long-term relationships with their customers are more likely to do well. Are sales people rewarded just for bringing in a customer, or are they also rewarded for retaining a customer?

You need to find out what pain the business alleviates for the customer, as well. Similarly, you want to find out how essential the product or service is. This will give you insight into how important the product or service is for the customers. Shearn suggests the question: If the business disappeared tomorrow, what impact would this have on the customer base?

 

EVALUATING THE STRENGTHS AND WEAKNESSES OF A BUSINESS AND INDUSTRY

Not only do you want to find out if the business has a sustainable competitive advantage, but you also want to learn whether the industry itself is a good one, writes Shearn. And you want to find out about supplier relations.

Shearn lists common sources of sustainable competitive advantage:

  • Network economics
  • Brand loyalty
  • Patents
  • Regulatory licenses
  • Switching costs
  • Cost advantages stemming from scale, location, or access to a unique asset

If a product or service becomes more valuable if more customers use it, then the business may have a sustainable competitive advantage from network economics. Facebook becomes more valuable to a wider range of people as more and more people use it.

If customers are loyal to a particular brand and if the business can charge a premium price, this creates a sustainable competitive advantage. Coca-Cola has a very strong brand. So does See’s Candies (owned by Berkshire Hathaway).

A patent legally protects a product or service over a 17- to 20-year period. If a patented product or service has commercial value, then the patent is a source of sustainable competitive advantage.

Regulatory licenses – by limiting competition – can be a source of sustainable competitive advantage.

Switching costs can create a sustainable competitive advantage. If it has taken time to learn new software, for example, that can create a high switching cost.

There are various cost advantages that can be sustainable. If there are high fixed-costs in a given industry, then as a business grows larger, it can benefit from lower per-unit costs. Sometimes a business has a cost advantage by its location or by access to a unique asset.

Sustainable Competitive Advantages Are Rare

Even if a business has had a sustainable competitive advantage for some time, that does not guarantee that it will continue to have one going forward. Any time a business is earning a high ROIC – more specifically, a return on invested capital that is higher than the cost of capital – competitors will try to take some of those excess returns. That is the essence of capitalism. High ROIC usually reverts to the mean (average ROIC) due to competition and/or due to changes in technology.

Most Investment Gains Are Made During the Development Phase

Shearn points out that most of the gains from a sustainable competitive advantage come when the business is still developing, rather than when the business is already established. The biggest gains on Wal-Mart’s stock occurred when the company was developing. Similarly for Microsoft, Amazon, or Apple.

Pricing Power

Pricing power is usually a function of a sustainable competitive advantage. Businesses that have pricing power tend to have a few characteristics in common, writes Shearn:

  • They usually have high customer-retention rates
  • Their customers spend only a small percentage of their budget on the business’s product or service
  • Their customers have profitable business models
  • The quality of the product is more important than the price

Nature of Industry and Competitive Landscape

Some industries, like software, may be considered “good” in that the best companies have a sustainable competitive advantage as represented by a sustainably high ROIC.

But an industry with high ROICs, like software, is hyper-competitive. Competition and/or changes in technology can cause previously unassailable competitive advantages to disappear entirely.

It’s important to examine companies that failed in the past. Why did they fail?

IMPORTANT: Stock Price

For a value investor, a low-quality asset can, depending upon the price, be a much better investment than a high-quality asset. This is a point Shearn doesn’t mention but should. As Howard Marks explains:

A high-quality asset can constitute a good or bad buy, and a low-quality asset can constitute a good or bad buy. The tendency to mistake objective merit for investment opportunity, and the failure to distinguish between good assets and good buys, gets most investors into trouble.

Supplier Relations

Does the business have a good relationship with its suppliers? Does the business help suppliers to innovate? Is the business dependent on only a few suppliers?

 

MEASURING THE OPERATING AND FINANCIAL HEALTH OF THE BUSINESS

Shearn explains why the fundamentals – the things a business has to do in order to be successful – are important:

As an investor, identifying and tracking fundamentals puts you in a position to more quickly evaluate a business. If you already understand the most critical measures of a company’s operational health, you will be better equipped to evaluate unexpected changes in the business or outside environment. Such changes often present buying opportunities if they affect the price investors are willing to pay for a business without affecting the fundamentals of the business. (page 99)

Moreover, there are specific operating metrics for a given business or industry that are important to track. Monitoring the right metrics can give you insight into any changes that may be significant. Shearn lists the following industry primers:

  • Reuters Operating Metrics
  • Standard & Poor’s Industry Surveys
  • Fisher Investment guides

Shearn also mentions that internet searches and books are sources for industry metrics. Furthermore, there are trade associations and trade journals.

Shearn suggests monitoring the appropriate metrics, and writing down any changes that occur over three- to five-year periods. (Typically a change over just one year is not enough to draw a conclusion.)

Key Risks

Companies list their key risks in the 10-K in the section Risk Factors. It is obviously important to identify what can go wrong. Shearn:

…it is important for you to spend some time in this section and investigate whether the business has encountered the risks listed in the past and what the consequences were. This will help you understand how much impact each risk may have. (page 106)

You would like to identify how each risk could impact the value of the business. You may want to use scenario analysis of the value of the business in order to capture specific downside risks.

Shearn advises thinking like an insurance underwriter about the risks for a given business. What is the frequency of a given risk – in other words, how often has it happened in the past? And what is the severity of a given risk – if the downside scenario materializes, what impact will that have on the value of the business? It is important to study what has happened in the past to similar businesses and/or to businesses that were in similar situations. This allows you to develop a better idea of the frequency – i.e., the base rate – of specific risks.
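
As a rough illustration of this underwriting mindset, you can attach a base-rate frequency and a severity estimate to each listed risk and compute an expected impact. The risks and numbers below are hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical sketch: expected impact of each risk = frequency (base rate) x severity.
risks = [
    # (risk, estimated annual probability, estimated hit to intrinsic value per share)
    ("Key customer loss",        0.10, 4.00),
    ("Commodity price collapse", 0.20, 6.00),
    ("Adverse regulation",       0.05, 2.00),
]

for name, prob, severity in risks:
    print(f"{name}: expected impact ~ ${prob * severity:.2f} per share")

total = sum(prob * severity for _, prob, severity in risks)
print(f"Total expected impact ~ ${total:.2f} per share")
```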

Is the Balance Sheet Strong or Weak?

A strong balance sheet allows the business not only to survive, but in some cases, to thrive by being able to take advantage of opportunities. A weak balance sheet, on the other hand, can mean the difference between temporary difficulties and insolvency.

You need to figure out if future cash flows will be enough to make future debt payments.

For value investors in general, the advice given by Graham, Buffett, and Munger is best: Avoid companies with high debt. The vast majority of the very best value investments ever made involved companies with low debt or no debt. Therefore, it is far simpler just to avoid companies with high debt.

Occasionally there may be equity stub situations where the potential upside is so great that a few value investors may want to carefully consider them. Then you would have to determine what the liquidity needs of the business are, what the debt-maturity schedule is, whether the interest rates are fixed or variable, what the loan covenants indicate, and whether specific debts are recourse or non-recourse.
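
For those rare equity stub situations, a simple first check – sketched below with hypothetical figures – is whether projected free cash flow plus cash on hand can cover the debt-maturity schedule year by year.

```python
# Hypothetical sketch: can projected free cash flow plus cash on hand cover debt maturities?
cash_on_hand = 50.0                      # $ millions
projected_fcf = [40, 45, 50, 55, 60]     # next five years, $ millions
debt_maturities = [20, 30, 10, 200, 25]  # principal due each year, $ millions

balance = cash_on_hand
for year, (fcf, due) in enumerate(zip(projected_fcf, debt_maturities), start=1):
    balance += fcf - due
    status = "OK" if balance >= 0 else "SHORTFALL (refinancing or dilution risk)"
    print(f"Year {year}: cash cushion = {balance:.0f} -> {status}")
```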

Return on Reinvestment or RONIC

It’s not high historical ROIC that counts:

What counts is the ability of a business to reinvest its excess earnings at a high ROIC, which is what creates future value. (page 129)

You need to determine the RONIC – return on new invested capital. How much of the excess earnings can the company reinvest and at what rate of return?
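
A common way to express this, used here only as a back-of-the-envelope sketch (the formula is standard, the numbers are hypothetical): earnings growth is roughly the reinvestment rate times the RONIC.

```python
# Hypothetical sketch: earnings growth ~ reinvestment rate x RONIC.
earnings = 100.0          # current after-tax operating earnings, $ millions
reinvestment_rate = 0.40  # fraction of earnings reinvested in the business
ronic = 0.15              # return earned on the newly invested capital

implied_growth = reinvestment_rate * ronic
next_year_earnings = earnings * (1 + implied_growth)

print(f"Implied earnings growth: {implied_growth:.1%}")      # 6.0%
print(f"Next year's earnings: ${next_year_earnings:.1f}M")   # $106.0M
```

A business that can reinvest a large share of its earnings at a high RONIC compounds value; one that can only reinvest at its cost of capital creates no value from growth.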

How to Improve ROIC

Shearn gives two ways a business can improve its ROIC:

  • Using capital more efficiently, such as managing inventory better or managing receivables better, or
  • Increasing profit margins through operating improvements, rather than through one-time, non-operating boosts to cash earnings.

A supermarket chain has low net profit margins, so it must have very high inventory turnover to generate a high ROIC. A steel manufacturer, on the other hand, has low asset turnover; therefore, it must achieve a high profit margin in order to generate a high ROIC.
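
The supermarket-versus-steel contrast reflects the standard decomposition ROIC ≈ profit margin × capital turnover. A minimal sketch with illustrative numbers:

```python
# Illustrative sketch: ROIC ~ after-tax operating margin x invested-capital turnover.
def roic(nopat_margin, capital_turnover):
    return nopat_margin * capital_turnover

# Supermarket: thin margins, very fast turnover.
print(f"Supermarket ROIC ~ {roic(0.02, 8.0):.0%}")   # 2% margin x 8x turnover = 16%

# Steel maker: slow turnover, so it needs a fat margin to reach the same ROIC.
print(f"Steel maker ROIC ~ {roic(0.16, 1.0):.0%}")   # 16% margin x 1x turnover = 16%
```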

 

EVALUATING THE DISTRIBUTION OF EARNINGS (CASH FLOWS)

Scenario analysis is useful when there is a wide range of future earnings. As mentioned earlier, some offshore oil drillers appear very cheap right now on the assumption that oil returns to a market clearing price of $60-70 a barrel within the next few years. If it takes five years for oil to return to $60-70, then many offshore oil drillers will have lower intrinsic value (a few may not survive). If it takes three years (or less) for oil to return to $60-70, then some offshore drillers are likely very cheap compared to their normalized earnings.
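
A minimal sketch of such a scenario analysis, using hypothetical probabilities and per-share values for an offshore driller (the scenarios and numbers are illustrative, not a forecast):

```python
# Hypothetical sketch: probability-weighted intrinsic value across oil-price scenarios.
scenarios = [
    # (scenario, probability, estimated intrinsic value per share)
    ("Oil back to $60-70 within 3 years", 0.45, 30.0),
    ("Oil back to $60-70 in ~5 years",    0.35, 15.0),
    ("Prolonged slump / heavy dilution",  0.15,  5.0),
    ("Insolvency",                        0.05,  0.0),
]

expected_value = sum(p * v for _, p, v in scenarios)
print(f"Probability-weighted value ~ ${expected_value:.2f} per share")
# Compare this figure -- and the downside scenarios -- with the current stock price.
```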

Compare Cash Flow from Operations to Net Income

As Shearn remarks, management has much less flexibility in manipulating cash flow from operations than it does net income, because the latter includes many subjective estimates. Over the past one to five years, cash flow from operations should closely approximate net income; otherwise, there may be earnings manipulation.
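
One simple way to apply this check, sketched below with hypothetical figures, is to compare cumulative cash flow from operations to cumulative net income over several years; a persistently large gap is a flag worth investigating.

```python
# Hypothetical sketch: cumulative cash flow from operations vs. cumulative net income.
net_income = [100, 110, 125, 140, 150]   # five years, $ millions
cfo        = [ 95,  80,  85,  90,  95]   # cash flow from operations, $ millions

ratio = sum(cfo) / sum(net_income)
print(f"Cumulative CFO / cumulative net income = {ratio:.2f}")
if ratio < 0.8:  # the 0.8 threshold is an illustrative rule of thumb, not from Shearn
    print("CFO lags net income badly -- investigate accruals and revenue recognition.")
```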

Accounting

If the accounting is conservative and straightforward, that should give you more confidence in management than if the accounting is liberal and hard to understand. Shearn lists some ways management can manipulate earnings:

  • Improperly inflating sales
  • Under- or over-stating expenses
  • Manipulating discretionary costs
  • Changing accounting methods
  • Using restructuring charges to increase future earnings
  • Creating reserves by manipulating estimates

Management can book a sale before the revenue is actually earned in order to inflate revenues.

Management can capitalize an expense over several time periods, which shifts some current expenses to later periods thereby boosting short-term earnings. Expenses commonly capitalized include start-up costs, R&D expenses, software development, maintenance costs, marketing, and customer-acquisition costs. Shearn says you can find out whether a business routinely capitalizes its costs by reading the footnotes to the financial statements.

Manipulating discretionary costs is common, writes Shearn. Most companies try to meet their quarterly earnings goals. Most great owner operator businesses – like Warren Buffett’s Berkshire Hathaway or Henry Singleton’s Teledyne – spend absolutely no time worrying about short-term (including quarterly) earnings.

Managers often extend the useful life of particular assets, which reduces quarterly depreciation expenses.

A business reporting a large restructuring loss may add extra expenses in the restructuring charge in order to reduce future expenses (and boost future earnings).

Management can overstate certain reserve accounts in order to draw on those reserves during future bad times (in order to boost earnings during those bad times). Reserves can be booked for: bad debts, sales returns, inventory obsolescence, warranties, product liability, litigation, or environmental contingencies.

Operating Leverage

If a business has high operating leverage, then it is more difficult to forecast future earnings. Again, scenario analysis can help in this situation.

High operating leverage means that a relatively small change in revenues can have a large impact on earnings. A business with high fixed costs has high operating leverage, whereas a business with low fixed costs has low operating leverage.

For example, as Shearn records, in 2008, Boeing reported that revenues decreased 8.3 percent and operating income decreased 33.9 percent.
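
Using the Boeing figures Shearn cites, the degree of operating leverage works out to roughly 33.9 / 8.3, or about 4: each 1 percent drop in revenue cost about 4 percent of operating income that year.

```python
# Degree of operating leverage implied by the 2008 Boeing figures cited by Shearn.
pct_change_revenue = -8.3            # percent
pct_change_operating_income = -33.9  # percent

dol = pct_change_operating_income / pct_change_revenue
print(f"Degree of operating leverage ~ {dol:.1f}")  # ~ 4.1
```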

Working Capital

Shearn explains:

The amount of working capital a business needs depends on the capital intensity and the speed at which a business can turn its inventory into cash. The shorter the commitment or cycle, the less cash is tied up and the more a business can use the cash for other internal purposes. (page 163)

Boeing takes a long time to turn sheet metal and various electronics into an airplane. Restaurants, on the other hand, turn inventories into cash quite quickly.

The Cash Conversion Cycle (CCC) tells you how quickly a company can turn its inventory and receivables into cash and pay its short-term obligations.

CCC = Inventory conversion period (days) + Receivables conversion period (days) – Payables conversion period (days)
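
A minimal sketch of the calculation, using illustrative balance-sheet and income-statement figures (the conversion-period conventions below are the standard ones, but the numbers are made up):

```python
# Illustrative sketch: cash conversion cycle from annual figures.
def days(balance, annual_flow):
    """Convert a balance-sheet item into days of the related annual flow."""
    return 365.0 * balance / annual_flow

inventory, receivables, payables = 80.0, 60.0, 50.0   # $ millions
cogs, revenue = 400.0, 600.0                          # $ millions per year

ccc = days(inventory, cogs) + days(receivables, revenue) - days(payables, cogs)
print(f"Cash conversion cycle ~ {ccc:.0f} days")      # ~ 64 days
```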

When a company has more current liabilities than current assets, that means it has negative working capital. In this situation, the customers and suppliers are financing the business, so growth is less expensive. Typically cash flow from operations will exceed net income for a business with negative working capital.

Negative working capital is only good as long as sales are growing, notes Shearn.

MANAGEMENT – BACKGROUND AND CLASSIFICATION

Sound management is usually essential for a business to do well, although ideally, as Buffett joked, you want a business so good that any idiot can run it, because eventually one will.

Shearn offers good advice on how to judge management:

It is best to evaluate a management team over time. By not rushing into investment decisions and by taking the time to understand a management team, you can reduce your risk of misjudging them. Most errors in assessing managers are made when you try to judge their character quickly or when you see only what you want to see and ignore flaws or warning signs. The more familiar you are with how managers act under different types of circumstances, the better you are able to predict their future actions. Ideally, you want to understand how managers have operated in both difficult and favorable circumstances. (pages 174-175)

Types of managers

  • Owner-operator
  • Long-tenured manager
  • Hired hand

An owner-operator is a manager who has a genuine passion for the business and is typically the founder. Shearn gives examples:

  • Sam Walton, founder of Wal-Mart
  • Dave and Sherry Gold, co-founders of 99 Cent Only Stores
  • Joe Mansueto, founder of Morningstar
  • John Mackey, co-founder of Whole Foods Market
  • Warren Buffett, CEO of Berkshire Hathaway
  • Founders of most family-controlled businesses

Shearn continues:

These passionate leaders run the business for key stakeholders such as customers, employees, and shareholders alike… They typically are paid modestly and have high ownership interests in the business. (page 177)

(Shearn also defines a second and third type of owner-operator to the extent that the owner-operator runs the business for their own benefit.)

A long-tenured manager has worked at the business for at least three years. (A second type of long-tenured manager joined from outside the business, but worked in the same industry.)

A hired hand is a manager who has joined from outside the business, but who has worked in a related industry. (Shearn defines a second type of hired hand who has worked in a completely unrelated industry.)

The Importance of Tenure in Operating the Business

Out of the 500 businesses in the S&P 500, only 28 have CEOs who have held office for more than 15 years (this is as of the year 2012, when Shearn was writing). Of these 28 long-term CEOs, 25 of them had total shareholder returns during their tenures that beat the S&P 500 index (including dividends reinvested).

Management Style: Lions and Hyenas

Based on an interview with Seng Hock Tan, Shearn distinguishes between Lion Managers and Hyena Managers.

Lion Manager:

  • Committed to ethical and moral values
  • Thinks long term and maintains a long-term focus
  • Does not take shortcuts
  • Thirsty for knowledge and learning
  • Supports partners and alliances
  • Treats employees as partners
  • Admires perseverance

Hyena Manager:

  • Has little interest in ethics and morals
  • Thinks short term
  • Just wants to win the game
  • Has little interest in knowledge and learning
  • A survivor and an opportunist
  • Treats employees as expenses
  • Admires tactics, resourcefulness, and guile

Operating Background

Shearn observes that it can be risky to have a top executive who does not have a background in the day-to-day operations of the business.

Low Salaries and High Stock Ownership

Ideally, managers will be incentivized through high stock ownership (and comparatively low salaries) tied to building long-term business value. This aligns management incentives with shareholder interests.

You also want managers who are generous to all employees in terms of stock ownership. This means the managers and employees have similar incentives (which are aligned with shareholder interests).

Finally, you want managers who gradually increase their ownership interest in the business over time.

 

MANAGEMENT – COMPETENCE

Obviously you prefer a good manager, not only because the business will tend to do better over time, but also because you won’t have to spend time worrying.

Shearn on a CEO who manages the business for all stakeholders:

If you were to ask investors whether shareholder value is more important than customer service at a business, most would answer that it is. What they fail to consider is that shareholder value is a byproduct of a business that keeps its customers happy. In fact, many of the best-performing stocks over the long term are the ones that balance the interests of all stakeholder groups, including customers, employees, suppliers, and other business partners. These businesses are managed by CEOs who have a purpose greater than solely generating profits for their shareholders. (pages 210-211)

Shearn mentions John Mackey, co-founder and CEO of Whole Foods Market, who coined the term conscious capitalism to describe businesses designed to benefit all stakeholders. Shearn quotes Mackey:

Long-term profits come from having a deeper purpose, great products, satisfied customers, happy employees, great suppliers, and from taking a degree of responsibility for the community and environment we live in. The paradox of profits is that, like happiness, they are best achieved by not aiming directly for them.

Continuous Incremental Improvement

Shearn:

Contrary to popular belief, most successful businesses are built on hundreds of small decisions, instead of on one well-formulated strategic plan. For example, when most successful entrepreneurs start their business, they do not have a business plan stating what their business will look like in 2, 5, or 10 years. They instead build their business day by day, focusing on customer needs and letting these customer needs shape the direction of their business. It is this stream of everyday decisions over time that accounts for great outcomes, instead of big one-time decisions….

Another common theme among businesses that improve day by day is that they operate on the premise that it is best to repeatedly launch a product or service with a limited number of its customers so that it can use customer reactions and feedback to modify it. They operate on the premise that it is okay to learn from mistakes…

You need to determine if the management team you are investing in works on proving a concept before investing a lot of capital in it or whether it prefers to put a lot of money in all at once hoping for a big payoff. (page 215)

PIPER = persistent incremental progress eternally repeated

As CEO, Henry Singleton was one of the best capital allocators in American business history. Under Singleton, Teledyne stock compounded at 17.9 percent over 25 years (or a 53x return, vs. 6.7x for the S&P 500 Index).

Singleton believed that the best plan was no plan, as he once explained at an annual meeting:

…we’re subject to a tremendous number of outside influences, and the vast majority of them cannot be predicted. So my idea is to stay flexible. I like to steer the boat each day rather than plan ahead way into the future.

Shearn points out that one major problem with a strategic plan is the commitment and consistency principle (see Robert Cialdini’s Influence). When people make a public statement, they tend to have a very difficult time admitting they were wrong and changing course when the evidence calls for it. Similarly, notes Shearn, strategic plans can make people blind to other opportunities.

When managers give short-term guidance, it can have similar effects as a strategic plan. People may make decisions that harm long-term business value just in order to hit short-term (statistically meaningless) numbers. Also, managers may even start borrowing from the future in order to meet the numbers. Think of Enron, WorldCom, Tyco, Adelphia, and HealthSouth, says Shearn.

Does management value its employees?

Shearn:

…Try to understand if the management team values its employees because the only way it will obtain positive results is through these people.

When employees feel they are partners with their boss in a mutual effort, rather than merely employees of some business run by managers they never see, morale will increase. Furthermore, when a business has good employee relations, it typically has many other good attributes, such as good customer relations and the ability to adapt quickly to changing economic circumstances. (page 225)

Are the CEO and CFO disciplined in making capital allocation decisions?

As Shearn observes, operating a business and allocating capital involve two completely different skill sets. Many CEOs do not have skill in capital allocation. Capital allocation includes:

  • Investing in new projects
  • Holding cash on the balance sheet
  • Paying dividends
  • Buying back stock
  • Making acquisitions

Shearn writes:

One of the best capital allocators in corporate history was Henry Singleton, longtime CEO of Teledyne, who cofounded the business in 1960 and served as CEO until 1986. In John Train’s book The Money Masters, Warren Buffett reported that he believes ‘Henry Singleton has the best operating and capital-deployment record in American business.’ When Teledyne’s stock was trading at extremely high prices in the 1960s, Singleton used the high-priced stock as currency to make acquisitions. Singleton made more than 130 acquisitions of small, high-margin manufacturing and technology businesses that operated in defensible niches managed by strong management. When the price-to-earnings ratio of Teledyne fell sharply starting in the 1970s, he repurchased stock. Between 1972 and 1984, he reduced the share count by more than 90 percent. He repurchased stock for as low as $6 per share in 1972, which by 1987 traded at more than $400 per share. (page 249)

 

MANAGEMENT – POSITIVE AND NEGATIVE TRAITS

Does the CEO love the money or the business?

This question comes from Warren Buffett. Buffett looks for CEOs who love the business. CEOs who are passionate about their business are more likely to persevere through many difficulties and over long periods of time. CEOs who are passionate about their business are more likely to excel over the long term. As Steve Jobs said in his commencement address to Stanford University students in 2005:

The only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.

If someone has stayed in one industry for a long time, odds are they love their work. If a CEO is very focused on the business, and not worried about appearances or large social or charity events, that’s a good sign the CEO is passionate about the business. Does the CEO direct philanthropic resources to causes they truly care about, or are they involved in ‘social scene philanthropy’?

Are the Managers Lifelong Learners Who Focus on Continuous Improvement?

Lifelong learners are managers who are never satisfied and continually find ways to improve the way they run a business. This drive comes from their passion for the business. It is extremely important for management to constantly improve, especially if a business has been successful for a long period of time. Look for managers who regard success as a base from which they continue to grow, rather than as a final accomplishment. (page 263)

How Have They Behaved Under Adversity?

Shearn:

You never truly know someone’s character until you have seen it tested by stress, adversity, or a crisis, because a crisis produces extremes in behavior… (page 264)

You need to determine how a manager responds to a difficult situation and then evaluate the action they took. Were they calm and intentional in dealing with a negative situation, or were they reactive instead? (page 266)

The best managers are those who quickly and openly communicate how they are thinking about the problem and outline how they are going to solve it. (page 267)

Does Management Think Independently?

…The best managers always maintain a long-term focus, which means that they are often building for years before they see concrete results. For example, in 2009, Jeff Bezos, founder of online retailer Amazon.com, talked about the way that some investors congratulate Amazon.com on success in a single reporting period. ‘I always tell people, if we have a good quarter, it’s because of the work we did three, four, and five years ago. It’s not because we did a good job this quarter.’ (page 275)

The best CEOs think independently.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

How To Master Yourself As An Investor


(Image: Zen Buddha Silence by Marilyn Barbone.)

March 12, 2017

James Montier, an expert in behavioral finance, has written several excellent books, including The Little Book of Behavioral Investing: How Not to Be Your Own Worst Enemy (Wiley, 2010).

Montier begins with a quote from the father of value investing, Ben Graham, who clearly understood (decades before behavioral finance was invented) that overcoming your own emotions is often the greatest challenge for an investor:

The investor’s chief problem – and even his worst enemy – is likely to be himself.

The problem, as Daniel Kahneman observes in his great book Thinking, Fast and Slow, is that we can easily recognize cognitive errors in others, but not in ourselves. Yet we all suffer from cognitive biases.

If you can learn to recognize when your brain may make a cognitive error, and if you can learn systems to overcome it, then you can do much better as an investor. Here are three systems that can help you to minimize the impact of cognitive biases in order to maximize your long-term investment results:

  • Invest in low-cost broad market index funds. If you do this, then you’ll do better than 90% of all investors after several decades. Also, this approach takes very little time to implement and maintain.
  • Invest in a quantitative value fund (or in several such funds). This is a fund that automatically buys the statistically cheapest stocks, year in and year out. If done properly, this approach should do better than broad market index funds over time. One of the most successful quantitative value investors is LSV Asset Management. See: http://lsvasset.com/
  • Do value investing on your own. If you really enjoy learning about various businesses and if you enjoy the process of value investing, then with the right systems, you can learn to do well over time. As an individual investor, you can look at tiny stocks overlooked by most professionals. Also, you can easily concentrate your portfolio if you happen to come across any extremely cheap stocks. (A young Warren Buffett once put his entire personal portfolio into one stock: GEICO.)

Note that the first two systems involve fully automated investing, which drastically reduces the number of decisions you make and therefore almost entirely removes the possibility of cognitive error. The vast majority of investors are much better off using a fully automated approach, whether index funds or quantitative value funds.

(There are some actively managed value funds whose managers possess great skill, but usually it’s not possible for an ordinary investor to invest with them. Also, many of these funds have gotten quite large, so performance over the next decade or two will likely be noticeably lower.)

 

TWO SYSTEMS

It is now broadly recognized that there are two different systems operating in the human brain:

System 1: Operates automatically and quickly; makes instinctual decisions based on heuristics.

System 2: Allocates attention (which has a limited budget) to the effortful mental activities that demand it, including logic, statistics, and complex computations.

As Daniel Kahneman points out (see Thinking, Fast and Slow), System 1 is amazingly good at what it does. Its models of familiar situations are accurate. Its short-term predictions are usually accurate. And its initial reactions to challenges are swift and generally appropriate. System 1 does all of these things automatically, and without needing any help from System 2.

The problem is that there are some situations in modern life – especially if a good decision requires the proper use of logic or statistics – where System 1 suffers from cognitive biases, causing systematic errors. Most people, even if highly educated, rely largely on System 1.

System 2 can be trained over time to do logic, math, and statistics. But in the presence of high levels of stress or fatigue, System 2 can easily be derailed or not activated at all. Self-aware people recognize this, and can learn countermeasures such as meditation (which can dramatically reduce stress) or postponing important decisions (if possible).

To show how most people rely on System 1, even for math and logic, Montier describes the Cognitive Reflection Test (CRT), invented by Shane Frederick.

If you’re reading this, try answering these three questions:

  1. A bat and a ball together cost $1.10 in total. The bat costs a dollar more than the ball. How much does the ball cost?
  2. If it takes five minutes for five machines to make five widgets, how long would it take 100 machines to make 100 widgets?
  3. In a lake there is a patch of lily pads. Every day the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long will it take to cover half the lake?

Montier writes that only 17% out of 3,500 people tested by Frederick got all three questions right. 33% got none right! Even among the best performing group – MIT students – only 48% managed to get all three right.

Montier gave the CRT to 600 professional investors. Only 40% got all three right. 10% didn’t get any right.
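
For readers who want to check their answers, here is a quick worked sketch. The correct answers are $0.05, 5 minutes, and 47 days; the intuitive System 1 answers ($0.10, 100 minutes, 24 days) are all wrong.

```python
# Working through the three CRT questions.

# 1. Bat and ball: ball + (ball + 1.00) = 1.10, so 2 x ball = 0.10.
ball = (1.10 - 1.00) / 2
print(f"The ball costs ${ball:.2f}")   # $0.05, not the intuitive $0.10

# 2. Each machine makes one widget in 5 minutes, so 100 machines
#    make 100 widgets in the same 5 minutes.
print("100 machines take 5 minutes")

# 3. The patch doubles daily, so it covers half the lake exactly one day
#    before it covers the whole lake: day 47.
print("Half the lake is covered on day 47")
```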

Not doing well on the CRT is correlated with many behavioral errors, says Montier. Doing well in investing is more about temperament and rationality than it is about IQ. Montier quotes Buffett:

Success in investing doesn’t correlate with IQ once you’re above the level of 125. Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing.

It takes years of work to improve as an investor. But the great thing is that you can improve at investing your entire life. All the knowledge and experience is cumulative, as Buffett says.

 

PREPARE, PLAN, PRE-COMMIT

The empathy gap is our inability to predict how we will behave while under emotional strain. In the world of investing, many react emotionally to negative news or to falling stock prices, which leads to poor decisions.

In order to prevent ourselves from making poor decisions, we need to pre-commit to a plan, or (even better) follow a fully automated investment strategy.

Sir John Templeton explains how to do well as an investor:

The time of maximum pessimism is the best time to buy, and the time of maximum optimism is the best time to sell.

To do this consistently, we need to pre-commit to a plan or use fully automated investing. System 1 reacts to dropping prices with fear and loss aversion. Fear causes people to ignore great bargains when stock prices are plummeting. And fearful people become even more fearful if they have already suffered from losses. Many investors just say, ‘Get me out of the market [or the stock] at any price’ in order to relieve their pain. But what they should be doing is buying (or holding) the cheapest stocks available.

In order to overcome his own cognitive biases, John Templeton pre-committed to buying cheap stocks during a bear market: he placed buy orders for many stocks well below their current prices.
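
A minimal sketch of such a pre-commitment schedule is below; the price tiers and cash allocations are illustrative assumptions, not Templeton’s actual orders.

```python
# Illustrative sketch: a pre-committed ladder of buy orders below today's price.
current_price = 100.0
cash_to_deploy = 30000.0

# Commit, in advance, to deploying fixed slices of cash at progressively lower prices.
buy_ladder = [
    (0.85, 0.25),  # at 15% below today's price, deploy 25% of the cash
    (0.70, 0.35),  # at 30% below, deploy another 35%
    (0.55, 0.40),  # at 45% below, deploy the final 40%
]

for discount, fraction in buy_ladder:
    limit_price = current_price * discount
    dollars = cash_to_deploy * fraction
    print(f"Limit order: ~{dollars / limit_price:.0f} shares at ${limit_price:.2f}")
```

The point is not the particular tiers but the fact that the orders are placed before fear takes over.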

Typically during a bear market, the investors who have large cash positions fail to invest until after they’ve already missed much of the market recovery. Again, only by having pre-committed to a battle plan before a bear market can you increase the odds of buying when you should be buying – when stocks are cheap. As Jeremy Grantham wrote (quoted by Montier):

There is only one cure for terminal paralysis: you absolutely must have a battle plan for reinvestment and stick to it. Since every action must overcome paralysis, what I recommend is a few large steps, not many small ones….

It is particularly important to have a clear definition of what it will take for you to be fully invested. Without a similar program, be prepared for your committee’s enthusiasm to invest (and your own for that matter) to fall with the market. You must get them to agree now – quickly before rigor mortis sets in… Finally, be aware that the market does not turn when it sees light at the end of the tunnel. It turns when all looks black, but just a subtle shade less black than the day before.

The legendary value investor Seth Klarman describes a bear market as follows (quoted by Montier):

The chaos is so extreme, the panic selling so urgent, that there is almost no possibility that sellers are acting on superior information; indeed, in situation after situation, it seems clear that investment fundamentals do not factor into their decision making at all… While it is always tempting to try and time the market and wait for the bottom to be reached (as if it would be obvious when it arrived), such a strategy proved over the years to be deeply flawed. Historically, little volume transacts at the bottom or on the way back up and competition from other buyers will be much greater when the markets settle down and the economy begins to recover. Moreover, the price recovery from a bottom can be very swift. Therefore, an investor should put money to work amidst the throes of a bear market, appreciating that things will likely get worse before they get better.

So a battle plan for reinvestment is a “schedule of pre-commitments” that can help you overcome the fear induced by plummeting prices. Such a plan will require that you buy stocks when they are cheap. Such a plan will require that you average down when stocks continue to get cheaper.

Although this discussion has focused mostly on a broad market sell-off, sometimes there are sell-offs (for non-fundamental, short-term reasons) in particular industries or in individual stocks. In these cases, too, having a battle plan can help you to ignore the crowd – which often is focused only on the short term – and buy the stocks that are likely the cheapest based on long-term fundamentals.

 

OVERCOMING OVERCONFIDENCE

The majority of drivers say they are above average. Most students say they will finish in the top half of their class. Indeed, overconfidence is the pattern for nearly all human activities. Montier asked 600 professional fund managers how many were above average, and 74% said they were.

Earlier I mentioned Shane Frederick’s CRT test. People who ace that test tend to suffer from fewer cognitive biases. However, this does not apply to overconfidence, which is a deep-seated and very widespread cognitive bias. Nearly everyone suffers from overconfidence, although some people learn to tame it.

The trouble is that knowing a great deal about overconfidence does not remove the bias. Daniel Kahneman, one of the best psychologists, especially when it comes to cognitive biases such as overconfidence, admits that he himself is still “wildly overconfident” as a default setting. Unless there is a serious threat – a possible predator in the grass – our System 1 automatically makes us feel at ease and overconfident about nearly everything.

Now, in most spheres of human activity, overconfidence and optimism are actually good things. Overconfident and optimistic people deal with problems better, are happier, and live longer. Perhaps we developed overconfidence a long time ago when hunting. Fearless hunters were generally better hunters.

The problem is that in fields (like investing) that require the proper use of logic or statistics, overconfidence leads to poor decisions and often disasters.

There are several ways that we can overcome overconfidence. Committing to a fully automated approach to investing is the best way to minimize all cognitive biases.

  • Buying and holding a low-cost broad market index fund will allow you to do better than 90% of all investors over several decades.
  • A quantitative value strategy can be even better than an index fund. Such an approach automatically buys the statistically cheapest stocks.

If you want to do value investing yourself because you enjoy it, then using a checklist and other specific processes can greatly improve your results over time. Guy Spier has outstanding discussions of systems and processes in The Education of a Value Investor. See: https://boolefund.com/the-education-of-a-value-investor/

Montier also mentions some good questions you should ask if you’re doing value investing for yourself. “Must I believe this?” is a better question than “Can I believe this?” In other words, what would invalidate the investment hypothesis?

It’s important to understand that System 1 automatically looks for confirming evidence for all of its views. Even more importantly, System 2 uses a positive test strategy, which means that System 2 also looks for confirming evidence rather than disconfirming evidence.

Yet we know that the advance of science depends directly on the search for disconfirming evidence. Charles Darwin, for instance, trained himself always to look for disconfirming evidence. In this careful, painstaking manner, Darwin, though not at all brilliant, was eventually able to produce some of the finest mental work ever done. The bottom line for investors is simple:

Always look for disconfirming evidence rather than for confirming evidence.

Many of the best value investors have developed the habit of always looking for disconfirming evidence. If you look at the investment management firms run by Seth Klarman, Richard Pzena, and Ray Dalio – to give just a few examples out of many – there are processes in place to ensure that disconfirming evidence is ALWAYS sought. For instance, if one analyst is responsible for the bullish case on a stock, another analyst will be assigned to argue the bearish case. Investors who have a deeply ingrained process of searching for disconfirming evidence have generally enjoyed a boost to their long-term performance as a result.

Montier also writes that “Why should I own this investment?” is a better question than “Why shouldn’t I own this investment?” The default should be non-ownership until the investment hypothesis has been formed and tested. One major reason Buffett and Munger are among the very best investors is because of their extreme discipline and patience. They are super selective, willing to pass on a hundred possibilities in order to find the one investment that is the most supported by the evidence. (This was even more true when they were managing less capital. These days, they have so much capital – as a result of their great successes over time – that they have to overlook many of the best opportunities, which involve companies too tiny to move the needle at Berkshire Hathaway.)

 

DON’T LISTEN TO FINANCIAL FORECASTERS

By nature, we tend to like people who sound confident. Financial commentators typically sound very confident on TV. But when it comes to financial forecasting, no one can do it well. Some forecasters will be right on occasion, but you never know ahead of time who that will be. And six months from now, some other forecasters – different ones – will be right.

The best thing, as strongly advised by many top investors including Warren Buffett, is to ignore all financial forecasts. Either stick with a fully automated investment program – whether index fund or quantitative value – or stay focused on individual companies if you’re doing value investing for yourself.

If you’re doing value investing yourself, then even when the broader market is overvalued, you can usually find tiny companies that are extremely cheap. Most professional investors never look at microcap stocks, which is why it’s usually the best place to look for the cheapest stocks.

Warren Buffett has never relied on a financial forecast while creating the greatest 57-year (and counting) track record of all time:

I have no use whatever for projections or forecasts. They create an illusion of apparent precision. The more meticulous they are, the more concerned you should be.

I make no effort to predict the course of general business or the stock market. Period.

Anything can happen anytime in markets. And no advisor, economist, or TV commentator – and definitely not Charlie nor I – can tell you when chaos will occur. Market forecasters will fill your ear but will never fill your wallet.

Similarly, the great investor Peter Lynch never relied on financial forecasts:

Nobody can predict interest rates, the future direction of the economy, or the stock market. Dismiss all such forecasts and concentrate on what’s actually happening to the companies in which you’ve invested.

The way you lose money in the stock market is to start off with an economic picture. I also spend fifteen minutes a year on where the stock market is going.

Henry Singleton, a business genius (100 points shy of being a chess grandmaster) who was easily one of the best capital allocators in American business history, never relied on financial forecasts:

I don’t believe all this nonsense about market timing. Just buy very good value and when the market is ready that value will be recognized.

Furthermore, as noted by Montier (page 46), when people are told that someone is an expert, they partially switch off the part of the brain associated with logic and statistics. (This has been shown in experiments.)

 

THE DANGER OF PERCEIVED AUTHORITY

There are still quite a few investors who continue to listen to financial forecasters, despite the fact that financial forecasts on the whole do not add value. There are several cognitive errors happening here. Many investors are governed mostly by emotions. They watch the daily fluctuations of stock prices – which have no meaning at all for long-term investors (except when they are low enough to create a buying opportunity). They listen to financial experts giving mostly useless forecasts. And they anchor on various forecasts.

Part of the problem is that many people will believe or do almost anything if it comes from a perceived authority.

Stanley Milgram ran a famous experiment about authority because he was trying to understand why so many previously good people decided to commit evil during WWII by joining the Nazis.

In Milgram’s experiment, subjects were told they would administer electric shocks to a “learner.” If the learner answered a question incorrectly, the subject was told to deliver the shock. The subjects met the learners before the experiment, but the learners were in a separate room during the experiment. The subjects could hear the learners, however, as they answered questions and as they were shocked. Behind the subject stood an authority figure wearing a white lab coat, carrying a clipboard, and telling the subject when to give the shock.

Before running the experiment, Milgram asked a panel of 40 psychiatrists what they thought the subjects would do. The psychiatrists thought that only 1% of the subjects would go to the maximum shock level (450 volts with labels of “extreme danger” and “XXX”). After all, they reasoned, the subjects were Americans and Americans wouldn’t do that, right?

100% of the subjects went up to 135 volts (at which point the learner is asking to be released). 80% of the subjects were willing to go up to 285 volts (at which point all they could hear were screams of agony). And more than 62% of the subjects were willing to administer the maximum 450 volts (despite the labels of “extreme danger” and “XXX”) – at which point there was no sound at all from the learner.

Milgram ran several variants of his experiment, but the results were essentially the same.

 

WHY DO FINANCIAL FORECASTERS CONTINUE TO FORECAST?

A major reason why financial forecasters continue to forecast, despite adding no value on the whole, is simply that there are enough fearful people who need to hear financial forecasts.

But you might wonder: If a forecaster has been wrong so often, how can that person continue to forecast, even when there is a demand for it?

Montier discusses a landmark, 20-year study done by the psychologist Philip Tetlock. (See Expert Political Judgment: How Good Is It? How Can We Know?) Tetlock gathered over 27,000 expert predictions, and he found that they were little better than pure chance.

Tetlock also found that forecasters tend to use the same excuses over and over. The five most common excuses for failed forecasts were the following:

  • The “If only” defense – If only the Federal Reserve had raised rates, then the prediction would have been true. Effectively, the experts claim that they would have been correct if only their advice had been followed.
  • The “ceteris paribus” defense – Something outside of the model of analysis occurred, which invalidated the forecast; therefore it isn’t my fault.
  • The “I was almost right” defense – Although the predicted outcome didn’t occur, it almost did.
  • The “It just hasn’t happened yet” defense – I wasn’t wrong, it just hasn’t occurred yet.
  • The “Single prediction” defense – You can’t judge me by the performance of a single forecast.

How can forecasters continue to be confident in their forecasts after years or decades of mostly being wrong? Typically, they invoke one of the five defenses just listed. As a result, most forecasters remain just as overconfident as ever, even after a long string of mostly incorrect forecasts.

 

NET ASSET VALUE AND EARNINGS POWER VALUE

A fully automated deep value investing strategy will probably yield excellent results over time (especially if focused on microcap stocks), when compared with most other investment strategies. If you have doubts about that, then low-cost broad market index funds are a great choice for your long-term investing.

If you’re picking individual stocks using a value investing approach, then the key is to be a learning machine. And there are many books you should read, including all the books I’ve written about previously on this weekly blog.

There’s one book in particular that I recommend because it essentially uses Ben Graham’s conservative approach to picking individual stocks. The book is Value Investing: From Graham to Buffett and Beyond (Wiley, 2004), by Professor Bruce Greenwald (and others).

If you could forecast the future free cash flows of a given company, and if you had a reasonable estimate of the discount rate, then you could value that company. The problem with this DCF (discounted cash flow) approach is that very tiny changes in certain variables, like long-term growth rates or the discount rate, can dramatically change the intrinsic value estimate.
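
To see how sensitive a DCF estimate can be, here is a minimal Python sketch (all figures are hypothetical, and this is a bare-bones two-stage model, not any particular practitioner's method). A one-percentage-point change in the discount rate or the terminal growth rate moves the value by a large amount:

    # Illustrative only: how small input changes swing a DCF estimate.
    def dcf_value(fcf, growth, discount, terminal_growth, years=10):
        """Two-stage DCF: grow free cash flow for `years`, then add a
        Gordon-growth terminal value. All inputs are hypothetical."""
        value = 0.0
        cash_flow = fcf
        for t in range(1, years + 1):
            cash_flow *= (1 + growth)
            value += cash_flow / (1 + discount) ** t
        terminal = cash_flow * (1 + terminal_growth) / (discount - terminal_growth)
        return value + terminal / (1 + discount) ** years

    base   = dcf_value(fcf=100, growth=0.05, discount=0.10, terminal_growth=0.03)
    low_r  = dcf_value(fcf=100, growth=0.05, discount=0.09, terminal_growth=0.03)
    high_g = dcf_value(fcf=100, growth=0.05, discount=0.10, terminal_growth=0.04)

    print(f"Base case:                {base:,.0f}")    # ~1,705
    print(f"Discount rate 10% -> 9%:  {low_r:,.0f}")   # ~2,000 (about 17% higher)
    print(f"Terminal growth 3% -> 4%: {high_g:,.0f}")  # ~1,870 (about 10% higher)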

Greenwald explains two approaches that depend entirely on the currently known facts about the company:

  • Net asset value
  • Earnings power value

Many value investors over the years have done extraordinarily well by estimating net asset value and then buying below that value. Peter Cundill, for example, is a great value investor who always focused on buying below liquidation value. See: https://boolefund.com/peter-cundill-discount-to-liquidation-value/

Other value investors have done extremely well by estimating the earnings power value – or normalized earnings – and consistently buying below that level.

Often the earnings power value of a company will exceed its net asset value. So in a sense, net asset value is a conservative estimate of intrinsic value, while earnings power value may represent potential upside. But even with earnings power value, there is zero forecasting involved. There is no estimate of future growth. This is especially helpful for value investors because studies have shown that the stocks of ‘high-growth’ companies underperform, while the stocks of ‘low-growth’ or ‘no-growth’ companies outperform. See: https://boolefund.com/the-ugliest-stocks-are-the-best-stocks/
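
As a rough illustration of the earnings power value idea (a simplified sketch, not Greenwald's full procedure, and every number below is hypothetical), normalized after-tax operating earnings are capitalized at the cost of capital as a zero-growth perpetuity, and then net cash is added to get an equity value:

    # Bare-bones earnings power value sketch; all inputs are hypothetical.
    def earnings_power_value(normalized_ebit, tax_rate, cost_of_capital,
                             cash, debt, shares_outstanding):
        """Capitalize normalized after-tax operating earnings (no growth assumed),
        then adjust for cash and debt to get an equity value per share."""
        after_tax_earnings = normalized_ebit * (1 - tax_rate)
        enterprise_epv = after_tax_earnings / cost_of_capital  # zero-growth perpetuity
        equity_epv = enterprise_epv + cash - debt
        return equity_epv / shares_outstanding

    # Example: $50m normalized EBIT, 25% tax, 9% cost of capital,
    # $20m cash, $30m debt, 10m shares outstanding.
    epv_per_share = earnings_power_value(50e6, 0.25, 0.09, 20e6, 30e6, 10e6)
    print(f"EPV per share: ${epv_per_share:,.2f}")  # about $40.67

One could then compare that figure to a conservatively estimated net asset value per share; note that no forecast of future growth appears anywhere in the calculation.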

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Are Humans Rational?


(Image: Zen Buddha Silence by Marilyn Barbone.)

February 12, 2017

I’ve always deeply admired both great scientists and great artists. Science and art – each in its own way – attempt to grasp the nature of ultimate reality, or at least the nature of reality as far as we currently can see it. As Einstein wrote in his essay, “The World As I See It”:

The ideals which have lighted me on my way and time after time have given me new courage to face life cheerfully, have been Truth, Goodness, and Beauty. Without the sense of fellowship with people of like mind, of preoccupation with the objective, the eternally unattainable in the field of art and science, life would have seemed to me empty.

The pursuit of truth, goodness, and beauty is really everything that makes life worthwhile. (This includes, by definition, friendship and love.)

As a business person, I hasten to add that having the freedom to pursue one’s passion is obviously essential. As science moves forward and as the world economy evolves, ever more people are gaining a measure of freedom (including better access to healthcare and education). But there is still a long way to go.

What does this have to do with investing? Only that the scientific study of human nature – including biology, genetics, psychology, neuroscience, economics, and sociology – is as fascinating as other areas in science, such as physics and astronomy.

 

DECISION MAKING UNDER UNCERTAINTY

When people make investment decisions, they are making decisions under uncertainty – decisions where the future state of the world is unknown. Many economists have assumed that people make rational decisions under uncertainty; most of these economists have held that people always maximize their expected utility. Maximizing expected utility means assigning a probability and a reward (utility) to each possible future state, and then choosing the option – an investment, say – with the highest expected utility, where expected utility is just the sum of each probability times the corresponding reward.

Expected Utility Example: Atwood Oceanics

The Boole Microcap Fund bought stock in Atwood Oceanics, Inc. (NYSE: ATW) at $6.46 in late January. Currently, ATW is at roughly $11.70, but it is still probably quite cheap, at least from a 3- to 5-year point of view.

Here is how to apply the expected utility framework. First note that under “normal” economic conditions, the market clearing price of oil is about $60-70. A market clearing price just means a price where the daily supply of oil meets the daily demand for oil. Let’s consider three scenarios, any of which could occur over the next 3 to 5 years, however long it takes for the oil markets to return to “normal”:

  • Oil at $55 and ATW earns $3.00 per share. Under this scenario, ATW may be worth $30 per share (a P/E of 10).
  • Oil at $65 and ATW earns $4.50 per share. Under this scenario, ATW may be worth $45 per share (a P/E of 10).
  • Oil at $75 and ATW earns $6.00 per share. Under this scenario, ATW may be worth $60 per share (a P/E of 10).

Assume that each scenario is equally likely, so there is a 33.3% chance of each scenario occurring. To get the expected value of an investment in ATW today at $11.70, for each scenario we take the probability times the outcome (stock price), and then we add those numbers together: $30 x .333 ≈ $10, plus $45 x .333 ≈ $15, plus $60 x .333 ≈ $20. So the expected value is $45 ($10 + $15 + $20), and currently we can buy the stock for $11.70. That is an expected return of roughly 285% (or nearly double that if we were lucky enough to buy the stock near $6.46).

We could adjust the probabilities, oil prices, earnings, and P/E ratios for each scenario (as long as the probabilities add up to 100%). We could also decide to use more (or fewer) than three scenarios.
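
Here is the same expected value calculation as a short Python sketch, using the illustrative scenario values above, so the probabilities, prices, and earnings can be adjusted easily:

    # Expected value of the ATW illustration above; scenario values are illustrative.
    scenarios = [
        {"oil": 55, "eps": 3.00, "value": 30, "prob": 1/3},
        {"oil": 65, "eps": 4.50, "value": 45, "prob": 1/3},
        {"oil": 75, "eps": 6.00, "value": 60, "prob": 1/3},
    ]

    # The probabilities must add up to 100%.
    assert abs(sum(s["prob"] for s in scenarios) - 1.0) < 1e-9

    price_paid = 11.70
    expected_value = sum(s["prob"] * s["value"] for s in scenarios)
    expected_return = expected_value / price_paid - 1

    print(f"Expected value: ${expected_value:.2f}")                       # $45.00
    print(f"Expected return from ${price_paid:.2f}: {expected_return:.0%}")  # ~285%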

A Few More Expected Utility Examples

An obvious business example of expected utility is whether a company should invest in Product A. One tries to guess different scenarios (and their associated probabilities), perhaps including one that is highly profitable and another involving large losses. One also must compare an investment in Product A with the next best investment opportunity.

In theory, we can apply the expected utility framework to nearly any situation where the future states are unknown, even life and death.

I don’t think George Washington explicitly did an expected utility calculation. But like many young men of his day, Washington was eager to serve in the military. He had a couple of horses shot from under him. And there were several occasions when bullets went whizzing right past his head (which he thought sounded “charming”). In short, Washington was lucky to survive, and the luck of Washington has arguably reverberated for centuries.

Had Washington been able to do an expected utility calculation about his future war experience, it might have been something like this (crudely):

  • 25% chance of death (even when he was a general, Washington often charged to the front)
  • 75% chance of honor and wealth

A more realistic expected utility calculation – but not what Washington could have guessed – might have been the following:

  • 50% chance of death
  • 50% chance of great honor (due to great service to his future country) and wealth

(I’m oversimplifying in order to illustrate the generality of the expected utility framework.)

Von Neumann and Morgenstern

What’s fascinating is that the expected utility framework does a very good job describing how any rational agent should make decisions. The genius polymath John von Neumann and the economist Oskar Morgenstern invented this framework. (For some details on the most general form of the framework, see: https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem)

But ever since, many economists have assumed that people actually do make decisions with full rationality. Economists have argued that rationalist economics is the best approximation for how humans behave.

Other scientists and observers have long suspected (or, indeed, realized) that people are NOT fully rational. But it’s one thing to hold the view that much human behavior is irrational. It’s another thing to prove human irrationality scientifically.

The psychologists Daniel Kahneman and Amos Tversky spent decades running hundreds of experiments on how people make decisions under uncertainty. What Kahneman and Tversky demonstrated conclusively is that most human beings are NOT fully rational when they make decisions under uncertainty.

Behavioral economics – based on the research of Kahneman, Tversky, and many others – is a fascinating field (and the reason I went to graduate school). But it has not yet developed to the point where its models can predict a wide range of human behavior better than the rationalist economic models. The rationalist economic models are still the best approximation for much human behavior.

 

THE MAKING OF BEHAVIORAL ECONOMICS

In his great book Misbehaving: The Making of Behavioral Economics (W. W. Norton, 2015), Richard Thaler discusses the development of behavioral economics. According to Kahneman, Richard Thaler is “the creative genius who invented the field of behavioral economics.”

Thaler defines “Econs” as the fully rational human beings that traditional economists have always assumed for their models. “Humans,” on the other hand, are the actual people who make various decisions, including often less than fully rational decisions under uncertainty.

For this blog post, I will focus on Part VI (Finance, pages 203-253). But first, a quotation Thaler has at the beginning of his book:

The foundation of political economy and, in general, of every social science, is evidently psychology. A day may come when we shall be able to deduce the laws of social science from the principles of psychology. – Vilfredo Pareto, 1906

 

THE BEAUTY CONTEST

Chicago economist Eugene Fama coined the term efficient market hypothesis, or EMH for short. Thaler writes that the EMH has two (related) components:

  • the price is right – the idea is that any asset will sell for its intrinsic value. If the rational valuation of a company – based on normalized earnings or net assets – is $100 million, then the company’s market cap (as reflected by its stock price) will be $100 million.
  • no free lunch – EMH holds that all publicly available information is already reflected in current stock prices, thus there is no reliable way to “beat the market” over time.

NOTE: If prices are always right, that means that assets can never be overvalued (there are no bubbles) or undervalued.

Thaler observes that finance did not become a mainstream topic in economics departments before the advent of cheap computer power and great data. The University of Chicago was the first to develop a comprehensive database of stock prices going back to 1926. After that, research took off, and by 1970 EMH was well-established.

Thaler also points out that the economist J. M. Keynes was “a true forerunner of behavioral finance.” Keynes, who was also an investor, thought that animal spirits – emotions – played an important role in financial markets.

Keynes also thought that professional investors were playing an intricate guessing game, similar to picking out the prettiest faces from a set of photographs:

… It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be. And there are some, I believe, who practice the fourth, fifth, and higher degrees.

 

DOES THE STOCK MARKET OVERREACT?

On average, investors overreact to recent poor performance for low P/E stocks, which is why the P/E’s are low. And, on average, investors overreact to recent good performance for high P/E stocks, which is why the P/E’s are high.

Having said that, Thaler is quick to quote a warning by Ben Graham (the father of value investing) about timing: ‘Undervaluations caused by neglect or prejudice may persist for an inconveniently long time, and the same applies to inflated prices caused by overenthusiasm or artificial stimulus.’ Thaler gives the example of the late 1990s: for years, Internet stocks just kept going up, while value stocks just kept massively underperforming.

According to Thaler, most academic financial economists overlooked Graham’s work:

It was not so much that anyone had refuted Graham’s claim that value investing worked; it was more that the efficient market theory of the 1970s said that value investing couldn’t work. But it did. Late that decade, accounting professor Sanjoy Basu published a thoroughly competent study of value investing that fully supported Graham’s strategy. However, in order to get such papers published at the time, one had to offer abject apologies for the results. (page 221)

Thaler and his research partner Werner De Bondt came up with the following. Suppose that investors are overreacting. Suppose that investors are overly optimistic about the future growth of high P/E stocks, thus driving the P/E’s “too high.” And suppose that investors are excessively pessimistic about low P/E stocks, thus driving the P/E’s “too low.” Then subsequent high returns from value stocks and low returns from growth stocks represent simple reversion to the mean. But EMH says that:

  • The price is right: Stock prices cannot diverge from intrinsic value
  • No free lunch: Because all information is already in the stock price, it is not possible to beat the market. Past stock prices and the P/E cannot predict future price changes

Thaler and De Bondt took all the stocks listed on the New York Stock Exchange, and ranked their performance over three to five years. They isolated the worst performing stocks, which they called “Losers.” And they isolated the best performing stocks, which they called “Winners.” Writes Thaler:

If markets were efficient, we should expect the two portfolios to do equally well. After all, according to the EMH, the past cannot predict the future. But if our overreaction hypothesis were correct, Losers would outperform Winners. (page 223)

What did they find?

The results strongly supported our hypothesis. We tested for overreaction in various ways, but as long as the period we looked back at to create the portfolios was long enough, say three years, then the Loser portfolio did better than the Winner portfolio. Much better. For example, in one test we used five years of performance to form the Winner and Loser portfolios and then calculated the returns of each portfolio over the following five years, compared to the overall market. Over the five-year period after we formed our portfolios, the Losers outperformed the market by about 30% while the Winners did worse than the market by about 10%.
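
A hedged sketch of the portfolio-formation logic (this is not De Bondt and Thaler's actual data or procedure; the returns below are simulated, with mean reversion deliberately built in, just to show the mechanics of ranking on past returns and comparing the extremes):

    import numpy as np

    # Toy illustration of the Winner/Loser test on simulated data.
    rng = np.random.default_rng(0)
    n_stocks = 500

    # Simulated 5-year past returns, and simulated subsequent returns in which
    # past extremes partly reverse (mean reversion is built into this toy data).
    past_returns = rng.normal(0.50, 0.80, n_stocks)
    future_returns = 0.30 - 0.15 * past_returns + rng.normal(0.0, 0.40, n_stocks)

    order = np.argsort(past_returns)
    decile = n_stocks // 10
    losers = order[:decile]    # worst past performers
    winners = order[-decile:]  # best past performers

    market = future_returns.mean()
    print(f"Losers vs. market:  {future_returns[losers].mean() - market:+.1%}")
    print(f"Winners vs. market: {future_returns[winners].mean() - market:+.1%}")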

 

THE REACTION TO OVERREACTION

In response to widespread evidence that ‘Loser’ stocks (low P/E) – as a group – outperform ‘Winner’ stocks, defenders of EMH were forced to argue that ‘Loser’ stocks are riskier as a group.

NOTE: On an individual stock basis, a low P/E stock may be riskier. But a basket of low P/E stocks generally far outperforms a basket of high P/E stocks. The question is whether a basket of low P/E stocks is riskier than a basket of high P/E stocks. Furthermore, one can use other metrics such as low P/B (low price to book) and get the same results as the low P/E studies.

According to the CAPM (Capital Asset Pricing Model), the measure of the riskiness of a stock is its sensitivity to movements in the overall market, or beta – technically, the covariance of the stock’s returns with the market’s returns divided by the variance of the market’s returns. If a stock has a beta of 1.0, it tends to move in line with the whole market. If a stock has a beta of 2.0, it tends to move twice as much as the whole market (e.g., if the whole market goes up or down by 10%, this individual stock will tend to go up or down by about 20%).
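
For concreteness, beta can be estimated from historical return series as described above. A minimal sketch with simulated returns (both series and the “true” beta of 1.5 are made up purely for illustration):

    import numpy as np

    # Estimate beta = cov(stock, market) / var(market) from monthly return series.
    # Both series are simulated purely for illustration.
    rng = np.random.default_rng(1)
    market_returns = rng.normal(0.007, 0.04, 120)                       # 120 months
    stock_returns = 1.5 * market_returns + rng.normal(0.0, 0.05, 120)   # true beta ~1.5

    beta = np.cov(stock_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
    print(f"Estimated beta: {beta:.2f}")  # should come out near 1.5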

According to CAPM, if the basket of Loser stocks subsequently outperforms the market while the basket of Winner stocks underperforms, then the Loser stocks must have high betas and the Winner stocks must have low betas. But Thaler and De Bondt found the opposite. Loser stocks (value stocks) were much less risky as measured by beta.

Eventually Eugene Fama himself, along with research partner Kenneth French, published a series of papers documenting that, indeed, both value stocks and small stocks earn higher returns than predicted by CAPM. In short, “the high priest of efficient markets” (as Thaler calls Fama) had declared that CAPM was dead.

But Fama and French were not ready to abandon the EMH (Efficient Market Hypothesis). They came up with the Fama-French Three Factor Model. They showed that value stocks are correlated – a value stock will tend to do well when other value stocks are doing well. And they showed that small-cap stocks are similarly correlated.

The problem, again, is that there is no evidence that a basket of value stocks is riskier than a basket of growth stocks. And there is no theoretical reason to believe that value stocks, as a group, are riskier.

According to Thaler, the debate was settled by the paper ‘Contrarian Investment, Extrapolation, and Risk’ published in 1994 by Josef Lakonishok, Andrei Shleifer, and Robert Vishny. This paper shows clearly that value stocks outperform, and value stocks are, if anything, less risky than growth stocks. Lakonishok, Shleifer, and Vishny launched the highly successful LSV Asset Management based on their research. Here is a link to the paper: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

(Recently Fama and French have introduced a five-factor model, which includes profitability. Profitability was one of Ben Graham’s criteria.)

 

THE PRICE IS NOT RIGHT

If you held a stock forever, it would be worth all future dividends discounted back to the present. Even if you sold the stock, as long as you held it for a very long time, the distant future sales price (discounted back to the present) would be a negligible part of the intrinsic value of the stock. The stock price is really the present value of all expected future dividend payments.
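
A minimal sketch of that dividend-discount idea (the dividend stream, growth rate, and discount rate below are hypothetical):

    # Present value of a stream of expected dividends; all numbers are hypothetical.
    def present_value_of_dividends(dividends, discount_rate):
        """Discount each year's dividend back to today and sum."""
        return sum(d / (1 + discount_rate) ** t
                   for t, d in enumerate(dividends, start=1))

    # A 30-year stream: $2.00 per share growing 4% per year, discounted at 8%.
    dividends = [2.00 * 1.04 ** (t - 1) for t in range(1, 31)]
    pv = present_value_of_dividends(dividends, 0.08)
    print(f"PV of 30 years of dividends: ${pv:.2f} per share")

Lengthening the stream (say to 50 or 100 years) shrinks the present value of any eventual sale price toward a negligible fraction of the total, which is the point made above.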

Bob Shiller collected data on stock prices and dividends starting in 1871:

… for each year he computed what he called the ‘ex post rational’ forecast of the stream of future dividends that would accrue to someone who bought a portfolio of the stocks that existed at that time. He did this by observing the actual dividends that got paid out and discounting them back to the year in question. After adjusting for the well-established trend that stock prices go up over long periods of time, Shiller found that the present value of dividends was… highly stable. But stock prices, which we should interpret as attempts to forecast the present value of dividends, are highly variable…. (231-232)

October 1987 provides yet another example of stock prices moving much more than fundamental values. The U.S. stock market dropped more than 25% from Thursday, October 15, 1987 to Monday, October 19, 1987. This happened in the absence of any important news, financial or otherwise. Writes Thaler:

If prices are too variable, then they are in some sense ‘wrong.’ It is hard to argue that the price at the close of trading on Thursday, October 15, and the price at the close of trading the following Monday – which was more than 25% lower – can both be rational measures of intrinsic value, given the absence of news.

IS HUMAN BEHAVIOR INHERENTLY UNPREDICTABLE?

It’s very important to note (again) that although the assumption of rationality and the EMH have been demonstrated not to be true, behavioral economists have not invented a model of human behavior that can supplant rationalist economics. Therefore, rationalist economics, and not behaviorist economics, is still the chief basis by which economists attempt to predict human behavior.

In other words, scientists must learn much more – genetics, neuroscience, psychology, etc. – in order to better predict human behavior. But even then, individual human behavior (to say nothing of group behavior) may remain partly unpredictable for a long period of time.

At the sub-atomic level, most top physicists believe that a part of reality is inherently unpredictable. However, it’s always possible that this “inherent unpredictability” is a result of current limitations in theoretical physics. Once physicists make further advances, it’s at least possible that what is currently “inherently unpredictable” may turn out to be predictable after all. Einstein may be right when it comes to sub-atomic physics: “God does not play dice.”

Similarly with human behavior. What will be the state of genetics, neuroscience, psychology, physics, artificial intelligence, etc. one thousand years from today? Human behavior may be largely predictable. But it then seems to get paradoxical. How could one accurately predict one’s own behavior all the time, given that one could choose to act differently from one’s own predictions? Or, if only a super-advanced artificial intelligence could aggregate enough data to predict all the behavior of a human being, when would it be legal to do so? And who would own the super-advanced artificial intelligence, or would it be a benevolent entity guarding the universe?

In any case, rationalist economic models may continue to be useful for a long time. In fact, rationalist models, including game theory, may also be central to predicting how various artificially intelligent agents will compete against one another.

 

THE BATTLE OF CLOSED-END FUNDS

In an efficient market, the same asset cannot sell simultaneously for two different prices. Thaler gives the standard example of gold selling for $1,000 an ounce in New York and $1,010 an ounce in London. If transaction costs were small enough, a smart trader could buy gold in New York and sell it in London. This would eventually cause the two prices to converge.

But there is one obvious example that violates this law of one price: closed-end funds, which had already been written about by Ben Graham.

For an open-end fund, all purchases and redemptions take place at NAV (net asset value). Investors buy shares from, and redeem shares with, the fund itself, so there does not have to be a seller on the other side. The total amount invested in an open-end fund therefore varies depending on what investors do.

But for a closed-end fund, there is an initial amount invested in the fund, say $100 million, and then there can be no further investments and no withdrawals. A closed-end fund is traded on an exchange. So an investor can buy partial ownership of a closed-end fund, but this means that a previous owner must sell that stake to the buyer.

According to EMH, closed-end funds should trade at NAV. But in the real world, many closed-end funds trade at prices different from NAV (sometimes a premium and sometimes a discount). This is an obvious violation of the law of one price.

Charles Lee, Andrei Shleifer, and Richard Thaler wrote a paper on closed-end funds in which they identified four puzzles:

  • Closed-end funds are often sold by brokers with a sales commission of 7%. But within six months, the funds typically sell at a discount of more than 10%. Why do people repeatedly pay $107 for an asset that in six months is worth $90?
  • More generally, why do closed-end funds so often trade at prices that differ from the NAV of their holdings?
  • The discounts and premia vary noticeably across time and across funds. This rules out many simple explanations.
  • When a closed-end fund, often under pressure from shareholders, changes its structure to an open-end fund, its price often converges to NAV.

The various premia and discounts on closed-end funds simply make no sense. These mispricings would not exist if investors were rational because the only rational price for a closed-end fund is NAV.
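
The arithmetic of the first puzzle, as a tiny sketch (the 7% commission and 10% discount are the figures from the list above; the NAV is hypothetical):

    # The first closed-end fund puzzle in numbers; the NAV is hypothetical.
    nav_per_share = 100.00
    sales_commission = 0.07   # roughly 7% load when the fund is sold by brokers
    later_discount = 0.10     # discount to NAV often seen within six months

    price_paid = nav_per_share * (1 + sales_commission)            # $107
    price_six_months_later = nav_per_share * (1 - later_discount)  # $90

    print(f"Paid ${price_paid:.0f} at issue; "
          f"worth about ${price_six_months_later:.0f} six months later.")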

Lee, Shleifer, and Thaler discovered that individual investors are the primary owners of closed-end funds. So Thaler et al. hypothesized that individual investors have more noticeably shifting moods of optimism and pessimism. Says Thaler:

We conjectured that when individual investors are feeling perky, discounts on closed-end funds shrink, but when they get depressed or scared, the discounts get bigger. This approach was very much in the spirit of Shiller’s take on social dynamics, and investor sentiment was clearly one example of ‘animal spirits.’ (pages 241-242)

In order to measure investor sentiment, Thaler et al. used the fact that individual investors are more likely than institutional investors to own shares of small companies. Thaler et al. reasoned that if the investor sentiment of individual investors changes, it would be apparent both in the discounts of closed-end funds and in the relative performance of small companies (vs. big companies). And this is exactly what they found upon doing the research. The greater the discounts to NAV for closed-end funds, the more the returns for small stocks lagged (on average) the returns for large stocks.

 

NEGATIVE STOCK PRICES

Years later, Thaler revisited the law of one price with a Chicago colleague, Owen Lamont. Owen had spotted a blatant violation of the law of one price involving the company 3Com. 3Com’s main business was networking computers using Ethernet technology, but through a merger it had acquired Palm, which made the Palm Pilot, a very popular (at the time) handheld computer.

In the summer of 1999, as most tech stocks seemed to double almost monthly, 3Com stock seemed to be neglected. So management came up with the plan to divest itself of Palm. 3Com sold about 4% of its stake in Palm to the general public and 1% to a consortium of firms. As for the remaining 95% of Palm, each 3Com shareholder would receive 1.5 shares of Palm for each share of 3Com they owned.

Once this information was public, one could infer the following: As soon as the initial shares of Palm were sold and started trading, 3Com shareholders would in a sense have two separate investments. A single share of 3Com included 1.5 shares of Palm plus an interest in the remaining parts of 3Com – what’s called the “stub value” of 3Com. Note that the remaining parts of 3Com formed a profitable business in its own right. So the bottom line is that one share of 3Com should equal the “stub value” of 3Com plus 1.5 times the price of Palm.

When Palm started trading, it ended the day at $95 per share. So what should one share of 3Com be worth? It should be worth the “stub value” of 3Com – the remaining profitable businesses of 3Com (Ethernet tech, etc.) – PLUS 1.5 times the price of Palm, or 1.5 x $95, which is $143.

Again, because the “stub value” of 3Com involves a profitable business in its own right, this means that 3Com should trade at X (the stub value) plus $143, so some price over $143.

But what actually happened? The same day Palm started trading, ending the day at $95, 3Com stock fell to $82 per share. Thaler writes:

That means that the market was valuing the stub value of 3Com at minus $61 per share, which adds up to minus $23 billion! You read that correctly. The stock market was saying that the remaining 3Com business, a profitable business, was worth minus $23 billion. (page 246)

Thaler continues:

Think of it another way. Suppose an Econ is interested in investing in Palm. He could pay $95 and get one share of Palm, or he could pay $82 and get one share of 3Com that includes 1.5 shares of Palm plus an interest in 3Com.

Thaler observes that two things are needed for such a blatant violation of the law of one price to emerge and persist:

  • You need some traders who want to own shares of the now publicly traded Palm, traders who appear not to realize the basic math of the situation. These traders are called noise traders, because they are trading not based on real information (or real news), but based purely on “noise.” (The term noise traders was invented by Fischer Black. See: http://www.e-m-h.org/Blac86.pdf)
  • There also must be something preventing smart traders from driving prices back to where they are supposed to be. After all, the sensible investor can buy a share of 3Com for $82, and get 1.5 shares of Palm (worth $143) PLUS an interest in remaining profitable businesses of 3Com. Actually, the rational investor would go one step further: buy 3Com shares (at $82) and then short an appropriate number of Palm shares (at $95). When the deal is completed and the rational investor gets 1.5 shares of Palm for each share of 3Com owned, he can then use those shares of Palm to repay the shares he borrowed earlier when shorting the publicly traded Palm stock. This was a CAN’T LOSE investment. Then why wasn’t everyone trying to do it?

The problem was that there were very few shares of Palm being publicly traded. Some smart traders made tens of thousands of dollars. But there wasn’t enough publicly traded Palm stock available for any rational investor to make a huge amount of money. So the irrational prices of 3Com and Palm were not corrected.
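
Here is the stub-value arithmetic, and the hedged trade described above, as a short sketch (prices are the ones quoted above; share counts and borrowing costs are ignored):

    # 3Com/Palm stub value, using the prices quoted above.
    palm_price = 95.00
    com3_price = 82.00
    palm_shares_per_3com = 1.5

    embedded_palm_value = palm_shares_per_3com * palm_price   # $142.50
    implied_stub_value = com3_price - embedded_palm_value     # about -$60.50 per share

    print(f"Palm value embedded in one 3Com share: ${embedded_palm_value:.2f}")
    print(f"Implied value of the rest of 3Com:     ${implied_stub_value:.2f} per share")

    # The hedged trade: buy one share of 3Com and short 1.5 shares of Palm.
    # If the spin-off completes, the Palm shares received repay the short,
    # leaving the 3Com stub (a profitable business) acquired for less than nothing.
    net_cash_outlay = com3_price - palm_shares_per_3com * palm_price
    print(f"Net cash outlay for the hedged position: ${net_cash_outlay:.2f}")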

Thaler also tells a story about a young Benjamin Graham. In 1923, DuPont owned a large number of shares of General Motors. But the market value of DuPont was about the same as its stake in GM. DuPont was a highly profitable firm. So this meant that the stock market was putting the “stub value” of DuPont’s highly profitable business at zero. Graham bought DuPont and sold GM short. He made a lot of money when the price of DuPont went up to more rational levels.

In mid-2014, says Thaler, there was a point when Yahoo’s holdings of Alibaba were calculated to be worth more than the whole of Yahoo.

Sometimes, as with the closed-end funds, obvious mispricings can last for a long time, even decades. Andrei Shleifer and Robert Vishny refer to this as the limits of arbitrage.

 

THALER’S CONCLUSIONS ABOUT BEHAVIORAL FINANCE

What are the implications of these examples? If the law of one price can be violated in such transparently obvious cases such as these, then it is abundantly clear that even greater disparities can occur at the level of the overall market. Recall the debate about whether there was a bubble going on in Internet stocks in the late 1990s….

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful:

So where do I come down on the EMH? It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful. In a world of Econs, I believe that the EMH would be true. And it would not have been possible to do research in behavioral finance without the rational model as a starting point. Without the rational framework, there are no anomalies from which we can detect misbehavior. Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research. We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have. (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed. Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’ There are definitely anomalies: sometimes the market overreacts, and sometimes it underreacts. But it remains the case that most active money managers fail to beat the market…

Most investors agree with Thaler on the no-free-lunch component of the EMH. In practice, it is very difficult for any investor to beat the market over time.

But do prices always accurately reflect intrinsic value?

I have a much lower opinion about the price-is-right component of the EMH, and for many important questions, this is the more important component…

My conclusion: the price is often wrong, and sometimes very wrong. Furthermore, when prices diverge from fundamental value by such wide margins, the misallocation of resources can be quite big. For example, in the United States, where home prices were rising at a national level, some regions experienced especially rapid price increases and historically high price-to-rental ratios. Had both homeowners and lenders been Econs, they would have noticed these warning signals and realized that a fall in home prices was becoming increasingly likely. Instead, surveys by Shiller showed that these were the regions in which expectations about the future appreciation of home prices were the most optimistic. Instead of expecting mean reversion, people were acting as if what goes up must go up even more.

Thaler adds that policy-makers should realize that asset prices are often wrong, and sometimes very wrong, instead of assuming that prices are always right.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.