Macro Enterprises (MCR.V)


(Image: Zen Buddha Silence by Marilyn Barbone.)

January 19, 2020

The Boole Microcap Fund follows a quantitative investment process. This year (2020), I am going to give examples of Boole’s investment process in action.

The first example is Macro Enterprises Inc. (Canada: MCR.V). Macro Enterprises builds oil and natural gas pipelines, constructs energy-related infrastructure facilities, and performs maintenance and integrity work on existing pipelines. The company operates primarily in western Canada and is headquartered in Fort St. John, British Columbia.

Macro Enterprises comes out near the top of the quantitative screen employed by the Boole Microcap Fund. This results from four steps.

Note: All values in Canadian dollars unless otherwise noted.

Step One

First we screen for cheapness based on five metrics. Here are the numbers for Macro Enterprises:

    • EV/EBITDA = 1.37
    • P/E = 3.72
    • P/B = 1.08
    • P/CF = 3.51
    • P/S = 0.26

These figures–especially EV/EBITDA, P/E, and P/S–make Macro Enterprises one of the top ten cheapest companies out of over two thousand that we ranked.
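For readers who want to check the arithmetic, here is a minimal sketch, in Python, of how the five ratios are computed. The inputs are approximations backed out from the ratios above (together with the roughly $121 million market cap and $98 million enterprise value discussed below); they are not figures taken directly from Macro’s filings.

```python
# Sketch of the step-one cheapness ratios (C$ millions, price in C$/share).
# Inputs are approximations implied by the ratios in the text, not figures
# taken directly from Macro's filings.

def cheapness_metrics(price, shares, ebitda, net_income, book_value,
                      operating_cash_flow, sales, net_cash):
    market_cap = price * shares
    enterprise_value = market_cap - net_cash  # net cash reduces EV
    return {
        "EV/EBITDA": enterprise_value / ebitda,
        "P/E": market_cap / net_income,
        "P/B": market_cap / book_value,
        "P/CF": market_cap / operating_cash_flow,
        "P/S": market_cap / sales,
    }

metrics = cheapness_metrics(price=3.89, shares=31.1, ebitda=71.5,
                            net_income=32.5, book_value=112.0,
                            operating_cash_flow=34.5, sales=465.0,
                            net_cash=23.0)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")  # reproduces the five ratios above
```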

Step Two

Next we calculate the Piotroski F-Score, which is a measure of the fundamental strength of the company. For more on the Piotroski F-Score, see my blog post here: https://boolefund.com/piotroski-f-score/

Macro Enterprises has a Piotroski F-Score of 7. (The best score possible is 9, while the worst score is 0.) This is a very good score.
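The F-Score awards one point for each of nine binary tests covering profitability, leverage/liquidity, and operating efficiency. Here is a minimal sketch of the standard nine tests; the field names and demo figures are hypothetical, and a real implementation would pull the last two fiscal years from the company’s filings.

```python
# Sketch of the nine standard Piotroski F-Score tests (one point each).

def f_score(cur, prev):
    """cur/prev: dicts of annual figures for the current and prior year."""
    roa_cur = cur["net_income"] / cur["total_assets"]
    roa_prev = prev["net_income"] / prev["total_assets"]
    tests = [
        cur["net_income"] > 0,                       # 1. positive earnings
        cur["cfo"] > 0,                              # 2. positive operating cash flow
        roa_cur > roa_prev,                          # 3. improving return on assets
        cur["cfo"] > cur["net_income"],              # 4. cash flow exceeds earnings
        cur["lt_debt"] / cur["total_assets"]
            < prev["lt_debt"] / prev["total_assets"],            # 5. falling leverage
        cur["current_assets"] / cur["current_liabilities"]
            > prev["current_assets"] / prev["current_liabilities"],  # 6. rising current ratio
        cur["shares_out"] <= prev["shares_out"],     # 7. no dilution
        cur["gross_margin"] > prev["gross_margin"],  # 8. rising gross margin
        cur["sales"] / cur["total_assets"]
            > prev["sales"] / prev["total_assets"],  # 9. rising asset turnover
    ]
    return sum(tests)

# Hypothetical demo inputs (C$ millions, except shares and margins):
year_now = dict(net_income=32.0, total_assets=150.0, cfo=34.0, lt_debt=5.0,
                current_assets=90.0, current_liabilities=35.0,
                shares_out=31.1, gross_margin=0.16, sales=465.0)
year_ago = dict(net_income=12.0, total_assets=140.0, cfo=15.0, lt_debt=9.0,
                current_assets=70.0, current_liabilities=33.0,
                shares_out=31.1, gross_margin=0.12, sales=320.0)
print(f_score(year_now, year_ago))  # 9 for these made-up figures;
                                    # Macro's actual filings score 7
```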

Step Three

Then we rank the company based on low debt, high insider ownership, and shareholder yield.

Warren Buffett, arguably the greatest investor of all time, explains why low debt is important:

At rare and unpredictable intervals… credit vanishes and debt becomes financially fatal.

We measure debt levels by looking at total liabilities (TL) to total assets (TA). Macro Enterprises has TL/TA of 27.17%, which is fairly low.

Insider ownership is important because it aligns the interests of the people running the company with the interests of other shareholders. Macro’s founder and CEO, Frank Miles, owns roughly 30% of the shares outstanding. Other insiders own about 3%. This puts Macro Enterprises in the top 7% of the more than two thousand companies we ranked according to insider ownership.

Shareholder yield is the dividend yield plus the buyback yield. The company has no dividend. Also, while it has bought back a modest number of shares, this has been offset by the issuance and exercise of stock options. Thus overall, the shareholder yield is zero.

Each component of the ranking has a different weight. The overall combined ranking of Macro Enterprises places it in the top 5 stocks on our screen, or the top 0.2% of the more than two thousand companies we ranked.
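To make the mechanics concrete, here is a hypothetical sketch of how such a composite might be scored. The weights are placeholders (the actual weights are not disclosed here), and in the real screen each component is presumably ranked across all companies rather than scored in isolation.

```python
# Hypothetical step-three composite. The weights are illustrative
# placeholders, not the actual (undisclosed) weights used by the screen.

def step_three_score(tl_ta, insider_pct, dividend_yield, buyback_yield):
    shareholder_yield = dividend_yield + buyback_yield  # as defined above
    # Lower liabilities are better, so the debt component is inverted.
    return (0.4 * (1.0 - tl_ta)
            + 0.4 * insider_pct
            + 0.2 * shareholder_yield)

# Macro's inputs from the text: TL/TA of 27.17%, roughly 33% insider
# ownership (CEO ~30% plus ~3% other insiders), zero shareholder yield.
print(f"{step_three_score(0.2717, 0.33, 0.0, 0.0):.3f}")
```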

Step Four

The final step is to study the company’s financial statements, presentations, and quarterly conference calls to (i) check for non-recurring items, hidden liabilities, and bad accounting; and (ii) estimate intrinsic value–how much the business is worth–using scenarios for low, mid, and high cases.

Macro Enterprises has been in operation for 25 years. Over that time, it has earned a reputation for safety and reliability while becoming one of the largest pipeline construction companies in western Canada. The company has a market cap of $121 million and an enterprise value of $98 million.

Macro has built a record backlog of more than $870 million in net revenue to be earned over the next few years. That is more than 7x the company’s current market cap. Presently the company has at least a 16% EBITDA margin, which translates into a net profit margin of at least 11%. That means the company should earn at least 80% of its current market cap over the next few years. (Peak net profit margins were around 15%–at that level, the company would earn more than 100% of its market cap over the next few years.)
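A quick back-of-the-envelope check of that claim:

```python
# Backlog profit vs. market cap, using the figures in the text (C$ millions).
backlog = 870.0      # net revenue expected over the next few years
market_cap = 121.0

for margin in (0.11, 0.15):  # current vs. peak net profit margin
    profit = backlog * margin
    print(f"{margin:.0%} margin -> ${profit:.0f}M profit "
          f"= {profit / market_cap:.0%} of market cap")
# 11% -> ~$96M (about 80% of market cap); 15% -> ~$131M (over 100%).
```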

Within the $870+ million backlog, there are two large projects: the $375 million Trans Mountain project, of which Macro’s interest is 50%, and the $900 million Coastal GasLink project, of which Macro’s interest is 40%. Importantly, both of these projects are largely cost-plus–as opposed to fixed-price–which greatly reduces the company’s execution risk. Macro can be expected to add new profitable projects to its backlog.

Furthermore, Macro performs maintenance and integrity work on existing pipelines. The company has four master service agreements with large pipeline operators to conduct such work, which is a source of recurring, higher-margin revenue.

Intrinsic value scenarios (the multiple-to-price arithmetic behind these cases is sketched after the list):

    • Low case: Macro is probably not worth less than book value, which is $3.61 per share. That’s about 7% lower than today’s share price of $3.89.
    • Mid case: The company is probably worth at least EV/EBITDA of 5.0. That translates into a share price of $10.39, which is 167% higher than today’s $3.89.
    • High case: Macro may easily be worth at least EV/EBITDA of 8.0. That translates into a share price of $16.17, which is about 316% higher than today’s $3.89.
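Here is a minimal sketch of the multiple-to-price arithmetic in the mid and high cases. The EBITDA (roughly $60 million, evidently a normalized rather than trailing figure), net cash, and share count are approximations backed out from the numbers in this post, not figures from Macro’s filings.

```python
# Converting an EV/EBITDA multiple into a per-share value (C$ millions).
ebitda = 60.0    # approx. normalized EBITDA implied by the scenarios
net_cash = 23.0  # $121M market cap minus $98M enterprise value
shares = 31.1    # $121M market cap / $3.89 share price

for multiple in (5.0, 8.0):
    equity_value = multiple * ebitda + net_cash  # enterprise value plus net cash
    print(f"EV/EBITDA {multiple}: ~${equity_value / shares:.2f} per share")
# ~$10.39 at 5.0x and ~$16.17 at 8.0x, matching the mid and high cases.
```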

Bottom Line

Macro Enterprises is one of the top 5 most attractive stocks out of more than two thousand microcap stocks that we ranked using our quantitative screen. Moreover, the mid case and high case intrinsic value estimates are far above the current stock price. As a result, we are “trembling with greed” to buy this stock for the Boole Microcap Fund.

Sources

In addition to company financial statements and presentations, I used information from the following three analyses of Macro Enterprises:

(Note: If you have trouble accessing the www.valueinvestorsclub.com analyses, you can create a guest account, which is free.)

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

This Time Is Different


(Image: Zen Buddha Silence by Marilyn Barbone.)

July 14, 2019

For a value investor who patiently searches for individual stocks that are cheap, predictions about the economy or the stock market are irrelevant. In fact, most of the time, such predictions are worse than irrelevant because they could cause the value investor to miss some individual bargains.

Warren Buffett puts it best:

  • Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.
  • We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.
  • Market forecasters will fill your ear but never fill your wallet.
  • Forecasts may tell you a great deal about the forecaster; they tell you nothing about the future.
  • Stop trying to predict the direction of the stock market, the economy, interest rates, or elections.
  • [On economic forecasts:] Why spend time talking about something you don’t know anything about? People do it all the time, but why do it?
  • I don’t invest a dime based on macro forecasts.

(Illustration: a bull and a bear facing off, by Eti Swinford.)

No one has ever been able to predict the stock market with any sort of reliability. Ben Graham–with a 200 IQ–was as smart as or smarter than any value investor who’s ever lived. And here’s what Graham said near the end of his career:

If I have noticed anything over these 60 years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

No one can predict the stock market, although anyone can get lucky once or twice in a row. But if you’re patient, you can find individual stocks that are cheap. Consider the career of Henry Singleton.

When he was managing Teledyne, Singleton built one of the best track records of all time as a capital allocator. A dollar invested with Singleton would grow to $180.94 by the time of Singleton’s retirement 29 years later. $10,000 invested with Singleton would have become $1.81 million.
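For perspective, those figures imply a compound annual return of roughly 20%, sustained for nearly three decades:

```python
# Compound annual growth rate implied by $1 growing to $180.94 in 29 years.
total_multiple = 180.94
years = 29
cagr = total_multiple ** (1 / years) - 1
print(f"Implied annual return: {cagr:.1%}")  # about 19.6% per year
```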

Did Singleton ever worry about whether the stock market was too high when he was deciding how to allocate capital? Not ever. Not one single time. Singleton:

I don’t believe all this nonsense about market timing. Just buy very good value and when the market is ready that value will be recognized.

Had Singleton ever brooded over the level of the stock market, his phenomenal track record as a capital allocator would have suffered.

Top value investor Seth Klarman expresses the matter as follows:

In reality, no one knows what the market will do; trying to predict it is a waste of time, and investing based upon that prediction is a speculative undertaking.

If you’re not convinced that focusing on individual bargains–regardless of the economy or the market–is the wise approach, then let’s consider whether “this time is different.” Why is this phrase important? Because if things are never different, then you can bet on historical trends–and mean reversion; you can bet that P/E ratios will return to normal, which (if true) implies that the stock market today will probably fall.

Quite a few leading value investors–who have excellent track records of ignoring the crowd and being right–agree that the U.S. stock market today is very overvalued, at least based on historical trends. (This group includes Rob Arnott, Jeremy Grantham, John Hussman, Frank Martin, Russell Napier, and Andrew Smithers.)

However, this time the crowd appears to be right and leading value investors wrong. This time really is different. Grantham admits:

[It] can be very dangerous indeed to assume that things are never different.

Here Grantham presents his views: https://www.barrons.com/articles/grantham-dont-expect-p-e-ratios-to-collapse-1493745553

Leading value investor Howard Marks:

The thing I find most interesting about investing is how paradoxical it is: how often the things that seem most obvious–on which everyone agrees–turn out not to be true.

 

THIS TIME IS DIFFERENT

The main reason it is not possible to predict the economy or the stock market is that both the economy and the stock market evolve over time. As Howard Marks says:

Economics and markets aren’t governed by immutable laws like the physical sciences…

…sometimes things really are different…

Link: https://www.oaktreecapital.com/docs/default-source/memos/this-time-its-different.pdf

In 1963, Graham gave a lecture, “Securities in an Insecure World.” Link: http://jasonzweig.com/wp-content/uploads/2015/03/BG-speech-SF-1963.pdf

In the lecture, Graham admits that the Graham P/E–based on ten-year average earnings of the Dow components–was much too conservative. (The Graham P/E is now called the CAPE–cyclically adjusted P/E.) Graham:

The action of the stock market since then would appear to demonstrate that these methods of valuations are ultra-conservative and much too low, although they did work out extremely well through the stock market fluctuations from 1871 to about 1954, which is an exceptionally long period of time for a test. Unfortunately in this kind of work, where you are trying to determine relationships based upon past behavior, the almost invariable experience is that by the time you have had a long enough period to give you sufficient confidence in your form of measurement just then new conditions supersede and the measurement is no longer dependable for the future.

Because of the U.S. government’s more aggressive policy with respect to preventing a depression, Graham concluded that the U.S. stock market should have a fair value 50 percent higher. (Graham explains this change in the 1962 edition of Security Analysis.)

Similar logic can be applied to the S&P 500 Index today–at just over 3,013. Fed policy, moral hazard, lower interest rates, an aging population, slower growth, productivity, and higher profit margins (based in part on political and monopoly power) are all factors in the S&P 500 being quite high.

The great value investor John Templeton observed that when people say, “this time is different,” 20 percent of the time they’re right.

By traditional standards, the U.S. stock market looks high. For instance, the CAPE is at 29+. (The CAPE–formerly the Graham P/E–is the cyclically adjusted P/E ratio based on 10-year average earnings.) The historical average CAPE is 16.6.

If the stock market followed the pattern of history, then there would be mean reversion in stock prices, i.e., there would probably be a large drop in stock prices, at least until the CAPE approached 16.6. (Typically the CAPE would overshoot on the downside and so would go below 16.6.)

But that assumes that the CAPE will still average 16.6 going forward. Since 1996, according to Rob Arnott, the CAPE has been above 16.6 96% of the time, and it has been above 24 two-thirds of the time. See: https://www.researchaffiliates.com/en_us/publications/articles/645-cape-fear-why-cape-naysayers-are-wrong.html
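Holding earnings constant, the price move implied by mean reversion is simply the ratio of the target CAPE to the current CAPE, as this quick sketch shows:

```python
# Price change implied by CAPE mean reversion, holding earnings constant.
cape_now = 29.0
for cape_target in (16.6, 24.0):
    change = cape_target / cape_now - 1
    print(f"Reversion to CAPE {cape_target}: {change:+.0%}")
# Reversion to 16.6 implies roughly -43%; reversion to 24 only about -17%.
```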

Here are some reasons why the average CAPE going forward could be 24 (or even higher) instead of 16.6.

  • Interest rates have gotten progressively lower over the past couple of decades, especially since 2009. This may continue. The longer interest rates stay low, the higher stock prices will be.
  • Perhaps the government has tamed the business cycle (at least to some extent). Monetary and fiscal authorities may continue to be able to delay or avoid a recession.
  • Government deficits might not cause interest rates to rise, in part because the U.S. can print its own currency.
  • Government debt might not cause interest rates to rise. (Again, the U.S. can print its own currency.)
  • Just because the rate of unemployment is low doesn’t mean that the rate of inflation will pick up.
  • Inflation may be structurally lower–and possibly also less volatile–than in the past.
  • Profit margins may be permanently higher than in the past.

Let’s consider each point in some detail.

 

LOWER INTEREST RATES

The longer rates stay low, the higher stock prices will be.

Warren Buffett pointed out recently that if 3% on 30-year bonds makes sense, then stocks are ridiculously cheap: https://www.cnbc.com/2019/05/06/warren-buffett-says-stocks-are-ridiculously-cheap-if-interest-rates-stay-at-these-levels.html
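One rough way to see Buffett’s point is to compare the market’s cyclically adjusted earnings yield with the bond yield. This sketch ignores growth and risk, so it illustrates the logic rather than reproducing Buffett’s actual reasoning; the CAPE of about 29 is the figure cited earlier in this post.

```python
# Earnings yield vs. bond yield, the rough logic behind Buffett's comment.
cape = 29.0        # cyclically adjusted P/E cited earlier in this post
earnings_yield = 1 / cape
bond_yield = 0.03  # 3% on 30-year bonds
print(f"Stocks: {earnings_yield:.1%} earnings yield vs. bonds: {bond_yield:.0%}")
# A ~3.4% earnings yield, with some growth on top, compares favorably
# with a fixed 3% coupon, which is the sense in which stocks look cheap.
```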

 

BUSINESS CYCLE TAMED

The current economic recovery is the longest recovery in U.S. history. Does that imply that a recession is overdue? Not necessarily. GDP has been less volatile due in part to the actions of the government, including Fed policy.

Perhaps the government is finally learning how to tame the business cycle. Perhaps a recession can be avoided for another 5-10 years or even longer.

 

GOVERNMENT DEFICITS MAY NOT CAUSE RATES TO RISE

Traditional economic theory says that perpetual government deficits will eventually cause interest rates to rise. However, according to Modern Monetary Theory (MMT), a country that can print its own currency doesn’t need to worry about deficits.

Per MMT, the government first spends money and then later takes money back out in the form of taxes. Importantly, every dollar the government spends ends up as a dollar of income for someone else. So deficits are benign. (Deficits can still be too big under MMT, particularly if they are not used to increase the nation’s productive capacity, or if there is a shortage of labor, raw materials, and factories.)

Interview with Stephanie Kelton, one of the most influential proponents of MMT: https://theglobepost.com/2019/03/28/stephanie-kelton-mmt/

 

MOUNTING GOVERNMENT DEBT MAY NOT CAUSE RATES TO RISE

Traditional economic theory says that government debt can get so high that people lose confidence in the country’s bonds and currency. Stephanie Kelton:

The national debt is nothing more than a historical record of all of the dollars that were spent into the economy and not taxed back, and are currently being saved in the form of Treasury securities.

One key, again, is that the country in question must be able to print its own currency.

Kelton again:

MMT is advancing a different way of thinking about money and a different way of thinking about the role of taxes and deficits and debt in our economy. I think it’s probably also safe to say that MMT has, I think, a superior understanding of monetary operations. That means that we take banking and the Federal Reserve and Treasury operations and so forth very seriously, whereas more conventional approaches historically have rarely even found room in their models for things like money and finance and debt.

Let’s be clear. MMT may be wrong, at least in part. Many great economists–including Paul Krugman, Ken Rogoff, Larry Summers, and Janet Yellen–do not agree with MMT’s assertion that deficits and debt don’t matter for a country that can print its own currency.

 

UNEMPLOYMENT AND INFLATION

In traditional economic theory, the Phillips curve holds that there is an inverse relationship between the rate of unemployment and the rate of inflation: as unemployment falls, wages increase, which causes inflation. But if you look at the non-employment rate (rather than the unemployment rate), the labor market isn’t really tight. The labor force participation rate is at its lowest level in more than 40 years. That explains in part why wages and inflation have not increased.

 

INFLATION STRUCTURALLY LOWER

As Howard Marks has noted, inflation may be structurally lower than in the past, due to automation, the shift of manufacturing to low-cost countries, and the abundance of free/cheap stuff in the digital age.

Link again: https://www.oaktreecapital.com/docs/default-source/memos/this-time-its-different.pdf

 

PROFIT MARGINS PERMANENTLY HIGHER

Profit margins on sales and corporate profits as a percentage of GDP have both been trending higher. This is due partly to “increased monopoly, political, and brand power,” according to Jeremy Grantham. Link again: https://www.barrons.com/articles/grantham-dont-expect-p-e-ratios-to-collapse-1493745553

Furthermore, lower interest rates and higher leverage (since 1997) have contributed to higher profit margins, asserts Grantham.

I would add that software and related technologies have become much more important in the U.S. and global economy. Companies in these fields tend to have much higher profit margins–even after accounting for lower rates, higher leverage, and increased monopoly and political power.

 

IGNORE FORECASTS AND DON’T TRY TO TIME THE MARKET; INSTEAD FOCUS ON INDIVIDUAL BUSINESSES

The most important point is that it’s not possible to predict the stock market, but it is possible–if you’re patient–to find individual stocks that are undervalued. This is especially true if your assets are small enough to invest in microcap stocks. In 1999, when the overall U.S. stock market was close to its highest valuation in history, Warren Buffett said:

If I was running $1 million, or $10 million for that matter, I’d be fully invested.

No matter how high the S&P 500 Index gets, there are hundreds of microcap stocks that are almost completely ignored, with no analyst coverage and with no large investors paying attention. That’s why Buffett said during the stock bubble in 1999 that he’d be fully invested if he were managing a small enough sum.

Microcap stocks offer the highest potential returns because there are thousands of them and they are largely ignored. That’s not to say that there are no cheap small caps, mid caps, or large caps. Even when the broad market is high, there are at least a few undervalued large caps. But the number of undervalued micro caps is always much greater than the number of undervalued large caps.

So it’s best to focus on micro caps in order to maximize long-term returns. But whether you invest in micro caps or in large caps, what matters is not the stock market or the economy, but the price of the individual business.

If and when you find a business selling at a cheap stock price, then it’s best to buy regardless of economic and market conditions–and regardless of economic and market forecasts. As Seth Klarman puts it:

Investors must learn to assess value in order to know a bargain when they see one. Then they must exhibit the patience and discipline to wait until a bargain emerges from their searches and buy it, regardless of the prevailing direction of the market or their own views about the economy at large.

For example, if you find a conservatively financed business whose stock is trading at 20 percent of liquidation value, it makes sense to buy it regardless of how high the overall stock market is and regardless of what’s happening–or what might happen–in the economy. Seth Klarman again:

We don’t buy ‘the market’. We invest in discrete situations, each individually compelling.

Ignore forecasts!

(Illustration: a man with a telescope, by Maxim Popov.)

Peter Lynch:

Nobody can predict interest rates, the future direction of the economy, or the stock market. Dismiss all such forecasts and concentrate on what’s actually happening to the companies in which you’ve invested.

Now, every year there are “pundits” who make predictions about the stock market. Therefore, as a matter of pure chance, there will always be people in any given year who are “right.” But there’s zero evidence that any of those who were “right” at some point in the past have been correct with any sort of reliability.

Howard Marks has asked: of those who correctly predicted the bear market in 2008, how many of them predicted the recovery in 2009 and since then? The answer: very few. Marks points out that most of those who got 2008 right were already disposed to bearish views in general. So when a bear market finally came, they were “right,” but the vast majority missed the recovery starting in 2009.

There are always naysayers making bearish predictions. But anyone who owned an S&P 500 Index fund from 2007 to present (mid 2019) would have done dramatically better than most of those who listened to naysayers. Buffett:

Ever-present naysayers may prosper by marketing their gloomy forecasts. But heaven help them if they act on the nonsense they peddle.

Buffett himself made a 10-year wager against a group of talented hedge fund (and fund of hedge fund) managers. Buffett’s investment in an S&P 500 Index fund trounced the super-smart hedge funds. See: http://berkshirehathaway.com/letters/2017ltr.pdf

Some very able investors have stayed largely in cash since 2011. Meanwhile, the S&P 500 Index has increased close to 140 percent. Moreover, many smart investors have tried to short the U.S. stock market since 2011. Not surprisingly, some of these short sellers are down 50 percent or more.

This group of short sellers includes the value investor John Hussman, whose Hussman Strategic Growth Fund (HSGFX) is down nearly 54 percent since the end of 2011. Compare that to a low-cost S&P 500 index fund like the Vanguard 500 Index Fund Investor Shares (VFINX), which is up about 140 percent since the end of 2011.

If you invested $10,000 in HSGFX at the end of 2011, you would have about $4,600 today. If instead you invested $10,000 in VFINX at the end of 2011, you would have about $24,000 today. In other words, if you invested with one of the “ever-present naysayers,” you would have 20 percent of the value you otherwise would have gotten from a simple index fund. HSGFX will have to increase 400 percent more than VFINX just to get back to even.
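The arithmetic behind those figures:

```python
# HSGFX vs. VFINX since the end of 2011, per the percentages above.
start = 10_000
hsgfx = start * (1 - 0.54)  # down nearly 54 percent
vfinx = start * (1 + 1.40)  # up about 140 percent
print(f"HSGFX: ${hsgfx:,.0f}  VFINX: ${vfinx:,.0f}")
print(f"Ratio: {hsgfx / vfinx:.0%} of the index-fund outcome")
print(f"Relative gain needed to catch up: {vfinx / hsgfx - 1:.0%}")
# ~$4,600 vs. ~$24,000: about 19%, requiring a ~420% relative gain.
```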

Please don’t misunderstand. John Hussman is a brilliant and patient investor. (Also, I made a very similar mistake in 2011-2013.) But Hussman, along with many other highly intelligent value investors–including Rob Arnott, Frank Martin, Russell Napier, and Andrew Smithers–has missed the strong possibility that this time really may be different, i.e., that the average CAPE (cyclically adjusted P/E) going forward may be 24 or higher instead of 16.6.

The truth–fair value–may be somewhere in-between a CAPE of 16.6 and a CAPE of 24. But even in that case, HSGFX is unlikely to increase 400 percent relative to the S&P 500 Index.

Jeremy Grantham again:

[It] can be very dangerous indeed to assume that things are never different.

As John Maynard Keynes is (probably incorrectly) reported to have said:

When the information changes, I alter my conclusions. What do you do, sir?

 

WARREN BUFFETT: U.S. STOCKS VS. GOLD

In his 2018 letter to Berkshire Hathaway shareholders, Warren Buffett writes about “The American Tailwind.” See pages 13-14: http://www.berkshirehathaway.com/letters/2018ltr.pdf

Buffett begins this discussion by pointing out that he first invested in American business when he was 11 years old in 1942. That was 77 years ago. Buffett “went all in” and invested $114.75 in three shares of Cities Service preferred stock.

Buffett then asks the reader to travel back the two 77-year periods prior to his purchase, to 1788–one year before George Washington was installed as the first president of the United States.

Buffett asks:

Could anyone then have imagined what their new country would accomplish in only three 77-year lifetimes?

Buffett continues:

During the two 77-year periods prior to 1942, the United States had grown from four million people – about 1/2 of 1% of the world’s population – into the most powerful country on earth. In that spring of 1942, though, it faced a crisis: The U.S. and its allies were suffering heavy losses in a war that we had entered only three months earlier. Bad news arrived daily.

Despite the alarming headlines, almost all Americans believed on that March 11th that the war would be won. Nor was their optimism limited to that victory. Leaving aside congenital pessimists, Americans believed that their children and generations beyond would live far better lives than they themselves had led.

The nation’s citizens understood, of course, that the road ahead would not be a smooth ride. It never had been. Early in its history our country was tested by a Civil War that killed 4% of all American males and led President Lincoln to openly ponder whether “a nation so conceived and so dedicated could long endure.” In the 1930s, America suffered through the Great Depression, a punishing period of massive unemployment.

Nevertheless, in 1942, when I made my purchase, the nation expected post-war growth, a belief that proved to be well-founded. In fact, the nation’s achievements can best be described as breathtaking.

Let’s put numbers to that claim: If my $114.75 had been invested in a no-fee S&P 500 index fund, and all dividends had been reinvested, my stake would have grown to be worth (pre-taxes) $606,811 on January 31, 2019 (the latest data available before the printing of this letter). That is a gain of 5,288 for 1. Meanwhile, a $1 million investment by a tax-free institution of that time – say, a pension fund or college endowment – would have grown to about $5.3 billion.

[…]

Those who regularly preach doom because of government budget deficits (as I regularly did myself for many years) might note that our country’s national debt has increased roughly 400-fold during the last of my 77-year periods. That’s 40,000%! Suppose you had foreseen this increase and panicked at the prospect of runaway deficits and a worthless currency. To “protect” yourself, you might have eschewed stocks and opted instead to buy 3 1/4 ounces of gold with your $114.75.

And what would that supposed protection have delivered? You would now have an asset worth about $4,200, less than 1% of what would have been realized from a simple unmanaged investment in American business. The magical metal was no match for the American mettle.

Our country’s almost unbelievable prosperity has been gained in a bipartisan manner. Since 1942, we have had seven Republican presidents and seven Democrats. In the years they served, the country contended at various times with a long period of viral inflation, a 21% prime rate, several controversial and costly wars, the resignation of a president, a pervasive collapse in home values, a paralyzing financial panic and a host of other problems. All engendered scary headlines; all are now history.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Art of Value Investing


(Image: Zen Buddha Silence by Marilyn Barbone.)

March 11, 2018

The Art of Value Investing (Wiley, 2013) is an excellent book by John Heins and Whitney Tilson. Heins and Tilson have been running the monthly newsletter, Value Investor Insight, for a decade now. Over that time, they have interviewed many of the best value investors in the world. The Art of Value Investing is a collection of quotations carefully culled from those interviews.

I’ve selected and discussed the best quotes from the following areas:

  • Margin of Safety
  • Humility, Flexibility, and Patience
  • “Can’t Lose”: Shorting the U.S. Stock Market
  • “Can’t Lose”: Shorting the Japanese Yen
  • Courage
  • Cigar Butts
  • Opportunities in Micro Caps
  • Predictable Human Irrationality
  • Long-Term Time Horizon
  • Screening and Quantitative Models

 

MARGIN OF SAFETY

(Ben Graham, by Equim43)

Ben Graham, the father of value investing, stressed having a margin of safety by buying well below the probable intrinsic value of a stock. This is essential because the future is uncertain. Also, mistakes are inevitable. (Good value investors tend to be right 60 percent of the time and wrong 40 percent of the time.) Jean-Marie Eveillard:

Whenever Ben Graham was asked what he thought would happen to the economy or to company X’s or Y’s profits, he always used to deadpan, ‘The future is uncertain.’ That’s precisely why there’s a need for a margin of safety in investing, which is more relevant today than ever.

Value investing legend Seth Klarman:

People should be highly skeptical of anyone’s, including their own, ability to predict the future, and instead pursue strategies that can survive whatever may occur.

The central idea in value investing is to figure out what a business is worth (approximately), and then pay a lot less to acquire part ownership of that business via stock. Howard Marks:

If I had to identify a single key to consistently successful investing, I’d say it’s ‘cheapness.’ Buying at low prices relative to intrinsic value (rigorously and conservatively derived) holds the key to earning dependably high returns, limiting risk and minimizing losses. It’s not the only thing that matters–obviously–but it’s something for which there is no substitute.

 

HUMILITY, FLEXIBILITY, AND PATIENCE

(Image: an illuminated “always stay humble” sign, by Wilma64.)

Successful value investing, to a large extent, is about having the right mindset. Matthew McLennan identifies humility, flexibility, and patience as key traits:

Starting with the first recorded and reliable history that we can find–a history of the Peloponnesian war by a Greek author named Thucydides–and following through a broad array of key historical global crises, you see recurring aspects of human nature that have gotten people into trouble: hubris, dogma, and haste. The keys to our investing approach are the symmetrical opposite of that: humility, flexibility, and patience.

On the humility side, one of the things that Jean-Marie Eveillard firmly ingrained in the culture here is that the future is uncertain. That results in investing with not only a price margin of safety, but in companies with conservative balance sheets and prudent and proven management teams….

In terms of flexibility, we’ve been willing to be out of the biggest sectors of the market…

The third thing in terms of temperament we think we value more than most other investors is patience. We have a five-year average holding period….We like to plant seeds and then watch the trees grow, and our portfolio is often kind of a portrait of inactivity.

It’s hard to overstate the importance of humility in investing. Many of the biggest investing mistakes have occurred when intelligent investors who have succeeded in the past have developed high conviction in an idea that happens to be wrong. Kyle Bass explains this point clearly:

You obviously need to develop strong opinions and to have the conviction to stick with them when you believe you’re right, even when everybody else may think you’re an idiot. But where I’ve seen ego get in the way is by not always being open to questions and to input that could change your mind. If you can’t ever admit you’re wrong, you’re more likely to hang on to your losers and sell your winners, which is not a recipe for success.

It often happens in investing that ideas that seem obvious or even irrefutable turn out to be wrong. The very best investors–such as Warren Buffett, Charlie Munger, Seth Klarman, Howard Marks, Jeremy Grantham, George Soros, and Ray Dalio–have developed enough humility to admit when they’re wrong, even when all the evidence seems to indicate that they’re right.

Here are two great examples of how seemingly irrefutable ideas can turn out to be wrong:

  • shorting the U.S. stock market;
  • shorting the Japanese yen.

 

“CAN’T LOSE”: SHORTING THE U.S. STOCK MARKET

(Illustration: a bull and a bear facing off, by Eti Swinford.)

Professor Russell Napier is the author of Anatomy of the Bear (Harriman House, 4th edition, 2016). Napier was a top-rated analyst for many years and has been studying and writing about global macro strategy for institutional investors since 1995.

Napier has maintained (at least since 2012) that the U.S. stock market is significantly overvalued based on the Q-ratio and also the CAPE (cyclically adjusted P/E). Moreover, Napier points out that every major U.S. secular bear market bottom in the last 100 years or so has seen the CAPE approach single digits. The catalyst for the major drop has always been either inflation or deflation, states Napier.

Napier continues to argue that U.S. stocks are overvalued and that deflation will cause the U.S. stock market to drop significantly, similar to previous secular bear markets.

Many highly intelligent value investors–at least since 2012 or 2013–have maintained high cash balances and/or short positions because they essentially agree with Napier’s argument.

However, no one has ever been able to predict the stock market. If you follow the advice of most great value investors, you simply focus on investing in individual businesses that you can understand. There’s no need to try to predict the unpredictable.

That’s not to say there won’t be a large drop in the S&P 500 Index at some point. But Napier was arguing–starting even before 2012–that the S&P 500 Index was overvalued at levels around 1200-1500 and that it would fall possibly as low as 400. It’s now roughly six years later and the S&P 500 Index has recently exceeded 2700-2800. Moreover, Jeremy Grantham, an expert on bubbles and fully aware of arguments by bears like Napier, has recently suggested the S&P 500 Index could exceed 3400-3700 before any serious break.

If the market exceeds 3400 or 3700 and then falls to 1700-2000, Napier still wouldn’t be right because he originally suggested a fall from 1200-1500 towards levels near 400. Napier is one of the smartest market historians in the world. This demonstrates that no one has ever been able to predict the stock market. That’s what great value investors–including Ben Graham, Henry Singleton, Warren Buffett, Charlie Munger, Peter Lynch, and Seth Klarman–have always maintained.

The basic reason the stock market can’t be predicted is that the economy changes and evolves over time.

  • For example, Fed policy in recent decades has been to keep interest rates quite low for years in order to prevent deflation. Very low rates cause stocks to be much higher than otherwise.
  • Profit margins are arguably higher to the extent that software (and related technologies) has become much more important in the U.S. and global economy. The five largest U.S. companies are Google, Apple, Microsoft, Facebook, and Amazon, all technology companies. Lower corporate taxes are likely giving a further boost to profit margins.

Jeremy Grantham, co-founder of GMO, is one of the most astute value investors who tracks fair value of the S&P 500 Index. Grantham used to think, back in 2012-2013, that the U.S. secular bear market was not over. Then he partially revised his view and predicted that the S&P 500 Index was likely to exceed 2250-2300. This level would have made the S&P 500’s value two standard deviations above the historical mean, indicating that it was back in bubble territory according to GMO’s definition.

Recently, in June 2017, Grantham has revised his view again. See: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/viewpoints—i-do-indeed-believe-the-us-market-will-revert-toward-its-old-means-just-very-slowly

Grantham says mean reversion for profit margins and for the CAPE (cyclically adjusted P/E) is likely, but will probably take 20 years rather than 7 years (which previously was sufficient for mean reversion). That’s because the factors that support margins and the CAPE are themselves changing very slowly. Those factors include Fed policy including moral hazard, lower interest rates, an aging population, slower growth, productivity, and increased political and monopoly power for corporations.

In January 2018, Grantham updated his view yet again: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/viewpoints—bracing-yourself-for-a-possible-near-term-melt-up.pdf?sfvrsn=4

Grantham now asserts that a market melt-up is likely over the next 6 months to 2 years. Grantham suggests that the S&P 500 Index will exceed 3400 or 3700. Prices are already high, but few of the usual signs of euphoria are present, which is why Grantham thinks the S&P 500 Index is not quite back to bubble territory.

The historian has to emphasize the big picture: In general are investors getting clearly carried away? Are prices accelerating? Is the market narrowing? And, are at least some of the other early warnings from the previous great bubbles falling into place?

(Image: a pencil writing the word “conclusion,” by joshandandreaphotography.)

As John Maynard Keynes is (probably incorrectly) reported to have said:

When the information changes, I alter my conclusions. What do you do, sir?

There are some very smart value investors–such as Frank Martin and John Hussman–who still basically agree with Russell Napier’s views. They may eventually be right.

But no one has ever been able to predict the stock market. Ben Graham–with a 200 IQ–was as smart as or smarter than any value investor who’s ever lived. And here’s what Graham said near the end of his career:

If I have noticed anything over these sixty years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

In 1963, Graham gave a lecture, “Securities in an Insecure World.” Link: https://www8.gsb.columbia.edu/rtfiles/Heilbrunn/Schloss%20Archives%20for%20Value%20Investing/Articles%20by%20Benjamin%20Graham/DOC005.PDF

In the lecture, Graham admits that the Graham P/E–based on ten-year average earnings of the Dow components–was much too conservative. Graham:

The action of the stock market since then would appear to demonstrate that these methods of valuations are ultra-conservative and much too low, although they did work out extremely well through the stock market fluctuations from 1871 to about 1954, which is an exceptionally long period of time for a test. Unfortunately in this kind of work, where you are trying to determine relationships based upon past behavior, the almost invariable experience is that by the time you have had a long enough period to give you sufficient confidence in your form of measurement just then new conditions supersede and the measurement is no longer dependable for the future.

Graham goes on to note that, in the 1962 edition of Security Analysis, Graham and Dodd addressed this issue. Because of the U.S. government’s more aggressive policy with respect to preventing a depression, Graham and Dodd concluded that the U.S. stock market should have a fair value 50 percent higher.

Similar logic can be applied to the S&P 500 Index today–at just over 2783. Fed policy including moral hazard, lower interest rates, an aging population, slower growth, productivity, and increased political and monopoly power for corporations are all factors in the S&P 500 being quite high. But Grantham is most likely right that there won’t be a true bubble until there are more signs of investors getting carried away. Grantham reminds readers that a bubble is “Excellent Fundamentals Euphorically Extrapolated.” Now that the global economy is doing nicely, this condition for a true bubble is in place.

None of this suggests that an investor should attempt market timing. Value investors can still find individual stocks that are undervalued, even though there are fewer today than a few years ago. But trying to time the market itself has almost never worked except by luck. This has been observed not only by Graham, but also by Peter Lynch, Seth Klarman, Henry Singleton, and Warren Buffett. Peter Lynch is one of the best investors. Klarman is even better. Buffett is arguably the best. And Singleton was even smarter than Buffett.

(Illustration: a man with a telescope, by Maxim Popov.)

Peter Lynch:

Nobody can predict interest rates, the future direction of the economy, or the stock market. Dismiss all such forecasts and concentrate on what’s actually happening to the companies in which you’ve invested.

Seth Klarman:

In reality, no one knows what the market will do; trying to predict it is a waste of time, and investing based upon that prediction is a speculative undertaking.

Now, every year there are “pundits” who make predictions about the stock market. Therefore, as a matter of pure chance, there will always be people in any given year who are “right.” But there’s zero evidence that any of those who were “right” at some point in the past have been correct with any sort of reliability.

Howard Marks has asked: of those who correctly predicted the bear market in 2008, how many of them predicted the recovery in 2009 and since then? The answer: very few. Marks points out that most of those who got 2008 right were already disposed to bearish views in general. So when a bear market finally came, they were “right,” but the vast majority missed the recovery starting in 2009.

There are always naysayers making bearish predictions. But anyone who owned an S&P 500 index fund from 2007 to present (early 2018) would have done dramatically better than most of those who listened to naysayers. Buffett:

Ever-present naysayers may prosper by marketing their gloomy forecasts. But heaven help them if they act on the nonsense they peddle.

Buffett himself made a 10-year wager against a group of talented hedge fund (and fund of hedge fund) managers. The S&P 500 Index fund trounced the super-smart hedge funds. See: http://berkshirehathaway.com/letters/2017ltr.pdf

Some very able investors have stayed largely in cash since 2011-2012, while the S&P 500 Index has more than doubled. Moreover, many have tried to short the U.S. stock market over the same period; some of those shorts are down 50 percent or more. The net result of that combination is to end up with only 15-25% of what a simple S&P 500 position would now be worth.

Henry Singleton, a business genius (within 100 points of being a chess grandmaster) who was easily one of the best capital allocators in American business history, never relied on financial forecasts–despite operating in a secular bear market from 1968 to 1982:

I don’t believe all this nonsense about market timing. Just buy very good value and when the market is ready that value will be recognized.

Warren Buffett puts it best:

  • Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.
  • We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.
  • Market forecasters will fill your ear but never fill your wallet.
  • Forecasts may tell you a great deal about the forecaster; they tell you nothing about the future.
  • Stop trying to predict the direction of the stock market, the economy, interest rates, or elections.
  • [On economic forecasts:] Why spend time talking about something you don’t know anything about? People do it all the time, but why do it?
  • I don’t invest a dime based on macro forecasts.

 

“CAN’T LOSE”: SHORTING THE JAPANESE YEN

Another good example of a “can’t lose” investment idea that has turned out not to be right: shorting the Japanese yen. Many macro experts have been quite certain that the Japanese yen versus the U.S. dollar would eventually exceed 200. They thought this would have happened years ago. Some called it the “trade of the decade.” But the yen versus U.S. dollar is still around 110. A simple S&P 500 index fund appears to be doing far better than the “trade of the decade.”

(Illustration: a red and white yen sign, by Shalom3.)

Some have tried to short Japanese government bonds (JGBs), rather than shorting the yen itself. But that hasn’t worked for decades. In fact, shorting JGBs has become known as the widowmaker trade.

Seth Klarman on humility:

In investing, certainty can be a serious problem, because it causes one not to reassess flawed conclusions. Nobody can know all the facts. Instead, one must rely on shreds of evidence, kernels of truth, and what one suspects to be true but cannot prove.

Klarman on the vital importance of doubt:

It is much harder psychologically to be unsure than to be sure; certainty builds confidence, and confidence reinforces certainty. Yet being overly certain in an uncertain, protean, and ultimately unknowable world is hazardous for investors. To be sure, uncertainty breeds doubt, which can be paralyzing. But uncertainty also motivates diligence, as one pursues the unattainable goal of eliminating all doubt. Unlike premature or false certainty, which induces flawed analysis and failed judgments, a healthy uncertainty drives the quest for justifiable conviction.

My own painful experiences: shorting the U.S. stock market and shorting the Japanese yen. In each case, I believed that the evidence was overwhelming. By far the biggest mistake I’ve ever made was shorting the U.S. stock market in 2011-2013. At the time, I agreed with Russell Napier’s arguments. I was completely wrong.

After that, I shorted the Japanese yen because I was convinced the argument was virtually irrefutable. Wrong. Perhaps the yen will collapse some day, but if it’s 10-20 years in the future–or even later–then an index fund or a quantitative value fund would be a far better and safer investment.

Spencer Davidson:

Over a long career you learn a certain humility and are quicker to attribute success to luck rather than your own brilliance. I think that makes you a better investor, because you’re less apt to make the big mistake and you’re probably quicker to capitalize on good fortune when it shines upon you.

Jeffrey Bronchick:

It’s important not to get carried away with yourself when times are good, and to be able to admit your mistakes and move on when they’re not so good. If you are intellectually honest–and not afraid to be visibly and sometimes painfully judged by your peers–investing is not work, it’s fun.

Patiently waiting for pessimism or temporary bad news to create low stock prices (some place), and then buying stocks well below probable intrinsic value, does not require genius in general. But it does require the humility to focus only on areas where you can do well. As Warren Buffett has remarked:

What counts for most people in investing is not how much they know, but rather how realistically they define what they don’t know.

 

COURAGE

(Image: the word “courage,” by Travelling-light.)

Humility is essential for success in investing. But you also need the courage to think and act independently. You have to be able to develop an investment thesis based on the facts and good reasoning without worrying if many others disagree. Most of the best value investments are contrarian, meaning that your view differs from the consensus. Ben Graham:

In the world of securities, courage becomes the supreme virtue after adequate knowledge and a tested judgment are at hand.

Graham again:

You’re neither right nor wrong because the crowd disagrees with you. You’re right because your data and reasoning are right.

Or as Carlo Cannell says:

Going against the grain is clearly not for everyone–and it doesn’t tend to help you in your social life–but to make the really large money in investing, you have to have the guts to make the bets that everyone else is afraid to make.

Joel Greenblatt identifies two chief reasons why contrarian value investing is hard:

Value investing strategies have worked for years and everyone’s known about them. They continue to work because it’s hard for people to do, for two main reasons. First, the companies that show up on the screens can be scary and not doing so well, so people find them difficult to buy. Second, there can be one-, two- or three-year periods when a strategy like this doesn’t work. Most people aren’t capable of sticking it out through that.

Contrarian value investing requires buying what is out-of-favor, neglected, or hated. It also requires the ability to endure multi-year periods of trailing the market, which most investors just can’t do. Furthermore, while you’re buying what everyone hates and while you’re trailing the market, you also have to put up with people calling you an idiot. In a word, you must have the ability to suffer. Eveillard:

If you are a value investor, you’re a long-term investor. If you are a long-term investor, you’re not trying to keep up with a benchmark on a short-term basis. To do that, you accept in advance that every now and then you will lag behind, which is another way of saying you will suffer. That’s very hard to accept in advance because, the truth is, human nature shrinks from pain. That’s why not so many people invest this way. But if you believe as strongly as I do that value investing not only makes sense, but that it works, there’s really no credible alternative.

 

CIGAR BUTTS

(Photo: a stack of coupons, by Leung Cho Pan.)

Warren Buffett has remarked that buying baskets of statistically cheap cigar butts–50-cent dollars–is a more dependable way to generate good returns than buying high-quality businesses. Rich Pzena perhaps expressed it best:

When I talk about the companies I invest in, you’ll be able to rattle off hundreds of bad things about them–but that’s why they’re cheap! The most common comment I get is ‘Don’t you read the paper?’ Because if you read the paper, there’s no way you’d buy these stocks.

They’re priced where they are for good reason, but I invest when I believe the conditions that are causing them to be priced that way are probably not permanent. By nature, you can’t be short-term oriented with this investment philosophy. If you’re going to worry about short-term volatility, you’re just not going to be able to buy the cheapest stocks. With the cheapest stocks, the outlooks are uncertain.

Many investors incorrectly assume that high growth in the past will continue into the future, or that a high-quality company is automatically a good investment. Behavioral finance expert and value investor James Montier:

There’s a great chapter [in Dan Ariely’s Predictably Irrational] about the ways in which we tend to misjudge price and use it as an indicator of something or other. That links back to my whole thesis that the most common error we as investors make is overpaying for the hope of growth. Dan did an experiment involving wine, in which he told people, ‘Here’s a $10 bottle of wine and here’s a $90 bottle of wine. Please rate them and tell me which tastes better.’ Not surprisingly, nearly everyone thought the $90 wine tasted much better than the $10 wine. The only snag was that the $90 wine and the $10 wine were actually the same $10 wine.

 

OPPORTUNITIES IN MICRO CAPS

(Illustration: a chessboard with stacks of coins, by Mopic.)

Micro-cap stocks are the most inefficiently priced. That’s because, for most professional investors, assets under management are too large. These investors cannot even consider micro caps. The Boole Microcap Fund is designed to take advantage of this inefficiency: https://boolefund.com/best-performers-microcap-stocks/

James Vanasek on the opportunity in micro caps:

We’ll invest in companies with up to $1 billion or so in market cap, but have been most successful in ideas that start out in the $50 million to $300 million range. Fewer people are looking at them and the industries the companies are in can be quite stable. Given that, if you find a company doing well, it’s more likely it can sustain that advantage over time.

Because very few professional investors can even contemplate investing in micro caps, there’s far less competition. Carlo Cannell:

My basic premise is that the efficient markets hypothesis breaks down when there is inconsistent, imperfect dissemination of information. Therefore it makes sense to direct our attention to the 14,000 or so publicly traded companies in the U.S. for which there is little or no investment sponsorship by Wall Street, meaning three or fewer sell-side analysts who publish research…

You’d be amazed how little competition we have in this neglected universe. It is just not in the best interest of the vast majority of the investing ecosphere to spend 10 minutes on the companies we spend our lives looking at.

Robert Robotti adds:

We focus on smaller-cap companies that are largely ignored by Wall Street and face some sort of distress, of their own making or due to an industry cycle. These companies are more likely to be inefficiently priced and if you have conviction and a long-term view they can produce not 20 to 30 percent returns, but multiples of that.

 

PREDICTABLE HUMAN IRRATIONALITY

Value investors recognize that the stock market is not always efficient, largely because humans are often less than fully rational. As Seth Klarman explains:

Markets are inefficient because of human nature–innate, deep-rooted, permanent. People don’t consciously choose to invest with emotion–they simply can’t help it.

Quantitative value investor James O’Shaughnessy:

Because of all the foibles of human nature that are well documented by behavioral research–people are always going to overshoot and undershoot when pricing securities. A review of financial markets all the way back to the South Sea Company nearly 300 years ago proves this out.

Bryan Jacoboski:

The very reason price and value diverge in predictable and exploitable ways is because people are emotional beings. That’s why the distinguishing attribute among successful investors is temperament rather than brainpower, experience, or classroom training. They have the ability to be rational when others are not.

Overconfidence is extremely deep-rooted in human psychology. When asked, the vast majority of us rate ourselves as above average across a wide variety of dimensions such as looks, smarts, driving skill, academic ability, future well-being, and even luck (!).

In a field such as investing, it’s vital to become aware of our natural overconfidence. Charlie Munger likes this quote from Demosthenes:

Nothing is easier than self-deceit. For what each man wishes, that also he believes to be true.

But becoming aware of our overconfidence is usually not enough. We also have to develop systems–such as checklists–that can automatically reduce both the frequency and the severity of mistakes.


(Image by Aleksey Vanin)

Charlie Munger reminds value investors not only to develop and use a checklist, but also to follow the advice of mathematician Carl Jacobi:

Invert, always invert.

In other words, instead of thinking about how to succeed, Munger advises value investors to figure out all the ways you can fail. This is a powerful concept in a field like investing, where overconfidence frequently causes failure. Munger:

It is occasionally possible for a tortoise, content to assimilate proven insights of his best predecessors, to outrun hares which seek originality or don’t wish to be left out of some crowd folly which ignores the best work of the past. This happens as the tortoise stumbles on some particularly effective way to apply the best previous work, or simply avoids the standard calamities. We try more to profit by always remembering the obvious than from grasping the esoteric. It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.

When it comes to checklists, it’s helpful to have a list of cognitive biases. Here’s my list: https://boolefund.com/cognitive-biases/

Munger’s list is more comprehensive: https://boolefund.com/the-psychology-of-misjudgment/

Recency bias is one of the most important biases to be aware of as an investor. Jed Nussdorf:

It is very hard to avoid recency bias, when what just happened inordinately informs your expectation of what will happen next. One of the best things I’ve read on that is The Icarus Syndrome, by Peter Beinart. It’s not about investing, but describes American hubris in foreign policy, in many cases resulting from doing what seemed to work in the previous 10 years even if the setting was materially different or conditions had changed. One big problem is that all the people who succeed in the recent past become the ones in charge going forward, and they think they have it all figured out based on what they did before. It’s all quite natural, but can result in some really bad decisions if you don’t constantly challenge your core beliefs.

Availability bias is closely related to recency bias and vividness bias. You’re at least 15-20 times more likely to be hit by lightning in the United States than to be bitten by a shark. But often people don’t realize this because shark attacks tend to be much more vivid in people’s minds. Similarly, your odds of dying in a car accident are 1 in 5,000, while your odds of dying in a plane crash are 1 in 11 million. Nonetheless, many people view flying as more dangerous.

John Dorfman on investors overreacting to recent news:

Investors overreact to the latest news, which has always been the case, but I think it’s especially true today with the Internet. Information spreads so quickly that decisions get made without particularly deep knowledge about the companies involved. People also overemphasize dramatic events, often without checking the facts.

 

LONG-TERM TIME HORIZON

(Illustration by Marek)

Because so many investors worry and think about the shorter term, value investors continue to gain a large advantage by focusing on the longer term (especially three to five years). In a year or less, a given stock can do almost anything. But over a five-year period, a stock tracks intrinsic business value to a large extent. Jeffrey Ubben:

It’s still true that the biggest players in the public markets–particularly mutual funds and hedge funds–are not good at taking short-term pain for long-term gain. The money’s very quick to move if performance falls off over short periods of time. We don’t worry about headline risk–once we believe in an asset, we’re buying more on any dips because we’re focused on the end game three or four years out.

Mario Cibelli:

One of the last great arbitrages left is to be long-term-oriented when there is a large class of shareholders who have no tolerance for short-term setbacks. So it’s interesting when stocks get beaten-up because a company misses earnings or the market reacts to a short-term business development. It’s crazy to me when someone says something is cheap but doesn’t buy it because they think it won’t go anywhere for the next 6 to 12 months. We have a pretty high tolerance for taking that pain if we see glory longer term.

Whitney Tilson wrote about a great story that value investor Bill Miller told. Miller recalled that, early in his career, he was visiting an institutional money manager, to whom he was pitching R.J. Reynolds, then trading at four times earnings. Miller:

“When I finished, the chief investment officer said: ‘That’s a really compelling case but we can’t own that. You didn’t tell me why it’s going to outperform the market in the next nine months.’ I said I didn’t know if it was going to do that or not but that there was a very high probability it would do well over the next three to five years.

“He said: ‘How long have you been in this business? There’s a lot of performance pressure, and performing three to five years down the road doesn’t cut it. You won’t be in business then. Clients expect you to perform right now.’

“So I said: ‘Let me ask you, how’s your performance?’

“He said: ‘It’s terrible, that’s why we’re under a lot of performance pressure.’

“I said: ‘If you bought stocks like this three years ago, your performance would be good right now and you’d be buying RJR to help your performance over the next three years.'”

Link: http://www.tilsonfunds.com/Patience%20can%20find%20a%20virtue%20in%20market%20inefficiency-FT-6-9-06.pdf

Many investors are so focused on shorter periods of time (a year or less) that they forget the value of any business is ALL of its (discounted) future free cash flow, which often means 10-20 years or more. David Herro:

I would assert the biggest reason quality companies sell at discounts to intrinsic value is time horizon. Without short-term visibility, most investors don’t have the conviction or courage to hold a stock that’s facing some sort of challenge, either internally or externally generated. It seems kind of ridiculous, but what most people in the market miss is that intrinsic value is the sum of ALL future cash flows discounted back to the present. It’s not just the next six months’ earnings or the next year’s earnings. To truly invest for the long term, you have to be able to withstand underperformance in the short term, and the fact of the matter is that most people can’t.

As Mason Hawkins observes, a company may be lagging now precisely because it’s making longer-term investments that will probably increase business value in the future:

Classic opportunities for us get back to time horizon. A company reports a bad quarter, which disappoints Wall Street with its 90-day focus, but that might be for explainable temporary reasons or even because the company is making very positive long-term investments in the business. Many times that investment increases the likely value of the company five years from now, but disappoints people who want the stock up tomorrow.

Whitney George:

We evaluate businesses over a full business cycle and probably our biggest advantage is an ability to buy things when most people can’t because the short-term outlook is lousy or very hard to judge. It’s a good deal easier to know what’s likely to happen than to know precisely when it’s going to happen.

In general, humans are impatient and often discount multi-year investment gains far too much. John Maynard Keynes:

Human nature desires quick results, there is a particular zest in making money quickly, and remoter gains are discounted by the average man at a very high rate.

 

SCREENING AND QUANTITATIVE MODELS


(Word cloud by Arloofs)

Automating the investment process, including screening, is more straightforward than ever, thanks to enormous advances in computing over the past two decades.

Will Browne:

We often start with screens on all aspects of valuation. There are characteristics that have been proven over long periods to be associated with above-average rates of return: low P/Es, discounts to book value, low debt/equity ratios, stocks with recent significant price declines, companies with patterns of insider buying and–something we’re paying a lot more attention to–stocks with high dividend yields.

Stephen Goddard:

Our basic screening process weights three factors equally: return on tangible capital, the multiple of EBIT to enterprise value, and free cash flow yield. We rank the universe we’ve defined on each factor individually from most attractive to least, and then combine the rankings and focus on the top 10%.
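Goddard’s three-factor ranking is easy to express in code. Here is a minimal sketch in Python using pandas, with a made-up five-stock universe; the column names and figures are purely illustrative:

```python
import pandas as pd

# Hypothetical universe: each row is a stock, each column a factor.
universe = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "return_on_tangible_capital": [0.25, 0.10, 0.40, 0.05, 0.18],
    "ebit_to_ev": [0.12, 0.08, 0.15, 0.03, 0.10],
    "fcf_yield": [0.09, 0.04, 0.11, 0.01, 0.07],
}).set_index("ticker")

# Rank each factor from most attractive (rank 1) to least attractive.
ranks = universe.rank(ascending=False)

# Weight the three factor rankings equally and combine them.
universe["combined_rank"] = ranks.mean(axis=1)

# Focus on the top 10% (here, the single best name out of five).
top_decile = universe.nsmallest(max(1, len(universe) // 10), "combined_rank")
print(top_decile)  # CCC ranks first on all three factors
```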

Carlo Cannell:

[We] basically spend our time trying to uncover the assorted investment misfits in the market’s underbrush that are largely neglected by the investment community. One of the key metrics we assign to our companies is an analyst ratio, which is simply the number of analysts who follow the company. The lower the better–as of the end of last year, about 65 percent of the companies in our portfolio had virtually no analyst coverage.

For some time now, it has been clear that simple quant models outperform experts in a wide variety of areas: https://boolefund.com/simple-quant-models-beat-experts-in-a-wide-variety-of-areas/

Quantitative value investor James O’Shaughnessy:

Models beat human forecasters because they reliably and consistently apply the same criteria time after time. Models never vary. They are never moody, never fight with their spouse, are never hung over from a night on the town, and never get bored. They don’t favor vivid, interesting stories over reams of statistical data. They never take anything personally. They don’t have egos. They’re not out to prove anything. If they were people, they’d be the death of any party.

People on the other hand, are far more interesting. It’s far more natural to react emotionally or to personalize a problem than it is to dispassionately review broad statistical occurrences–and so much more fun! It’s much more natural for us to look at the limited set of our personal experiences and then generalize from this small sample to create a rule-of-thumb heuristic. We are a bundle of inconsistencies, and although this tends to make us interesting, it plays havoc with our ability to successfully invest.

Buffett maintains (correctly) that the vast majority of investors, large or small, should invest in low-cost broad market index funds: https://boolefund.com/quantitative-microcap-value/

If you invest in a quantitative value fund focused on cheap micro caps with improving fundamentals, then you can reasonably expect to do about 7% (+/- 3%) better than the S&P 500 Index over time: https://boolefund.com/best-performers-microcap-stocks/

Will Browne:

When you have a model you believe in, that you’ve used for a long time and which is more empirical than intuitive, sticking with it takes the emotion away when markets are good or bad. That’s been a central element of our success. It’s the emotional dimension that drives people to make lousy, irrational decisions.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Deep Value: Profiting from Mean Reversion


(Image: Zen Buddha Silence by Marilyn Barbone.)

November 12, 2017

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion. Sometimes it seems that there are misconceptions about deep value investing.

  • First, deep value stocks have on occasion been called cheap relative to future growth. But it’s often more accurate to say that deep value stocks are cheap relative to normalized earnings or cash flows.
  • Second, the cheapness of deep value stocks has often been said to be relative to “net tangible assets.” However, in many cases, even including stocks at a discount to tangible assets, mean reversion relates to the future normalized earnings or cash flows that the assets can produce.
  • Third, typically more than half of deep value stocks underperform the market. And deep value stocks are more likely to be distressed than average stocks. Do these facts imply that a deep value investment strategy is riskier than average? No…

Have you noticed these misconceptions? I’m curious to hear your take. Please let me know.

Here are the sections in this blog post:

  • Introduction
  • Mean Reversion as “Return to Normal” instead of “Growth”
  • Revenues, Earnings, Cash Flows, NOT Asset Values
  • Is Deep Value Riskier?
  • A Long Series of Favorable Bets
  • “Cigar Butts” vs. See’s Candies
  • Microcap Cigar Butts

 

INTRODUCTION

Deep value stocks tend to fit two criteria:

  • Deep value stocks trade at depressed multiples.
  • Deep value stocks have depressed fundamentals – they have generally been doing terribly in terms of revenues, earnings, or cash flows, and often the entire industry is doing poorly.

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.

  • Low multiples include low P/E (price-to-earnings), low P/B (price-to-book), low P/CF (price-to-cash flow), and low EV/EBIT (enterprise value-to-earnings before interest and taxes).
  • Mean reversion implies that, in general, deep value stocks are underperforming their economic potential. On the whole, deep value stocks will experience better future economic performance than is implied by their current stock prices.

If you look at deep value stocks as a group, it’s a statistical fact that many will experience better revenues, earnings, or cash flows in the future than what is implied by their stock prices. This is due largely to mean reversion. The future economic performance of these deep value stocks will be closer to normal levels than their current economic performance.

Moreover, the stock price increases of the good future performers will outweigh the languishing stock prices of the poor future performers. This causes deep value stocks, as a group, to outperform the market over time.

Two important notes:

  1. Generally, for deep value stocks, mean reversion implies a return to more normal levels of revenues, earnings, or cash flows. It does not often imply growth above and beyond normal levels.
  2. For most deep value stocks, mean reversion relates to future economic performance and not to tangible asset value per se.

(1) Mean Reversion as Return to More Normal Levels

One of the best papers on deep value investing is by Josef Lakonishok, Andrei Shleifer, and Robert Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.” Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

LSV (Lakonishok, Shleifer, and Vishny) correctly point out that deep value stocks are better identified by using more than one multiple. LSV Asset Management currently manages $105 billion using deep value strategies that rely simultaneously on several metrics for cheapness, including low P/E and low P/CF.

  • In Quantitative Value (Wiley, 2012), Tobias Carlisle and Wesley Gray find that low EV/EBIT outperformed every other measure of cheapness, including composite measures.
  • However, James O’Shaughnessy, in What Works on Wall Street (McGraw-Hill, 2011), demonstrates – with great thoroughness – that, since the mid-1920s, composite approaches (low P/S, P/E, P/B, EV/EBITDA, P/FCF) have been the best performers.
  • Any single metric may be more easily arbitraged away by a powerful computerized approach. Walter Schloss once commented that low P/B was working less well because many more investors were using it. (In recent years, low P/B hasn’t worked.)

LSV explain why mean reversion is the essence of deep value investing. Investors, on average, are overly pessimistic about stocks at low multiples. Investors underestimate the mean reversion in future economic performance for these out-of-favor stocks.

However, in my view, the paper would be clearer if it used (in some but not all places) “return to more normal levels of economic performance” in place of “growth.” Often it’s a return to more normal levels of economic performance – rather than growth above and beyond normal levels – that defines mean reversion for deep value stocks.

(2) Revenues, Earnings, Cash Flows NOT Net Asset Values

Buying at a low price relative to tangible asset value is one way to implement a deep value investing strategy. Many value investors have successfully used this approach. Examples include Ben Graham, Walter Schloss, Peter Cundill, John Neff, and Marty Whitman.

Warren Buffett used this approach in the early part of his career. Buffett learned this method from his teacher and mentor, Ben Graham. Graham called this the “net-net” approach. You take net working capital minus ALL liabilities. If the stock price is below that level, and if you buy a basket of such “net-net’s,” you can’t help but do well over time. These areextremely cheap stocks, on average. (The only catch is that there must be enough net-net’s in existence to form a basket, which is not always the case.)

Buffett on “cigar butts”:

…I call it the cigar butt approach to investing. You walk down the street and you look around for a cigar butt someplace. Finally you see one and it is soggy and kind of repulsive, but there is one puff left in it. So you pick it up and the puff is free – it is a cigar butt stock. You get one free puff on it and then you throw it away and try another one. It is not elegant. But it works. Those are low return businesses.

Link: http://intelligentinvestorclub.com/downloads/Warren-Buffett-Florida-Speech.pdf

But most net-nets are NOT liquidated. Rather, there is mean reversion in their future economic performance – whether revenues, earnings, or cash flows. That’s not to say there aren’t some bad businesses in this group. For net-nets, when economic performance returns to more normal levels, typically you sell the stock. You don’t (usually) buy and hold net-nets.

Sometimes net-nets are acquired. But in many of these cases, the acquirer is focused mainly on the earnings potential of the assets. (Non-essential assets may be sold, though.)

In sum, the specific deep value method of buying at a discount to net tangible assets has worked well in general ever since Graham started doing it. And net tangible assets do offer additional safety. That said, when these particular cheap stocks experience mean reversion, often it’s because revenues, earnings, or cash flows return to “more normal” levels. Actual liquidation is rare.

 

IS DEEP VALUE RISKIER?

According to a study done by Joseph Piotroski from 1976 to 1996 – discussed below – although a basket of deep value stocks clearly beats the market over time, only 43% of deep value stocks outperform the market, while 57% underperform. By comparison, an average stock has a 50% chance of outperforming the market and a 50% chance of underperforming.

Let’s assume that the average deep value stock has a 57% chance of underperforming the market, while an average stock has only a 50% chance of underperforming. This is a realistic assumption not only because of Piotroski’s findings, but also because the average deep value stock is more likely to be distressed (or to have problems) than the average stock.

Does it follow that the reason deep value investing does better than the market over time is that deep value stocks are riskier than average stocks?

It is widely accepted that deep value investing does better than the market over time. But there is still disagreement about how risky deep value investing is. Strict believers in the EMH (Efficient Markets Hypothesis) – such as Eugene Fama and Kenneth French – argue that value investing must be unambiguously riskier than simply buying an S&P 500 Index fund. On this view, the only way to do better than the market over time is by taking more risk.

Now, it is generally true that the average deep value stock is more likely to underperform the market than the average stock. And the average deep value stock is more likely to be distressed than the average stock.

But LSV show that a deep value portfolio does better than an average portfolio, especially during down markets. This means that a basket of deep value stocks is less risky than a basket of average stocks.

  • A “portfolio” or “basket” of stocks refers to a group of stocks. Statistically speaking, there must be at least 30 stocks in the group. In the case of LSV’s study – like most academic studies of value investing – there are hundreds of stocks in the deep value portfolio. (The results are similar over time whether you have 30 stocks or hundreds.)

Moreover, a deep value portfolio only has slightly more volatility than an average portfolio, not nearly enough to explain the significant outperformance. In fact, when looked at more closely, deep value stocks as a group have slightly more volatility mainly because of upside volatility – relative to the broad market – rather than because of downside volatility. This is captured not only by the clear outperformance of deep value stocks as a group over time, but also by the fact that deep value stocks do much better than average stocks in down markets.

Deep value stocks, as a group, not only outperform the market, but are less risky. Ben Graham, Warren Buffett, and other value investors have been saying this for a long time. After all, the lower the stock price relative to the value of the business, the less risky the purchase, on average. Less downside implies more upside.

 

A LONG SERIES OF FAVORABLE BETS

Let’s continue to assume that the average deep value stock has a 57% chance of underperforming the market. And the average deep value stock has a greater chance of being distressed than the average stock. Does that mean that the average individual deep value stock is riskier than the average stock?

No, because the expected return on the average deep value stock is higher than the expected return on the average stock. In other words, on average, a deep value stock has more upside than downside.

Put very crudely, in terms of expected value:

[(43% x upside) – (57% x downside)] > [avg. return]

43% times the upside, minus 57% times the downside, is greater than the return from the average stock (or from the S&P 500 Index).

The crucial issue relates to making a long series of favorable bets. Since we’re talking about a long series of bets, let’s again consider a portfolio of stocks.

  • Recall that a “portfolio” or “basket” of stocks refers to a group of at least 30 stocks.

A portfolio of average stocks will simply match the market over time. That’s an excellent result for most investors, which is why most investors should just invest in index funds: https://boolefund.com/warren-buffett-jack-bogle/

A portfolio of deep value stocks will, over time, do noticeably better than the market. Year in and year out, approximately 57% of the deep value stocks will underperform the market, while 43% will outperform. But the overall outperformance of the 43% will outweigh the underperformance of the 57%, especially over longer periods of time. (57% and 43% are used for illustrative purposes here. The actual percentages vary.)

Say that you have an opportunity to make the same bet 1,000 times in a row, and that the bet is as follows: You bet $1. You have a 60% chance of losing $1, and a 40% chance of winning $2. This is a favorable bet because the expected value is positive: 40% x $2 = $0.80, while 60% x $1 = $0.60. If you made this bet repeatedly over time, you would average $0.20 profit on each bet, since $0.80 – $0.60 = $0.20.

If you make this bet 1,000 times in a row, then roughly speaking, you will lose 60% of them (600 bets) and win 40% of them (400 bets). But your profit will be about $200. That’s because 400 x $2 = $800, while 600 x $1 = $600. $800 – $600 = $200.

Systematically investing in deep value stocks is similar to the bet just described. You may lose 57% of the bets and win 43% of the bets. But over time, you will almost certainly profit because the average upside is greater than the average downside. Your expected return is also higher than the market return over the long term.
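A quick Monte Carlo sketch in Python confirms the arithmetic of the example above (the 40/60 odds and the $2/$1 payoffs are the ones just described):

```python
import random

random.seed(42)

def run_series(n_bets=1000, p_win=0.40, win_amount=2.0, loss_amount=1.0):
    """Simulate one series of identical favorable bets; return total profit."""
    profit = 0.0
    for _ in range(n_bets):
        if random.random() < p_win:
            profit += win_amount
        else:
            profit -= loss_amount
    return profit

# The average profit over many 1,000-bet series should approach
# 1000 * (0.40 * 2.0 - 0.60 * 1.0) = $200.
trials = [run_series() for _ in range(2000)]
print(sum(trials) / len(trials))  # roughly 200
```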

 

“CIGAR BUTTS” vs. SEE’S CANDIES

In his 1989 Letter to Shareholders, Buffett writes about his “Mistakes of the First Twenty-Five Years,” including a discussion of “cigar butt” (deep value) investing:

My first mistake, of course, was in buying control of Berkshire. Though I knew its business – textile manufacturing – to be unpromising, I was enticed to buy because the price looked cheap. Stock purchases of that kind had proved reasonably rewarding in my early years, though by the time Berkshire came along in 1965 I was becoming aware that the strategy was not ideal.

If you buy a stock at a sufficiently low price, there will usually be some hiccup in the fortunes of the business that gives you a chance to unload at a decent profit, even though the long-term performance of the business may be terrible. I call this the ‘cigar butt’ approach to investing. A cigar butt found on the street that has only one puff left in it may not offer much of a smoke, but the ‘bargain purchase’ will make that puff all profit.

Link: http://www.berkshirehathaway.com/letters/1989.html

Buffett has made it clear that cigar butt (deep value) investing does work. In fact, fairly recently, Buffett bought a basket of cigar butts in South Korea. The results were excellent. But he did this in his personal portfolio.

This highlights a major reason why Buffett evolved from investing in cigar butts to investing in higher quality businesses: size of investable assets. When Buffett was managing a few hundred million dollars or less, which includes when he managed an investment partnership, Buffett achieved outstanding results in part by investing in cigar butts. But when investable assets swelled into the billions of dollars at Berkshire Hathaway, Buffett began investing in higher quality companies.

  • Cigar butt investing works best for micro caps. But micro caps won’t move the needle if you’re investing many billions of dollars.

The idea of investing in higher quality companies is simple: If you can find a business with a sustainably high ROE – based on a sustainable competitive advantage – and if you can hold that stock for a long time, then your returns as an investor will approximate the ROE (return on equity). This assumes that the company can continue to reinvest all of its earnings at the same ROE, which is extremely rare when you look at multi-decade periods.
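A simple compounding sketch makes the point concrete. The numbers below are hypothetical: a business that earns a sustained 20% ROE and reinvests every dollar of earnings at that same ROE:

```python
def compound_book_value(initial_equity, roe, years):
    """Equity compounds at the ROE when all earnings are reinvested at that ROE."""
    equity = initial_equity
    for _ in range(years):
        equity *= 1 + roe
    return equity

# Hypothetical: $100 of equity earning a sustained 20% ROE for 30 years.
final = compound_book_value(100.0, 0.20, 30)
print(f"${final:,.0f}")  # about $23,738, a roughly 237x increase
```

A long-term shareholder’s return approximates that 20% compounding, which is why a sustainably high ROE is so valuable, and also why it is so rare.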

  • The quintessential high-quality business that Buffett and Munger purchased for Berkshire Hathaway is See’s Candies. They paid $25 million for $8 million in tangible assets in 1972. Since then, See’s Candies has produced over $2 billion in (pre-tax) earnings, while only requiring a bit over $40 million in reinvestment.
  • See’s turns out more than $80 million in profits each year. That’s over 100% ROE (return on equity), which is extraordinary. But that’s based mostly on assets in place. The company has not been able to reinvest most of its earnings. Instead, Buffett and Munger have invested the massive excess cash flows in other good opportunities – averaging over 20% annual returns on these other investments (for most of the period from 1972 to present).

Furthermore, buying and holding stock in a high-quality business brings enormous tax advantages over time because you never have to pay taxes until you sell. Thus, as a high-quality business – with sustainably high ROE – compounds value over many years, a shareholder who never sells receives the maximum benefit of this compounding.

Yet it’s extraordinarily difficult to find a business that can sustain ROE at over 20% – including reinvested earnings – for decades. Buffett has argued that cigar butt (deep value) investing produces more dependable results than investing exclusively in high-quality businesses. Very often investors buy what they think is a higher-quality business, only to find out later that they overpaid because the future performance does not match the high expectations that were implicit in the purchase price. Indeed, this is what LSV show in their famous paper (discussed above) in the case of “glamour” (or “growth”) stocks.

 

MICROCAP CIGAR BUTTS

Buffett has said that you can do quite well as an investor, if you’re investing smaller amounts, by focusing on cheap micro caps. In fact, Buffett has maintained that he could get 50% per year if he could invest only in cheap micro caps.

Investing systematically in cheap micro caps can often lead to higher long-term results than the majority of approaches that invest in high-quality stocks.

First, micro caps, as a group, far outperform every other category. See the historical performance here: https://boolefund.com/best-performers-microcap-stocks/

Second, cheap micro caps do even better. Systematically buying at low multiples works over the course of time, as clearly shown by LSV and many others.

Finally, if you apply the Piotroski F-Score to screen cheap micro caps for improving fundamentals, performance is further boosted: The biggest improvements in performance are concentrated in cheap micro caps with no analyst coverage. See: https://boolefund.com/joseph-piotroski-value-investing/
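Putting these pieces together, a simplified screen might look like the following sketch. The columns, cutoffs, and figures are hypothetical illustrations, not the Boole Fund’s exact criteria:

```python
import pandas as pd

# Hypothetical data: market cap in millions, plus value and quality metrics.
stocks = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC", "DDD"],
    "market_cap": [80, 250, 1200, 45],
    "ev_ebitda": [2.1, 5.5, 9.0, 3.0],
    "f_score": [8, 6, 7, 7],
    "analysts": [0, 2, 12, 0],
}).set_index("ticker")

screen = stocks[
    (stocks["market_cap"] < 300)   # micro caps (illustrative cutoff)
    & (stocks["ev_ebitda"] < 4.0)  # cheap on an enterprise multiple
    & (stocks["f_score"] >= 7)     # improving fundamentals (Piotroski)
    & (stocks["analysts"] <= 3)    # little or no analyst coverage
]
print(screen)  # AAA and DDD pass every test
```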

 


Ben Graham Was a Quant


(Image: Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011). In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors. Having a clearly defined quantitative investment strategy that you stick with over the long term–both when the strategy is in favor and when it’s not–is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches. Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors. See: https://boolefund.com/warren-buffett-jack-bogle/

An index fund tries to copy an index, which is itself typically based on companies of a certain size. By contrast, quantitative value investing is based on metrics that indicate undervaluation.

 

QUANTITATIVE VALUE INVESTING

Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years. I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession. I think we can do it successfully with a few techniques and simple principles. The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches. The thing that I have been emphasizing in my own work for the last few years has been the group approach. To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company…

I am just finishing a 50-year study – the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group. I found the results were very good for 50 years. They certainly did twice as well as the Dow Jones. And so my enthusiasm has been transferred from the selective to the group approach. What I want is an earnings ratio twice as good as the bond interest ratio typically for most years. One can also apply a dividend criterion or an asset value criterion and get good results. My research indicates the best results come from simple earnings criterions.

Imagine–there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work. It seems too good to be true. But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up.

See: http://www.cfapubs.org/doi/pdf/10.2470/rf.v1977.n1.4731

Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.
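Graham’s “earnings ratio twice as good as the bond interest ratio” translates directly into code. A minimal sketch with hypothetical numbers:

```python
def passes_graham_earnings_test(earnings_per_share, price, aaa_bond_yield):
    """Graham's simple criterion: the earnings yield (E/P) should be at
    least twice the high-grade bond yield."""
    earnings_yield = earnings_per_share / price
    return earnings_yield >= 2 * aaa_bond_yield

# Hypothetical: $2.50 of EPS on a $20 stock (a 12.5% earnings yield)
# versus a 5% AAA bond yield.
print(passes_graham_earnings_test(2.50, 20.0, 0.05))  # True
```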

Greiner says there are three groups of quants:

  • The first group is focused on the EMH, and on creating ETFs and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors. They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not. This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others. Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct. – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that. – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well. Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient. The difference between these propositions is night and day.

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.

 

DESPERATELY SEEKING ALPHA

Greiner refers to the July 12, 2010 issue of Barron’s. Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December of 2009. 87 of the 248 funds were gone completely, while the other funds had been downgraded. Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less risk-free return) to calculate a slope and an intercept, which are called beta and alpha. Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return. Alpha, on the other hand, is the intercept, and this implies how much return the portfolio is obtaining in and above the market, separate from any other factors. (page 16)
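In practice, estimating beta and alpha is a single linear regression of portfolio excess returns on market excess returns. A minimal numpy sketch with simulated (made-up) monthly data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly excess returns (60 months).
market_excess = rng.normal(0.005, 0.04, 60)
portfolio_excess = 0.002 + 1.1 * market_excess + rng.normal(0, 0.01, 60)

# Regress portfolio excess returns on market excess returns:
# the slope is beta, the intercept is alpha.
beta, alpha = np.polyfit(market_excess, portfolio_excess, 1)
print(f"beta  = {beta:.2f}")   # close to the true 1.1
print(f"alpha = {alpha:.4f}")  # close to the true 0.002 per month
```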

But risk involves not just individual company risk. It also involves how one company’s stock is correlated with the stocks of other companies. If you can properly estimate the correlations among various stocks, then using Markowitz’ approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.
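For example, given an estimated covariance matrix, the fully invested minimum-variance portfolio in Markowitz’s framework has the closed-form weights w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A small numpy sketch with a hypothetical three-stock covariance matrix:

```python
import numpy as np

# Hypothetical annualized covariance matrix for three stocks.
cov = np.array([
    [0.040, 0.006, 0.004],
    [0.006, 0.090, 0.010],
    [0.004, 0.010, 0.025],
])

ones = np.ones(len(cov))
inv_cov = np.linalg.inv(cov)

# Minimum-variance weights: w = (Cov^-1 @ 1) / (1' @ Cov^-1 @ 1).
weights = inv_cov @ ones / (ones @ inv_cov @ ones)
print(weights.round(3), f"sum = {weights.sum():.2f}")  # weights sum to 1
```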

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks. Graham was not concerned about any correlations among the cheap stocks. As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha. (Beta as a concept has some obvious problems.) But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17). Moreover, alpha is excess return relative to an index or benchmark. We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors. Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data). (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return. Greiner explains:

The stock market is not a repeatable event. Every day is an out-of-sample environment. The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets. (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable. It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is coined factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)

 

RISKY BUSINESS

Risk means more things can happen than will happen. – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed. The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed. Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management. Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk. Clearly there have been countless situations (e.g., very overvalued stocks) when stock prices had not been volatile, but risk was clearly high. Similarly, there have been many situations (e.g., very undervalued stocks) when volatility had been high, but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear. In these situations, a spike in volatility is a symptom of risk. At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values. So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear. Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich. Close the doors. Be fearful when others are greedy. Be greedy when others are fearful. – Warren Buffett

So how do you figure out risk exposures? It is often a difficult thing to do. Greiner defines ELE events as extinction-level events, or extreme-extreme events. If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.” (page 33)

But financial and economic history, in general, is not a certain guide to the future. Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing. Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value–whether net asset value or earnings power. This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience. When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences. This is what quants do. They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen. We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety. Do we not wear seat belts because of the odd chance of the tractor-trailer collision? Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan. Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism. The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes. In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow. The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks. Eventually, the interconnectedness of the associations among sticks results in an avalanche. Likewise, so behaves the market. (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so. Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation. Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out. Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful. Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty. Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event. When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before. This is not really very helpful. The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards. That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner (a code sketch follows the list):

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor. If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe. Do the same for all other factors. The numerical B/P for a stock is then termed exposure. Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta. The variance and covariance of the beta time series act as proxies for the variance of the stocks. These are the components of the covariance matrix. On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, calculate expected returns, multiply the stock weight vector by the exposure matrix times the beta vector for a given date. The exposure matrix is N x M, where N is the number of stocks and M is the number of factors. The covariance matrix is M x M, and the exposed risks, predicted through the model, are derived from it.
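Steps 3 through 7 amount to one cross-sectional regression per date, with the resulting betas collected into a time series whose covariance matrix becomes the risk model. A highly simplified numpy sketch, with random numbers standing in for real exposures and returns:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stocks, n_factors, n_months = 200, 3, 60

factor_returns = []
for _ in range(n_months):
    # Step 3: factor exposures for each stock (e.g., z-scored B/P, E/P, FCF/P).
    exposures = rng.normal(0, 1, (n_stocks, n_factors))
    # In practice these would be the next period's real stock returns.
    returns = rng.normal(0.01, 0.05, n_stocks)
    # Steps 4-5: one cross-sectional regression per date, one beta per factor.
    betas, *_ = np.linalg.lstsq(exposures, returns, rcond=None)
    factor_returns.append(betas)

# Steps 6-7: the beta time series yields the factor covariance matrix.
factor_returns = np.array(factor_returns)          # shape: (60, 3)
factor_cov = np.cov(factor_returns, rowvar=False)  # 3 x 3 covariance matrix
print(factor_cov.round(5))
```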

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry. Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in. This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 one industry and 0.31 another. Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries. However, FactSet’s MAC model does include this operation. (page 45)

 

BETA IS NOT “SHARPE” ENOUGH

Value investors like Ben Graham know that price variability is not risk. Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power). Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good. Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky. In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power. Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value. (pages 55-56)

Saying volatility represents risk is to mistake the often emotional opinions of Mr. Market with the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, if the errors are random – serially uncorrelated, independent, and identically distributed with finite variance – then their sums tend toward a normal distribution (this is the central limit theorem). The problem in modeling stock returns is that the mean return varies with time and the errors are not random:

Of course there are many, many types of distributions. For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete. Some of these are asymmetric about the mean (first moment) and some are not. Some have fat tails and some do not. You can even have distributions with infinite second moments (infinite variance). There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance. Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause. Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly. (page 58)

Even though market prices have been known to have non-normal distributions and to maintain statistical dependence, most people modeling market prices have downplayed this information. It’s just been much easier to assume that market returns follow a random walk resulting in random errors, which can be easily modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave. Market returns tend to have fatter tails. But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of more accurate distributions are not fully clear.

You can make an analogy with behavioral finance. Thousands of experiments have now established that many people behave less than fully rationally, especially when making decisions under uncertainty. However, the most useful economic models in many situations are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH? It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful. In a world of Econs, I believe that the EMH would be true. And it would not have been possible to do research in behavioral finance without the rational model as a starting point. Without the rational framework, there are no anomalies from which we can detect misbehavior. Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research. We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have. (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed. Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’ There are definitely anomalies: sometimes the market overreacts, and sometimes it underreacts. But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right. The price is often wrong, and sometimes very wrong, says Thaler. However, that doesn’t mean that you can beat the market. It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution. (page 60)

One problem with beta is that it has covariance in the numerator, and covariance only captures the linear part of a relationship. If two variables are statistically independent, their covariance is zero; but a low or zero covariance does not, by itself, tell you that the portfolio is truly independent of the benchmark. Greiner explains that you can check for linear dependence as follows: when the benchmark has moved X in the past, has the portfolio consistently moved 0.92*X? If yes, then the portfolio and the benchmark are linearly dependent, and the portfolio’s return can be expressed as a simple multiple of the benchmark’s, which seems to give beta some validity. A portfolio can depend on the benchmark in a purely nonlinear way, however, and in that case beta may be very low or even zero despite the dependence, in which case beta is meaningless. The sketch below illustrates both cases.
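Here is a minimal Python sketch of both cases, using simulated returns (all numbers are made up):

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated benchmark returns, symmetric around zero.
    bench = rng.normal(0.0, 0.02, 100_000)

    def beta(portfolio, benchmark):
        # Classic beta: covariance with the benchmark over benchmark variance.
        return np.cov(portfolio, benchmark)[0, 1] / np.var(benchmark)

    # Case 1: linear dependence -- the portfolio moves 0.92 * the benchmark.
    print(beta(0.92 * bench, bench))   # ~0.92, as expected

    # Case 2: strong but purely nonlinear dependence -- beta comes out near
    # zero anyway, because covariance only picks up the linear part.
    print(beta(bench ** 2, bench))     # ~0.00, despite total dependence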

Another example would be a market sell-off in which most stocks become highly correlated. Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund. The distribution of returns is more like a Frechet distribution than a normal distribution in two ways: it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74). The Magellan Fund returns also have a fat tail on the right side of the distribution. This is where the Frechet misses. But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution. Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.
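A quick way to see non-normality in a return series is to look at the third and fourth moments directly. The sketch below uses simulated fat-tailed data as a stand-in (Greiner works with the actual Magellan returns, which are not reproduced here):

    import numpy as np
    from scipy import stats

    # Stand-in for ten years of monthly fund returns: a Student's-t draw
    # gives the fat tails and high peak that real return series tend to show.
    returns = 0.01 * stats.t.rvs(df=3, size=120, random_state=42)

    print(stats.skew(returns))       # a normal distribution would give ~0
    print(stats.kurtosis(returns))   # excess kurtosis; normal would give ~0

    # A formal test of normality (D'Agostino-Pearson); a small p-value
    # means the normal distribution is a poor fit.
    stat, p = stats.normaltest(returns)
    print(p)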

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Future of the Mind


(Image: Zen Buddha Silence by Marilyn Barbone.)

August 20, 2017

This week’s blog post covers another book by the theoretical physicist Michio Kaku–The Future of the Mind (First Anchor Books, 2015).

Most of the wealth we humans have created is a result of technological progress (in the context of some form of capitalism plus the rule of law). Most future wealth will result directly from breakthroughs in physics, artificial intelligence, genetics, and other sciences. This is why AI is fascinating in general (not just for investing). AI–in combination with other technologies–may eventually turn out to be the most transformative technology of all time.

 

A PHYSICIST’S DEFINITION OF CONSCIOUSNESS

Physicists have been quite successful historically because of their ability to gather data, to measure ever more precisely, and to construct testable, falsifiable mathematical models to predict the future based on the past. Kaku explains:

When a physicist first tries to understand something, first he collects data and then he proposes a “model,” a simplified version of the object he is studying that captures its essential features. In physics, the model is described by a series of parameters (e.g., temperature, energy, time). Then the physicist uses the model to predict its future evolution by simulating its motions. In fact, some of the world’s largest supercomputers are used to simulate the evolution of models, which can describe protons, nuclear explosions, weather patterns, the big bang, and the center of black holes. Then you create a better model, using more sophisticated parameters, and simulate it in time as well. (page 42)

Kaku then writes that he’s taken bits and pieces from fields such as neurology and biology in order to come up with a definition of consciousness:

Consciousness is a process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., find mates, food, shelter).

Kaku emphasizes that humans use the past to predict the future, whereas most animals are focused only on the present or the immediate future.

Kaku writes that one can rate different levels of consciousness based on the definition. The lowest level of consciousness is Level 0, where an organism has limited mobility and creates a model using feedback loops in only a few parameters (e.g., temperature). Kaku gives the thermostat as an example. If the temperature gets too hot or too cold, the thermostat registers that fact and then adjusts the temperature accordingly using an air conditioner or heater. Kaku says each feedback loop is “one unit of consciousness,” so the thermostat – with only one feedback loop – would have consciousness of Level 0:1.
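As a toy illustration of that single feedback loop (a hypothetical sketch, not code from the book):

    # One pass through the thermostat's single feedback loop: measure one
    # parameter (temperature) and act to keep it near a goal.
    def thermostat_step(temp_c, target_c=21.0, band=1.0):
        if temp_c < target_c - band:
            return "heat"   # too cold -> turn the heater on
        if temp_c > target_c + band:
            return "cool"   # too hot -> turn the air conditioner on
        return "idle"       # within the comfort band -> do nothing

    for reading in (17.5, 20.8, 24.2):
        print(reading, "->", thermostat_step(reading))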

Organisms that are mobile and have a central nervous system have Level I consciousness. There’s a new set of parameters–relative to Level 0–based on changing locations. Reptiles are an example of Level I consciousness. The reptilian brain may have a hundred feedback loops based on its senses, etc. The totality of these feedback loops gives the reptile a “mental picture” of where it is in relation to various objects (including prey), notes Kaku.

Animals exemplify Level II consciousness. The number of feedback loops jumps exponentially, says Kaku. Many animals have complex social structures. Kaku explains that the limbic system includes the hippocampus (for memories), the amygdala (for emotions), and the thalamus (for sensory information).

You could rank the specific level of Level II consciousness of an animal by listing the total number of distinct emotions and social behaviors. So, writes Kaku, if there are ten wolves in the wolf pack, and each wolf interacts with all the others with fifteen distinct emotions and gestures, then a first approximation would be that wolves have Level II:150 consciousness. (Of course, there are caveats, since evolution is never clean and precise, says Kaku.)

 

LEVEL III CONSCIOUSNESS: SIMULATING THE FUTURE

Kaku observes that there is a continuum of consciousness from the most basic organisms up to humans. Kaku quotes Charles Darwin:

The difference between man and the higher animals, great as it is, is certainly one of degree and not of kind.

Kaku defines human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future. This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

Kaku explains that we as humans have so many feedback loops that we need a “CEO”–an expanded prefrontal cortex that can analyze all the data logically and make decisions. More precisely, Kaku writes that neurologist Michael Gazzaniga has identified area 10, in the lateral prefrontal cortex, which is twice as big in humans as in apes. Area 10 is where memory, planning, abstract thinking, rule learning, and the picking out of relevant information take place. Kaku says he will refer to this region, roughly speaking, as the dorsolateral prefrontal cortex.

Most animals, by contrast, do not think and plan, but rely on instinct. For instance, notes Kaku, animals do not plan to hibernate, but react instinctually when the temperature drops. Predators plan, but only for the immediate future. Primates plan a few hours ahead.

Humans, too, rely on instinct and emotion. But humans also analyze and evaluate information, and run mental simulations of the future–even hundreds or thousands of years into the future. This, writes Kaku, is how we as humans try to make the best decision in pursuit of a goal. Of course, the ability to simulate various future scenarios gives humans a great evolutionary advantage for things like evading predators and finding food and mates.

As humans, we have so many feedback loops, says Kaku, that it would be a chaotic sensory overload if we didn’t have the “CEO” in the dorsolateral prefrontal cortex. We think in terms of chains of causality in order to predict future scenarios. Kaku explains that the essence of humor is simulating the future but then having an unexpected punch line.

Children play games largely in order to simulate specific adult situations. When adults play various games like chess, bridge, or poker, they mentally simulate various scenarios.

Kaku explains the mystery of self-awareness:

Self-awareness is creating a model of the world and simulating the future in which you appear.

As humans, we constantly imagine ourselves in various future scenarios. In a sense, we are continuously running “thought experiments” about our lives in the future.

Kaku writes that the medial prefrontal cortex appears to be responsible for creating a coherent sense of self out of the various sensations and thoughts bombarding our brains. Furthermore, the left brain fits everything together in a coherent story even when the data don’t make sense. Dr. Michael Gazzaniga was able to show this by running experiments on split-brain patients.

Kaku speculates that humans can reach better conclusions if the brain receives a great deal of competing data. With enough data and with practice and experience, the brain can often reach correct conclusions.

At the beginning of the next section–Mind Over Matter–Kaku quotes Harvard psychologist Steven Pinker:

The brain, like it or not, is a machine. Scientists have come to that conclusion, not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain.

 

DARPA

DARPA is the Pentagon’s Defense Advanced Research Projects Agency. Kaku writes that DARPA has been central to some of the most important technological breakthroughs of the twentieth century.

President Dwight Eisenhower set up DARPA originally as a way to compete with the Russians after they launched Sputnik into orbit in 1957. Over the years, some of DARPA’s projects became so large that they were spun off as separate entities, including NASA.

DARPA’s “only charter is radical innovation.” DARPA scientists have always pushed the limits of what is physically possible. One of DARPA’s early projects was Arpanet, a telecommunications network to connect scientists during and after World War III. After the breakup of the Soviet bloc, the National Science Foundation decided to declassify Arpanet and give away the codes and blueprints for free. This would eventually become the internet.

DARPA helped create Project 57, which was a top-secret project for guiding ballistic missiles to specific targets. This technology later became the foundation for the Global Positioning System (GPS).

DARPA has also been a key player in other technologies, including cell phones, night-vision goggles, telecommunications advances, and weather satellites, says Kaku.

Kaku writes that, with a budget of over $3 billion, DARPA has recently focused on the brain-machine interface. Kaku quotes former DARPA official Michael Goldblatt:

Imagine if soldiers could communicate by thought alone… Imagine the threat of biological attack being inconsequential. And contemplate, for a moment, a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-through. As impossible as these visions sound or as difficult as you might think the task would be, these visions are the everyday work of the Defense Sciences Office [a branch of DARPA]. (page 74)

Goldblatt, notes Kaku, thinks the long-term legacy of DARPA will be human enhancement. Goldblatt’s daughter has cerebral palsy and has been confined to a wheelchair all her life. Goldblatt is highly motivated not only to help millions of people in the future and create a legacy, but also to help his own daughter.

 

TELEKINESIS

Cathy Hutchinson became a quadriplegic after suffering a massive stroke. But in May 2012, scientists from Brown University placed a tiny chip on top of her brain–called Braingate–which is connected by wires to a computer. (The chip has ninety-six electrodes for picking up brain impulses.) Her brain could then send signals through the computer to control a mechanical robotic arm. She reported her great excitement and said she knows she will get robotic legs eventually, too. This might happen soon, says Kaku, since the field of cyber prosthetics is advancing fast.

Scientists at Northwestern placed a chip with 100 electrodes on the brain of a monkey. The signals were carefully recorded while the monkey performed various tasks involving the arms. Each task would involve a specific firing of neurons, which the scientists eventually were able to decipher.

Next, the scientists took the signal sequences from the chip and, instead of sending them to a mechanical arm, sent the signals to the monkey’s own arm. Eventually the monkey learned to control its own arm via the computer chips. (The reason 100 electrodes are enough is that they were placed on the output neurons. So the monkey’s brain had already done the complex processing involving millions of neurons by the time the signals reached the electrodes.)

This device is one of many that Northwestern scientists are testing. These devices, which continue to be developed, can help people with spinal cord injuries.
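The book does not spell out the decoding math, but a common approach in published brain-machine-interface work is a simple linear decoder: regress recorded firing rates against measured arm movement, then use the fitted map to turn new firing patterns into movement commands. A hypothetical sketch, with all data simulated:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated training data: firing rates from 100 electrodes over 500
    # time bins, plus the 2-D hand velocity measured at the same times.
    rates = rng.poisson(5.0, size=(500, 100)).astype(float)
    hidden_map = rng.normal(0.0, 0.1, size=(100, 2))
    velocity = rates @ hidden_map + rng.normal(0.0, 0.5, size=(500, 2))

    # Fit the decoder by least squares: velocity ~ rates @ W.
    W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

    # Decode a new firing pattern into a predicted hand velocity, which
    # could then drive a robotic arm (or the monkey's own arm).
    new_rates = rng.poisson(5.0, size=(1, 100)).astype(float)
    print(new_rates @ W)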

Kaku observes that much of the funding for these developments comes from a DARPA project called Revolutionizing Prosthetics, a $150 million effort begun in 2006. Retired U.S. Army colonel Geoffrey Ling, a neurologist with several tours of duty in Iraq and Afghanistan, is a central figure behind Revolutionizing Prosthetics. Dr. Ling was appalled by the suffering caused by roadside bombs. In the past, many of these brave soldiers would have died. Today, many more can be saved; however, more than 1,300 of them have returned from the Middle East having lost limbs.

Dr. Ling, with funding from the Pentagon, instructed his staff to figure out how to replace lost limbs within five years. Ling:

They thought we were crazy. But it’s in insanity that things happen.

Kaku continues:

Spurred into action by Dr. Ling’s boundless enthusiasm, his crew has created miracles in the laboratory. For example, Revolutionary Prosthetics funded scientists at the Johns Hopkins Applied Physics Laboratory, who have created the most advanced mechanical arm on Earth, which can duplicate nearly all the delicate motions of the fingers, hand, and arm in three dimensions. It is the same size and has the same strength and agility as a real arm. Although it is made of steel, if you covered it up with flesh-colored plastic, it would be nearly indistinguishable from a real arm.

This arm was attached to Jan Sherman, a quadriplegic who had suffered from a genetic disease that damaged the connection between her brain and her body, leaving her completely paralyzed from the neck down. At the University of Pittsburgh, electrodes were placed directly on top of her brain, which were then connected to a computer and then to a mechanical arm. Five months after surgery to attach the arm, she appeared on 60 Minutes. Before a national audience, she cheerfully used her new arm to wave, greet the host, and shake his hand. She even gave him a fist bump to show how sophisticated the arm was.

Dr. Ling says, ‘In my dream, we will be able to take this into all sorts of patients, patients with strokes, cerebral palsy, and the elderly.‘ (page 84)

Dr. Miguel Nicolelis of Duke University is pursuing novel applications of the brain-machine interface (BMI). Dr. Nicolelis has demonstrated that BMI can be done across continents. He put a chip on a monkey’s brain. The chip was connected to the internet. When the monkey was walking on a treadmill in North Carolina, the signals were sent to a robot in Kyoto, Japan, which performed the same walking motions.

Dr. Nicolelis is also working on the problem that today’s prosthetic hands lack a sense of touch. Dr. Nicolelis is trying to create a direct brain-to-brain interface to overcome this challenge. Messages would go from the brain to the mechanical arm, and then directly back to the brain, bypassing the stem altogether. This is a brain-machine-brain interface (BMBI).

Dr. Nicolelis connected the motor cortex of rhesus monkeys to mechanical arms. The mechanical arms have sensors, and send signals back to the brain via electrodes connected to the somatosensory cortex (which registers the sensation of touch). Dr. Nicolelis invented a new code to represent different surfaces. After a month of practice, the brain learns the new code and can thus distinguish among different surfaces.

Dr. Nicolelis told Kaku that something like the holodeck from Star Trek–where you wander in a virtual world, but feel sensations when you bump into virtual objects–will be possible in the future. Kaku writes:

The holodeck of the future might use a combination of two technologies. First, people in the holodeck would wear internet contact lenses, so that they would see an entirely new virtual world everywhere they looked. The scenery in your contact lens would change instantly with the push of a button. And if you touched any object in this world, signals sent into the brain would simulate the sensation of touch, using BMBI technology. In this way, objects in the virtual world you see inside your contact lens would feel solid. (page 87)

Scientists have begun to explore an “Internet of the mind,” or brain-net. In 2013, scientists went beyond animal studies and demonstrated the first human brain-to-brain communication.

This milestone was achieved at the University of Washington, with one scientist sending a brain signal (move your right arm) to another scientist. The first scientist wore an EEG helmet and played a video game. He fired a cannon by imagining moving his right arm, but was careful not to move it physically.

The signal from the EEG helmet was sent over the Internet to another scientist, who was wearing a transcranial magnetic helmet carefully placed over the part of his brain that controlled his right arm. When the signal reached the second scientist, the helmet would send a magnetic pulse into his brain, which made his right arm move involuntarily, all by itself. Thus, by remote control, one human brain could control the movement of another.

This breakthrough opens up a number of possibilities, such as exchanging nonverbal messages via the Internet. You might one day be able to send the experience of dancing the tango, bungee jumping, or skydiving to the people on your e-mail list. Not just physical activity, but emotions and feelings as well might be sent via brain-to-brain communication.

Nicolelis envisions a day when people all over the world could participate in social networks not via keyboards, but directly through their minds. Instead of just sending e-mails, people on the brain-net would be able to telepathically exchange thoughts, emotions, and ideas in real time. Today a phone call conveys only the information of the conversation and the tone of voice, nothing more. Video conferencing is a bit better, since you can read the body language of the person on the other end. But a brain-net would be the ultimate in communications, making it possible to share the totality of mental information in a conversation, including emotions, nuances, and reservations. Minds would be able to share their most intimate thoughts and feelings. (pages 87-88)

Kaku gives more details of what would be needed to create a brain-net:

Creating a brain-net that can transmit such information would have to be done in stages. The first step would be inserting nanoprobes into important parts of the brain, such as the left temporal lobe, which governs speech, and the occipital lobe, which governs vision. Then computers would analyze these signals and decode them. This information in turn could be sent over the Internet by fiber-optic cables.

More difficult would be to insert these signals into another person’s brain, where they could be processed by the receiver. So far, progress in this area has focused only on the hippocampus, but in the future it should be possible to insert messages directly into other parts of the brain corresponding to our sense of hearing, light, touch, etc. So there is plenty of work to be done as scientists try to map the cortices of the brain involved in these senses. Once these cortices have been mapped… it should be possible to insert words, thoughts, memories, and experiences into another brain. (page 89)

Dr. Nicolelis’ next goal is the Walk Again Project. They are creating a complete exoskeleton that can be controlled by the mind. Nicolelis calls it a “wearable robot.” The aim is to allow the paralyzed to walk just by thinking. There are several challenges to overcome:

First, a new generation of microchips must be created that can be placed in the brain safely and reliably for years at a time. Second, wireless sensors must be created so the exoskeleton can roam freely. The signals from the brain would be received wirelessly by a computer the size of a cell phone that would probably be attached to your belt. Third, new advances must be made in deciphering and interpreting signals from the brain via computers. For the monkeys, a few hundred neurons were necessary to control the mechanical arms. For a human, you need, at minimum, several thousand neurons to control an arm or leg. And fourth, a power supply must be found that is portable and powerful enough to energize the entire exoskeleton. (page 92)

 

MEMORIES AND THOUGHTS

One interesting possibility is that long-term memory evolved in humans because it was useful for simulating and predicting future scenarios.

Indeed, brain scans done by scientists at Washington University in St. Louis indicate that areas used to recall memories are the same as those involved in simulating the future. In particular, the link between the dorsolateral prefrontal cortex and the hippocampus lights up when a person is engaged in planning for the future and remembering the past. In some sense, the brain is trying to ‘recall the future,’ drawing upon memories of the past in order to determine how something will evolve into the future. This may also explain the curious fact that people who suffer from amnesia… are often unable to visualize what they will be doing in the future or even the very next day. (page 113)

Some claim that Alzheimer’s disease may be the disease of the century. As of Kaku’s writing, there were 5.3 million Americans with Alzheimer’s, and that number is expected to quadruple by 2050. Five percent of people aged sixty-five to seventy-four have it, but more than 50 percent of those over eighty-five have it, even if they have no obvious risk factors.

One possible way to try to combat Alzheimer’s is to create antibodies or a vaccine that might specifically target misshapen protein molecules associated with the disease. Another approach might be to create an artificial hippocampus. Yet another approach is to see if specific genes can be found that improve memory. Experiments on mice and fruit flies have been underway.

If the genetic fix works, it could be administered by a simple shot in the arm. If it doesn’t work, another possible approach is to insert the proper proteins into the body. Instead of a shot, it would be a pill. But scientists are still trying to understand the process of memory formation.

Eventually, writes Kaku, it will be possible to record the totality of stimulation entering into a brain. In this scenario, the Internet may become a giant library not only for the details of human lives, but also for the actual consciousness of various individuals. If you want to see how your favorite hero or historical figure felt as they confronted the major crises of their lives, you’ll be able to do so. Or you could share the memories and thoughts of a Nobel Prize-winning scientist, perhaps gleaning clues about how great discoveries are made.

 

ENHANCING OUR INTELLIGENCE

What made Einstein Einstein? It’s very difficult to say, of course. Partly, it may be that he was the right person at the right time. Also, it wasn’t just raw intelligence, but perhaps more a powerful imagination and an ability to stick with problems for a very long time. Kaku:

The point here is that genius is perhaps a combination of being born with certain mental abilities and also the determination and drive to achieve great things. The essence of Einstein’s genius was probably his extraordinary ability to simulate the future through thought experiments, creating new physical principles via pictures. As Einstein himself once said, ‘The true sign of intelligence is not knowledge, but imagination.’ And to Einstein, imagination meant shattering the boundaries of the known and entering the domain of the unknown. (page 133)

The brain remains “plastic” even into adult life. People can always learn new skills. Kaku notes that the Canadian psychologist Dr. Donald Hebb made an important discovery about the brain:

the more we exercise certain skills, the more certain pathways in our brains become reinforced, so the task becomes easier. Unlike a digital computer, which is just as dumb today as it was yesterday, the brain is a learning machine with the ability to rewire its neural pathways every time it learns something. This is a fundamental difference between the digital computer and the brain. (page 134)
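A toy sketch of Hebb’s idea (a hypothetical illustration, not from the book): strengthen a connection whenever the neurons on both sides of it are active together, so the practiced pathway gets easier to activate next time.

    import numpy as np

    def hebbian_update(w, pre, post, lr=0.1):
        # "Neurons that fire together wire together": strengthen weights in
        # proportion to correlated pre- and post-synaptic activity.
        return w + lr * np.outer(post, pre)

    w = np.zeros((3, 4))                    # 4 input neurons -> 3 output neurons
    pre = np.array([1.0, 0.0, 1.0, 0.0])    # a repeatedly practiced input pattern
    post = np.array([1.0, 1.0, 0.0])        # the response it evokes

    for _ in range(5):                      # "exercise the skill" five times
        w = hebbian_update(w, pre, post)

    print(w)   # the practiced pathways have grown; unused ones remain zero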

Scientists also believe that the ability to delay gratification and the ability to focus attention may be more important than IQ for success in life.

Furthermore, traditional IQ tests only measure “convergent” intelligence related to the left brain and not “divergent” intelligence related to the right brain. Kaku quotes Dr. Ulrich Kraft:

‘The left hemisphere is responsible for convergent thinking and the right hemisphere for divergent thinking. The left side examines details and processes them logically and analytically but lacks a sense of overriding, abstract connections. The right side is more imaginative and intuitive and tends to work holistically, integrating pieces of an informational puzzle into a whole.’ (page 138)

Kaku suggests that a better test of intelligence might measure a person’s ability to imagine different scenarios related to a specific future challenge.

Another avenue of intelligence research is genes. We are 98.5 percent identical genetically to chimpanzees. But we live twice as long and our mental abilities have exploded in the past six million years. Scientists have even isolated just a handful of genes that may be responsible for our intelligence. This is intriguing, to say the least.

In addition to having a larger cerebral cortex, our brains have many folds in them, vastly increasing their surface area. (The brain of Carl Friedrich Gauss was found to be especially folded and wrinkled.)

Scientists have also focused on the ASPM gene. It has mutated fifteen times in the last five or six million years. Kaku:

Because these mutations coincide with periods of rapid growth in intellect, it is tantalizing to speculate that ASPM is among the handful of genes responsible for our increased intelligence. If this is true, then perhaps we can determine whether these genes are still active today, and whether they will continue to shape human evolution in the future. (page 154)

Scientists have also learned that nature takes numerous shortcuts in creating the brain. Many neurons are connected randomly, so a detailed blueprint isn’t needed. Neurons organize themselves in a baby’s brain in reaction to various specific experiences. Also, nature uses modules that repeat over and over again.

It is possible that we will be able to boost our intelligence in the future, which will increase the wealth of society (probably significantly). Kaku:

It may be possible in the coming decades to use a combination of gene therapy, drugs, and magnetic devices to increase our intelligence. (page 162)

…raising our intelligence may help speed up technological innovation. Increased intelligence would mean a greater ability to simulate the future, which would be invaluable in making scientific discoveries. Often, science stagnates in certain areas because of a lack of fresh new ideas to stimulate new avenues of research. Having an ability to simulate different possible futures would vastly increase the rate of scientific breakthroughs.

These scientific discoveries, in turn, could generate new industries, which would enrich all of society, creating new markets, new jobs, and new opportunities. History is full of technological breakthroughs creating entirely new industries that benefited not just the few, but all of society (think of the transistor and the laser, which today form the foundation of the world economy). (page 164)

 

DREAMS

Kaku explains that the brain, as a neural network, may need to dream in order to function well:

The brain, as we have seen, is not a digital computer, but rather a neural network of some sort that constantly rewires itself after learning new tasks. Scientists who work with neural networks noticed something interesting, though. Often these systems would become saturated after learning too much, and instead of processing more information they would enter a “dream” state, whereby random memories would sometimes drift and join together as the neural networks tried to digest all the new material. Dreams, then, might reflect “house cleaning,” in which the brain tries to organize its memories in a more coherent way. (If this is true, then possibly all neural networks, including all organisms that can learn, might enter a dream state in order to sort out their memories. So dreams probably serve a purpose. Some scientists have speculated that this might imply that robots that learn from experience might also eventually dream as well.)

Neurological studies seem to back up this conclusion. Studies have shown that retaining memories can be improved by getting sufficient sleep between the time of activity and a test. Neuroimaging shows that the areas of the brain that are activated during sleep are the same as those involved in learning a new task. Dreaming is perhaps useful in consolidating this new information. (page 172)

In 1977, Dr. Allan Hobson and Dr. Robert McCarley made history–seriously challenging Freud’s theory of dreams–by proposing the “activation synthesis theory” of dreams:

The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert. As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state.

As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves. These waves travel up the brain stem into the visual cortex, stimulating it to create dreams. Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams. (pages 174-175)

 

ALTERED STATE OF CONSCIOUSNESS

There seem to be certain parts of the brain that are associated with religious experiences and also with spirituality. Dr. Mario Beauregard of the University of Montreal commented:

If you are an atheist and you live a certain kind of experience, you will relate it to the magnificence of the universe. If you are a Christian, you will associate it with God. Who knows. Perhaps they are the same thing.

Kaku explains how human consciousness involves delicate checks and balances similar to the competing points of view that a good CEO considers:

We have proposed that a key function of human consciousness is to simulate the future, but this is not a trivial task. The brain accomplishes it by having these feedback loops check and balance one another. For example, a skillful CEO at a board meeting tries to draw out the disagreement among staff members and to sharpen competing points of view in order to sift through the various arguments and then make a final decision. In the same way, various regions of the brain make diverging assessments of the future, which are given to the dorsolateral prefrontal cortex, the CEO of the brain. These competing assessments are then evaluated and weighted until a balanced final decision is made. (page 205)

The most common mental disorder is depression, afflicting twenty million people in the United States. One way scientists are trying to cure depression is deep brain stimulation (DBS)–inserting small probes into the brain and causing an electrical shock. Kaku:

In the past decade, DBS has been used on forty thousand patients for motor-related diseases, such as Parkinson’s and epilepsy, which cause uncontrolled movements of the body. Between 60 and 100 percent of the patients report significant improvement in controlling their shaking hands. More than 250 hospitals in the United States now perform DBS treatment. (page 208)

Dr. Helen Mayberg and colleagues at Washington University School of Medicine have discovered an important clue to depression:

Using brain scans, they identified an area of the brain, called Brodmann area 25 (also called the subcallosal cingulate region), in the cerebral cortex that is consistently hyperactive in depressed individuals for whom all other forms of treatment have been unsuccessful.

…Dr. Mayberg had the idea of applying DBS directly to Brodmann area 25… her team took twelve patients who were clinically depressed and had shown no improvement after exhaustive use of drugs, psychotherapy, and electroshock therapy.

They found that eight of these chronically depressed individuals showed immediate progress. Their success was so astonishing, in fact, that other groups raced to duplicate these results and apply DBS to other mental disorders…

Dr. Mayberg says, ‘Depression 1.0 was psychotherapy… Depression 2.0 was the idea that it’s a chemical imbalance. This is Depression 3.0. What has captured everyone’s imagination is that, by dissecting a complex behavior disorder into its component systems, you have a new way of thinking about it.’

Although the success of DBS in treating depressed individuals is remarkable, much more research needs to be done…

 

THE ARTIFICIAL MIND AND SILICON CONSCIOUSNESS

Kaku introduces the potential challenge of handling artificial intelligence as it evolves:

Given the fact that computer power has been doubling every two years for the past fifty years under Moore’s law, some say it is only a matter of time before machines eventually acquire self-awareness that rivals human intelligence. No one knows when this will happen, but humanity should be prepared for the moment when machine consciousness leaves the laboratory and enters the real world. How we deal with robot consciousness could decide the future of the human race. (page 216)

Kaku observes that AI has gone through three cycles of boom and bust. In the 1950s, machines were built that could play checkers and solve algebra problems. Robot arms could recognize and pick up blocks. In 1965, Dr. Herbert Simon, one of the founders of AI, made a prediction:

Machines will be capable, within 20 years, of doing any work a man can do.

In 1967, another founder of AI, Dr. Marvin Minsky, remarked:

…within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved.

But in the 1970s, not much progress in AI had been made. In 1974, both the U.S. and British governments significantly cut back their funding for AI. This was the beginning of the first AI winter.

But as computer power steadily increased in the 1980s, a new gold rush occurred in AI, fueled mainly by Pentagon planners hoping to put robot soldiers on the battlefield. Funding for AI hit a billion dollars by 1985, with hundreds of millions of dollars spent on projects like the Smart Truck, which was supposed to be an intelligent, autonomous truck that could enter enemy lines, do reconnaissance by itself, perform missions (such as rescuing prisoners), and then return to friendly territory. Unfortunately, the only thing that the Smart Truck did was get lost. The visible failures of these costly projects created yet another AI winter in the 1990s. (page 217)

Kaku continues:

But now, with the relentless march of computer power, a new AI renaissance has begun, and slow but substantial progress has been made. In 1997, IBM’s Deep Blue computer beat world chess champion, Garry Kasparov. In 2005, a robot car from Stanford won the DARPA Grand Challenge for a driverless car. Milestones continue to be reached.

This question remains: Is the third try a charm?

Scientists now realize that they vastly underestimated the problem, because most human thought is actually subconscious. The conscious part of our thoughts, in fact, represents only the tiniest portion of our computations.

Dr. Steve Pinker says, ‘I would pay a lot for a robot that would put away the dishes or run simple errands, but I can’t, because all of the little problems that you’d need to solve to build a robot to do that, like recognizing objects, reasoning about the world, and controlling hands and feet, are unsolved engineering problems.’ (pages 217-218)

Kaku asked Dr. Minsky when he thought machines would equal and then surpass human intelligence. Minsky replied that he’s confident it will happen, but that he doesn’t make predictions about specific dates any more.

If you remove a single transistor from a Pentium chip, the computer will immediately crash, writes Kaku. But the human brain can perform quite well even with half of it missing:

This is because the brain is not a digital computer at all, but a highly sophisticated neural network of some sort. Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task. The brain has no programming, no operating system, no Windows, no central processor. Instead, its neural networks are massively parallel, with one hundred billion neurons firing at the same time in order to accomplish a single goal: to learn.

In light of this, AI researchers are beginning to reexamine the ‘top-down approach’ they have followed for the past fifty years (e.g., putting all the rules of common sense on a CD). Now AI researchers are giving the ‘bottom-up approach’ a second look. This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones. Neural networks must learn the hard way, by bumping into things and making mistakes. (page 220)

Dr. Rodney Brooks, former director of the MIT Artificial Intelligence Laboratory, introduced a totally new approach to AI. Why not build small, insectlike robots that learn how to walk by trial and error, just as nature learns? Brooks told Kaku that he used to marvel at the mosquito, with a microscopic brain of a few neurons, which can, nevertheless, maneuver in space better than any robot airplane. Brooks built a series of tiny robots called ‘insectoids’ or ‘bugbots,’ which learn by bumping into things. Kaku comments:

At first, it may seem that this requires a lot of programming. The irony, however, is that neural networks require no programming at all. The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision. So programming is nothing; changing the network is everything. (page 221)

The Mars Curiosity rover is one result of this bottom-up approach.

Scientists have realized that emotions are central to human cognition. Humans usually need some emotional input, in addition to logic and reason, in order to make good decisions. Robots are now being programmed to recognize various human emotions and also to exhibit emotions themselves. Robots also need a sense of danger and some feeling of pain in order to avoid injuring themselves. Eventually, as robots become ever more conscious, there will be many ethical questions to answer.

Biologists used to debate the question, “What is life?” But, writes Kaku, the physicist and Nobel laureate Francis Crick has observed that the question is not well-defined now that we are advancing in our understanding of DNA. There are many layers and complexities to the question, “What is life?” Similarly, there are likely to be many layers and complexities to the question of what constitutes “emotion” or “consciousness.”

Moreover, as Rodney Brooks argues, we humans are machines. Eventually the robot machines we are building will be just as alive as we are. Kaku summarizes a conversation he had with Brooks:

This evolution in human perspective started with Nicolaus Copernicus when he realized that the Earth is not the center of the universe, but rather goes around the sun. It continued with Darwin, who showed that we were similar to the animals in our evolution. And it will continue into the future… when we realize that we are machines, except that we are made of wetware and not hardware. (page 248)

Kaku then quotes Brooks directly:

We don’t like to give up our specialness, so you know, having the idea that robots could really have emotions, or that robots could be living creatures–I think is going to be hard for us to accept. But we’re going to come to accept it over the next fifty years.

Brooks also thinks we will successfully create robots that are safe for humans:

The robots are coming, but we don’t have to worry too much about that. It’s going to be a lot of fun.

Furthermore, Brooks argues that we are likely to merge with robots. After all, we’ve already done this to an extent. Over twenty thousand people have cochlear implants, giving them the ability to hear.

Similarly, at the University of Southern California and elsewhere, it is possible to take a patient who is blind and implant an artificial retina. One method places a mini video camera in eyeglasses, which converts an image into digital signals. These are sent wirelessly to a chip placed in the person’s retina. The chip activates the retina’s nerves, which then send messages down the optic nerve to the occipital lobe of the brain. In this way, a person who is totally blind can see a rough image of familiar objects. Another design has a light-sensitive chip placed on the retina itself, which then sends signals directly to the optic nerve. This design does not need an external camera. (page 249)

This means, says Kaku, that eventually we’ll be able to enhance our ordinary senses and abilities. We’ll merge with our robot creations.

 

REVERSE ENGINEERING THE BRAIN

Kaku highlights three approaches to the brain:

Because the brain is so complex, there are at least three distinct ways in which it can be taken apart, neuron by neuron. The first is to simulate the brain electronically with supercomputers, which is the approach being taken by the Europeans. The second is to map out the neural pathways of living brains, as in BRAIN [Brain Research Through Advancing Innovative Neurotechnologies Initiative]. (This task, in turn, can be further subdivided, depending on how these neurons are analyzed – either anatomically, neuron by neuron, or by function and activity.) And third, one can decipher the genes that control the development of the brain, which is an approach pioneered by billionaire Paul Allen of Microsoft. (page 253)

Dr. Henry Markram is a central figure in the Human Brain Project. Kaku quotes Dr. Markram:

To build this–the supercomputers, the software, the research–we need around one billion dollars. This is not expensive when one considers that the global burden of brain disease will exceed twenty percent of the world gross domestic product very soon.

Dr. Markram also said:

It’s essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.

How does the human genome go from twenty-three thousand genes to one hundred billion neurons?

The answer, Dr. Markram believes, is that nature uses shortcuts. The key to his approach is that certain modules of neurons are repeated over and over again once Mother Nature finds a good template. If you look at microscopic slices of the brain, at first you see nothing but a random tangle of neurons. But upon closer examination, patterns of modules that are repeated over and over appear.

(Modules, in fact, are one reason why it is possible to assemble large skyscrapers so rapidly. Once a single module is designed, it is possible to repeat it endlessly on the assembly line. Then you can rapidly stack them on top of one another to create the skyscraper. Once the paperwork is all signed, an apartment building can be assembled using modules in a few months.)

The key to Dr. Markram’s Blue Brain project is the “neocortical column,” a module that is repeated over and over in the brain. In humans, each column is about two millimeters tall, with a diameter of half a millimeter, and contains sixty thousand neurons. (As a point of comparison, rat neural modules contain about ten thousand neurons each.) It took ten years, from 1995 to 2005, for Dr. Markram to map the neurons in such a column and to figure out how it worked. Once that was deciphered, he then went to IBM to create massive iterations of these columns. (page 257)
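A back-of-the-envelope calculation (mine, not the book’s) gives a sense of the scale this modularity buys: treating the brain’s roughly one hundred billion neurons as if they were all packed into sixty-thousand-neuron columns implies on the order of a couple of million columns.

    # Rough arithmetic implied by the passage (illustrative only).
    neurons_total = 100e9          # ~100 billion neurons
    neurons_per_column = 60e3      # ~60,000 neurons per neocortical column

    print(f"{neurons_total / neurons_per_column:,.0f} columns")  # ~1,666,667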

Kaku quotes Dr. Markram again:

…I think, quite honestly, that if the planet understood how the brain functions, we would resolve conflicts everywhere. Because people would understand how trivial and how deterministic and how controlled conflicts and reactions and misunderstandings are.

The slice-and-dice approach:

The anatomical approach is to take apart the cells of an animal brain, neuron by neuron, using the “slice-and-dice” method. In this way, the full complexity of the environment, the body, and memories are already encoded in the model. Instead of approximating a human brain by assembling a huge number of transistors, these scientists want to identify each neuron of the brain. After that, perhaps each neuron can be simulated by a collection of transistors so that you’d have an exact replica of the human brain, complete with memory, personality, and connection to the senses. Once someone’s brain is fully reverse engineered in this way, you should be able to have an informative conversation with that person, complete with memories and a personality. (page 259)

There is a parallel project called the Human Connectome Project.

Most likely, this effort will be folded into the BRAIN project, which will vastly accelerate this work. The goal is to produce a neuronal map of the human brain’s pathways that will elucidate brain disorders such as autism and schizophrenia. (pages 260-261)

Kaku notes that one day automated microscopes will continuously take the photographs, while AI machines continuously analyze them.

The third approach:

Finally, there is a third approach to map the brain. Instead of analyzing the brain by using computer simulations or by identifying all the neural pathways, yet another approach was taken with a generous grant of $100 million from Microsoft billionaire Paul Allen. The goal was to construct a map or atlas of the mouse brain, with the emphasis on identifying the genes responsible for creating the brain.

…A follow-up project, the Allen Human Brain Atlas, was announced… with the hope of creating an anatomically and genetically complete 3-D map of the human brain. In 2011, the Allen Institute announced that it had mapped the biochemistry of two human brains, finding one thousand anatomical sites with one hundred million data points detailing how genes are expressed in the underlying biochemistry. The data confirmed that 82 percent of our genes are expressed in the brain. (pages 261-262)

Kaku says the Human Genome Project was very successful in sequencing all the genes in the human genome. But it’s just the first step in a long journey to understand how these genes work. Similarly, once scientists have reverse engineered the brain, that will likely be only the first step in understanding how the brain works.

Once the brain is reverse-engineered, this will help scientists understand and cure various diseases. Kaku observes that, with human DNA, a single misspelling out of three billion base pairs can cause uncontrolled flailing of the limbs and convulsions, as in Huntington’s disease. Similarly, perhaps just a few disrupted connections in the brain can cause certain illnesses.

Successfully reverse engineering the brain will also help with AI research. For instance, writes Kaku, humans can recognize a familiar face from different angles in 0.1 seconds, while a computer has trouble with this. There’s also the question of how long-term memories are stored.

Finally, if human consciousness can be transferred to a computer, does that mean that immortality is possible?

 

THE FUTURE

Kaku talked with Dr. Ray Kurzweil, who told him it’s important for an inventor to anticipate changes. Kurzweil has made a number of predictions, at least some of which have been roughly accurate. Kurzweil predicts that the “singularity” will occur around the year 2045. Machines will have reached the point where they not only have surpassed humans in intelligence; machines also will have created next-generation robots even smarter than themselves.

Kurzweil holds that this process of self-improvement can be repeated indefinitely, leading to an explosion–thus the term “singularity”–of ever-smarter and ever more capable robots. Moreover, humans will have merged with their robot creations and will, at some point, become immortal.

Robots of ever-increasing intelligence and ability will require more power. Of course, there will be breakthroughs in energy technology, likely including nuclear fusion and perhaps even antimatter and/or black holes. So the cost to produce prodigious amounts of energy will keep coming down. At the same time, because Moore’s law cannot continue forever, super robots eventually will need ever-increasing amounts of energy. At some point, this will probably require traveling–or sending nanobot probes–to numerous other stars or to other areas where the energy of antimatter and/or of black holes can be harnessed.

Kaku notes that most people in AI agree that a “singularity” will occur at some point. But it’s extremely difficult to predict the exact timing. It could happen sooner than Kurzweil predicts or it could end up taking much longer.

Kurzweil wants to bring his father back to life. Eventually something like this will be possible. Kaku:

…I once asked Dr. Robert Lanza of the company Advanced Cell Technology how he was able to bring a long-dead creature “back to life,” making history in the process. He told me that the San Diego Zoo asked him to create a clone of a banteng, an oxlike creature that had died out about twenty-five years earlier. The hard part was extracting a usable cell for the purpose of cloning. However, he was successful, and then he FedExed the cell to a farm, where it was implanted into a female cow, which then gave birth to this animal. Although no primate has ever been cloned, let alone a human, Lanza feels it’s a technical problem, and that it’s only a matter of time before someone clones a human. (page 273)

The hard part of cloning a human would be bringing back their memories and personality, says Kaku. One possibility would be creating a large data file containing all known information about a person’s habits and life. Such a file could be remarkably accurate. Even for people dead today, scores of questions could be asked to friends, relatives, and associates. This could be turned into hundreds of numbers, each representing a different trait that could be ranked from 0 to 10, writes Kaku.

When technology has advanced enough, it will become possible–perhaps via the Connectome Project–to recreate a person’s brain, neuron for neuron. If it becomes possible for you to have your connectome completed, then your doctor–or robodoc–would have all your neural connections on a hard drive. Then, says Kaku, at some point, you could be brought back to life, using either a clone or a network of digital transistors (inside an exoskeleton or surrogate of some sort).

Dr. Hans Moravec, former director of the Artificial Intelligence Laboratory at Carnegie Mellon University, has pioneered an intriguing idea: transferring your mind into an immortal robotic body while you’re still alive. Kaku explains what Moravec told him:

First, you lie on a stretcher, next to a robot lacking a brain. Next, a robotic surgeon extracts a few neurons from your brain, and then duplicates these neurons with some transistors located in the robot. Wires connect your brain to the transistors in the robot’s empty head. The neurons are then thrown away and replaced by the transistor circuit. Since your brain remains connected to these transistors via wires, it functions normally and you are fully conscious during this process. Then the super surgeon removes more and more neurons from your brain, each time duplicating these neurons with transistors in the robot. Midway through the operation, half your brain is empty; the other half is connected by wires to a large collection of transistors inside the robot’s head. Eventually all the neurons in your brain have been removed, leaving a robot brain that is an exact duplicate of your original brain, neuron for neuron. (page 280)

When you wake up, you are likely to have a few superhuman powers, perhaps including a form of immortality. This technology is likely far in the future, of course.

Kaku then observes that there is another possible path to immortality that does not involve reverse engineering the brain. Instead, super smart nanobots could periodically repair your cells. Kaku:

…Basically, aging is the buildup of errors, at the genetic and cellular level. As cells get older, errors begin to build up in their DNA and cellular debris also starts to accumulate, which makes the cells sluggish. As cells begin slowly to malfunction, skin begins to sag, bones become frail, hair falls out, and our immune system deteriorates. Eventually, we die.

But cells also have error-correcting mechanisms. Over time, however, even these error-correcting mechanisms begin to fail, and aging accelerates. The goal, therefore, is to strengthen natural cell-repair mechanisms, which can be done via gene therapy and the creation of new enzymes. But there is also another way: using “nanobot” assemblers.

One of the linchpins of this futuristic technology is something called the “nanobot,” or an atomic machine, which patrols the bloodstream, zapping cancer cells, repairing the damage from the aging process, and keeping us forever young and healthy. Nature has already created some nanobots in the form of immune cells that patrol the body in the blood. But these immune cells attack viruses and foreign bodies, not the aging process.

Immortality is within reach if these nanobots can reverse the ravages of the aging process at the molecular and cellular level. In this vision, nanobots are like immune cells, tiny police patrolling your bloodstream. They attack any cancer cells, neutralize viruses, and clean out the debris and mutations. Then the possibility of immortality would be within reach using our own bodies, not some robot or clone. (pages 281-282)

Kaku writes that his personal philosophy is simple: if something is possible based on the laws of physics, then building it becomes an engineering and economics problem. A nanobot is an atomic machine with arms and clippers that grabs molecules, cuts them at specific points, and then splices them back together. Such a nanobot would be able to create almost any known molecule. It may also be able to self-reproduce.

The late Richard Smalley, a Nobel Laureate in chemistry, argued that quantum forces would prevent nanobots from being able to function. Eric Drexler, a founder of nanotechnology, pointed out that ribosomes in our own bodies cut and splice DNA molecules at specific points, enabling the creation of new DNA strands. Eventually Drexler admitted that quantum forces do get in the way sometimes, while Smalley acknowledged that if ribosomes can cut and splice molecules, perhaps there are other ways, too.

Ray Kurzweil is convinced that nanobots will shape society itself. Kaku quotes Kurzweil:

…I see it, ultimately, as an awakening of the whole universe. I think the whole universe right now is basically made up of dumb matter and energy and I think it will wake up. But if it becomes transformed into this sublimely intelligent matter and energy, I hope to be a part of that.

 

THE MIND AS PURE ENERGY

Kaku writes that it’s well within the laws of physics for the mind to be in the form of pure energy, able to explore the cosmos. Isaac Asimov said his favorite science-fiction short story was “The Last Question.” In this story, humans have placed their physical bodies in pods, while their minds roam as pure energy. But they cannot keep the universe itself from dying in the Big Freeze. So they create a supercomputer to figure out if the Big Freeze can be avoided. The supercomputer responds that there is not enough data. Eons later, when stars are darkening, the supercomputer finds a solution: It takes all the dead stars and combines them, producing an explosion. The supercomputer says, “Let there be light!”

And there was light. Humanity, with its supercomputer, had become capable of creating a new universe.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Physics of the Future


(Image: Zen Buddha Silence by Marilyn Barbone.)

August 13, 2017

Science and technology are moving forward faster than ever before:

…this is just the beginning. Science is not static. Science is exploding exponentially all around us. (page 12)

Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future. His book, Physics of the Future (Anchor Books, 2012), is a result.

Kaku explains why his predictions may carry more weight than those of other futurists:

  • His book is based on interviews with more than 300 top scientists.
  • Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
  • Prototypes of all the technologies mentioned in the book already exist.
  • As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.

The ancients had little understanding of the forces of nature, so they invented the gods of mythology. Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.

We are on the verge of becoming a planetary, or Type I, civilization. This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.

But there are still some things, like face-to-face meetings, that appear not to have changed much. Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years. People still like to see tourist attractions in person. People still like live performances. Many people still prefer taking courses in person rather than online. (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)

Here are the chapters from Kaku’s book that I cover:

  • Future of the Computer
  • Future of Artificial Intelligence
  • Future of Medicine
  • Nanotechnology
  • Future of Energy
  • Future of Space Travel
  • Future of Humanity

 

FUTURE OF THE COMPUTER

Kaku quotes Helen Keller:

No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.

According to Moore’s law, computer power doubles every eighteen months. Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly. Also, exponential growth is often not noticeable for the first few decades. But eventually things can change dramatically.
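
To make the arithmetic concrete, here is a quick sketch in Python of what doubling every eighteen months implies. The starting value is arbitrary; only the ratios matter:

```python
# Computing power doubling every eighteen months (Moore's law).

def growth_factor(years, doubling_period=1.5):
    """How many times more powerful a chip is after `years`."""
    return 2 ** (years / doubling_period)

for years in (3, 10, 20, 30):
    print(f"After {years:2d} years: {growth_factor(years):,.0f}x more powerful")

# After  3 years: 4x more powerful
# After 10 years: 102x more powerful
# After 20 years: 10,321x more powerful
# After 30 years: 1,048,576x more powerful
```

Three years in, a chip is only 4 times more powerful; thirty years in, it is a million times more powerful. That is why exponential growth goes unnoticed at first and then seems to arrive all at once.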

Even the near future may be quite different, writes Kaku:

…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control. They will, to a degree, recognize the human voice and face and converse in a formal language. They will be able to create entire virtual worlds that we can only dream of today. Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper. Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders. (pages 25-26)

In order to discuss the future of science and technology, Kaku has divided each chapter into three parts: the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).

In the near future, we will be able to surf the internet via special glasses or contact lenses, navigate with a handheld device or just by moving our hands, and connect to our office via the lens. It’s likely that when we encounter a person, we will see their biography on our lens.

Also, we will be able to travel by driverless cars. This will allow us to use commute time to access the internet via our lenses or to do other work. Kaku notes that the word car accident may disappear from the language once driverless cars become advanced and ubiquitous enough. Instead of nearly 40,000 people dying in car accidents in the United States each year, there may be zero deaths from car accidents. Moreover, most traffic jams will be avoided when driverless cars can work together to keep traffic flowing freely.

At home, you will have a room with screens on every wall. If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.

You won’t need to carry a computer with you. Computers will be embedded nearly everywhere. You’ll have constant access to computers and the internet via your glasses or contact lenses.

As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person. This includes the moon, Mars, and other currently exotic locations.

Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland. Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill. Kaku found that he could run, hide, sprint, or lie down. Everything he saw was very realistic. In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.

Your doctor – likely a human face appearing on your wall – will have all your genetic information. Also, you’ll be able to pass a tiny probe over your body and diagnose any illness. (MRI machines will be as small as a phone.) As well, tiny chips or sensors will be embedded throughout your environment. Most forms of cancer will be identified and destroyed before a tumor ever forms. Kaku says the word tumor will disappear from the human language.

Furthermore, we’ll probably be able to slow down and even reverse the aging process. We’ll be able to regrow organs based on computerized access to our genes. We’ll likely be able to reengineer our genes.

In the medium term (2030 to 2070):

  • Moore’s law may reach an end. Computing power will still continue to grow exponentially, however, just not as fast as before.
  • When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail. You’ll be able to download informative lectures about anything you see. In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
  • If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers. You’ll be able to see through hills and other obstacles.
  • If you’re a surgeon, you’ll see in great detail everything inside the body. You’ll have access to all medical records, etc.
  • Universal translators will allow any two people to converse.
  • True 3-D images will surround us when we watch a movie. 3-D holograms will become a reality.

In the far future (2070 to 2100):

We will be able to control computers directly with our minds.

John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain. Through trial and error, the paralyzed person learns to move the cursor on a computer screen. Eventually they can read and write e-mails, and play computer games. Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.

Similarly, paralyzed people will be able to control mechanical arms and legs from their brains. Experiments with monkeys have already achieved this.

Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain. MRI machines themselves will go from being several tons to being smaller than phones and as thin as a dime.

Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy. In this way, we’ll be able to control objects just by thinking. Astronauts on earth will be able to control superhuman robotic bodies on the moon.

 

FUTURE OF ARTIFICIAL INTELLIGENCE

AI pioneer Herbert Simon, in 1965, said:

Machines will be capable, in twenty years, of doing any work a man can do.

Unfortunately not much progress was made. In 1974, the first AI winter began as the U.S. and British governments cut off funding.

Progress was again made in the 1980s. But because the field was overhyped, another backlash occurred and a second AI winter began. Many people left the field as funding disappeared.

The human brain is a type of neural network. Neural networks follow Hebb’s rule: every time a correct decision is made, those neural pathways are reinforced. Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.
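
As a rough illustration of Hebb’s rule – a toy sketch, not a model of any actual brain – here is a tiny network whose connections strengthen whenever two neurons fire together:

```python
import numpy as np

# Hebb's rule in its simplest form: the change in a connection's strength is
# proportional to the product of the activities it connects, so pathways that
# fire together repeatedly end up strongly reinforced.

n = 4
weights = np.zeros((n, n))                 # synaptic strengths among n neurons
stimulus = np.array([1.0, 0.0, 1.0, 0.0])  # neurons 0 and 2 fire together

rate = 0.1
for _ in range(20):                        # repeated experience
    pre, post = stimulus, stimulus         # co-active neurons
    weights += rate * np.outer(post, pre)  # reinforce the active pathways

print(np.round(weights, 1))
# Only the connections among the co-active neurons (0 and 2) reach 2.0; every
# other pathway stays at 0.  (Real models add normalization so the weights
# don't grow without bound.)
```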

Furthermore, the neural network of a human brain is a massively parallel processor, which makes it different from most computers. Thus, even though digital computers send signals at the speed of light while neuron signals travel at only about 200 miles per hour, the human brain is still faster on many tasks due to its massive parallelism.

Finally, whereas digital transistors are either on (1) or off (0), neurons can also transmit continuous signals (anywhere between 0 and 1), not just discrete signals (only 0 and 1).

Interestingly, robots are superfast at the kinds of mental calculations humans find hard. But robots are still not good at visual pattern recognition, movement, and common sense. Robots can see far more detail than humans, but they have trouble making sense of what they see. Also, robots don’t understand many things that we humans know by common sense.

There have been massive projects to try to give robots common sense by brute force – by programming in thousands of commonsense facts. But so far, these projects haven’t worked.

There are two ways to give a robot the ability to learn: top-down and bottom-up. An example of the top-down approach is STAIR (Stanford artificial intelligence robot). Everything is programmed into STAIR from the beginning. For STAIR to understand an image, it must compare the image to all the images already programmed into it.

LAGR (learning applied to ground robots) uses the bottom-up approach. It learns everything from scratch, by bumping into things. LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
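
The flavor of the bottom-up approach can be captured in a few lines. This is a toy sketch with made-up names and numbers, not LAGR’s actual software: the robot starts with no map at all and nudges a grid of beliefs toward whatever its bump sensor reports, so the map sharpens with each pass:

```python
# Bottom-up learning, toy version: refine an occupancy grid by bumping around.

GRID = 5
OBSTACLES = {(1, 2), (3, 3)}                   # the true world, unknown to the robot

belief = [[0.5] * GRID for _ in range(GRID)]   # 0.5 means "no idea yet"

def take_pass(path):
    """One traversal: every visited cell teaches the robot a little more."""
    for r, c in path:
        bumped = 1.0 if (r, c) in OBSTACLES else 0.0
        belief[r][c] = 0.7 * belief[r][c] + 0.3 * bumped   # running update

corridor = [(1, 0), (1, 1), (1, 2), (1, 3)]
take_pass(corridor)
take_pass(corridor)                            # a second pass sharpens the map

for row in belief:
    print([round(p, 2) for p in row])
# Visited free cells drift toward 0.0, the obstacle cell toward 1.0, and
# unvisited cells stay at 0.5 -- the map refines with each pass.
```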

Robots will become ever more helpful in medicine:

For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia. Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar. But the da Vinci robotic system can vastly decrease all these. The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery. Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body. There are 800 hospitals in Europe and North and South America that use this system; 48,000 operations were performed in 2006 alone using this robot. Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.

In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today. In fact, in the future, only rarely will the surgeon slice the skin at all. Noninvasive surgery will become the norm.

Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread. Micromachines smaller than the period at the end of this sentence will do much of the mechanical work. (pages 93-94)

But to make robots intelligent, scientists must learn more about how the human brain works.

The human brain has roughly three levels. The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc. At the next level, there is the monkey brain, or the limbic system, located at the center of our brain. Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.

The third level of the human brain is the front and outer part – the cerebral cortex. This level defines humanity and is responsible for the ability to think logically and rationally.

Scientists still have a way to go in understanding in sufficient detail how the human brain works.

By midcentury, scientists will be able to reverse engineer the brain. In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer. Kaku quotes Fred Hapgood from MIT:

Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.

By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku. However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.

For example, says Kaku, the Human Genome Project is like a dictionary with no definitions. We can spell out each gene in the human body. But we still don’t know what each gene does exactly. Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans. But scientists still can’t fully translate this map into the worm’s behavior.

Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.

When will machines become conscious? Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future. If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they become conscious. On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.

When will robots exceed humans? Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.

What if superintelligent robots can make even smarter copies of themselves? They might thereby gain the ability to evolve exponentially. Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.

The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially. The inventor Ray Kurzweil has become a spokesman for the singularity. But he thinks humans will merge with this digital superintelligence. Kaku quotes Kurzweil:

It’s not going to be an invasion of intelligent machines coming over the horizon. We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.

Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us. The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).

One problem is that the military is the largest funder of AI research. On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).

Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.

One option, invented by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is an army of “bugbots” with minimal programming that would learn from experience. Such an army might turn into a practical way to explore the solar system and beyond. One by-product of Brooks’ idea is the Mars Rover.

Some researchers including Brooks and Marvin Minsky have lamented the fact that AI scientists have often followed too closely the current dominant AI paradigm. AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.

Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics. Thus, they have sought a single, unifying equation underlying all intelligence. But, says Minsky, there is no such thing:

Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness. Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task. He calls this ‘the society of minds’: that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years. (page 123)

Brooks predicts that, by 2100, there will be very intelligent robots. But we will be part robot and part connected with robots.

He sees this progressing in stages. Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions. For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf. These artificial cochlea work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…

Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain. One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons. Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision. These groups, for the first time in history, have been able to restore a degree of sight to the blind… (pages 124-125)

Scientists have also successfully created a robotic hand. One patient, Robin Ekenstam, had his right hand amputated. Scientists have given him a robotic hand with four motors and forty sensors. The doctors connected Ekenstam’s nerves to the chips in the artificial hand. As a result, Ekenstam is able to use the artificial hand as if it were his own hand. He feels sensations in the artificial fingers when he picks stuff up. In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.

Furthermore, the brain is extremely plastic because it is a neural network. So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.

And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities. Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats. Similarly, genetic engineering will become possible. As Brooks commented:

We will no longer find ourselves confined by Darwinian evolution.

Another way people will merge with robots is with surrogates and avatars. For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.

Robot pioneer Hans Moravec has described one way this could happen:

…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot. The operation starts when we lie beside a robot without a brain. A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull. As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot. Not only do we have a robotic body, we have also the benefits of a robot: immortality in superhuman bodies that are perfect in appearance. (pages 130-131)

 

FUTURE OF MEDICINE

Kaku quotes Nobel Laureate James Watson:

No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?

Nobel Laureate David Baltimore:

I don’t really think our bodies are going to have any secrets left within this century. And so, anything that we can manage to think about will probably have a reality.

Kaku mentions biologist Robert Lanza:

Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit. In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before. Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah. There, the fertilized cell was implanted into a female cow. Ten months later he got the news that his latest creation had just been born. On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out. Another day, he could be working on cloning human embryo cells. He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells. (page 138)

Austrian physicist and philosopher Erwin Schrodinger, one of the founders of quantum theory, wrote an influential book, What is Life? He speculated that all life was based on a code of some sort, and that this was encoded on a molecule.

Physicist Francis Crick, inspired by Schrodinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule. In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix. When unraveled, a single strand of DNA stretches about 6 feet long. On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code. By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life. (page 140)

Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form. David Baltimore:

Biology is today an information science.

Kaku writes:

The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule. Atom for atom, we know how to build the molecules of life from scratch. And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.

Welcome to bioinformatics:

…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms. For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA. In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes. (page 143)

You’ll talk to your doctor – likely a software program – on the wall screen. Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form. If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.

If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed. (There are over 91,000 people in the United States waiting for an organ transplant.)

…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells. The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells. (page 144)

Eventually cloning will be possible for humans.

The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep. By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original. (page 150)

Successes in animal studies will be translated to human studies. First, diseases caused by a single mutated gene will be cured. Then diseases caused by multiple mutated genes will be cured.

At some point, there will be “designer children.” Kaku quotes Harvard biologist E. O. Wilson:

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become.

The “smart mouse” gene was isolated in 1999. Mice that have it are better able to navigate mazes and remember things. Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn. This supports Hebb’s rule: learning occurs when certain neural pathways are reinforced.

It will take decades to iron out side effects and unwanted consequences of genetic engineering. For instance, scientists now believe that there is a healthy balance between forgetting and remembering. It’s important to remember key lessons and specific skills. But it’s also important not to remember too much. People need a certain optimism in order to make progress and evolve.

Scientists now know what aging is: Aging is the accumulation of errors at the genetic and cellular level. These errors have various causes. For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku. Errors can also accumulate as ‘junk’ molecular debris.

The buildup of genetic errors is a by-product of the second law of thermodynamics: entropy always increases. However, there’s an important loophole, notes Kaku. Entropy can be reduced in one place as long as it is increased at least as much somewhere else. This means that aging is reversible. Kaku quotes Richard Feynman:

There is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human body will be cured.

Kaku continues:

…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding. His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals. In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…

…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM. By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers. Scientists will be able to scan millions of genomes of two groups of people, the young and the old. By comparing the two groups, one can identify where aging takes place at the genetic level. A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated. (pages 168-169)

Scientists think aging is only 35 percent determined by genes. Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria. This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging. Soon we could live to 150. By 2100, we could live well beyond that.

If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent. This is called calorie restriction. Every organism studied so far exhibits this phenomenon.

…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging. In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time. Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long. (page 170)

Now scientists have shown that caloric restriction also works for primates: less diabetes, less cancer, less heart disease, and better health and longer life.

In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells. SIR2 is activated when it detects that the energy reserves of a cell are low. The SIR2 gene has a counterpart in mice and people, called the SIRT genes, which produce proteins called sirtuins. Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.

Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair. If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine. (page 171)

Kaku reports what William Haseltine, biotech pioneer, told him:

The nature of life is not mortality. It’s immortality. DNA is an immortal molecule. That molecule first appeared perhaps 3.5 billion years ago. That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that. First to extend our lives two- or three-fold. And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely. And I don’t think that will be an unnatural process. (page 173)

Kaku concludes that extending life span in the future will likely result from a combination of activities:

  • growing new organs as they wear out or become diseased, via tissue engineering and stem cells
  • ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
  • using gene therapy to alter genes that may slow down the aging process
  • maintaining a healthy lifestyle (exercise and a good diet)
  • using nanosensors to detect diseases like cancer years before they become a problem

Kaku quotes Richard Dawkins:

I believe that by 2050, we shall be able to read the language [of life]. We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.

Dawkins believes, writes Kaku, that once the missing gene has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor. After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.

 

NANOTECHNOLOGY

Kaku:

For the most part, nanotechnology is still a very young science. But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes. MEMS are tiny machines so small they can easily fit on the tip of a needle. They are created using the same etching technology used in the computer business. Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them. (pages 207-208)

Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that detect the sudden braking of your car. This has already saved thousands of lives.

One day nanomachines may be able to replace surgery entirely. Cutting the skin may become completely obsolete. Nanomachines will also be able to find and kill cancer cells in many cases. These nanomachines can be guided by magnets.

DNA fragments can be embedded on a tiny chip using transistor etching technology. The DNA fragments can bind to specific gene sequences. Then, using a laser, thousands of genes can be read at one time, rather than one by one. Prices for these DNA chips continue to plummet due to Moore’s law.

Small electronic chips will be able to do the work that is now done by an entire laboratory. These chips will be embedded in our bathrooms. Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks. In the future, they may cost pennies and take just a few minutes.

In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite. They won the Nobel Prize for their work. Graphene is a single sheet of carbon, no more than one atom thick. And it can conduct electricity. It’s also the strongest material ever tested. (Kaku notes that an elephant balanced on a pencil – on graphene – would not tear it.)

Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor: one atom thick and ten atoms across. (The smallest transistors currently are about 30 nanometers. Novoselov’s transistors are 30 times smaller.)

The real challenge now is how to connect molecular transistors.

The most ambitious proposal is to use quantum computers, which actually compute on individual atoms. Quantum computers are extremely powerful. The CIA has looked at them for their code-breaking potential.

Quantum computers actually exist. Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.” When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
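
In the simplest textbook picture – a sketch for illustration, not any particular lab’s hardware – a qubit is just a two-component state vector, and a resonant pulse rotates it. A full pulse flips the bit, while a half pulse leaves the atom in a superposition of 0 and 1:

```python
import numpy as np

# A single qubit as a two-component state vector: spin down = |0>, spin up = |1>.

def pulse(theta):
    """Rotation by angle theta about the x-axis (a resonant pulse)."""
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

down = np.array([1.0, 0.0])                # start in |0>

flipped = pulse(np.pi) @ down              # full "pi" pulse: |0> -> |1>
half = pulse(np.pi / 2) @ down             # half pulse: superposition

print(np.round(np.abs(flipped) ** 2, 3))   # [0. 1.]   -- the bit flipped
print(np.round(np.abs(half) ** 2, 3))      # [0.5 0.5] -- both values at once
```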

The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations. (When atoms are “coherent,” they vibrate in phase with one another.) Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.

Scientists are working on programmable matter the size of grains of sand. These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object. In fact, many common consumer products may be replaced by software programs sent over the internet. If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.

In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything. This would be the crowning achievement of engineering, says Kaku. One problem is the sheer number of atoms that would need to be re-arranged. But this could be solved by self-replicating nanobots.

A version of this “replicator” already exists. Mother Nature can take the food we eat and create a baby in nine months. DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku. Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms. (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)

 

FUTURE OF ENERGY

Kaku writes that in this century, we will harness the power of the stars. In the short term, this means solar and hydrogen will replace fossil fuels. In the long term, it means we’ll tap the power of fusion and even solar energy from outer space. Also, cars and trains will be able to float using magnetism. This can drastically reduce our use of energy, since most energy today is used to overcome friction.

Currently, fossil fuels meet about 80 percent of the world’s energy needs. Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.

Electric vehicles will reduce the use of fossil fuels. But we also have to transform the way electricity is generated. Solar power will keep getting cheaper. But much more clean energy will be required in order to gradually replace fossil fuels.

Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases. However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.

Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve. This increases the odds that terrorists could acquire nuclear weapons.

Within a few decades, global warming will become even more obvious. The signs are already clear, notes Kaku:

  • The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
  • Greenland’s ice shelves continue to shrink. (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
  • Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off. (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
  • For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
  • Temperatures started to be reliably recorded in the late 1700s; 1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded. Levels of carbon dioxide are rising dramatically.
  • As the earth heats up, tropical diseases are gradually migrating northward.

It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide. But we must be careful about unintended side effects on ecosystems.

Eventually fusion power may solve most of our energy needs. Fusion powers the sun and lights up all the stars.

Anyone who can successfully master fusion power will have unleashed unlimited eternal energy. And the fuel for these fusion plants comes from ordinary seawater. Pound for pound, fusion power releases 10 million times more power than gasoline. An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum. (page 272)

It’s extremely difficult to heat hydrogen gas to tens of millions of degrees. But scientists will probably master fusion power within the next few decades. And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.

One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point. If the resulting shock waves are powerful enough, they can compress and heat fuel to the point of creating nuclear fusion. This approach is called inertial confinement fusion.

The other main approach used by scientists to try to create fusion is magnetic confinement fusion. A huge, hollow, doughnut-shaped device made of steel and surrounded by magnetic coils is used to squeeze hydrogen gas enough to heat it to millions of degrees.

What is most difficult in this approach is squeezing the hydrogen gas uniformly. Otherwise, it bulges out in complex ways. Scientists are using supercomputers to try to control this process. (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion. So stars form easily.)

Most of the energy we burn is used to overcome friction. Kaku observes that a layer of ice between major cities would drastically cut the need for energy to overcome friction.

In 1911, scientists discovered that cooling mercury to four degrees (Kelvin) above absolute zero causes it to lose all electrical resistance. Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy. The disadvantage is that you have to cool it to near absolute zero using liquid helium, which is very expensive.

But in 1986, scientists learned that certain ceramics become superconductors at 92 degrees (Kelvin) above absolute zero. Some ceramic superconductors have been created at 138 degrees (Kelvin) above absolute zero. This is important because liquid nitrogen boils at 77 degrees (Kelvin). Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.

Remember that most energy is used to overcome friction. Even with electricity, up to 30 percent can be lost during transmission. But experimental evidence suggests that a current in a superconducting loop can persist for 100,000 years, or perhaps even billions of years. Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating electrical resistance.

Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.

The reason the magnet floats is simple. Magnetic lines of force cannot penetrate a superconductor. This is the Meissner effect. (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.) When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic. This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float. (page 289)

Room temperature superconductors will allow trains and cars to move without any friction. This will revolutionize transportation. Compressed air could get a car going. Then the car could float almost forever as long as the surface is flat.

Even without room temperature superconductors, some countries have produced magnetic levitating (maglev) trains. A maglev train does lose energy to air friction. In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.

Later this century, because there is 8 times more sunlight in space than on the surface of the earth, space solar power will be possible. A reduced cost of space travel may make it feasible to send hundreds of solar satellites into space. One challenge is that these solar satellites would have to orbit 22,000 miles up, much farther than satellites in near-earth orbits of 300 miles. But the main problem is the cost of booster rockets. (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)

 

FUTURE OF SPACE TRAVEL

Kaku quotes Carl Sagan:

We have lingered long enough on the shores of the cosmic ocean. We are ready at last to set sail for the stars.

Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:

So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition. This, in turn, will generate more interest in one day sending a probe to these distant planets. There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms. (page 297)

Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars. But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.

For instance, the ocean under the ice of the moon Europa is estimated to be twice the total volume of the earth’s oceans. And Europa is continually heated by tidal forces caused by Jupiter’s gravity.

It had been thought that life required sunlight. But in 1977, life was found on earth, deep under water in the Galapagos Rift. Energy from undersea volcano vents provided enough energy for life. Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcano vents. Some of the most primitive forms of DNA have been found on the bottom of the ocean.

In the future, new types of space satellite may be able to detect not only radiation from colliding black holes, but also even new information about the Big Bang – a singularity involving extreme density and temperature. Kaku:

At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty. In one scenario, our universe is a huge bubble of some sort that is continually expanding. We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper). But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath. Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation). Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion. (page 301)

Space travel is very expensive. It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon. It costs much more to send a person to Mars.

Robotic missions are far cheaper than manned missions. And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.

Kaku next describes a mission to Mars:

Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission. But getting to Mars will be much more difficult than reaching the moon. In contrast to the moon, Mars represents a quantum leap in difficulty. It takes only three days to reach the moon. It takes six months to a year to reach Mars.

In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like. Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.

Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station. To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars. With no air, soil, or water, everything must be brought from earth. It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars. The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth. Any rip in a space suit would create rapid depressurization and death. (page 312)

Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth. The temperature never goes above the melting point of ice. And the dust storms are ferocious and often engulf the entire planet.

Eventually astronauts may be able to terraform Mars to make it more hospitable for life. The simplest approach would be to inject methane gas into the atmosphere, which might be able to trap sunlight, thereby raising the temperature of Mars above the melting point of ice. (Methane is an even more potent greenhouse gas than carbon dioxide.) Once the temperature rises, the underground permafrost may begin to thaw. Riverbeds would fill with water, and lakes and oceans might form again. This would release more carbon dioxide, leading to a positive feedback loop.

Another possible way to terraform Mars would be to deflect a comet towards the planet. Comets are made mostly of water ice. A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.

The polar regions of Mars are made of frozen carbon dioxide and ice. It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps. This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.

Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars. They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide. They could also be genetically modified to maximize their growth on Mars. These algae pools could accelerate terraforming in several ways. First, they could convert carbon dioxide into oxygen. Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun. Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet. Fourth, the algae can be harvested for food. Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen. (page 315)

Scientists have also considered building solar satellites around Mars to raise the temperature and begin melting the permafrost, setting off a positive feedback loop.

2070 to 2100: A Space Elevator and Interstellar Travel

Near the end of the century, scientists may finally be able to construct a space elevator. With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky. Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.

One challenge is to create a carbon nanotube cable that is 50,000 miles long. Another challenge is that space satellites in orbit travel at 18,000 miles per hour. If a satellite collided with the space elevator, it would be catastrophic. So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.

Another challenge is turbulent weather on earth. The space elevator must be able to move out of the way of severe storms, perhaps by being anchored to a mobile platform such as an aircraft carrier or oil rig. Moreover, there must be an escape pod in case the cable breaks.

Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt. The next goal would be travelling to a star. A conventional chemical rocket would take 70,000 years to reach the nearest star. But there are several proposals for an interstellar craft:

  • solar sail
  • nuclear rocket
  • ramjet fusion
  • nanoships

Although light has no mass, it has momentum and so can exert pressure. The pressure is super tiny. But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft. The solar sail would likely be miles wide. The craft would have to circle the sun for a few years, gaining more and more momentum. Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
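To get a feel for how tiny radiation pressure is – and why the sail must be miles wide – here is a rough back-of-the-envelope sketch in Python. The perfect-reflector formula F = 2IA/c is standard physics, but the sail size and craft mass below are illustrative assumptions, not figures from Kaku's book.

```python
import math

C = 3.0e8            # speed of light, m/s
INTENSITY = 1361.0   # sunlight intensity near earth orbit, W/m^2

def sail_thrust_newtons(radius_m, intensity=INTENSITY):
    """Thrust on a perfectly reflecting solar sail: F = 2*I*A/c."""
    area = math.pi * radius_m ** 2
    return 2 * intensity * area / C

# A sail one mile (~1,609 m) in radius pushing a hypothetical 1,000 kg craft
thrust = sail_thrust_newtons(1609)
print(f"thrust: {thrust:.0f} N")                   # ~74 N
print(f"acceleration: {thrust / 1000:.4f} m/s^2")  # tiny -- hence years of spiraling
```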

Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could generate enough power. One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters. It would have been powered by 1,000 hydrogen bombs. (This also would have been a good way to get rid of atomic bombs meant only for warfare.) Unfortunately, the Nuclear Test Ban Treaty in 1963 meant the scientists couldn’t test Orion. So the project was set aside.

A ramjet engine scoops in air at the front and mixes it with fuel, which then ignites and creates thrust. In 1960, Robert Bussard had the idea of scooping not air but hydrogen gas, which is everywhere in outer space. The hydrogen gas would be squeezed and heated by electric and magnetic fields until the hydrogen fused into helium, releasing enormous amounts of energy via nuclear fusion. With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.

Bussard calculated that a 1,000-ton ramjet fusion engine, accelerating continuously at about one g, could reach 77 percent of the speed of light after one year. Sustained acceleration would allow it to reach the Andromeda galaxy, which is 2,000,000 light-years away, in just 23 years as measured by the astronauts on the starship. (We know from Einstein's theory of relativity that time slows down significantly for those traveling at such a high percentage of the speed of light. But meanwhile, on earth, millions of years will have passed.)
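As a rough check on those figures, here is the standard relativistic-rocket formula in Python – a sketch assuming constant 1 g acceleration throughout. The exact shipboard time depends on the acceleration profile and on whether the ship brakes at the destination, which is why the result brackets rather than reproduces Kaku's 23-year figure.

```python
import math

G_IN_LY_PER_YR2 = 1.03  # 1 g expressed in light-years per year^2 (with c = 1 ly/yr)

def ship_time_years(distance_ly, brake_at_destination=True):
    """Shipboard (proper) time for a rocket under constant 1 g acceleration,
    using the relativistic-rocket formula tau = (c/a) * acosh(1 + a*d/c^2)."""
    a = G_IN_LY_PER_YR2
    if brake_at_destination:
        # accelerate to the midpoint, then flip over and decelerate
        return (2 / a) * math.acosh(1 + a * distance_ly / 2)
    return (1 / a) * math.acosh(1 + a * distance_ly)

print(ship_time_years(2e6, brake_at_destination=False))  # ~15 years
print(ship_time_years(2e6, brake_at_destination=True))   # ~28 years
# Kaku's 23-year figure falls in this range; earth time is ~2 million years.
```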

Note that there are still engineering questions about the ramjet fusion engine. For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space. Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.

Another possibility is antimatter rocket ships. If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel. Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.

Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars. These nanoships might become cheap enough to produce and to fuel. They might even be self-replicating.

Millions of nanoships could gather intelligence like a “swarm” does. For instance, a single ant is super simple. But a colony of ants can create a complex ant hill. A similar concept is the “smart dust” considered by the Pentagon. Billions of particles, each a sensor, could be used to gather a great deal of information.

Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light. Moreover, scientists may be able to create one or a few self-replicating nanoprobes. Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.

 

FUTURE OF HUMANITY

Kaku writes:

All the technological revolutions described here are leading to a single point: the creation of a planetary civilization. This transition is perhaps the greatest in human history. In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos. Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate. (pages 378-379)

In 1964, Russian astrophysicist Nicolai Kardashev was interested in probing outer space for signals sent from advanced civilizations. So he proposed three types of civilization:

  • A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
  • A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
  • A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).

Kaku explains:

The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations. Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies. (page 381)

Carl Sagan calculated that we are currently a Type 0.7 civilization – not quite Type I yet. There are signs, says Kaku, that humanity will reach Type I within a matter of decades:

  • The internet allows a person to connect with virtually anyone else on the planet effortlessly.
  • Many families around the world have middle-class ambitions: a suburban house and two cars.
  • The criterion for being a superpower is not weapons, but economic strength.
  • Entertainers increasingly consider the global appeal of their products.
  • People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
  • The news is becoming planetary.
  • Soccer and the Olympics are emerging to dominate planetary sports.
  • The environment is debated on a planetary scale. People realize they must work together to control global warming and pollution.
  • Tourism is one of the fastest-growing industries on the planet.
  • War has rarely occurred between two democracies. A vibrant press, opposition parties, and a solid middle class tend to ensure that.
  • Diseases will be controlled on a planetary basis.
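The "Type 0.7" figure above comes from Carl Sagan's interpolation formula, K = (log10 P − 6) / 10, where P is power consumption in watts. Here is a minimal sketch; note that this version of the formula puts Type I at 10^16 watts, slightly below the 10^17 figure quoted in the earlier list – different authors calibrate the scale differently.

```python
import math

def kardashev_type(power_watts):
    """Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev_type(1.7e13))  # humanity's ~17 terawatts -> roughly Type 0.7
print(kardashev_type(1e16))    # Type 1.0
print(kardashev_type(1e26))    # Type 2.0
```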

A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun goes supernova. Or we may be able to keep the sun from exploding. (Or we might be able to change the orbit of our planet.) Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere. Also, we probably will have colonized not just the entire solar system, but nearby stars.

By the time we become a Type III civilization, we will have explored most of the galaxy. We may have done this using self-replicating robot probes. Or we may have mastered Planck energy (10^19 billion electron volts). At this energy, space-time itself becomes unstable. The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time. By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time. As a result, a Type III civilization might be able to colonize the entire galaxy.

It’s possible that a more advanced civilization has already visited or detected us. For instance, they may have used tiny self-replicating probes that we haven’t noticed yet. It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type I.

Kaku writes that many people are not aware of the historic transition humanity is now making. But this could change if we discover evidence of intelligent life somewhere in outer space. Then we would consider our level of technological evolution relative to theirs.

Consider the SETI Institute. This is from their website (www.seti.org):

SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.

Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets. Whether evolution will give rise to intelligent, technological civilizations is open to speculation. However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.

Finding evidence of other technological civilizations however, requires significant effort. Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.

Work at the Center is divided into two areas: Research and Development (R&D) and Projects. R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects. The algorithms and technology developed in the lab are first field-tested and then implemented during observing. The observing results are used to guide the development of new hardware, software, and observing facilities. The improved SETI observing Projects in turn provide new ideas for Research and Development. This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.

Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is. A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible. If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Warren Buffett on Jack Bogle


(Image: Zen Buddha Silence by Marilyn Barbone.)

July 23, 2017

Warren Buffett has long maintained that most investors–large and small–would be best off by simply investing in ultra-low-cost index funds. Buffett explains his reasoning again in the 2016 Letter to Berkshire Shareholders (see pages 21-25): http://berkshirehathaway.com/letters/2016ltr.pdf

Passive investors will essentially match the market over time. So, argues Buffett, active investors will match the market over time before costs (including fees and expenses). After costs, active investors will, in aggregate, trail the market by the total amount of costs. Thus, the net returns of most active investors will trail the market over time. Buffett:

There are, of course, some skilled individuals who are highly likely to out-perform the S&P over long stretches. In my lifetime, though, I’ve identified–early on–only ten or so professionals that I expected would accomplish this feat.

There are no doubt many hundreds of people–perhaps thousands–whom I have never met and whose abilities would equal those of the people I’ve identified. The job, after all, is not impossible. The problem simply is that the great majority of managers who attempt to over-perform will fail. The probability is also very high that the person soliciting your funds will not be the exception who does well.

As for those active managers who produce a solid record over 5-10 years, many of them will have had a fair amount of luck. Moreover, good records attract assets under management. But large sums are always a drag on performance.

 

BUFFETT’S BET AGAINST PROTÉGÉ PARTNERS

Long Bets is a non-profit seeded by Jeff Bezos. As Buffett describes in his 2016 Letter to Shareholders, “proposers” can post a proposition at www.Longbets.org that will be proved right or wrong at some date in the future. They wait for someone to take the other side of the bet. Each side names a charity that will be the beneficiary if its side wins and writes a brief essay defending its position.

Buffett:

Subsequently, I publicly offered to wager $500,000 that no investment pro could select a set of at least five hedge funds–wildly-popular and high-fee investing vehicles–that would over an extended period match the performance of an unmanaged S&P-500 index fund charging only token fees. I suggested a ten-year bet and named a low-cost Vanguard S&P fund as my contender. I then sat back and waited expectantly for a parade of fund managers–who could include their own fund as one of the five–to come forth and defend their occupation. After all, these managers urged others to bet billions on their abilities. Why should they fear putting a little of their own money on the line?

What followed was the sound of silence. Though there are thousands of professional investment managers who have amassed staggering fortunes by touting their stock-selecting prowess, only one man–Ted Seides–stepped up to my challenge. Ted was a co-manager of Protégé Partners, an asset manager that had raised money from limited partners to form a fund-of-funds–in other words, a fund that invests in multiple hedge funds.

I hadn’t known Ted before our wager, but I like him and admire his willingness to put his money where his mouth was…

For Protégé Partners’ side of our ten-year bet, Ted picked five funds-of-funds whose results were to be averaged and compared against my Vanguard S&P index fund. The five he selected had invested their money in more than 100 hedge funds, which meant that the overall performance of the funds-of-funds would not be distorted by the good or poor results of a single manager.

Here are the results so far after nine years (from 2008 through 2016):

Net return after 9 years:

  • Fund of Funds A: 8.7%
  • Fund of Funds B: 28.3%
  • Fund of Funds C: 62.8%
  • Fund of Funds D: 2.9%
  • Fund of Funds E: 7.5%
  • S&P 500 Index Fund: 85.4%

Compound annual return:

  • All funds of funds: 2.2%
  • S&P 500 Index Fund: 7.1%

To see a more detailed table of the results, go to page 22 of the Berkshire 2016 Letter: http://berkshirehathaway.com/letters/2016ltr.pdf

Buffett continues:

The compounded annual increase to date for the index fund is 7.1%, which is a return that could easily prove typical for the stock market over time. That’s an important fact: A particularly weak nine years for the market over the lifetime of this bet would have probably helped the relative performance of the hedge funds, because many hold large ‘short’ positions. Conversely, nine years of exceptionally high returns from stocks would have provided a tailwind for index funds.

Instead we operated in what I would call a ‘neutral’ environment. In it, the five funds-of-funds delivered, through 2016, an average of only 2.2%, compounded annually. That means $1 million invested in those funds would have gained $220,000. The index fund would meanwhile have gained $854,000.

Bear in mind that every one of the 100-plus managers of the underlying hedge funds had a huge financial incentive to do his or her best. Moreover, the five funds-of-funds managers that Ted selected were similarly incentivized to select the best hedge-fund managers possible because the five were entitled to performance fees based on the results of the underlying funds.

I’m certain that in almost all cases the managers at both levels were honest and intelligent people. But the results for their investors were dismal–really dismal. And, alas, the huge fixed fees charged by all of the funds and funds-of-funds involved–fees that were totally unwarranted by performance–were such that their managers were showered with compensation over the nine years that have passed. As Gordon Gekko might have put it: ‘Fees never sleep.’

The underlying hedge-fund managers in our bet received payments from their limited partners that likely averaged a bit under the prevailing hedge-fund standard of ‘2 and 20,’ meaning a 2% annual fixed fee, payable even when losses are huge, and 20% of profits with no clawback (if good years were followed by bad ones). Under this lopsided arrangement, a hedge fund operator’s ability to simply pile up assets under management has made many of these managers extraordinarily rich, even as their investments have performed poorly.

Still, we’re not through with fees. Remember, there were the fund-of-funds managers to be fed as well. These managers received an additional fixed amount that was usually set at 1% of assets. Then, despite the terrible overall record of the five funds-of-funds, some experienced a few good years and collected ‘performance’ fees. Consequently, I estimate that over the nine-year period roughly 60%–gulp!–of all gains achieved by the five funds-of-funds were diverted to the two levels of managers. That was their misbegotten reward for accomplishing something far short of what their many hundreds of limited partners could have effortlessly–and with virtually no cost–achieved on their own.

In my opinion, the disappointing results for hedge-fund investors that this bet exposed are almost certain to recur in the future. I laid out my reasons for that belief in a statement that was posted on the Long Bets website when the bet commenced (and that is still posted there)…
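As a quick sanity check on the arithmetic Buffett cites – 7.1% versus 2.2% compounded over nine years on $1 million – here is a short Python verification:

```python
# Verify the gains Buffett cites for $1 million over the nine-year bet.
for label, annual_return in [("S&P 500 index fund", 0.071),
                             ("funds-of-funds average", 0.022)]:
    gain = 1_000_000 * ((1 + annual_return) ** 9 - 1)
    print(f"{label}: ${gain:,.0f}")
# -> roughly $854,000 and $216,000 (Buffett's $220,000 reflects
#    rounding of the 2.2% compound rate)
```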

Even if you take the smartest 10% of all active investors, most of them will trail the market, net of costs, over the course of a decade or two. Most investors (even the smartest) who think they can beat the market are wrong. Buffett’s bet against Protégé Partners is yet another example of this.

 

BUFFETT PRAISES BOGLE

If a statue is ever erected to honor the person who has done the most for American investors, the hands-down choice should be Jack Bogle. For decades, Jack has urged investors to invest in ultra-low-cost index funds. In his crusade, he amassed only a tiny percentage of the wealth that has typically flowed to managers who have promised their investors large rewards while delivering them nothing–or, as in our bet, less than nothing–of added value.

In his early years, Jack was frequently mocked by the investment-management industry. Today, however, he has the satisfaction of knowing that he helped millions of investors realize far better returns on their savings than they otherwise would have earned. He is a hero to them and to me.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Why You Shouldn’t Try Market Timing


(Image: Zen Buddha Silence by Marilyn Barbone.)

July 2, 2017

In Investing: The Last Liberal Art (Columbia University Press, 2nd edition, 2013), Robert Hagstrom has an excellent chapter on decision making. Hagstrom examines Philip Tetlock’s discussion of foxes versus hedgehogs.

 

PHILIP TETLOCK’S STUDY OF POLITICAL FORECASTING

Philip Tetlock, professor of psychology at the University of Pennsylvania, spent fifteen years (1988-2003) studying the political forecasts made by 284 experts. As Hagstrom writes:

All of them were asked about the state of the world; all gave their prediction of what would happen next. Collectively, they made over 27,450 forecasts. Tetlock kept track of each one and calculated the results. How accurate were the forecasts? Sadly, but perhaps not surprisingly, the predictions of experts are no better than ‘dart-throwing chimpanzees.’ (page 149)

In other words, one could have rolled a six-sided die 27,450 times over the course of fifteen years and achieved the same level of predictive accuracy as this group of top experts. (The predictions took one of three forms: more of X, no change in X, or less of X. Rolling a six-sided die – mapping two faces to each outcome – is one way to generate random picks among three equally likely scenarios.)
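A quick simulation makes this baseline concrete. Assuming, as above, three equally likely outcomes, random guessing converges on about 33% accuracy – the level the experts collectively failed to beat:

```python
import random

random.seed(0)
OUTCOMES = ["more of X", "no change in X", "less of X"]

trials = 27_450
hits = sum(random.choice(OUTCOMES) == random.choice(OUTCOMES)
           for _ in range(trials))
print(hits / trials)  # ~0.33 -- the "dart-throwing chimpanzee" baseline
```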

In a nutshell, political experts generally achieve high levels of knowledge (about history, politics, etc.), but most of this knowledge does not help in making predictions. When it comes to predicting the future, political experts suffer from overconfidence, hindsight bias, belief system defenses, and lack of Bayesian process, says Hagstrom.

Although the overall record of political forecasting is dismal, Tetlock was still able to identify a few key differences:

The aggregate success of the forecasters who behaved most like foxes was significantly greater than those who behaved like hedgehogs. (page 150)

The distinction between foxes and hedgehogs goes back to an essay by Sir Isaiah Berlin entitled, ‘The Hedgehog and the Fox: An Essay on Tolstoy’s View of History.’ Berlin defined hedgehogs as thinkers who viewed the world through the lens of a single defining idea, and foxes as thinkers who were skeptical of grand theories and instead drew on a wide variety of ideas and experiences before making a decision.

 

FOXES VERSUS HEDGEHOGS

Hagstrom clearly explains key differences between Foxes and Hedgehogs:

Why are hedgehogs penalized? First, because they have a tendency to fall in love with pet theories, which gives them too much confidence in forecasting events. More troubling, hedgehogs were too slow to change their viewpoint in response to disconfirming evidence. In his study, Tetlock said Foxes moved 59 percent of the prescribed amount toward alternate hypotheses, while Hedgehogs moved only 19 percent. In other words, Foxes were much better at updating their Bayesian inferences than Hedgehogs.

Unlike Hedgehogs, Foxes appreciate the limits of their own knowledge. They have better calibration and discrimination scores than Hedgehogs. (Calibration, which can be thought of as intellectual humility, measures how much your subjective probabilities correspond to objective probabilities. Discrimination, sometimes called justified decisiveness, measures whether you assign higher probabilities to things that occur than to things that do not.) Hedgehogs have a stubborn belief in how the world works, and they are more likely to assign probabilities to things that have not occurred than to things that actually occur.

Tetlock tells us Foxes have three distinct cognitive advantages.

  1. They begin with ‘reasonable starter’ probability estimates. They have better ‘inertial-guidance’ systems that keep their initial guesses closer to short-term base rates.
  2. They are willing to acknowledge their mistakes and update their views in response to new information. They have a healthy Bayesian process.
  3. They can see the pull of contradictory forces, and, most importantly, they can appreciate relevant analogies.

Hedgehogs start with one big idea and follow through – no matter the logical implications of doing so. Foxes stitch together a collection of big ideas. They see and understand the analogies and then create an aggregate hypothesis. I think we can say the fox is the perfect mascot for the College of Liberal Arts Investing. (pages 150-151)
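The "59 percent versus 19 percent" finding refers to how far forecasters moved toward the fully Bayesian answer when confronted with disconfirming evidence. Here is a minimal sketch of that idea; the prior and likelihood ratio below are invented for illustration:

```python
def bayesian_posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

def partial_update(prior, likelihood_ratio, fraction):
    """Move only `fraction` of the prescribed (Bayesian) distance."""
    target = bayesian_posterior(prior, likelihood_ratio)
    return prior + fraction * (target - prior)

prior, lr = 0.80, 0.25  # a confident forecast meets disconfirming evidence
print(bayesian_posterior(prior, lr))    # 0.50 -- the full Bayesian update
print(partial_update(prior, lr, 0.59))  # ~0.62 -- the average fox
print(partial_update(prior, lr, 0.19))  # ~0.74 -- the average hedgehog
```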

 

KNOWING WHAT YOU DON’T KNOW

We have two classes of forecasters: Those who don’t know – and those who don’t know they don’t know. – John Kenneth Galbraith

Last year, I wrote about The Most Important Thing, a terrific book by the great value investor Howard Marks. See: https://boolefund.com/howard-marks-the-most-important-thing/

One of the sections from that blog post, ‘Knowing What You Don’t Know,’ is directly relevant to the discussion of foxes versus hedgehogs. We can often ‘take the temperature’ of the stock market. Thus, we can have some idea that the market is high and may fall after an extended period of increases.

But we can never know for sure that the market will fall, and if so, when precisely. In fact, the market does not even have to fall much at all. It could move sideways for a decade or two, and still end up at more normal levels. Thus, we should always focus our energy and time on finding individual securities that are undervalued.

There could always be a normal bear market, meaning a drop of 15-25%. But that doesn’t conflict with a decade or two of a sideways market. If we own stocks that are cheap enough, we could still be fully invested. Even when the market is quite high, there are usually cheap micro-cap stocks, for instance. Buffett made a comment indicating that he would have been fully invested in 1999 if he were managing a small enough sum to be able to focus on micro caps:

If I was running $1 million, or $10 million for that matter, I’d be fully invested.

There are a few cheap micro-cap stocks today. Moreover, some oil-related stocks are cheap from a 5-year point of view.

Warren Buffett, when he was running the Buffett Partnership, knew for a period of almost ten years (roughly 1960 to 1969) that the stock market was high (and getting higher) and would eventually either fall or move sideways for many years. Yet he was smart enough never to predict precisely when the correction would occur. Because he stayed focused on finding individual companies that were undervalued, Buffett produced an outstanding track record for the Buffett Partnership. Had he avoided cheap stocks merely because he knew the market was high, he would not have done nearly as well. (For more about the Buffett Partnership, see: https://boolefund.com/warren-buffetts-ground-rules/)

Buffett on forecasting:

We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.

Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.

Here is what Ben Graham, the father of value investing, said about forecasting the stock market:

…if I have noticed anything over these 60 years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

Howard Marks has tracked (in a limited way) many macro predictions, including U.S. interest rates, the U.S. stock market, and the yen/dollar exchange rate. He found quite clearly that most forecasts were not correct.

I can elaborate on two examples that I spent much time on (when I should have stayed focused on finding individual companies available at cheap prices):

  • the U.S. stock market
  • the yen/dollar exchange

The U.S. stock market

A secular bear market for U.S. stocks began (arguably) in the year 2000 when the 10-year Graham-Shiller P/E – also called the CAPE (cyclically adjusted P/E) – was over 30, its highest level in U.S. history. The long-term average CAPE is around 16. Based on over one hundred years of history, the pattern for U.S. stocks in a secular bear market would be relatively flat or lower until the CAPE approached 10.
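For reference, the CAPE is just the current price divided by the trailing ten-year average of inflation-adjusted earnings. A minimal sketch with made-up numbers:

```python
import numpy as np

def cape(price, real_eps_last_10_years):
    """Graham-Shiller CAPE: price over the 10-year mean of real EPS."""
    assert len(real_eps_last_10_years) == 10
    return price / np.mean(real_eps_last_10_years)

# hypothetical index level and ten years of inflation-adjusted earnings
print(cape(2400.0, [95, 102, 87, 60, 78, 96, 101, 104, 99, 105]))  # ~25.9
```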

However, ever since Greenspan began running the Fed in the 1980s, the Fed has usually had a policy of stimulating the economy and stocks by lowering rates or keeping rates as low as possible. This has caused U.S. stocks to be much higher than they otherwise would be. For instance, with rates today staying near zero, U.S. stocks could easily remain at least twice as high as ‘normal’ indefinitely, assuming the Fed decides to keep rates low for many more years. Furthermore, as Buffett has noted, very low rates sustained for many decades would eventually justify price/earnings ratios on stocks of 100.

In addition to the current Fed regime, there are several additional reasons why rates may stay low. As Jeremy Grantham recently wrote:

  • We could be between waves of innovation, which suppresses growth and the demand for capital.
  • Population in the developed world and in China is rapidly aging. With more middle-aged savers and fewer high-consuming young workers, the result could be excess savings that depress all returns on capital.
  • Nearly 100% of all the recovery in total income since 2009 has gone to the top 0.1%.

Grantham discusses all of these possible reasons for low rates in the Q3 2016 GMO Letter: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/gmo-quarterly-letters/hellish-choices-what’s-an-asset-owner-to-do-and-not-with-a-bang-but-a-whimper.pdf?sfvrsn=8

Grantham gives more detail on income inequality in the Q4 2016 GMO Letter: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/gmo-quarterly-letters/is-trump-a-get-out-of-hell-free-card-and-the-road-to-trumpsville-the-long-long-mistreatment-of-the-american-working-class.pdf?sfvrsn=6

(In order to see GMO commentaries, you may have to register but it’s free.)

Around the year 2012 (or even earlier), some of the smartest market historians – including Russell Napier, author of Anatomy of the Bear – started predicting that the S&P 500 Index would fall towards a CAPE of 10 or lower, which is how every previous U.S. secular bear market concluded. It didn’t happen in 2012, or in 2013, or in 2014, or in 2015, or in 2016. Moreover, it may not happen in 2017 or even 2018.

Again, there could always be a normal bear market involving a drop of 15-25%. But that doesn’t conflict with a sideways market for a decade or two. Grantham suggests total returns of about 2.8% per year for the next 20 years.

Grantham, an expert on bubbles, also pointed out that the usual ingredients for a bubble do not exist today. Normally in a bubble, there are excellent economic fundamentals combined with a euphoric extrapolation of those fundamentals into the future. Grantham in Q3 2016 GMO Letter:

  • Current fundamentals are way below optimal – trend line growth and productivity are at such low levels that the usually confident economic establishment is at an obvious loss to explain why. Capacity utilization is well below peak and has been falling. There is plenty of available labor hiding in the current low participation rate (at a price). House building is also far below normal.
  • Classic bubbles have always required that the geopolitical world is at least acceptable, more usually well above average. Today’s, in contrast, you can easily agree is unusually nerve-wracking.
  • Far from euphoric extrapolations, the current market has been for a long while and remains extremely nervous. Investor trepidation is so great that many are willing to tie up money in ultra-safe long-term government bonds that guarantee zero real return rather than buy the marginal share of stock! Cash reserves are high and traditional measures of speculative confidence are low. Most leading commentators are extremely bearish. The net effect of this nervousness is shown in the last two and a half years of the struggling U.S. market…so utterly unlike the end of the classic bubbles.
  • …They – the bubbles in stocks and houses – all coincided with bubbles in credit…Credit is, needless to say, complex…What is important here is the enormous contrast between the credit conditions that previously have been coincident with investment bubbles and the lack of a similarly consistent and broad-based credit boom today.

The yen/dollar exchange

As for the yen/dollar exchange, some of the smartest macro folks around predicted (in 2010 and later) that shorting the yen vs. the U.S. dollar would be the ‘trade of the decade,’ and that the yen/dollar exchange would exceed 200. In 2007, the yen/dollar was over 120. By 2011-2012, the yen/dollar had gone to around 76. In late 2014 and for most of 2015, the yen/dollar again exceeded 120. However, in late 2015, the BOJ decided not to try to weaken their currency further by printing even larger amounts of money. The yen/dollar declined from over 120 to about 106. Since then, it has remained below 120.

The ‘trade of the decade’ argument ran as follows: total debt-to-GDP in Japan has reached stratospheric levels (400-500%, including over 250% for government debt alone), government deficits have continued to widen, and the Japanese population is actually shrinking. Since long-term GDP growth is essentially population growth plus productivity growth, it should become mathematically impossible for the Japanese government to pay back its debt without a significant devaluation of its currency. If the BOJ could devalue the yen by 67% – which would imply a yen/dollar exchange rate of well over 200 – then Japan could repay the government debt in seriously devalued currency. In this scenario – a yen devaluation of 67% – Japan effectively would repay only 33% of the government debt in real terms. Currency devaluation – inflating away the debts – is what most major economies throughout history have done.
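The arithmetic connecting a 67% devaluation to a yen/dollar rate "well over 200" is straightforward, starting from the 2011-2012 level of about 76:

```python
start_rate = 76.0    # yen per dollar, roughly the 2011-2012 level
devaluation = 0.67   # the yen loses 67% of its value

implied_rate = start_rate / (1 - devaluation)
print(f"implied yen/dollar: {implied_rate:.0f}")             # ~230, well over 200
print(f"real repayment: {1 - devaluation:.0%} of the debt")  # ~33%
```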

Although the U.S. dollar may be stronger than the yen or the euro, all three governments want to devalue their currency over time. Therefore, even if the yen loses value, it’s not at all clear how long this will take when you consider the yen versus the dollar. The yen ‘collapse’ could be delayed by many years. So if you compare a yen/dollar short position versus a micro-cap value investment strategy, it’s likely that the micro-cap value investment strategy will produce higher returns with less risk.

Similar logic applies to market timing. You may get lucky once in a row trying to time the market. But simply buying cheap stocks – and holding them for at least 3 to 5 years before buying cheaper stocks – is likely to do much better over the course of decades. Countless extremely intelligent investors throughout history have gone mostly to cash based on a market prediction, only to see the market continue to move higher for years or even decades. Again: even if the market is high, it can go sideways for a decade or two. If you buy baskets of cheap micro-cap stocks for a decade or two, there is virtually no chance of losing money, and there’s an excellent chance of doing well.

Also, the total human economy is likely to be much larger in the future, and there may be some way to help the Japanese government with its debts. The situation wouldn’t seem so insurmountable if Japan could grow its population. But this might happen in some indirect way if the total economy becomes more open in the future, perhaps involving the creation of a new universal currency.

TWO SCHOOLS: ‘I KNOW’ vs. ‘I DON’T KNOW’

Financial forecasting cannot be done with any sort of consistency. Every year, there are many people making financial forecasts, and so purely as a matter of chance, a few will be correct in a given year. But the ones correct this year are almost never the ones correct the next time around, because what they’re trying to predict can’t be predicted with any consistency. Howard Marks writes:

I am not going to try to prove my contention that the future is unknowable. You can’t prove a negative, and that certainly includes this one. However, I have yet to meet anyone who consistently knows what lies ahead macro-wise…

One way to get to be right sometimes is to always be bullish or always be bearish; if you hold a fixed view long enough, you may be right sooner or later. And if you’re always an outlier, you’re likely to eventually be applauded for an extremely unconventional forecast that correctly foresaw what no one else did. But that doesn’t mean your forecasts are regularly of any value…

It’s possible to be right about the macro-future once in a while, but not on a regular basis. It doesn’t do any good to possess a survey of sixty-four forecasts that includes a few that are accurate; you have to know which ones they are. And if the accurate forecasts each six months are made by different economists, it’s hard to believe there’s much value in the collective forecasts.

Marks gives one more example: How many predicted the crisis of 2007-2008? Of those who did predict it – there were bound to be some from pure chance alone – how many then predicted the recovery starting in 2009 and continuing until today (early 2017)? The answer is ‘very few.’ The reason, observes Marks, is that those who got 2007-2008 right “did so at least in part because of a tendency toward negative views.” They probably were negative well before 2007-2008, and more importantly, they probably stayed negative afterward. And yet, from a close of 676.53 on March 9, 2009, the S&P 500 Index has increased more than 240% to a close of 2316.10 on February 10, 2017.

Marks has a description for investors who believe in the value of forecasts. They belong to the ‘I know’ school, and it’s easy to identify them:

  • They think knowledge of the future direction of economies, interest rates, markets and widely followed mainstream stocks is essential for investment success.
  • They’re confident it can be achieved.
  • They know they can do it.
  • They’re aware that lots of other people are trying to do it too, but they figure either (a) everyone can be successful at the same time, or (b) only a few can be, but they’re among them.
  • They’re comfortable investing based on their opinions regarding the future.
  • They’re also glad to share their views with others, even though correct forecasts should be of such great value that no one would give them away gratis.
  • They rarely look back to rigorously assess their record as forecasters. (page 121)

Marks contrasts the confident ‘I know’ folks with the guarded ‘I don’t know’ folks. The latter believe you can’t predict the macro-future, and thus the proper goal for investing is to do the best possible job analyzing individual securities. If you belong to the ‘I don’t know’ school, eventually everyone will stop asking you where you think the market’s going.

You’ll never get to enjoy that one-in-a-thousand moment when your forecast comes true and the Wall Street Journal runs your picture. On the other hand, you’ll be spared all those times when forecasts miss the mark, as well as the losses that can result from investing based on overrated knowledge of the future.

No one likes investing on the assumption that the future is unknowable, observes Marks. But if the future IS largely unknowable, then it’s far better as an investor to acknowledge that fact than to pretend otherwise.

Furthermore, says Marks, the biggest problems for investors tend to happen when investors forget the difference between probability and outcome (i.e., the limits of foreknowledge):

  • when they believe the shape of the probability distribution is knowable with certainty (and that they know it),
  • when they assume the most likely outcome is the one that will happen,
  • when they assume the expected result accurately represents the actual result, or
  • perhaps most important, when they ignore the possibility of improbable outcomes.

Marks sums it up:

Overestimating what you’re capable of knowing or doing can be extremely dangerous – in brain surgery, transocean racing or investing. Acknowledging the boundaries of what you can know – and working within those limits rather than venturing beyond – can give you a great advantage. (page 123)

Or as Warren Buffett wrote in the 2014 Berkshire Hathaway Letter to Shareholders:

Anything can happen anytime in markets. And no advisor, economist, or TV commentator – and definitely not Charlie nor I – can tell you when chaos will occur. Market forecasters will fill your ear but will never fill your wallet.

Link: http://berkshirehathaway.com/letters/2014ltr.pdf

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Does the Stock Market Overreact?


(Image: Zen Buddha Silence by Marilyn Barbone.)

June 18, 2017

Richard H. Thaler recently published a book entitled Misbehaving: The Making of Behavioral Economics. It’s an excellent book. According to Nobel Laureate Daniel Kahneman, Richard Thaler is “the creative genius who invented the field of behavioral economics.”

Thaler defines “Econs” as the fully rational human beings that traditional economists have always assumed for their models. “Humans” are often less than fully rational, as demonstrated not only by decades of experiments, but also by the history of various asset prices.

For this blog post, I will focus on Part VI (Finance, pages 203-253). But first a quotation Thaler has at the beginning of his book:

The foundation of political economy and, in general, of every social science, is evidently psychology. A day may come when we shall be able to deduce the laws of social science from the principles of psychology.

– Vilfredo Pareto, 1906

 

THE BEAUTY CONTEST

Chicago economist Eugene Fama coined the term “efficient market hypothesis,” or EMH for short. Thaler writes that the EMH has two (related) components:

  • the price is right – the idea is that any asset will sell for its “intrinsic value.” “If the rational valuation of a company is $100 million, then its stock will trade such that the market cap of the firm is $100 million.”
  • no free lunch – EMH holds that all publicly available information is already reflected in current stock prices, so there is no reliable way to “beat the market” over time.

NOTE: If prices are always right, then there can never be bubbles in asset prices. It also implies that there are no undervalued stocks – at least none that an investor could consistently identify – and that there is no way to beat the market over a long period of time except by luck. (On this view, Warren Buffett was simply lucky.)

Thaler observes that finance did not become a mainstream topic in economics departments before the advent of cheap computer power and great data. The University of Chicago was the first to develop a comprehensive database of stock prices going back to 1926. After that, research took off, and by 1970 EMH was well-established.

Thaler also points out that the famous economist J. M. Keynes was “a true forerunner of behavioral finance.” Keynes, who was a great value investor, thought that “animal spirits” play an important role in financial markets.

Keynes also observed that professional investors are playing an intricate guessing game, similar to picking out the prettiest faces from a set of photographs:

…It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be. And there are some, I believe, who practice the fourth, fifth, and higher degrees.

 

DOES THE STOCK MARKET OVERREACT?

On average, investors overreact to recent poor performance for low P/E stocks, which is why the P/E’s are low. And, on average, investors overreact to recent good performance for high P/E stocks, which is why the P/E’s are high.

Having said that, Thaler is quick to quote a warning by Ben Graham about timing: ‘Undervaluations caused by neglect or prejudice may persist for an inconveniently long time, and the same applies to inflated prices caused by overenthusiasm or artificial stimulus.’ Thaler gives the example of the late 1990s: for years, Internet stocks just kept going up, while value stocks just kept massively underperforming.

According to Thaler, most academic financial economists overlooked Graham’s work:

It was not so much that anyone had refuted Graham’s claim that value investing worked; it was more that the efficient market theory of the 1970s said that value investing couldn’t work. But it did. Late that decade, accounting professor Sanjoy Basu published a thoroughly competent study of value investing that fully supported Graham’s strategy. However, in order to get such papers published at the time, one had to offer abject apologies for the results. (page 221)

Thaler and his research partner Werner De Bondt reasoned as follows. Suppose that investors are overreacting: they are overly optimistic about the future growth of high P/E stocks, driving the P/E’s “too high,” and excessively pessimistic about low P/E stocks, driving the P/E’s “too low.” Then subsequent high returns from value stocks and low returns from growth stocks represent simple reversion to the mean. But EMH says that:

  • The price is right: Stock prices cannot diverge from intrinsic value.
  • No free lunch: Because all information is already in the stock price, it is not possible to beat the market. Past stock prices and the P/E cannot predict future price changes.

Thaler and De Bondt took all the stocks listed on the New York Stock Exchange, and ranked their performance over three to five years. They isolated the worst performing stocks, which they called “Losers.” And they isolated the best performing stocks, which they called “Winners.” Writes Thaler:

If markets were efficient, we should expect the two portfolios to do equally well. After all, according to the EMH, the past cannot predict the future. But if our overreaction hypothesis were correct, Losers would outperform Winners. (page 223)

Results:

The results strongly supported our hypothesis. We tested for overreaction in various ways, but as long as the period we looked back at to create the portfolios was long enough, say three years, then the Loser portfolio did better than the Winner portfolio. Much better. For example, in one test we used five years of performance to form the Winner and Loser portfolios and then calculated the returns of each portfolio over the following five years, compared to the overall market. Over the five-year period after we formed our portfolios, the Losers outperformed the market by about 30% while the Winners did worse than the market by about 10%.
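Here is a simplified sketch of the De Bondt-Thaler test, assuming a pandas DataFrame of annual prices (one row per year, one column per NYSE stock). The 35-stock portfolio size and the helper function itself are illustrative assumptions, not the paper's exact methodology:

```python
import pandas as pd

def overreaction_test(prices: pd.DataFrame, formation_years=5,
                      holding_years=5, n_stocks=35):
    """Form 'Loser' and 'Winner' portfolios from past returns, then
    compare their subsequent returns against the market average."""
    # returns over the formation period
    past = prices.iloc[formation_years] / prices.iloc[0] - 1
    # returns over the subsequent holding period
    end = formation_years + holding_years
    future = prices.iloc[end] / prices.iloc[formation_years] - 1

    losers = past.nsmallest(n_stocks).index
    winners = past.nlargest(n_stocks).index
    market = future.mean()
    return {"losers_vs_market": future[losers].mean() - market,
            "winners_vs_market": future[winners].mean() - market}
```

If overreaction is real, `losers_vs_market` should come out well positive and `winners_vs_market` negative, mirroring the roughly +30% and -10% figures Thaler reports.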

 

THE REACTION TO OVERREACTION

In response to widespread evidence that ‘Loser’ stocks (low P/E) – as a group – outperform ‘Winner’ stocks, defenders of EMH were forced to argue that ‘Loser’ stocks are riskier as a group.

NOTE: On an individual stock basis, a low P/E stock may be riskier. But a basket of low P/E stocks generally far outperforms a basket of high P/E stocks. The question is whether a basket of low P/E stocks is riskier than a basket of high P/E stocks.

According to the CAPM (Capital Asset Pricing Model), the measure of the riskiness of a stock is its co-movement with the rest of the market – its “beta” (the covariance of the stock’s returns with the market’s returns, scaled by the variance of the market’s returns). If a stock has a beta of 1.0, its volatility is similar to that of the whole market. If a stock has a beta of 2.0, it tends to move twice as much as the market (e.g., if the whole market goes up or down by 10%, this stock will tend to go up or down by 20%).
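A minimal sketch of that calculation, using simulated monthly returns:

```python
import numpy as np

def beta(stock_returns, market_returns):
    """CAPM beta: cov(stock, market) / var(market)."""
    cov_matrix = np.cov(stock_returns, market_returns)
    return cov_matrix[0, 1] / cov_matrix[1, 1]

rng = np.random.default_rng(42)
market = rng.normal(0.01, 0.04, 60)             # 60 months of market returns
stock = 2.0 * market + rng.normal(0, 0.02, 60)  # built to have beta near 2
print(beta(stock, market))                       # ~2.0
```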

According to CAPM, if the basket of Loser stocks subsequently outperforms the market while the basket of Winner stocks underperforms, then the Loser stocks must have high betas and the Winner stocks must have low betas. But Thaler and De Bondt found the opposite. Loser stocks (value stocks) were much less risky as measured by beta.

Eventually Eugene Fama himself, along with research partner Kenneth French, published a series of papers documenting that, indeed, both value stocks and small stocks earn higher returns than predicted by CAPM. In short, “the high priest of efficient markets” (as Thaler calls Fama) had declared that CAPM was dead.

But Fama and French were not ready to abandon the EMH (Efficient Market Hypothesis). They came up with the Fama-French Three Factor Model. They showed that value stocks are correlated – a value stock will tend to do well when other value stocks are doing well. And they showed that small-cap stocks are similarly correlated.

The problem, again, is that there is no evidence that a basket of value stocks is riskier than a basket of growth stocks. And there is no theoretical reason to believe that value stocks, as a group, are riskier.

Thaler asserts that the debate was settled by the paper ‘Contrarian Investment, Extrapolation, and Risk,’ published in 1994 by Josef Lakonishok, Andrei Shleifer, and Robert Vishny. This paper shows clearly that value stocks outperform, and value stocks are, if anything, less risky than growth stocks. Link to paper: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

Lakonishok, Shleifer, and Vishny launched the highly successful LSV Asset Management based on their research: http://lsvasset.com/

(Recently Fama and French have introduced a five-factor model, which includes profitability. Profitability was one of Ben Graham’s criteria.)

 

THE PRICE IS NOT RIGHT

If you held a stock forever, it would be worth all future dividends discounted back to the present. Even if you sold the stock, as long as you held it for a very long time, the distant future sales price (discounted back to the present) would be a negligible part of the intrinsic value of the stock. The stock price is really the present value of all expected future dividend payments.
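In other words, under the dividend discount model, price = Σ Dt / (1 + r)^t. A minimal sketch with invented numbers:

```python
def present_value_of_dividends(dividends, discount_rate):
    """Price as the discounted value of an expected dividend stream."""
    return sum(d / (1 + discount_rate) ** t
               for t, d in enumerate(dividends, start=1))

# hypothetical: a $2 dividend growing 5% a year for 50 years, discounted at 8%
dividends = [2.0 * 1.05 ** t for t in range(50)]
print(round(present_value_of_dividends(dividends, 0.08), 2))
```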

Bob Shiller collected historical data on stock prices and dividends.

Then, starting in 1871, for each year he computed what he called the ‘ex post rational’ forecast of the stream of future dividends that would accrue to someone who bought a portfolio of the stocks that existed at that time. He did this by observing the actual dividends that got paid out and discounting them back to the year in question. After adjusting for the well-established trend that stock prices go up over long periods of time, Shiller found that the present value of dividends was… highly stable. But stock prices, which we should interpret as attempts to forecast the present value of dividends, are highly variable…. (pages 231-232, my emphasis)

Shiller demonstrated that a stock price typically moves around much more than the intrinsic value of the underlying business.

October 1987 provides yet another example of stock prices moving much more than fundamental values. The U.S. stock market dropped more than 25% from Thursday, October 15, 1987 to Monday, October 19, 1987. This happened in the absence of any important news, financial or otherwise. Writes Thaler:

If prices are too variable, then they are in some sense ‘wrong.’ It is hard to argue that the price at the close of trading on Thursday, October 15, and the price at the close of trading the following Monday – which was more than 25% lower – can both be rational measures of intrinsic value, given the absence of news.

THE BATTLE OF CLOSED-END FUNDS

It’s important to note that although the assumption of rationality and the EMH have been demonstrated not to be strictly true, behavioral economists have not yet produced a model of human behavior that can supplant rationalist economics. Therefore rationalist economics, not behavioral economics, is still the chief basis on which economists attempt to predict human behavior.

Neuroscientists, psychologists, biologists, and other scientists will undoubtedly learn much more about human behavior in the coming decades. But even then, human behavior, due to its complexity, may remain partly unpredictable for some time. Thus, rationalist economic models may continue to be useful.

  • Rationalist models, including game theory, may also be central to understanding and predicting artificially intelligent agents.
  • It’s also possible (as hard as it may be to believe) that human beings will evolve – perhaps partly with genetic engineering and/or with help from AI – and become more rational overall.

The Law of One Price

In an efficient market, the same asset cannot sell simultaneously for two different prices. Thaler gives the standard example of gold selling for $1,000 an ounce in New York and $1,010 an ounce in London. If transaction costs were small enough, a smart trader could buy gold in New York and sell it in London. This would eventually cause the two prices to converge.
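
A toy version of that trade, with the transaction cost as a made-up parameter:

    # Law-of-one-price arbitrage on the gold example (illustrative numbers).
    ny_price, london_price = 1000.0, 1010.0
    cost_per_ounce = 2.0   # assumed round-trip transaction cost

    profit = london_price - ny_price - cost_per_ounce
    if profit > 0:
        print(f"Buy in New York, sell in London: ${profit:.2f}/oz riskless profit")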

But there is one obvious example that violates the law of one price: closed-end funds, which Ben Graham had written about long before.

For an open-end fund, all transactions take place at NAV (net asset value). Investors buy shares from, and redeem shares with, the fund itself, so a purchase does not require another investor to sell. The total amount invested in an open-end fund therefore grows or shrinks depending on what investors do.

But for a closed-end fund, there is an initial amount invested in the fund, say $100 million, and then there can be no further investments and no withdrawals. A closed-end fund is traded on an exchange. So an investor can buy partial ownership of a closed-end fund, but this means that a previous owner must sell that stake to the buyer.

According to EMH, closed-end funds should trade at NAV. But in the real world, many closed-end funds trade at prices different from NAV (sometimes a premium and sometimes a discount). This is an obvious violation of the law of one price.

Charles Lee, Andrei Shleifer, and Richard Thaler wrote a paper on closed-end funds in which they identified four puzzles:

  • Closed-end funds are often sold by brokers who charge a sales commission of 7%. But within six months, the funds typically trade at a discount to NAV of more than 10%. Why do people repeatedly pay $107 for an asset that in six months is worth $90? (See the arithmetic sketched after this list.)
  • More generally, why do closed-end funds so often trade at prices that differ from the NAV of their holdings?
  • The discounts and premia vary noticeably across time and across funds. This rules out many simple explanations.
  • When a closed-end fund, often under pressure from shareholders, changes its structure to an open-end fund, its price often converges to NAV.
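
The arithmetic behind the first puzzle, sketched with the stylized numbers above:

    # Pay NAV plus a 7% commission at the offering, then watch the fund
    # trade at a 10% discount within six months (stylized numbers).
    nav = 100.0
    purchase_price = nav * 1.07      # $107 paid per $100 of assets
    price_later = nav * 0.90         # a 10% discount: worth $90
    print(f"Paid {purchase_price:.0f}, worth {price_later:.0f} six months later:"
          f" a loss of {purchase_price - price_later:.0f}")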

The various premia and discounts on closed-end funds simply make no sense. These mispricings would not exist if investors were rational because the only rational price for a closed-end fund is NAV.

Lee, Shleifer, and Thaler discovered that individual investors are the primary owners of closed-end funds. So Thaler et al. hypothesized that individual investors’ moods of optimism and pessimism shift more noticeably than those of institutional investors. Says Thaler:

We conjectured that when individual investors are feeling perky, discounts on closed-end funds shrink, but when they get depressed or scared, the discounts get bigger. This approach was very much in the spirit of Shiller’s take on social dynamics, and investor sentiment was clearly one example of ‘animal spirits.’ (pages 241-242)

In order to measure investor sentiment, Thaler et al. used the fact that individual investors are more likely than institutional investors to own shares of small companies. They reasoned that if the sentiment of individual investors changes, it should show up both in the discounts on closed-end funds and in the relative performance of small companies (versus big companies). And this is exactly what they found: the greater the discounts to NAV on closed-end funds, the larger the difference in returns between small stocks and large stocks.
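
A sketch of that test with hypothetical monthly data (the actual study used real closed-end fund discounts and size-sorted portfolio returns):

    # Correlating monthly changes in closed-end fund discounts with the
    # small-minus-big return spread. All data below are hypothetical.
    import statistics

    discount_change = [-1.0, 0.5, -2.0, 1.5, -0.5, 2.0, -1.5, 1.0]    # % points
    small_minus_big = [1.2, -0.4, 2.1, -1.0, 0.6, -1.8, 1.4, -0.7]    # % return

    def correlation(x, y):
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
        return cov / (statistics.pstdev(x) * statistics.pstdev(y))

    # A negative correlation would mean that when discounts narrow
    # (sentiment improves), small stocks outperform big stocks.
    print(round(correlation(discount_change, small_minus_big), 2))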

 

NEGATIVE STOCK PRICES

Years later, Thaler revisited the law of one price with a Chicago colleague, Owen Lamont. Owen had spotted a blatant violation of the law of one price involving the company 3Com. 3Com’s main business was networking computers using Ethernet technology, but through a merger it had acquired Palm, maker of the Palm Pilot, a very popular handheld computer at the time.

In the summer of 1999, as many tech stocks seemed to double almost monthly, 3Com stock appeared to be neglected. So management came up with a plan to divest Palm. 3Com sold about 4% of its stake in Palm to the general public and 1% to a consortium of firms. As for the remaining 95% of Palm, each 3Com shareholder would receive 1.5 shares of Palm for each share of 3Com owned.

Once this information was public, one could infer the following: As soon as the initial shares of Palm were sold and started trading, 3Com shareholders would in a sense have two separate investments. A single share of 3Com included 1.5 shares of Palm plus an interest in the remaining parts of 3Com – what’s called the “stub value” of 3Com. Note that the remaining parts of 3Com formed a profitable business in its own right. So the bottom line is that one share of 3Com should equal the “stub value” of 3Com plus 1.5 times the price of Palm.

When Palm started trading, it ended the day at $95 per share. So what should one share of 3Com be worth? It should be worth the “stub value” of 3Com – the remaining profitable businesses of 3Com (Ethernet tech, etc.) – PLUS 1.5 times the price of Palm: 1.5 x $95, or roughly $143.

Again, because the “stub value” of 3Com involves a profitable business in its own right, this means that 3Com should trade at X (the stub value) plus $143, so some price over $143.

But what actually happened? The same day Palm started trading, ending the day at $95, 3Com stock fell to $82 per share. Thaler writes:

That means that the market was valuing the stub value of 3Com at minus $61 per share, which adds up to minus $23 billion! You read that correctly. The stock market was saying that the remaining 3Com business, a profitable business, was worth minus $23 billion. (page 246)

Thaler continues:

Think of it another way. Suppose an Econ is interested in investing in Palm. He could pay $95 and get one share of Palm, or he could pay $82 and get one share of 3Com that includes 1.5 shares of Palm plus an interest in 3Com.
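
The implied stub value is a one-line calculation. A quick check of Thaler’s numbers (he used the exact closing prices, so his figure of minus $61 differs slightly from the rounded prices below):

    # Implied 'stub value' of 3Com on Palm's first trading day.
    price_3com = 82.0      # 3Com's closing price (rounded)
    price_palm = 95.0      # Palm's closing price (rounded)
    palm_per_3com = 1.5    # Palm shares due per 3Com share

    stub = price_3com - palm_per_3com * price_palm
    print(f"Implied stub value: ${stub:.2f} per share")   # about -$60.50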

Thaler observes that two things are needed for such a blatant violation of the law of one price to emerge and persist:

  • You need some traders who want to own shares of the now publicly traded Palm, traders who appear not to realize the basic math of the situation. These traders are called “noise traders,” because they are trading not based on real information (or real news), but based purely on “noise.” (The term “noise traders” was invented by Fischer Black. See: http://www.e-m-h.org/Blac86.pdf)
  • There also must be something preventing smart traders from driving prices back to where they are supposed to be. After all, a sensible investor can buy a share of 3Com for $82 and get 1.5 shares of Palm (worth $143) PLUS an interest in the remaining profitable businesses of 3Com. Actually, the rational investor would go one step further: buy 3Com shares (at $82) and then short an appropriate number of Palm shares (at $95). When the deal is completed and the rational investor receives 1.5 shares of Palm for each share of 3Com owned, he can use those Palm shares to repay the shares he borrowed when shorting the publicly traded Palm stock. This was a CAN’T LOSE investment (the payoff is sketched below). Then why wasn’t everyone trying to do it?
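
Here is the payoff of that trade, per share of 3Com, using the rounded closing prices (and assuming the deal closes and Palm shares can be borrowed – which is exactly where the trade broke down in practice):

    # The 'can't lose' trade, per share of 3Com (rounded closing prices).
    price_3com, price_palm, ratio = 82.0, 95.0, 1.5

    cash_today = ratio * price_palm - price_3com   # short 1.5 Palm, buy 1 3Com
    print(f"Cash pocketed today: ${cash_today:.2f}")
    # After the spin-off, the 1.5 Palm shares received per 3Com share repay
    # the short exactly, leaving the trader the 3Com stub plus the cash.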

The problem was that very few shares of Palm were publicly traded. Some smart traders made tens of thousands of dollars. But there wasn’t enough publicly traded Palm stock available for any rational investor to make a truly large amount of money. So the irrational prices of 3Com and Palm were not corrected.

Thaler also tells a story about a young Benjamin Graham. In 1923, DuPont owned a large number of shares of General Motors. But the market value of DuPont was about the same as its stake in GM. DuPont was a highly profitable firm. So this meant that the stock market was putting the “stub value” of DuPont’s highly profitable business at zero. Graham bought DuPont and sold GM short. He made a lot of money when the price of DuPont went up to more rational levels.

In mid-2014, says Thaler, there was a point when Yahoo’s holdings of Alibaba were calculated to be worth more than the whole of Yahoo.

Sometimes, as with the closed-end funds, obvious mispricings can last for a long time, even decades. Andrei Shleifer and Robert Vishny refer to this as the “limits of arbitrage.”

 

THALER’S CONCLUSIONS ABOUT BEHAVIORAL FINANCE

What are the implications of these examples? If the law of one price can be violated in such transparently obvious cases such as these, then it is abundantly clear that even greater disparities can occur at the level of the overall market. Recall the debate about whether there was a bubble going on in Internet stocks in the late 1990s…. (page 250)

So where do I come down on the EMH? It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful. In a world of Econs, I believe that the EMH would be true. And it would not have been possible to do research in behavioral finance without the rational model as a starting point. Without the rational framework, there are no anomalies from which we can detect misbehavior. Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research. We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed. Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’ There are definitely anomalies: sometimes the market overreacts, and sometimes it underreacts. But it remains the case that most active money managers fail to beat the market…

I have a much lower opinion about the price-is-right component of the EMH, and for many important questions, this is the more important component…

My conclusion: the price is often wrong, and sometimes very wrong. Furthermore, when prices diverge from fundamental value by such wide margins, the misallocation of resources can be quite big. For example, in the United States, where home prices were rising at a national level, some regions experienced especially rapid price increases and historically high price-to-rental ratios. Had both homeowners and lenders been Econs, they would have noticed these warning signals and realized that a fall in home prices was becoming increasingly likely. Instead, surveys by Shiller showed that these were the regions in which expectations about the future appreciation of home prices were the most optimistic. Instead of expecting mean reversion, people were acting as if what goes up must go up even more. (my emphasis)

Thaler adds that policy-makers should realize that asset prices are often wrong, and sometimes very wrong, instead of assuming that prices are always right.

 

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
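
As a purely hypothetical illustration of rank-based sizing (not the fund’s actual weighting formula), decaying weights scaled to sum to 100% produce a top position near 19% and an average position near 8%:

    # Hypothetical rank-based position sizing (illustration only, not the
    # fund's actual formula): better-ranked stocks receive larger weights.
    def weights_by_rank(num_positions):
        raw = [1.0 / (rank + 3) for rank in range(num_positions)]  # decaying
        total = sum(raw)
        return [round(100 * w / total, 1) for w in raw]

    print(weights_by_rank(12))   # 12 positions; weights in percent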

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: [email protected]

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.