Deep Value: Profiting from Mean Reversion

(Image: Zen Buddha Silence, by Marilyn Barbone)

November 12, 2017

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.  Sometimes it seems that there are misconceptions about deep value investing.

  • First, deep value stocks have on occasion been called cheap relative to future growth.  But it’s often more accurate to say that deep value stocks are cheap relative to normalized earnings or cash flows.
  • Second, the cheapness of deep value stocks has often been said to be relative to “net tangible assets.”  However, in many cases – even for stocks trading at a discount to tangible assets – mean reversion relates to the future normalized earnings or cash flows that the assets can produce.
  • Third, typically more than half of deep value stocks underperform the market.  And deep value stocks are more likely to be distressed than average stocks.  Do these facts imply that a deep value investment strategy is riskier than average?  No…

Have you noticed these misconceptions?  I’m curious to hear your take.  Please let me know.

Here are the sections in this blog post:

  • Introduction
  • Mean Reversion as “Return to Normal” instead of “Growth”
  • Revenues, Earnings, Cash Flows, NOT Asset Values
  • Is Deep Value Riskier?
  • A Long Series of Favorable Bets
  • “Cigar Butts” vs. See’s Candies
  • Microcap Cigar Butts

 

INTRODUCTION

Deep value stocks tend to fit two criteria:

  • Deep value stocks trade at depressed multiples.
  • Deep value stocks have depressed fundamentals – they have generally been doing terribly in terms of revenues, earnings, or cash flows, and often the entire industry is doing poorly.

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.

  • Low multiples include low P/E (price-to-earnings), low P/B (price-to-book), low P/CF (price-to-cash flow), and low EV/EBIT (enterprise value-to-earnings before interest and taxes).
  • Mean reversion implies that, in general, deep value stocks are underperforming their economic potential.  On the whole, deep value stocks will experience better future economic performance than is implied by their current stock prices.
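
To make these multiples concrete, here is a minimal sketch in Python of how the four ratios above could be computed from basic financial data.  The company figures are made up for illustration, and the simple enterprise-value definition (market cap plus debt minus cash) is an assumption, not the only possible treatment.

    # Minimal sketch: computing common valuation multiples from basic financial data.
    # All figures are hypothetical; amounts are in millions except the share price.

    def valuation_multiples(price, shares, earnings, book_value, cash_flow,
                            ebit, total_debt, cash):
        """Return P/E, P/B, P/CF, and EV/EBIT for one company."""
        market_cap = price * shares
        enterprise_value = market_cap + total_debt - cash   # simple EV definition (an assumption)
        return {
            "P/E":     market_cap / earnings,
            "P/B":     market_cap / book_value,
            "P/CF":    market_cap / cash_flow,
            "EV/EBIT": enterprise_value / ebit,
        }

    # A hypothetical company trading at $10 per share with 50 million shares outstanding
    print(valuation_multiples(price=10.0, shares=50, earnings=60, book_value=450,
                              cash_flow=80, ebit=75, total_debt=120, cash=40))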

If you look at deep value stocks as a group, it’s a statistical fact that many will experience better revenues, earnings, or cash flows in the future than what is implied by their stock prices.  This is due largely to mean reversion.  The future economic performance of these deep value stocks will be closer to normal levels than their current economic performance.

Moreover, the stock price increases of the good future performers will outweigh the languishing stock prices of the poor future performers.  This causes deep value stocks, as a group, to outperform the market over time.

Two important notes:

  1. Generally, for deep value stocks, mean reversion implies a return to more normal levels of revenues, earnings, or cash flows.  It does not often imply growth above and beyond normal levels.
  2. For most deep value stocks, mean reversion relates to future economic performance and not to tangible asset value per se.

(1) Mean Reversion as Return to More Normal Levels

One of the best papers on deep value investing is by Josef Lakonishok, Andrei Shleifer, and Robert Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.”  Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

LSV (Lakonishok, Shleifer, and Vishny) correctly point out that deep value stocks are better identified by using more than one multiple.  LSV Asset Management currently manages $105 billion using deep value strategies that rely simultaneously on several metrics for cheapness, including low P/E and low P/CF.

  • In Quantitative Value (Wiley, 2012), Tobias Carlisle and Wesley Gray find that low EV/EBIT outperformed every other measure of cheapness, including composite measures.
  • However, James O’Shaughnessy, in What Works on Wall Street (McGraw-Hill, 2011), demonstrates – with great thoroughness – that, since the mid-1920s, composite approaches (low P/S, P/E, P/B, EV/EBITDA, P/FCF) have been the best performers.
  • Any single metric may be more easily arbitraged away by a powerful computerized approach.  Walter Schloss once commented that low P/B was working less well because many more investors were using it.  (In recent years, low P/B hasn’t worked.)
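
As a rough illustration of the composite idea, the sketch below averages each stock’s percentile rank across several multiples and sorts the universe from cheapest to most expensive.  This is only a toy version of a composite screen, not O’Shaughnessy’s or anyone else’s exact methodology, and the three stocks and their multiples are hypothetical.

    # Toy composite "cheapness" rank: average each stock's percentile rank across
    # several multiples (lower multiple = cheaper), then sort from cheapest to
    # most expensive.  Not any specific published methodology.

    def percentile_ranks(values):
        """Return each value's rank scaled to [0, 1], with 0 = lowest (cheapest)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0.0] * len(values)
        for position, i in enumerate(order):
            ranks[i] = position / (len(values) - 1)
        return ranks

    def composite_rank(stocks, metrics=("P/E", "P/B", "P/CF", "EV/EBIT")):
        """stocks: {ticker: {metric: value}}.  Returns tickers sorted cheapest first."""
        tickers = list(stocks)
        per_metric = [percentile_ranks([stocks[t][m] for t in tickers]) for m in metrics]
        average = {t: sum(col[i] for col in per_metric) / len(metrics)
                   for i, t in enumerate(tickers)}
        return sorted(tickers, key=lambda t: average[t])

    # Hypothetical three-stock universe
    universe = {
        "AAA": {"P/E": 6.0,  "P/B": 0.8, "P/CF": 4.0,  "EV/EBIT": 5.0},
        "BBB": {"P/E": 22.0, "P/B": 4.5, "P/CF": 18.0, "EV/EBIT": 16.0},
        "CCC": {"P/E": 11.0, "P/B": 1.6, "P/CF": 9.0,  "EV/EBIT": 8.0},
    }
    print(composite_rank(universe))   # cheapest first: ['AAA', 'CCC', 'BBB']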

LSV explain why mean reversion is the essence of deep value investing.  Investors, on average, are overly pessimistic about stocks at low multiples.  Investors underestimate the mean reversion in future economic performance for these out-of-favor stocks.

However, in my view, the paper would be clearer if it used (in some but not all places) “return to more normal levels of economic performance” in place of “growth.”  Often it’s a return to more normal levels of economic performance – rather than growth above and beyond normal levels – that defines mean reversion for deep value stocks.

(2) Revenues, Earnings, Cash Flows NOT Net Asset Values

Buying at a low price relative to tangible asset value is one way to implement a deep value investing strategy.  Many value investors have successfully used this approach.  Examples include Ben Graham, Walter Schloss, Peter Cundill, John Neff, and Marty Whitman.

Warren Buffett used this approach in the early part of his career.  Buffett learned this method from his teacher and mentor, Ben Graham.  Graham called this the “net-net” approach.  You take current assets and subtract ALL liabilities to get net current asset value.  If the stock price is below that level, and if you buy a basket of such “net-nets,” you can’t help but do well over time.  These are extremely cheap stocks, on average.  (The only catch is that there must be enough net-nets in existence to form a basket, which is not always the case.)
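
Here is a minimal sketch of the net-net test, using hypothetical figures: compute net current asset value (current assets minus all liabilities) and compare it with the market cap.  The two-thirds margin-of-safety threshold is the rule of thumb commonly associated with Graham; treat it as an assumption.

    # Sketch of the net-net test with hypothetical figures (all in millions).
    # NCAV = current assets minus all liabilities; the 2/3 threshold is the
    # classic margin-of-safety rule of thumb associated with Graham.

    def is_net_net(market_cap, current_assets, total_liabilities, threshold=2/3):
        ncav = current_assets - total_liabilities
        return ncav > 0 and market_cap < threshold * ncav

    # $90m of current assets, $50m of total liabilities, $24m market cap
    print(is_net_net(market_cap=24, current_assets=90, total_liabilities=50))   # True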

Buffett on “cigar butts”:

…I call it the cigar butt approach to investing.  You walk down the street and you look around for a cigar butt someplace.  Finally you see one and it is soggy and kind of repulsive, but there is one puff left in it.  So you pick it up and the puff is free – it is a cigar butt stock.  You get one free puff on it and then you throw it away and try another one.  It is not elegant.  But it works.  Those are low return businesses.

Link: http://intelligentinvestorclub.com/downloads/Warren-Buffett-Florida-Speech.pdf

But most net-nets are NOT liquidated.  Rather, there is mean reversion in their future economic performance – whether revenues, earnings, or cash flows.  That’s not to say there aren’t some bad businesses in this group.  For net-nets, when economic performance returns to more normal levels, typically you sell the stock.  You don’t (usually) buy and hold net-nets.

Sometimes net-nets are acquired.  But in many of these cases, the acquirer is focused mainly on the earnings potential of the assets.  (Non-essential assets may be sold, though.)

In sum, the specific deep value method of buying at a discount to net tangible assets has worked well in general ever since Graham started doing it.  And net tangible assets do offer additional safety.  That said, when these particular cheap stocks experience mean reversion, often it’s because revenues, earnings, or cash flows return to “more normal” levels.  Actual liquidation is rare.

 

IS DEEP VALUE RISKIER?

According to a study by Joseph Piotroski covering 1976 to 1996 – discussed below – although a basket of deep value stocks clearly beats the market over time, only 43% of deep value stocks outperform the market, while 57% underperform.  By comparison, an average stock has a 50% chance of outperforming the market and a 50% chance of underperforming.

Let’s assume that the average deep value stock has a 57% chance of underperforming the market, while an average stock has only a 50% chance of underperforming.  This is a realistic assumption not only because of Piotroski’s findings, but also because the average deep value stock is more likely to be distressed (or to have problems) than the average stock.

Does it follow that the reason deep value investing does better than the market over time is that deep value stocks are riskier than average stocks?

It is widely accepted that deep value investing does better than the market over time.  But there is still disagreement about how risky deep value investing is.  Strict believers in the EMH (Efficient Markets Hypothesis) – such as Eugene Fama and Kenneth French – argue that value investing must be unambiguously riskier than simply buying an S&P 500 Index fund.  On this view, the only way to do better than the market over time is by taking more risk.

Now, it is generally true that the average deep value stock is more likely to underperform the market than the average stock.  And the average deep value stock is more likely to be distressed than the average stock.

But LSV show that a deep value portfolio does better than an average portfolio, especially during down markets.  By that practical measure of risk – how a portfolio holds up in bad times – a basket of deep value stocks is less risky than a basket of average stocks.

  • A “portfolio” or “basket” of stocks refers to a group of stocks.  As a rule of thumb, the group should include at least 30 stocks for the statistics to be meaningful.  In the case of LSV’s study – like most academic studies of value investing – there are hundreds of stocks in the deep value portfolio.  (The results are similar over time whether you have 30 stocks or hundreds.)

Moreover, a deep value portfolio only has slightly more volatility than an average portfolio, not nearly enough to explain the significant outperformance.  In fact, when looked at more closely, deep value stocks as a group have slightly more volatility mainly because of upside volatility – relative to the broad market – rather than because of downside volatility.  This is captured not only by the clear outperformance of deep value stocks as a group over time, but also by the fact that deep value stocks do much better than average stocks in down markets.

Deep value stocks, as a group, not only outperform the market, but are less risky.  Ben Graham, Warren Buffett, and other value investors have been saying this for a long time.  After all, the lower the stock price relative to the value of the business, the less risky the purchase, on average.  Less downside implies more upside.

 

A LONG SERIES OF FAVORABLE BETS

Let’s continue to assume that the average deep value stock has a 57% chance of underperforming the market.  And the average deep value stock has a greater chance of being distressed than the average stock.  Does that mean that the average individual deep value stock is riskier than the average stock?

No, because the expected return on the average deep value stock is higher than the expected return on the average stock.  In other words, on average, a deep value stock has more upside than downside.

Put very crudely, in terms of expected value:

[(43% x upside) – (57% x downside)] > [avg. return]

43% times the upside, minus 57% times the downside, is greater than the return from the average stock (or from the S&P 500 Index).

The crucial issue relates to making a long series of favorable bets.  Since we’re talking about a long series of bets, let’s again consider a portfolio of stocks.

  • Recall that a “portfolio” or “basket” of stocks refers to a group of at least 30 stocks.

A portfolio of average stocks will simply match the market over time.  That’s an excellent result for most investors, which is why most investors should just invest in index funds: http://boolefund.com/warren-buffett-jack-bogle/

A portfolio of deep value stocks will, over time, do noticeably better than the market.  Year in and year out, approximately 57% of the deep value stocks will underperform the market, while 43% will outperform.  But the overall outperformance of the 43% will outweigh the underperformance of the 57%, especially over longer periods of time.  (57% and 43% are used for illustrative purposes here.  The actual percentages vary.)

Say that you have an opportunity to make the same bet 1,000 times in a row, and that the bet is as follows:  You bet $1.  You have a 60% chance of losing $1, and a 40% chance of winning $2.  This is a favorable bet because the expected value is positive: 40% x $2 = $0.80, while 60% x $1 = $0.60.  If you made this bet repeatedly over time, you would average $0.20 profit on each bet, since $0.80 – $0.60 = $0.20.

If you make this bet 1,000 times in a row, then roughly speaking, you will lose 60% of them (600 bets) and win 40% of them (400 bets).  But your profit will be about $200.  That’s because 400 x $2 = $800, while 600 x $1 = $600.  $800 – $600 = $200.
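
Here is a small simulation of that bet, repeated 1,000 times per series and averaged over many independent series.  Any single series can lose money, but the average converges toward the expected profit of roughly $200.

    # Simulation of the bet above: stake $1, 60% chance of losing $1,
    # 40% chance of winning $2, repeated 1,000 times per series.

    import random

    def run_series(n_bets=1000, p_win=0.40, win=2.0, lose=1.0, seed=None):
        rng = random.Random(seed)
        profit = 0.0
        for _ in range(n_bets):
            profit += win if rng.random() < p_win else -lose
        return profit

    # Average over many independent series; the result lands near the
    # expected $0.20 per bet, i.e. roughly $200 per 1,000 bets.
    trials = [run_series(seed=i) for i in range(2000)]
    print(sum(trials) / len(trials))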

Systematically investing in deep value stocks is similar to the bet just described.  You may lose 57% of the bets and win 43% of the bets.  But over time, you will almost certainly profit because the average upside is greater than the average downside.  Your expected return is also higher than the market return over the long term.

 

“CIGAR BUTTS” vs. SEE’S CANDIES

In his 1989 Letter to Shareholders, Buffett writes about his “Mistakes of the First Twenty-Five Years,” including a discussion of “cigar butt” (deep value) investing:

My first mistake, of course, was in buying control of Berkshire.  Though I knew its business – textile manufacturing – to be unpromising, I was enticed to buy because the price looked cheap.  Stock purchases of that kind had proved reasonably rewarding in my early years, though by the time Berkshire came along in 1965 I was becoming aware that the strategy was not ideal. 

If you buy a stock at a sufficiently low price, there will usually be some hiccup in the fortunes of the business that gives you a chance to unload at a decent profit, even though the long-term performance of the business may be terrible.  I call this the ‘cigar butt’ approach to investing.  A cigar butt found on the street that has only one puff left in it may not offer much of a smoke, but the ‘bargain purchase’ will make that puff all profit. 

Link: http://www.berkshirehathaway.com/letters/1989.html

Buffett has made it clear that cigar butt (deep value) investing does work.  In fact, fairly recently, Buffett bought a basket of cigar butts in South Korea.  The results were excellent.  But he did this in his personal portfolio.

This highlights a major reason why Buffett evolved from investing in cigar butts to investing in higher quality businesses:  size of investable assets.  When Buffett was managing a few hundred million dollars or less, which includes when he managed an investment partnership, Buffett achieved outstanding results in part by investing in cigar butts.  But when investable assets swelled into the billions of dollars at Berkshire Hathaway, Buffett began investing in higher quality companies.

  • Cigar butt investing works best for micro caps.  But micro caps won’t move the needle if you’re investing many billions of dollars.

The idea of investing in higher quality companies is simple:  If you can find a business with a sustainably high ROE – based on a sustainable competitive advantage – and if you can hold that stock for a long time, then your returns as an investor will approximate the ROE (return on equity).  This assumes that the company can continue to reinvest all of its earnings at the same ROE, which is extremely rare when you look at multi-decade periods.

  • The quintessential high-quality business that Buffett and Munger purchased for Berkshire Hathaway is See’s Candies.  They paid $25 million for $8 million in tangible assets in 1972.  Since then, See’s Candies has produced over $2 billion in (pre-tax) earnings, while only requiring a bit over $40 million in reinvestment.
  • See’s turns out more than $80 million in profits each year.  That’s over 100% ROE (return on equity), which is extraordinary.  But that’s based mostly on assets in place.  The company has not been able to reinvest most of its earnings.  Instead, Buffett and Munger have invested the massive excess cash flows in other good opportunities – averaging over 20% annual returns on these other investments (for most of the period from 1972 to present).

Furthermore, buying and holding stock in a high-quality business brings enormous tax advantages over time because you never have to pay taxes until you sell.  Thus, as a high-quality business – with sustainably high ROE – compounds value over many years, a shareholder who never sells receives the maximum benefit of this compounding.
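
A rough sketch of the deferral advantage, using assumed numbers (a 20% annual pre-tax return, a 20% capital-gains tax rate, and a 20-year holding period).  The rates are illustrative assumptions, not a tax analysis.

    # Illustration of the deferral advantage with assumed numbers: a 20% annual
    # pre-tax return and a 20% capital-gains tax, over 20 years.  The tax is paid
    # either every year (frequent selling) or once at the end (buy and hold).

    def taxed_annually(start=1.0, r=0.20, tax=0.20, years=20):
        value = start
        for _ in range(years):
            value += value * r * (1 - tax)      # each year's gain is taxed immediately
        return value

    def taxed_at_end(start=1.0, r=0.20, tax=0.20, years=20):
        value = start * (1 + r) ** years        # untaxed compounding
        return value - (value - start) * tax    # one tax bill on the final sale

    print(taxed_annually())   # roughly 19.5x the starting capital
    print(taxed_at_end())     # roughly 30.9x the starting capital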

Yet it’s extraordinarily difficult to find a business that can sustain ROE at over 20% – including reinvested earnings – for decades.  Buffett has argued that cigar butt (deep value) investing produces more dependable results than investing exclusively in high-quality businesses.  Very often investors buy what they think is a higher-quality business, only to find out later that they overpaid because the future performance does not match the high expectations that were implicit in the purchase price.  Indeed, this is what LSV show in their famous paper (discussed above) in the case of “glamour” (or “growth”) stocks.

 

MICROCAP CIGAR BUTTS

Buffett has said that you can do quite well as an investor, if you’re investing smaller amounts, by focusing on cheap micro caps.  In fact, Buffett has maintained that he could get 50% per year if he could invest only in cheap micro caps.

Investing systematically in cheap micro caps can often lead to higher long-term results than the majority of approaches that invest in high-quality stocks.

First, micro caps, as a group, far outperform every other category.  See the historical performance here: http://boolefund.com/best-performers-microcap-stocks/

Second, cheap micro caps do even better.  Systematically buying at low multiples works over the course of time, as clearly shown by LSV and many others.

Finally, if you apply the Piotroski F-Score to screen cheap micro caps for improving fundamentals, performance is further boosted:  The biggest improvements in performance are concentrated in cheap micro caps with no analyst coverage.  See: http://boolefund.com/joseph-piotroski-value-investing/
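
For reference, here is a minimal sketch of the nine F-Score signals as they are commonly described (one point each, for a score of 0 to 9).  The input field names are placeholders chosen for illustration, not any particular data vendor’s schema.

    # Sketch of the nine Piotroski F-Score signals (one point each, score 0-9).
    # 'cur' and 'prev' are dicts of this year's and last year's figures; the
    # field names are placeholders chosen for illustration.

    def f_score(cur, prev):
        roa      = cur["net_income"] / cur["total_assets"]
        roa_prev = prev["net_income"] / prev["total_assets"]
        score = 0
        score += roa > 0                                                   # 1. positive return on assets
        score += cur["cfo"] > 0                                            # 2. positive operating cash flow
        score += roa > roa_prev                                            # 3. improving ROA
        score += cur["cfo"] > cur["net_income"]                            # 4. cash flow exceeds earnings
        score += (cur["lt_debt"] / cur["total_assets"]
                  < prev["lt_debt"] / prev["total_assets"])                # 5. falling leverage
        score += (cur["current_assets"] / cur["current_liabilities"]
                  > prev["current_assets"] / prev["current_liabilities"])  # 6. rising current ratio
        score += cur["shares_out"] <= prev["shares_out"]                   # 7. no new shares issued
        score += cur["gross_margin"] > prev["gross_margin"]                # 8. rising gross margin
        score += (cur["revenue"] / cur["total_assets"]
                  > prev["revenue"] / prev["total_assets"])                # 9. rising asset turnover
        return int(score)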

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Lifelong Learning

(Image: Zen Buddha Silence, by Marilyn Barbone)

November 5, 2017

Lifelong learning—especially if pursued in a multidisciplinary fashion—can continuously improve your productivity and ability to think.  Lifelong learning boosts your capacity to serve others.

Robert Hagstrom’s wonderful book, Investing: The Last Liberal Art (Columbia University Press, 2013), is based on the notion of lifelong, multidisciplinary learning.

Ben Franklin was a strong advocate for this broad-based approach to education.  Charlie Munger—Warren Buffett’s business partner—wholeheartedly agrees with Franklin.  Hagstrom quotes Munger:

Worldly wisdom is mostly very, very simple.  There are a relatively small number of disciplines and a relatively small number of truly big ideas.  And it’s a lot of fun to figure out.  Even better, the fun never stops…

What I am urging on you is not that hard to do.  And the rewards are awesome… It’ll help you in business.  It’ll help you in law.  It’ll help you in life.  And it’ll help you in love… It makes you better able to serve others, it makes you better able to serve yourself, and it makes life more fun.

Hagstrom’s book is necessarily abbreviated.  This blog post even more so.  Nonetheless, I’ve tried to capture many of the chief lessons put forth by Hagstrom.

Here’s the outline:

  • A Latticework of Mental Models
  • Physics
  • Biology
  • Sociology
  • Psychology
  • Philosophy
  • Literature
  • Mathematics
  • Decision Making

(Image: Unfolding of the Mind, by Agsandrew)

 

A LATTICEWORK OF MENTAL MODELS

Charlie Munger has long maintained that in order to be able to solve a broad array of problems in life, you must have a latticework of mental models.  This means you have to master the central models from various areas—physics, biology, social studies, psychology, philosophy, literature, and mathematics.

As you assimilate the chief mental models, those models will strengthen and support one another, notes Hagstrom.  So when you make a decision—whether in investing or in any other area—that decision is more likely to be correct if multiple mental models have led you to the same conclusion.

Ultimately, a dedication to lifelong, multidisciplinary learning will make us better people—better leaders, citizens, parents, spouses, and friends.

In the summer of 1749, Ben Franklin put forward a proposal for the education of youth.  The Philadelphia Academy—later called the University of Pennsylvania—would stress both classical (“ornamental”) and practical education.  Hagstrom quotes Franklin:

As to their studies, it would be well if they could be taught everything that is useful and everything that is ornamental.  But art is long and their time is short.  It is therefore proposed that they learn those things that are likely to be most useful and most ornamental, regard being had to the several professions for which they are intended.

Franklin held that gaining the ability to think well required the study of philosophy, logic, mathematics, religion, government, law, chemistry, biology, health, agriculture, physics, and foreign languages.  Moreover, says Hagstrom, Franklin viewed the opportunity to study so many subjects as a wonderful gift rather than a burden.

(Painting by Mason Chamberlin (1762) – Philadelphia Museum of Art, via Wikimedia Commons)

Franklin himself was devoted to lifelong, multidisciplinary learning.  He remained open-minded and intellectually curious throughout his life.

Hagstrom also observes that innovation often depends on multidisciplinary thinking:

Innovative thinking, which is our goal, most often occurs when two or more mental models act in combination.

 

PHYSICS

Hagstrom remarks that the law of supply and demand in economics is based on the notion of equilibrium, a fundamental concept in physics.

(Research scientist writing physics diagrams and formulas, by Shawn Hempel)

Many historians consider Sir Isaac Newton to be the greatest scientific mind of all time, points out Hagstrom.  When he arrived at Trinity College at Cambridge, Newton had no mathematical training.  But the scientific revolution had already begun.  Newton was influenced by the ideas of Johannes Kepler, Galileo Galilei, and René Descartes.  Hagstrom:

The lesson Newton took from Kepler is one that has been repeated many times throughout history:  Our ability to answer even the most fundamental aspects of human existence depends largely on measuring instruments available at the time and the ability of scientists to apply rigorous mathematical reasoning to the data.

Galileo built a greatly improved telescope, and his astronomical observations supported the heliocentric model proposed by Nicolaus Copernicus over the geocentric model—first proposed by Aristotle and later developed by Ptolemy.  Moreover, Galileo developed the mathematical laws that describe and predict falling objects.

Hagstrom then explains the influence of Descartes:

Descartes promoted a mechanical view of the world.  He argued that the only way to understand how something works is to build a mechanical model of it, even if that model is constructed only in our imagination.  According to Descartes, the human body, a falling rock, a growing tree, or a stormy night all suggested that mechanical laws were at work.  This mechanical view provided a powerful research program for seventeenth century scientists.  It suggested that no matter how complex or difficult the observation, it was possible to discover the underlying mechanical laws to explain the phenomenon.

In 1665, due to the Plague, Cambridge was shut down.  Newton was forced to retreat to the family farm.  Hagstrom writes that, in quiet and solitude, Newton’s genius emerged:

His first major discovery was the invention of fluxions or what we now call calculus.  Next he developed the theory of optics.  Previously it was believed that color was a mixture of light and darkness.  But in a series of experiments using a prism in a darkened room, Newton discovered that light was made up of a combination of the colors of the spectrum.  The highlight of that year, however, was Newton’s discovery of the universal law of gravitation.

(Copy of painting by Sir Godfrey Kneller (1689), via Wikimedia Commons)

Newton’s three laws of motion unified Kepler’s planetary laws with Galileo’s laws of falling bodies.  It took time for Newton to state his laws with mathematical precision.  He waited some twenty years before finally publishing the Principia (Philosophiae Naturalis Principia Mathematica).

Newton’s three laws were central to a shift in worldview on the part of scientists.  The evolving scientific view held that the future could be predicted based on present data if scientists could discover the mathematical, mechanical laws underlying the data.

Prior to the scientific worldview, a mystery was often described as an unknowable characteristic of an “ultimate entity,” whether an “unmoved mover” or a deity.  Under the scientific worldview, a mystery is a chance to discover fundamental scientific laws.  The incredible progress of physics—which now includes quantum mechanics, relativity, and the Big Bang—has depended in part on the belief by scientists that reality is comprehensible.  Albert Einstein:

The most incomprehensible thing about the universe is that it is comprehensible.

Physics was—and is—so successful in explaining and predicting a wide range of phenomena that, not surprisingly, scientists from other fields have often wondered whether precise mathematical laws or ideas can be discovered to predict other types of phenomena.  Hagstrom:

In the nineteenth century, for instance, certain scholars wondered whether it was possible to apply the Newtonian vision to the affairs of men.  Adolphe Quetelet, a Belgian mathematician known for applying probability theory to social phenomena, introduced the idea of “social physics.”  Auguste Comte developed a science for explaining social organizations and for guiding social planning, a science he called sociology.  Economists, too, have turned their attention to the Newtonian paradigm and the laws of physics.

After Newton, scholars from many fields focused their attention on systems that demonstrate equilibrium (whether static or dynamic), believing that it is nature’s ultimate goal.  If any deviations in the forces occurred, it was assumed that the deviations were small and temporary—and the system would always revert back to equilibrium.

Hagstrom explains how the British economist Alfred Marshall adopted the concept of equilibrium in order to explain the law of supply and demand.  Hagstrom quotes Marshall:

When demand and supply are in stable equilibrium, if any accident should move the scale of production from its equilibrium position, there will instantly be brought into play forces tending to push it back to that position; just as, if a stone hanging by a string is displaced from its equilibrium position, the force of gravity will at once tend to bring it back to its equilibrium position.  The movements of the scale of production about its position of equilibrium will be of a somewhat similar kind.

(Alfred Marshall, via Wikimedia Commons)

Marshall’s Principles of Economics was the standard textbook until Paul Samuelson published Economics in 1948, says Hagstrom.  But the concept of equilibrium remained.  Firms seeking to maximize profits translate the preferences of households into products.  The logical structure of the exchange is a general equilibrium system, according to Samuelson.
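
As a toy illustration of the equilibrium idea Marshall borrowed from physics, the sketch below uses linear supply and demand curves (with arbitrary coefficients) and a simple price-adjustment rule: excess demand pulls the price up, excess supply pushes it down, and the price settles back at the point where the curves cross.

    # Toy equilibrium: linear supply and demand with arbitrary coefficients,
    # plus a simple price-adjustment rule that acts like Marshall's restoring force.

    def demand(p):  return 100 - 2.0 * p    # quantity demanded falls as price rises
    def supply(p):  return  10 + 1.0 * p    # quantity supplied rises with price

    p = 50.0                                # start away from equilibrium
    for _ in range(60):
        p += 0.1 * (demand(p) - supply(p))  # excess demand pulls price up, excess supply down

    print(round(p, 2))   # settles at the equilibrium price of 30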

Samuelson’s view of the stock market was influenced by the works of Louis Bachelier, Maurice Kendall, and Alfred Cowles, notes Hagstrom.

In 1932, Cowles founded the Cowles Commission for Research and Economics.  Later on, Cowles studied 6,904 predictions of the stock market from 1929 to 1944.  Cowles learned that no one had demonstrated any ability to predict the stock market.

Kendall, a professor of statistics at the London School of Economics, studied the histories of various individual stock prices going back fifty years.  Kendall was unable to find any patterns that would allow accurate predictions of future stock prices.

Samuelson thought that stock prices jump around because of uncertainty about how the businesses in question will perform in the future.  The intrinsic value of a given stock is determined by the future cash flow the business will produce.  But that future cash flow is unknown.

Bachelier’s work showed that the mathematical expectation of a speculator is zero, meaning that the current stock price is in equilibrium based on an equal number of buyers and sellers.
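
A toy simulation of Bachelier’s point (not his actual mathematics): if price changes are fair coin flips with zero drift, then a simple timing rule earns roughly nothing on average.

    # Toy version of Bachelier's point: with zero-drift, coin-flip price changes,
    # a simple timing rule ("hold only after an up move") earns about zero on average.

    import random

    def strategy_profit(n_days=250, seed=None):
        rng = random.Random(seed)
        profit, holding = 0.0, False
        for _ in range(n_days):
            move = rng.choice([1.0, -1.0])   # fair, zero-expectation price change
            if holding:
                profit += move
            holding = move > 0               # rule: hold tomorrow only after an up day
        return profit

    average = sum(strategy_profit(seed=i) for i in range(5000)) / 5000
    print(round(average, 2))   # close to zero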

Samuelson, building on Bachelier’s work, argued that properly anticipated prices fluctuate randomly.  From the assumption that market participants are rational, it followed that the current stock price is the best collective guess of the intrinsic value of the business—based on estimated future cash flows.

Eugene Fama later extended Samuelson’s view into what came to be called the Efficient Markets Hypothesis (EMH): stock prices fully reflect all available information, so it’s not possible—except by luck—for any individual investor to beat the market over the long term.

Many scientists have questioned the EMH.  The stock market sometimes does not seem rational.  People often behave irrationally.

In science, however, it’s not enough to show that the existing theory has obvious flaws.  In order to supplant existing scientific theory, scientists must come up with a better theory—one that better predicts the phenomena in question.  Rationalist economics, including EMH, is still the best approximation for a wide range of phenomena.

Some scientists are working with the idea of a complex adaptive system as a possible replacement for more traditional ideas of the stock market. Hagstrom:

Every complex adaptive system is actually a network of many individual agents all acting in parallel and interacting with one another.  The critical variable that makes a system both complex and adaptive is the idea that agents (neurons, ants, or investors) in the system accumulate experience by interacting with other agents and then change themselves to adapt to a changing environment.  No thoughtful person, looking at the present stock market, can fail to conclude that it shows all the traits of a complex adaptive system.  And this takes us to the crux of the matter.  If a complex adaptive system is, by definition, continuously adapting, it is impossible for any such system, including the stock market, ever to reach a state of perfect equilibrium.

It’s much more widely accepted today that people often do behave irrationally.  But Fama argues that an efficient market does not require perfect rationality or information.

Hagstrom concludes that, while the market is mostly efficient, rationalist economics is not the full answer.  There’s much more to the story, although it will take time to work out the details.

 

BIOLOGY

(Photo by Ben Schonewille)

Robert Darwin, a respected physician, enrolled his son Charles at the University of Edinburgh.  Robert wanted his son to study medicine.  But Charles had no interest.  Instead, he spent his time studying geology and collecting insects and specimens.

Robert realized his son wouldn’t become a doctor, so he sent Charles to Cambridge to study divinity.  Although Charles got a bachelor’s degree in theology, he formed some important connections with scientists, says Hagstrom:

The Reverend John Stevens Henslow, professor of botany, permitted the enthusiastic amateur to sit in on his lectures and to accompany him on his daily walks to study plant life.  Darwin spent so many hours in the professor’s company that he was known around the university as “the man who walks with Henslow.”

Later, Professor Henslow recommended Darwin for the position of naturalist on a naval expedition.  Darwin’s father objected, but Darwin’s uncle, Josiah Wedgwood II, intervened.  When the HMS Beagle set sail on December 27, 1831, from Plymouth, England, Charles Darwin was aboard.

Darwin’s most important observations happened at the Galapagos Islands, near the equator, six hundred miles west of Ecuador.  Hagstrom:

Darwin, the amateur geologist, knew that the Galapagos were classified as oceanic islands, meaning they had arisen from the sea by volcanic action with no life forms aboard.  Nature creates these islands and then waits to see what shows up.  An oceanic island eventually becomes inhabited but only by forms that can reach it by wings (birds) or wind (spores and seeds)…

Darwin was particularly fascinated by the presence of thirteen types of finches.  He first assumed these Galapagos finches, today called Darwin’s finches, were a subspecies of the South American finches he had studied earlier and had most likely been blown to sea in a storm.  But as he studied distribution patterns, Darwin observed that most islands in the archipelago carried only two or three types of finches; only the larger central islands showed greater diversification.  What intrigued him even more was that all the Galapagos finches differed in size and behavior.  Some were heavy-billed seedeaters; others were slender billed and favored insects.  Sailing through the archipelago, Darwin discovered that the finches on Hood Island were different from those on Tower Island and that both were different from those on Indefatigable Island.  He began to wonder what would happen if a few finches on Hood Island were blown by high winds to another island.  Darwin concluded that if the newcomers were pre-adapted to the new habitat, they would survive and multiply alongside the resident finches; if not, their number would ultimately diminish.  It was one thread of what would ultimately become his famous thesis.

(Galapagos Islands, Photo by Hugoht)

Hagstrom continues:

Reviewing his notes from the voyage, Darwin was deeply perplexed.  Why did the birds and tortoises on some islands of the Galapagos resemble the species found in South America while those on other islands did not?  This observation was even more disturbing when Darwin learned that the finches he brought back from the Galapagos belonged to different species and were not simply different varieties of the same species, as he had previously believed.  Darwin also discovered that the mockingbirds he had collected were three distinct species and the tortoises represented two species.  He began referring to these troubling questions as “the species problem,” and outlined his observations in a notebook he later entitled “Notebook on the Transmutation of the Species.”

Darwin now began an intense investigation into the species variation.  He devoured all the written work on the subject and exchanged voluminous correspondence with botanists, naturalists, and zookeepers—anyone who had information or opinions about species mutation.  What he learned convinced him that he was on the right track with his working hypothesis that species do in fact change, whether from place to place or from time period to time period.  The idea was not only radical at the time, it was blasphemous.  Darwin struggled to keep his work secret.

(Photo by Maull and Polyblank (1855), via Wikimedia Commons)

It took several years—until 1838—for Darwin to put together his hypothesis.  Darwin wrote in his notebook:

Being well-prepared to appreciate the struggle for existence which everywhere goes on from long-continued observation of the habits of animals and plants, it at once struck me that under these circumstances, favorable variations would tend to be preserved and unfavorable ones to be destroyed.  The result of this would be the formation of new species.  Here, then, I had at last got a theory—a process by which to work.

The struggle for survival was occurring not only between species, but also between individuals of the same species, Hagstrom points out.  Favorable variations are preserved.  After many generations, small gradual changes begin to add up to larger changes.  Evolution.

Darwin delayed publishing his ideas, perhaps because he knew they would be highly controversial, notes Hagstrom.  Finally, in 1859, Darwin published On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life.  The book sold out on its first day.  By 1872, The Origin of Species was in its sixth edition.

Hagstrom writes that in the first edition of Alfred Marshall’s famous textbook, Principles of Economics, the economist put the following on the title page:

Natura non facit saltum

Darwin himself used the same phrase—which means “nature does not make leaps”—in his book, The Origin of Species.  Although Marshall never explained his thinking explicitly, it seems Marshall meant to align his work with Darwinian thinking.

Less than two decades later, Austrian-born economist Joseph Schumpeter put forth his central idea of creative destruction.  Hagstrom quotes British economist Christopher Freeman, who—after studying Schumpeter’s life—remarked:

The central point of his whole life work is that capitalism can only be understood as an evolutionary process of continuous innovation and creative destruction.

Hagstrom explains:

Innovation, said Schumpeter, is the profitable application of new ideas, including products, production processes, supply sources, new markets, or new ways in which a company could be organized.  Whereas standard economic theory believed progress was a series of small incremental steps, Schumpeter’s theory stressed innovative leaps, which in turn caused massive disruption and discontinuity—an idea captured in Schumpeter’s famous phrase “the perennial gale of creative destruction.”

But all these innovative possibilities meant nothing without the entrepreneur who becomes the visionary leader of innovation.  It takes someone exceptional, said Schumpeter, to overcome the natural obstacles and resistance to innovation.  Without the entrepreneur’s desire and willingness to press forward, many great ideas could never be launched.

(Image from the Department of Economics, University of Freiburg, via Wikimedia Commons)

Moreover, Schumpeter held that entrepreneurs can thrive only in certain environments.  Property rights, a stable currency, and free trade are important.  And credit is even more important.

In the fall of 1987, a group of physicists, biologists, and economists held a conference at the Santa Fe Institute.  The economist Brian Arthur gave a presentation on “New Economics.”  A central idea was to apply the concept of complex adaptive systems to the science of economics.  Hagstrom records that the Santa Fe group isolated four features of the economy:

Dispersed interaction:  What happens in the economy is determined by the interactions of a great number of individual agents all acting in parallel.  The action of any one individual agent depends on the anticipated actions of a limited number of agents as well as on the system they cocreate.

No global controller:  Although there are laws and institutions, there is no one global entity that controls the economy.  Rather, the system is controlled by the competition and coordination between agents of the system.

Continual adaptation:  The behavior, actions, and strategies of agents, as well as their products and services, are revised continually on the basis of accumulated experience.  In other words, the system adapts.  It creates new products, new markets, new institutions, and new behavior.  It is an ongoing system.

Out-of-equilibrium dynamics:  Unlike the equilibrium models that dominate the thinking in classical economics, the Santa Fe group believed the economy, because of constant change, operates far from equilibrium.

Hagstrom argues that different investment or trading strategies throughout history have competed against one another.  Those that have most accurately predicted the future for various businesses and their associated stock prices have survived, while less profitable strategies have disappeared.

But in any given time period, once a specific strategy becomes profitable, then more money flows into it, which eventually makes it less profitable.  New strategies are then invented and compete against one another.  As a result, a new strategy becomes dominant and then the process repeats.

Thus, economies and markets evolve over time.  There is no stable equilibrium in a market except in the short term.  To go from the language of biology to the language of business, Hagstrom refers to three important books:

  • Creative Destruction: Why Companies That Are Built to Last Underperform the Market—and How to Successfully Transform Them, by Richard Foster and Sarah Kaplan of McKinsey & Company
  • The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, by Clayton Christensen
  • The Innovator’s Solution: Creating and Sustaining Successful Growth, by Clayton Christensen and Michael Raynor

Hagstrom sums up the lessons from biology as compared to the previous ideas from physics:

Indeed, the movement from the mechanical view of the world to the biological view of the world has been called the “second scientific revolution.”  After three hundred years, the Newtonian world, the mechanized world operating in perfect equilibrium, is now the old science.  The old science is about a universe of individual parts, rigid laws, and simple forces.  The systems are linear:  Change is proportional to the inputs.  Small changes end in small results, and large changes make for large results.  In the old science, the systems are predictable.

The new science is connected and entangled.  In the new science, the system is nonlinear and unpredictable, with sudden and abrupt changes.  Small changes can have large effects while large events may result in small changes.  In nonlinear systems, the individual parts interact and exhibit feedback effects that may alter behavior.  Complex adaptive systems must be studied as a whole, not in individual parts, because the behavior of the system is greater than the sum of the parts.

The old science was concerned with understanding the laws of being.  The new science is concerned with the laws of becoming.

(Photo by Isabellebonaire)

Hagstrom then quotes the last passage from Darwin’s The Origin of Species:

It is interesting to contemplate an entangled bank, clothed with many plants of many kinds, with birds singing on the bushes, with various insects flitting about, and with worms crawling through the damp earth, and to reflect that these elaborately constructed forms, so different from each other, and dependent on each other in so complex a manner, have all been produced by laws acting around us.  These laws, taken in the largest sense, being Growth with Reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the external conditions of life, and from use and disuse; a Ratio of Increase so high as to lead to a Struggle for Life, and as a consequence to Natural Selection, entailing divergence of Character and Extinction of less improved forms.  Thus, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of higher animals, directly follows.  There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.

 

SOCIOLOGY

Because significant increases in computer power are now making vast amounts of data about human behavior available, the social sciences may at some point get enough data to figure out more precisely and more generally the laws of human behavior.  But we’re not there yet.

(Auguste Comte, via Wikimedia Commons)

The nineteenth century—despite the French philosopher Auguste Comte’s efforts to establish one unified social science—ended with several distinct specialties, says Hagstrom, including economics, political science, and anthropology.

Scottish economist Adam Smith published his Wealth of Nations in 1776.  Smith argued for what is now called laissez-faire capitalism, or a system free from government interference, including industry regulation and protective tariffs.  Smith also held that a division of labor, with individuals specializing in various tasks, led to increased productivity.  This meant more goods at lower prices for consumers, but it also meant more wealth for the owners of capital.  And it implied that the owners of capital would try to limit the wages of labor.  Furthermore, working conditions would likely be bad without government regulation.

Predictably, political scientists appeared on the scene to study how the government should protect the rights of workers in a democracy.  Also, the property rights of owners of capital had to be protected.

Social psychologists studied how culture affects psychology, and how the collective mind affects culture.  Social biologists, meanwhile, sought to apply biology to the study of society, notes Hagstrom.  Recently scientists, including Edward O. Wilson, have introduced sociobiology, which involves the attempt to apply the scientific principles of biology to social development.

Hagstrom writes:

Although the idea of a unified theory of social science faded in the late nineteenth century, here at the beginning of the twenty-first century, there has been a growing interest in what we might think of as a new unified approach.  Scientists have now begun to study the behavior of whole systems—not only the behavior of individuals and groups but the interactions between them and the ways in which this interaction may in turn influence subsequent behavior.  Because of this reciprocal influence, our social system is constantly engaged in a socialization process the consequence of which not only alters our individual behavior but often leads to unexpected group behavior.

To explain the formation of a social group, the theory of self-organization has been developed.  Ilya Prigogine, the Russian-born Belgian chemist, was awarded the 1977 Nobel Prize in Chemistry for his work on the thermodynamics of self-organizing systems.

Paul Krugman, winner of the 2008 Nobel Prize for Economics, studied self-organization as applied to the economy.  Hagstrom:

Setting aside for the moment the occasional recessions and recoveries caused by exogenous events such as oil shocks or military conflicts, Krugman believes that economic cycles are in large part caused by self-reinforcing effects.  During a prosperous period, a self-reinforcing process leads to greater construction and manufacturing until the return on investment begins to decline, at which point an economic slump begins.  The slump in itself becomes a self-reinforcing effect, leading to lower production; lower production, in turn, will eventually cause return on investment to increase, which starts the process all over again.

Hagstrom notes that equity and debt markets are good examples of self-organizing, self-reinforcing systems.

If self-organization is the first characteristic of complex adaptive systems, then emergence is the second characteristic.  Hagstrom says that emergence refers to the way individual units—whether cells, neurons, or consumers—combine to create something greater than the sum of the parts.

(Collective Dynamics of Complex Systems, by Dr. Hiroki Sayama, via Wikimedia Commons)

One fascinating aspect of human collectives is that, in many circumstances—like finding the shortest way through a maze—the collective solution is better when there are both smart and not-so-smart individuals in the collective.  This more diverse collective outperforms a group that is composed only of smart individuals.

This implies that the stock market may more accurately aggregate information when the participants include many different types of people, such as smart and not-so-smart, long-term and short-term, and so forth, observes Hagstrom.

There are many areas where a group of people is actually smarter than the smartest individual in the group.  Hagstrom mentions that Francis Galton, the English Victorian-era polymath, wrote about a contest in which 787 people guessed at the weight of a large ox.  Most participants in the contest were not experts by any means, but ordinary people.  The ox actually weighed 1,198 pounds.  The average guess of the 787 guessers was 1,197 pounds, which was more accurate than the guesses made by the smartest and the most expert guessers.

This type of experiment can easily be repeated.  For example, take a jar filled with pennies, where only you know how many pennies are in the jar.  Pass the jar around in a group of people and ask each person—independently (with no discussion)—to write down their guess of how many pennies are in the jar.  In a group that is large enough, you will nearly always discover that the average guess is better than any individual guess.  (That’s been the result when I’ve performed this experiment in classes I’ve taught.)
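
A quick simulation of the penny-jar experiment makes the point.  It assumes the guesses are independent and unbiased, scattered randomly around the true count, which is exactly the condition discussed next.

    # Simulation of the jar-of-pennies experiment: many independent, unbiased,
    # noisy guesses around the true count.  The group average is usually far
    # closer to the truth than the typical individual guess.

    import random

    def experiment(true_count=1000, n_guessers=200, noise=0.30, seed=1):
        rng = random.Random(seed)
        guesses = [true_count * (1 + rng.uniform(-noise, noise)) for _ in range(n_guessers)]
        group_error = abs(sum(guesses) / len(guesses) - true_count)
        typical_individual_error = sum(abs(g - true_count) for g in guesses) / len(guesses)
        return group_error, typical_individual_error

    print(experiment())   # group error is a small fraction of the typical individual error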

In order for the collective to be that smart, the members must be diverse and the members’ guesses must be independent from one another.  So the stock market is efficient when these two conditions are satisfied.  But if there is a breakdown in diversity, or if individuals start copying one another too much—what Michael Mauboussin calls an information cascade—then you could have a boom, fad, fashion, or crash.

There are some situations where an individual can be impacted by the group.  Solomon Asch did a famous experiment in which the subject is supposed to match lines that have the same length.  It’s an easy question that every subject—if left alone—gets right.  But then Asch has seven out of eight participants deliberately choose the wrong answer, unbeknownst to the subject of the experiment, who is the eighth participant in the same room.  When this experiment was repeated many times, roughly one-third of the subjects gave the same answer as the group, even though this answer is obviously wrong.  Such can be the power of a group opinion.

Hagstrom then asks how crashes can happen.  Danish theoretical physicist Per Bak developed the notion of self-organized criticality.

According to Bak, large complex systems composed of millions of interacting parts can break down not only because of a single catastrophic event but also because of a chain reaction of smaller events.  To illustrate the concept of self-criticality, Bak often used the metaphor of a sand pile… Each grain of sand is interlocked in countless combinations.  When the pile has reached its highest level, we can say the sand is in a state of criticality.  It is just on the verge of becoming unstable.

(Computer Simulation of Bak-Tang-Wiesenfeld sandpile, with 28 million grains, by Claudio Rocchini, via Wikimedia Commons)

Adding one more grain starts an avalanche.  Bak and two colleagues applied this concept to the stock market.  They assumed that there are two types of agents, rational agents and noise traders.  Most of the time, the market is well-balanced.

But as stock prices climb, rational agents sell and leave the market, while more noise traders following the trend join.  When noise traders—trend followers—far outnumber rational agents, a bubble can form in the stock market.
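
For the curious, here is a compact sketch of the sandpile model Bak used as his metaphor (the Bak-Tang-Wiesenfeld rules): grains are dropped one at a time, and any cell holding four or more grains topples, passing one grain to each neighbor.  Most drops cause little or nothing; occasionally a single grain sets off a very large avalanche.

    # Compact sketch of the Bak-Tang-Wiesenfeld sandpile: drop grains one at a
    # time; any cell with 4 or more grains topples, giving one grain to each
    # neighbor, which can trigger further topplings (an avalanche).

    import random

    SIZE = 20
    grid = [[0] * SIZE for _ in range(SIZE)]

    def drop_grain(rng):
        """Add one grain at a random cell; return the number of topplings caused."""
        x, y = rng.randrange(SIZE), rng.randrange(SIZE)
        grid[x][y] += 1
        avalanche = 0
        unstable = [(x, y)] if grid[x][y] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue
            grid[i][j] -= 4
            avalanche += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < SIZE and 0 <= nj < SIZE:    # grains falling off the edge are lost
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        return avalanche

    rng = random.Random(0)
    sizes = [drop_grain(rng) for _ in range(20000)]
    print(max(sizes), sum(s == 0 for s in sizes))   # a few huge avalanches, many non-events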

 

PSYCHOLOGY

The psychologists Daniel Kahneman and Amos Tversky did research together for over two decades.  Kahneman was awarded the Nobel Prize in Economics in 2002.  Tversky would also have been named had he not passed away.

(Daniel Kahneman, via Wikimedia Commons)

Much of their groundbreaking research is contained in Judgment Under Uncertainty: Heuristics and Biases (1982).

Here you will find all the customary behavioral finance terms we have come to know and understand:  anchoring, framing, mental accounting, overconfidence, and overreaction bias.  But perhaps the most significant insight into individual behavior was loss aversion.

Kahneman and Tversky discovered that how choices are framed—combined with loss aversion—can materially impact how people make decisions.  For instance, in one of their well-known experiments, they asked people to choose between the following two options:

  • (a) Save 200 lives for sure.
  • (b) Have a one-third chance of saving 600 lives and a two-thirds chance of saving no one.

In this scenario, people overwhelmingly chose (a)—to save 200 lives for sure.  Kahneman and Tversky next asked the same people to choose between the following two options:

  • (a) Have 400 people die for sure.
  • (b) Have a two-thirds chance of 600 people dying and a one-third chance of no one dying.

In this scenario, people preferred (b)—a two-thirds chance of 600 people dying, and a one-third chance of no one dying.

But the two versions of the problem are identical.  The number of people saved in the first version equals the number of people who won’t die in the second version.

What Kahneman and Tversky had demonstrated is that people are risk averse when considering potential gains, but risk seeking when trying to avoid a certain loss.  This is the essence of prospect theory, which is captured in the following graph:

(Value function in Prospect Theory, drawing by Marc Rieger, via Wikimedia Commons)

Loss aversion refers to the fact that people weigh a potential loss about 2.5 times more than an equivalent gain.  That’s why the value function in the graph is steeper for losses.
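
For readers who want the formula behind the graph, the standard parametric form of the value function, with the parameter estimates Tversky and Kahneman published in 1992 (alpha = beta = 0.88, lambda = 2.25, close to the 2.5 figure above), looks like this:

    # Standard parametric form of the prospect-theory value function, with the
    # parameter estimates from Tversky and Kahneman (1992): alpha = beta = 0.88,
    # lambda = 2.25.

    ALPHA, BETA, LAMBDA = 0.88, 0.88, 2.25

    def value(x):
        """Subjective value of a gain (x > 0) or loss (x < 0) relative to a reference point."""
        if x >= 0:
            return x ** ALPHA
        return -LAMBDA * ((-x) ** BETA)

    print(value(100), value(-100))   # a $100 loss weighs about 2.25 times a $100 gain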

Richard Thaler and Shlomo Benartzi researched loss aversion by hypothesizing that the less frequently an investor checks the price of a stock he or she owns, the less likely the investor will be to sell the stock because of temporary downward volatility.  Thaler and Benartzi invented the term myopic loss aversion.

Hagstrom writes:

In my opinion, the single greatest psychological obstacle that prevents investors from doing well in the stock market is myopic loss aversion.  In my twenty-eight years in the investment business, I have observed firsthand the difficulty investors, portfolio managers, consultants, and committee members of large institutional funds have with internalizing losses (loss aversion), made all the more painful by tabulating losses on a frequent basis (myopic loss aversion).  Overcoming this emotional burden penalizes all but a very few select individuals.

Perhaps it is not surprising that the one individual who has mastered myopic loss aversion is also the world’s greatest investor—Warren Buffett…

Buffett understands that as long as the earnings of the businesses you own move higher over time, there’s no reason to worry about shorter term stock price volatility.  Because Berkshire Hathaway, Buffett’s investment vehicle, holds both public stocks and wholly owned private businesses, Buffett’s long-term outlook has been reinforced.  Hagstrom quotes Buffett:

I don’t need a stock price to tell me what I already know about value.

Hagstrom mentions Berkshire’s investment in The Coca-Cola Company (KO) in 1988.  Berkshire invested $1 billion, at that time the single largest investment Berkshire had ever made.  Over the ensuing decade, KO stock rose tenfold, while the S&P 500 Index roughly tripled.  But in four of those ten years, KO stock underperformed the market.  Trailing the market 40 percent of the time didn’t bother Buffett a bit.

As Hagstrom observes, Benjamin Graham—the father of value investing, and Buffett’s teacher and mentor—made a distinction between the investor focused on long-term business value and the speculator who tries to predict stock prices in the shorter term.  The true investor should never be concerned with shorter term stock price volatility.

(Ben Graham, Photo by Equim43, via Wikimedia Commons)

Hagstrom quotes Graham’s The Intelligent Investor:

The investor who permits himself to be stampeded or unduly worried by unjustified market declines in his holdings is perversely transforming his basic advantage into a basic disadvantage.  That man would be better off if his stocks had no market quotation at all, for he would then be spared the mental anguish caused him by another person’s mistakes of judgment.

Terence Odean, a behavioral economist, has done extensive research on the investment decisions of individuals and households.  Odean discovered that:

  • Many investors trade often—Odean found a 78 percent portfolio turnover ratio in his first study, which tracked 97,483 trades from ten thousand randomly selected accounts.
  • Over the subsequent four months, one year, and two years, the stocks that investors bought consistently trailed the market, while the stocks that investors sold beat the market.

Hagstrom mentions that people use mental models as a basis for understanding reality and making decisions.  But we tend to assume that each mental model we have is equally probable, rather than working to assign different probabilities to different models.

Moreover, people typically build mental models of what something is—what is true—rather than of what something is not—what is false.  Also, our mental models are usually quite incomplete.  And we tend to forget the details of our models as time passes.  Finally, writes Hagstrom, people tend to construct mental models based on superstition or unwarranted belief.

Hagstrom asks why people tend to be so gullible in general.  For instance, while there’s no evidence that market forecasts have any value, many otherwise intelligent people pay attention to them and even make decisions based on them.

The answer, states Hagstrom, is that we are wired to seek and to find patterns.  We have two basic mental systems, System 1 (intuition) and System 2 (reason).  System 1 operates automatically.  It takes mental shortcuts which often work fine, but not always.  System 1 is designed to find patterns.  And System 1 seeks confirming evidence for its hypotheses (patterns).

But even System 2—which humans can use to do math, logic, and statistics—uses a positive test strategy, meaning that it seeks confirming evidence for its hypotheses (patterns), rather than disconfirming evidence.

 

PHILOSOPHY

Hagstrom introduces the chapter:

A true philosopher is filled with a passion to understand, a process that never ends.

(Socrates, J. Aars Platon (1882), via Wikimedia Commons)

Metaphysics is one area of philosophy.  Aesthetics, ethics, and politics are other areas.  But Hagstrom focuses his discussion of philosophy on epistemology, the study of knowledge.

Having spent a few years studying the history and philosophy of science, I would say that epistemology includes the following questions:

  • What different kinds of knowledge can we have?
  • What constitutes scientific knowledge?
  • Is any part of our knowledge certain, or can all knowledge be improved indefinitely?
  • How does scientific progress happen?

In a sense, epistemology is thinking about thinking.  Epistemology also involves studying the history of science in detail, because humans have made enormous progress in generating scientific knowledge.

Studying epistemology can help us to become better, more rigorous, and more coherent thinkers, which can make us better investors.

Hagstrom makes it clear in the Preface that his book is necessarily abbreviated, otherwise it would have been a thousand pages long.  That said, had he been aware of Willard Van Orman Quine’s epistemology, Hagstrom likely would have mentioned it.

Here is a passage from Quine’s From A Logical Point of View:

The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges.  Or, to change the figure, total science is like a field of force whose boundary conditions are experience.  A conflict with experience at the periphery occasions readjustments in the interior of the field.  Truth values have to be redistributed over some of our statements.  Re-evaluation of some statements entails re-evaluation of others, because of their logical interconnections—the logical laws being in turn simply certain further statements of the system, certain further elements of the field.  Having re-evaluated one statement we must re-evaluate some others, which may be statements logically connected with the first or may be the statements of logical connections themselves.  But the total field is so underdetermined by its boundary conditions, experience, that there is much latitude of choice as to what statements to re-evaluate in the light of any single contrary experience.  No particular experiences are linked with any particular statements in the interior of the field, except indirectly through considerations of equilibrium affecting the field as a whole.

If this view is right, it is misleading to speak of the empirical content of an individual statement—especially if it is a statement at all remote from the experiential periphery of the field.  Furthermore it becomes folly to seek a boundary between synthetic statements, which hold contingently on experience, and analytic statements, which hold come what may.  Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system.  Even a statement very close to the periphery can be held true in the face of recalcitrant experience by pleading hallucination or by amending certain statements of the kind called logical laws.  Conversely, by the same token, no statement is immune to revision.  Revision even of the logical law of the excluded middle has been proposed as a means of simplifying quantum mechanics…

(Image by Dmytro Tolokonov)

Quine continues:

For vividness I have been speaking in terms of varying distances from a sensory periphery.  Let me now try to clarify this notion without metaphor.  Certain statements, though about physical objects and not sense experience, seem peculiarly germane to sense experience—and in a selective way: some statements to some experiences, others to others.  Such statements, especially germane to particular experiences, I picture as near the periphery.  But in this relation of “germaneness” I envisage nothing more than a loose association reflecting the relative likelihood, in practice, of our choosing one statement rather than another for revision in the event of recalcitrant experience.  For example, we can imagine recalcitrant experiences to which we would surely be inclined to accommodate our system by re-evaluating just the statement that there are brick houses on Elm Street, together with related statements on the same topic.  We can imagine other recalcitrant experiences to which we would be inclined to accommodate our system by re-evaluating just the statement that there are no centaurs, along with kindred statements.  A recalcitrant experience can, I have urged, be accommodated by any of various alternative re-evaluations in various alternative quarters of the total system; but, in the cases which we are now imagining, our natural tendency to disturb the total system as little as possible would lead us to focus our revisions upon these specific statements concerning brick houses or centaurs.  These statements are felt, therefore, to have a sharper empirical reference than highly theoretical statements of physics or logic or ontology.  The latter statements may be thought of as relatively centrally located within the total network, meaning merely that little preferential connection with any particular sense data obtrudes itself.

As an empiricist, I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience.  Physical objects are conceptually imported into the situation as convenient intermediaries—not by definition in terms of experience, but simply as irreducible posits comparable, epistemologically, to the gods of Homer.  For my part I do, qua lay physicist, believe in physical objects and not in Homer’s gods; and I consider it a scientific error to believe otherwise.  But in point of epistemological footing the physical objects and the gods differ only in degree and not in kind.  Both sorts of entities enter our conception only as cultural posits.  The myth of physical objects is epistemologically superior to most in that it has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience.

Physical objects, small and large, are not the only posits.  Forces are another example; and indeed we are told nowadays that the boundary between energy and matter is obsolete.  Moreover, the abstract entities which are the substance of mathematics—ultimately classes and classes of classes and so on up—are another posit in the same spirit.  Epistemologically these are posits on the same footing with physical objects and gods, neither better nor worse except for differences in the degree to which they expedite our dealings with sense experiences.

Historically, philosophers distinguished between “analytic” statements, which were thought to be true by definition, and “synthetic” statements, which were thought to be true on the basis of certain empirical data or experiences.  One of Quine’s chief points is that this distinction doesn’t hold.

Mathematics, logic, scientific theories, scientific laws, working hypotheses, ordinary language, and much else, including simple observations, are all part of science.  The goal of science—which extends common sense—is to predict various future experiences—including experiments—on the basis of past experiences.

When predictions—including experiments—don’t turn out as expected, then any part of the totality of science is revisable.  Often it makes sense to revise specific hypotheses, or specific statements that are close to experience.  But sometimes highly theoretical statements or ideas—including the laws of mathematics, the laws of logic, and the most well-established scientific laws—are revised in order to make the overall system of science work better, i.e., predict phenomena (future experiences) better, with more generality or with more exactitude.

The chief way scientists have made—and continue to make—progress is by testing predictions that are implied by existing scientific theory or law, or that are implied by new hypotheses under consideration.

(Top quark and anti top quark pair decaying into jets, Collider Detector at Fermilab, via Wikimedia Commons)

Because of recent advances in computing power and because of the explosion of shared knowledge, ideas, and experiments on the internet, scientific progress is probably happening much faster than ever before.  It’s a truly exciting time for all curious people and scientists.  And once artificial intelligence passes the singularity threshold, scientific progress is likely to skyrocket, even beyond what we can imagine.

 

LITERATURE

Critical reading is a crucial part of becoming a better thinker.

(Photo by VijayGES2, via Wikimedia Commons)

One excellent book about how to read analytically is How to Read a Book, by Mortimer J. Adler.  The goal of analytical reading is to improve your understanding—as opposed to only gaining information.  To this end, Adler suggests active readers keep the following four questions in mind:

  • What is the book about as a whole?
  • What is being said in detail?
  • Is the book true, in whole or part?
  • What of it?

Before deciding to read a book in detail, it can be helpful to read the preface, table of contents, index, and bibliography.  Also, read a few paragraphs at random.  These steps will help you to get a sense of what the book is about as a whole.  Next, you can skim the book to learn more about what is being said in detail, and whether it’s worth reading the entire book carefully.

Then, if you decide to read the entire book carefully, you should approach it like you would approach assigned reading for a university class.  Figure out the main points and arguments.  Take notes if that helps you learn.  The goal is to understand the author’s chief arguments, and whether—or to what extent—those arguments are true.

The final step is comparative reading, says Hagstrom.  Adler considers this the hardest step.  Here the goal is to learn about a specific subject.  You want to determine which books on the subject are worth reading, and then compare and contrast these books.

Hagstrom points out that the three greatest detectives in fiction are Auguste Dupin, Sherlock Holmes, and Father Brown.  We can learn much from studying the stories involving these sleuths.

Auguste Dupin was created by Edgar Allan Poe.  Hagstrom remarks that we can learn the following from Dupin’s methods:

  • Develop a skeptic’s mindset; don’t automatically accept conventional wisdom.
  • Conduct a thorough investigation.

Sherlock Holmes was created by Sir Arthur Conan Doyle.

(Illustration by Sidney Paget (1891), via Wikimedia Commons)

From Holmes, we can learn the following, says Hagstrom:

  • Begin an investigation with an objective and unemotional viewpoint.
  • Pay attention to the tiniest details.
  • Remain open-minded to new, even contrary, information.
  • Apply a process of logical reasoning to all you learn.

Father Brown was created by G. K. Chesterton.  From Father Brown, we can learn:

  • Become a student of psychology.
  • Have faith in your intuition.
  • Seek alternative explanations and redescriptions.

Hagstrom ends the chapter by quoting Charlie Munger:

I believe in… mastering the best that other people have figured out [rather than] sitting down and trying to dream it up yourself… You won’t find it that hard if you go at it Darwinlike, step by step with curious persistence.  You’ll be amazed at how good you can get… It’s a huge mistake not to absorb elementary worldly wisdom… Your life will be enriched—not only financially but in a host of other ways—if you do.

 

MATHEMATICS

Hagstrom quotes Warren Buffett:

…the formula for valuing ALL assets that are purchased for financial gain has been unchanged since it was first laid out by a very smart man in about 600 B.C.E.  The oracle was Aesop and his enduring, though somewhat incomplete, insight was “a bird in the hand is worth two in the bush.”  To flesh out this principle, you must answer only three questions.  How certain are you that there are indeed birds in the bush?  When will they emerge and how many will there be?  What is the risk-free interest rate?  If you can answer these three questions, you will know the maximum value of the bush—and the maximum number of birds you now possess that should be offered for it.  And, of course, don’t literally think birds.  Think dollars.

Hagstrom explains that it’s the same formula whether you’re evaluating stocks, bonds, manufacturing plants, farms, oil royalties, or lottery tickets.  As long as you have the numbers needed for the calculation, the attractiveness of all investment opportunities can be evaluated and compared.

So to value any business, you have to estimate the future cash flows of the business, and then discount those cash flows back to the present.  This is the DCF—discounted cash flow—method for determining the value of a business.

Although Aesop gave the general idea, John Burr Williams, in The Theory of Investment Value (1938), was the first to explain the DCF approach explicitly.  Williams had studied mathematics and chemistry as an undergraduate at Harvard University.  After working as a securities analyst, Williams returned to Harvard to get a PhD in economics.  The Theory of Investment Value was Williams’ dissertation.
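
Here is a minimal sketch of the DCF calculation.  All the numbers below (the cash flows, the 9 percent discount rate, the 2 percent terminal growth rate) are hypothetical, chosen only to illustrate the mechanics.

```python
# Minimal DCF sketch: discount a stream of estimated future cash flows
# back to the present.  All figures below are hypothetical.

def discounted_cash_flow(cash_flows, discount_rate, terminal_growth=0.0):
    """Present value of a list of annual cash flows plus a terminal value."""
    pv = sum(cf / (1 + discount_rate) ** t
             for t, cf in enumerate(cash_flows, start=1))
    # Gordon-growth terminal value based on the final year's cash flow
    terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(cash_flows)
    return pv

# A business expected to produce $100, $110, ... $140 over five years,
# discounted at 9% with 2% growth thereafter:
flows = [100, 110, 120, 130, 140]
print(round(discounted_cash_flow(flows, 0.09, 0.02)))
```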

Hagstrom writes that in 1654, the Chevalier de Méré, a French nobleman who liked to gamble, asked the mathematician Blaise Pascal the following question: “How do you divide the stakes of an unfinished game of chance when one of the players is ahead?”

(Photo by Rossapicci, via Wikimedia Commons)

Pascal was a child prodigy and a brilliant mathematician.  To help answer de Méré’s question, Pascal turned to Pierre de Fermat, a lawyer who was also a brilliant mathematician.  Hagstrom reports that Pascal and Fermat exchanged a series of letters which are the foundation of what is now called probability theory.

There are two broad categories of probabilities:

  • frequency probability
  • subjective probability

A frequency probability typically refers to a system that can generate a great deal of statistical data over time.  Examples include coin flips, roulette wheels, cards, and dice, notes Hagstrom.  For instance, if you flip a coin 1,000 times, you expect to get heads about 50 percent of the time.  If you roll a six-sided die 1,000 times, you expect to get each number about 16.67 percent of the time.
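
A frequency probability can be checked directly by simulation.  Here is a short sketch using the die-rolling example above:

```python
# Frequency probability by simulation: roll a six-sided die 1,000 times
# and compare observed frequencies to the theoretical 1/6 (about 16.67%).
import random
from collections import Counter

rolls = Counter(random.randint(1, 6) for _ in range(1000))
for face in range(1, 7):
    print(f"{face}: {rolls[face] / 1000:.1%}  (theoretical 16.7%)")
```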

If you don’t have a sufficient frequency of events, plus a long time period to analyze results, then you must rely on a subjective probability.  A subjective probability, says Hagstrom, is often a reasonable assessment made by a knowledgeable person.  It’s a best guess based on a logical analysis of the given data.

When using a subjective probability, obviously you want to make sure you have all the available data that could be relevant.  And clearly you have to use logic correctly.

But the key to using a subjective probability is to update your beliefs as you gain new data.  The proper way to update your beliefs is by using Bayes’ Rule.

(Thomas Bayes, via Wikimedia Commons)

Bayes’ Rule

Eliezer Yudkowsky of the Machine Intelligence Research Institute provides an excellent intuitive explanation of Bayes’ Rule:  http://www.yudkowsky.net/rational/bayes

Yudkowsky begins by discussing a situation that doctors often encounter:

1% of women at age forty who participate in routine screening have breast cancer.  80% of women with breast cancer will get positive mammographies.  9.6% of women without breast cancer will also get positive mammographies.  A woman in this age group had a positive mammography in a routine screening.  What is the probability that she actually has breast cancer?

Most doctors estimate the probability between 70% and 80%, which is wildly incorrect.

In order to arrive at the correct answer, Yudkowsky asks us to think of the question as follows.  We know that 1% of women at age forty who participate in routine screening have breast cancer.  So consider 10,000 women who participate in routine screening:

  • Group 1: 100 women with breast cancer.
  • Group 2: 9,900 women without breast cancer.

After the mammography, the women can be divided into four groups:

  • Group A:  80 women with breast cancer, and a positive mammography.
  • Group B:  20 women with breast cancer, and a negative mammography.
  • Group C:  950 women without breast cancer, and a positive mammography.
  • Group D:  8,950 women without breast cancer, and a negative mammography.

So the question again:  If a woman out of this group of 10,000 women has a positive mammography, what is the probability that she actually has breast cancer?

The total number of women who had positive mammographies is 80 + 950 = 1,030.  Of that total, 80 women had positive mammographies AND have breast cancer.  In looking at the total number of positive mammographies (1,030), we know that 80 of them actually have breast cancer.

So if a woman out of the 10,000 has a positive mammography, the chance that she actually has breast cancer is 80/1,030 = 0.07767, or about 7.8%.

That’s the intuitive explanation.  Now let’s look at Bayes’ Rule:

P(A|B) = [P(B|A) P(A)] / P(B)

Let’s apply Bayes’ Rule to the same question above:

1% of women at age forty who participate in routine screening have breast cancer.  80% of women with breast cancer will get positive mammographies.  9.6% of women without breast cancer will also get positive mammographies.  A woman in this age group had a positive mammography in a routine screening.  What is the probability that she actually has breast cancer?

P(A|B) = the probability that the woman has breast cancer (A), given a positive mammography (B)

Here is what we know:

P(B|A) = 80% – the probability of a positive mammography (B), given that the woman has breast cancer (A)

P(A) = 1% – the probability that a woman out of the 10,000 screened actually has breast cancer

P(B) = (80+950) / 10,000 = 10.3% – the probability that a woman out of the 10,000 screened has a positive mammography

Bayes’ Rule again:

P(A|B) = [P(B|A) P(A)] / P(B)

P(A|B) = [0.80*0.01] / 0.103 = 0.008 / 0.103 = 0.07767 or 7.8%
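
The same arithmetic can be packaged into a few lines of code.  This is just a sketch of the standard formula, using the percentages from Yudkowsky’s example:

```python
# Bayes' Rule applied to Yudkowsky's mammography example.

def bayes(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | evidence) via Bayes' Rule."""
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / p_evidence

# 1% base rate, 80% true-positive rate, 9.6% false-positive rate
posterior = bayes(prior=0.01, p_evidence_given_h=0.80, p_evidence_given_not_h=0.096)
print(f"{posterior:.1%}")   # about 7.8%
```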

Derivation of Bayes’ Rule:

Bayesians regard conditional probabilities as more basic than joint probabilities.  Either way, Bayes’ Rule follows directly from the relationship between conditional and joint probability.  To see this, start with the conditional probability formula:

P(A|B) P(B) = P(A,B)

but by symmetry you get:

P(B|A) P(A) = P(A,B)

It follows that:

P(A|B) = [P(B|A) P(A)] / P(B)

which is Bayes’ Rule.

In conclusion, Hagstrom makes the important observation that there is much we still don’t know about nature and about ourselves.  (The question mark below is by Khaydock, via Wikimedia Commons.)

Nothing is absolutely certain.

One clear lesson from history—whether the history of investing, the history of science, or some other area—is that very often people who are “absolutely certain” about something turn out to be wrong.

Economist and Nobel laureate Kenneth Arrow:

Our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness.  Vast ills have followed a belief in certainty.

Investor and author Peter Bernstein:

The recognition of risk management as a practical art rests on a simple cliché with the most profound consequences:  when our world was created, nobody remembered to include certainty.  We are never certain;  we are always ignorant to some degree.  Much of the information we have is either incorrect or incomplete.

 

DECISION MAKING

Take a few minutes and try answering these three problems:

  • A bat and a ball cost $1.10.  The bat costs one dollar more than the ball.  How much does the ball cost?
  • If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
  • In a lake, there is a patch of lily pads.  Every day the patch doubles in size.  If it takes 48 days for the patch to cover the entire lake, how long will it take for the patch to cover half the lake?

When Shane Frederick, then an assistant professor of management science at MIT, posed these questions to students at elite universities, roughly 75 percent of the Princeton and Harvard students got at least one problem wrong.  The three questions form the Cognitive Reflection Test, which Frederick invented.

Recall that System 1 (intuition) is quick, associative, and operates automatically all the time.  System 2 (reason) is slow and effortful—it requires conscious activation and sustained focus—and it can learn to solve problems involving math, statistics, or logic.

To understand the mental mistake that many people—including smart people—make, let’s consider the first of the three questions:

  • A bat and a ball cost $1.10.  The bat costs one dollar more than the ball.  How much does the ball cost?

After we read the question, our System 1 (intuition) immediately suggests to us that the bat costs $1.00 and the ball costs 10 cents.  But if we slow down just a moment and engage System 2, we realize that if the bat costs $1.00 and the ball costs 10 cents, then the bat costs only 90 cents more than the ball.  This violates the condition stated in the problem that the bat costs one dollar more than the ball.  If we think a bit more, we see that the bat must cost $1.05 and the ball must cost 5 cents.

System 1 takes mental shortcuts, which often work fine.  But when we encounter a problem that requires math, statistics, or logic, we have to train ourselves to slow down and to think through the problem.  If we don’t slow down in these situations, we’ll often jump to the wrong conclusion.
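
Forcing yourself to write out the arithmetic is one way to engage System 2.  Here is a quick check of all three answers:

```python
# Checking the three Cognitive Reflection Test answers with System-2 arithmetic.

# 1) Bat and ball: ball + (ball + 1.00) = 1.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
print(f"ball costs ${ball:.2f}, bat costs ${ball + 1.00:.2f}")

# 2) Each machine makes one widget in 5 minutes, so 100 machines
#    make 100 widgets in the same 5 minutes.
minutes_per_widget_per_machine = 5
print(f"{minutes_per_widget_per_machine} minutes")

# 3) If the lily patch doubles daily and covers the lake on day 48,
#    it covered half the lake one doubling earlier: day 47.
print(f"day {48 - 1}")
```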

(Cognitive Bias Codex, by John Manoogian III, via Wikimedia Commons.  For a closer look, try this link: https://upload.wikimedia.org/wikipedia/commons/1/18/Cognitive_Bias_Codex_-_180%2B_biases%2C_designed_by_John_Manoogian_III_%28jm3%29.jpg)

It’s possible to train your intuition under certain conditions, according to Daniel Kahneman.  Hagstrom:

Kahneman believes there are indeed cases where intuitive skill reveals the answer, but that such cases are dependent on two conditions.  First, “the environment must be sufficiently regular to be predictable”; second, there must be an “opportunity to learn these regularities through prolonged practice.”  For familiar examples, think about the games of chess, bridge, and poker.  They all occur in regular environments, and prolonged practice at them helps people develop intuitive skill.  Kahneman also accepts the idea that army officers, firefighters, physicians, and nurses can develop skilled intuition largely because they all have had extensive experience in situations that, while obviously dramatic, have been repeated many times over.

Kahneman concludes that intuitive skill exists mostly in people who operate in simple, predictable environments and that people in more complex environments are much less likely to develop this skill.  Kahneman, who has spent much of his career studying clinicians, stock pickers, and economists, notes that evidence of intuitive skill is largely absent in this group.  Put differently, intuition appears to work well in linear systems where cause and effect is easy to identify.  But in nonlinear systems, including stock markets and economies, System 1 thinking, the intuitive side of our brain, is much less effectual.

Experts in fields such as investing, economics, and politics have, in general, not demonstrated the ability to make accurate forecasts or predictions with any reliable consistency.

Philip Tetlock, professor of psychology at the University of Pennsylvania, tracked 284 experts over fifteen years (1988-2003) as they made 27,450 forecasts.  The results were no better than those of “dart-throwing chimpanzees,” as Tetlock describes in Expert Political Judgment: How Good Is It? How Can We Know? (Princeton University Press, 2005).

Hagstrom explains:

It appears experts are penalized, like the rest of us, by thinking deficiencies.  Specifically, experts suffer from overconfidence, hindsight bias, belief system defenses, and lack of Bayesian process.

Hagstrom then refers to an essay by Sir Isaiah Berlin, “The Hedgehog and the Fox: An Essay on Tolstoy’s View of History.”  Hedgehogs view the world using one large idea, while Foxes are skeptical of grand theories and instead consider a variety of information and experiences before making decisions.

(Photo of Hedgehog, by Nevit Dilmen, via Wikimedia Commons)

Tetlock found that Foxes, on the whole, were much more accurate than Hedgehogs.  Hagstrom:

Why are hedgehogs penalized?  First, because they have a tendency to fall in love with pet theories, which gives them too much confidence in forecasting events.  More troubling, hedgehogs were too slow to change their viewpoint in response to disconfirming evidence.  In his study, Tetlock said Foxes moved 59 percent of the prescribed amount toward alternate hypotheses, while Hedgehogs moved only 19 percent.  In other words, Foxes were much better at updating their Bayesian inferences than Hedgehogs.

Unlike Hedgehogs, Foxes appreciate the limits of their knowledge.  They have better calibration and discrimination scores than Hedgehogs.  (Calibration, which can be thought of as intellectual humility, measures how much your subjective probabilities correspond to objective probabilities.  Discrimination, sometimes called justified decisiveness, measures whether you assign higher probabilities to things that occur than to things that do not.)
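
As a rough illustration of how such scores might be computed from a forecaster’s track record, here is a simplified sketch.  Tetlock’s actual scoring rules are more elaborate; the bucketing scheme and the forecasts below are hypothetical.

```python
# Simplified sketch of calibration and discrimination scores for a set of
# probability forecasts (Tetlock's actual scoring rules are more elaborate).
from collections import defaultdict

def calibration_and_discrimination(forecasts, outcomes):
    """forecasts: probabilities in [0,1]; outcomes: 1 if the event occurred, else 0."""
    # Calibration: within each forecast bucket, how far the stated probability
    # is from the observed frequency (a smaller gap means better calibration).
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[round(p, 1)].append(o)
    calibration_gap = sum(abs(p - sum(os) / len(os)) * len(os)
                          for p, os in buckets.items()) / len(forecasts)

    # Discrimination: average probability assigned to events that happened
    # minus average assigned to events that did not (larger = more decisive).
    hits = [p for p, o in zip(forecasts, outcomes) if o == 1]
    misses = [p for p, o in zip(forecasts, outcomes) if o == 0]
    discrimination = sum(hits) / len(hits) - sum(misses) / len(misses)
    return calibration_gap, discrimination

gap, disc = calibration_and_discrimination([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
print(f"calibration gap: {gap:.2f}, discrimination: {disc:.2f}")
```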

(Photo of Fox, by Alan D. Wilson, via Wikimedia Commons)

Hagstrom comments that Foxes have three distinct cognitive advantages, according to Tetlock:

  • They begin with “reasonable starter” probability estimates.  They have better “inertial-guidance” systems that keep their initial guesses closer to short-term base rates.
  • They are willing to acknowledge their mistakes and update their views in response to new information.  They have a healthy Bayesian process.
  • They can see the pull of contradictory forces, and, most importantly, they can appreciate relevant analogies.

Hagstrom concludes that the Fox “is the perfect mascot for The College of Liberal Arts Investing.”

Many people with high IQs have difficulty making rational decisions.  Keith Stanovich, professor of applied psychology at the University of Toronto, coined the term dysrationalia to refer to the inability to think and behave rationally despite high intelligence, remarks Hagstrom.  There are two principal causes of dysrationalia:

  • a processing problem
  • a content problem

Stanovich explains that people are lazy thinkers in general, preferring to think in ways that require less effort, even if those methods are less accurate.  As we’ve seen, System 1 operates automatically, with little or no effort.  Its conclusions are often correct.  But when the situation calls for careful reasoning, the shortcuts of System 1 don’t work.

Lack of adequate content is a mindware gap, says Hagstrom.  Mindware refers to rules, strategies, procedures, and knowledge that people possess to help solve problems.  Harvard cognitive scientist David Perkins coined the term mindware.  Hagstrom quotes Perkins:

What is missing is the metacurriculum—the ‘higher order’ curriculum that deals with good patterns of thinking in general and across subject matters.

Perkins’ proposed solution is a mindware booster shot, which means teaching new concepts and ideas in “a deep and far-reaching way,” connected with several disciplines.

Of course, Hagstrom’s book, Investing: The Last Liberal Art, is a great example of a mindware booster shot.

 

Hagstrom concludes by stressing the vital importance of lifelong, continuous learning.  Buffett and Munger have always highlighted this as a key to their success.

(Statue of Ben Franklin in front of College Hall, Philadelphia, Pennsylvania, Photo by Matthew Marcucci, via Wikimedia Commons)

Hagstrom:

Although the greatest number of ants in a colony will follow the most intense pheromone trail to a food source, there are always some ants that are randomly seeking the next food source.  When Native Americans were sent out to hunt, most of those in the party would return to the proven hunting grounds.  However, a few hunters, directed by a medicine man rolling spirit bones, were sent in different directions to find new herds.  The same was true of Norwegian fishermen.  Each day most of the ships in the fleet returned to the same spot where the previous day’s catch had yielded the greatest bounty, but a few vessels were also sent in random directions to locate the next school of fish.  As investors, we too must strike a balance between exploiting what is most obvious while allocating some mental energy to exploring new possibilities.

Hagstrom adds:

The process is similar to genetic crossover that occurs in biological evolution.  Indeed, biologists agree that genetic crossover is chiefly responsible for evolution.  Similarly, the constant recombination of our existing mental building blocks will, over time, be responsible for the greatest amount of investment progress.  However, there are occasions when a new and rare discovery opens up new opportunities for investors.  In much the same way that mutation can accelerate the evolutionary process, so too can new ideas speed us along in our understanding of how markets work.  If you are able to discover a new building block, you have the potential to add another level to your model of understanding.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Second Machine Age

(Image:  Zen Buddha Silence by Marilyn Barbone.)

October 15, 2017

Erik Brynjolfsson and Andrew McAfee are the authors of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (Norton, 2014).  It’s one of the best books I’ve read in the past few years.

The second machine age is going to bring enormous progress to economies and societies.  Total wealth – whether defined narrowly or much more broadly – will increase significantly.  But we do have to ensure that political and social structures are properly adjusted so that everyone can benefit from massive technological progress.

The first six chapters – starting with The Big Stories and going through Artificial and Human Intelligence in the Second Machine Age – give many examples of recent technological progress.

The five subsequent chapters – four of which are covered here – discuss the bounty and the spread.

Bounty is the increase in volume, variety, and quality and the decrease in cost of the many offerings brought on by modern technological progress.  It’s the best economic news in the world today.

Spread is differences among people in economic success.

The last four chapters discuss what interventions could help maximize the bounty while mitigating the effects of the spread.

Here are the chapters covered:

  • The Big Stories
  • The Skills of the New Machines:  Technology Races Ahead
  • Moore’s Law and the Second Half of the Chessboard
  • The Digitization of Just About Everything
  • Innovation:  Declining or Recombining?
  • Artificial and Human Intelligence in the Second Machine Age
  • Computing Bounty
  • Beyond GDP
  • The Spread
  • Implications of the Bounty and the Spread
  • Learning to Race With Machines:  Recommendations for Individuals
  • Policy Recommendations
  • Long-term Recommendations
  • Technology and the Future

 

THE BIG STORIES

Freeman Dyson:

Technology is a gift of God.  After the gift of life, it is perhaps the greatest of God’s gifts.  It is the mother of civilizations, of arts and of sciences.

James Watt’s brilliant tinkering with the steam engine in 1775 and 1776 was central to the Industrial Revolution:

The Industrial Revolution, of course, is not only the story of steam power, but steam started it all.  More than anything else, it allowed us to overcome the limitations of muscle power, human and animal, and generate massive amounts of useful energy at will.  This led to factories and mass production, to railways and mass transportation.  It led, in other words, to modern life.  The Industrial Revolution ushered in humanity’s first machine age – the first time our progress was driven primarily by technological innovation – and it was the most profound time of transformation the world has ever seen.  (pages 6-7)

(Note that Brynjolfsson and McAfee refer to the Industrial Revolution as “the first machine age.”  And they refer to the late nineteenth and early twentieth century as the Second Industrial Revolution.)

Brynjolfsson and McAfee continue:

Now comes the second machine age.  Computers and other digital advances are doing for mental power – the ability to use our brains to understand and shape our environments – what the steam engine and its descendants did for muscle power.  They’re allowing us to blow past previous limitations and taking us into new territory.  How exactly this transition will play out remains unknown, but whether or not the new machine age bends the curve as dramatically as Watt’s steam engine, it is a very big deal indeed.  This book explains how and why.

For now, a very short and simple answer:  mental power is at least as important for progress and development – for mastering our physical and intellectual environment to get things done – as physical power.  So a vast and unprecedented boost to mental power should be a great boost to humanity, just as the earlier boost to physical power so clearly was.  (pages 7-8)

Brynjolfsson and McAfee admit that recent technological advances surpassed their expectations:

We wrote this book because we got confused.  For years we have studied the impact of digital technologies like computers, software, and communications networks, and we thought we had a decent understanding of their capabilities and limitations.  But over the past few years they started surprising us.  Computers started diagnosing diseases, listening and speaking to us, and writing high-quality prose, while robots started scurrying around warehouses and driving cars with minimal or no guidance.  Digital technologies had been laughably bad at a lot of these things for a long time – then they suddenly got very good.  How did this happen?  And what were the implications of this progress, which was astonishing and yet came to be considered a matter of course?

Brynjolfsson and McAfee did a great deal of reading.  But they learned the most by speaking with inventors, investors, entrepreneurs, engineers, scientists, and others making or using technology.

Brynjolfsson and McAfee report that they reached three broad conclusions:

The first is that we’re living at a time of astonishing progress with digital technologies – those that have computer hardware, software, and networks at their core.  These technologies are not brand-new;  businesses have been buying computers for more than half a century… But just as it took generations to improve the steam engine to the point that it could power the Industrial Revolution, it’s also taken time to refine our digital engines.

We’ll show why and how the full force of these technologies has recently been achieved and give examples of its power.  ‘Full,’ though, doesn’t mean ‘mature.’  Computers are going to continue to improve and do new and unprecedented things.  By ‘full force,’ we mean simply that the key building blocks are already in place for digital technologies to be as important and transformational for society and the economy as the steam engine.  In short, we’re at an inflection point – a point where the curve starts to bend a lot – because of computers.  We are entering a second machine age.

Our second conclusion is that the transformations brought about by digital technology will be profoundly beneficial ones.  We’re heading into an era that won’t just be different;  it will be better, because we’ll be able to increase both the variety and the volume of our consumption… we don’t just consume calories and gasoline.  We also consume information from books and friends, entertainment from superstars and amateurs, expertise from teachers and doctors, and countless other things that are not made of atoms.  Technology can bring us more choice and even freedom.

When these things are digitized – when they’re converted into bits that can be stored on a computer and sent over a network – they acquire some weird and wonderful properties.  They’re subject to different economics, where abundance is the norm rather than scarcity.  As we’ll show, digital goods are not like physical ones, and these differences matter.

…Digitization is improving the physical world, and these improvements are only going to become more important.  Among economic historians, there’s wide agreement that, as Martin Weitzman puts it, ‘the long-term growth of an advanced economy is dominated by the behavior of technical progress.’  As we’ll show, technical progress is improving exponentially.

Our third conclusion is less optimistic:  digitization is going to bring with it some thorny challenges… Technological progress is going to leave behind some people, perhaps even a lot of people, as it races ahead.  As we’ll demonstrate, there’s never been a better time to be a worker with special skills or the right education, because these people can use technology to create and capture value.  However, there’s never been a worse time to be a worker with only ‘ordinary’ skills and abilities to offer, because computers, robots, and other digital technologies are acquiring these skills and abilities at an extraordinary rate.  (pages 9-11)

 

THE SKILLS OF THE NEW MACHINES:  TECHNOLOGY RACES AHEAD

Arthur C. Clarke:

Any sufficiently advanced technology is indistinguishable from magic.

Computers are symbol processors, note Brynjolfsson and McAfee:  Their circuitry can be interpreted in the language of ones and zeroes, or as true or false, or as yes or no.

Computers are especially good at following rules or algorithms.  So computers are especially well-suited for arithmetic, logic, and similar tasks.

Historically, computers have not been very good at pattern recognition compared to humans.  Brynjolfsson and McAfee:

Our brains are extraordinarily good at taking in information via our senses and examining it for patterns, but we’re quite bad at describing or figuring out how we’re doing it, especially when a large volume of fast-changing information arrives at a rapid pace.  As the philosopher Michael Polanyi famously observed, ‘We know more than we can tell.’  (page 18)

Driving a car is an example where humans’ ability to recognize patterns in a mass of sense data was thought to be beyond a computer’s ability.  DARPA – the Defense Advanced Research Projects Agency – held a Grand Challenge for driverless cars in 2004.  It was a 150-mile course through the Mojave Desert in California.  There were fifteen entrants.  Brynjolfsson and McAfee:

The results were less than encouraging.  Two vehicles didn’t make it to the starting area, one flipped over in the starting area, and three hours into the race, only four cars were still operational.  The ‘winning’ Sandstorm car from Carnegie Mellon University covered 7.4 miles (less than 5 percent of the total) before veering off the course during a hairpin turn and getting stuck on an embankment.  The contest’s $1 million prize went unclaimed, and Popular Science called the event ‘DARPA’s Debacle in the Desert.’  (page 19)

Within a few years, however, driverless cars became far better.  By 2012, Google driverless cars had covered hundreds of thousands of miles with only two accidents (both caused by humans).  Brynjolfsson and McAfee:

Progress on some of the oldest and toughest challenges associated with computers, robots, and other digital gear was gradual for a long time.  Then in the past few years it became sudden;  digital gear started racing ahead, accomplishing tasks it had always been lousy at and displaying skills it was not supposed to acquire any time soon.   (page 20)

Another example of an area where it was thought fairly recently that computers wouldn’t become very good is complex communication.  But starting around 2011, computers suddenly seemed to get much better at using human languages to communicate with humans.  Robust natural language processing has become available to people with smartphones.

For instance, there are mobile apps that show you an accurate map and tell you the fastest route to your destination.  Also, Google’s translations on Twitter have gotten much better recently (as of mid-2017).

In 2011, IBM’s supercomputer Watson beat Ken Jennings and Brad Rutter at Jeopardy!  This represented another big advance in which a computer combined pattern matching with complex communication.  The game involves puns, rhymes, wordplay, and more.

Jennings had won a record seventy-four times in a row in 2004.  Rutter beat Jennings in the 2005 Ultimate Tournament of Champions.  The early versions of IBM’s Watson came nowhere close to winning Jeopardy!  But when Watson won in 2011, it had three times as much money as either human opponent.  Jennings later remarked:

Just as factory jobs were eliminated in the twentieth century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines.  (page 27)

Robotics is another area where progress had been gradual, but recently became sudden, observe Brynjolfsson and McAfee.  Robot entered the English language via a 1921 Czech play, R.U.R. (Rossum’s Universal Robots), by Karel Capek.  Isaac Asimov coined the term robotics in 1941.

Robots have still lagged in perception and mobility, while excelling in many computational tasks.  This dichotomy is known as Moravec’s paradox, described on Wikipedia as:

the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources.  (pages 28-29)

Brynjolfsson and McAfee write that, at least until recently, most robots in factories could only handle items that showed up in exactly the same location and configuration each time.  To perform a different task, these robots would need to be reprogrammed.

In 2008, Rodney Brooks founded Rethink Robotics.  Brooks would like to create robots that won’t need to be programmed by engineers.  These new robots could be taught to do a task, or re-taught a new one, by shop-floor workers.  At the company’s headquarters in Boston, Brynjolfsson and McAfee got a sneak peek at a new robot – Baxter.  It has two arms with claw-like grips.  The head is an LCD face.  It has wheels instead of legs.  Each arm can be manually trained to do a wide variety of tasks.

Brynjolfsson and McAfee:

Kiva, another young Boston-area company, has taught its automatons to move around warehouses safely, quickly, and effectively.  Kiva robots look like metal ottomans or squashed R2-D2’s.  They scuttle around buildings at about knee-height, staying out of the way of humans and one another.  They’re low to the ground so they can scoot underneath shelving units, lift them up, and bring them to human workers.  After these workers grab the products they need, the robot whisks the shelf away and another shelf-bearing robot takes its place.  Software tracks where all the products, shelves, robots, and people are in the warehouse, and orchestrates the continuous dance of the Kiva automatons.  In March of 2012, Kiva was acquired by Amazon – a leader in advanced warehouse logistics – for more than $750 million in cash.  (page 32)

Boston Dynamics, another New England startup, has built dog-like robots to support troops in the field.

A final example is the Double, which is essentially an iPad on wheels.  It allows the operator to see and hear what the robot does.

Brynjolfsson and McAfee present more evidence of technological progress:

On the Star Trek television series, devices called tricorders were used to scan and record three kinds of data:  geological, meteorological, and medical.  Today’s consumer smartphones serve all these purposes;  they can be put to work as seismographs, real-time weather radar maps, and heart- and breathing-rate monitors.  And, of course, they’re not limited to these domains.  They also work as media players, game platforms, reference works, cameras, and GPS devices.  (page 34)

Recently, Forbes.com contracted with the company Narrative Science to write earnings previews that are indistinguishable from human writing.

Brynjolfsson and McAfee conclude:

Most of the innovations described in this chapter have occurred in just the past few years.  They’ve taken place where improvement had been frustratingly slow for a long time, and where the best thinking often led to the conclusion that it wouldn’t speed up.  But then digital progress became sudden after being gradual for so long.  This happened in multiple areas, from artificial intelligence to self-driving cars to robotics.

How did this happen?  Was it a fluke – a confluence of a number of lucky, one-time advances?  No, it was not.  The digital progress we’ve seen recently certainly is impressive, but it’s just a small indication of what’s to come.  It’s the dawn of the second machine age.  To understand why it’s unfolding now, we need to understand the nature of technological progress in the era of digital hardware, software, and networks.  In particular, we need to understand its three characteristics:  that it is exponential, digital, and combinatorial.  The next three chapters will discuss each of these in turn.  (page 37)

 

MOORE’S LAW AND THE SECOND HALF OF THE CHESSBOARD

Moore’s Law roughly says that computing power per dollar doubles about every eighteen months.  It’s not a law of nature, note Brynjolfsson and McAfee, but a statement about the continued productivity of the computer industry’s engineers and scientists.  Moore first made his prediction in 1965.  He thought it would only last for ten years.  But it’s now lasted almost fifty years.

Brynjolfsson and McAfee point out that this kind of sustained progress hasn’t happened in any other field.  It’s quite remarkable.

Inventor and futurist Ray Kurzweil has told the story of the inventor and the emperor.  In the 6th century in India, a clever man invented the game of chess.  The man went to the capital city, Pataliputra, to present his invention to the emperor.  The emperor was so impressed that he asked the man to name his reward.

The inventor praised the emperor’s generosity and said, “All I desire is some rice to feed my family.”  The inventor then suggested they use the chessboard to determine the amount of rice he would receive.  He said to place one grain of rice on the first square, two grains on the second square, four grains on the third square, and so forth.  The emperor readily agreed.

If you were to actually do this, you would end up with more than eighteen quintillion grains of rice.  Brynjolfsson and McAfee:

A pile of rice this big would dwarf Mount Everest;  it’s more rice than has been produced in the history of the world.  (pages 45-46)

Kurzweil points out that when you get to the thirty-second square – which is the first half of the chessboard – you have about 4 billion grains of rice, or one large field’s worth.  But when you get to the second half of the chessboard, the result of sustained exponential growth becomes clear.
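
The numbers are easy to verify.  A few lines of arithmetic reproduce both the roughly 4 billion grains on the first half of the board and the more-than-eighteen-quintillion total:

```python
# Grains of rice on the emperor's chessboard: 1 grain on the first square,
# doubling on each square thereafter.

first_half = sum(2 ** (square - 1) for square in range(1, 33))
whole_board = sum(2 ** (square - 1) for square in range(1, 65))

print(f"first 32 squares: {first_half:,} grains (~4 billion)")
print(f"all 64 squares:   {whole_board:,} grains (>18 quintillion)")
```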

Brynjolfsson and McAfee:

Our quick doubling calculation also helps us understand why progress with digital technologies feels so much faster these days and why we’ve seen so many recent examples of science fiction becoming business reality.  It’s because the steady and rapid exponential growth of Moore’s Law has added up to the point that we’re now in a different regime of computing:  we’re now in the second half of the chessboard.  The innovations we described in the previous chapter – cars that drive themselves in traffic;  Jeopardy!-champion supercomputers;  auto-generated news stories;  cheap, flexible factory robots;  and inexpensive consumer devices that are simultaneously communicators, tricorders, and computers – have all appeared since 2006, as have countless other marvels that seem quite different from what came before.

One of the reasons they’re all appearing now is that the digital gear at their hearts is finally both fast and cheap enough to enable them.  This wasn’t the case just a decade ago.  (pages 47-48)

Brynjolfsson and McAfee later add:

It’s clear that many of the building blocks of computing – microchip density, processing speed, storage capacity, energy efficiency, download speed, and so on – have been improving at exponential rates for a long time.  (page 49)

For example, in 1996, the ASCI Red supercomputer cost $55 million to develop and took up 1,600 square feet of floor space.  It was the first computer to score above one teraflop – one trillion floating-point operations per second – on the standard test for computer speed, note Brynjolfsson and McAfee.  It drew about eight hundred kilowatts of power, roughly as much as eight hundred homes.  By 1997, it reached 1.8 teraflops.

Nine years later, the Sony PlayStation 3 hit 1.8 teraflops.  It cost five hundred dollars, took up less than a tenth of a square meter, and used about two hundred watts, observe Brynjolfsson and McAfee.

Exponential progress has made possible many of the advances discussed in the previous chapter.  IBM’s Watson draws on a plethora of clever algorithms, but it would be uncompetitive without computer hardware about one hundred times more powerful than Deep Blue, its chess-playing predecessor that beat the human world champion, Garry Kasparov, in a 1997 match.  (page 50)

SLAM

Researchers in artificial intelligence have long been interested in SLAM – simultaneous localization and mapping.  As of 2008, computers weren’t able to do this well for large areas.

In November 2010, Microsoft offered Kinect – a $150 accessory – as an addition to its Xbox gaming platform.

The Kinect could keep track of two active players, monitoring as many as twenty joints on each.  If one player moved in front of the other, the device made a best guess about the obscured person’s movements, then seamlessly picked up all joints once he or she came back into view.  Kinect could also recognize players’ faces, voices, and gestures and do so across a wide range of lighting and noise conditions.  It accomplished this with digital sensors including a microphone array (which pinpointed the source of sound better than a single microphone could), a standard video camera, and a depth perception system that both projected and detected infrared light.  Several onboard processors and a great deal of proprietary software converted the output of these sensors into information that game designers could use.  (page 53)

After its release, Kinect sold more than eight million units in sixty days, making it the fastest-selling consumer electronics device of all time.  But the Kinect system could do far more than its video game applications.  In August 2011, at SIGGRAPH (the Association for Computing Machinery’s Special Interest Group on Graphics and Interactive Techniques) in Vancouver, British Columbia, a Microsoft team introduced KinectFusion as a solution to SLAM.

In a video shown at SIGGRAPH 2011, a person picks up a Kinect and points it around a typical office containing chairs, a potted plant, and a desktop computer and monitor.  As he does, the video splits into multiple screens that show what the Kinect is able to sense.  It immediately becomes clear that if the Kinect is not completely solving the SLAM for the room, it’s coming close.  In real time, Kinect draws a three-dimensional map of the room and all the objects in it, including a coworker.  It picks up the word DELL pressed into the plastic on the back of the computer monitor, even though the letters are not colored and only one millimeter deeper than the rest of the monitor’s surface.  The device knows where it is in the room at all times, and even knows how virtual ping-pong balls would bounce around if they were dropped into the scene.  (page 54)

In June 2011, Microsoft released a Kinect software development kit.  Less than a year later, a team led by John Leonard of MIT’s Computer Science and Artificial Intelligence Lab announced Kintinuous, a ‘spatially extended’ version of KinectFusion.  Users could scan large indoor volumes and even outdoor environments.
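
What makes a depth camera like the Kinect so useful for SLAM is that every depth pixel can be turned into a 3D point.  Here is a minimal sketch of that single step, assuming a standard pinhole-camera model;  the image size and intrinsics below are illustrative placeholders rather than the Kinect’s actual calibration, and a full system such as KinectFusion adds camera-pose tracking and fusion of many frames on top of this.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-frame 3D points using a
    pinhole model:  X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column (u) and row (v) indices
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth reading

# Illustrative 640x480 frame with placeholder intrinsics (not Kinect calibration data).
depth = np.full((480, 640), 2.0)                     # pretend every pixel sees a surface 2 m away
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                                   # (307200, 3)
```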

Another fascinating example of powerful digital sensors:

A Google autonomous car incorporates several sensing technologies, but its most important ‘eye’ is a Cyclopean LIDAR (a combination of ‘LIght’ and ‘raDAR’) assembly mounted on the roof.  This rig, manufactured by Velodyne, contains sixty-four separate laser beams and an equal number of detectors, all mounted in a housing that rotates ten times a second.  It generates about 1.3 million data points per second, which can be assembled by onboard computers into a real-time 3D picture extending one hundred meters in all directions.  Some early commercial LIDAR systems around the year 2000 cost up to $35 million, but in mid-2013 Velodyne’s assembly for self-navigating vehicles was priced at approximately $80,000, a figure that will fall much further in the future.  David Hall, the company’s founder and CEO, estimates that mass production would allow his product’s price to ‘drop to the level of a camera, a few hundred dollars.’  (page 55)
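
The quoted data rate is easy to sanity-check.  With sixty-four beams spinning ten times per second, roughly 1.3 million points per second implies on the order of two thousand firings per beam per revolution, or an azimuth step of a bit under 0.2 degrees.  (Those intermediate figures are my own inference, not numbers from the book.)

```python
beams = 64
revolutions_per_second = 10
points_per_second = 1.3e6

firings_per_revolution = points_per_second / (beams * revolutions_per_second)
azimuth_step_degrees = 360 / firings_per_revolution
print(f"~{firings_per_revolution:.0f} firings per beam per revolution")
print(f"~{azimuth_step_degrees:.2f} degrees of rotation between firings")
```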

 

THE DIGITIZATION OF JUST ABOUT EVERYTHING

As of March 2017, Android users could choose from 2.8 million applications while Apple users could choose from 2.2 million.  One example of a free but powerful app – a version of which is available from several companies – is one that gives you a map plus driving directions.  The app tells you the fastest route right now, taking current traffic into account.

Digitization is turning all kinds of information and media into the language of computers – ones and zeroes.  What’s crucial about digital information is that it’s non-rival and it has close to zero marginal cost of reproduction.  In other words, it can be used over and over – it doesn’t get ‘used up’ – and it’s extremely cheap to make another copy.  (Rival goods, by contrast, can only be used by one person at a time.)

In 1991, at the Nineteenth General Conference on Weights and Measures, the set of prefixes was expanded to include a yotta, representing one septillion, or 10^24.  As of 2012, there were 2.7 zettabytes of digital data – or 2.7 sextillion bytes.  This is only one prefix away from a yotta.
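
For readers keeping the prefixes straight, here is a small sketch of where 2.7 zettabytes sits on the ladder of powers of ten.

```python
# SI prefixes for bytes, as powers of ten.
prefixes = {"kilo": 3, "mega": 6, "giga": 9, "tera": 12,
            "peta": 15, "exa": 18, "zetta": 21, "yotta": 24}

digital_data_2012 = 2.7e21    # 2.7 zettabytes, expressed in bytes
print(f"2.7 zettabytes = {digital_data_2012:.1e} bytes")
print(f"one yottabyte  = {10.0 ** (prefixes['yotta'] - prefixes['zetta']):,.0f} zettabytes")
```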

The explosion of digital information, while obviously not always useful, can often lead to scientific advances – i.e., understanding and predicting phenomena more accurately or more simply.  Some search terms have been found to have predictive value.  So have some tweets.  Culturomics makes use of digital information – like scanned copies of millions of books written over the centuries – to study human culture.

 

INNOVATION:  DECLINING OR RECOMBINING?

Linus Pauling:

If you want to have good ideas, you must have many ideas.

Innovation is the essential long-term driver of progress.  As Paul Krugman said:

Productivity isn’t everything, but in the long run it is almost everything.  (page 72)

Improving the standard of living over time depends almost entirely on raising output per worker, Krugman explained.  Brynjolfsson and McAfee write that most economists agree with Joseph Schumpeter’s observation:

Innovation is the outstanding fact in the economic history of capitalist society… and also it is largely responsible for most of what we would at first sight attribute to other factors.  (page 73)

The original Industrial Revolution resulted in large part from the steam engine.  The Second Industrial Revolution depended largely on three innovations:  electricity, the internal combustion engine, and indoor plumbing with running water.

Economist Robert Gordon, a widely respected researcher on productivity and economic growth, has concluded that economic growth slowed markedly around 1970.  The three main inventions of the Second Industrial Revolution had their chief effect from 1870 to 1970.  But there haven’t been comparably significant innovations since 1970, according to Gordon.  Some other economists, such as Tyler Cowen, share Gordon’s basic view.

The most economically important innovations are called general purpose technologies (GPTs).  GPTs, according to Gavin Wright, are:

deep new ideas or techniques that have the potential for important impacts on many sectors of the economy.  (page 76)

‘Impacts,’ note Brynjolfsson and McAfee, mean significant boosts in output due to large productivity gains;  GPTs noticeably accelerate economic progress.  Economists concur that a GPT should be pervasive, should improve over time, and should spawn new innovations.

Isn’t information and communication technology (ICT) a GPT?  Most economic historians think so.  Economist Alexander Field compiled a list of candidate GPTs from the economics literature;  ICT tied with electricity as the second most frequently cited, behind only the steam engine.

Not everyone agrees.  Cowen argues, in essence, that ICT has so far come up short on the revenue side, generating relatively little measured revenue and employment compared with earlier GPTs.

The ‘innovation-as-fruit’ view, say Brynjolfsson and McAfee, holds that discrete inventions are followed by incremental improvements, but those improvements stop being significant after a certain point:  the fruit of the original inventions gets used up.

Another way to look at innovation, however, is not coming up with something big and new, but recombining things that already exist.  Complexity scholar Brian Arthur holds this view of innovation.  So does economist Paul Romer, who has written that we, as humans, nearly always underestimate how many new ideas have yet to be discovered.

The history of physics may serve as a good illustration of Romer’s point.  At many different points in the history of physics, at least some leading physicists have asserted that physics was basically complete.  This has always been dramatically wrong.  As of 2017, is physics closer to 10% complete or 90% complete?  Of course, no one knows for sure.  But how much more will be discovered and invented if we have AI with IQ 1,000,000+ wielded by genetically engineered scientists?  In my view, physics is probably closer to 10% complete.

Brynjolfsson and McAfee point out that ICT leads to recombinant innovation, perhaps like nothing else has.

…digital innovation is recombinant innovation in its purest form.  Each development becomes a building block for future innovations.  Progress doesn’t run out;  it accumulates… Moore’s Law makes computing devices and sensors exponentially cheaper over time, enabling them to be built economically into more and more gear, from doorknobs to greeting cards.  Digitization makes available massive bodies of data relevant to almost any situation, and this information can be infinitely reproduced and reused because it is non-rival.  As a result of these two forces, the number of potentially valuable building blocks is exploding around the world, and the possibilities are multiplying as never before.  We’ll call this the ‘innovation-as-building-block’ view of the world;  it’s the one held by Arthur, Romer, and the two of us.  From this perspective, unlike the innovation-as-fruit view, building blocks don’t ever get eaten or otherwise used up.  In fact, they increase the opportunities for future recombinations.

…In his paper, ‘Recombinant Growth,’ the economist Martin Weitzman developed a mathematical model of new growth theory in which the ‘fixed factors’ in an economy – machine tools, trucks, laboratories, and so on – are augmented over time by pieces of knowledge that he calls ‘seed ideas,’ and knowledge itself increases over time as previous seed ideas are recombined into new ones.  (pages 81-82)

As the number of seed ideas increases, the combinatorial possibilities explode quickly.  Weitzman:

In the early stages of development, growth is constrained by the number of potential new ideas, but later on it is constrained only by the ability to process them.
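
A toy illustration of Weitzman’s point:  even counting only pairwise combinations, N seed ideas allow N(N-1)/2 possible recombinations, which quickly outruns anyone’s ability to evaluate them.  (The numbers below are purely illustrative.)

```python
from math import comb

# Number of distinct pairwise recombinations of N "seed ideas".
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} seed ideas -> {comb(n, 2):>12,} possible pairings")
```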

ICT connects nearly everyone, and computing power continues to follow Moore’s Law.  Brynjolfsson and McAfee:

We’re interlinked by global ICT, and we have affordable access to masses of data and vast computing power.  Today’s digital environment, in short, is a playground for large-scale recombination.  (page 83)

…The innovation scholars Lars Bo Jeppesen and Karim Lakhani studied 166 scientific problems posted to Innocentive, all of which had stumped their home organizations.  They found that the crowd assembled around Innocentive was able to solve forty-nine of them, for a success rate of nearly 30 percent.  They also found that people whose expertise was far away from the apparent domain of the problem were more likely to submit winning solutions.  In other words, it seemed to actually help a solver to be ‘marginal’ – to have education, training, and experience that were not obviously relevant for the problem.  (page 84)

Kaggle is similar to Innocentive, but Kaggle is focused on data-intensive problems with the goal being to improve the baseline prediction.  The majority of Kaggle contests, say Brynjolfsson and McAfee, are won by people who are marginal to the domain of the challenge.  In one problem involving artificial intelligence – computer grading of essays – none of the top three finishers had any formal training in artificial intelligence beyond a free online course offered by Stanford AI faculty, open to anyone in the world.

 

ARTIFICIAL AND HUMAN INTELLIGENCE IN THE SECOND MACHINE AGE

Previous chapters discussed three forces – sustained exponential improvement in most aspects of computing, massive amounts of digitized information, and recombinant invention – that are yielding significant innovations.  But, state Brynjolfsson and McAfee, when you consider also that most people on the planet are connected via the internet and that useful artificial intelligence (AI) is emerging, you have to be even more optimistic about future innovations.

Digital technologies already restore hearing to many deaf people via cochlear implants.  They will likely restore sight to the fully blind, perhaps by retinal implants.  That’s just the beginning, to say nothing of advances in the biosciences.  Dr. Watson will become the best diagnostician in the world.  Another supercomputer will become the best surgeon in the world.

Brynjolfsson and McAfee summarize:

The second machine age will be characterized by countless instances of machine intelligence and billions of interconnected brains working together to better understand and improve our world.  (page 96)

 

COMPUTING BOUNTY

Milton Friedman:

Most economic fallacies derive from the tendency to assume that there is a fixed pie, that one party can gain only at the expense of another.

Productivity growth comes from technological innovation and from improvements in production techniques.  The 1940s, 1950s, and 1960s were a time of rapid productivity growth.  The technologies of the first machine age, such as electricity and the internal combustion engine, were largely responsible.

But in 1973, productivity growth slowed down.  What’s interesting is that computers were becoming available in that same decade.  But like the chief innovations of the first machine age, it would take a few decades before computing began to affect productivity growth significantly.

The World Wide Web, invented in 1989, began boosting productivity within a decade.  Even more importantly, enterprise-wide IT systems boosted productivity in the 1990s.  Firms that used IT throughout the 1990s were noticeably more productive as a result.

Brynjolfsson and McAfee:

The first five years of the twenty-first century saw a renewed wave of innovation and investment, this time less focused on computer hardware and more focused on a diversified set of applications and process innovations… In a statistical study of over six hundred firms that Erik did with Lorin Hitt, he found that it takes an average of five to seven years before full productivity benefits of computers are visible in the productivity of the firms making the investments.  This reflects the time and energy required to make the other complementary investments that bring a computerization effort success.  In fact, for every dollar of investment in computer hardware, companies need to invest up to another nine dollars in software, training, and business process redesign.  (pages 104-105)

Brynjolfsson and McAfee conclude:

The explanation for this productivity surge is in the lags that we always see when GPTs are installed.  The benefits of electrification stretched for nearly a century as more and more complementary innovations were implemented.  The digital GPTs of the second machine age are no less profound.  Even if Moore’s Law ground to a halt today, we could expect decades of complementary innovations to unfold and continue to boost productivity.  However, unlike the steam engine or electricity, second machine age technologies continue to improve at a remarkably rapid exponential pace, replicating their power with digital perfection and creating even more opportunities for combinatorial innovation.  The path won’t be smooth… but the fundamentals are in place for bounty that vastly exceeds anything we’ve ever seen before.  (page 106)

 

BEYOND GDP

Brynjolfsson and McAfee note that President Hoover had to rely on data such as freight car loadings, commodity prices, and stock prices in order to try to understand what was happening during the Great Depression.

The first set of national accounts was presented to Congress in 1937 based on the pioneering work of Nobel Prize winner Simon Kuznets, who worked with researchers at the National Bureau of Economic Research and a team at the U.S. Department of Commerce.  The resulting set of metrics served as beacons that helped illuminate many of the dramatic changes that transformed the economy throughout the twentieth century.

But as the economy has changed, so, too, must our metrics.  More and more what we care about in the second machine age are ideas, not things – mind, not matter;  bits, not atoms;  and interactions, not transactions.  The great irony of this information age is that, in many ways, we know less about the sources of value in the economy than we did fifty years ago.  In fact, much of the change has been invisible for a long time simply because we did not know what to look for.  There’s a huge layer of the economy unseen in the official data and, for that matter, unaccounted for on the income statements and balance sheets of most companies.  Free digital goods, the sharing economy, intangibles and changes in our relationships have already had big effects on our well-being.  They also call for new organizational structures, new skills, new institutions, and perhaps even a reassessment of some of our values.  (pages 108-109)

Brynjolfsson and McAfee write:

In addition to their vast library of music, children with smartphones today have access to more information in real time via the mobile web than the president of the United States had twenty years ago.  Wikipedia alone claims to have over fifty times as much information as Encyclopaedia Britannica, the premier compilation of knowledge for most of the twentieth century.  Like Wikipedia but unlike Britannica, much of the information and entertainment available today is free, as are over one million apps on smartphones.

Because they have zero price, these services are virtually invisible in the official statistics.  They add value to the economy but not dollars to GDP.  And because our productivity data are, in turn, based on GDP metrics, the burgeoning availability of free goods does not move the productivity dial.  There’s little doubt, however, that they have real value.  (pages 110-111)

Free products can push GDP downward.  An online encyclopedia available for free (or pennies) instead of thousands of dollars makes you better off, but it lowers GDP, observe Brynjolfsson and McAfee.  GDP was a good measure of economic growth throughout most of the twentieth century.  Higher levels of production generally led to greater well-being.  But that’s no longer true to the same extent due to the proliferation of digital goods that do not have a dollar price.

One way to measure the value of goods that are free or nearly free is to find out how much people would be willing to pay for them.  This is known as consumer surplus, but in practice it’s extremely difficult to measure.
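
A minimal sketch of the consumer-surplus idea, with made-up numbers:  under a linear demand curve, consumer surplus is the triangle between what buyers would have been willing to pay and what they actually pay, so a good whose price falls to zero can create enormous surplus while adding nothing to GDP.

```python
def consumer_surplus_linear(max_willingness, price, quantity_at_zero_price):
    """Consumer surplus under a linear demand curve:  the triangle between
    willingness to pay and the actual price, over all units actually bought."""
    if price >= max_willingness:
        return 0.0
    slope = max_willingness / quantity_at_zero_price   # price drop per extra unit demanded
    quantity = (max_willingness - price) / slope       # units bought at this price
    return 0.5 * (max_willingness - price) * quantity

# Illustrative encyclopedia market:  some users would pay up to $1,000;  one million users at a zero price.
print(consumer_surplus_linear(1_000, 1_000, 1_000_000))   # priced at maximum willingness:  zero surplus
print(consumer_surplus_linear(1_000, 0, 1_000_000))       # free:  $500 million of surplus, $0 of GDP
```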

New goods and services have not been fully captured in GDP figures.

For the overall economy, the official GDP numbers miss the value of new goods and services added to the tune of about 0.4 percent of additional growth each year, according to economist Robert Gordon.  Remember that productivity growth has been in the neighborhood of 2 percent per year for most of the past century, so contribution of new goods is not a trivial portion.  (pages 117-118)

GDP misses the full value of digital goods and services.  Similarly, intangible assets are not fully measured.

Just as free goods rather than physical products are an increasingly important share of consumption, intangibles also make up a growing share of the economy’s capital assets.  Production in the second machine age depends less on physical equipment and structures and more on the four categories of intangible assets:  intellectual property, organizational capital, user-generated content, and human capital.  (page 119)

Paul Samuelson and Bill Nordhaus have observed that GDP is one of the great inventions of the twentieth century.  But as Brynjolfsson and McAfee indicate, digital innovation means that we also need innovation in our economic metrics.

The new metrics will differ both in conception and execution.  We can build on some of the existing surveys and techniques researchers have been using.  For instance, the human development index uses health and education statistics to fill in some of the gaps in official GDP statistics;  the multidimensional poverty index uses ten different indicators – such as nutrition, sanitation, and access to water – to assess well-being in developing countries.  Childhood death rates and other health indicators are recorded in other periodic household surveys like the Demographic and Health Surveys.

There are several promising projects in this area.  Joe Stiglitz, Amartya Sen, and Jean-Paul Fitoussi have created a detailed guide for how we can do a comprehensive overhaul of our economic statistics.  Another promising project is the Social Progress Index that Michael Porter, Scott Stern, Roberto Lauria, and their colleagues are developing.  In Bhutan, they’ve begun measuring ‘Gross National Happiness.’  There is also a long-running poll behind the Gallup-Healthways Well-Being Index.

These are all important improvements, and we heartily support them.  But the biggest opportunity is in using the tools of the second machine age itself:  the extraordinary volume, variety, and timeliness of data available digitally.  The Internet, mobile phones, embedded sensors in equipment, and a plethora of other sources are delivering data continuously.  For instance, Roberto Rigobon and Alberto Cavallo measure online prices from around the world on a daily basis to create an inflation index that is far timelier and, in many cases, more reliable, than official data gathered via monthly surveys with much smaller samples.  Other economists are using satellite mapping of nighttime artificial light sources to estimate economic growth in different parts of the world, and assessing the frequency of Google searches to understand changes in unemployment and housing.  Harnessing this information will produce a quantum leap in our understanding of the economy, just as it has already changed marketing, manufacturing, finance, retailing, and virtually every other aspect of business decision-making.

As more data become available, and the economy continues to change, the ability to ask the right questions will become even more vital.  No matter how bright the light is, you won’t find your keys by searching under a lamppost if that’s not where you lost them.  We must think hard about what it is we really value, what we want more of, and what we want less of.  GDP and productivity growth are important, but they are a means to an end and not ends in and of themselves.  Do we want to increase consumer surplus?  Then lower prices or more leisure might be signs of progress, even if they result in a lower GDP.  And, of course, many of our goals are nonmonetary.  We shouldn’t ignore the economic metrics, but neither should we let them crowd out our other values simply because they are more measurable.

In the meantime, we need to bear in mind that the GDP and productivity statistics overlook much of what we value, even when using a narrow economic lens.  What’s more, the gap between what we measure and what we value grows every time we gain access to a new good or service that never existed before, or when existing goods become free as they so often do when they are digitized.  (pages 123-124)

 

THE SPREAD

Brynjolfsson and McAfee:

…Advances in technology, especially digital technologies, are driving an unprecedented reallocation of wealth and income.  Digital technologies can replicate valuable ideas, insights, and innovations at very low cost.  This creates bounty for society and wealth for innovators, but diminishes the demand for previously important types of labor, which can leave many people with reduced incomes.

The combination of bounty and spread challenges two common though contradictory worldviews.  One common view is that advances in technology always boost incomes.  The other is that automation hurts workers’ wages as people are replaced by machines.  Both of these have a kernel of truth, but the reality is more subtle.  Rapid advances in our digital tools are creating unprecedented wealth, but there is no economic law that says all workers, or even a majority of workers, will benefit from these advances.

For almost two hundred years, wages did increase alongside productivity.  This created a sense of inevitability that technology helped (almost) everyone.  But more recently, median wages have stopped tracking productivity, underscoring the fact that such a decoupling is not only a theoretical possibility but also an empirical fact in our current economy.  (page 128)

Statistics on how the median worker is doing versus the top 1 percent are revealing:

…The year 1999 was the peak year for real (inflation-adjusted) income of the median American household.  It reached $54,932 that year, but then started falling.  By 2011, it had fallen nearly 10 percent to $50,054, even as overall GDP hit a record high.  In particular, wages of unskilled workers in the United States and other advanced countries have trended downward.

Meanwhile, for the first time since the Great Depression, over half the total income in the United States went to the top 10 percent of Americans in 2012.  The top 1 percent earned over 22 percent of income, more than doubling their share since the early 1980s.  The share of income going to the top hundredth of one percent of Americans, a few thousand people with annual incomes over $11 million, is now at 5.5 percent, after increasing more between 2011 and 2012 than any year since 1927-1928.  (page 129)

Technology is changing economics.  Brynjolfsson and McAfee point out two examples:  digital photography and TurboTax.

At one point, Kodak employed 145,300 people.  But in 2012, Kodak filed for bankruptcy.  Analog photography peaked in the year 2000.  As of 2014, over 2.5 billion people had digital cameras and the vast majority of photos were digital.  Meanwhile, Facebook reached a market value many times anything Kodak ever achieved, and Facebook has created at least several billionaires, each of whom has a net worth more than ten times what George Eastman – founder of Kodak – ever had.  Also, in 2012, Facebook had over one billion users, despite employing only about 4,600 people (roughly 1,000 of whom were engineers).

Just as digital photography has made it far easier for many people to take and store photos, so TurboTax software has made it much more convenient for many people to file their taxes.  Meanwhile, tens of thousands of tax preparers – including those at H&R Block – have had their jobs and incomes threatened.  But the creators of TurboTax have done very well – one is a billionaire.

The crucial reality from the standpoint of economics is that it takes a relatively small number of designers and engineers to create and update a program like TurboTax.  As we saw in chapter 4, once the algorithms are digitized they can be replicated and delivered to millions of users at almost zero cost.  As software moves to the core of every industry, this type of production process and this type of company increasingly populates the economy.  (pages 130-131)

Brynjolfsson and McAfee report that most Americans have become less wealthy over the past several decades.

Between 1983 and 2009, Americans became vastly wealthier overall as the total value of their assets increased.  However, as noted by economists Ed Wolff and Sylvia Allegretto, the bottom 80 percent of the income distribution actually saw a net decrease in their wealth.  Taken as a group, the top 20 percent got not 100 percent of the increase, but more than 100 percent.  Their gains included not only the trillions of dollars of wealth newly created in the economy but also some additional wealth that was shifted in their direction from the bottom 80 percent.  The distribution was also highly skewed even among relatively wealthy people.  The top 5 percent got 80 percent of the nation’s wealth increase;  the top 1 percent got over half of that, and so on for ever-finer subdivisions of the wealth distribution…

Along with wealth, the income distribution has also shifted.  The top 1 percent increased their earnings by 278 percent between 1979 and 2007, compared to an increase of just 35 percent for those in the middle of the income distribution.  The top 1 percent captured over 65 percent of the income growth between 2002 and 2007.  (page 131)

Brynjolfsson and McAfee then add:

As we discussed in our earlier book Race Against the Machine, these structural economic changes have created three overlapping pairs of winners and losers.  As a result, not everyone’s share of the economic pie is growing.  The first two sets of winners are those who have accumulated significant quantities of the right capital assets.  These can be either nonhuman capital (such as equipment, structures, intellectual property, or financial assets), or human capital (such as training, education, experience, and skills).  Like other forms of capital, human capital is an asset that can generate a stream of income.  A well-trained plumber can earn more each year than an unskilled worker, even if they both work the same number of hours.  The third group of winners is made up of the superstars among us who have special talents – or luck.  (pages 133-134)

The most basic economic model, write Brynjolfsson and McAfee, treats technology as a simple multiplier on everything else, increasing overall productivity evenly for everyone.  In other words, all labor is affected equally by technology.  Every hour worked produces more value than before.

A slightly more complex model allows for the possibility that technology may not affect all inputs equally, but rather may be ‘biased’ toward some and against others.  In particular, in recent years, technologies like payroll processing software, factory automation, computer-controlled machines, automated inventory control, and word processing have been deployed for routine work, substituting for workers in clerical tasks, on the factory floor, and doing rote information processing.

By contrast, technologies like big data and analytics, high-speed communications, and rapid prototyping have augmented the contributions made by more abstract and data-driven reasoning, and in turn have increased the value of people with the right engineering, creative, and design skills.  The net effect has been to decrease demand for less skilled labor while increasing the demand for skilled labor.  Economists including David Autor, Lawrence Katz and Alan Krueger, Frank Levy and Richard Murnane, Daron Acemoglu, and many others have documented this trend in dozens of careful studies.  They call it skill-biased technical change.  By definition, skill-biased technical change favors people with more human capital.  (page 135)
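
To see how ‘biased’ technical change shows up in wages, here is a hedged toy model (my own illustration, not a model from the book):  output comes from a CES combination of skilled and unskilled labor, wages equal marginal products, and we compare a neutral productivity gain with one that augments only skilled labor.

```python
def ces_output(Ls, Lu, As=1.0, Au=1.0, alpha=0.5, rho=0.5):
    """CES production:  Y = (alpha*(As*Ls)^rho + (1-alpha)*(Au*Lu)^rho)^(1/rho)."""
    return (alpha * (As * Ls) ** rho + (1 - alpha) * (Au * Lu) ** rho) ** (1 / rho)

def skill_premium(Ls, Lu, As=1.0, Au=1.0, eps=1e-6):
    """Skilled wage / unskilled wage, taken as the ratio of marginal products (finite differences)."""
    base = ces_output(Ls, Lu, As, Au)
    mp_s = (ces_output(Ls + eps, Lu, As, Au) - base) / eps
    mp_u = (ces_output(Ls, Lu + eps, As, Au) - base) / eps
    return mp_s / mp_u

Ls, Lu = 40.0, 60.0                                                            # illustrative labor supplies
print(f"baseline premium:       {skill_premium(Ls, Lu):.2f}")
print(f"neutral tech (2x both): {skill_premium(Ls, Lu, As=2.0, Au=2.0):.2f}")  # premium unchanged
print(f"skill-biased (2x As):   {skill_premium(Ls, Lu, As=2.0):.2f}")          # premium rises
```

With these arbitrary parameters the skill premium is unchanged when technology lifts both groups equally, but rises when the gain augments skilled labor alone, which is the sense in which the change is ‘biased.’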

Skill-biased technical change can be seen in the growing income gaps between people with different levels of education.

Furthermore, organizational improvements related to technical advances may be even more significant than the technical advances themselves.

…Work that Erik did with Stanford’s Tim Bresnahan, Wharton’s Lorin Hitt, and MIT’s Shinkyu Yang found that companies used digital technologies to reorganize decision-making authority, incentive systems, information flows, hiring systems, and other aspects of their management and organizational processes.  This coinvention of organization and technology not only significantly increased productivity but tended to require more educated workers and reduce demand for less-skilled workers.  This reorganization of production affected those who worked directly with computers as well as workers who, at first glance, seemed to be far from the technology…

Among the industries in the study, each dollar of computer capital was often the catalyst for more than ten dollars of complementary investments in ‘organizational capital,’ or investments in training, hiring, and business process redesign.  The reorganization often eliminates a lot of routine work, such as repetitive order entry, leaving behind a residual set of tasks that require relatively more judgment, skills, and training.

Companies with the biggest IT investments typically made the biggest organizational changes, usually with a lag of five to seven years before seeing the full performance benefits.  These companies had the biggest increase in the demand for skilled work relative to unskilled work….

This means that the best way to use new technologies is usually not to make a literal substitution of a machine for each human worker, but to restructure the process.  Nonetheless, some workers (usually the less skilled ones) are still eliminated from the production process and others are augmented (usually those with more education and training), with predictable effects on the wage structure.  Compared to simply automating existing tasks, this kind of organizational coinvention requires more creativity on the part of entrepreneurs, managers, and workers, and for that reason it tends to take time to implement the changes after the initial invention and introduction of new technologies.  But once the changes are in place, they generate the lion’s share of productivity improvements. (pages 137-138)

Brynjolfsson and McAfee explain that ‘skill-biased technical change’ can be somewhat misleading in the context of jobs eliminated as companies have reorganized.  It’s more accurate to say that routine tasks – whether cognitive or manual – have been replaced the most by computers.  One study by Nir Jaimovich and Henry Siu found that demand for routine cognitive work (done by cashiers, mail clerks, and bank tellers, for example) and routine manual work (done by machine operators, cement masons, and dressmakers) was not only falling, but falling at an accelerating rate.

These jobs fell by 5.6 percent between 1981 and 1991, 6.6 percent between 1991 and 2001, and 11 percent between 2001 and 2011.  In contrast, both nonroutine cognitive work and nonroutine manual work grew in all three decades.  (pages 139-140)

Since the early 1980s, when computers began to be adopted, the share of income going to labor has declined while the share of income going to owners of physical capital has increased.  However, as new capital is added cheaply at the margin, the rewards earned by capitalists may not automatically grow relative to labor, observe the authors.

 

IMPLICATIONS OF THE BOUNTY AND THE SPREAD

Franklin D. Roosevelt:

The test of our progress is not whether we add more to the abundance of those who have much;  it is whether we provide enough for those who have little.

Like productivity, state Brynjolfsson and McAfee, GDP, corporate investment, and after-tax profits are also at record highs.  Yet the employment-to-population ratio is lower than at any time in at least two decades.   This raises three questions:

  • Will the bounty overcome the spread?
  • Can technology not only increase inequality but also create structural unemployment?
  • What about globalization, the other great force transforming the economy – could it explain recent declines in wages and employment?

Thanks to technology, we will keep getting ever more output from fewer inputs like raw materials, capital, and labor.  We will benefit from higher productivity, but also from free digital goods.  Brynjolfsson and McAfee:

… ‘Bounty’ doesn’t simply mean more cheap consumer goods and empty calories.  As we noted in chapter 7, it also means simultaneously more choice, greater variety, and higher quality in many areas of our lives.  It means heart surgeries performed without cracking the sternum and opening the chest cavity.  It means constant access to the world’s best teachers combined with personalized self-assessments that let students know how well they’re mastering the material.  It means that households have to spend less of their total budget over time on groceries, cars, clothing, and utilities.  It means returning hearing to the deaf and, eventually, sight to the blind.  It means less need to work doing boring, repetitive tasks and more opportunity for creative, interactive work.  (page 166)

However, technological progress is also creating ever larger differences in important areas – wealth, income, standards of living, and opportunities for advancement.  If the bounty is large enough, do we need to worry about the spread?  If all people’s economic lives are improving, then is increasing spread really a problem?  Harvard economist Greg Mankiw has argued that the enormous income of the ‘one percent’ may reflect – in large part – the rewards of creating value for everyone else.  Innovators improve the lives of many people, and the innovators often get rich as a result.

The high-tech industry offers many examples of this happy phenomenon in action.  Entrepreneurs create devices, websites, apps, and other goods and services that we value.  We buy and use them in large numbers, and the entrepreneurs enjoy great financial success…

We particularly want to encourage it because, as we saw in chapter 6, technological progress typically helps even the poorest people around the world.  Careful research has shown that innovations like mobile telephones are improving people’s incomes, health, and other measures of well-being.  As Moore’s Law continues to simultaneously drive down the cost and increase the capability of these devices, the benefits they bring will continue to add up.  (pages 167-168)

Those who believe in the strong bounty argument think that unmeasured price decreases, quality improvements, and other benefits outweigh the lost ground in other areas, such as the decline in the median real income.

Unfortunately, however, some important items such as housing, health care, and college have gotten much more expensive over time.  Brynjolfsson and McAfee cite research by economist Jared Bernstein, who found that while median family income grew by 20 percent between 1990 and 2008, prices for housing and college grew by about 50 percent, and health care by more than 150 percent.  Moreover, median incomes have been falling in recent years.

Brynjolfsson and McAfee then add:

That many Americans face stagnant and falling income is bad enough, but it is now combined with decreasing social mobility – an ever lower chance that children born at the bottom end of the spread will escape their circumstances and move upward throughout their lives and careers… This is exactly what we’d expect to see as skill-biased technical change accelerates. (pages 170-171)

Based on economic theory and supported by most of the past two hundred years, economists have generally agreed that technological progress has created more jobs than it has destroyed.  Some workers are displaced by new technologies, but the increase in total output creates more than enough new jobs.

Regarding economic theory, there are three arguments for how technological unemployment could arise:  inelastic demand, rapid change, and severe inequality.

If lower costs lead to lower prices for goods, and if lower prices lead to increased demand for those goods, then the net result may be an increase in the demand for labor.  It depends on the elasticity of demand.

For some goods, such as lighting, demand is relatively inelastic:  price declines have not led to a proportionate increase in demand.  For other goods, demand has been relatively elastic:  price declines have resulted in an even greater increase in demand.  One example, write Brynjolfsson and McAfee, is the Jevons paradox:  more energy efficiency can sometimes lead to greater total demand for energy.

If elasticity is exactly equal to one – so a 1 percent decline in price leads to a 1 percent increase in demand – then total revenues (price times quantity) are unchanged, explain Brynjolfsson and McAfee.  In this case, an increase in productivity, meaning less labor needed for each unit of output, will be exactly offset by an increase in total demand, so that the overall demand for labor is unchanged.  Elasticity of one, it can be argued, is what happens in the overall economy.
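
Here is a stylized sketch of that argument, with made-up numbers (my own illustration):  suppose a productivity gain cuts the labor needed per unit of output and, in a competitive market, the price falls proportionally.  Demand then responds according to its elasticity, and whether total labor demand rises or falls depends on whether elasticity is above or below one.

```python
def labor_demand_change(productivity_gain, elasticity):
    """Stylized:  a productivity gain lowers the price proportionally;  quantity demanded
    follows a constant-elasticity demand curve;  labor needed = quantity / productivity."""
    price_factor = 1 / (1 + productivity_gain)        # e.g. a 10% gain -> ~9.1% lower price
    quantity_factor = price_factor ** (-elasticity)   # constant-elasticity demand response
    labor_factor = quantity_factor / (1 + productivity_gain)
    return labor_factor - 1                           # fractional change in labor demand

for eps in (0.5, 1.0, 2.0):
    change = labor_demand_change(productivity_gain=0.10, elasticity=eps)
    print(f"elasticity {eps}:  labor demand changes by {change:+.1%}")
```

With elasticity below one, labor demand falls;  at exactly one it is unchanged;  above one it rises, which is the Jevons-paradox case.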

Brynjolfsson and McAfee remark that the second, more serious, argument for technological unemployment is that our skills, organizations, and institutions cannot keep pace with technological change.  What if it takes ten years for displaced workers to learn new skills?  What if, by then, technology has changed again?

Faster technological progress may ultimately bring greater wealth and longer lifespans, but it also requires faster adjustments by both people and institutions.  (page 178)

The third argument is that ongoing technological progress will lead to a continued decline in real wages for many workers.  If there’s technological progress where only those with specific skills, or only those who own a certain kind of capital, benefit, then the equilibrium wage may indeed approach a dollar an hour or even zero.  Over history, many inputs to production, from whale oil to horses, have reached a point where they were no longer needed even at zero price.

Although job growth has stopped tracking productivity upward in the past fifteen years or so, it’s hard to know what the future holds, say the authors.

Brynjolfsson and McAfee then ask:  What if there were an endless supply of androids that never break down and that could do all the jobs that humans can do, but at essentially no cost?  There would be an enormous increase in the volume, variety, and availability of goods.

But there would also be severe dislocations to the labor force.  Entrepreneurs would continue to invent new products and services, but they would staff these companies with androids.  The owners of androids and other capital assets or natural resources would capture all the value in the economy.  Those with no assets would have only labor to sell, but it would be worthless.  Brynjolfsson and McAfee sum it up:  you don’t want to compete against close substitutes when those substitutes have a cost advantage.

But in principle, machines can have very different strengths and weaknesses than humans.  When engineers work to amplify these differences, building on the areas where machines are strong and humans are weak, then the machines are more likely to complement humans rather than substitute for them.  Effective production is more likely to require both human and machine inputs, and the value of the human inputs will grow, not shrink, as the power of the machines increases.  A second lesson of economics and business strategy is that it’s great to be a complement to something that’s increasingly plentiful.  Moreover, this approach is more likely to create opportunities to produce goods and services that could never have been created by unaugmented humans, or machines that simply mimicked people, for that matter.  These new goods and services provide a path for productivity growth based on increased output rather than reduced inputs.

Thus in a very real sense, as long as there are unmet needs and wants in the world, unemployment is a loud warning that we simply aren’t thinking hard enough about what needs doing.  We aren’t being creative enough about solving the problems we have using the freed-up time and energy of the people whose old jobs were automated away.  We can do more to invent technologies and business models that augment and amplify the unique capabilities of humans to create new sources of value, instead of automating the ones that already exist.  As we will discuss further in the next chapters, this is the real challenge facing our policy makers, our entrepreneurs, and each of us individually.  (page 182)

 

LEARNING TO RACE WITH MACHINES:  RECOMMENDATIONS FOR INDIVIDUALS

Pablo Picasso on computers:

But they are useless.  They can only give you answers.

Even where digital machines are far ahead of humans, humans still have important roles to play.  IBM’s Deep Blue beat Garry Kasparov in a chess match in 1997.  And nowadays even cheap chess programs are better than any human.  Does that mean humans no longer have anything to contribute to chess?  Brynjolfsson and McAfee quote Kasparov’s comments on ‘freestyle’ chess (which involves teams of humans plus computers):

The teams of human plus machine dominated even the strongest computers.  The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop.  Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event.  The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time.  Their skill at manipulating and ‘coaching’ their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.  Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.  (pages 189-190)

Brynjolfsson and McAfee explain:

The key insight from freestyle chess is that people and computers don’t approach the same task the same way.  If they did, humans would have had nothing to add after Deep Blue beat Kasparov;  the machine, having learned how to mimic human chess-playing ability, would just keep riding Moore’s Law and racing ahead.  But instead we see that people still have a great deal to offer the game of chess at its highest levels once they’re allowed to race with machines, instead of purely against them.

Computers are not as good as people at being creative:

We’ve never seen a truly creative machine, or an entrepreneurial one, or an innovative one.  We’ve seen software that could create lines of English text that rhymed, but none that could write a true poem… Programs that can write clean prose are amazing achievements, but we’ve not yet seen one that can figure out what to write about next.  We’ve also never seen software that could create good software;  so far, attempts at this have been abject failures.

These activities have one thing in common:  ideation, or coming up with new ideas or concepts.  To be more precise, we should probably say good new ideas or concepts, since computers can easily be programmed to generate new combinations of preexisting elements like words.  This, however, is not recombinant innovation in any meaningful sense.  It’s closer to the digital equivalent of a hypothetical room full of monkeys banging away randomly on typewriters for a million years and still not reproducing a single play of Shakespeare’s.

Ideation in its many forms is an area today where humans have a comparative advantage over machines.  Scientists come up with new hypotheses.  Journalists sniff out a good story.  Chefs add a new dish to the menu.  Engineers on a factory floor figure out why a machine is no longer working properly.  [Workers at Apple] figure out what kind of tablet computer we actually want.  Many of these activities are supported or accelerated by computers, but none are driven by them.

Picasso’s quote at the head of this chapter is just about half right.  Computers are not useless, but they’re still machines for generating answers, not posing interesting new questions.  That ability still seems to be uniquely human, and still highly valuable.  We predict that people who are good at idea creation will continue to have a comparative advantage over digital labor for some time to come, and will find themselves in demand.  In other words, we believe that employers now and for some time to come will, when looking for talent, follow the advice attributed to the Enlightenment sage Voltaire:  ‘Judge a man by his questions, not his answers.’

Ideation, innovation, and creativity are often described as ‘thinking outside the box,’ and this characterization indicates another large and reasonably sustainable advantage of human over digital labor.  Computers and robots remain lousy at doing anything outside the frame of their programming… (pages 191-192)

Futurist Kevin Kelly:

You’ll be paid in the future based on how well you work with robots.  (page 193)

Brynjolfsson and McAfee sum it up:

So ideation, large-frame pattern recognition, and the most complex forms of communication are cognitive areas where people still seem to have the advantage, and also seem likely to hold onto it for some time to come.  Unfortunately, though, these skills are not emphasized in most educational environments today.  (page 194)

Sociologists Richard Arum and Josipa Roksa have found in their research that many American college students today are not good at critical thinking, written communication, problem solving, and analytic reasoning.  In other words, many college students are not good at ideation, pattern recognition, and complex communication.  Arum and Roksa came to this conclusion after testing college students’ ability to read background documents and write an essay on them.  A major reason for this shortcoming, say Arum and Roksa, is that college students spend only 9 percent of their time studying, while spending 51 percent of their time socializing, recreating, etc.

Brynjolfsson and McAfee emphasize that the future is uncertain:

We have to stress that none of our predictions and recommendations here should be treated as gospel.  We don’t project that computers and robots are going to acquire the general skills of ideation, large-frame pattern recognition, and highly complex communication any time soon, and we don’t think that Moravec’s paradox is about to be fully solved.  But one thing we’ve learned about digital progress is never say never.  Like many other observers, we’ve been surprised over and over as digital technologies demonstrated skills and abilities straight out of science fiction.

In fact, the boundary between uniquely human creativity and machine capabilities continues to change.  Returning to the game of chess, back in 1956, thirteen-year-old child prodigy Bobby Fischer made a pair of remarkably creative moves against grandmaster Donald Byrne.  First he sacrificed his knight, seemingly for no gain, and then exposed his queen to capture.  On the surface, these moves seemed insane, but several moves later, Fischer used these moves to win the game.  His creativity was hailed at the time as the mark of genius.  Yet today, if you program that same position into a run-of-the-mill chess program, it will immediately suggest exactly the moves that Fischer played.  It’s not because the computer has memorized the Fischer-Byrne game, but rather because it searches far enough ahead to see that these moves really do pay off.  Sometimes, one man’s creativity is another machine’s brute-force analysis.

We’re confident that more surprises are in store.  After spending time working with leading technologists and watching one bastion of human uniqueness after another fall before the inexorable onslaught of innovation, it’s becoming harder and harder to have confidence that any given task will be indefinitely resistant to automation.  That means people will need to be more adaptable and flexible in their career aspirations, ready to move on from areas that become subject to automation, and seize new opportunities where machines complement and augment human capabilities.  Maybe we’ll see a program that can scan the business landscape, spot an opportunity, and write up a business plan so good it’ll have venture capitalists ready to invest.  Maybe we’ll see a computer that can write a thoughtful and insightful report on a complicated topic.  Maybe we’ll see an automatic medical diagnostician with all the different kinds of knowledge and awareness of a human doctor.  And maybe we’ll see a computer that can walk up the stairs to an elderly woman’s apartment, take her blood pressure, draw blood, and ask if she’s been taking her medication, all while putting her at ease instead of terrifying her.  We don’t think any of these advances is likely to come any time soon, but we’ve also learned that it’s very easy to underestimate the power of digital, exponential, and combinatorial innovation.  So never say never.  (pages 202-204)

 

POLICY RECOMMENDATIONS

Brynjolfsson and McAfee affirm that Economics 101 still applies because digital labor is still far from a complete substitute for human labor.

For now the best way to tackle our labor force challenges is to grow the economy.  As companies see opportunities for growth, the great majority will need to hire people to seize them.  Job growth will improve, and so will workers’ prospects.  (page 207)

Brynjolfsson and McAfee also note that there is broad agreement among conservative and liberal economists when it comes to the government policies recommended by Economics 101.

(1) Education

The more educated the populace is, the more innovation tends to occur, which leads to more productivity growth and thus faster economic growth.

The educational system can be improved by using technology.  Consider massive open online courses (MOOCs), which have two main economic benefits.

  • The first and most obvious one is that MOOCs enable low-cost replication of the best teachers, content, and methods.  Just as we can all listen to the best pop singer or cellist in the world today, students will soon have access to the most exciting geology demonstrations, the most insightful explanations of Renaissance art, and the most effective exercises for learning statistical techniques.
  • The second, subtler benefit from the digitization of education is ultimately more important.  Digital education creates an enormous stream of data that makes it possible to give feedback to both teacher and student.  Educators can run controlled experiments on teaching methods and adopt a culture of continuous improvement.  (pages 210-211)

Brynjolfsson and McAfee then add:

The real impact of MOOCs is mostly ahead of us, in scaling up the reach of the best teachers, in devising methods to increase the overall level of instruction, and in measuring and finding ways to accelerate student improvement… We can’t predict exactly which methods will be invented and which will catch on, but we do see a clear path for enormous progress.  The enthusiasm and optimism in this space is infectious.  Given the plethora of new technologies and techniques that are now being explored, it’s a certainty that some of them – in fact, we think many of them – will be significant improvements over current approaches to teaching and learning.  (pages 211-212)

On the question of how to improve the educational system – in addition to using technology – it’s what you might expect:  attract better teachers, lengthen school years, have longer school days, and implement a no-excuses philosophy that regularly tests students.  Surprise, surprise:  This is what has helped places like Singapore and South Korea to rank near the top in terms of education.  Of course, while some teachers should focus on teaching testable skills, other teachers should be used to teach hard-to-measure skills like creativity and unstructured problem solving, observe Brynjolfsson and McAfee.

(2)  Startups

Brynjolfsson and McAfee:

We champion entrepreneurship, but not because we think everyone can or should start a company.  Instead, it’s because entrepreneurship is the best way to create jobs and opportunity.  As old tasks get automated away, along with demand for their corresponding skills, the economy must invent new jobs and industries.  Ambitious entrepreneurs are best at this, not well-meaning government leaders or visionary academics.  Thomas Edison, Henry Ford, Bill Gates, and many others created new industries that more than replaced the work that was eliminated as farming jobs vanished over the decades.  The current transformation of the economy creates an equally large opportunity.  (page 214)

Joseph Schumpeter argued that innovation is central to capitalism, and that it’s essentially a recombinant process.  Schumpeter also held that innovation is more likely to take place in startups rather than in incumbent companies.

…Entrepreneurship, then, is an innovation engine.  It’s also a prime source of job growth.  In America, in fact, it appears to be the only thing that’s creating jobs.  In a study published in 2010, Tim Kane of the Kauffman Foundation used Census Bureau data to divide all U.S. companies into two categories:  brand-new startups and existing firms (those that had been around for at least a year).  He found that for all but seven years between 1977 and 2005, existing firms as a group were net job destroyers, losing an average of approximately one million jobs annually.  Startups, in sharp contrast, created on average a net three million jobs per year.  (pages 214-215)

Entrepreneurship in America remains the best in the world, but it appears to have stagnated recently.  One factor may be a decline in immigration:  immigrants have founded a high percentage of American startups, but that flow appears to have slowed.  Excessive regulation also seems to be stymieing startups.

(3)  Job Matching

It should be easier to match people with jobs.  Better databases can be developed.  So can better algorithms for identifying the needed skills.  Ratings like TopCoder scores can provide objective metrics of candidate skills.

(4)  Basic Science

Brynjolfsson and McAfee:

After rising for a quarter-century, U.S. federal government support for basic academic research started to fall in 2005.  This is cause for concern because economics teaches that basic research has large beneficial externalities.  This fact creates a role for government, and the payoff can be enormous.  The Internet, to take one famous example, was born out of U.S. Defense Department research into how to build bomb-proof networks.  GPS systems, touchscreen displays, voice recognition software like Apple’s Siri, and many other digital innovations also arose from basic research sponsored by the government.  It’s pretty safe to say, in fact, that hardware, software, networks, and robots would not exist in anything like the volume, variety, and forms we know today without sustained government funding.  This funding should be continued, and the recent dispiriting trend of reduced federal funding for basic research in America should be reversed.  (pages 218-219)

For some scientific challenges, offering prizes can help:

Many innovations are of course impossible to describe in advance (that’s what makes them innovations).  But there are also cases where we know exactly what we’re looking for and just want somebody to invent it.  In these cases, prizes can be especially effective.  Google’s driverless car was a direct outgrowth of a Defense Advanced Research Projects Agency (DARPA) challenge that offered a one-million-dollar prize for a car that could navigate a specific course without a human driver.  Tom Kalil, Deputy Director for Policy of the United States Office of Science and Technology Policy, provides a great playbook for how to run a prize:

  • Shine a spotlight on a problem or opportunity
  • Pay only for results
  • Target an ambitious goal without predicting which team or approach is most likely to succeed
  • Reach beyond usual suspects to tap top talent
  • Stimulate private-sector investment many times greater than the prize purse
  • Bring out-of-discipline perspectives to bear
  • Inspire risk-taking by offering a level playing field
  • Establish clear target metrics and validation protocols

Over the past decade, the total federal and private funds earmarked for large prizes have more than tripled and now surpass $375 million.  This is great, but it’s just a tiny fraction of overall government spending on research.  There remains great scope for increasing the volume and variety of innovation competitions.  (pages 219-220)

(5)  Upgrade Infrastructure

Brynjolfsson and McAfee write that, like education and scientific research, infrastructure has positive externalities.  That’s why nearly all economists agree that the government should be involved in building and maintaining infrastructure – streets and highways, bridges, ports, dams, airports and air traffic control systems, and so on.

Excellent infrastructure makes a country a more pleasant place to live, and also a more productive place in which to do business.  Ours, however, is not in good shape.  The American Society of Civil Engineers (ASCE) gave the United States an overall infrastructure grade of D+ in 2013, and estimated that the country has a backlog of over $3.6 trillion in infrastructure investment…

Bringing U.S. infrastructure up to an acceptable grade would be one of the best investments the country could make in its own future.  (pages 220-221)

Economists also agree on the importance of increasing the inflow of legal immigrants, especially those who are highly skilled.

Any policy shift advocated by both the libertarian Cato Institute and the progressive Center for American Progress can truly be said to have diverse support.  Such is the case for immigration reform, a range of proposed changes with the broad goal of increasing the number of legal foreign-born workers and citizens in the United States.  Generous immigration policies really are part of the Econ 101 playbook;  there is wide agreement among economists that they benefit not only the immigrants themselves but also the economy of the country they move to.  (page 222)

Brynjolfsson and McAfee continue:

…Since 2007, it appears that net illegal immigration to the United States is approximately zero, or actually negative.  And a study by the Brookings Institution found that highly educated immigrants now outnumber less educated ones;  in 2010, 30 percent had at least a college education, while only 28 percent lacked the equivalent of a high school degree.

Entrepreneurship in America, particularly in technology-intensive sectors of the economy, is fueled by immigration to an extraordinary degree… As economist Michael Kremer demonstrated in a now classic paper, increasing the number of immigrant engineers actually leads to higher, not lower, wages for native-born engineers because immigrants help creative ecosystems flourish.  It’s no wonder that wages are higher for good software designers in Silicon Valley, where they are surrounded by others with similar and generally complementary skills, than in more isolated parts of the world.

Today, immigrants are having this large and beneficial effect on the country not because of America’s processes and policies but often despite them.  Immigration to the United States is often described as slow, complex, inefficient, and highly bureaucratic… (pages 222-223)

A green card should be stapled to every advanced diploma awarded to an immigrant, say Brynjolfsson and McAfee.  Furthermore, a separate ‘startup visa’ category should be created to make it easier for entrepreneurs – especially those who have already attracted funding – to launch their ventures in the United States.

(6)  Tax Wisely

Obviously we should tax pollution, which is a negative externality.  Same goes for things like traffic congestion.  Singapore has implemented an Electronic Road Pricing System that has virtually eliminated congestion, note the authors.

Land could also be taxed more heavily, as could government-owned oil and gas leases.  Finally, the top marginal income tax rate could be raised without harming the economy.

 

LONG-TERM RECOMMENDATIONS

Voltaire:

Work saves a man from three great evils:  boredom, vice, and need.

Brynjolfsson and McAfee first point out that technological progress shouldn’t be opposed.  Productivity growth is central to economic growth.  Overall, things continue to get better.  So we should encourage ongoing innovation and deal with the associated challenges as they come up.

We are also skeptical of efforts to come up with fundamental alternatives to capitalism.  By ‘capitalism’ here, we mean a decentralized economic system of production and exchange in which most of the means of production are in private hands (as opposed to belonging to the government), where most exchange is voluntary (no one can force you to sign a contract against your will), and where most goods have prices that vary based on relative supply and demand instead of being fixed by a central authority.  All of these features exist in most economies around the world today.  Many are even in place in today’s China, which is still officially communist.

These features are so widespread because they work so well.  Capitalism allocates resources, generates innovation, rewards effort, and builds affluence with high efficiency, and these are extraordinarily important things to do well in a society.  As a system, capitalism is not perfect, but it’s far better than the alternatives.  Winston Churchill said that, ‘Democracy is the worst form of government except for all those others that have been tried.’  We believe the same about capitalism.  (page 231)

What is likely to change, though, remark Brynjolfsson and McAfee, are our concepts of income and money.

The idea of a basic income is that everyone receives enough income to afford a minimum standard of living.  People are free to improve on it by working, investing, starting a company, or other such activities.  The English-American political activist Thomas Paine argued for a form of basic income.  Later supporters have included the philosopher Bertrand Russell and the civil rights leader Martin Luther King, Jr.

Many economists on both the left and the right have agreed with King.  Liberals including James Tobin, Paul Samuelson, and John Kenneth Galbraith and conservatives like Milton Friedman and Friedrich Hayek have all advocated income guarantees in one form or another, and in 1968 more than 1,200 economists signed a letter in support of the concept addressed to the U.S. Congress.

The president elected that year, Republican Richard Nixon, tried throughout his first term in office to enact it into law.  In a 1969 speech he proposed a Family Assistance Plan that had many features of a basic income program.  The plan had support across the ideological spectrum, but it also faced a large and diverse group of opponents.  (page 233)

In any case, basic income – especially on its own – is not the answer.  In terms of Voltaire’s three evils, a basic income saves a person from need, but not from boredom or vice.  Work is extremely important for human beings.  Brynjolfsson and McAfee mention that Daniel Pink, in Drive, identifies three major motivations:  mastery, autonomy, and purpose.

It seems that all around the world, people want to escape the evils of boredom, vice, and need and instead find mastery, autonomy, and purpose by working.  (page 235)

Work gives a great many individuals their sense of meaning.  What’s true for individuals is also true for communities.  Research has shown that people are happier and better off in communities where people work.

Brynjolfsson and McAfee then point out that economists have developed reliable ways to encourage and reward work.  Moreover, innovators and entrepreneurs are developing technologies that not only substitute for human labor, but also complement it.  The bottom line is that we should continue to try to create and maintain as many jobs as possible.

Perhaps a better way to help the poor is a ‘negative income tax,’ which the conservative economist Milton Friedman suggested.  Say the negative income tax rate is 50% and the cutoff is $3,000 of income – Friedman’s example from 1968.  Someone making exactly $3,000 (in 1968 dollars) would neither pay a tax nor receive a payment.  A person making only $1,000 would receive 50% of the $2,000 shortfall – an additional $1,000 – for a total of $2,000.  If the same person made $2,000, they would receive an additional $500, for a total of $2,500.  Overall, the negative income tax combines a guaranteed minimum income with an incentive to work, since every extra dollar earned still increases total income.
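The arithmetic reduces to a simple formula:  the payment equals the rate times the shortfall below the cutoff.  Here is a minimal sketch using the numbers from the example above (the function and code are mine, not Friedman’s):

```python
# Negative income tax:  the government pays a fraction of the shortfall
# between a person's income and the cutoff (1968 example numbers from above).

def negative_income_tax(income, cutoff=3000, rate=0.5):
    """Return the payment received under a negative income tax."""
    shortfall = max(cutoff - income, 0)
    return rate * shortfall

for income in (0, 1000, 2000, 3000):
    payment = negative_income_tax(income)
    print(f"earned ${income}, payment ${payment:.0f}, total ${income + payment:.0f}")
# earned $0, payment $1500, total $1500
# earned $1000, payment $1000, total $2000
# earned $2000, payment $500, total $2500
# earned $3000, payment $0, total $3000
```

Note that earning an extra dollar always raises total income (by fifty cents, net of the reduced payment), which is where the work incentive comes from.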

Brynjolfsson and McAfee also point out that taxes on labor are not ideal because they discourage work.  Of course, we need some income taxes.  But it may be possible to rely more on other kinds of taxes – including Pigovian taxes on pollution and other negative externalities, consumption taxes, and the value-added tax (VAT).  With a VAT, a company is taxed on the value it adds – the difference between the prices it charges customers and the cost of the inputs it buys from other firms.  A VAT is easy to collect, and it’s adjustable and lucrative, observe the authors.  The United States is the only one of the thirty-four OECD countries without a VAT.
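As a rough sketch of the mechanics – simplified, since real VAT systems work through invoices and input-tax credits, and the numbers below are invented – each firm in a supply chain remits tax on its own value added, and the amounts sum to the VAT rate times the final retail price:

```python
# Simplified VAT illustration with invented numbers:  each firm remits tax on
# its value added, i.e., sales price minus inputs bought from other firms.

VAT_RATE = 0.10

# (firm, cost of purchased inputs, sales price) along a supply chain
chain = [
    ("farmer", 0, 40),     # grows wheat
    ("miller", 40, 70),    # buys wheat, sells flour
    ("baker", 70, 100),    # buys flour, sells bread
]

total_vat = 0.0
for firm, inputs, sales in chain:
    vat = VAT_RATE * (sales - inputs)
    total_vat += vat
    print(f"{firm}: value added {sales - inputs}, VAT {vat:.2f}")

print(f"total VAT collected: {total_vat:.2f}")  # equals 10% of the final $100 price
```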

 

TECHNOLOGY AND THE FUTURE

Brynjolfsson and McAfee:

After surveying the landscape, we are convinced that we are at an inflection point – the early stages of a shift as profound as that brought on by the Industrial Revolution.  Not only are the new technologies exponential, digital, and combinatorial, but most of the gains are still ahead of us…

Our generation will likely have the good fortune to experience two of the most amazing events in history:  the creation of true machine intelligence and the connection of all humans via a common digital network, transforming the planet’s economics.  Innovators, entrepreneurs, scientists, tinkerers, and many other types of geeks will take advantage of this cornucopia to build technologies that astonish us, delight us, and work for us.  Over and over again, they’ll show how right Arthur C. Clarke was when he observed that a sufficiently advanced technology can be indistinguishable from magic.  (page 251)

Material needs and wants will become less important over time.  Brynjolfsson and McAfee:

We will increasingly be concerned with questions about catastrophic events, genuine existential risks, freedom versus tyranny, and other ways that technology can have unintended or unexpected side effects…

Until recently, our species did not have the ability to destroy itself.  Today it does.  What’s more, that power will reach the hands of more and more individuals as technologies become both more powerful and cheaper – and thus more ubiquitous.  Not all of those individuals will be both sane and well intentioned.  As Bill Joy and others have noted, genetic engineering and artificial intelligence can create self-replicating entities.  That means that someone working in a basement laboratory might someday use one of these technologies to unleash destructive forces that affect the entire planet.  The same scientific breakthroughs in genome sequencing that can be used to cure disease can also be used to create a weaponized version of the smallpox virus.  Computer programs can also self-replicate, becoming digital viruses, so the same global network that spreads ideas and innovations can also spread destruction.  The physical limits on how much damage any individual or small group could do are becoming less and less constrained.  Will our ability to detect and counteract destructive uses of technology advance rapidly enough to keep us safe?  That will be an increasingly important question to answer.  (pages 252-253)

Is the Singularity Near?

In utopian versions of digital consciousness, we humans don’t fight with machines;  we join with them, uploading our brains into the cloud and otherwise becoming part of a ‘technological singularity.’  This is a term coined in 1983 by science-fiction author Vernor Vinge, who predicted that, ‘We will soon create intelligences greater than our own… When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will move far beyond our understanding.’

Progress towards such a singularity, Vinge and others have argued, is driven by Moore’s Law.  Its accumulated doubling will eventually yield a computer with more processing and storage capacity than the human brain.  Once this happens, things become highly unpredictable.  Machines could become self-aware, humans and computers could merge seamlessly, or other fundamental transitions could occur… (pages 254-255)
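The ‘accumulated doubling’ argument is just compounding.  As a back-of-the-envelope sketch – every number below is a hypothetical placeholder, not an estimate from the book – if capacity doubles every two years, a millionfold gap closes in roughly forty years, since 2^20 is about one million:

```python
# Back-of-the-envelope compounding behind the "accumulated doubling" argument.
# All numbers are hypothetical placeholders, not estimates from the book.

import math

doubling_period_years = 2.0
capability_gap = 1_000_000  # assumed ratio of target capacity to today's machines

doublings_needed = math.log2(capability_gap)           # about 19.9 doublings
years_needed = doublings_needed * doubling_period_years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years at one doubling per {doubling_period_years:g} years: {years_needed:.0f}")
```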

As for when such a singularity might happen, we simply don’t know.  Many have predicted that it will occur in 2050 or later.  But as Brynjolfsson and McAfee remind us, with all things digital, never say never.  If a supercomputer learns to rewrite its own source code repeatedly – and thus to evolve rapidly – then what?

However, note Brynjolfsson and McAfee, science-fiction portrayals of supercomputers and autonomous cars can be misleading:

…We humans build machines to do things that we see being done in the world by animals and people, but we typically don’t build them the same way that nature built us.  As AI trailblazer Frederick Jelinek put it beautifully, ‘Airplanes don’t flap their wings.’

It’s true that scientists, engineers, and other innovators often take cues from biology as they’re working, but it would be a mistake to think that this is always the case, or that major recent AI advances have come about because we’re getting better at mimicking human thought.  Journalist Stephen Baker spent a year with the Watson team to research his book Final Jeopardy!.  He found that, ‘The IBM team paid little attention to the human brain while programming Watson.  Any parallels to the brain are superficial, and only the result of chance.’

As we were researching this book we heard similar sentiments from most of the innovators we talked to.  Most of them weren’t trying to unravel the mysteries of human consciousness or understand exactly how we think;  they were trying to solve problems and seize opportunities.  As they did so, they sometimes came up with technologies that had human-like skills and abilities.  But these tools themselves were not like humans at all.  Current AI, in short, looks intelligent, but it’s an artificial resemblance.  That might change in the future.  We might start to build digital tools that more closely mimic our minds, perhaps even drawing on our rapidly improving capabilities for scanning and mapping brains.  And if we do so, those digital minds will certainly augment ours and might even eventually merge with them, or become self-aware on their own.  (pages 255-256)

Brynjolfsson and McAfee remain optimistic about the future:

Even in the face of all these challenges  – economic, infrastructural, biological, societal, and existential – we’re still optimistic.  To paraphrase Martin Luther King, Jr., the arc of history is long but it bends towards justice.  We think the data support this.  We’ve seen not just vast increases in wealth but also, on the whole, more freedom, more social justice, less violence, and less harsh conditions for the least fortunate and greater opportunities for more and more people.

Of course, our values and choices will determine our future:

In the second machine age, we need to think much more deeply about what it is we really want and what we value, both as individuals and as a society.  Our generation has inherited more opportunities to transform the world than any other.  That’s a cause for optimism, but only if we’re mindful of our choices.  (page 257)

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here: http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
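As a purely illustrative sketch – the fund’s actual ranking criteria and weighting scheme are not spelled out here, so everything below is hypothetical – rank-based position sizing can be expressed as weights proportional to rank:

```python
# Hypothetical sketch of rank-based position sizing (not the fund's actual model).
# With about 10 positions, one simple scheme -- weights proportional to rank --
# happens to land near the ranges mentioned above.

def rank_weights(n_positions):
    """Weight the i-th ranked position (i = 0 is highest) in proportion to n - i."""
    raw = [n_positions - i for i in range(n_positions)]
    total = sum(raw)
    return [100.0 * r / total for r in raw]

weights = rank_weights(10)
print(f"largest position: {weights[0]:.1f}%")                   # ~18.2%
print(f"average position: {sum(weights) / len(weights):.1f}%")  # 10.0%
```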

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Most Important Thing Illuminated

(Image:  Zen Buddha Silence by Marilyn Barbone.)

October 8, 2017

The Most Important Thing Illuminated (Columbia Business School, 2013) is an update of Howard Marks’ outstanding book on value investing, The Most Important Thing (2011).  The revision includes the original text plus comments from top value investors Christopher Davis, Joel Greenblatt, and Seth Klarman.  There are also notes from Howard Marks himself and from Columbia professor Paul Johnson.

The sections covered here are:

  • Second-Level Thinking
  • Understanding Market Efficiency
  • Value
  • The Relationship Between Price and Value
  • Understanding Risk
  • Recognizing Risk
  • Controlling Risk
  • Being Attentive to Cycles
  • Combating Negative Influences
  • Contrarianism
  • Finding Bargains
  • Patient Opportunism
  • Knowing What You Don’t Know
  • Appreciating the Role of Luck
  • Investing Defensively
  • Reasonable Expectations

 

SECOND-LEVEL THINKING

Nearly everyone can engage in first-level thinking, which is fairly simplistic.  But few can engage in second-level thinking.  Second-level thinking incorporates a variety of considerations, says Marks:

  • What is the range of likely future outcomes?
  • Which outcome do I think will occur?
  • What’s the probability I’m right?
  • What does the consensus think?
  • How does my expectation differ from the consensus?
  • How does the current price of the asset comport with the consensus view of the future, and with mine?
  • Is the consensus psychology that’s incorporated in the price too bullish or too bearish?
  • What will happen to the asset’s price if the consensus turns out to be right, and what if I’m right?

In order to do better than the market index, you must have an unconventional approach that works.  Joel Greenblatt comments:

The idea is that agreeing with the broad consensus, while a very comfortable place for most people to be, is not generally where above-average profits are found.  (page 7)

You can do better than the market over time if you use a proven method for betting against the consensus.  One way to achieve this is to use a quantitative value investing strategy, which – for most of us – will produce better long-term results than trying to pick individual stocks.

 

UNDERSTANDING MARKET EFFICIENCY

Market prices are generally efficient and incorporate relevant information.  Assets sell at prices that offer fair risk-adjusted returns relative to other assets.  Marks says:

I agree that because investors work hard to evaluate every new piece of information, asset prices immediately reflect the consensus view of the information’s significance.  I do not, however, believe the consensus view is necessarily correct.  In January 2000, Yahoo sold at $237.  In April 2001 it was at $11.  Anyone who argues that the market was right both times has his or her head in the clouds;  it has to have been wrong on at least one of those occasions.  But that doesn’t mean many investors were able to detect and act on the market’s error.  (page 9)

Marks then explains:

The bottom line for me is that, although the more efficient markets often misvalue assets, it’s not easy for any one person – working with the same information as everyone else and subject to the same psychological influences – to consistently hold views that are different from the consensus and closer to being correct.

That’s what makes the mainstream markets awfully hard to beat – even if they aren’t always right.  (page 10)

Moreover, notes Marks, some asset classes are rather efficient.  In most of these:

  • the asset class is widely known and has a broad following;
  • the class is socially acceptable, not controversial or taboo;
  • the merits of the class are clear and comprehensible, at least on the surface; and
  • information about the class and its components is distributed widely and evenly.

The Boole Microcap Fund is a quantitative value fund focused on micro caps.  Micro caps – because they are largely either ignored or misunderstood – are far more inefficient than small caps, mid caps, and large caps.  See: http://boolefund.com/best-performers-microcap-stocks/

Value investing – properly applied – is a way to invest systematically in underpriced stocks.  For details, see: http://boolefund.com/notes-on-value-investing/

Joel Greenblatt explains why value investing works:

Investments that are out of favor, that don’t look so attractive in the near term, are avoided by most professionals, who feel the need to add performance right now.  (page 17)

Marks decided to focus his career on distressed debt because it was a noticeably less efficient asset class.

 

VALUE

Marks points out that you can either look at the fundamental attributes of the company – such as earnings and cash flows – or you can look at the associated stock price, and how it has moved in the past.  Value investing is the systematic purchase of businesses below their likely intrinsic values.

When you buy stock, you become a part owner of the underlying business.  So you would like to figure out what the business is worth, and then pay a price well below that.  Imagine if you were going to buy a laundromat or a farm.  You would want to figure out how much it earned in a normal year.  And you would want to estimate any future growth in those earnings.

  • Many businesses are difficult to value.  The trick, says Buffett, is to stay in your circle of competence:  If you focus on those businesses that you can value, you have a chance to find a few investments that will beat the market.  There are thousands of tiny businesses (public and private) – like laundromats – that you probably can value.

For most of us, a more reliable way to beat the market is by adopting a quantitative value strategy, which systematically buys stocks below intrinsic value, on average.  Lakonishok, Shleifer, and Vishny give a good explanation of quantitative value investing in their 1994 paper, “Contrarian Investment, Extrapolation, and Risk.”  Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf
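As a minimal sketch of the idea – not LSV’s exact methodology; the tickers and fundamentals below are hypothetical – a quantitative value screen might rank stocks on several cheapness metrics at once and buy the cheapest slice of the universe:

```python
# Minimal sketch of a multi-metric value screen in the spirit of LSV (1994).
# Not their exact methodology; tickers and fundamentals are hypothetical.

universe = {
    # ticker: {"pe": price-to-earnings, "pcf": price-to-cash flow}
    "AAA": {"pe": 6.0,  "pcf": 4.0},
    "BBB": {"pe": 25.0, "pcf": 18.0},
    "CCC": {"pe": 9.0,  "pcf": 5.5},
    "DDD": {"pe": 14.0, "pcf": 11.0},
}

def combined_value_rank(data):
    """Average each stock's rank on P/E and P/CF; lower combined rank = cheaper."""
    ranks = {t: 0.0 for t in data}
    for metric in ("pe", "pcf"):
        ordered = sorted(data, key=lambda t: data[t][metric])
        for i, t in enumerate(ordered):
            ranks[t] += i / 2.0
    return sorted(data, key=lambda t: ranks[t])

cheapest_first = combined_value_rank(universe)
print("cheapest to most expensive:", cheapest_first)
# A deep value portfolio would buy the cheapest slice (e.g., the bottom decile
# of a real universe) and hold for several years to let mean reversion play out.
```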

If you do spend time analyzing individual businesses that might be good long-term investments, then another trick is to find companies that have a sustainable competitive advantage.  Buffett uses the term moat.  A business with a moat has a sustainably high ROE (return on equity), which can make for a rewarding long-term investment if you pay a reasonable price.  See: http://boolefund.com/notes-on-value-investing/

Marks distinguishes between value and growth.

  • For many value investors, including Buffett, the future growth of a company’s cash flows is simply a component of its value today.

Marks points out that some investors look for a business that can grow a great deal in the future;  other investors focus on a business’s value today and on buying well below that value.  Marks comments that the “value” approach is more consistent, while the “growth” approach – when it works – can lead to more dramatic results.  Marks identifies himself as a value investor because he cherishes consistency above drama.

For value investing to work, not only do you have to buy consistently below intrinsic value;  but you also have to hold each stock long enough for the stock price to approach intrinsic value.  This can often take 3 to 5 years.  Meanwhile, you are very likely to be down from your initial purchase price, as Greenblatt explains:

Unless you buy at the exact bottom tick (which is next to impossible), you will be down at some point after you make every investment.  (page 26)

It’s challenging to own shares of a business that remains out of favor for an extended period of time.  One advantage of a quantitative value strategy is that it’s largely (or entirely) automated, thereby minimizing psychological errors.

 

THE RELATIONSHIP BETWEEN PRICE AND VALUE

Marks explains:

For a value investor, price has to be the starting point.  It has been demonstrated time and time again that no asset is so good that it can’t become a bad investment if bought at too high a price.  And there are few assets so bad that they can’t be a good investment when bought cheap enough.  (page 29)

Marks later adds:

Investor psychology can cause a security to be priced just about anywhere in the short run, regardless of its fundamentals.  (page 32)

Overpriced investments are often “priced for perfection.”  Investors frequently overpay, only to discover later that the investment is not perfect after all.  By contrast, hated investments are often low risk:

The safest and most potentially profitable thing is to buy something when no one likes it.  Given time, its popularity, and thus its price, can only go one way:  up.  (page 33)

 

UNDERSTANDING RISK

Marks quotes Elroy Dimson:

Risk means more things can happen than will happen.  (page 39)

Because the market is mostly efficient, riskier investments have to offer higher potential returns in order to attract capital.  However, writes Marks, riskier investments don’t always produce higher returns;  otherwise they wouldn’t be riskier.  In other words, riskier investments involve greater uncertainty:  some possible scenarios – each with some probability of occurring – involve lower returns or even a loss.

Following Buffett and Munger, Marks defines risk as the potential for permanent loss, which must be compared to the potential gain.  Risk is not volatility per se, but the possibility of downward volatility where the price never rebounds.

Like other value investors, Marks believes that the lower the price you pay relative to intrinsic value, the higher the potential return:

Theory says high return is associated with high risk because the former exists to compensate for the latter.  But pragmatic value investors feel just the opposite:  They believe high return and low risk can be achieved simultaneously by buying things for less than they’re worth.  In the same way, overpaying implies both low return and high risk.

Dull, ignored, possibly tarnished and beaten-down securities – often bargains exactly because they haven’t been performing well – are often the ones value investors favor for high returns.  Their returns in bull markets are rarely at the top of the heap, but their performance is generally excellent on average, more consistent than that of ‘hot’ stocks and characterized by low variability, low fundamental risk and smaller losses when markets do badly.  (pages 47-48)

Risk is ultimately a subjective measure, says Marks.  People have different time horizons and different concerns (for instance, worrying about trailing a benchmark versus worrying about a permanent loss).  Marks quotes Graham and Dodd:

…the relation between different kinds of investments and the risk of loss is entirely too indefinite, and too variable with changing conditions, to permit of sound mathematical formulation.

Risk is just as uncertain after the fact, notes Marks:

A few years ago, while considering the difficulty of measuring risk prospectively, I realized that because of its latent, nonquantitative and subjective nature, the risk of an investment – defined as the likelihood of loss – can’t be measured in retrospect any more than it can a priori.

Let’s say you make an investment that works out as expected.  Does that mean it wasn’t risky?  Maybe you buy something for $100 and sell it a year later for $200.  Was it risky?  Who knows?  Perhaps it exposed you to great potential uncertainties that didn’t materialize.  Thus, its real riskiness might have been high.  Or let’s say the investment produces a loss.  Does that mean it was risky?  Or that it should have been perceived as risky at the time it was analyzed and entered into?

If you think about it, the response to these questions is simple:  The fact that something – in this case, loss – happened, doesn’t mean it was bound to happen, and the fact that something didn’t happen doesn’t mean it was unlikely.  (page 50)

It’s essential to model the future based on possible scenarios:

The possibility of a variety of outcomes means we mustn’t think of the future in terms of a single result but rather as a range of possibilities.  The best we can do is fashion a probability distribution that summarizes the possibilities and describes their relative likelihood.  We must think about the full range, not just the ones that are most likely to materialize.  (page 52)
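To make this concrete, here is a toy scenario analysis – every scenario, probability, and return below is invented for illustration.  Instead of acting on the single most likely outcome, we weight all of the outcomes, including the unlikely bad ones:

```python
# Toy scenario analysis:  think in terms of a probability distribution of
# outcomes rather than a single "most likely" result.  Numbers are invented.

scenarios = [
    # (name, probability, return on the investment)
    ("boom",      0.25,  0.40),
    ("base case", 0.55,  0.10),
    ("recession", 0.15, -0.20),
    ("crisis",    0.05, -0.60),
]

assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1

expected_return = sum(p * r for _, p, r in scenarios)
prob_of_loss = sum(p for _, p, r in scenarios if r < 0)
worst_case = min(r for _, _, r in scenarios)

print(f"expected return: {expected_return:+.1%}")    # +9.5%
print(f"probability of a loss: {prob_of_loss:.0%}")  # 20%
print(f"worst case: {worst_case:+.0%}")              # -60%
```

The most likely scenario here is a modest gain, yet the full distribution reveals a one-in-five chance of losing money and a small chance of a severe loss – exactly the information that the two mistakes below throw away.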

Many investors make two related mistakes:

  • Assuming that the most likely scenario is certain;
  • Not imagining all possible scenarios, even highly unlikely ones (whether good or bad).

Marks describes investment results as follows:

For the most part, I think it’s fair to say that investment performance is what happens when a set of developments – geopolitical, macro-economic, company-level, technical and psychological – collide with an extant portfolio.  Many futures are possible, to paraphrase Dimson, but only one future occurs.  The future you get may be beneficial to your portfolio or harmful, and that may be attributable to your foresight, prudence or luck.  (page 54)

Marks refers to Nassim Taleb’s concept of “alternative histories.”  How your portfolio performs under the scenario that actually unfolds doesn’t tell you how it would have done under other possible scenarios.

As humans, we are subject to a set of related cognitive biases.  See: http://boolefund.com/cognitive-biases/

Hindsight bias causes us to view the past as much more predictable than it actually was.  The brain changes its own memories:

  • If some possible event actually happens, our brains tend to think, “I always thought that was likely.”
  • If some possible event doesn’t happen, our brains tend to think, “I always thought that was unlikely.”

Because we view the past as more predictable than it actually was, we also tend to view the future as more predictable than it actually is.  This tendency leaves us feeling comforted – and usually overconfident – about our ability to foresee what comes next.

Hindsight bias not only makes us overconfident about the future.  It also feeds into confirmation bias, which causes us to search for, remember, and interpret information in a way that confirms our pre-existing beliefs or hypotheses.

Thus, one of the most important mental habits for us to develop – as investors and in general – is always to seek disconfirming evidence for our hypotheses.  The more we like a hypothesis, the more important it is to look for disconfirming evidence.

Charlie Munger mentions Charles Darwin in “The Psychology of Human Misjudgment” (see Poor Charlie’s Almanack: The Wit and Wisdom of Charles T. Munger, expanded 3rd edition):

One of the most successful users of an antidote to first conclusion bias was Charles Darwin.  He trained himself, early, to intensively consider any evidence tending to disconfirm any hypothesis of his, more so if he thought his hypothesis was a particularly good one… He provides a great example of psychological insight correctly used to advance some of the finest mental work ever done. 

Munger sums up the lesson thus:

Any year in which you don’t destroy a best-loved idea is probably a wasted year.

 

RECOGNIZING RISK

Marks:

Recognizing risk often starts with understanding when investors are paying it too little heed, being too optimistic and paying too much for an asset as a result.  High risk, in other words, comes primarily from high prices.  Whether it be an individual security or other asset that is overrated and thus overpriced, or an entire market that’s been borne aloft by bullish sentiment and thus is sky-high, participating when prices are high rather than shying away is the main source of risk.  (page 58)

Marks interjects a comment:

Too-high prices come from investor psychology that’s too positive, and too-high investor sentiment often stems from a dearth of risk aversion.  Risk-averse investors are conscious of the potential for loss and demand compensation for bearing it – in the form of reasonable prices.  When investors aren’t sufficiently risk-averse, they’ll pay prices that are too high.  (page 59)

Christopher Davis points out that there are more traffic fatalities among drivers and passengers of SUVs.  Because SUV drivers feel safer, they take more risks behind the wheel.  Similarly, most of us, as investors, feel more confident and less worried when prices have been rising for an extended period.  Since prices that are too high are the main source of investment risk, we have to learn to overcome our psychological tendencies.  Marks elucidates:

The risk-is-gone myth is one of the most dangerous sources of risk, and a major contributor to any bubble.  At the extreme of the pendulum’s upswing, the belief that risk is low and that the investment in question is sure to produce profits intoxicates the herd and causes its members to forget caution, worry, and fear of loss, and instead to obsess about the risk of missing opportunity.  (page 62)

Marks again:

Investment risk comes primarily from too-high prices, and too-high prices often come from excessive optimism and inadequate skepticism and risk aversion.  Contributing underlying factors can include low prospective returns on safer investments, recent good performance by risky ones, strong inflows of capital, and easy availability of credit.  The key lies in understanding what impact things like these are having.  (page 63)

Investors generally overvalue what seems to have low risk, while undervaluing what seems to have high risk:

  • When everyone believes something is risky, their unwillingness to buy usually reduces its price to the point where it’s not risky at all.  Broadly negative opinion can make it the least risky thing, since all optimism has been driven out of its price.
  • And, of course, as demonstrated by the experience of Nifty Fifty investors, when everyone believes something embodies no risk, they usually bid it up to the point where it’s enormously risky.  No risk is feared, and thus no reward for risk-bearing – no ‘risk premium’ – is demanded or provided.  That can make the thing that’s most esteemed the riskiest.  (page 69)

The reason for this paradox, says Marks, is that most investors believe that quality, not price, determines whether an asset is risky.  However, low quality assets can be safe if their prices are low enough, while high quality assets can be risky if their prices are too high.  Chris Davis adds:

I agree – there are a number of dangers that come from using a term like ‘quality.’  First, investors tend to equate ‘high-quality asset’ with ‘high-quality investment.’  As a result, there’s an incorrect presumption or implication of less risk when taking on ‘quality’ assets.  As Marks rightly points out, quite often ‘high-quality’ companies sell for high prices, making them poor investments.  Second, ‘high-quality’ tends to be a phrase that incorporates a lot of hindsight bias or ‘halo effect.’  Usually, people referring to a ‘high-quality’ company are describing a company that has performed very well in the past.  The future is often quite different.  There is a long list of companies that were once described as ‘high quality’ or ‘built to last’ that are no longer around!  For this reason, investors should avoid using the word ‘quality.’  (pages 69-70)

 

CONTROLLING RISK

Risk control is generally invisible during good times.  But that doesn’t mean it isn’t desirable, says Marks.  No one can consistently predict the timing of bull markets or bear markets.  Therefore, risk control is always important, even during long bull markets.  Marks:

Bearing risk unknowingly can be a huge mistake, but it’s what those who buy the securities that are all the rage and most highly esteemed at a particular point in time – to which ‘nothing bad can possibly happen’ – repeatedly do.  On the other hand, intelligent acceptance of recognized risk for profit underlies some of the wisest, most profitable investments – even though (or perhaps due to the fact that) most investors dismiss them as dangerous speculations.  (page 75)

Marks later writes:

Even if we realize that unusual, unlikely things can happen, in order to act we make reasoned decisions and knowingly accept that risk when well paid to do so.  Once in a while, a ‘black swan’ will materialize.  But if in the future we always said, ‘We can’t do such-and-such, because the outcome could be worse than we’ve ever seen before,’ we’d be frozen in inaction.  (page 79)

You can’t avoid risk altogether as an investor, or you’d get no return.  Therefore, you have to take risks intelligently, when you’re well paid to do so.  Marks concludes:

Over a full career, most investors’ results will be determined more by how many losers they have, and how bad they are, than by the greatness of their winners.  (page 80)

Daniel Pecaut and Corey Wrenn, in The University of Berkshire Hathaway, point out a central fact about how Buffett and Munger have achieved such a remarkable track record:

More than two-thirds of Berkshire’s performance over the S&P was earned during down years.  This is the fruit of Buffett and Munger’s ‘Don’t lose’ philosophy.  It’s the losing ideas avoided, as much as the money made in bull markets that has built Berkshire’s superior wealth over the long run.  (page xxi)

Buffett has made the same point.  His best ideas have not outperformed the best ideas of other great value investors.  However, his worst ideas have been less bad – and have lost less over time – than the worst ideas of other top value investors.

See: http://boolefund.com/university-berkshire-hathaway/

 

BEING ATTENTIVE TO CYCLES

Marks explains how the credit cycle works when times are good:

  • The economy moves into a period of prosperity.
  • Providers of capital thrive, increasing their capital base.
  • Because bad news is scarce, the risks entailed in lending and investing seem to have shrunk.
  • Risk averseness disappears.
  • Financial institutions move to expand their businesses – that is, to provide more capital.
  • They compete for market share by lowering demanded returns (e.g., cutting interest rates), lowering credit standards, providing more capital for a given transaction and easing covenants.  (page 83)

This is a cyclical process.  Overconfidence based on recent history leads to the disappearance of risk aversion.  Providers of capital make bad loans.  This causes the cycle to reverse:

  • Losses cause lenders to become discouraged and shy away.
  • Risk averseness rises, and along with it, interest rates, credit restrictions and covenant requirements.
  • Less capital is made available – and at the trough of the cycle, only to the most qualified of borrowers, if anyone.
  • Companies become starved for capital.  Borrowers are unable to roll over their debts, leading to defaults and bankruptcies.
  • This process contributes to and reinforces the economic contraction.  (page 84)

People and financial institutions become overly pessimistic based on recent history, which leads to excessive risk aversion.  Many solid loans are not made.  This causes the cycle to reverse again.

Marks, in agreement with Lakonishok, Shleifer, and Vishny (1994), explains why value investing can continue to work:

Investors will overvalue companies when they’re doing well and undervalue them when things get difficult.  (page 86)

Marks:

When things are going well, extrapolation introduces great risk.  Whether it’s company profitability, capital availability, price gains, or market liquidity, things that inevitably are bound to regress toward the mean are often counted on to improve forever.  (page 87)

It’s important to point out that there can be structural changes in the economy and the stock market.  For instance, interest rates may stay relatively low for a long time, in which case stocks may even be cheap today (with the S&P 500 Index over 2400).

Also, profit margins may be structurally higher:

  • There is a good Barron’s interview with Bruce Greenwald, “Channeling Graham and Dodd.”  Professor Greenwald indicated that Apple, Alphabet, Microsoft, Amazon, and Facebook – the five largest U.S. companies – have far higher normalized profit margins and ROE, as a group, than most large U.S. companies in history.
  • In brief, software and related technologies are becoming much more important in the global economy.  This is another key reason why U.S. stocks may not be overvalued, and may even be cheap.  See:  http://www.barrons.com/articles/bruce-greenwald-channeling-graham-and-dodd-1494649404

 

COMBATING NEGATIVE INFLUENCES

Marks discusses the importance of psychology:

The desire for more, the fear of missing out, the tendency to compare against others, the influence of the crowd and the dream of a sure thing – these factors are near universal.  Thus they have a profound collective impact on most investors and most markets.  The result is mistakes, and those mistakes are frequent, widespread, and recurring.  (page 97)

Marks observes that the biggest mistakes in investing are not analytical or informational, but psychological.  At the extremes, people get too greedy or too fearful:

Greed is an extremely powerful force.  It’s strong enough to overcome common sense, risk aversion, prudence, caution, logic, memory of painful past lessons, resolve, trepidation, and all the other elements that might otherwise keep investors out of trouble.  Instead, from time to time greed drives investors to throw in their lot with the crowd in pursuit of profit, and eventually they pay the price.

The counterpart of greed is fear – the second psychological factor we must consider.  In the investment world, the term doesn’t mean logical, sensible risk aversion.  Rather fear – like greed – connotes excess.  Fear, then, is more like panic.  Fear is overdone concern that prevents investors from taking constructive action when they should.  (page 99)

The third factor Marks mentions is the willing suspension of disbelief.  We are all prone to overconfidence, and in general, we think we’re better than we actually are.  Charlie Munger quotes Demosthenes:

Nothing is easier than self-deceit.  For what each man wishes, that he also believes to be true.

Or as the physicist Richard Feynman put it:

The first principle is that you must not fool yourself, and you are the easiest person to fool.

Marks later quotes Warren Buffett’s remark to Congress on June 2, 2010:

Rising prices are a narcotic that affects the reasoning power up and down the line.

The fourth psychological tendency is conformity with the crowd.  Swarthmore’s Solomon Asch conducted a famous experiment in the 1950’s.  The subject is shown two lines of obviously different lengths.  There are a few other people – shills – pretending to be subjects.

All the participants are asked if the lines are the same length.  (In fact, they obviously aren’t.)  The shills all say yes.  In a high percentage of the cases, the actual subject of the experiment disregards the obvious evidence of his own eyes and conforms to the view of the crowd.

So it is with the consensus view of the market.  Most people simply go along with the view of the crowd.  That’s not to say the crowd is necessarily wrong.  Often the crowd is right when it comes to the stock market.  But occasionally the crowd is very wrong about specific stocks, or even about the market itself.

The fifth psychological influence Marks notes is envy.  As Buffett remarked, “It’s not greed that drives the world, but envy.”  Munger has observed that envy is particularly stupid because there’s no upside.  Buffett agrees, joking: “Gluttony is a lot of fun.  Lust has its place, too, but we won’t get into that.”  Marks:

People who might be perfectly happy with their lot in isolation become miserable when they see others do better.  In the world of investing, most people find it terribly hard  to sit by and watch while others make more money than they do.  (page 102)

The sixth psychological influence is ego.  Investment results are compared.  In good times, aggressive and imprudent decisions often lead to the best results.  And the best results bring the greatest ego rewards, observes Marks.

Finally, Marks highlights the phenomenon of capitulation.  Consider the tech bubble in the late 90’s:

…The guy sitting next to you in the office tells you about an IPO he’s buying.  You ask what the company does.  He says he doesn’t know, but his broker told him it’s going to double on its day of issue.  So you say that’s ridiculous.  A week later he tells you it didn’t double… it tripled.  And he still doesn’t know what it does.  After a few more of these, it gets hard to resist.  You know it doesn’t make sense, but you want protection against continuing to feel like an idiot.  So, in a prime example of capitulation, you put in for a few hundred shares of the next IPO… and the bonfire grows still higher on the buying from converts like you.  (page 106)

Technological innovation drives economic progress.  But that doesn’t mean every innovative company is a good investment.  Joel Greenblatt comments:

Buffett’s famous line about the economics of airlines comes to mind.  Aviation is a huge and valuable innovation.  That’s not the same thing as saying it’s a good business.  (page 108)

 

CONTRARIANISM

Sir John Templeton:

To buy when others are despondently selling and to sell when others are euphorically buying takes the greatest courage, but provides the greatest profit.

Most investors are basically trend followers, writes Marks.  This works as long as the trend continues.

Marks quotes David Swensen’s Pioneering Portfolio Management (2000):

Investment success requires sticking with positions made uncomfortable by their variance with popular opinion.  Casual commitments invite casual reversal, exposing portfolio managers to the damaging whipsaw of buying high and selling low.  Only with the confidence created by a strong decision-making process can investors sell speculative excess and buy despair-driven value.

…Active management strategies demand uninstitutional behavior from institutions, creating a paradox that few can unravel.  Establishing and maintaining an unconventional investment profile requires acceptance of uncomfortably idiosyncratic portfolios, which frequently appear downright imprudent in the eyes of conventional wisdom.  (page 115)

Marks sums it up:

The ultimately most profitable investment actions are by definition contrarian:  you’re buying when everyone else is selling (and thus the price is low) or you’re selling when everyone else is buying (and thus the price is high).  (page 116)

Marks concludes:

The one thing I’m sure of is that by the time the knife has stopped falling, the dust has settled and the uncertainty has been resolved, there’ll be no great bargains left.  When buying something has become comfortable again, its price will no longer be so low that it’s a great bargain.

Thus, a hugely profitable investment that doesn’t begin with discomfort is usually an oxymoron.  

It’s our job as contrarians to catch falling knives, hopefully with care and skill.  That’s why the concept of intrinsic value is so important.  If we hold a view of value that enables us to buy when everyone else is selling – and if our view turns out to be right – that’s the route to the greatest rewards earned with the least risk.  (page 121)

It’s important to emphasize again what can happen when certain assets become widely ignored or despised:  The lower the price goes below probable intrinsic value, the lower the risk and the higher the reward.  For value investors, some of the highest-returning investments can simultaneously have the lowest risk.  Modern finance theory regards this situation as impossible.  According to modern finance, higher rewards always require higher risk.

 

FINDING BARGAINS

Marks repeats an important concept:

Our goal isn’t to find good assets, but good buys.  Thus, it’s not what you buy;  it’s what you pay for it.

A high-quality asset can constitute a good or bad buy, and a low-quality asset can constitute a good or bad buy.  The tendency to mistake objective merit for investment opportunity, and the failure to distinguish between good assets and good buys, get most investors into trouble.  (pages 124-125)

What creates bargains?  Marks answers:

  • Unlike assets that become the subject of manias, potential bargains usually display some objective defect.  An asset class may have weaknesses, a company may be a laggard in its industry, a balance sheet may be over-levered, or a security may afford its holders inadequate structural protection.
  • Since the efficient-market process of setting fair prices requires the involvement of people who are analytical and objective, bargains usually are based on irrationality or incomplete understanding.  Thus, bargains are often created when investors either fail to consider an asset fairly, or fail to look beneath the surface to understand it thoroughly, or fail to overcome some non-value-based tradition, bias or stricture.
  • Unlike market darlings, the orphan asset is ignored or scorned.  To the extent it’s mentioned at all by the media and at cocktail parties, it’s in unflattering terms.
  • Usually its price has been falling, making the first-level thinker ask, ‘Who would want to own that?’  (It bears repeating that most investors extrapolate past performance, expecting the continuation of trends rather than the far-more-dependable regression to the mean.  First-level thinkers tend to view past price weakness as worrisome, not as a sign that the asset has gotten cheaper.)
  • As a result, a bargain asset tends to be one that’s highly unpopular.  Capital stays away from it or flees, and no one can think of a reason to own it.  (pages 125-126)

Marks continues by explaining that to find an undervalued asset, a good place to start looking is among things that are:

  • little known and not fully understood;
  • fundamentally questionable on the surface;
  • controversial, unseemly or scary;
  • deemed inappropriate for ‘respectable’ portfolios;
  • unappreciated, unpopular and unloved;
  • trailing a record of poor returns; and
  • recently the subject of disinvestment, not accumulation.  (pages 127-128)

In brief:

To boil it all down to just one sentence, I’d say the necessary condition for the existence of bargains is that perception has to be considerably worse than reality.  That means the best opportunities are usually found among things most others won’t do.  (page 128)

Seth Klarman:

Generally, the greater the stigma or revulsion, the better the bargain.

 

PATIENT OPPORTUNISM

Buffett has always stressed that, over time, you should compare the results of what you do as an investor to what would have happened had you done absolutely nothing.  Often the best thing to do as a long-term value investor is absolutely nothing.

Buffett has also observed that investing is like baseball except that in investing, there are no called strikes.  You can wait for as long as it takes until a fat pitch appears.  Absent a fat pitch, there’s no reason to swing.  Buffett mentioned in Berkshire Hathaway’s 1997 Letter to Shareholders that Ted Williams, one of the greatest hitters ever, studied his hits and misses carefully.  Williams broke the strike zone into 77 baseball-sized ‘cells’ and analyzed his results accordingly.  Buffett explained:

Swinging only at balls in his ‘best’ cell, he knew, would allow him to bat .400;  reaching for balls in his ‘worst’ spot, the low outside corner of the strike zone, would reduce him to .230.  In other words, waiting for the fat pitch would mean a trip to the Hall of Fame;  swinging indiscriminately would mean a ticket to the minors.

See: http://berkshirehathaway.com/letters/1997.html

 

KNOWING WHAT YOU DON’T KNOW

John Kenneth Galbraith:

There are two classes of forecasters:  Those who don’t know – and those who don’t know they don’t know.

Marks studied forecasts.  Some forecasters simply extrapolate the recent past, which isn’t useful.  Beyond that, there are virtually no forecasters who are both non-consensus and regularly correct.  Marks writes:

One way to get to be right sometimes is to always be bullish or always be bearish;  if you hold a fixed view long enough, you may be right sooner or later.  And if you’re always an outlier, you’re likely to eventually be applauded for an extremely unconventional forecast that correctly foresaw what no one else did.  But that doesn’t mean your forecasts are regularly of any value…

It’s possible to be right about the macro-future once in a while, but not on a regular basis.  It doesn’t do any good to possess a survey of sixty-four forecasts that includes a few that are accurate;  you have to know which ones they are.  And if the accurate forecasts each six months are made by different economists, it’s hard to believe there’s much value in the collective forecasts.  (page 145)

Marks restates the case in the following points:

  • Most of the time, people predict a future that is a lot like the recent past.
  • They’re not necessarily wrong:  most of the time, the future largely is a rerun of the recent past.
  • On the basis of these two points, it’s possible to conclude that forecasts will prove accurate much of the time:  They’ll usually extrapolate recent experience and be right.
  • However, the many forecasts that correctly extrapolate past experience are of little value.  Just as forecasters usually assume a future that’s a lot like the past, so do markets, which usually price in a continuation of recent history.  Thus if the future turns out to be like the past, it’s unlikely big money will be made, even by those who foresaw correctly that it would.
  • Once in a while, however, the future turns out to be very different from the past.
  • It’s at these times that accurate forecasts would be of great value.
  • It’s also at these times that forecasts are least likely to be correct.
  • Some forecasters may turn out to be correct at these pivotal moments, suggesting that it’s possible to correctly forecast key events, but it’s unlikely to be the same people consistently.
  • The sum of this discussion suggests that, on balance, forecasts are of very little value.  (pages 145-146)

As an example, Marks asks who correctly predicted the credit crisis and bear market of 2007-2008.  Of those who correctly predicted it, how many of them also correctly predicted the recovery and massive bull market starting in 2009?  Very, very few.  Marks:

…Those who got 2007-2008 right probably did so at least in part because of a tendency toward negative views.  As such, they probably stayed negative for 2009.  The overall usefulness of those forecasts wasn’t great… even though they were partially right about some of the most momentous financial events in the last eighty years.

So the key question isn’t ‘are forecasts sometimes right?’ but rather ‘are forecasts as a whole – or any one person’s forecasts – consistently actionable and valuable?’  No one should bet much on the answer being affirmative.

Marks then notes that you could have found some people predicting a bear market before 2007-2008.  But if these folks had a negative bias, as well as a track record full of incorrect predictions, then you wouldn’t have had much reason to listen.  Or as Buffett put it in the Berkshire Hathaway 2016 Letter to Shareholders:

American business – and consequently a basket of stocks – is virtually certain to be worth far more in the years ahead.  Innovation, productivity gains, entrepreneurial spirit and an abundance of capital will see to that.  Ever-present naysayers may prosper by marketing their gloomy forecasts.  But heaven help them if they act on the nonsense they peddle.  (page 6)

See: http://berkshirehathaway.com/letters/2016ltr.pdf

Marks has a description for investors who believe in the value of forecasts.  They belong to the “I know” school, and it’s easy to identify them:

  • They think knowledge of the future direction of economies, interest rates, markets and widely followed mainstream stocks is essential for investment success.
  • They’re confident it can be achieved.
  • They know they can do it.
  • They’re aware that lots of other people are trying to do it too, but they figure either (a) everyone can be successful at the same time, or (b) only a few can be, but they’re among them.
  • They’re comfortable investing based on their opinions regarding the future.
  • They’re also glad to share their views with others, even though correct forecasts should be of such great value that no one would give them away gratis.
  • They rarely look back to rigorously assess their record as forecasters.  (page 147)

Marks contrasts the confident “I know” folks with the guarded “I don’t know” folks.  The latter believe you can’t predict the macro-future, and thus the proper goal for investing is to do the best possible job analyzing individual securities.  If you belong to the “I don’t know” school, eventually everyone will stop asking you where you think the market’s going.

You’ll never get to enjoy that one-in-a-thousand moment when your forecast comes true and the Wall Street Journal runs your picture.  On the other hand, you’ll be spared all those times when forecasts miss the mark, as well as the losses that can result from investing based on overrated knowledge of the future.  (page 148)

Marks continues by noting that no one likes investing on the assumption that the future is unknowable.  But if the future IS largely unknowable, then it’s far better as an investor to acknowledge that fact than to pretend otherwise.

Furthermore, says Marks, the biggest problems tend to arise when investors forget the difference between probability and outcome (i.e., the limits of foreknowledge).  A brief numerical illustration follows the list:

  • when they believe the shape of the probability distribution is knowable with certainty (and that they know it),
  • when they assume the most likely outcome is the one that will happen,
  • when they assume the expected result accurately represents the actual result, or
  • perhaps most important, when they ignore the possibility of improbable outcomes.  (pages 148-149)
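
To make the distinction concrete, here is a small numerical illustration (the payoff distribution is invented): the most likely outcome and the probability-weighted (expected) outcome can differ substantially, and ignoring the improbable branch changes the answer entirely.

```python
# Hypothetical payoff distribution for an investment (illustration only).
# Keys are one-year returns; values are probabilities summing to 1.
outcomes = {
    0.10: 0.60,    # most likely: a modest gain
    0.25: 0.25,    # a strong gain
    -0.50: 0.15,   # an improbable but severe loss
}

most_likely = max(outcomes, key=outcomes.get)
expected = sum(ret * prob for ret, prob in outcomes.items())

print(f"Most likely outcome: {most_likely:+.0%}")    # +10%
print(f"Expected outcome:    {expected:+.1%}")       # about +4.8%
# The expected result differs from the most likely result, and ignoring the
# improbable -50% branch would overstate the likely payoff even more.
```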

Marks sums it up:

Overestimating what you’re capable of knowing or doing can be extremely dangerous – in brain surgery, transocean racing or investing.  Acknowledging the boundaries of what you can know – and working within those limits rather than venturing beyond – can give you a great advantage.  (page 150)

 

APPRECIATING THE ROLE OF LUCK

Professor Paul Johnson explains the main point:

Learn to be honest with yourself about your successes and failures [as an investor].  Learn to recognize the role luck has played in all outcomes.  Learn to decide which outcomes came because of skill and which because of luck.  Until one learns to identify the true source of success, one will be fooled by randomness.  (page 161)

Once again, we consider Nassim Taleb’s concept of “alternative histories.”  Marks quotes Taleb:

Clearly my way of judging matters is probabilistic in nature;  it relies on the notion of what could have probably happened…

If we have heard of [history’s great generals and inventors], it is simply because they took considerable risks, along with thousands of others, and happened to win.  They were intelligent, courageous, noble (at times), had the highest possible obtainable culture in their day – but so did thousands of others who live in the musty footnotes of history.  (pages 162-163)

In investing, you probably need many decades of results before you can determine how much is due to skill.  And here we’re talking mainly about long-term value investing, where stocks are held for at least a year on average.

Similarly, to judge individual investment decisions, you have to know much more than whether a specific decision worked or not.  You have to understand the process by which the investor made the decision.  You have to know which facts were available and which were used in the decision.  You have to estimate the probability of success of the investment decision, whether or not it actually worked.  This means you have to account for all the possible scenarios that could have unfolded, not just the one scenario that did unfold.

Marks gives the example of backgammon.  A certain aggressive player may need to roll double sixes in order to win.  The probability of that happening is one out of thirty-six.  Say the player accepts the cube – doubling the stakes – and gets his boxcars.  Many will consider the player brilliant.  But was it a wise bet?

You could find similar situations in other games of chance, such as bridge or poker.  In many cases you can calculate the probabilities of the various scenarios, so you can figure out whether the player is making the most profitable decision, averaged out over time.  Some percentage of the time the decision will work.  Some percentage of the time it won’t.  A skillful player will consistently make the most profitable long-term decision.
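
To make the backgammon example concrete, here is a minimal sketch with hypothetical stakes.  It compares the expected value of accepting the double with the certain cost of declining; the point is that a decision can have a negative expectation even when it happens to work out.

```python
# Backgammon example: the player needs double sixes (probability 1/36) to win.
# The stakes are hypothetical; the point is expected value, not the outcome.
p_win = 1 / 36
stake = 100            # amount lost for certain if the double is declined
doubled_stake = 200    # amount at risk if the double is accepted

ev_decline = -stake
ev_accept = p_win * doubled_stake + (1 - p_win) * (-doubled_stake)

print(f"P(double sixes)             = {p_win:.3f}")
print(f"Expected value of declining = {ev_decline:+.1f}")
print(f"Expected value of accepting = {ev_accept:+.1f}")
# Accepting expects to lose about 189 versus a certain 100 for declining, so
# the aggressive play was a poor bet even if the boxcars happened to show up.
```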

Value investing is similar.  Good value investors are right roughly 60% of the time and wrong 40% of the time.  If their process for selecting cheap stocks is sound, then losses are kept small when they are wrong, while gains are larger when they are right.

Marks writes:

The actions of the ‘I know’ school are based on a view of a single future that is knowable and conquerable.  My ‘I don’t know’ school thinks of future events in terms of a probability distribution.  That’s a big difference.  In the latter case, we may have an idea which outcome is most likely to occur, but we also know there are many other possibilities, and those other outcomes may have a collective likelihood much higher than the one we consider most likely.  (page 168)

As Buffett advised, we have to focus on what’s knowable and important.  That means focusing on individual companies and industries within our circle of competence.  Many companies may be beyond our ability to value.  They go in the “too hard” pile.  Focus on those companies we can understand and value.

 

INVESTING DEFENSIVELY

As in some sports, in investing you have to decide if you want to emphasize offense, emphasize defense, or use a balanced approach.  Marks:

…investors should commit to an approach – hopefully one that will serve them through a variety of scenarios.  They can be aggressive, hoping they’ll make a lot on the winners and not give it back on the losers.  They can emphasize defense, hoping to keep up in good times and excel by losing less than others in bad times.  Or they can balance offense and defense, largely giving up on tactical timing but aiming to win through superior security selection in both up and down markets.  (page 174)

The vast majority of investors should invest in quantitative value funds or in low-cost broad market index funds.  Most of us will probably maximize our multi-decade results using one or both of these approaches.  Buffett: http://boolefund.com/warren-buffett-jack-bogle/

Regarding the balance of offense versus defense, Marks observes:

And, by the way, there’s no right choice between offense and defense.  Lots of possible routes can bring you to success, and your decision should be a function of your personality and leanings, the extent of your belief in your ability, and the peculiarities of the markets you work in and the clients you work for.  (page 175)

Marks believes that a focus on avoiding losses will lead more dependably to consistently good returns over time.  As we noted earlier, Buffett has said that his best ideas have not outperformed the best ideas of other value investors;  but his worst ideas have not done as poorly as the worst ideas of other value investors.  So minimizing losses – especially avoiding big losses – has been central to Buffett becoming arguably the best value investor of all time.

 

REASONABLE EXPECTATIONS

Setting reasonable expectations can play a pivotal role in designing and applying your investment strategy.  Marks points out that you can’t simply think about high returns without also considering risk.  In investing, if you aim too high, you’ll end up taking too much risk.

Similarly, when buying assets that are declining in price, you should have a reasonable strategy.  Marks:

I try to look at it logically.  There are three times to buy an asset that has been declining:  on the way down, at the bottom, or on the way up.  I don’t believe we ever know when the bottom has been reached, and even if we did, there might not be much for sale there.

If we wait until the bottom has passed and the price has started to rise, the rising price often causes others to buy, just as it emboldens holders and discourages them from selling.  Supply dries up and it becomes hard to buy in size.  The would-be buyer finds it’s too late.

That leaves buying on the way down, which we should be glad to do.  The good news is that if we buy while the price is collapsing, that fact alone often causes others to hide behind the excuse that ‘it’s not our job to catch falling knives.’  After all, it’s when knives are falling that the greatest bargains are available.

There’s an important saying attributed to Voltaire:  ‘The perfect is the enemy of the good.’  This is especially applicable to investing, where insisting on participating only when conditions are perfect – for example, buying only at the bottom – can cause you to miss out on a lot.  Perfection in investing is generally unobtainable;  the best we can hope for is to make a lot of good investments and exclude most of the bad ones.  (page 212)

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here: http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Notes on Value Investing

(Image:  Zen Buddha Silence by Marilyn Barbone.)

October 1, 2017

Today we review some of the central concepts in value investing.  In order to learn, some repetition is required, especially when the subject may be difficult or counter-intuitive for many.

Here’s the outline:

  • Index Funds or Quant Value Funds
  • The Dangers of DCF
  • Notes on Ben Graham
  • Value vs. Growth
  • The Superinvestors of Graham-and-Doddsville

 

INDEX FUNDS OR QUANT VALUE FUNDS

The first important point is that the vast majority of investors are best off buying and holding a broad market, low-cost index fund.  Warren Buffett has repeatedly made this observation.  See: http://boolefund.com/warren-buffett-jack-bogle/

In other words, most of us who believe that we can outperform the market over the long term (decades) are wrong.  The statistics on this point are clear.  For instance, see pages 21-25 of Buffett’s 2016 Berkshire Hathaway Shareholder Letter: http://berkshirehathaway.com/letters/2016ltr.pdf

A quantitative value investment strategy—especially if focused on micro caps—is likely to do better than an index fund over time.  If you understand why this is the case, then you could adopt such an approach, at least for part of your portfolio.  (The Boole Microcap Fund is a quantitative value fund.)  But you have to be able to stick with it over the long term even though there will sometimes be multi-year periods of underperforming the market.  Easier said than done.  Know Thyself.

We all like to think we know ourselves.  But in many ways we know ourselves much less than we believe we do.  This is especially true when it comes to probabilistic decisions or complex computations.  In these areas, we suffer from cognitive biases which generally cause us to make suboptimal or erroneous choices.  See: http://boolefund.com/cognitive-biases/

The reason value investing—if properly implemented—works over time is due to the behavioral errors of many investors.  Lakonishok, Shleifer, and Vishny give a good explanation of this in their 1994 paper, “Contrarian Investment, Extrapolation, and Risk.”  Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

Lakonishok, Shleifer, and Vishny (LSV) offer three reasons why investors follow “naive” strategies:

  • Investors often extrapolate high past earnings growth too far into the future.  Similarly, investors extrapolate low past earnings growth too far into the future.
  • Investors overreact to good news and to bad news.
  • Investors think a well-run company is automatically a good investment.

LSV then state that, for whatever reason, investors overvalue stocks that have done well in the past, causing these “glamour” or “growth” stocks to be overpriced in general.  Similarly, investors undervalue stocks that have done poorly in the past, causing these “value” stocks to be underpriced in general.

Important Note:  Cognitive biases—such as overconfidence, confirmation bias, and hindsight bias—are the main reason why investors extrapolate past trends too far into the future.  For simple and clear descriptions of cognitive biases, see: http://boolefund.com/cognitive-biases/

 

THE DANGERS OF DCF

For most businesses, it’s very difficult—and often impossible—to predict future earnings and free cash flows.  One reason Warren Buffett and Charlie Munger have produced such an outstanding record at Berkshire Hathaway is because they focus on businesses that are highly predictable.  These types of businesses usually have a sustainable competitive advantage, which is what makes their future earnings and cash flows more certain.  As Buffett put it:

The key to investing is not assessing how much an industry is going to affect society, or how much it will grow, but rather determining the competitive advantage of any given company and, above all, the durability of that advantage.

Most businesses do not have a sustainable competitive advantage, and thus are not predictable 5 or 10 years into the future.

Buffett calls a sustainable competitive advantage a moat, which defends the economic “castle.”  Here’s how he described it at the Berkshire Hathaway Shareholder Meeting in 2000:

So we think in terms of that moat and the ability to keep its width and its impossibility of being crossed as the primary criterion of a great business.  And we tell our managers we want the moat widened every year.  That doesn’t necessarily mean the profit will be more this year than it was last year because it won’t be sometimes.  However, if the moat is widened every year, the business will do very well.  When we see a moat that’s tenuous in any way – it’s just too risky.  We don’t know how to evaluate that.  And, therefore, we leave it alone.  We think that all of our businesses – or virtually all of our businesses – have pretty darned good moats.

There’s a great book, The Art of Value Investing (Wiley, 2013), by John Heins and Whitney Tilson, which is filled with quotes from top value investors.  Here’s a quote from Bill Ackman, which shows that he strives to invest like Buffett and Munger:

We like simple, predictable, free-cash-flow generative, resilient and sustainable businesses with strong profit-growth opportunities and/or scarcity value.  The type of business Warren Buffett would say has a moat around it.  (page 131)

If the future earnings and cash flows of a business are not predictable, then DCF valuation may not be very reliable.  Moreover, it’s often hard to calculate the cost of capital (the discount rate).

  • DCF refers to “discounted cash flows.”  You can value any business if you can estimate future free cash flow with reasonable accuracy.  To get the present value of the business, the free cash flow in each future year must be discounted back to the present by the cost of capital.

To determine the cost of capital, Buffett and Munger use the opportunity cost of capital, which is the return available on the next best investment with a similar level of risk.

  • To illustrate, say they’re considering an investment in Company A, which they feel quite certain will return 15% per year.  To figure out the value of this potential investment, they will find their next best investment – which they may already own – that has a similar level of risk.  Perhaps they own Company N and they feel equally certain that its future returns will be 17% per year.  In that case, if possible, they would prefer to buy more of Company N rather than buying any of Company A.  (Often there are other considerations.  But that’s the gist of it.)
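
Here is a minimal sketch of the DCF mechanics just described.  All of the inputs (the cash flow forecasts, the 2% terminal growth rate, and the 15% discount rate standing in for the opportunity cost of capital) are invented for illustration.

```python
# Minimal DCF sketch: discount each year's free cash flow back to the present.
# All inputs are hypothetical; the discount rate plays the role of the
# opportunity cost of capital (the return on the next best comparable investment).
free_cash_flows = [100, 110, 120, 130, 140]   # forecast FCF for years 1-5
terminal_growth = 0.02                        # growth assumed after year 5
discount_rate = 0.15                          # opportunity cost of capital

pv_forecast = sum(fcf / (1 + discount_rate) ** t
                  for t, fcf in enumerate(free_cash_flows, start=1))

# Terminal value at the end of year 5 (Gordon growth), discounted to today.
terminal_value = free_cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
pv_terminal = terminal_value / (1 + discount_rate) ** len(free_cash_flows)

intrinsic_value = pv_forecast + pv_terminal
print(f"PV of forecast FCF:        {pv_forecast:.1f}")
print(f"PV of terminal value:      {pv_terminal:.1f}")
print(f"Estimated intrinsic value: {intrinsic_value:.1f}")
# The estimate is only as good as the forecasts -- which is the point of this
# section: without a durable moat, these inputs are largely guesswork.
```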

The academic definition of cost of capital includes “beta,” which measures how volatile a stock price has been in the past.  But for value investors like Buffett and Munger, all that matters is how much free cash flow the business will produce in the future.  The degree of volatility of a stock in the past generally has no logical relationship with the next 20-30 years of cash flows.

If a business lacks a true moat and if, therefore, DCF probably won’t work, is there any other way to evaluate a business?  James Montier, in Value Investing (Wiley, 2009), mentions three alternatives to DCF that do not require forecasting:

  • Reverse-engineered DCF
  • Asset Value
  • Earnings Power

In a reverse-engineered DCF, instead of forecasting future growth, you take the current share price and figure out what that implies about future growth.  Then you compare the implied future growth of the business against some reasonable benchmark, like growth of a close competitor.  (You still have to determine a cost of capital.)
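
Here is one way a reverse-engineered DCF might be sketched: rather than forecasting growth, solve for the growth rate that makes a simple DCF equal to the current price.  The model and all inputs below are hypothetical; a real implementation would use a more detailed cash flow model.

```python
# Reverse-engineered DCF sketch: find the growth rate implied by today's price.
# Model: FCF grows at rate g for 10 years, then at 2% forever (Gordon growth).
def dcf_value(fcf0, g, r, years=10, terminal_g=0.02):
    """Present value of FCF growing at g for `years`, then at terminal_g."""
    pv, fcf = 0.0, fcf0
    for t in range(1, years + 1):
        fcf *= (1 + g)
        pv += fcf / (1 + r) ** t
    terminal = fcf * (1 + terminal_g) / (r - terminal_g)
    return pv + terminal / (1 + r) ** years

def implied_growth(price, fcf0, r, lo=-0.50, hi=0.50, tol=1e-6):
    """Bisection: solve dcf_value(fcf0, g, r) = price for g."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dcf_value(fcf0, mid, r) < price:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

current_price = 2500.0    # market value (hypothetical)
current_fcf = 100.0       # trailing free cash flow (hypothetical)
cost_of_capital = 0.10    # still has to be chosen

g = implied_growth(current_price, current_fcf, cost_of_capital)
print(f"Growth implied by the current price: {g:.1%}")
# Compare this implied growth with a reasonable benchmark, such as the
# historical growth of a close competitor, to judge whether the price is rich.
```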

As for asset value and earnings power, these were the two methods of valuation suggested by Ben Graham.  For asset value, Graham often suggested using liquidation value, which is usually a conservative estimate of asset value.  If the business could be sold as a going concern, then the assets would probably have a higher value than liquidation value.
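
A rough sketch of a conservative liquidation-value estimate might look like the following.  The balance sheet and the recovery haircuts are purely illustrative, not Graham's exact coefficients.

```python
# Rough liquidation-value sketch: apply conservative recovery rates to each
# asset class, then subtract all liabilities.  Numbers are illustrative only.
balance_sheet = {
    "cash": 150, "receivables": 80, "inventory": 120,
    "property_plant_equipment": 200, "total_liabilities": 250,
}
recovery = {"cash": 1.00, "receivables": 0.75, "inventory": 0.50,
            "property_plant_equipment": 0.25}

recovered_assets = sum(balance_sheet[k] * recovery[k] for k in recovery)
liquidation_value = recovered_assets - balance_sheet["total_liabilities"]
print(f"Estimated liquidation value: {liquidation_value:.0f}")
# 150 + 60 + 60 + 50 - 250 = 70.  Graham looked for stocks selling in the
# market for less than such a conservative figure; a going-concern sale would
# usually fetch more.
```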

Regarding earnings power, Montier quotes Graham from Security Analysis:

What the investor chiefly wants to learn… is the indicated earnings power under the given set of conditions, i.e., what the company might be expected to earn year after year if the business conditions prevailing during the period were to continue unchanged.

Montier again quotes Graham:

It combines a statement of actual earnings, shown over a period of years, with a reasonable expectation that these will be approximated in the future, unless extraordinary conditions supervene.  The record must be over a number of years, first because a continued or repeated performance is always more impressive than a single occurrence, and secondly because the average of a fairly long period will tend to absorb and equalize the distorting influences of the business cycle.

Montier mentions Bruce Greenwald’s excellent book, Value Investing: From Graham to Buffett and Beyond (Wiley, 2004), for a modern take on asset value and earnings power.
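
Here is a minimal sketch of an earnings-power estimate in that spirit.  The figures are hypothetical; the last line capitalizes normalized earnings at the cost of capital, in the manner of Greenwald's earnings power value.

```python
# Earnings-power sketch: average several years of earnings to smooth the cycle,
# then capitalize the normalized figure.  All numbers are hypothetical.
earnings_history = [9.0, 12.0, 7.5, 11.0, 10.5]   # e.g., EPS over five years
cost_of_capital = 0.10

normalized_eps = sum(earnings_history) / len(earnings_history)
earnings_power_value = normalized_eps / cost_of_capital   # per-share EPV

print(f"Normalized EPS:                 {normalized_eps:.2f}")
print(f"Earnings power value per share: {earnings_power_value:.2f}")
# Averaging over a number of years absorbs the distorting influence of the
# business cycle, as Graham suggests; the EPV assumes no growth beyond that.
```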

 

NOTES ON BEN GRAHAM

When studying Graham’s methods as presented in Security Analysis—first published in 1934—it’s important to bear in mind that Graham invented value investing during the Great Depression.  Some of Graham’s methods are therefore arguably overly conservative, particularly if you think the Great Depression was caused in part by mistakes in fiscal and monetary policy that are unlikely to be repeated.  Charlie Munger put it as follows:

I don’t love Ben Graham and his ideas the way Warren does.  You have to understand, to Warren—who discovered him at such a young age and then went to work for him—Ben Graham’s insights changed his whole life, and he spent much of his early years worshiping the master at close range.

But I have to say, Ben Graham had a lot to learn as an investor.  His ideas of how to value companies were all shaped by how the Great Crash and the Depression almost destroyed him, and he was always a little afraid of what the market can do.  It left him with an aftermath of fear for the rest of his life, and all his methods were designed to keep that at bay.

That being said, Warren Buffett has always maintained that Chapters 8 and 20 of Ben Graham’s The Intelligent Investor—first published in 1949—contain the three fundamental precepts of value investing:

  • Owning stock is part ownership of the underlying business.
  • Market prices are there to serve you, not to instruct you.  When prices drop a great deal, it may be a good opportunity to buy.  When prices rise quite a bit, it may be a good time to sell.  At all other times, it’s best to focus on the operating results of the businesses you own.
  • The margin of safety is the difference between the price you pay and your estimate of the intrinsic value of the business.  Price is what you pay;  value is what you get.  If you think the business is worth $40 per share, then you would like to pay $20 per share.  (Value investors refer to a stock that’s selling for half its intrinsic value as a “50-cent dollar.”)

The purpose of the margin of safety is to minimize the effects of bad luck, human error, and the vagaries of the future.  Good value investors are right about 60% of the time and wrong 40% of the time.  By systematically minimizing the impact of inevitable mistakes and bad luck, a solid value investing strategy will beat the market over time.  Why?

Here’s why:  As you increase your margin of safety, you simultaneously increase your potential return.  The lower the risk, the higher the potential return.  When you’re wrong, you lose less on average.  When you’re right, you make more on average.

For instance, assume again that you estimate the intrinsic value of the business at $40 per share.

  • If you can pay $20 per share, then you have a good margin of safety.  And if you are right about intrinsic value, then you will make 100% on your investment when the price eventually moves from $20 to $40.
  • What if you managed to pay $10 per share for the same stock?  Then you have an even larger margin of safety relative to the estimated intrinsic value of $40.  As well, if you’re right about intrinsic value, then you will make 300% on your investment when the price eventually moves from $10 to $40.
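
The arithmetic in the two cases generalizes: if the price eventually converges to intrinsic value, the return is simply value divided by price, minus one.  A tiny sketch:

```python
# Return if price converges to estimated intrinsic value: V / P - 1.
intrinsic_value = 40.0

for price in (30.0, 20.0, 10.0):
    margin_of_safety = 1 - price / intrinsic_value
    potential_return = intrinsic_value / price - 1
    print(f"Pay {price:>5.2f}: margin of safety {margin_of_safety:.0%}, "
          f"potential return {potential_return:.0%}")
# Pay 20 -> 50% margin of safety and a 100% potential return;
# pay 10 -> 75% margin of safety and a 300% potential return.
```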

The notion that you can increase your safety and your potential returns at the same time runs directly contrary to what is commonly taught in modern finance.  In modern finance, you can only increase your potential return by increasing your risk.

A final point about Buffett and Munger’s evolution as investors.  Munger realized early in his career that it was better to buy a high-quality business at a reasonable price, rather than a low-quality business at a cheap price.  Buffett also realized this—partly through Munger’s influence—after experiencing a few failed investments in bad businesses purchased at cheap prices.  Ever since, Buffett and Munger have expressed the lesson as follows:

It’s better to buy a wonderful company at a fair price than a fair company at a wonderful price.

The idea is to pay a reasonable price for a company with a high ROE (return on equity) that can be sustained—due to a moat.  If you hold a high-quality business like this, then over time your returns as an investor will approximate the ROE.  High-quality businesses can have sustainably high ROE’s that range from 20% to over 100%.
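
To see why long-run returns tend toward the ROE, here is a small compounding sketch.  It assumes (hypothetically) that all earnings are retained and reinvested at the same ROE, and it even lets the exit multiple contract relative to the entry multiple.

```python
# If book value compounds at the ROE, the long-run return converges toward the
# ROE even if the valuation multiple drifts.  Hypothetical numbers; dividends
# and taxes are ignored.
roe = 0.20
years = 30
entry_multiple = 3.0      # price-to-book paid today
exit_multiple = 2.0       # assume the multiple even contracts by a third

ending_book = (1 + roe) ** years              # book value per 1.0 of starting book
cagr = (exit_multiple * ending_book / entry_multiple) ** (1 / years) - 1
print(f"{years}-year CAGR: {cagr:.1%}")       # roughly 18.4%, close to the 20% ROE
```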

Note:  Buffett and Munger also insist that the companies they invest in have low debt (or no debt).  Even a great business can fail if it has too much debt.

 

VALUE vs. GROWTH

One of the seminal academic papers on value investing—which was mentioned earlier—is Lakonishok, Shleifer, and Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.”  Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

Lakonishok, Shleifer, and Vishny (LSV) show that value investing—buying stocks at low multiples (P/B, P/CF, and P/E)—outperformed glamour (growth) investing by about 10-11% per year from 1968 to 1989.

Here’s why, say LSV:  Investors expect the poor recent performance of value stocks to continue, causing these stocks to trade at lower multiples than is justified by subsequent performance.  And investors expect the recent good performance of glamour stocks to continue, causing these stocks to trade at higher multiples than is justified by subsequent performance.

Interestingly, La Porta (1993 paper) shows that contrarian value investing based directly on analysts’ forecasts of future growth can produce even larger excess returns than value investing based on low multiples.  In other words, betting on the stocks for which analysts have the lowest expectations can outperform the market by an even larger margin.

Moreover, LSV demonstrate that value investing is not riskier.  First, excess returns from value investing cannot be explained by excess volatility.  Furthermore, LSV show that value investing does not underperform during market declines or recessions.  If anything, value investing outperforms during down markets, which makes sense because value investing involves paying prices that are, on average, far below intrinsic value.

In conclusion, LSV ask why countless investors continue to buy glamour stocks and to ignore value stocks.  One chief reason is that buying glamour stocks—generally stocks that have been doing well—may seem “prudent” to many professional investors.  After all, glamour stocks are unlikely to become financially distressed in the near future, whereas value stocks are often already in distress.

In reality, a basket of glamour stocks is not prudent because it will far underperform a basket of value stocks over a sufficiently long period of time.  However, if professional investors choose a basket of value stocks, then they will not only own many stocks experiencing financial distress, but they also risk underperforming for several years in a row.  These are potential career risks that most professional investors would rather avoid.  From that point of view, it may indeed be “prudent” to stick with glamour stocks, despite the lower long-term performance of glamour compared to value.

  • An individual value stock is likely to be more distressed—and thus riskier—than either a glamour stock or an average stock.  But LSV have shown that value stocks, as a group, far outperform both glamour stocks and the market in general, and do so with less risk.  This finding that value stocks, as a group, outperform has been confirmed by many academic studies, including Fama and French (1992).
  • If you follow a quantitative value strategy focused on micro caps, one of the best ways to improve long-term performance is by using the Piotroski F_Score.  It’s a simple measure that strengthens a micro-cap value portfolio by reducing the number of “cheap but weak” companies and increasing the number of “cheap and strong” companies.  See: http://boolefund.com/joseph-piotroski-value-investing/
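
As a rough sketch, the F_Score awards one point for each of nine simple signals of improving financial strength.  The implementation below is a simplified reading of Piotroski's criteria, and the input figures are hypothetical.

```python
# Simplified Piotroski F_Score sketch (one point per signal, 0-9 total).
def f_score(cur, prior):
    points = [
        cur["net_income"] / cur["assets"] > 0,                                      # 1. positive ROA
        cur["cfo"] > 0,                                                             # 2. positive operating cash flow
        cur["net_income"] / cur["assets"] > prior["net_income"] / prior["assets"],  # 3. improving ROA
        cur["cfo"] > cur["net_income"],                                             # 4. cash flow exceeds earnings (low accruals)
        cur["lt_debt"] / cur["assets"] < prior["lt_debt"] / prior["assets"],        # 5. falling leverage
        cur["current_ratio"] > prior["current_ratio"],                              # 6. improving liquidity
        cur["shares_out"] <= prior["shares_out"],                                   # 7. no dilution
        cur["gross_margin"] > prior["gross_margin"],                                # 8. improving gross margin
        cur["asset_turnover"] > prior["asset_turnover"],                            # 9. improving asset turnover
    ]
    return sum(points)

current = dict(net_income=12, cfo=18, assets=200, lt_debt=40, current_ratio=1.8,
               shares_out=100, gross_margin=0.34, asset_turnover=0.95)
previous = dict(net_income=8, cfo=10, assets=190, lt_debt=50, current_ratio=1.5,
                shares_out=100, gross_margin=0.31, asset_turnover=0.90)

print(f"F_Score: {f_score(current, previous)} / 9")   # 9 here: "cheap and strong"
```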

 

THE SUPERINVESTORS OF GRAHAM-AND-DODDSVILLE

Buffett gave a talk at Columbia Business School in 1984 entitled, “The Superinvestors of Graham-and-Doddsville.”  Link: http://www8.gsb.columbia.edu/rtfiles/cbs/hermes/Buffett1984.pdf

According to the EMH (Efficient Markets Hypothesis), investors who beat the market year after year are just lucky.  In his talk, Buffett argues as follows:  fifteen years before 1984, he knew a group of people who had learned the value investing approach from Ben Graham and David Dodd.  Now in 1984, fifteen years later, all of these individuals have produced investment track records far in excess of the S&P 500 Index.  Moreover, each of these investors applied the value investing approach in his own way—there was very little overlap in terms of which companies these investors bought.  Buffett simply asks whether this could be due to pure chance.

As a way to think about the issue, Buffett says to imagine a national coin-flipping contest in which all 225 million Americans (the population in 1984) participate.  It is one dollar per flip on the first day, so roughly half the people lose and pay one dollar to the other half who won.  Each day the contest is repeated, but the stakes build up based on all previous winnings.  After 10 straight mornings of this contest, there will be about 220,000 flippers left, each with a bit over $1,000.  Buffett jokes:

Now this group will probably start getting a little puffed up about this, human nature being what it is.  They may try to be modest, but at cocktail parties they will occasionally admit to attractive members of the opposite sex what their technique is, and what marvelous insights they bring to the field of flipping.  (page 5)

In another 10 days, there will be about 215 people left who had correctly called the toss of a coin 20 times in a row.  Each would have a little over $1,000,000.  Buffett quips:

By then, this group will really lose their heads.  They will probably write books on ‘How I Turned a Dollar into a Million Working Thirty Seconds a Morning.’  Worse yet, they’ll probably start jetting around the country attending seminars on efficient coin-flipping and tackling skeptical professors with, ‘If it can’t be done, why are there 215 of us?’

But then some business school professor will probably be rude enough to bring up the fact that if 225 million orangutans had engaged in a similar exercise, the results would be much the same—215 egotistical orangutans with 20 straight winning flips.
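
The arithmetic behind the parable is just repeated halving (and repeated doubling of the stakes):

```python
# Repeated halving of 225 million coin flippers.
flippers = 225_000_000
print(f"After 10 flips: about {flippers / 2**10:,.0f} winners")   # ~220,000
print(f"After 20 flips: about {flippers / 2**20:,.0f} winners")   # ~215
print(f"Winnings after 20 doublings of $1: ${2**20:,}")           # $1,048,576
```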

But assume that the original 225 million orangutans were distributed roughly like the U.S. population.  Buffett then asks:  what if 40 of the 215 winning orangutans were discovered to all be from the same zoo in Omaha?  This would lead one to want to identify common factors for these 40 orangutans.  Buffett says (humorously) that you’d probably ask the zookeeper about their diets, what books they read, etc.  In short, you’d try to identify causal factors.

Buffett remarks that scientific inquiry naturally follows this pattern.  He gives another example:  If there was a rare type of cancer, with 1,500 cases a year in the United States, and 400 of these cases happened in a little mining town in Montana, you’d investigate the water supply there or other variables.  Buffett explains:

You know that it’s not random chance that 400 come from a small area.  You would not necessarily know the causal factors, but you would know where to search.  (page 6)

Buffett then draws the simple, logical conclusion:

I think you will find that a disproportionate number of successful coin-flippers in the investment world came from a very small intellectual village that could be called Graham-and-Doddsville.  A concentration of winners that simply cannot be explained by chance can be traced to this particular intellectual village.

Again, Buffett stresses that the only thing these successful investors had in common was adherence to the value investing philosophy.  Each investor applied the philosophy in his own way.  Some, like Walter Schloss, used a very diversified approach with over 100 stocks chosen purely on the basis of quantitative cheapness (low P/B).  Others, like Buffett or Munger, ran very concentrated portfolios and included stocks of companies with high ROE.  And looking at this group on the whole, there was very little overlap in terms of which stocks each value investor decided to put in his portfolio.

Buffett observes that all these successful value investors were focused only on one thing:  price vs. value.  Price is what you pay;  value is what you get.  There was no need to use any academic theories about covariance, beta, the EMH, etc.  Buffett comments that the combination of computing power and mathematical training is likely what led many academics to study the history of prices in great detail.  There have been many useful discoveries, but some things (like beta or the EMH) have been overdone.

Buffett goes through the nine different track records of the market-beating value investors.  Then he summarizes:

So these are nine records of ‘coin-flippers’ from Graham-and-Doddsville.  I haven’t selected them with hindsight from among thousands.  It’s not like I am reciting to you the names of a bunch of lottery winners—people I had never heard of before they won the lottery.  I selected these men years ago based upon their framework for investment decision-making… It’s very important to understand that this group has assumed far less risk than average;  note their record in years when the general market was weak….

Buffett concludes that, in his view, the market is far from being perfectly efficient:

I’m convinced that there is much inefficiency in the market.  These Graham-and-Doddsville investors have successfully exploited gaps between price and value.  When the price of a stock can be influenced by a ‘herd’ on Wall Street with prices set at the margin by the most emotional person, or the greediest person, or the most depressed person, it is hard to argue that the market always prices rationally.  In fact, market prices are frequently nonsensical.

Buffett also states that value investors view risk and reward in opposite terms to the way academics view risk and reward.  The academic view is that a higher potential reward always requires taking greater risk.  But (as discussed above in “Notes on Ben Graham”) value investors, having made the distinction between price and value, hold that the lower the risk, the higher the potential reward.  Buffett:

If you buy a dollar bill for 60 cents, it’s riskier than if you buy a dollar bill for 40 cents, but the expectation of reward is greater in the latter case.  The greater the potential for reward in the value portfolio, the less risk there is.

Buffett offers an example:

The Washington Post Company in 1973 was selling for $80 million in the market.  At the time, that day, you could have sold the assets to any one of ten buyers for not less than $400 million, probably appreciably more…

Now, if the stock had declined even further to a price that made the valuation $40 million instead of $80 million, its beta would have been greater.  And to people who think beta [or, more importantly, downside volatility] measures risk, the cheaper price would have made it look riskier.  This is truly Alice in Wonderland.  I have never been able to figure out why it’s riskier to buy $400 million worth of properties for $40 million than $80 million….

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here: http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Ben Graham Was a Quant

(Image:  Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011).  In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors.  Having a clearly defined quantitative investment strategy that you stick with over the long term—both when the strategy is in favor and when it’s not—is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches.  Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors.  See:  http://boolefund.com/warren-buffett-jack-bogle/

An index fund tries to copy an index, which is itself typically based on companies of a certain size.  By contrast, quantitative value investing is based on metrics that indicate undervaluation.

 

QUANTITATIVE VALUE INVESTING

Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years.  I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession.  I think we can do it successfully with a few techniques and simple principles.  The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches.  The thing that I have been emphasizing in my own work for the last few years has been the group approach.  To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company

I am just finishing a 50-year study—the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group.  I found the results were very good for 50 years.  They certainly did twice as well as the Dow Jones.  And so my enthusiasm has been transferred from the selective to the group approach.  What I want is an earnings ratio twice as good as the bond interest rate typically for most years.  One can also apply a dividend criterion or an asset value criterion and get good results.  My research indicates the best results come from simple earnings criterions.

Imagine—there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work.  It seems too good to be true.  But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up. 

See:  http://www.cfapubs.org/doi/pdf/10.2470/rf.v1977.n1.4731
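
Graham's group criterion translates into a very small screen.  In the sketch below, the stock list and the AAA bond yield are hypothetical.

```python
# Graham-style group screen (sketch): buy stocks whose earnings yield (E/P) is
# at least twice the prevailing high-grade bond yield.  Data are hypothetical.
aaa_bond_yield = 0.045   # assumed AAA corporate bond yield

stocks = [
    {"ticker": "AAA1", "price": 20.0, "eps": 2.50},
    {"ticker": "BBB2", "price": 50.0, "eps": 1.80},
    {"ticker": "CCC3", "price": 12.0, "eps": 1.30},
]

selected = [s for s in stocks
            if s["eps"] / s["price"] >= 2 * aaa_bond_yield]

for s in selected:
    print(f"{s['ticker']}: earnings yield {s['eps'] / s['price']:.1%}")
# Graham applied such simple criteria to a whole group of stocks, held the
# group for a few years, and relied on the result of the group rather than on
# any individual selection.
```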

Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.

Greiner says there are three groups of quants:

  • The first group is focused on EMH, and on creating ETF’s and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors.  They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not.  This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others.  Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct.  – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that.  – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well.  Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient.  The difference between these propositions is night and day. 

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.

 

DESPERATELY SEEKING ALPHA

Greiner refers to the July 12, 2010 issue of Barron’s.  Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December of 2009.  87 of the 248 funds were gone completely, while the other funds had been downgraded.  Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less risk-free return) to calculate a slope and an intercept, which are called beta and alpha.  Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return.  Alpha, on the other hand, is the intercept, and this implies how much return the portfolio is obtaining over and above the market, separate from any other factors.  (page 16)
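
In code, the alpha and beta described in that passage come from a simple linear regression of portfolio excess returns on market excess returns.  The return series below are simulated purely for illustration.

```python
import numpy as np

# Simulated monthly excess returns, purely for illustration.
rng = np.random.default_rng(0)
market_excess = rng.normal(0.006, 0.04, 120)                      # market return minus risk-free
portfolio_excess = 0.002 + 1.2 * market_excess + rng.normal(0, 0.02, 120)

# Regress portfolio excess return on market excess return:
# slope = beta (market risk captured), intercept = alpha (return beyond the market).
beta, alpha = np.polyfit(market_excess, portfolio_excess, 1)
print(f"beta  = {beta:.2f}")     # close to the 1.2 used to simulate the data
print(f"alpha = {alpha:.4f}")    # close to the 0.002 monthly alpha built in
```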

But risk involves not just individual company risk.  It also involves how one company’s stock is correlated with the stocks of other companies.  If you can properly estimate the correlations among various stocks, then using Markowitz’s approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks.  Graham was not concerned about any correlations among the cheap stocks.  As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha.  (Beta as a concept has some obvious problems.)  But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17).  Moreover, alpha is excess return relative to an index or benchmark.  We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors.  Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data).  (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return.  Greiner explains:

The stock market is not a repeatable event.  Every day is an out-of-sample environment.  The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets.  (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable.  It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is coined factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)

 

RISKY BUSINESS

Risk means more things can happen than will happen.  – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed.  The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed.  Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management.  Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk.  There have been countless situations (e.g., very overvalued stocks) in which stock prices were not volatile but risk was high.  Similarly, there have been many situations (e.g., very undervalued stocks) in which volatility was high but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear.  In these situations, a spike in volatility is a symptom of risk.  At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values.  So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear.  Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich.  Close the doors.  Be fearful when others are greedy.  Be greedy when others are fearful.  – Warren Buffett

So how do you figure out risk exposures?  It is often a difficult thing to do.  Greiner defines ELE events as extinction-level events, or extreme-extreme events.  If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.”  (page 33)

But financial and economic history, even taken as a whole, is not a certain guide to the future.  Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.  

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing.  Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value—whether net asset value or earnings power.  This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience.  When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences.  This is what quants do.  They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen.  We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety.  Do we not wear seat belts because of the odd chance of the tractor-trailer collision?  Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan.  Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism.  The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes.  In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow.  The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks.  Eventually, the interconnectedness of the associations among sticks results in an avalanche.  Likewise, so behaves the market.  (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so.  Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation.  Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out.  Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful.  Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty.  Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event.  When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before.  This is not really very helpful.  The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards.  That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner (a compressed sketch in code follows the list):

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor.  If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe.  Do the same for all other factors.  The numerical B/P for a stock is then termed exposure.  Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta.  The variance and covariance of the beta time series act as proxies for the variance of the stocks.  These are the components of the covariance matrix.  On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, to calculate expected returns, multiply the stock weight vector by the exposure matrix times the factor-return (beta) vector for a given date.  The exposure matrix is N x M, where N is the number of stocks and M is the number of factors.  The covariance matrix is M x M, and the portfolio risks predicted by the model are derived from it.
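
A compressed sketch of steps 3 through 8 might look like the following, with randomly generated exposures and returns standing in for real financial-statement data (an illustration only, not Greiner's code):

```python
import numpy as np

# Sketch of a fundamental-factor risk model (illustration only).
# N stocks, M factors (e.g., B/P and E/P), T monthly cross-sections.
rng = np.random.default_rng(1)
N, M, T = 200, 2, 60

factor_returns = []                            # time series of factor returns (betas)
for t in range(T):
    X = rng.normal(size=(N, M))                # step 3: z-scored factor exposures at date t
    true_payoff = np.array([0.02, 0.01])       # assumed "true" factor payoffs (simulation only)
    r = X @ true_payoff + rng.normal(0, 0.05, size=N)    # next-period stock returns
    beta_t, *_ = np.linalg.lstsq(X, r, rcond=None)       # steps 4-5: cross-sectional regression
    factor_returns.append(beta_t)

B = np.array(factor_returns)                   # step 6: T x M time series of factor returns
F = np.cov(B, rowvar=False)                    # step 7: factor covariance matrix (M x M)

# Step 8 (risk forecast): with portfolio weights w and today's exposures X_today,
# the model's predicted factor variance is e' F e, where e = X_today' w.
w = np.full(N, 1.0 / N)                        # equal-weighted portfolio
X_today = rng.normal(size=(N, M))
e = X_today.T @ w                              # portfolio's factor exposures
predicted_factor_variance = e @ F @ e          # specific risk would be added separately
print("Factor covariance matrix:\n", F)
print("Predicted factor variance of the portfolio:", predicted_factor_variance)
```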

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry.  Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in.  This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 one industry and 0.31 another.  Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries.  However, FactSet’s MAC model does include this operation.  (page 45)

 

BETA IS NOT “SHARPE” ENOUGH

Value investors like Ben Graham know that price variability is not risk.  Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power).  Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good.  Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky.  In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power.  Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value.  (pages 55-56)

Saying that volatility represents risk is to mistake the often emotional opinions of Mr. Market for the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, if the variables are fully random, with errors that are not serially correlated and are independent and identically distributed, then we can model them with a normal distribution.  The problem in modeling stock returns is that the mean return varies with time and the errors are not random:

Of course there are many, many types of distributions.  For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete.  Some of these are asymmetric about the mean (first moment) and some are not.  Some have fat tails and some do not.  You can even have distributions with infinite second moments (infinite variance).  There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance.  Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause.  Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly.  (page 58)

Even though market prices have long been known to have non-normal distributions and to exhibit statistical dependence, most people modeling market prices have downplayed this information.  It has simply been much easier to assume that market returns follow a random walk, producing random errors that can be modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave.  Market returns tend to have fatter tails than a normal distribution allows.  But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of using more accurate distributions are not fully clear.

You can make an analogy with behavioral finance.  It’s now very well established by thousands of experiments that many people behave less than fully rationally, especially when making decisions under uncertainty.  However, the most useful economic models in many situations are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH?  It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful.  In a world of Econs, I believe that the EMH would be true.  And it would not have been possible to do research in behavioral finance without the rational model as a starting point.  Without the rational framework, there are no anomalies from which we can detect misbehavior.  Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research.  We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.  (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed.  Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’  There are definitely anomalies:  sometimes the market overreacts, and sometimes it underreacts.  But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right.  The price is often wrong, and sometimes very wrong, says Thaler.  However, that doesn’t mean that you can beat the market.  It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution.  (page 60)
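As a quick illustration of the third and fourth moments, the sketch below computes skewness and excess kurtosis for a simulated return series (a fat-tailed Student’s t is used purely as a stand-in for an actual fund or index return series).  A normal distribution has skewness of 0 and excess kurtosis of 0, so large departures from those values signal the asymmetry and fat tails Greiner describes.

```python
import numpy as np
from scipy import stats

# Simulated daily returns; in practice, substitute an actual return series.
rng = np.random.default_rng(1)
returns = stats.t.rvs(df=3, loc=0.0005, scale=0.01, size=2500, random_state=rng)

print("Skewness (3rd moment):        %.2f  (normal distribution: 0)" % stats.skew(returns))
print("Excess kurtosis (4th moment): %.2f  (normal distribution: 0)" % stats.kurtosis(returns))
```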

One problem with beta is that it has covariance in the numerator.  If two variables are independent, then their covariance is always zero; but the converse does not hold, so a very low or zero beta does not tell you whether the portfolio is truly independent of the benchmark.  Greiner explains that you can check for linear dependence as follows:  When the benchmark has moved X in the past, has the portfolio consistently moved 0.92*X?  If yes, then the portfolio and the benchmark are linearly dependent, and we could express the return of the portfolio as a simple multiple of the benchmark, which seems to give beta some validity.  However, dependence that is not of this simple linear form (or that is unstable over time) does not show up in the covariance, so the measured beta can be much lower, or even zero, while the portfolio still depends on the benchmark – in which case beta is meaningless.
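The sketch below illustrates the point with simulated data (the 0.92 figure and the quadratic relationship are just illustrations): a portfolio that consistently moves 0.92 times the benchmark shows a beta near 0.92, while a portfolio that depends strongly but nonlinearly on the benchmark shows a beta near zero, even though it is clearly not independent of the benchmark.

```python
import numpy as np

rng = np.random.default_rng(2)
benchmark = rng.normal(0.0, 0.02, size=1000)              # simulated benchmark returns

# Linear dependence: the portfolio consistently moves 0.92 * benchmark (plus small noise).
linear_port = 0.92 * benchmark + rng.normal(0.0, 0.002, size=1000)

# Nonlinear dependence: the portfolio's return depends on the benchmark's squared move,
# so the covariance (and hence beta) is near zero despite the strong dependence.
nonlinear_port = 5.0 * benchmark**2 + rng.normal(0.0, 0.002, size=1000)

def beta(portfolio, benchmark):
    """Beta = covariance(portfolio, benchmark) / variance(benchmark)."""
    return np.cov(portfolio, benchmark)[0, 1] / np.var(benchmark, ddof=1)

print("Beta, linear case:    %.2f" % beta(linear_port, benchmark))     # roughly 0.92
print("Beta, nonlinear case: %.2f" % beta(nonlinear_port, benchmark))  # roughly 0.00
```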

Another example would be a market sell-off in which most stocks become highly correlated.  Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund.  The distribution of returns is more like a Frechet distribution than a normal distribution in two ways:  it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74).  The Magellan Fund returns also have a fat tail on the right side of the distribution.  This is where the Frechet misses.  But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution.  Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.
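Greiner’s comparison can be approximated in code.  The sketch below fits a normal distribution and a fat-tailed Student’s t (used here as a simple stand-in for the Frechet distribution that Greiner uses) to a simulated return series by maximum likelihood, then compares log-likelihoods; the heavier-tailed fit typically scores higher on real fund returns.  The data are simulated, so this illustrates the procedure only, not Greiner’s result.

```python
import numpy as np
from scipy import stats

# Simulated daily returns with fat tails; in practice, load the fund's actual returns.
rng = np.random.default_rng(3)
returns = stats.t.rvs(df=4, loc=0.0004, scale=0.009, size=2500, random_state=rng)

# Maximum-likelihood fits.
mu, sigma = stats.norm.fit(returns)
df, loc, scale = stats.t.fit(returns)

# Compare goodness of fit via total log-likelihood (higher is better).
ll_normal = np.sum(stats.norm.logpdf(returns, mu, sigma))
ll_t = np.sum(stats.t.logpdf(returns, df, loc, scale))

print("Log-likelihood, normal fit:      %.1f" % ll_normal)
print("Log-likelihood, Student's t fit: %.1f" % ll_t)
```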

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The NASCAR Way

(Image:  Zen Buddha Silence by Marilyn Barbone.)

September 3, 2017

Robert G. Hagstrom has written quite a few excellent books on business and investing.  His best book may be The Warren Buffett Way (Wiley, 2014), which I summarized here:  http://boolefund.com/warren-buffett-way/

The Detective and the Investor (Texere, 2002) is another outstanding book by Hagstrom, which I wrote about here:  http://boolefund.com/invest-like-sherlock-holmes/

The first edition of The Warren Buffett Way was published in 1994.  A few years later, Hagstrom published The NASCAR Way: The Business That Drives the Sport (Wiley, 1998).  NASCAR is the National Association for Stock Car Auto Racing.

Following the tenets of The Warren Buffett Way, Hagstrom explains that he first came across NASCAR from a business and investing point of view:

First, I look for companies that are simple and understandable and have a consistent operating history.  Second, for a company to succeed, it must have a management team that is honest and candid and forever works to rationally allocate the capital of the business.  Third, the most valuable companies tend to generate a high return on invested capital and consistently produce cash earnings for their shareholders.  (pages ix-x)

Specifically, Hagstrom came across International Speedway Corporation.  Running a racetrack is a simple and understandable business.  Hagstrom also learned that the company generated high returns on invested capital.  And the managers, Bill France, Jr. and Jim France, have done an excellent job managing the business, including capital allocation.

Although the book was written nearly 20 years ago, if you enjoy business and investing, or car racing, it’s a good read.

In the process of learning about International Speedway, Hagstrom had to learn about NASCAR.  Here’s the wiki entry on NASCAR:  https://en.wikipedia.org/wiki/NASCAR

NASCAR is second only to the National Football League in terms of television viewership in the United States.  NASCAR’s races are broadcast in over 150 countries.  As of 2004, NASCAR held 17 of the top 20 regularly attended single-day sporting events in the world.

Hagstrom says he aims, in the book, to capture some of the excitement of “this superb, uniquely American sport.”

This blog post briefly outlines the book based on its chapters:

  • Riding with Elmo
  • Rules of the Road
  • It Takes Money to Race
  • Prime Time
  • The Meanest Mile
  • True American Heroes
  • Forty-Two Teams on the Same Field at the Same Time

 

RIDING WITH ELMO

Hagstrom introduces the sport:

Stock car drivers do things in cars that would make the rest of us faint.  Try to imagine driving 100 miles an hour, then 120, then 160.  Imagine keeping up that pace for three and a half hours; that’s how long it will take to drive 500 miles… Now imagine forty-one other cars around you, all doing the same thing, just inches away from you, scraping against the side of your car and nudging your bumper as they try to pass you.  And you can never slack off.

…To win a stock car race means that you are willing to drive faster than anybody else on the track.  It means that you drive as fast as your nerves will let you go – and then faster.  (page 3)

Hagstrom got the chance to ride with Elmo Langley, NASCAR’s pace car official (in the late 90’s), before the Southern 500 at Darlington Raceway in South Carolina.  In 1948, Harold Brasington, a retired racer, spent a year building the track.  Upon purchasing the property, he agreed not to disturb the minnow pond at its west end.  This led Brasington to make that corner of the track tighter and more steeply banked.  The overall result is an egg-shaped track.  Crews have always found it difficult to set up the cars’ handling to be effective at both ends of the track.

At Darlington, notes Hagstrom, drivers must find the right balance of aggressiveness and patience in order to succeed.  If you’re too aggressive, you’ll crash.  If you’re too smooth, you’ll get passed.

Hagstrom also observes that specialized engine builders learned to take the standard 358-cubic-inch V8 motor and make it into a 700-horsepower engine.  Nearly everything else about the car is also carefully engineered for each particular racetrack.  (Today, in 2017, computers receive huge amounts of data from these racing machines.)

The original idea of “stock car” racing was that the cars look—on the outside at least—just like American cars that anybody could buy.  Historically, NASCAR fans have followed with great passion not only specific drivers, but specific car brands, such as Chevrolets, Fords, or Pontiacs.

Darlington Raceway has a special charm:

According to many regulars, there is no more beautiful place to entertain clients and guests than Darlington Raceway.  The hospitality village itself is outlined in white picket fences that surround beautifully appointed white and yellow striped tents.  Flower boxes hang at each entrance.  (page 9)

In addition to the hospitality tents, there are air-conditioned corporate suites high above the track.  A catered dinner is served on linen-covered tables.  As of the late 90’s, there were four corporate suites renting for $100,000 a year to PepsiCo, RJR Nabisco, Unocal, and Anheuser-Busch.

Hagstrom writes that the makeup of race fans has changed over the years.  As of the late 90’s, 30 percent of stock car fans had an annual income of over $50,000, and 38 percent were female.  Hagstrom says stock car racing at the beginning was for “the rowdiest and roughest,” but today’s stock car races are family events.

Hagstrom continues:

The infield, the open area inside the track itself, has become the last bastion of stock car racing’s most passionate fans.  They travel hundreds of miles in their recreational vehicles, campers, pickup trucks, and vans, and they are equipped to make the infield their home for three days.  They are determined not to miss one minute of racing:  the qualifying runs and the practices on Friday, the support race and then more practicing on Saturday, and the featured race on Sunday, with all its festivities.  (page 11)

In the 1950’s and 1960’s, writes Hagstrom, the infield was quite a bit like the Wild West.  The local sheriff even set up a temporary jail there.  In recent decades, however, track owners have substantially improved their infields in order to charge higher prices.

NASCAR has a number of rules:

NASCAR rules are designed to promote close, competitive racing, which the fans want, in a way that maintains parity and does not unduly favor the well-financed teams.  The paramount force behind all the rules, however, is the safety of both the drivers and the fans.  Everyone in NASCAR is aware of the potential for injury with so many machines running in close quarters at such high speeds, and so the rules and regulations are vigorously enforced.

A NASCAR-sanctioned motorsports event, like the Southern 500, is officiated by NASCAR and conducted in accordance with its rules.  These rules cover not only the race, but all periods leading up to and following it, including registration, inspections, time trials, qualifying races, practices, and postrace inspections.  (page 13)

Corporate sponsorship is the foundation of the sport, says Hagstrom.  There are many different types of businesses—including many Fortune 500 companies—that are NASCAR corporate sponsors.  The highest form of advertising in motorsports is to sponsor a team.  As with other sports, the greatest leverage NASCAR has had in selling itself to advertisers is based increasingly on its television audience.

As far as levels of competition, the highest level in NASCAR today (2017) is the Monster Energy NASCAR Cup Series.  (The highest level used to be the Winston Cup.)

Two neat things about NASCAR racing, historically:

  • pay is based on performance
  • drivers are humble and grateful

Hagstrom explains:

…It is the sound of humility and gratitude and enthusiasm.  It is the sound of athletes who tell you—and who mean it—that they are no bigger than the fans who come out and support them.  It is the sound of autographs being signed, of smiling pictures being snapped, and of kids collecting heroes.  It is what is best about American sports… (page 18)

 

RULES OF THE ROAD

Hagstrom recounts the history of the sport:

Stock car racing was born in the South, the boisterous legacy of daredevil moonshine drivers who tore up and down the back roads of Appalachia during the 1930s and 1940s.

For years, hardscrabble farmers in the mountains had been making their own whiskey, just as they made their own tools, clothes, and furniture.  But it wasn’t until Prohibition in 1919 that mountaineers discovered that the sippin’ whiskey they made for themselves was worth cash money to the folks in town.  For many mountain families, bootlegging was their only source of income in the winter months.

…By the 1940s, the government began sending federal revenue agents into the Appalachian Mountains to stop illegal whiskey manufacturing.  To avoid the revenuers, the mountaineers hid their stills and began to work only at night—hence the term “moonshiner.”  Drivers would begin their delivery runs after midnight and be safely home before daybreak…

In a stepped-up game of cat and mouse, the revenuers searched for stills by day and staked out the roads at night.  To stay ahead, moonshine drivers constantly tinkered with their cars, trying to eke out a few extra horsepower and to improve the suspension so the car would handle better.  It wasn’t easy barreling over hills and valleys in the middle of the night, dirt kicking up everywhere, and your car loaded down with twenty-five cases of white lightning… (pages 21-22)

Sometime in the mid-1930s “in a cow pasture in the town of Stockbridge, Georgia,” a few moonshiners started arguing about who had the fastest car and who was the better driver.  Someone made a quarter-mile dirt track in a farmer’s field.

After a few races, more and more people started to show up to watch.  The farmer fenced off the area and started charging admission.  The drivers’ pay also increased until it became more profitable to win a race than to run moonshine.  Hagstrom:

After driving over 100 miles an hour on the dirt roads of North Carolina in the middle of the night while being chased by revenuers, the moonshiners looked at these smooth, level, quarter-mile racetracks, crossed their arms, rocked back on their heels, and grinned.

The Flock brothers—Tim, Bob, and Fonty—drove for their uncle, Peachtree Williams, who had one of the biggest stills in Georgia.  Buddy Shuman also ran whiskey and drove stock cars.  But the most famous bootlegger ever to drive stock cars was Junior Johnson.  Junior ran whiskey for his daddy, Glenn Johnson, who had the biggest and most profitable moonshine operation in Wilkes County, North Carolina.  (page 23)

For instance, Junior had perfected the power slide, which allowed him to speed up into the turns rather than slow down.

But modern stock car racing owes its success to one man:  William Henry Getty France, or “Big Bill” France.  Big Bill France raced in the Maryland suburbs of Washington, D.C., and he worked as a mechanic in garages and service stations.

France and Annie B, his wife, went to Florida to live in Miami.  But after stopping at Daytona Beach, France decided to settle there.  He opened up his own gas station.  Before long, his garage was a favorite hangout of mechanics and race car drivers.

Daytona Beach had hard-packed sandy beaches 500 yards wide and 25 miles long.  It was already known as the Speed Capital of the world.  In 1936 and 1937, writes Hagstrom, the city fathers of Daytona Beach put together races.  (This was partly out of concern for racers who were leaving for the Bonneville Salt Flats of Utah.)  But these races were poorly managed.  So they sought out Bill France to manage the 1938 race.

France was already well-liked by most mechanics and race car drivers in the area.  He was also a natural promoter.  And because he had been a racer, he knew what worked and what didn’t in putting on a race.

France convinced a local restaurateur, Charlie Reese, to pay for the race as long as Bill France did all the work.  They would split the profits.  The race was a great success.

Soon thereafter, France heard about an oval dirt track for rent in Charlotte, North Carolina.  France decided to sponsor a 100-mile National Championship race there.  But local reporters hesitated to cover the race because there was no sanctioning body and no official rules.

France couldn’t convince the AAA to sanction the race, so he organized his own sanctioning body, the National Championship Stock Car Circuit (NCSCC).  The NCSCC would sponsor monthly races at various tracks, with winners determined by a cumulative point system and paid from a winners’ fund.  France found someone to run his service station in Daytona Beach—and he got his wife Annie B to handle accounting—so that he could focus completely on setting up the system he envisioned.

1947 was the first full year for the National Championship Stock Car Circuit, and it was a great success.  The $3,000 points fund was divided among the top finishers.  The bootlegger Fonty Flock won first place.

The problem was that stock car racing at the time didn’t have a good reputation.  France knew it needed a central authority to govern all drivers, all car owners, and all track owners.  So France invited the most influential people from racing to Daytona Beach for a year-end meeting about the future of stock car racing.

France described his vision to his colleagues, including a national point system and winners’ fund.  Hagstrom adds:

…The rules, he declared, would have to be consistent, enforceable, and ironclad.  Cheating would not be allowed.  The regulations would be designed to ensure close competition, for they all knew that close side-by-side racing was what fans cheered for.  Finally, he argued, the organizing body should promote a racing division dedicated solely to standard street stock cars, the same cars that could be bought at automobile dealerships.  Fans would love these races, France argued, because they could identify with the cars.  (page 29)

The group voted to form a national organization of stock car racing.  France was elected president.  And they decided to incorporate the entity.  The National Association for Stock Car Auto Racing (NASCAR) was incorporated on February 15, 1948.

A technical committee set the rules for engine size, safety, and fair competition.  Only American-made cars were allowed.  NASCAR also decided to guarantee purses for the races it sanctioned.  And they established a national point system.

NASCAR today does a very detailed set of inspections.  And the rules are still designed to ensure parity and safety.  As a part of parity, costs are strictly controlled.

 

IT TAKES MONEY TO RACE

Hagstrom writes:

Sponsorship is a form of marketing in which companies attach their name, brand, or logo to an event for the purpose of achieving future profit.  It is not the same as advertising.  Both strategies seek the same end result—corporate profit—but go about it in different ways.  Advertising is a direct and overt message to consumers.  If successful, it stimulates a near-term purchase.  Sponsorship, on the other hand, generates a more subtle message that, if successful, creates a lasting bond between consumers and the company.  (page 49)

Corporate entertainment can be an effective marketing tool.

If the goal of sponsorship is to increase sales, that can be measured over specific time periods.  The same goes for other goals of sponsorship, including increasing worker productivity.

It’s more difficult to measure the impact of stock car racing sponsorship on corporate images over time.  But historically, consumers have been extremely positive towards nearly all companies that sponsor stock car racing.

Hagstrom says it is impossible to attend a NASCAR race without feeling a great deal of emotion.  The cars are so powerful and quick, and the competition is so close and intense, that you cannot avoid being impressed if you’ve never been to a race before.

In one survey (in the late 90’s), stock car racing fans were able to identify more than 200 different companies or brands connected with stock car racing.  Of all the companies mentioned, only 1 percent were incorrectly named by the fans, notes Hagstrom.  This is simply incredible.

Drivers know that their teams couldn’t race without corporate sponsors.  And fans know that ticket prices would be much higher without corporate sponsors.

 

PRIME TIME

Hagstrom writes:

The reason NASCAR events do well all season long is the same reason the other sports do so well during the playoffs:  the thrill of seeing the sport’s best athletes compete in a one-time event.  By the time baseball, basketball, and football get to the playoffs, the very best teams are facing each other.  Each game in a playoff series takes on an intensity that increases geometrically;  as the stakes rise, so does the excitement.  So too does the sense of urgency.  Fans know that playoffs and championship games will be played only once, and they had better not miss them.  (page 83)

Although a variety of camera angles and close-up views allow fans to follow NASCAR races on television better than ever before, there is still nothing like seeing a NASCAR race live.

 

THE MEANEST MILE

Hagstrom:

Although Darlington Raceway is credited with being NASCAR’s first superspeedway, world-famous Daytona is the track most responsible for launching the sport of stock car racing into the modern era.  Ask any driver his reaction on seeing Daytona for the first time and you will hear words like “amazing,” “incredible,” and “intimidating.”…  

Without question, Daytona was built for speed.  It’s 2.5 miles long, with big sweeping turns banked at 31 degrees.  Fireball Roberts, another famous 1960s NASCAR driver, was eager.  “This is the track where you can step on the accelerator and let it roll.  You can flatfoot it all the way.”  (pages 107-108)

Hagstrom then describes Talladega (as of the late 90’s):

Talladega stretched the imagination.  At 2.66 miles long, it was the longest and soon the fastest speedway.  It was here that Bill Elliott drove the fastest lap in NASCAR history—212 mph.  Drivers, once they built experience, began racing here at speeds in excess of 200 mph.  Because Talladega is wide (one lane wider than Daytona), racing three abreast became the norm.  The intensity of competition ratcheted up several levels.

At last, racers had found a track that was built for speeds faster than most were comfortable driving.  NASCAR had finally answered its own question: Just how fast is fast enough?  (pages 108-109)

Track owners have the following sources of revenue:

  • General admission and luxury suites
  • Television and radio broadcast fees
  • Sponsorship fees and advertising
  • Concession, program, and merchandise sales
  • Hospitality tents and souvenir trailers

Expenses include:

  • Sanctioning fee
  • Prize money
  • Operating costs

Selling tickets has been the key for decades.  As of the late 90’s, grandstand seats, suites, and infield parking accounted for 70 percent of a track’s revenue, notes Hagstrom.  Other sources of revenue include concessions, souvenirs, signage, and broadcast rights.

 

TRUE AMERICAN HEROES

From 1973 to 1975, Dale Earnhardt was living hand to mouth and trying to save money to race.  Earnhardt finally got a chance to race at the World 600 at the Charlotte Motor Speedway.  Still, it takes years before a driver can race at NASCAR’s highest level.  Finally, in 1978, Earnhardt came in fourth place at Atlanta International Raceway.  Dale Earnhardt earned Rookie of the Year in 1979.  And he won the championship in 1980.  By the late 90’s, Earnhardt was arguably the greatest stock car racer of all time.

NASCAR race fans are probably the most passionate fans in the world, or at least have been historically.  Without fans buying tickets—and souvenirs and other products—stock car racing as it is would not exist.  But, unlike most other major sports historically, NASCAR fans can walk up to their favorite athletes and talk with them.

Hagstrom writes:

There is much about NASCAR racing that draws people to it.  For one thing, it is easy to identify with the activity.  Almost every adult in America knows how to drive a car, and most can remember the teenage thrill of driving fast.  Many fans own cars that, except for the paint job, look just like cars on the racetrack.  Unlike other sports, you don’t have to be a certain size, weight, or height to be a race driver.  So it’s not too much of a stretch for fans to imagine themselves behind the wheel of those powerful cars.

Something in the human psyche is attracted to danger, and that too is part of the appeal.  Today’s race cars are many times safer than today’s ordinary passenger vehicles;  nonetheless, there is always the sense that something spectacular could happen at any moment.  Finally, racing is inherently exciting in a way that many other sports are not.  The noise, the vibration, the speed all combine to affect observers in a powerful, almost visceral way.

All those factors, however, would not be enough to explain the loyalty of the NASCAR fans were it not for one other critical ingredient:  the intense emotional bond that exists between fans and their drivers.  That bond rests on a foundation of courtesy, humility, and respect that runs both ways.  The drivers’ attitude toward their fans is the unique factor that sets NASCAR apart and makes its drivers genuine heroes.  (pages 152-153)

 

FORTY-TWO TEAMS ON THE SAME FIELD AT THE SAME TIME

To win at the highest level, teams need not only a great driver, but a fast car and an excellent crew.  The crew chief is a crucial position.

…In all matters relating to the technical aspects of the car, including building it in the shop and monitoring how it performs at the track, the decisions rest with the crew chief.  The crew chief hires the race shop personnel, including a shop foreman, engine builders, fabricators, machinists, engineers, mechanics, gear/transmission specialists, a parts manager, and a transport driver.  (page 163)

Note again that Hagstrom was writing in the late 90’s.  In 2017, computers are far more powerful and are, accordingly, more essential in car racing generally.

On race day, the race crew is essential.  As of the late 90’s, they could change all four tires and refuel the car in twenty seconds.

In qualifying runs, typically there are more teams than available slots.  Qualifying often depends on tenths of a second.  Fine-tuning the race car requires a high degree of skill and teamwork.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Observations on History

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 27, 2017

The Lessons of History—first published in 1968—is the result of a lifetime of work by the outstanding historians Will and Ariel Durant.  Top investor Ray Dalio, founder and leader of Bridgewater Associates, has recommended the book as a succinct set of observations on history.

 

History may not repeat itself, but it does rhyme.  – Mark Twain

 

BIOLOGY AND HISTORY

There are three biological lessons of history:

  • Life is competition.
  • Life is selection.
  • Life must breed.

The Durants write that our acquisitiveness, greediness, and pugnacity are remnants of our evolutionary history as hunters and gatherers.  In those days, we had to eat as much as possible when we managed to get food from a successful hunt.  We also had to hoard food (and other goods) whenever possible.

As for selection, there is a virtually infinite variety of random differences in people.  And the environment itself can often be random and unpredictable.  People who win the Ovarian Lottery—to use Warren Buffett’s term—don’t just have talents; rather, they have talents well-suited to a specific environment.  Here’s Buffett:

As my friend Bill Gates says, if I’ve been born in some different place or some different time I’d have been some animal’s lunch.  I’d have been running real fast, and the animal would have been chasing me and I’d say “I allocate capital” and the animal would say “well, those are the kind that taste the best”.  I’ve been in the right place at the right time, and I’m lucky, I think a fair amount of that luck should be shared with others.

In 2013, someone from a group of students asked Warren Buffett how his understanding of markets affected his political views.  Buffett replied:

I wouldn’t say knowledge of markets has.  My political views were formed by this process.  Just imagine that it is 24 hours before you are born.  A genie comes and says to you in the womb, “You look like an extraordinarily responsible, intelligent, potential human being.  Going to emerge in 24 hours and it is an enormous responsibility I am going to assign to you—determination of the political, economic and social system into which you are going to emerge.  You set the rules, any political system, democracy, parliamentary, anything you wish, can set the economic structure, communistic, capitalistic, set anything in motion and I guarantee you that when you emerge this world will exist for you, your children and grandchildren.  What’s the catch?  One catch—just before you emerge you have to go through a huge bucket with 7 billion slips, one for each human.  Dip your hand in and that is what you get—you could be born intelligent or not intelligent, born healthy or disabled, born black or white, born in the US or in Bangladesh, etc.  You have no idea which slip you will get.  Not knowing which slip you are going to get, how would you design the world?  Do you want men to push around females?  It’s a 50/50 chance you get female.  If you think about the political world, you want a system that gets what people want.  You want more and more output because you’ll have more wealth to share around.  The US is a great system, turns out $50,000 GDP per capital, 6 times the amount when I was born in just one lifetime.  But not knowing what slip you get, you want a system that once it produces output, you don’t want anyone to be left behind.  You want to incentivize the top performers, don’t want equality in results, but do want something that those who get the bad tickets still have a decent life.  You also don’t want fear in people’s minds—fear of lack of money in old age, fear of cost of health care.  I call this the “Ovarian Lottery”.  My sisters didn’t get the same ticket.  Expectations for them were that they would marry well, or if they work, would work as a nurse, teacher, etc.  If you are designing the world knowing 50/50 male or female, you don’t want this type of world for women – you could get female.  Design your world this way; this should be your philosophy.  I look at Forbes 400, look at their figures and see how it’s gone up in the last 30 years.  Americans at the bottom are also improving, and that is great, but we don’t want that degree of inequality.  Only governments can correct that.  Right way to look at it is the standpoint of how you would view the world if you didn’t know who you would be.  If you’re not willing to gamble with your slip out of 100 random slips, you are lucky!  The top 1% of 7 billion people.  Everyone is wired differently.  You can’t say you do everything yourself.  We all have teachers, and people before us who led us to where we are.  We can’t let people fall too far behind.  You all definitely got good slips.

Link:  http://blogs.rhsmith.umd.edu/davidkass/uncategorized/warren-buffetts-meeting-with-university-of-maryland-mbams-students-november-15-2013/

As for the third biological lesson of history, that life must breed, Will and Ariel Durant explain:

Nature has no use for organisms, variations, or groups that cannot reproduce abundantly.  She has a passion for quantity as prerequisite to the selection of quality; she likes large litters, and relishes the struggle that picks the surviving few; doubtless she looks on approvingly at the upstream race of a thousand sperms to fertilize one ovum.  She is more interested in the species than in the individual, and makes little difference between civilization and barbarism.  She does not care that a high birth rate has usually accompanied a culturally low civilization, and a low birth rate a civilization culturally high; and she (meaning Nature as the process of birth, variation, competition, selection, and survival) sees to it that a nation with a low birth rate shall be periodically chastened by some more virile and fertile group.  (page 21)

 

RACE AND HISTORY

Will and Ariel Durant sum it up:

“Racial” antipathies have some origin in ethnic origin, but they are also generated, perhaps predominantly, by differences of acquired culture—of language, dress, habits, morals, or religion.  There is no cure for such antipathies except a broadened education.  A knowledge of history may teach us that civilization is a co-operative product, that nearly all peoples have contributed to it; it is our common heritage and debt; and the civilized soul will reveal itself in treating every man or woman, however lowly, as a representative of one of these creative and contributory groups.  (page 31)

 

CHARACTER AND HISTORY

Will and Ariel Durant:

Evolution in man during recorded time has been social rather than biological: it has proceeded not by heritable variations in the species, but mostly by economic, political, intellectual, and moral innovation transmitted to individuals and generations by imitation, custom, or education.  Custom and tradition within a group correspond to type and heredity in the species, and to instincts in the individual; they are ready adjustments to typical and frequently repeated situations.  New situations, however, do arise, requiring novel, unstereotyped responses; hence development, in the higher organisms, requires a capacity for experiment and innovation—the social correlates of variation and mutation.  Social evolution is an interplay of custom with origination.  (page 34)

Occasionally some new challenge or situation has required the new (or sometimes very old) ideas of an innovator—whether scientist, inventor, or leader (business, political, spiritual).

 

MORALS AND HISTORY

Will and Ariel Durant note that what are today considered vices may once have been virtues—i.e., advantages for survival.  They observe that the transition from hunting to agriculture called for new virtues:

We may reasonably assume that the new regime demanded new virtues, and changed some old virtues into vices.  Industriousness became more vital than bravery, regularity and thrift more profitable than violence, peace more victorious than war.  Children were economic assets; birth control was made immoral.  On the farm, the family was the unit of production under the discipline of the father and the seasons, and paternal authority had a firm economic base.  (page 38)

Gradually and then rapidly, write the Durants, the Industrial Revolution changed the economic form and moral superstructure of European and American life.  Many people went to work as individuals in factories, and many of them worked with machines.

The Durants point out that much written history is, as Voltaire said, “a collection of the crimes, follies, and misfortunes” of humankind.  However, this written history typically does not include many good and noble deeds that actually occurred:

We must remind ourselves again that history as usually written (peccavimus) is quite different from history as usually lived: the historian records the exceptional because it is interesting—because it is exceptional.  If all those individuals who had no Boswell had found their numerically proportionate place in the pages of historians we should have a duller but juster view of the past and of man.  Behind the red facade of war and politics, misfortune and poverty, adultery and divorce, murder and suicide, were millions of orderly homes, devoted marriages, men and women kindly and affectionate, troubled and happy with children.  Even in recorded history we find so many instances of goodness, even of nobility, that we can forgive, though not forget, the sins.  The gifts of charity have almost equaled the cruelties of battlefields and jails.  How many times, even in our sketchy narratives, have we seen men helping one another… (page 41)

 

RELIGION AND HISTORY

Religion has helped with educating the young.  And religion has given meaning and dignity to even the lowliest existence, write the Durants.  Religion gives many people hope.  However, religion has stumbled at important times:

The majestic dream broke under the attacks of nationalism, skepticism, and human frailty.  The Church was manned with men, who often proved biased, venal, or extortionate.  France grew in wealth and power, and made the papacy her political tool.  Kings became strong enough to compel a pope to dissolve that Jesuit order which had so devotedly supported the popes.  The Church stooped to fraud, as with pious legends, bogus relics, and dubious miracles… More and more the hierarchy spent its energies in promoting orthodoxy rather than morality, and the Inquisition almost fatally disgraced the Church.  Even while preaching peace the Church fomented religious wars in sixteenth-century France and the Thirty Years’ War in seventeenth-century Germany.  It played only a modest part in the outstanding advance of modern morality—the abolition of slavery.  It allowed the philosophers to take the lead in the humanitarian movements that have alleviated the evils of our time.  (page 45)

 

ECONOMICS AND HISTORY

Will and Ariel Durant open the chapter:

Unquestionably the economic interpretation illuminates much history.  The money of the Delian Confederacy built the Parthenon; the treasury of Cleopatra’s Egypt revitalized the exhausted Italy of Augustus, gave Virgil an annuity and Horace a farm.  The Crusades, like the wars of Rome with Persia, were attempts of the West to capture trade routes to the East; the discovery of America was a result of the failure of the Crusades.  The banking house of the Medici financed the Florentine Renaissance; the trade and industry of Nuremberg made Durer possible.  The French Revolution came not because Voltaire wrote brilliant satires and Rousseau sentimental romances, but because the middle classes had risen to economic leadership, needed legislative freedom for their enterprise and trade, and itched for social acceptance and political power.  (pages 52-53)

Bankers have often risen to the top of the economic pyramid, since they have been able to direct the flow of capital.

The Durants note the importance of the profit motive in moving the economy forward:

The experience of the past leaves little doubt that every economic system must sooner or later rely upon some form of profit motive to stir individuals and groups to productivity.  Substitutes like slavery, police supervision, or ideological enthusiasm prove too unproductive, too expensive, and too transient.  (page 54)

Wealth tends naturally to concentrate in the hands of the most able.  Periodically it must be redistributed.

…The government of the United States, in 1933-52 and 1960-65, followed Solon’s peaceful methods, and accomplished a moderate and pacifying redistribution; perhaps someone had studied history.  The upper classes in America cursed, complied, and resumed the concentration of wealth.

We conclude that the concentration of wealth is natural and inevitable, and is periodically alleviated by violent or peaceable partial redistribution.  (page 57)

 

SOCIALISM AND HISTORY

Capitalism—especially in America—has unleashed amazing productivity and will continue to do so for a long time:

The struggle of socialism against capitalism is part of the historic rhythm in the concentration and dispersion of wealth.  The capitalist, of course, has fulfilled a creative function in history: he has gathered the savings of the people into productive capital by the promise of dividends or interest; he has financed the mechanization of industry and agriculture, and the rationalization of distribution; and the result has been such a flow of goods from producer to consumer as history has never seen before.  He has put the liberal gospel of liberty to his use by arguing that businessmen left relatively free from transportation tolls and legislative regulation can give the public a greater abundance of food, homes, comfort, and leisure than has ever come from industries managed by politicians, manned by governmental employees, and supposedly immune to the laws of supply and demand.  In free enterprise the spur of competition and the zeal and zest of ownership arouse the productiveness and inventiveness of men; nearly every economic ability sooner or later finds its niche and reward in the shuffle of talents and the natural selection of skills; and a basic democracy rules the process insofar as most of the articles to be produced, and the services to be rendered, are determined by public demand rather than by governmental decree.  Meanwhile competition compels the capitalist to exhaustive labor, and his products to ever-rising excellence.  (pages 58-59)

Throughout most of history, socialist structures or centralized control by government have guided economies.  The Durants offer many examples, including that of Egypt:

In Egypt under the Ptolemies (323 B.C. – 30 B.C.) the state owned the soil and managed agriculture: the peasant was told what land to till, what crops to grow; his harvest was measured and registered by government scribes, was threshed on royal threshing floors, and was conveyed by a living chain of fellaheen into the granaries of the king.  The government owned the mines and appropriated the ore.  It nationalized the production and sale of oil, salt, papyrus, and textiles.  All commerce was controlled and regulated by the state; most retail trade was in the hands of state agents selling state-produced goods.  Banking was a government monopoly, but its operation might be delegated to private firms.  Taxes were laid upon every person, industry, process, product, sale, and legal document.  To keep track of taxable transactions and income, the government maintained a swarm of scribes and a complex system of personal and property registration.  The revenue of this system made the Ptolemaic the richest state of the time.  Great engineering enterprises were completed, agriculture was improved, and a large proportion of the profits went to develop and adorn the country and to finance its cultural life.  About 290 B.C. the famous Museum and Library of Alexandria were founded.  Science and literature flourished; at uncertain dates in this Ptolemaic era some scholars made the “Septuagint” translation of the Pentateuch into Greek.  (pages 59-60)

The Durants then tell the story of Rome under Diocletian:

…Faced with increasing poverty and restlessness among the masses, and with imminent danger of barbarian invasion, he issued in A.D. 301 an Edictum de pretiis, which denounced monopolists for keeping goods from the market to raise prices, and set maximum prices and wages for all important articles and services.  Extensive public works were undertaken to put the unemployed to work, and food was distributed gratis, or at reduced prices, to the poor.  The government—which already owned most mines, quarries, and salt deposits—brought nearly all major industries and guilds under detailed control.  “In every large town,” we are told, “the state became a powerful employer, … standing head and shoulders above the private industrialists, who were in any case crushed by taxation.”  When businessmen predicted ruin, Diocletian explained that the barbarians were at the gate, and that individual liberty had to be shelved until collective liberty could be made secure.  The socialism of Diocletian was a war economy, made possible by fear of foreign attack.  Other factors equal, internal liberty varies inversely as external danger.

The task of controlling men in economic detail proved too much for Diocletian’s expanding, expensive, and corrupt bureaucracy.  To support this officialdom—the army, the court, public works, and the dole—taxation rose to such heights that men lost incentive to work or earn, and an erosive contest began between lawyers finding devices to evade taxes and lawyers formulating laws to prevent evasion.  Thousands of Romans, to escape the taxgatherer, fled over the frontiers to seek refuge among the barbarians.  Seeking to check this elusive mobility, and to facilitate regulation and taxation, the government issued decrees binding the peasant to his field and the worker to his shop until all his debts and taxes had been paid.  In this and other ways medieval serfdom began.  (pages 60-61)

The Durants then recount several attempts at socialism in China, including under the philosopher-king Wang Mang:

Wang Mang (r. A.D. 9-23) was an accomplished scholar, a patron of literature, a millionaire who scattered his riches among his friends and the poor.  Having seized the throne, he surrounded himself with men trained in letters, science, and philosophy.  He nationalized the land, divided it into equal tracts among the peasants, and put an end to slavery.  Like Wu Ti, he tried to control prices by the accumulation or release of stockpiles.  He made loans at low interest to private enterprise.  The groups whose profits had been clipped by his legislation united to plot his fall; they were helped by drought and flood and foreign invasion.  The rich Liu family put itself at the head of a general rebellion, slew Wang Mang, and repealed his legislation.  Everything was as before.  (page 62)

Later, the Durants tell of the longest-lasting socialist government: the Incas in what is now Peru.  Everyone was an employee of the state.  It seems all were happy, given the promise of security and food.

There was also a Portuguese colony in which 150 Jesuits organized 200,000 Indians in a socialist society (c. 1620 – 1750).  Every able-bodied person was required to work eight hours a day.  The Jesuits served as teachers, physicians, and judges.  The penal system did not include capital punishment.  The Jesuits also provided for recreation, including choral performances.  All were peaceful and happy, write the Durants.  And they defended themselves well when attacked.  The socialist experiment ended when the Spanish in America wanted immediately to occupy the Portuguese colony because it was rumored to contain gold.  The Portuguese government under Pombal—at the time, in disagreement with the Jesuits—ordered the priests and the natives to leave the settlements, say the Durants.

The Durants conclude the chapter:

… [Marx] interpreted the Hegelian dialectic as implying that the struggle between capitalism and socialism would end in the complete victory of socialism; but if the Hegelian formula of thesis, antithesis, and synthesis is applied to the Industrial Revolution as thesis, and to capitalism versus socialism as antithesis, the third condition would be a synthesis of capitalism and socialism; and to this reconciliation the Western world visibly moves.  (page 66)

Note that the Durants were writing in 1968.

 

GOVERNMENT AND HISTORY

Will and Ariel Durant:

Alexander Pope thought that only a fool would dispute over forms of government.  History has a good word to say for all of them, and for government in general.  Since men love freedom, and the freedom of individuals in society requires some regulation of conduct, the first condition of freedom is its own limitation; make it absolute and it dies in chaos.  So the prime task of government is to establish order; organized central force is the sole alternative to incalculable and disruptive forces in private hands.  (page 68)

It’s difficult to say when people were happiest.  Since I believe strongly that the most impactful technological breakthroughs ever—including but not limited to AI and genetics—are going to occur in the next 20-80 years, I would argue that we as humans are a long way away from the happiness we can achieve in the future.  (I also think Steven Pinker is right—in The Better Angels of Our Nature—that people are becoming less violent, slowly but surely.)

But if you had to pick a historical period, I would defer to the great historians to make this selection.  The Durants:

…”If,” said Gibbon, “a man were called upon to fix the period during which the condition of the human race was most happy and prosperous, he would without hesitation name that which elapsed from the accession of Nerva to the death of Marcus Aurelius.  Their united reigns are possibly the only period of history in which the happiness of a great people was the sole object of government.”  In that brilliant age, when Rome’s subjects complimented themselves on being under her rule, monarchy was adoptive: the emperor transmitted his authority not to his offspring but to the ablest man he could find; he adopted this man as his son, trained him in the functions of government, and gradually surrendered to him the reins of power.  The system worked well, partly because neither Trajan nor Hadrian had a son, and the sons of Antonius Pius died in childhood.  Marcus Aurelius had a son, Commodus, who succeeded him because the philosopher failed to name another heir; soon chaos was king.  (page 69)

The Durants then write that monarchs, on the whole, do not have a great record.

Hence most governments have been oligarchies—ruled by a minority, chosen either by birth, as in aristocracies, or by a religious organization, as in theocracies, or by wealth, as in democracies.  It is unnatural (as even Rousseau saw) for a majority to rule, for a majority can seldom be organized for united and specific action, and a minority can.  If the majority of abilities is contained in a minority of men, minority government is as inevitable as the concentration of wealth; the majority can do no more than periodically throw out one minority and set up another.  The aristocrat holds that political selection by birth is the sanest alternative to selection by money or theology or violence.  Aristocracy withdraws a few men from the exhausting and coarsening strife of economic competition, and trains them from birth, through example, surroundings, and minor office, for the tasks of government; these tasks require a special preparation that no ordinary family or background can provide.  Aristocracy is not only a nursery of statesmanship, it is also a repository and vehicle of culture, manners, standards, and tastes, and serves thereby as a stabilizing barrier to social fads, artistic crazes, or neurotically rapid changes in the moral code… (page 70)

When aristocracies grew too selfish and myopic, however, and thereby slowed progress, the new rich combined with the poor to overthrow them, say the Durants.

The Durants point out that most revolutions probably would have occurred without violence through gradual economic development.  They mention the rise of America as an example.  They also note that the English aristocracy was gradually replaced by the money-controlling business class in England.  The Durants then generalize:

The only real revolution is in the enlightenment of the mind and the improvement of character, the only real emancipation is individual, and the only real revolutionists are philosophers and saints.  (page 72)

A bit later, the Durants discuss the battles between the poor and the rich in Athenian democracy around the time of Plato’s death (347 B.C.).

…The poor schemed to despoil the rich by legislation, taxation, and revolution; the rich organized themselves for protection against the poor.  The members of some oligarchic organizations, says Aristotle, took a solemn oath: “I will be an adversary of the people” (i.e., the commonalty), “and in the Council I will do it all the evil that I can.”  “The rich have become so unsocial,” wrote Isocrates about 366 B.C., “that those who own property had rather throw their possessions into the sea than lend aid to the needy, while those who are in poorer circumstances would less gladly find a treasure than seize the possessions of the rich.”  (pages 74-75)

Much of this class warfare became violent.  And Greece was divided when Philip of Macedon attacked in 338 B.C.

The Durants continue:

Plato’s reduction of political evolution to a sequence of monarchy, aristocracy, democracy, and dictatorship found another illustration in the history of Rome.  During the third and second centuries before Christ a Roman oligarchy organized a foreign policy and a disciplined army, and conquered and exploited the Mediterranean world.  The wealth so won was absorbed by the patricians, and the commerce so developed raised to luxurious opulence the upper middle class.  Conquered Greeks, Orientals, and Africans were brought to Italy to serve as slaves on the latifundia; the native farmers, displaced from the soil, joined the restless, breeding proletariat in the cities, to enjoy the monthly dole of grain that Caius Gracchus had secured for the poor in 123 B.C.  Generals and proconsuls returned from the provinces loaded with spoils for themselves and the ruling class; millionaires multiplied; mobile money replaced land as the source or instrument of political power; rival factions competed in the wholesale purchase of candidates and votes; in 53 B.C. one group of voters received ten million sesterces for its support.  When money failed, murder was available: citizens who had voted the wrong way were in some instances beaten close to death and their houses were set on fire.  Antiquity had never known so rich, so powerful, and so corrupt a government.  The aristocrats engaged Pompey to maintain their ascendancy; the commoners cast their lot with Caesar; ordeal of battle replaced the auctioning of victory; Caesar won, and established a popular dictatorship.  Aristocrats killed him, but ended by accepting the dictatorship of his grandnephew and stepson Augustus (27 B.C.).  Democracy ended, monarchy was restored; the Platonic wheel had come full turn.  (pages 75-76)

The Durants describe American democracy as the most universal yet seen.  But the advance of technology—to the extent that it makes the economy more complex—tends to concentrate power even more:

Every advance in the complexity of the economy puts an added premium upon superior ability, and intensifies the concentration of wealth, responsibility, and political power.  (page 77)

Will and Ariel Durant conclude that democracy has done less harm and more good than any other form of government:

…It gave to human existence a zest and camaraderie that outweighed its pitfalls and defects.  It gave to thought and science and enterprise the freedom essential to their operation and growth.  It broke down the walls of privilege and class, and in each generation it raised up ability from every rank and place.  Under its stimulus Athens and Rome became the most creative cities in history, and America in two centuries has provided abundance for an unprecedentedly large proportion of its population.  Democracy has now dedicated itself resolutely to the spread and lengthening of education, and to the maintenance of public health.  If equality of educational opportunity can be established, democracy will be real and justified.  For this is the vital truth beneath its catchwords: that though men cannot be equal, their access to education and opportunity can be made more nearly equal.  The rights of man are not rights to office and power, but the rights of entry into every avenue that may nourish and test a man’s fitness for office and power.  A right is not a gift of God or nature but a privilege which it is good that the individual should have.  (pages 78-79)

 

HISTORY AND WAR

As mentioned earlier, I happen to agree with Steven Pinker’s thesis in The Better Angels of Our Nature:  we humans are slowly but surely becoming less violent as economic and technological progress continues.  But it could still take a very long time before wars stop entirely (if ever).

Will and Ariel Durant were writing in 1968, so they didn’t know that the subsequent 50 years would be (arguably) less violent overall.  In any case, they offer interesting insights into war:

The causes of war are the same as the causes of competition among individuals: acquisitiveness, pugnacity, and pride; the desire for food, land, materials, fuels, mastery.  The state has our instincts without our restraints.  The individual submits to restraints laid upon him by morals and laws, and agrees to replace combat with conference, because the state guarantees him basic protection in his life, property, and legal rights.  The state itself acknowledges no substantial restraints, either because it is strong enough to defy any interference with its will or because there is no superstate to offer it basic protection, and no international law or moral code wielding effective force.  (page 81)

The Durants write that, after freeing themselves from papal control, many modern European states—if they foresaw a war—would cause their people to hate the people in the opposing country.  Today, we know from psychology that when people develop extreme hatreds, they nearly always dehumanize and devalue the human beings they hate and minimize their virtues.  Such extreme hatreds, if unchecked, often lead to tragic consequences.  (The Durants note that wars between European states in the sixteenth century still permitted each side to respect the other’s civilization and achievements.)

Again bearing in mind that the Durants were writing in 1968: historical precedent seemed to indicate that the United States should attack emerging communist powers before they became powerful enough to overcome it.  The Durants:

…There is something greater than history.  Somewhere, sometime, in the name of humanity, we must challenge a thousand evil precedents, and dare to apply the Golden Rule to nations, as the Buddhist King Ashoka did (262 B.C.), or at least do what Augustus did when he bade Tiberius desist from further invasion of Germany (A.D. 9)… “Magnanimity in politics,” said Edmund Burke, “is not seldom the truest wisdom, and a great empire and little minds go ill together.”  (pages 84-85)

Perhaps the humanist will agree with Pinker (as I do) that eventually, however slowly, we will move towards the cessation of war (at least between humans).  If this happens, it may be due largely to unprecedented progress in technology (including but not limited to AI and genetics):  we will gain control of our own evolution and wealth per capita will advance to unimaginable levels.

At the same time, we shouldn’t assume that aliens are necessarily peace-loving.  Perhaps humanity will have to unite in self-defense, say the Durants.

 

GROWTH AND DECAY

Will and Ariel Durant give again their definition of civilization:

We have defined civilization as “social order promoting cultural creation.”  It is political order secured through custom, morals, and law, and economic order secured through a continuity of production and exchange; it is cultural creation through freedom and facilities for the origination, expression, testing, and fruition of ideas, letters, manners, and arts.  It is an intricate and precarious web of human relationships, laboriously built and readily destroyed.  (page 87)

The Durants later add:

History repeats itself in the large because human nature changes with geological leisureliness, and man is equipped to respond in stereotyped ways to frequently occurring situations and stimuli like hunger, danger, and sex.  But in a developed and complex civilization individuals are more differentiated and unique than in primitive society, and many situations contain novel circumstances requiring modifications of instinctive response; custom recedes, reasoning spreads; the results are less predictable.  There is no certainty that the future will repeat the past.  Every year is an adventure.  (page 88)

Growth happens when people meet challenges.

If we ask what makes a creative individual, we are thrown back from history to psychology and biology—to the influence of the environment and the gamble and secret of the chromosomes.  (page 91)

Decay of a civilization or group happens when its political or intellectual leaders fail to meet the challenges of change.

The challenges may come from a dozen sources… A change in the instruments or routes of trade—as by the conquest of the ocean or the air—may leave old centers of civilization becalmed and decadent, like Pisa or Venice after 1492.  Taxes may mount to the point of discouraging capital investment and productive stimulus.  Foreign markets and materials may be lost to more enterprising competition; excess of imports over exports may drain [wealth and reserves].  The concentration of wealth may disrupt the nation in class or race war.  The concentration of population and poverty in great cities may compel a government to choose between enfeebling the economy with a dole and running the risk of riot and revolution.  (page 92)

All great individuals so far have died.  (Future technology may allow us to fix that, perhaps this century.)  But great civilizations don’t really die, say the Durants:

…Greek civilization is not really dead; only its frame is gone and its habitat has changed and spread; it survives in the memory of the race, and in such abundance that no one life, however full and long, could absorb it all.  Homer has more readers now than in his own day and land.  The Greek poets and philosophers are in every library and college; at this moment Plato is being studied by a hundred thousand discoverers of the “dear delight” of philosophy overspreading life with understanding thought.  This selective survival of creative minds is the most real and beneficent of immortalities.

Nations die.  Old regions grow arid, or suffer other change.  Resilient man picks up his tools and his arts, and moves on, taking his memories with him.  If education has deepened and broadened those memories, civilization migrates with him, and builds somewhere another home.  In the new land he need not begin entirely anew, nor make his way without friendly aid; communication and transport bind him, as in a nourishing placenta, with his mother country.  Rome imported Greek civilization and transmitted it to Western Europe; America profited from European civilization and prepares to pass it on, with a technique of transmission never equaled before.

Civilizations are the generations of the racial soul.  As life overrides death with reproduction, so an aging culture hands its patrimony down to its heirs across the years and the seas.  Even as these lines are being written, commerce and print, wires and waves and invisible Mercuries of the air are binding nations and civilizations together, preserving for all what each has given to the heritage of mankind.  (pages 93-94)

 

IS PROGRESS REAL?

If progress is increasing our control of the environment, then obviously progress continues to be made, primarily because scientists, inventors, entrepreneurs, and other leaders continue to push science, technology, and education forward.  The Durants also point out that people are living much longer than ever before.  (Looking forward today from 2017, the human lifespan may double or triple at a minimum; and we may eventually develop the capacity to live virtually forever.)

Will and Ariel Durant then sum up all they have learned:

History is, above all else, the creation and recording of that heritage; progress is its increasing abundance, preservation, transmission, and use.  To those of us who study history not merely as a warning reminder of man’s follies and crimes, but also as an encouraging remembrance of generative souls, the past ceases to be a depressing chamber of horrors; it becomes a celestial city, a spacious country of the mind, wherein a thousand saints, statesmen, inventors, scientists, poets, artists, musicians, lovers, and philosophers still live and speak, teach and carve and sing.  The historian will not mourn because he can see no meaning in human existence except that which man puts into it; let it be our pride that we ourselves may put meaning into our lives, and sometimes a significance that transcends death.  If a man is fortunate he will, before he dies, gather up as much as he can of his civilized heritage and transmit it to his children.  And to his final breath he will be grateful for this inexhaustible legacy, knowing that it is our nourishing mother and our lasting life.  (page 102)

 

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Future of the Mind

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 20, 2017

This week’s blog post covers another book by the theoretical physicist Michio Kaku—The Future of the Mind (First Anchor Books, 2015).

Most of the wealth we humans have created is a result of technological progress (in the context of some form of capitalism plus the rule of law).  Most future wealth will result directly from breakthroughs in physics, artificial intelligence, genetics, and other sciences.  This is why AI is fascinating in general (not just for investing).  AI—in combination with other technologies—may eventually turn out to be the most transformative technology of all time.

 

A PHYSICIST’S DEFINITION OF CONSCIOUSNESS

Physicists have been quite successful historically because of their ability to gather data, to measure ever more precisely, and to construct testable, falsifiable mathematical models to predict the future based on the past.  Kaku explains:

When a physicist first tries to understand something, first he collects data and then he proposes a “model,” a simplified version of the object he is studying that captures its essential features.  In physics, the model is described by a series of parameters (e.g., temperature, energy, time).  Then the physicist uses the model to predict its future evolution by simulating its motions.  In fact, some of the world’s largest supercomputers are used to simulate the evolution of models, which can describe protons, nuclear explosions, weather patterns, the big bang, and the center of black holes.  Then you create a better model, using more sophisticated parameters, and simulate it in time as well.  (page 42)
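
To make this collect-data, propose-a-model, simulate-the-future loop concrete, here is a rough, illustrative sketch in Python (not from Kaku's book; the temperature readings and the straight-line "model" are hypothetical):

import numpy as np

# 1. Collect data: hypothetical measurements of one parameter (temperature) over time.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
temperature = np.array([20.1, 21.9, 24.2, 26.0, 27.8])

# 2. Propose a model described by a few parameters (here a straight line: a*t + b).
a, b = np.polyfit(t, temperature, 1)

# 3. Use the model to simulate its future evolution.
future_t = np.array([5.0, 6.0, 7.0])
print(a * future_t + b)   # predicted temperatures at future times

A better model would add more sophisticated parameters, but the loop itself stays the same: data, model, simulation.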

Kaku then writes that he’s taken bits and pieces from fields such as neurology and biology in order to come up with a definition of consciousness:

Consciousness is a process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., find mates, food, shelter).

Kaku emphasizes that humans use the past to predict the future, whereas most animals are focused only on the present or the immediate future.

Kaku writes that one can rate different levels of consciousness based on the definition.  The lowest level of consciousness is Level 0, where an organism has limited mobility and creates a model using feedback loops in only a few parameters (e.g., temperature).  Kaku gives the thermostat as an example.  If the temperature gets too hot or too cold, the thermostat registers that fact and then adjusts the temperature accordingly using an air conditioner or heater.  Kaku says each feedback loop is “one unit of consciousness,” so the thermostat – with only one feedback loop – would have consciousness of Level 0:1.

Organisms that are mobile and have a central nervous system have Level I consciousness.  There’s a new set of parameters—relative to Level 0—based on changing locations.  Reptiles are an example of Level I consciousness.  The reptilian brain may have a hundred feedback loops based on its senses, and the totality of these feedback loops gives the reptile a “mental picture” of where it is in relation to various objects (including prey), notes Kaku.

Social animals, such as mammals, exemplify Level II consciousness.  The number of feedback loops jumps exponentially, says Kaku.  Many animals have complex social structures.  Kaku explains that the limbic system includes the hippocampus (for memories), the amygdala (for emotions), and the thalamus (for sensory information).

You could rank the specific level of Level II consciousness of an animal by listing the total number of distinct emotions and social behaviors.  So, writes Kaku, if there are ten wolves in the wolf pack, and each wolf interacts with all the others with fifteen distinct emotions and gestures, then a first approximation would be that wolves have Level II:150 consciousness.  (Of course, there are caveats, since evolution is never clean and precise, says Kaku.)
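
As a rough illustration of this feedback-loop counting (a sketch of the scheme as summarized above, not Kaku's code; the thermostat function and its setpoint are hypothetical):

# Level 0: a thermostat has a single feedback loop (temperature), i.e., Level 0:1.
def thermostat(current_temp, setpoint=21.0):
    """One feedback loop: compare temperature to a setpoint and act on it."""
    if current_temp > setpoint:
        return "run air conditioner"
    if current_temp < setpoint:
        return "run heater"
    return "do nothing"

level_0_units = 1  # one feedback loop -> Level 0:1

# Level II (first approximation): pack members times distinct emotions/gestures.
wolves = 10
distinct_emotions_and_gestures = 15
level_2_units = wolves * distinct_emotions_and_gestures  # 150 -> Level II:150

print(thermostat(25.0), level_0_units, level_2_units)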

 

LEVEL III CONSCIOUSNESS: SIMULATING THE FUTURE

Kaku observes that there is a continuum of consciousness from the most basic organisms up to humans.  Kaku quotes Charles Darwin:

The difference between man and the higher animals, great as it is, is certainly one of degree and not of kind.

Kaku defines human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.  This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

Kaku explains that we as humans have so many feedback loops that we need a “CEO”—an expanded prefrontal cortex that can analyze all the data logically and make decisions.  More precisely, Kaku writes that neurologist Michael Gazzaniga has identified area 10, in the lateral prefrontal cortex, which is twice as big in humans as in apes.  Area 10 is where memory, planning, abstract thinking, learning rules, and picking out what is relevant happen.  Kaku says he will refer to this region, roughly speaking, as the dorsolateral prefrontal cortex.

Most animals, by contrast, do not think and plan, but rely on instinct.  For instance, notes Kaku, animals do not plan to hibernate, but react instinctually when the temperature drops.  Predators plan, but only for the immediate future.  Primates plan a few hours ahead.

Humans, too, rely on instinct and emotion.  But humans also analyze and evaluate information, and run mental simulations of the future—even hundreds or thousands of years into the future.  This, writes Kaku, is how we as humans try to make the best decision in pursuit of a goal.  Of course, the ability to simulate various future scenarios gives humans a great evolutionary advantage for things like evading predators and finding food and mates.

As humans, we have so many feedback loops, says Kaku, that it would be a chaotic sensory overload if we didn’t have the “CEO” in the dorsolateral prefrontal cortex.  We think in terms of chains of causality in order to predict future scenarios.  Kaku explains that the essence of humor is simulating the future but then having an unexpected punch line.

Children play games largely in order to simulate specific adult situations.  When adults play various games like chess, bridge, or poker, they mentally simulate various scenarios.

Kaku explains the mystery of self-awareness:

Self-awareness is creating a model of the world and simulating the future in which you appear.

As humans, we constantly imagine ourselves in various future scenarios.  In a sense, we are continuously running “thought experiments” about our lives in the future.

Kaku writes that the medial prefrontal cortex appears to be responsible for creating a coherent sense of self out of the various sensations and thoughts bombarding our brains.  Furthermore, the left brain fits everything together in a coherent story even when the data don’t make sense.  Dr. Michael Gazzaniga was able to show this by running experiments on split-brain patients.

Kaku speculates that humans can reach better conclusions if the brain receives a great deal of competing data.  With enough data and with practice and experience, the brain can often reach correct conclusions.

At the beginning of the next section—Mind Over Matter—Kaku quotes Harvard psychologist Steven Pinker:

The brain, like it or not, is a machine.  Scientists have come to that conclusion, not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain.

 

DARPA

DARPA is the Pentagon’s Defense Advanced Research Projects Agency.  Kaku writes that DARPA has been central to some of the most important technological breakthroughs of the twentieth century.

President Dwight Eisenhower set up DARPA originally as a way to compete with the Russians after they launched Sputnik into orbit in 1957.  Over the years, some of DARPA’s projects grew so large that they were spun off as separate entities, including NASA.

DARPA’s “only charter is radical innovation.”  DARPA scientists have always pushed the limits of what is physically possible.  One of DARPA’s early projects was Arpanet, a telecommunications network designed to keep scientists connected during and after a possible World War III.  After the breakup of the Soviet bloc, the National Science Foundation decided to declassify Arpanet and give away the codes and blueprints for free.  This would eventually become the internet.

DARPA helped create Project 57, which was a top-secret project for guiding ballistic missiles to specific targets.  This technology later became the foundation for the Global Positioning System (GPS).

DARPA has also been a key player in other technologies, including cell phones, night-vision goggles, telecommunications advances, and weather satellites, says Kaku.

Kaku writes that, with a budget over $3 billion, DARPA has recently focused on the brain-machine interface.  Kaku quotes former DARPA official Michael Goldblatt:

Imagine if soldiers could communicate by thought alone… Imagine the threat of biological attack being inconsequential.  And contemplate, for a moment, a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-through.  As impossible as these visions sound or as difficult as you might think the task would be, these visions are the everyday work of the Defense Sciences Office [a branch of DARPA].  (page 74)

Goldblatt, notes Kaku, thinks the long-term legacy of DARPA will be human enhancement.  Goldblatt’s daughter has cerebral palsy and has been confined to a wheelchair all her life.  Goldblatt is highly motivated not only to help millions of people in the future and create a legacy, but also to help his own daughter.

 

TELEKINESIS

Cathy Hutchinson became a quadriplegic after suffering a massive stroke.  But in May 2012, scientists from Brown University placed a tiny chip—called Braingate—on top of her brain, connected by wires to a computer.  (The chip has ninety-six electrodes for picking up brain impulses.)  Her brain could then send signals through the computer to control a mechanical robotic arm.  She reported her great excitement and said she knew she would eventually get robotic legs, too.  This might happen soon, says Kaku, since the field of cyber prosthetics is advancing fast.

Scientists at Northwestern placed a chip with 100 electrodes on the brain of a monkey.  The signals were carefully recorded while the monkey performed various tasks involving its arms.  Each task would involve a specific firing of neurons, which the scientists eventually were able to decipher.

Next, the scientists took the signal sequences from the chip and, instead of sending them to a mechanical arm, sent the signals to the monkey’s own arm.  Eventually the monkey learned to control its own arm via the computer chips.  (The reason 100 electrodes are enough is that they were placed on the output neurons.  So the monkey’s brain had already done the complex processing involving millions of neurons by the time the signals reached the electrodes.)

This device is one of many that Northwestern scientists are testing.  These devices, which continue to be developed, can help people with spinal cord injuries.

Kaku observes that much of the funding for these developments comes from a DARPA project called Revolutionizing Prosthetics, a $150 million effort since 2006.  Retired U.S. Army colonel Geoffrey Ling, a neurologist with several tours of duty in Iraq and Afghanistan, is a central figure behind Revolutionizing Prosthetics.  Dr. Ling was appalled by the suffering caused by roadside bombs.  In the past, many of these brave soldiers would have died.  Today, many more can be saved.  However, more than 1,300 of them have returned from the Middle East having lost limbs.

Dr. Ling, with funding from the Pentagon, instructed his staff to figure out how to replace lost limbs within five years.  Ling:

They thought we were crazy.  But it’s in insanity that things happen.

Kaku continues:

Spurred into action by Dr. Ling’s boundless enthusiasm, his crew has created miracles in the laboratory.  For example, Revolutionary Prosthetics funded scientists at the Johns Hopkins Applied Physics Laboratory, who have created the most advanced mechanical arm on Earth, which can duplicate nearly all the delicate motions of the fingers, hand, and arm in three dimensions.  It is the same size and has the same strength and agility as a real arm.  Although it is made of steel, if you covered it up with flesh-colored plastic, it would be nearly indistinguishable from a real arm.

This arm was attached to Jan Sherman, a quadriplegic who had suffered from a genetic disease that damaged the connection between her brain and her body, leaving her completely paralyzed from the neck down.  At the University of Pittsburgh, electrodes were placed directly on top of her brain, which were then connected to a computer and then to a mechanical arm.  Five months after surgery to attach the arm, she appeared on 60 Minutes.  Before a national audience, she cheerfully used her new arm to wave, greet the host, and shake his hand.  She even gave him a fist bump to show how sophisticated the arm was.

Dr. Ling says, ‘In my dream, we will be able to take this into all sorts of patients, patients with strokes, cerebral palsy, and the elderly.‘  (page 84)

Dr. Miguel Nicolelis of Duke University is pursuing novel applications of the brain-machine interface (BMI).  Dr. Nicolelis has demonstrated that BMI can be done across continents.  He put a chip on a monkey’s brain.  The chip was connected to the internet.  When the monkey was walking on a treadmill in North Carolina, the signals were sent to a robot in Kyoto, Japan, which performed the same walking motions.

Dr. Nicolelis is also working on the problem that today’s prosthetic hands lack a sense of touch.  To overcome this challenge, he is trying to create a brain-machine-brain interface (BMBI): messages would go from the brain to the mechanical arm, and then directly back to the brain, bypassing the spinal cord altogether.

Dr. Nicolelis connected the motor cortex of rhesus monkeys to mechanical arms.  The mechanical arms have sensors and send signals back to the brain via electrodes connected to the somatosensory cortex (which registers the sensation of touch).  Dr. Nicolelis invented a new code to represent different surfaces.  After a month of practice, the brain learns the new code and can thus distinguish among different surfaces.

Dr. Nicolelis told Kaku that something like the holodeck from Star Trek—where you wander in a virtual world, but feel sensations when you bump into virtual objects—will be possible in the future.  Kaku writes:

The holodeck of the future might use a combination of two technologies.  First, people in the holodeck would wear internet contact lenses, so that they would see an entirely new virtual world everywhere they looked.  The scenery in your contact lens would change instantly with the push of a button.  And if you touched any object in this world, signals sent into the brain would simulate the sensation of touch, using BMBI technology.  In this way, objects in the virtual world you see inside your contact lens would feel solid.  (page 87)

Scientists have begun to explore an “Internet of the mind,” or brain-net.  In 2013, scientists went beyond animal studies and demonstrated the first human brain-to-brain communication.

This milestone was achieved at the University of Washington, with one scientist sending a brain signal (move your right arm) to another scientist.  The first scientist wore an EEG helmet and played a video game.  He fired a cannon by imagining moving his right arm, but was careful not to move it physically.

The signal from the EEG helmet was sent over the Internet to another scientist, who was wearing a transcranial magnetic helmet carefully placed over the part of his brain that controlled his right arm.  When the signal reached the second scientist, the helmet would send a magnetic pulse into his brain, which made his right arm move involuntarily, all by itself.  Thus, by remote control, one human brain could control the movement of another.

This breakthrough opens up a number of possibilities, such as exchanging nonverbal messages via the Internet.  You might one day be able to send the experience of dancing the tango, bungee jumping, or skydiving to the people on your e-mail list.  Not just physical activity, but emotions and feelings as well might be sent via brain-to-brain communication.

Nicolelis envisions a day when people all over the world could participate in social networks not via keyboards, but directly through their minds.  Instead of just sending e-mails, people on the brain-net would be able to telepathically exchange thoughts, emotions, and ideas in real time.  Today a phone call conveys only the information of the conversation and the tone of voice, nothing more.  Video conferencing is a bit better, since you can read the body language of the person on the other end.  But a brain-net would be the ultimate in communications, making it possible to share the totality of mental information in a conversation, including emotions, nuances, and reservations.  Minds would be able to share their most intimate thoughts and feelings.  (pages 87-88)

Kaku gives more details of what would be needed to create a brain-net:

Creating a brain-net that can transmit such information would have to be done in stages.  The first step would be inserting nanoprobes into important parts of the brain, such as the left temporal lobe, which governs speech, and the occipital lobe, which governs vision.  Then computers would analyze these signals and decode them.  This information in turn could be sent over the Internet by fiber-optic cables.  

More difficult would be to insert these signals into another person’s brain, where they could be processed by the receiver.  So far, progress in this area has focused only on the hippocampus, but in the future it should be possible to insert messages directly into other parts of the brain corresponding to our sense of hearing, light, touch, etc.  So there is plenty of work to be done as scientists try to map the cortices of the brain involved in these senses.  Once these cortices have been mapped… it should be possible to insert words, thoughts, memories, and experiences into another brain.  (page 89)

Dr. Nicolelis’ next goal is the Walk Again Project, whose team is creating a complete exoskeleton that can be controlled by the mind.  Nicolelis calls it a “wearable robot.”  The aim is to allow the paralyzed to walk just by thinking.  There are several challenges to overcome:

First, a new generation of microchips must be created that can be placed in the brain safely and reliably for years at a time.  Second, wireless sensors must be created so the exoskeleton can roam freely.  The signals from the brain would be received wirelessly by a computer the size of a cell phone that would probably be attached to your belt.  Third, new advances must be made in deciphering and interpreting signals from the brain via computers.  For the monkeys, a few hundred neurons were necessary to control the mechanical arms.  For a human, you need, at minimum, several thousand neurons to control an arm or leg.  And fourth, a power supply must be found that is portable and powerful enough to energize the entire exoskeleton.  (page 92)

 

MEMORIES AND THOUGHTS

One interesting possibility is that long-term memory evolved in humans because it was useful for simulating and predicting future scenarios.

Indeed, brain scans done by scientists at Washington University in St. Louis indicate that areas used to recall memories are the same as those involved in simulating the future.  In particular, the link between the dorsolateral prefrontal cortex and the hippocampus lights up when a person is engaged in planning for the future and remembering the past.  In some sense, the brain is trying to ‘recall the future,’ drawing upon memories of the past in order to determine how something will evolve into the future.  This may also explain the curious fact that people who suffer from amnesia… are often unable to visualize what they will be doing in the future or even the very next day.  (page 113)

Some claim that Alzheimer’s disease may be the disease of the century.  As of Kaku’s writing, there were 5.3 million Americans with Alzheimer’s, and that number is expected to quadruple by 2050.  Five percent of people aged sixty-five to seventy-four have it, but more than 50 percent of those over eighty-five have it, even if they have no obvious risk factors.

One possible way to try to combat Alzheimer’s is to create antibodies or a vaccine that might specifically target misshapen protein molecules associated with the disease.  Another approach might be to create an artificial hippocampus.  Yet another approach is to see if specific genes can be found that improve memory.  Experiments on mice and fruit flies have been underway.

If the genetic fix works, it could be administered by a simple shot in the arm.  If it doesn’t work, another possible approach is to insert the proper proteins into the body.  Instead of a shot, it would be a pill.  But scientists are still trying to understand the process of memory formation.

Eventually, writes Kaku, it will be possible to record the totality of stimulation entering into a brain.  In this scenario, the Internet may become a giant library not only for the details of human lives, but also for the actual consciousness of various individuals.  If you want to see how your favorite hero or historical figure felt as they confronted the major crises of their lives, you’ll be able to do so.  Or you could share the memories and thoughts of a Nobel Prize-winning scientist, perhaps gleaning clues about how great discoveries are made.

 

ENHANCING OUR INTELLIGENCE

What made Einstein Einstein?  It’s very difficult to say, of course.  Partly, it may be that he was the right person at the right time.  Also, it wasn’t just raw intelligence, but perhaps more a powerful imagination and an ability to stick with problems for a very long time.  Kaku:

The point here is that genius is perhaps a combination of being born with certain mental abilities and also the determination and drive to achieve great things.  The essence of Einstein’s genius was probably his extraordinary ability to simulate the future through thought experiments, creating new physical principles via pictures.  As Einstein himself once said, ‘The true sign of intelligence is not knowledge, but imagination.’  And to Einstein, imagination meant shattering the boundaries of the known and entering the domain of the unknown.  (page 133)

The brain remains “plastic” even into adult life.  People can always learn new skills.  Kaku notes that the Canadian psychologist Dr. Donald Hebb made an important discovery about the brain:

the more we exercise certain skills, the more certain pathways in our brains become reinforced, so the task becomes easier.  Unlike a digital computer, which is just as dumb today as it was yesterday, the brain is a learning machine with the ability to rewire its neural pathways every time it learns something.  This is a fundamental difference between the digital computer and the brain.  (page 134)

Scientists also believe that the ability to delay gratification and the ability to focus attention may be more important than IQ for success in life.

Furthermore, traditional IQ tests only measure “convergent” intelligence related to the left brain and not “divergent” intelligence related to the right brain.  Kaku quotes Dr. Ulrich Kraft:

‘The left hemisphere is responsible for convergent thinking and the right hemisphere for divergent thinking.  The left side examines details and processes them logically and analytically but lacks a sense of overriding, abstract connections.  The right side is more imaginative and intuitive and tends to work holistically, integrating pieces of an informational puzzle into a whole.’  (page 138)

Kaku suggests that a better test of intelligence might measure a person’s ability to imagine different scenarios related to a specific future challenge.

Another avenue of intelligence research is genes.  We are 98.5 percent identical genetically to chimpanzees.  But we live twice as long and our mental abilities have exploded in the past six million years.  Scientists have even isolated just a handful of genes that may be responsible for our intelligence.  This is intriguing, to say the least.

In addition to having a larger cerebral cortex, our brains have many folds in them, vastly increasing their surface area.  (The brain of Carl Friedrich Gauss was found to be especially folded and wrinkled.)

Scientists have also focused on the ASPM gene.  It has mutated fifteen times in the last five or six million years.  Kaku:

Because these mutations coincide with periods of rapid growth in intellect, it is tantalizing to speculate that ASPM is among the handful of genes responsible for our increased intelligence.  If this is true, then perhaps we can determine whether these genes are still active today, and whether they will continue to shape human evolution in the future.  (page 154)

Scientists have also learned that nature takes numerous shortcuts in creating the brain.  Many neurons are connected randomly, so a detailed blueprint isn’t needed.  Neurons organize themselves in a baby’s brain in reaction to various specific experiences.  Also, nature uses modules that repeat over and over again.

It is possible that we will be able to boost our intelligence in the future, which will increase the wealth of society (probably significantly).  Kaku:

It may be possible in the coming decades to use a combination of gene therapy, drugs, and magnetic devices to increase our intelligence.  (page 162)

…raising our intelligence may help speed up technological innovation.  Increased intelligence would mean a greater ability to simulate the future, which would be invaluable in making scientific discoveries.  Often, science stagnates in certain areas because of a lack of fresh new ideas to stimulate new avenues of research.  Having an ability to simulate different possible futures would vastly increase the rate of scientific breakthroughs.

These scientific discoveries, in turn, could generate new industries, which would enrich all of society, creating new markets, new jobs, and new opportunities.  History is full of technological breakthroughs creating entirely new industries that benefited not just the few, but all of society (think of the transistor and the laser, which today form the foundation of the world economy).  (page 164)

 

DREAMS

Kaku explains that the brain, as a neural network, may need to dream in order to function well:

The brain, as we have seen, is not a digital computer, but rather a neural network of some sort that constantly rewires itself after learning new tasks.  Scientists who work with neural networks noticed something interesting, though.  Often these systems would become saturated after learning too much, and instead of processing more information they would enter a “dream” state, whereby random memories would sometimes drift and join together as the neural networks tried to digest all the new material.  Dreams, then, might reflect “house cleaning,” in which the brain tries to organize its memories in a more coherent way.  (If this is true, then possibly all neural networks, including all organisms that can learn, might enter a dream state in order to sort out their memories.  So dreams probably serve a purpose.  Some scientists have speculated that this might imply that robots that learn from experience might also eventually dream as well.)

Neurological studies seem to back up this conclusion.  Studies have shown that retaining memories can be improved by getting sufficient sleep between the time of activity and a test.  Neuroimaging shows that the areas of the brain that are activated during sleep are the same as those involved in learning a new task.  Dreaming is perhaps useful in consolidating this new information.  (page 172)

In 1977, Dr. Allan Hobson and Dr. Robert McCarley made history by proposing the “activation synthesis theory” of dreams, seriously challenging Freud’s theory of dreams:

The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert.  As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state.

As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves.  These waves travel up the brain stem into the visual cortex, stimulating it to create dreams.  Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams.  (pages 174-175)

 

ALTERED STATE OF CONSCIOUSNESS

There seem to be certain parts of the brain that are associated with religious experiences and also with spirituality.  Dr. Mario Beauregard of the University of Montreal commented:

If you are an atheist and you live a certain kind of experience, you will relate it to the magnificence of the universe.  If you are a Christian, you will associate it with God.  Who knows.  Perhaps they are the same thing.

Kaku explains how human consciousness involves delicate checks and balances similar to the competing points of view that a good CEO considers:

We have proposed that a key function of human consciousness is to simulate the future, but this is not a trivial task.  The brain accomplishes it by having these feedback loops check and balance one another.  For example, a skillful CEO at a board meeting tries to draw out the disagreement among staff members and to sharpen competing points of view in order to sift through the various arguments and then make a final decision.  In the same way, various regions of the brain make diverging assessments of the future, which are given to the dorsolateral prefrontal cortex, the CEO of the brain.  These competing assessments are then evaluated and weighted until a balanced final decision is made.  (page 205)

The most common mental disorder is depression, afflicting twenty million people in the United States.  One way scientists are trying to cure depression is deep brain stimulation (DBS)—inserting small probes into the brain and causing an electrical shock.  Kaku:

In the past decade, DBS has been used on forty thousand patients for motor-related diseases, such as Parkinson’s and epilepsy, which cause uncontrolled movements of the body.  Between 60 and 100 percent of the patients report significant improvement in controlling their shaking hands.  More than 250 hospitals in the United States now perform DBS treatment.  (page 208)

Dr. Helen Mayberg and colleagues at Washington University School of Medicine have discovered an important clue to depression:

Using brain scans, they identified an area of the brain, called Brodmann area 25 (also called the subcallosal cingulate region), in the cerebral cortex that is consistently hyperactive in depressed individuals for whom all other forms of treatment have been unsuccessful. 

…Dr. Mayberg had the idea of applying DBS directly to Brodmann area 25… her team took twelve patients who were clinically depressed and had shown no improvement after exhaustive use of drugs, psychotherapy, and electroshock therapy.

They found that eight of these chronically depressed individuals showed immediate progress.  Their success was so astonishing, in fact, that other groups raced to duplicate these results and apply DBS to other mental disorders…

Dr. Mayberg says, ‘Depression 1.0 was psychotherapy… Depression 2.0 was the idea that it’s a chemical imbalance.  This is Depression 3.0.  What has captured everyone’s imagination is that, by dissecting a complex behavior disorder into its component systems, you have a new way of thinking about it.’

Although the success of DBS in treating depressed individuals is remarkable, much more research needs to be done…

 

THE ARTIFICIAL MIND AND SILICON CONSCIOUSNESS

Kaku introduces the potential challenge of handling artificial intelligence as it evolves:

Given the fact that computer power has been doubling every two years for the past fifty years under Moore’s law, some say it is only a matter of time before machines eventually acquire self-awareness that rivals human intelligence.  No one knows when this will happen, but humanity should be prepared for the moment when machine consciousness leaves the laboratory and enters the real world.  How we deal with robot consciousness could decide the future of the human race.  (page 216)
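
As a rough back-of-the-envelope check on the scale implied by that doubling (this arithmetic is an illustration, not a figure from the book), fifty years of doubling every two years means about twenty-five doublings:

years = 50
doubling_period = 2                    # doubling every two years, as cited above
doublings = years // doubling_period   # 25 doublings
growth_factor = 2 ** doublings         # 2**25 = 33,554,432
print(doublings, growth_factor)

On that assumption, computer power today is tens of millions of times what it was fifty years earlier, which is the scale behind the expectation Kaku reports.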

Kaku observes that AI has gone through three cycles of boom and bust.  In the 1950s, machines were built that could play checkers and solve algebra problems.  Robot arms could recognize and pick up blocks.  In 1965, Dr. Herbert Simon, one of the founders of AI, made a prediction:

Machines will be capable, within 20 years, of doing any work a man can do.

In 1967, another founder of AI, Dr. Marvin Minsky, remarked:

…within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved. 

But in the 1970s, not much progress in AI had been made.  In 1974, both the U.S. and British governments significantly cut back their funding for AI.  This was the beginning of the first AI winter.

But as computer power steadily increased in the 1980s, a new gold rush occurred in AI, fueled mainly by Pentagon planners hoping to put robot soldiers on the battlefield.  Funding for AI hit a billion dollars by 1985, with hundreds of millions of dollars spent on projects like the Smart Truck, which was supposed to be an intelligent, autonomous truck that could enter enemy lines, do reconnaissance by itself, perform missions (such as rescuing prisoners), and then return to friendly territory.  Unfortunately, the only thing that the Smart Truck did was get lost.  The visible failures of these costly projects created yet another AI winter in the 1990s.  (page 217)

Kaku continues:

But now, with the relentless march of computer power, a new AI renaissance has begun, and slow but substantial progress has been made.  In 1997, IBM’s Deep Blue computer beat the world chess champion, Garry Kasparov.  In 2005, a robot car from Stanford won the DARPA Grand Challenge for a driverless car.  Milestones continue to be reached.

This question remains:  Is the third try a charm?

Scientists now realize that they vastly underestimated the problem, because most human thought is actually subconscious.  The conscious part of our thoughts, in fact, represents only the tiniest portion of our computations.

Dr. Steve Pinker says, ‘I would pay a lot for a robot that would put away the dishes or run simple errands, but I can’t, because all of the little problems that you’d need to solve to build a robot to do that, like recognizing objects, reasoning about the world, and controlling hands and feet, are unsolved engineering problems.’  (pages 217-218)

Kaku asked Dr. Minsky when he thought machines would equal and then surpass human intelligence.  Minsky replied that he’s confident it will happen, but that he doesn’t make predictions about specific dates any more.

If you remove a single transistor from a Pentium chip, the computer will immediately crash, writes Kaku.  But the human brain can perform quite well even with half of it missing:

This is because the brain is not a digital computer at all, but a highly sophisticated neural network of some sort.  Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task.  The brain has no programming, no operating system, no Windows, no central processor.  Instead, its neural networks are massively parallel, with one hundred billion neurons firing at the same time in order to accomplish a single goal: to learn.

In light of this, AI researchers are beginning to reexamine the ‘top-down approach’ they have followed for the past fifty years (e.g., putting all the rules of common sense on a CD).  Now AI researchers are giving the ‘bottom-up approach’ a second look.  This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones.  Neural networks must learn the hard way, by bumping into things and making mistakes.  (page 220)

Dr. Rodney Brooks, former director of the MIT Artificial Intelligence Laboratory, introduced a totally new approach to AI.  Why not build small, insectlike robots that learn how to walk by trial and error, just as nature learns?  Brooks told Kaku that he used to marvel at the mosquito, with a microscopic brain of a few neurons, which can, nevertheless, maneuver in space better than any robot airplane.  Brooks built a series of tiny robots called ‘insectoids’ or ‘bugbots,’ which learn by bumping into things.  Kaku comments:

At first, it may seem that this requires a lot of programming.  The irony, however, is that neural networks require no programming at all.  The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision.  So programming is nothing; changing the network is everything.  (page 221)
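
Here is a minimal toy sketch of that idea (an illustration, not code from the book): an agent with two hypothetical "pathways" learns which action is rewarded purely by strengthening the pathway that led to a right decision, with no explicit program or rules.

import random

weights = {"left": 1.0, "right": 1.0}   # initial pathway strengths
REWARDED_ACTION = "right"               # hypothetical task: "right" is the correct choice

def choose(weights):
    """Pick an action with probability proportional to pathway strength."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for action, strength in weights.items():
        cumulative += strength
        if r <= cumulative:
            return action
    return action  # fallback for floating-point edge cases

for _ in range(200):
    action = choose(weights)
    if action == REWARDED_ACTION:       # a "right decision" strengthens that pathway
        weights[action] += 0.1

print(weights)  # the rewarded pathway ends up far stronger than the other

The point mirrors the quote: nothing about the task is programmed in; only the relative strengths of the pathways change.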

The Mars Curiosity rover is one result of this bottom-up approach.

Scientists have realized that emotions are central to human cognition.  Humans usually need some emotional input, in addition to logic and reason, in order to make good decisions.  Robots are now being programmed to recognize various human emotions and also to exhibit emotions themselves.  Robots also need a sense of danger and some feeling of pain in order to avoid injuring themselves.  Eventually, as robots become ever more conscious, there will be many ethical questions to answer.

Biologists used to debate the question, “What is life?”  But, writes Kaku, the physicist and Nobel Laureate Francis Crick observed that the question is not well-defined now that we are advancing in our understanding of DNA.  There are many layers and complexities to the question, “What is life?”  Similarly, there are likely to be many layers and complexities to the question of what constitutes “emotion” or “consciousness.”

Moreover, as Rodney Brooks argues, we humans are machines.  Eventually the robot machines we are building will be just as alive as we are.  Kaku summarizes a conversation he had with Brooks:

This evolution in human perspective started with Nicolaus Copernicus when he realized that the Earth is not the center of the universe, but rather goes around the sun.  It continued with Darwin, who showed that we were similar to the animals in our evolution.  And it will continue into the future… when we realize that we are machines, except that we are made of wetware and not hardware.  (page 248)

Kaku then quotes Brooks directly:

We don’t like to give up our specialness, so you know, having the idea that robots could really have emotions, or that robots could be living creatures—I think is going to be hard for us to accept.  But we’re going to come to accept it over the next fifty years.

Brooks also thinks we will successfully create robots that are safe for humans:

The robots are coming, but we don’t have to worry too much about that.  It’s going to be a lot of fun.

Furthermore, Brooks argues that we are likely to merge with robots.  After all, we’ve already done this to an extent.  Over twenty thousand people have cochlear implants, giving them the ability to hear.

Similarly, at the University of Southern California and elsewhere, it is possible to take a patient who is blind and implant an artificial retina.  One method places a mini video camera in eyeglasses, which converts an image into digital signals.  These are sent wirelessly to a chip placed in the person’s retina.  The chip activates the retina’s nerves, which then send messages down the optic nerve to the occipital lobe of the brain.  In this way, a person who is totally blind can see a rough image of familiar objects.  Another design has a light-sensitive chip placed on the retina itself, which then sends signals directly to the optic nerve.  This design does not need an external camera.  (page 249)

This means, says Kaku, that eventually we’ll be able to enhance our ordinary senses and abilities.  We’ll merge with our robot creations.

 

REVERSE ENGINEERING THE BRAIN

Kaku highlights three approaches to the brain:

Because the brain is so complex, there are at least three distinct ways in which it can be taken apart, neuron by neuron.  The first is to simulate the brain electronically with supercomputers, which is the approach being taken by the Europeans.  The second is to map out the neural pathways of living brains, as in BRAIN [Brain Research Through Advancing Innovative Neurotechnologies Initiative].  (This task, in turn, can be further subdivided, depending on how these neurons are analyzed – either anatomically, neuron by neuron, or by function and activity.)  And third, one can decipher the genes that control the development of the brain, which is an approach pioneered by billionaire Paul Allen of Microsoft.  (page 253)

Dr. Henry Markram is a central figure in the Human Brain Project.  Kaku quotes Dr. Markram:

To build this—the supercomputers, the software, the research—we need around one billion dollars.  This is not expensive when one considers that the global burden of brain disease will exceed twenty percent of the world gross domestic product very soon.

Dr. Markram also said:

It’s essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.  

How does the human genome go from twenty-three thousand genes to one hundred billion neurons?

The answer, Dr. Markram believes, is that nature uses shortcuts.  The key to his approach is that certain modules of neurons are repeated over and over again once Mother Nature finds a good template.  If you look at microscopic slices of the brain, at first you see nothing but a random tangle of neurons.  But upon closer examination, patterns of modules that are repeated over and over appear.  

(Modules, in fact, are one reason why it is possible to assemble large skyscrapers so rapidly.  Once a single module is designed, it is possible to repeat it endlessly on the assembly line.  Then you can rapidly stack them on top of one another to create the skyscraper.  Once the paperwork is all signed, an apartment building can be assembled using modules in a few months.)

The key to Dr. Markram’s Blue Brain project is the “neocortical column,” a module that is repeated over and over in the brain.  In humans, each column is about two millimeters tall, with a diameter of half a millimeter, and contains sixty thousand neurons.  (As a point of comparison, rat neural modules contain about ten thousand neurons each.)  It took ten years, from 1995 to 2005, for Dr. Markram to map the neurons in such a column and to figure out how it worked.  Once that was deciphered, he then went to IBM to create massive iterations of these columns.  (page 257)
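
Taking the book’s round numbers at face value, a bit of back-of-the-envelope arithmetic (mine, not Kaku’s, and ignoring the fact that many neurons sit outside the neocortex) shows why the module idea matters so much: mapping one column stands in for well over a million repeats.

```python
# Back-of-the-envelope arithmetic using the round figures quoted above.
neurons_in_brain = 100_000_000_000   # one hundred billion
neurons_per_column = 60_000          # one human neocortical column

columns = neurons_in_brain / neurons_per_column
print(f"~{columns:,.0f} repeated columns")   # roughly 1.7 million modules
```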

Kaku quotes Dr. Markram again:

…I think, quite honestly, that if the planet understood how the brain functions, we would resolve conflicts everywhere.  Because people would understand how trivial and how deterministic and how controlled conflicts and reactions and misunderstandings are.

The slice-and-dice approach:

The anatomical approach is to take apart the cells of an animal brain, neuron by neuron, using the “slice-and-dice” method.  In this way, the full complexity of the environment, the body, and memories are already encoded in the model.  Instead of approximating a human brain by assembling a huge number of transistors, these scientists want to identify each neuron of the brain.  After that, perhaps each neuron can be simulated by a collection of transistors so that you’d have an exact replica of the human brain, complete with memory, personality, and connection to the senses.  Once someone’s brain is fully reverse engineered in this way, you should be able to have an informative conversation with that person, complete with memories and a personality.  (page 259)

There is a parallel project called the Human Connectome Project.

Most likely, this effort will be folded into the BRAIN project, which will vastly accelerate this work.  The goal is to produce a neuronal map of the human brain’s pathways that will elucidate brain disorders such as autism and schizophrenia.  (pages 260-261)

Kaku notes that one day automated microscopes will continuously photograph the brain slices, while AI machines continuously analyze the images.

The third approach:

Finally, there is a third approach to map the brain.  Instead of analyzing the brain by using computer simulations or by identifying all the neural pathways, yet another approach was taken with a generous grant of $100 million from Microsoft billionaire Paul Allen.  The goal was to construct a map or atlas of the mouse brain, with the emphasis on identifying the genes responsible for creating the brain.

…A follow-up project, the Allen Human Brain Atlas, was announced… with the hope of creating an anatomically and genetically complete 3-D map of the human brain.  In 2011, the Allen Institute announced that it had mapped the biochemistry of two human brains, finding one thousand anatomical sites with one hundred million data points detailing how genes are expressed in the underlying biochemistry.  The data confirmed that 82 percent of our genes are expressed in the brain.  (pages 261-262)

Kaku says the Human Genome Project was very successful in sequencing all the genes in the human genome.  But it’s just the first step in a long journey to understand how these genes work.  Similarly, once scientists have reverse engineered the brain, that will likely be only the first step in understanding how the brain works.

Once the brain is reverse-engineered, this will help scientists understand and cure various diseases.  Kaku observes that, with human DNA, a single misspelling out of three billion base pairs can cause uncontrolled flailing of your limbs and convulsions, as in Huntington’s disease.  Similarly, perhaps just a few disrupted connections in the brain can cause certain illnesses.

Successfully reverse engineering the brain also will help with AI research.  For instance, writes Kaku, humans can recognize a familiar face from different angles in 0.1 seconds.  But a computer has trouble with this.  There’s also the question of how long-term memories are stored.

Finally, if human consciousness can be transferred to a computer, does that mean that immortality is possible?

 

THE FUTURE

Kaku talked with Dr. Ray Kurzweil, who told him it’s important for an inventor to anticipate changes.  Kurzweil has made a number of predictions, at least some of which have been roughly accurate.  Kurzweil predicts that the “singularity” will occur around the year 2045.  By then, machines will not only have surpassed humans in intelligence; they will also have created next-generation robots even smarter than themselves.

Kurzweil holds that this process of self-improvement can be repeated indefinitely, leading to an explosion—thus the term “singularity”—of ever-smarter and ever more capable robots.  Moreover, humans will have merged with their robot creations and will, at some point, become immortal.

Robots of ever-increasing intelligence and ability will require more power.  Of course, there will be breakthroughs in energy technology, likely including nuclear fusion and perhaps even antimatter and/or black holes.  So the cost to produce prodigious amounts of energy will keep coming down.  At the same time, because Moore’s law cannot continue forever, gains in computing efficiency will eventually slow, so super robots will need ever-increasing amounts of energy.  At some point, this will probably require traveling—or sending nanobot probes—to numerous other stars or to other areas where the energy of antimatter and/or of black holes can be harnessed.

Kaku notes that most people in AI agree that a “singularity” will occur at some point.  But it’s extremely difficult to predict the exact timing.  It could happen sooner than Kurzweil predicts or it could end up taking much longer.

Kurzweil wants to bring his father back to life.  Eventually something like this will be possible.  Kaku:

…I once asked Dr. Robert Lanza of the company Advanced Cell Technology how he was able to bring a long-dead creature “back to life,” making history in the process.  He told me that the San Diego Zoo asked him to create a clone of a banteng, an oxlike creature that had died out about twenty-five years earlier.  The hard part was extracting a usable cell for the purpose of cloning.  However, he was successful, and then he FedExed the cell to a farm, where it was implanted into a female cow, which then gave birth to this animal.  Although no primate has ever been cloned, let alone a human, Lanza feels it’s a technical problem, and that it’s only a matter of time before someone clones a human.  (page 273)

The hard part of cloning a human would be bringing back their memories and personality, says Kaku.  One possibility would be creating a large data file containing all known information about a person’s habits and life.  Such a file could be remarkably accurate.  Even for people dead today, scores of questions could be put to friends, relatives, and associates.  This could be turned into hundreds of numbers, each representing a different trait that could be ranked from 0 to 10, writes Kaku.
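
As a purely hypothetical illustration (the trait names and scores below are invented, not from the book), such a data file could be as simple as a long list of scored attributes, which also makes it easy to compare one reconstructed profile with another:

```python
# Hypothetical "personality file": each trait is scored 0-10 from interviews
# with friends, relatives, and associates.  All names and values are invented.
personality_profile = {
    "sense_of_humor": 8,
    "patience": 4,
    "risk_tolerance": 6,
    "fondness_for_opera": 9,
    # ...hundreds more traits in a real file
}

def similarity(profile_a, profile_b):
    """Crude closeness score (0 to 1) over the traits two profiles share."""
    shared = set(profile_a) & set(profile_b)
    if not shared:
        return 0.0
    total_diff = sum(abs(profile_a[t] - profile_b[t]) for t in shared)
    return 1.0 - total_diff / (10.0 * len(shared))
```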

When technology has advanced enough, it will become possible—perhaps via the Connectome Project—to recreate a person’s brain, neuron for neuron.  If it becomes possible for you to have your connectome completed, then your doctor—or robodoc—would have all your neural connections on a hard drive.  Then, says Kaku, at some point, you could be brought back to life, using either a clone or a network of digital transistors (inside an exoskeleton or surrogate of some sort).

Dr. Hans Moravec, former director of the Artificial Intelligence Laboratory at Carnegie Mellon University, has pioneered an intriguing idea:  transferring your mind into an immortal robotic body while you’re still alive.  Kaku explains what Moravec told him:

First, you lie on a stretcher, next to a robot lacking a brain.  Next, a robotic surgeon extracts a few neurons from your brain, and then duplicates these neurons with some transistors located in the robot.  Wires connect your brain to the transistors in the robot’s empty head.  The neurons are then thrown away and replaced by the transistor circuit.  Since your brain remains connected to these transistors via wires, it functions normally and you are fully conscious during this process.  Then the super surgeon removes more and more neurons from your brain, each time duplicating these neurons with transistors in the robot.  Midway through the operation, half your brain is empty; the other half is connected by wires to a large collection of transistors inside the robot’s head.  Eventually all the neurons in your brain have been removed, leaving a robot brain that is an exact duplicate of your original brain, neuron for neuron.  (page 280)

When you wake up, you are likely to have a few superhuman powers, perhaps including a form of immortality.  This technology is likely far in the future, of course.

Kaku then observes that there is another possible path to immortality that does not involve reverse engineering the brain.  Instead, super smart nanobots could periodically repair your cells.  Kaku:

…Basically, aging is the buildup of errors, at the genetic and cellular level.  As cells get older, errors begin to build up in their DNA and cellular debris also starts to accumulate, which makes the cells sluggish.  As cells begin slowly to malfunction, skin begins to sag, bones become frail, hair falls out, and our immune system deteriorates.  Eventually, we die.

But cells also have error-correcting mechanisms.  Over time, however, even these error-correcting mechanisms begin to fail, and aging accelerates.  The goal, therefore, is to strengthen natural cell-repair mechanisms, which can be done via gene therapy and the creation of new enzymes.  But there is also another way: using “nanobot” assemblers.

One of the linchpins of this futuristic technology is something called the “nanobot,” or an atomic machine, which patrols the bloodstream, zapping cancer cells, repairing the damage from the aging process, and keeping us forever young and healthy.  Nature has already created some nanobots in the form of immune cells that patrol the body in the blood.  But these immune cells attack viruses and foreign bodies, not the aging process.

Immortality is within reach if these nanobots can reverse the ravages of the aging process at the molecular and cellular level.  In this vision, nanobots are like immune cells, tiny police patrolling your bloodstream.  They attack any cancer cells, neutralize viruses, and clean out the debris and mutations.  Then the possibility of immortality would be within reach using our own bodies, not some robot or clone.  (pages 281-282)

Kaku writes that his personal philosophy is simple: If something is possible based on the laws of physics, then it becomes an engineering and economics problem to build it.  A nanobot is an atomic machine with arms and clippers that grabs molecules, cuts them at specific points, and then splices them back together.  Such a nanobot would be able to create almost any known molecule.  It may also be able to self-reproduce.

The late Richard Smalley, a Nobel Laureate in chemistry, argued that quantum forces would prevent nanobots from being able to function.  Eric Drexler, a founder of nanotechnology, pointed out that ribosomes in our own bodies cut and splice DNA molecules at specific points, enabling the creation of new DNA strands.  Eventually Drexler admitted that quantum forces do get in the way sometimes, while Smalley acknowledged that if ribosomes can cut and splice molecules, perhaps there are other ways, too.

Ray Kurzweil is convinced that nanobots will shape society itself.  Kaku quotes Kurzweil:

…I see it, ultimately, as an awakening of the whole universe.  I think the whole universe right now is basically made up of dumb matter and energy and I think it will wake up.  But if it becomes transformed into this sublimely intelligent matter and energy, I hope to be a part of that.

 

THE MIND AS PURE ENERGY

Kaku writes that it’s well within the laws of physics for the mind to be in the form of pure energy, able to explore the cosmos.  Isaac Asimov said his favorite science-fiction short story was “The Last Question.”  In this story, humans have placed their physical bodies in pods, while their minds roam as pure energy.  But they cannot keep the universe itself from dying in the Big Freeze.  So they create a supercomputer to figure out if the Big Freeze can be avoided.  The supercomputer responds that there is not enough data.  Eons later, when stars are darkening, the supercomputer finds a solution: It takes all the dead stars and combines them, producing an explosion.  The supercomputer says, “Let there be light!”

And there was light.  Humanity, with its supercomputer, had become capable of creating a new universe.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Physics of the Future

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 13, 2017

Science and technology are moving forward faster than ever before:

…this is just the beginning.  Science is not static.  Science is exploding exponentially all around us.  (page 12)

Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future.  His book, Physics of the Future (Anchor Books, 2012), is a result.

Kaku explains why his predictions may carry more weight than those of other futurists:

  • His book is based on interviews with more than 300 top scientists.
  • Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
  • Prototypes of all the technologies mentioned in the book already exist.
  • As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.

The ancients had little understanding of the forces of nature, so they invented the gods of mythology.  Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.

We are on the verge of becoming a planetary, or Type I, civilization.  This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.

But there are still some things, like face-to-face meetings, that appear not to have changed much.  Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years.  People still like to see tourist attractions in person.  People still like live performances.  Many people still prefer taking courses in person rather than online.  (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)

Here are the chapters from Kaku’s book that I cover:

  • Future of the Computer
  • Future of Artificial Intelligence
  • Future of Medicine
  • Nanotechnology
  • Future of Energy
  • Future of Space Travel
  • Future of Humanity

 

FUTURE OF THE COMPUTER

Kaku quotes Helen Keller:

No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.

According to Moore’s law, computer power doubles every eighteen months.  Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly.  Also, exponential growth is often not noticeable for the first few decades.  But eventually things can change dramatically.
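
Kaku’s point about exponential growth is easy to check with a few lines of arithmetic (my own, using only the eighteen-month doubling period quoted above): the factor is modest for a few years and then becomes enormous.

```python
# Compounding "a doubling every eighteen months" over a few decades.
DOUBLING_PERIOD_YEARS = 1.5

for years in (3, 10, 20, 30):
    growth = 2 ** (years / DOUBLING_PERIOD_YEARS)
    print(f"After {years:2d} years: about {growth:,.0f}x the computing power")

# After 30 years the factor is roughly a million, which is easy to state
# but hard to intuit; that is the point about exponential growth.
```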

Even the near future may be quite different, writes Kaku:

…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control.  They will, to a degree, recognize the human voice and face and converse in a formal language.  They will be able to create entire virtual worlds that we can only dream of today.  Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper.  Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders.  (pages 25-26)

In order to discuss the future of science and technology, Kaku has divided each chapter into three parts:  the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).

In the near future, we can surf the internet via special glasses or contact lenses.  We can navigate with a handheld device or just by moving our hands.  We can connect to our office via the lens.  It’s likely that when we encounter a person, we will see their biography on our lens.

Also, we will be able to travel by driverless cars.  This will allow us to use commute time to access the internet via our lenses or to do other work.  Kaku notes that the term car accident may disappear from the language once driverless cars become advanced and ubiquitous enough.  Instead of nearly 40,000 people dying in car accidents in the United States each year, there may be zero deaths from car accidents.  Moreover, most traffic jams will be avoided when driverless cars can work together to keep traffic flowing freely.

At home, you will have a room with screens on every wall.  If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.

You won’t need to carry a computer with you.  Computers will be embedded nearly everywhere.  You’ll have constant access to computers and the internet via your glasses or contact lenses.

As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person.  This includes the moon, Mars, and other currently exotic locations.

Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland.  Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill.  Kaku found that he could run, hide, sprint, or lie down.  Everything he saw was very realistic.  In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.

Your doctor – likely a human face appearing on your wall – will have all your genetic information.  Also, you’ll be able to pass a tiny probe over your body and diagnose any illness.  (MRI machines will be as small as a phone.)  As well, tiny chips or sensors will be embedded throughout your environment.  Most forms of cancer will be identified and destroyed before a tumor ever forms.  Kaku says the word tumor will disappear from the human language.

Furthermore, we’ll probably be able to slow down and even reverse the aging process.  We’ll be able to regrow organs based on computerized access to our genes.  We’ll likely be able to reengineer our genes.

In the medium term (2030 to 2070):

  • Moore’s law may reach an end.  Computing power will still continue to grow exponentially, however, just not as fast as before.
  • When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail.  You’ll be able to download informative lectures about anything you see.  In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
  • If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers.  You’ll be able to see through hills and other obstacles.
  • If you’re a surgeon, you’ll see in great detail everything inside the body.  You’ll have access to all medical records, etc.
  • Universal translators will allow any two people to converse.
  • True 3-D images will surround us when we watch a movie.  3-D holograms will become a reality.

In the far future (2070 to 2100):

We will be able to control computers directly with our minds.

John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain.  Through trial and error, the paralyzed person learns to move the cursor on a computer screen.  Eventually they can read and write e-mails, and play computer games.  Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.

Similarly, paralyzed people will be able to control mechanical arms and legs from their brains.  Experiments with monkeys have already achieved this.

Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain.  MRI machines themselves will go from being several tons to being smaller than phones and as thin as a dime.

Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy.  In this way, we’ll be able to control objects just by thinking.  Astronauts on earth will be able to control superhuman robotic bodies on the moon.

 

FUTURE OF ARTIFICIAL INTELLIGENCE

AI pioneer Herbert Simon, in 1965, said:

Machines will be capable, in twenty years, of doing any work a man can do.

Unfortunately not much progress was made.  In 1974, the first AI winter began as the U.S. and British governments cut off funding.

Progress was made again in the 1980s.  But because it was overhyped, another backlash occurred and a second AI winter began.  Many people left the field as funding disappeared.

The human brain is a type of neural network.  Neural networks follow Hebb’s rule:  every time a correct decision is made, those neural pathways are reinforced.  Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.

Furthermore, the neural network of a human brain is a massive parallel processor, which makes it different from most computers.  Thus, even though digital computers send signals at the speed of light, whereas neuron signals only travel about 200 miles per hour, the human brain is still faster (on many tasks) due to its massive parallel processing.

Finally, neurons are not limited to firing or not firing:  unlike the on/off transistors of a digital computer, they can also transmit continuous signals (anywhere in between 0 and 1), not just discrete signals (only 0 or 1).
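
A minimal sketch of that distinction, using a sigmoid purely as a stand-in for a graded response (this is an illustration, not a model of real neurons):

```python
import math

def digital_gate(x, threshold=0.0):
    """Discrete: the output is only ever 0 or 1."""
    return 1 if x > threshold else 0

def neuron_like(x):
    """Continuous: the output can sit anywhere between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))   # sigmoid, chosen only for illustration

for signal in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(signal, digital_gate(signal), round(neuron_like(signal), 2))
```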

Interestingly, robots are superfast at the kinds of calculations that humans find mentally taxing.  But robots are still not good at visual pattern recognition, movement, and common sense.  Robots can see far more detail than humans, but robots have trouble making sense of what they see.  Also, robots don’t understand many things that we humans know by common sense.

There have been massive projects to try to give robots common sense by brute force – by programming in thousands of common sense things.  But so far, these projects haven’t worked.

There are two ways to give a robot the ability to learn:  top-down and bottom-up.  An example of the top-down approach is STAIR (Stanford artificial intelligence robot).  Everything is programmed into STAIR from the beginning.  For STAIR to understand an image, it must compare the image to all the images already programmed into it.

The LAGR (learning applied to ground robots) uses the bottom-up approach.  It learns everything from scratch, by bumping into things.  LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
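
Here is a toy version of what “slowly creates a mental map and refines it with each pass” can mean.  It is not LAGR’s actual software; the grid, the update rule, and the learning rate are all invented for illustration.

```python
# Toy bottom-up mapping: the robot starts with no map (0.5 = "no idea yet")
# and nudges each cell's estimate after every pass.
GRID = 5
blocked_prob = [[0.5] * GRID for _ in range(GRID)]

def update_cell(row, col, bumped, rate=0.3):
    """Move the estimate toward 'blocked' after a bump, toward 'open' otherwise."""
    target = 1.0 if bumped else 0.0
    blocked_prob[row][col] += rate * (target - blocked_prob[row][col])

# One simulated pass: the robot bumps into obstacles along column 2.
for row in range(GRID):
    for col in range(GRID):
        update_cell(row, col, bumped=(col == 2))

for row in blocked_prob:
    print([round(p, 2) for p in row])   # column 2 now looks more "blocked"
```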

Robots will become ever more helpful in medicine:

For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia.  Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar.  But the da Vinci robotic system can vastly decrease all these.  The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery.  Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body.  There are 800 hospitals in Europe and North and South America that use this system;  48,000 operations were performed in 2006 alone using this robot.  Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.

In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today.  In fact, in the future, only rarely will the surgeon slice the skin at all.  Noninvasive surgery will become the norm.

Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread.  Micromachines smaller than the period at the end of this sentence will do much of the mechanical work.  (pages 93-94)

But to make robots intelligent, scientists must learn more about how the human brain works.

The human brain has roughly three levels.  The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc.  At the next level, there is the monkey brain, or the limbic system, located at the center of our brain.  Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.

The third level of the human brain is the front and outer part – the cerebral cortex.  This level defines humanity and is responsible for the ability to think logically and rationally.

Scientists still have a way to go in understanding in sufficient detail how the human brain works.

By midcentury, scientists will be able to reverse engineer the brain.  In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer.  Kaku quotes Fred Hapgood from MIT:

Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.

By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku.  However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.

For example, says Kaku, the Human Genome Project is like a dictionary with no definitions.  We can spell out each gene in the human body.  But we still don’t know what each gene does exactly.  Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans.  But scientists still can’t fully translate this map into the worm’s behavior.

Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.

When will machines become conscious?  Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future.  If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they do become conscious.  On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.

When will robots exceed humans?  Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.

What if superintelligent robots can make even smarter copies of themselves?  They might thereby gain the ability to evolve exponentially.  Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.

The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially.  The inventor Ray Kurzweil has become a spokesman for the singularity.  But he thinks humans will merge with this digital superintelligence.  Kaku quotes Kurzweil:

It’s not going to be an invasion of intelligent machines coming over the horizon.  We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.

Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us.  The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).

One problem is that the military is the largest funder of AI research.  On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).

Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.

One option, proposed by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is an army of “bugbots” with minimal programming that would learn from experience.  Such an army might turn into a practical way to explore the solar system and beyond.  One by-product of Brooks’ idea is the Mars Rover.

Some researchers, including Brooks and Marvin Minsky, have lamented that AI scientists have often followed the current dominant AI paradigm too closely.  AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.

Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics.  Thus, they have sought a single, unifying equation underlying all intelligence.  But, says Minsky, there is no such thing:

Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness.  Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task.  He calls this ‘the society of minds’:  that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years.  (page 123)

Brooks predicts that, by 2100, there will be very intelligent robots.  But we will be part robot and part connected with robots.

He sees this progressing in stages.  Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions.  For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf.  These artificial cochlea work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…  

Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain.  One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons.  Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision.  These groups, for the first time in history, have been able to restore a degree of sight to the blind…  (pages 124-125)

Scientists have also successfully created a robotic hand.  One patient, Robin Ekenstam, had his right hand amputated.  Scientists have given him a robotic hand with four motors and forty sensors.  The doctors connected Ekenstam’s nerves to the chips in the artificial hand.  As a result, Ekenstam is able to use the artificial hand as if it were his own hand.  He feels sensations in the artificial fingers when he picks stuff up.  In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.

Furthermore, the brain is extremely plastic because it is a neural network.  So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.

And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities.  Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats.  Similarly, genetic engineering will become possible.  As Brooks commented:

We will no longer find ourselves confined by Darwinian evolution.

Another way people will merge with robots is with surrogates and avatars.  For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.

Robot pioneer Hans Moravec has described one way this could happen:

…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot.  The operation starts when we lie beside a robot without a brain.  A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull.  As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot.  Not only do we have a robotic body, we have also the benefits of a robot:  immortality in superhuman bodies that are perfect in appearance.  (pages 130-131)

 

FUTURE OF MEDICINE

Kaku quotes Nobel Laureate James Watson:

No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?

Nobel Laureate David Baltimore:

I don’t really think our bodies are going to have any secrets left within this century.  And so, anything that we can manage to think about will probably have a reality.

Kaku mentions biologist Robert Lanza:

Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit.  In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before.  Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah.  There, the fertilized cell was implanted into a female cow.  Ten months later he got the news that his latest creation had just been born.  On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out.  Another day, he could be working on cloning human embryo cells.  He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells.  (page 138)

Austrian physicist and philosopher Erwin Schrodinger, one of the founders of quantum theory, wrote an influential book, What is Life?  He speculated that all life was based on a code of some sort, and that this was encoded on a molecule.

Physicist Francis Crick, inspired by Schrodinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule.  In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix.  When unraveled, a single strand of DNA stretches about 6 feet long.  On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code.  By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life.  (page 140)
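
A rough storage calculation (mine, not the book’s) shows why a whole genome is later described as fitting on a CD-ROM: each of the four letters needs only two bits of information.

```python
# Rough size of a raw human genome: 3 billion bases at 2 bits per base.
bases = 3_000_000_000
bits_per_base = 2   # A, T, C, G can be encoded in two bits

megabytes = bases * bits_per_base / 8 / 1_000_000
print(f"~{megabytes:,.0f} MB")   # about 750 MB, on the order of one CD-ROM
```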

Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form.  David Baltimore:

Biology is today an information science.

Kaku writes:

The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule.  Atom for atom, we know how to build the molecules of life from scratch.  And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.

Welcome to bioinformatics:

…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms.  For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA.  In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes.  (page 143)
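
A heavily simplified sketch of the kind of comparison described above: given sequences from patients and from healthy controls, scan position by position for the spot where the two groups most often disagree.  The five-letter sequences are invented toys; real genome scans are statistically far more careful.

```python
# Find the position where patients' DNA most often differs from controls'.
patients = ["ACGTT", "ACCTT", "ACCTA"]
controls = ["ACGTA", "ACGTA", "ACGTT"]

def mismatch_rate(group_a, group_b, pos):
    """Fraction of patient/control pairs that disagree at this position."""
    pairs = [(a[pos], b[pos]) for a in group_a for b in group_b]
    return sum(1 for x, y in pairs if x != y) / len(pairs)

length = len(patients[0])
scores = [mismatch_rate(patients, controls, i) for i in range(length)]
suspect = scores.index(max(scores))
print(f"Most suspicious position: {suspect} (mismatch rate {scores[suspect]:.2f})")
```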

You’ll talk to your doctor – likely a software program – on the wall screen.  Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form.  If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.

If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed.  (There are over 91,000 people in the United States waiting for an organ transplant.)

…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells.  The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells.  (page 144)

Eventually cloning will be possible for humans.

The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep.  By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original.  (page 150)

Successes in animal studies will be translated to human studies.  First, diseases caused by a single mutated gene will be cured.  Then diseases caused by multiple mutated genes will be cured.

At some point, there will be “designer children.”  Kaku quotes Harvard biologist E. O. Wilson:

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become. 

The “smart mouse” gene was isolated in 1999.  Mice that have it are better able to navigate mazes and remember things.  Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn.  This supports Hebb’s rule:  learning occurs when certain neural pathways are reinforced.

It will take decades to iron out side effects and unwanted consequences of genetic engineering.  For instance, scientists now believe that there is a healthy balance between forgetting and remembering.  It’s important to remember key lessons and specific skills.  But it’s also important not to remember too much.  People need a certain optimism in order to make progress and evolve.

Scientists now know what aging is:  Aging is the accumulation of errors at the genetic and cellular level.  These errors have various causes.  For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku.  Errors can also accumulate as ‘junk’ molecular debris.

The buildup of genetic errors is a by-product of the second law of thermodynamics:  entropy always increases.  However, there’s an important loophole, notes Kaku.  Entropy can be reduced in one place as long as it is increased at least as much somewhere else.  This means that aging is reversible.  Kaku quotes Richard Feynman:

There is nothing in biology yet found that indicates the inevitability of death.  This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.
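
Stated in symbols (my notation, not the book’s), the loophole is just the second law applied to the body together with its surroundings:

```latex
\Delta S_{\text{total}} \;=\; \Delta S_{\text{body}} + \Delta S_{\text{surroundings}} \;\ge\; 0
```

Nothing here forbids the body’s entropy from decreasing, that is, its accumulated errors from being repaired, so long as the surroundings gain at least as much entropy; paying that entropy bill is what cell-repair machinery (or any future repair technology) spends energy doing.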

Kaku continues:

…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding.  His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals.  In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…

…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM.  By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers.  Scientists will be able to scan millions of genomes of two groups of people, the young and the old.  By comparing the two groups, one can identify where aging takes place at the genetic level.  A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated.  (pages 168-169)

Scientists think aging is only 35 percent determined by genes.  Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria.  This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging.  Soon we could live to 150.  By 2100, we could live well beyond that.

If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent.  This is called calorie restriction.  Every organism studied so far exhibits this phenomenon.

…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging.  In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time.  Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long.  (page 170)

Now scientists have shown that caloric restriction also works for primates:  less diabetes, less cancer, less heart disease, and better health and longer life.

In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells.  SIR2 is activated when it detects that the energy reserves of a cell are low.  The SIR2 gene has a counterpart in mice and people called the SIRT genes, which produce proteins called sirtuins.  Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.

Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair.  If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine.  (page 171)

Kaku reports what William Haseltine, biotech pioneer, told him:

The nature of life is not mortality.  It’s immortality.  DNA is an immortal molecule.  That molecule first appeared perhaps 3.5 billion years ago.  That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that.  First to extend our lives two- or three-fold.  And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely.  And I don’t think that will be an unnatural process.  (page 173)

Kaku concludes that extending life span in the future will likely result from a combination of activities:

  • growing new organs as they wear out or become diseased, via tissue engineering and stem cells
  • ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
  • using gene therapy to alter genes that may slow down the aging process
  • maintaining a healthy lifestyle (exercise and a good diet)
  • using nanosensors to detect diseases like cancer years before they become a problem

Kaku quotes Richard Dawkins:

I believe that by 2050, we shall be able to read the language [of life].  We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.

Dawkins believes, writes Kaku, that once the missing gene has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor.  After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.

 

NANOTECHNOLOGY

Kaku:

For the most part, nanotechnology is still a very young science.  But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes.  MEMS are tiny machines so small they can easily fit on the tip of a needle.  They are created using the same etching technology used in the computer business.  Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them.  (pages 207-208)

Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that can detect the sudden deceleration of a crash.  This has already saved thousands of lives.
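
A toy version of the decision such a sensor feeds is shown below.  The 20 g threshold and the sample readings are illustrative only, not a real automotive specification.

```python
# Toy airbag logic: the MEMS accelerometer reports deceleration, and the
# controller must decide within roughly 1/25th of a second.
CRASH_THRESHOLD_G = 20.0   # illustrative threshold, not a real spec

def should_deploy(deceleration_g):
    return deceleration_g >= CRASH_THRESHOLD_G

for reading in (0.3, 0.9, 35.0):   # cruising, hard braking, collision
    print(reading, "deploy" if should_deploy(reading) else "no deploy")
```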

One day nanomachines may be able to replace surgery entirely.  Cutting the skin may become completely obsolete.  Nanomachines will also be able to find and kill cancer cells in many cases.  These nanomachines can be guided by magnets.

DNA fragments can be embedded on a tiny chip using transistor etching technology.  The DNA fragments can bind to specific gene sequences.  Then, using a laser, thousands of genes can be read at one time, rather than one by one.  Prices for these DNA chips continue to plummet due to Moore’s law.

Small electronic chips will be able to do the work that is now done by an entire laboratory.  These chips will be embedded in our bathrooms.  Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks.  In the future, they may cost pennies and take just a few minutes.

In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite.  They won the Nobel Prize for their work.  Graphene is a single sheet of carbon, no more than one atom thick.  And it can conduct electricity.  It’s also the strongest material ever tested.  (Kaku notes that an elephant balanced on a pencil – on graphene – would not tear it.)

Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor:  one atom thick and ten atoms across.  (The smallest transistors currently are about 30 nanometers.  Novoselov’s transistors are 30 times smaller.)
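
A quick size check (my arithmetic) of that comparison, using graphene’s carbon-carbon spacing of roughly 0.14 nanometers as the only number not taken from the text:

```python
# Ten carbon atoms across, at roughly 0.142 nm per carbon-carbon bond.
CC_SPACING_NM = 0.142
atoms_across = 10
conventional_transistor_nm = 30

graphene_transistor_nm = atoms_across * CC_SPACING_NM        # about 1.4 nm
ratio = conventional_transistor_nm / graphene_transistor_nm
print(round(ratio))   # about 21x smaller, the same ballpark as the quoted 30x
```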

The real challenge now is how to connect molecular transistors.

The most ambitious proposal is to use quantum computers, which actually compute on individual atoms.  Quantum computers are extremely powerful.  The CIA has looked at them for their code-breaking potential.

Quantum computers actually exist.  Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.”  When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
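
A minimal sketch of that “pulse flips the atom” picture: the atom’s state is a pair of amplitudes (up, down), and the flip simply swaps them.  A real quantum computer also exploits superposition and entanglement, which this deliberately leaves out.

```python
# The atom's state as (up, down) amplitudes; a flip is the quantum "NOT".
def flip(state):
    up, down = state
    return (down, up)          # the Pauli-X (bit-flip) operation

one = (1.0, 0.0)               # atom pointing up, read as "1"
zero = flip(one)               # after the pulse: pointing down, read as "0"
print(one, "->", zero)
```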

The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations.  (When atoms are “coherent,” they vibrate in phase with one another.)  Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.

Scientists are working on programmable matter the size of grains of sand.  These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object.  In fact, many common consumer products may be replaced by software programs sent over the internet.  If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.

In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything.  This would be the crowning achievement of engineering, says Kaku.  One problem is the sheer number of atoms that would need to be re-arranged.  But this could be solved by self-replicating nanobots.

A version of this “replicator” already exists.  Mother Nature can take the food we eat and create a baby in nine months.  DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku.  Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms.  (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)

 

FUTURE OF ENERGY

Kaku writes that in this century, we will harness the power of the stars.  In the short term, this means solar and hydrogen will replace fossil fuels.  In the long term, it means we’ll tap the power of fusion and even solar energy from outer space.  Also, cars and trains will be able to float using magnetism.  This can drastically reduce our use of energy, since most energy today is used to overcome friction.

Currently, fossil fuels meet about 80 percent of the world’s energy needs.  Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.

Electric vehicles will reduce the use of fossil fuels.  But we also have to transform the way electricity is generated.  Solar power will keep getting cheaper.  But much more clean energy will be required to gradually replace fossil fuels.

Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases.  However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.

Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve.  This increases the odds that terrorists could acquire nuclear weapons.

Within a few decades, global warming will become even more obvious.  The signs are already clear, notes Kaku:

  • The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
  • Greenland’s ice sheet continues to shrink.  (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
  • Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off.  (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
  • For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
  • Temperatures started to be reliably recorded in the late 1700s;  1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded.  Levels of carbon dioxide are rising dramatically.
  • As the earth heats up, tropical diseases are gradually migrating northward.

It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide.  But we must be careful about unintended side effects on ecosystems.

Eventually fusion power may solve most of our energy needs.  Fusion powers the sun and lights up all the stars.

Anyone who can successfully master fusion power will have unleashed unlimited eternal energy.  And the fuel for these fusion plants comes from ordinary seawater.  Pound for pound, fusion power releases 10 million times more power than gasoline.  An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum.  (page 272)

It’s extremely difficult to heat hydrogen gas to tens of millions of degrees.  But scientists will probably master fusion power within the next few decades.  And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.

One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point.  If the resulting shock waves are powerful enough, they can compress and heat the fuel to the point of creating nuclear fusion.  This approach is called inertial confinement fusion.

The other main approach used by scientists to try to create fusion is magnetic confinement fusion.  A huge, hollow doughnut-shaped device made of steel and surrounded by magnetic coils is used to attempt to squeeze hydrogen gas enough to heat it to millions of degrees.

What is most difficult in this approach is squeezing the hydrogen gas uniformly.  Otherwise, it bulges out in complex ways.  Scientists are using supercomputers to try to control this process.  (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion.  So stars form easily.)

Most of the energy we burn is used to overcome friction.  Kaku observes that a frictionless surface, such as a layer of ice, between major cities would drastically cut the energy needed for transportation.

In 1911, scientists discovered that cooling mercury to about four kelvin (four degrees above absolute zero) causes it to lose all electrical resistance.  Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy.  The disadvantage is that you have to cool it to near absolute zero using liquid helium, which is very expensive.

But in 1986, scientists discovered ceramics that become superconductors at far higher temperatures; by 1987, some worked at 92 kelvin above absolute zero, and ceramic superconductors have since been created at 138 kelvin.  This is important because nitrogen liquefies at 77 kelvin.  Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.

Remember that most energy is used to overcome friction.  Even for electricity, up to 30 percent can be lost during transmission.  But experimental evidence suggests that a current in a superconducting loop can persist for 100,000 years, or by some estimates billions of years.  Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating friction.

Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.

The reason the magnet floats is simple.  Magnetic lines of force cannot penetrate a superconductor.  This is the Meissner effect.  (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.)  When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic.  This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float.  (page 289)

Room temperature superconductors will allow trains and cars to move without any friction.  This will revolutionize transportation.  Compressed air could get a car going.  Then the car could float almost forever as long as the surface is flat.

Even without room temperature superconductors, some countries have built magnetically levitated (maglev) trains.  A maglev train still loses energy to air friction.  In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.

Later this century, space solar power may become possible, because sunlight in space is about 8 times more intense than on the surface of the earth.  A reduced cost of space travel may make it feasible to send hundreds of solar satellites into space.  One challenge is that these solar satellites would have to orbit about 22,000 miles up (geostationary orbit), much farther than satellites in near-earth orbits of roughly 300 miles.  But the main problem is the cost of booster rockets.  (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)
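(As a quick sanity check on the 22,000-mile figure, here is a short back-of-the-envelope sketch in Python.  It is my own illustration, not from Kaku’s book; it simply applies Kepler’s third law to find the altitude of a geostationary orbit, which is where a power-beaming satellite would presumably need to sit.)

    import math

    # Back-of-the-envelope check (not from Kaku's book): altitude of a
    # geostationary orbit from Kepler's third law, r^3 = G*M*T^2 / (4*pi^2).

    G_M_EARTH = 3.986e14        # m^3/s^2, earth's gravitational parameter
    SIDEREAL_DAY = 86164.0      # seconds for one rotation of the earth
    EARTH_RADIUS_KM = 6371.0

    r_m = (G_M_EARTH * SIDEREAL_DAY**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    altitude_km = r_m / 1000.0 - EARTH_RADIUS_KM
    altitude_miles = altitude_km / 1.609

    print(f"Geostationary altitude: ~{altitude_km:,.0f} km (~{altitude_miles:,.0f} miles)")
    # Prints roughly 35,800 km, or about 22,000 miles, matching the figure above.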

 

FUTURE OF SPACE TRAVEL

Kaku quotes Carl Sagan:

We have lingered long enough on the shores of the cosmic ocean.  We are ready at last to set sail for the stars.

Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:

So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition.  This, in turn, will generate more interest in one day sending a probe to these distant planets.  There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms.  (page 297)

Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars.  But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.

For instance, the ocean under the ice of the moon Europa is estimated to be twice the total volume of the earth’s oceans.  And Europa is continually heated by tidal forces caused by Jupiter’s gravity, which helps keep that ocean liquid.

It had been thought that life required sunlight.  But in 1977, life was found on earth deep under water in the Galapagos Rift.  Heat from undersea volcanic vents provided enough energy for life.  Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcanic vents.  Some of the most primitive forms of DNA have been found on the bottom of the ocean.

In the future, new types of space satellites may be able to detect not only gravitational waves from colliding black holes, but even new information about the Big Bang – a singularity involving extreme density and temperature.  Kaku:

At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty.  In one scenario, our universe is a huge bubble of some sort that is continually expanding.  We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper).  But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath.  Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation).  Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion.  (page 301)

Space travel is very expensive.  It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon.  It costs much more to send a person to Mars.

Robotic missions are far cheaper than manned missions.  And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.

Kaku next describes a mission to Mars:

Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission.  But getting to Mars will be much more difficult than reaching the moon.  In contrast to the moon, Mars represents a quantum leap in difficulty.  It takes only three days to reach the moon.  It takes six months to a year to reach Mars.

In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like.  Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.

Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station.  To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars.  With no air, soil, or water, everything must be brought from earth.  It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars.  The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth.  Any rip in a space suit would create rapid depressurization and death.  (page 312)

Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth.  The temperature never goes above the melting point of ice.  And the dust storms are ferocious and often engulf the entire planet.

Eventually astronauts may be able to terraform Mars to make it more hospitable for life.  The simplest approach would be to inject methane gas into the atmosphere, which might trap enough heat to raise the temperature of Mars above the melting point of ice.  (Methane is an even more potent greenhouse gas than carbon dioxide.)  Once the temperature rises, the underground permafrost may begin to thaw.  Riverbeds would fill with water, and lakes and oceans might form again.  This would release more carbon dioxide, leading to a positive feedback loop.

Another possible way to terraform Mars would be to deflect a comet towards the planet.  Comets are made mostly of water ice.  A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.

The polar regions of Mars are made of frozen carbon dioxide and ice.  It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps.  This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.

Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars.  They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide.  They could also be genetically modified to maximize their growth on Mars.  These algae pools could accelerate terraforming in several ways.  First, they could convert carbon dioxide into oxygen.  Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun.  Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet.  Fourth, the algae can be harvested for food.  Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen.  (page 315)

Scientists have also considered the possibility of building solar satellites around Mars, causing the temperature to rise and the permafrost to begin melting, setting off a positive feedback loop.

2070 to 2100:  A Space Elevator and Interstellar Travel

Near the end of the century, scientists may finally be able to construct a space elevator.  With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky.  Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.

One challenge is to create a carbon nanotube cable that is 50,000 miles long.  Another challenge is that space satellites in orbit travel at 18,000 miles per hour.  If a satellite collided with the space elevator, it would be catastrophic.  So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.

Another challenge is turbulent weather on earth.  The base of the space elevator should be movable, perhaps anchored to an aircraft carrier or oil platform, so the cable can be steered away from violent storms.  Moreover, there must be an escape pod in case the cable breaks.

Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt.  The next goal would be travelling to a star.  A conventional chemical rocket would take 70,000 years to reach the nearest star.  But there are several proposals for an interstellar craft:

  • solar sail
  • nuclear rocket
  • ramjet fusion
  • nanoships

Although light has no mass, it has momentum and so can exert pressure.  The pressure is extremely small.  But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft.  The solar sail would likely be miles wide.  The craft would have to circle the sun for a few years, gaining more and more momentum.  Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
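(To make “the pressure is tiny, but a big sail helps” concrete, here is a rough Python sketch of my own, not a calculation from the book.  It uses the standard radiation-pressure formula for a perfectly reflecting sail, F = 2IA/c, taking the solar intensity near earth as roughly 1,360 watts per square meter.)

    # Rough sketch (my own, not from the book): thrust on an ideal reflecting
    # solar sail near earth's orbit, using F = 2 * I * A / c.

    SOLAR_INTENSITY = 1360.0    # W/m^2, sunlight intensity near earth (approximate)
    SPEED_OF_LIGHT = 3.0e8      # m/s

    def sail_force_newtons(area_m2: float) -> float:
        """Force on a perfectly reflecting sail facing the sun."""
        return 2.0 * SOLAR_INTENSITY * area_m2 / SPEED_OF_LIGHT

    # A square sail one mile on a side (~2.6 million square meters):
    area = 1609.0 ** 2
    print(f"Thrust on a 1-mile-square sail: ~{sail_force_newtons(area):.0f} N")
    # ~23 N -- a tiny push, but applied continuously for years, which is why
    # the craft must spiral outward slowly while it gains momentum.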

Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could.  One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters, and would have been powered by 1,000 hydrogen bombs.  (This also would have been a good way to get rid of atomic bombs meant only for warfare.)  Unfortunately, the 1963 Nuclear Test Ban Treaty meant that scientists couldn’t test Orion, so the project was set aside.

A ramjet engine scoops in air at the front and mixes it with fuel, which then ignites and creates thrust.  In 1960, Robert Bussard had the idea of scooping up not air but hydrogen gas, which is found throughout outer space.  The hydrogen gas would be squeezed and heated by electric and magnetic fields until it fused into helium, releasing enormous amounts of energy via nuclear fusion.  With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.

Bussard calculated that a 1,000-ton ramjet fusion engine, accelerating steadily, could reach 77 percent of the speed of light after one year.  This would allow it to reach the Andromeda galaxy, about 2 million light-years away, in just 23 years as measured by the astronauts on the starship.  (We know from Einstein’s theory of relativity that time slows down dramatically for those traveling at such a high fraction of the speed of light.  Meanwhile, on earth, millions of years will have passed.)
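(The time-dilation claim can be made concrete with a standard special-relativity formula.  The sketch below is my own hedged illustration, not Bussard’s or Kaku’s actual calculation: it computes the ship-board “proper” time for a craft that accelerates continuously at about 1 g, with no deceleration at the far end.  The answer lands in the same ballpark as the 23-year figure; the exact number depends on the assumed acceleration profile.)

    import math

    # Hedged illustration (not the book's exact calculation): proper time for a
    # rocket under constant acceleration a covering distance d, with c = 1:
    #   tau = (1/a) * acosh(1 + a*d)
    # Units: light-years and years, so 1 g is roughly 1.03 ly/yr^2.

    G_ACCEL = 1.03              # ly/yr^2, approximately one earth gravity
    D_ANDROMEDA = 2.0e6         # light-years, the distance used in the text

    def ship_time_years(distance_ly: float, accel: float = G_ACCEL) -> float:
        """Years experienced on board (flyby, no slowdown at the destination)."""
        return (1.0 / accel) * math.acosh(1.0 + accel * distance_ly)

    print(f"Ship time to Andromeda: ~{ship_time_years(D_ANDROMEDA):.0f} years")
    # ~15 years of ship time, versus more than 2 million years of earth time.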

Note that there are still engineering questions about the ramjet fusion engine.  For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space.  Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.

Another possibility is antimatter rocket ships.  If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel.  Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.

Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars.  These nanoships might become cheap enough to produce and to fuel.  They might even be self-replicating.

Millions of nanoships could gather intelligence the way a “swarm” does.  A single ant is extremely simple, yet a colony of ants can build a complex ant hill.  A similar concept is the “smart dust” considered by the Pentagon:  billions of particles, each a sensor, could be used to gather a great deal of information.

Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light.  Moreover, scientists may be able to create one or a few self-replicating nanoprobes.  Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.

 

FUTURE OF HUMANITY

Kaku writes:

All the technological revolutions described here are leading to a single point:  the creation of a planetary civilization.  This transition is perhaps the greatest in human history.  In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos.  Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate.  (pages 378-379)

In 1964, the Russian astrophysicist Nikolai Kardashev was interested in probing outer space for signals sent by advanced civilizations.  So he proposed three types of civilization:

  • A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
  • A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
  • A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).

Kaku explains:

The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations.  Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies.  (page 381)

Carl Sagan has calculated that we are a Type 0.7 civilization, not quite Type I yet.  (A short sketch of how such a fractional rating can be computed appears after the list below.)  There are signs, says Kaku, that humanity will reach Type I in a matter of decades.

  • The internet allows a person to connect with virtually anyone else on the planet effortlessly.
  • Many families around the world have middle-class ambitions:  a suburban house and two cars.
  • The criterion for being a superpower is not weapons, but economic strength.
  • Entertainers increasingly consider the global appeal of their products.
  • People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
  • The news is becoming planetary.
  • Soccer and the Olympics are emerging to dominate planetary sports.
  • The environment is debated on a planetary scale.  People realize they must work together to control global warming and pollution.
  • Tourism is one of the fastest-growing industries on the planet.
  • War has rarely occurred between two democracies.  A vibrant press, opposition parties, and a solid middle class tend to ensure that.
  • Diseases will be controlled on a planetary basis.
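(Returning to the Type 0.7 figure above: the usual way to get a fractional Kardashev rating is a logarithmic interpolation attributed to Sagan, sketched below in Python.  Note that this formula anchors Type I at 10^16 watts, a slightly different normalization than the round figures listed earlier, and it takes humanity’s total power use as roughly 10^13 watts; both are assumptions of this illustration rather than numbers from Kaku’s text.)

    import math

    # Sagan's logarithmic interpolation (as commonly stated):
    #   K = (log10(P in watts) - 6) / 10
    # so that 10^16 W -> Type 1.0, 10^26 W -> Type 2.0, and so on.

    def kardashev_rating(power_watts: float) -> float:
        return (math.log10(power_watts) - 6.0) / 10.0

    HUMANITY_POWER_WATTS = 1.0e13   # roughly 10-20 terawatts today (assumption)
    print(f"Humanity's rating: ~{kardashev_rating(HUMANITY_POWER_WATTS):.1f}")
    # Prints ~0.7, consistent with Sagan's estimate cited above.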

A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun were dying.  Or we may be able to prolong the sun’s life.  (Or we might be able to change the orbit of our planet.)  Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere.  Also, we probably will have colonized not just the entire solar system, but nearby stars.

By the time we become a Type III civilization, we will have explored most of the galaxy.  We may have done this using self-replicating robot probes.  Or we may have mastered Planck energy (10^19 billion electron volts).  At this energy, space-time itself becomes unstable.  The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time.  By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time.  As a result, a Type III civilization might be able to colonize the entire galaxy.

It’s possible that a more advanced civilization has already visited or detected us.  For instance, they may have used tiny self-replicating probes that we haven’t noticed yet.  It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type I.

Kaku writes that many people are not aware of the historic transition humanity is now making.  But this could change if we discover evidence of intelligent life somewhere in outer space.  Then we would consider our level of technological evolution relative to theirs.

Consider the SETI Institute.  This is from their website (www.seti.org):

SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.

Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets.  Whether evolution will give rise to intelligent, technological civilizations is open to speculation.  However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.

Finding evidence of other technological civilizations however, requires significant effort.  Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.

Work at the Center is divided into two areas:  Research and Development (R&D) and Projects.  R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects.  The algorithms and technology developed in the lab are first field-tested and then implemented during observing.  The observing results are used to guide the development of new hardware, software, and observing facilities.  The improved SETI observing Projects in turn provide new ideas for Research and Development.  This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.

Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is.  A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible.  If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.