Notes on Value Investing

(Image:  Zen Buddha Silence by Marilyn Barbone.)

October 1, 2017

Today we review some of the central concepts in value investing.  In order to learn, some repetition is required, especially when the subject may be difficult or counter-intuitive for many.

Here’s the outline:

  • Index Funds or Quant Value Funds
  • The Dangers of DCF
  • Notes on Ben Graham
  • Value vs. Growth
  • The Superinvestors of Graham-and-Doddsville

INDEX FUNDS OR QUANT VALUE FUNDS

The first important point is that the vast majority of investors are best off buying and holding a broad market, low-cost index fund.  Warren Buffett has repeatedly made this observation.  See:

In other words, most of us who believe that we can outperform the market over the long term (decades) are wrong.  The statistics on this point are clear.  For instance, see pages 21-25 of Buffett’s 2016 Berkshire Hathaway Shareholder Letter:

A quantitative value investment strategy—especially if focused on micro caps—is likely to do better than an index fund over time.  If you understand why this is the case, then you could adopt such an approach, at least for part of your portfolio.  (The Boole Microcap Fund is a quantitative value fund.)  But you have to be able to stick with it over the long term even though there will sometimes be multi-year periods of underperforming the market.  Easier said than done.  Know Thyself.

We all like to think we know ourselves.  But in many ways we know ourselves much less than we believe we do.  This is especially true when it comes to probabilistic decisions or complex computations.  In these areas, we suffer from cognitive biases which generally cause us to make suboptimal or erroneous choices.  See:

The reason value investing—if properly implemented—works over time is due to the behavioral errors of many investors.  Lakonishok, Shleifer, and Vishny give a good explanation of this in their 1994 paper, “Contrarian Investment, Extrapolation, and Risk.”  Link:

Lakonishok, Shleifer, and Vishny (LSV) offer three reasons why investors follow “naive” strategies:

  • Investors often extrapolate high past earnings growth too far into the future.  Similarly, investors extrapolate low past earnings growth too far into the future.
  • Investors overreact to good news and to bad news.
  • Investors think a well-run company is automatically a good investment.

LSV then state that, for whatever reason, investors overvalue stocks that have done well in the past, causing these “glamour” or “growth” stocks to be overpriced in general.  Similarly, investors undervalue stocks that have done poorly in the past, causing these “value” stocks to be underpriced in general.

Important Note:  Cognitive biases—such as overconfidence, confirmation bias, and hindsight bias—are the main reason why investors extrapolate past trends too far into the future.  For simple and clear descriptions of cognitive biases, see:


THE DANGERS OF DCF

For most businesses, it’s very difficult—and often impossible—to predict future earnings and free cash flows.  One reason Warren Buffett and Charlie Munger have produced such an outstanding record at Berkshire Hathaway is because they focus on businesses that are highly predictable.  These types of businesses usually have a sustainable competitive advantage, which is what makes their future earnings and cash flows more certain.  As Buffett put it:

The key to investing is not assessing how much an industry is going to affect society, or how much it will grow, but rather determining the competitive advantage of any given company and, above all, the durability of that advantage.

Most businesses do not have a sustainable competitive advantage, and thus are not predictable 5 or 10 years into the future.

Buffett calls a sustainable competitive advantage a moat, which defends the economic “castle.”  Here’s how he described it at the Berkshire Hathaway Shareholder Meeting in 2000:

So we think in terms of that moat and the ability to keep its width and its impossibility of being crossed as the primary criterion of a great business.  And we tell our managers we want the moat widened every year.  That doesn’t necessarily mean the profit will be more this year than it was last year because it won’t be sometimes.  However, if the moat is widened every year, the business will do very well.  When we see a moat that’s tenuous in any way – it’s just too risky.  We don’t know how to evaluate that.  And, therefore, we leave it alone.  We think that all of our businesses – or virtually all of our businesses – have pretty darned good moats.

There’s a great book, The Art of Value Investing (Wiley, 2013), by John Heins and Whitney Tilson, which is filled with quotes from top value investors.  Here’s a quote from Bill Ackman, which shows that he strives to invest like Buffett and Munger:

We like simple, predictable, free-cash-flow generative, resilient and sustainable businesses with strong profit-growth opportunities and/or scarcity value.  The type of business Warren Buffett would say has a moat around it.  (page 131)

If the future earnings and cash flows of a business are not predictable, then DCF valuation may not be very reliable.  Moreover, it’s often hard to calculate the cost of capital (the discount rate).

  • DCF refers to “discounted cash flows.”  You can value any business if you can estimate future free cash flow with reasonable accuracy.  To get the present value of the business, the free cash flow in each future year must be discounted back to the present by the cost of capital.
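As a minimal sketch of the mechanics described above (the cash-flow figures, 10% discount rate, and 2% terminal growth below are illustrative assumptions, not anyone's actual estimates):

```python
# Minimal DCF sketch: discount each year's forecast free cash flow back
# to the present, then add a discounted Gordon-growth terminal value.
# All input numbers are hypothetical.

def dcf_value(fcfs, discount_rate, terminal_growth):
    """Present value of forecast free cash flows plus a terminal value."""
    pv = sum(fcf / (1 + discount_rate) ** t
             for t, fcf in enumerate(fcfs, start=1))
    # Terminal value at the end of the forecast horizon, then discounted back.
    terminal = fcfs[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
    pv += terminal / (1 + discount_rate) ** len(fcfs)
    return pv

# Example: $10M of free cash flow growing 5%/yr for 5 years,
# 10% discount rate, 2% terminal growth.
fcfs = [10.0 * 1.05 ** t for t in range(1, 6)]
value = dcf_value(fcfs, discount_rate=0.10, terminal_growth=0.02)
```

Note how sensitive the result is to the inputs: small changes in the discount rate or terminal growth move the value substantially, which is exactly why DCF is unreliable when future cash flows are not predictable.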

To determine the cost of capital, Buffett and Munger use the opportunity cost of capital, which is the next best investment with a similar level of risk.

  • To illustrate, say they’re considering an investment in Company A, which they feel quite certain will return 15% per year.  To figure out the value of this potential investment, they will find their next best investment – which they may already own – that has a similar level of risk.  Perhaps they own Company N and they feel equally certain that its future returns will be 17% per year.  In that case, if possible, they would prefer to buy more of Company N rather than buying any of Company A.  (Often there are other considerations.  But that’s the gist of it.)

The academic definition of cost of capital includes “beta,” which measures how volatile a stock price has been in the past.  But for value investors like Buffett and Munger, all that matters is how much free cash flow the business will produce in the future.  The degree of volatility of a stock in the past generally has no logical relationship with the next 20-30 years of cash flows.

If a business lacks a true moat and if, therefore, DCF probably won’t work, is there any other way to evaluate a business?  James Montier, in Value Investing (Wiley, 2009), mentions three alternatives to DCF that do not require forecasting:

  • Reverse-engineered DCF
  • Asset Value
  • Earnings Power

In a reverse-engineered DCF, instead of forecasting future growth, you take the current share price and figure out what that implies about future growth.  Then you compare the implied future growth of the business against some reasonable benchmark, like growth of a close competitor.  (You still have to determine a cost of capital.)
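Under a constant-growth (Gordon) model, the implied growth rate even has a closed form: if price = fcf × (1+g) / (r − g), you can solve for g directly.  A hedged sketch, where the price, cash flow, and 10% cost of capital are all made-up inputs:

```python
# Reverse-engineered DCF sketch under a constant-growth assumption:
# solve price = fcf * (1 + g) / (r - g) for the implied growth rate g.

def implied_growth(price, fcf, r):
    """Growth rate implied by the current price, given trailing free
    cash flow fcf and cost of capital r (constant-growth model)."""
    return (price * r - fcf) / (price + fcf)

# Hypothetical example: $100 stock, $5 of free cash flow, 10% cost of capital.
g = implied_growth(price=100.0, fcf=5.0, r=0.10)
# The implied g (~4.8% here) is then compared against a reasonable
# benchmark, such as a close competitor's growth rate.
```

The point of the exercise is that no forecast is required: the market's own price supplies the growth assumption, and your only job is to judge whether that assumption is plausible.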

As for asset value and earnings power, these were the two methods of valuation suggested by Ben Graham.  For asset value, Graham often suggested using liquidation value, which is usually a conservative estimate of asset value.  If the business could be sold as a going concern, then the assets would probably have a higher value than liquidation value.

Regarding earnings power, Montier quotes Graham from Security Analysis:

What the investor chiefly wants to learn… is the indicated earnings power under the given set of conditions, i.e., what the company might be expected to earn year after year if the business conditions prevailing during the period were to continue unchanged.

Montier again quotes Graham:

It combines a statement of actual earnings, shown over a period of years, with a reasonable expectation that these will be approximated in the future, unless extraordinary conditions supervene.  The record must be over a number of years, first because a continued or repeated performance is always more impressive than a single occurrence, and secondly because the average of a fairly long period will tend to absorb and equalize the distorting influences of the business cycle.

Montier mentions Bruce Greenwald’s excellent book, Value Investing: From Graham to Buffett and Beyond (Wiley, 2004), for a modern take on asset value and earnings power.


NOTES ON BEN GRAHAM

When studying Graham’s methods as presented in Security Analysis—first published in 1934—it’s important to bear in mind that Graham invented value investing during the Great Depression.  Therefore, some of Graham’s methods are arguably overly conservative, particularly if you think the Great Depression was caused in part by mistakes in fiscal and monetary policy that are unlikely to be repeated.  Charlie Munger put it as follows:

I don’t love Ben Graham and his ideas the way Warren does.  You have to understand, to Warren—who discovered him at such a young age and then went to work for him—Ben Graham’s insights changed his whole life, and he spent much of his early years worshiping the master at close range.

But I have to say, Ben Graham had a lot to learn as an investor.  His ideas of how to value companies were all shaped by how the Great Crash and the Depression almost destroyed him, and he was always a little afraid of what the market can do.  It left him with an aftermath of fear for the rest of his life, and all his methods were designed to keep that at bay.

That being said, Warren Buffett has always maintained that Chapters 8 and 20 of Ben Graham’s The Intelligent Investor—first published in 1949—contain the three fundamental precepts of value investing:

  • Owning stock is part ownership of the underlying business.
  • Market prices are there to serve you, not to instruct you.  When prices drop a great deal, it may be a good opportunity to buy.  When prices rise quite a bit, it may be a good time to sell.  At all other times, it’s best to focus on the operating results of the businesses you own.
  • The margin of safety is the difference between the price you pay and your estimate of the intrinsic value of the business.  Price is what you pay;  value is what you get.  If you think the business is worth $40 per share, then you would like to pay $20 per share.  (Value investors refer to a stock that’s selling for half its intrinsic value as a “50-cent dollar.”)

The purpose of the margin of safety is to minimize the effects of bad luck, human error, and the vagaries of the future.  Good value investors are right about 60% of the time and wrong 40% of the time.  By systematically minimizing the impact of inevitable mistakes and bad luck, a solid value investing strategy will beat the market over time.  Why?

Here’s why:  As you increase your margin of safety, you simultaneously increase your potential return.  The lower the risk, the higher the potential return.  When you’re wrong, you lose less on average.  When you’re right, you make more on average.

For instance, assume again that you estimate the intrinsic value of the business at $40 per share.

  • If you can pay $20 per share, then you have a good margin of safety.  And if you are right about intrinsic value, then you will make 100% on your investment when the price eventually moves from $20 to $40.
  • What if you managed to pay $10 per share for the same stock?  Then you have an even larger margin of safety relative to the estimated intrinsic value of $40.  As well, if you’re right about intrinsic value, then you will make 300% on your investment when the price eventually moves from $10 to $40.
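The arithmetic in the two bullets above can be checked directly, using the same $40 intrinsic-value estimate:

```python
# Margin of safety vs. potential return: the cheaper the purchase price
# relative to estimated intrinsic value, the larger both the cushion
# and the upside.  Numbers match the $40-per-share example in the text.

intrinsic_value = 40.0
for price in (20.0, 10.0):
    margin = (intrinsic_value - price) / intrinsic_value   # discount to value
    upside = (intrinsic_value - price) / price             # return if estimate is right
    print(f"pay ${price:.0f}: margin of safety {margin:.0%}, upside {upside:.0%}")
```

At $20 the margin of safety is 50% and the upside is 100%; at $10 the margin rises to 75% while the upside rises to 300%.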

The notion that you can increase your safety and your potential returns at the same time runs directly contrary to what is commonly taught in modern finance.  In modern finance, you can only increase your potential return by increasing your risk.

A final point about Buffett and Munger’s evolution as investors.  Munger realized early in his career that it was better to buy a high-quality business at a reasonable price, rather than a low-quality business at a cheap price.  Buffett also realized this—partly through Munger’s influence—after experiencing a few failed investments in bad businesses purchased at cheap prices.  Ever since, Buffett and Munger have expressed the lesson as follows:

It’s better to buy a wonderful company at a fair price than a fair company at a wonderful price.

The idea is to pay a reasonable price for a company with a high ROE (return on equity) that can be sustained—due to a moat.  If you hold a high-quality business like this, then over time your returns as an investor will approximate the ROE.  High-quality businesses can have sustainably high ROEs ranging from 20% to over 100%.
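A stylized illustration of why returns approximate ROE (this assumes full earnings retention and a constant price-to-book multiple, which are simplifying assumptions, not claims about any real business):

```python
# If a business sustains a 20% ROE and retains all earnings, book value
# compounds at 20%/yr.  At a constant price-to-book multiple, the stock
# price compounds at the same rate, so the investor's return ~= ROE.

roe, years = 0.20, 10
book_value = 1.0
for _ in range(years):
    book_value *= 1 + roe          # retained earnings grow book value

# Book value grows about 6.2x in 10 years; annualizing recovers the ROE.
annualized = book_value ** (1 / years) - 1
```

In practice dividends, buybacks, and multiple changes complicate the picture, but the compounding logic is the core of the "wonderful company at a fair price" idea.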

Note:  Buffett and Munger also insist that the companies they invest in have low debt (or no debt).  Even a great business can fail if it has too much debt.


VALUE VS. GROWTH

One of the seminal academic papers on value investing—which was mentioned earlier—is Lakonishok, Shleifer, and Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.”  Link:

Lakonishok, Shleifer, and Vishny (LSV) show that value investing—buying stocks at low multiples (P/B, P/CF, and P/E)—outperformed glamour (growth) investing by about 10-11% per year from 1968 to 1989.

Here’s why, say LSV:  Investors expect the poor recent performance of value stocks to continue, causing these stocks to trade at lower multiples than is justified by subsequent performance.  And investors expect the recent good performance of glamour stocks to continue, causing these stocks to trade at higher multiples than is justified by subsequent performance.

Interestingly, La Porta, in a 1993 paper, shows that contrarian value investing based directly on analysts’ forecasts of future growth can produce even larger excess returns than value investing based on low multiples.  In other words, betting on the stocks for which analysts have the lowest expectations can outperform the market by an even larger margin.

Moreover, LSV demonstrate that value investing is not riskier.  First, excess returns from value investing cannot be explained by excess volatility.  Furthermore, LSV show that value investing does not underperform during market declines or recessions.  If anything, value investing outperforms during down markets, which makes sense because value investing involves paying prices that are, on average, far below intrinsic value.

In conclusion, LSV ask why countless investors continue to buy glamour stocks and to ignore value stocks.  One chief reason is that buying glamour stocks—generally stocks that have been doing well—may seem “prudent” to many professional investors.  After all, glamour stocks are unlikely to become financially distressed in the near future, whereas value stocks are often already in distress.

In reality, a basket of glamour stocks is not prudent because it will far underperform a basket of value stocks over a sufficiently long period of time.  However, if professional investors choose a basket of value stocks, then they will not only own many stocks experiencing financial distress, but they also risk underperforming for several years in a row.  These are potential career risks that most professional investors would rather avoid.  From that point of view, it may indeed be “prudent” to stick with glamour stocks, despite the lower long-term performance of glamour compared to value.

  • An individual value stock is likely to be more distressed—and thus riskier—than either a glamour stock or an average stock.  But LSV have shown that value stocks, as a group, far outperform both glamour stocks and the market in general, and do so with less risk.  This finding that value stocks, as a group, outperform has been confirmed by many academic studies, including Fama and French (1992).
  • If you follow a quantitative value strategy focused on micro caps, one of the best ways to improve long-term performance is by using the Piotroski F_Score.  It’s a simple measure that strengthens a micro-cap value portfolio by reducing the number of “cheap but weak” companies and increasing the number of “cheap and strong” companies.  See:
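The Piotroski F_Score awards one point for each of nine fundamental signals of improving profitability, leverage/liquidity, and operating efficiency.  A minimal sketch (the field names and the sample figures below are hypothetical; real inputs would come from two years of financial statements):

```python
# Piotroski F_Score sketch (0-9): one point per passing signal.

def f_score(cur, prior):
    signals = [
        cur["roa"] > 0,                                   # positive return on assets
        cur["cfo"] > 0,                                   # positive operating cash flow
        cur["roa"] > prior["roa"],                        # improving ROA
        cur["cfo"] > cur["roa"] * cur["assets"],          # cash flow exceeds accrual earnings
        cur["leverage"] < prior["leverage"],              # falling long-term leverage
        cur["current_ratio"] > prior["current_ratio"],    # improving liquidity
        cur["shares"] <= prior["shares"],                 # no share dilution
        cur["gross_margin"] > prior["gross_margin"],      # improving gross margin
        cur["asset_turnover"] > prior["asset_turnover"],  # improving efficiency
    ]
    return sum(signals)

# Made-up example: a cheap company that passes every test except
# share count (it issued stock), so it scores 8 of 9.
cur = dict(roa=0.06, cfo=8.0, assets=100.0, leverage=0.30,
           current_ratio=1.8, shares=52, gross_margin=0.42, asset_turnover=1.1)
prior = dict(roa=0.04, leverage=0.35, current_ratio=1.6,
             shares=50, gross_margin=0.40, asset_turnover=1.0)
score = f_score(cur, prior)
```

In a quantitative value portfolio, a high F_Score is used to keep the "cheap and strong" companies and to screen out the "cheap but weak" ones.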


THE SUPERINVESTORS OF GRAHAM-AND-DODDSVILLE

Buffett gave a talk at Columbia Business School in 1984 entitled, “The Superinvestors of Graham-and-Doddsville.”  Link:

According to the EMH (Efficient Markets Hypothesis), investors who beat the market year after year are just lucky.  In his talk, Buffett argues as follows:  fifteen years before 1984, he knew a group of people who had learned the value investing approach from Ben Graham and David Dodd.  Now in 1984, fifteen years later, all of these individuals have produced investment track records far in excess of the S&P 500 Index.  Moreover, each of these investors applied the value investing approach in his own way—there was very little overlap in terms of which companies these investors bought.  Buffett simply asks whether this could be due to pure chance.

As a way to think about the issue, Buffett says to imagine a national coin-flipping contest in which all 225 million Americans (the population in 1984) participate.  It is one dollar per flip on the first day, so roughly half the people lose and pay one dollar to the winning half.  Each day the contest is repeated, with the stakes building up based on all previous winnings.  After 10 straight mornings of this contest, there will be about 220,000 flippers left, each with a bit over $1,000.  Buffett jokes:

Now this group will probably start getting a little puffed up about this, human nature being what it is.  They may try to be modest, but at cocktail parties they will occasionally admit to attractive members of the opposite sex what their technique is, and what marvelous insights they bring to the field of flipping.  (page 5)

In another 10 days, there will be about 215 people left who have correctly called the toss of a coin 20 times in a row.  Each would have a little over $1,000,000.  Buffett quips:

By then, this group will really lose their heads.  They will probably write books on ‘How I Turned a Dollar into a Million Working Thirty Seconds a Morning.’  Worse yet, they’ll probably start jetting around the country attending seminars on efficient coin-flipping and tackling skeptical professors with, ‘If it can’t be done, why are there 215 of us?’

But then some business school professor will probably be rude enough to bring up the fact that if 225 million orangutans had engaged in a similar exercise, the results would be much the same—215 egotistical orangutans with 20 straight winning flips.
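The arithmetic of the story checks out: halving the field each day, 225 million entrants leave roughly 220,000 winners after 10 days and roughly 215 after 20, with the stakes doubling daily.

```python
# Coin-flipping contest arithmetic: the field halves and the stakes
# double each day.

entrants = 225_000_000
after_10 = entrants / 2 ** 10      # flippers left after 10 days (~220,000)
stake_10 = 2 ** 10                 # dollars each ($1,024)
after_20 = entrants / 2 ** 20      # flippers left after 20 days (~215)
stake_20 = 2 ** 20                 # dollars each ($1,048,576)
```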

But assume that the original 225 million orangutans were distributed roughly like the U.S. population.  Buffett then asks:  what if 40 of the 215 winning orangutans were discovered to all be from the same zoo in Omaha?  This would lead one to want to identify common factors for these 40 orangutans.  Buffett says (humorously) that you’d probably ask the zookeeper about their diets, what books they read, etc.  In short, you’d try to identify causal factors.

Buffett remarks that scientific inquiry naturally follows this pattern.  He gives another example:  If there was a rare type of cancer, with 1,500 cases a year in the United States, and 400 of these cases happened in a little mining town in Montana, you’d investigate the water supply there or other variables.  Buffett explains:

You know that it’s not random chance that 400 come from a small area.  You would not necessarily know the causal factors, but you would know where to search.  (page 6)

Buffett then draws the simple, logical conclusion:

I think you will find that a disproportionate number of successful coin-flippers in the investment world came from a very small intellectual village that could be called Graham-and-Doddsville.  A concentration of winners that simply cannot be explained by chance can be traced to this particular intellectual village.

Again, Buffett stresses that the only thing these successful investors had in common was adherence to the value investing philosophy.  Each investor applied the philosophy in his own way.  Some, like Walter Schloss, used a very diversified approach with over 100 stocks chosen purely on the basis of quantitative cheapness (low P/B).  Others, like Buffett or Munger, ran very concentrated portfolios and included stocks of companies with high ROE.  And looking at this group on the whole, there was very little overlap in terms of which stocks each value investor decided to put in his portfolio.

Buffett observes that all these successful value investors were focused only on one thing:  price vs. value.  Price is what you pay;  value is what you get.  There was no need to use any academic theories about covariance, beta, the EMH, etc.  Buffett comments that the combination of computing power and mathematical training is likely what led many academics to study the history of prices in great detail.  There have been many useful discoveries, but some things (like beta or the EMH) have been overdone.

Buffett goes through the nine different track records of the market-beating value investors.  Then he summarizes:

So these are nine records of ‘coin-flippers’ from Graham-and-Doddsville.  I haven’t selected them with hindsight from among thousands.  It’s not like I am reciting to you the names of a bunch of lottery winners—people I had never heard of before they won the lottery.  I selected these men years ago based upon their framework for investment decision-making… It’s very important to understand that this group has assumed far less risk than average;  note their record in years when the general market was weak….

Buffett concludes that, in his view, the market is far from being perfectly efficient:

I’m convinced that there is much inefficiency in the market.  These Graham-and-Doddsville investors have successfully exploited gaps between price and value.  When the price of a stock can be influenced by a ‘herd’ on Wall Street with prices set at the margin by the most emotional person, or the greediest person, or the most depressed person, it is hard to argue that the market always prices rationally.  In fact, market prices are frequently nonsensical.

Buffett also states that value investors view risk and reward in opposite terms to the way academics view risk and reward.  The academic view is that a higher potential reward always requires taking greater risk.  But (as discussed above in “Notes on Ben Graham”) value investors, having made the distinction between price and value, hold that the lower the risk, the higher the potential reward.  Buffett:

If you buy a dollar bill for 60 cents, it’s riskier than if you buy a dollar bill for 40 cents, but the expectation of reward is greater in the latter case.  The greater the potential for reward in the value portfolio, the less risk there is.

Buffett offers an example:

The Washington Post Company in 1973 was selling for $80 million in the market.  At the time, that day, you could have sold the assets to any one of ten buyers for not less than $400 million, probably appreciably more…

Now, if the stock had declined even further to a price that made the valuation $40 million instead of $80 million, its beta would have been greater.  And to people who think beta [or, more importantly, downside volatility] measures risk, the cheaper price would have made it look riskier.  This is truly Alice in Wonderland.  I have never been able to figure out why it’s riskier to buy $400 million worth of properties for $40 million than $80 million….


BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:


Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Ben Graham Was a Quant

(Image:  Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011).  In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors.  Having a clearly defined quantitative investment strategy that you stick with over the long term—both when the strategy is in favor and when it’s not—is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches.  Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors.  See:

An index fund tries to copy an index, which is itself typically based on companies of a certain size.  By contrast, quantitative value investing is based on metrics that indicate undervaluation.



Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years.  I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession.  I think we can do it successfully with a few techniques and simple principles.  The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches.  The thing that I have been emphasizing in my own work for the last few years has been the group approach.  To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company

I am just finishing a 50-year study—the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group.  I found the results were very good for 50 years.  They certainly did twice as well as the Dow Jones.  And so my enthusiasm has been transferred from the selective to the group approach.  What I want is an earnings ratio twice as good as the bond interest ratio typically for most years.  One can also apply a dividend criterion or an asset value criterion and get good results.  My research indicates the best results come from simple earnings criterions.

Imagine—there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work.  It seems too good to be true.  But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up. 
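Graham's earnings criterion from the interview—an earnings yield at least twice the high-grade bond yield—translates directly into a group screen.  A hedged sketch (the tickers, earnings yields, and 4.5% bond yield below are all made up; only the rule is Graham's):

```python
# Graham-style group screen: keep stocks whose earnings yield (E/P) is
# at least twice the high-grade bond yield, and buy the whole group.

AAA_BOND_YIELD = 0.045   # assumed high-grade corporate bond yield

stocks = {               # hypothetical tickers -> earnings yield (E/P)
    "AAA_CO": 0.110,
    "BBB_CO": 0.095,
    "CCC_CO": 0.060,
}

cheap = [ticker for ticker, ey in stocks.items()
         if ey >= 2 * AAA_BOND_YIELD]
# Graham's group approach buys every qualifying stock, with very little
# attention to the individual companies.
```

Graham also mentioned that a dividend or asset-value criterion could be added in the same mechanical fashion.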


Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.

Greiner says there are three groups of quants:

  • The first group is focused on EMH, and on creating ETF’s and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors.  They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not.  This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others.  Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct.  – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that.  – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well.  Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient.  The difference between these propositions is night and day. 

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.



Greiner refers to the July 12, 2010 issue of Barron’s.  Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December 2009.  87 of the 248 funds were gone completely, while the other funds had been downgraded.  Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less risk-free return) to calculate a slope and an intercept, which are called beta and alpha.  Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return.  Alpha, on the other hand, is the intercept, and this implies how much return the portfolio is obtaining over and above the market, separate from any other factors.  (page 16)
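The regression Greiner describes is straightforward to sketch.  In the toy example below, the return series are fabricated (built from a known beta of 1.2 and alpha of 0.5% per period) purely to show that the slope and intercept of the fit recover beta and alpha:

```python
# CAPM regression sketch: regress portfolio excess returns on market
# excess returns.  The slope is beta; the intercept is alpha.

import numpy as np

market_excess = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
# Fabricated portfolio series with beta = 1.2 and alpha = 0.5% per period.
portfolio_excess = 1.2 * market_excess + 0.005

beta, alpha = np.polyfit(market_excess, portfolio_excess, 1)
```

With real (noisy) return data the fit would of course not be exact, and the estimated alpha might be a risk factor masquerading as alpha, as Greiner warns.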

But risk involves not just individual company risk.  It also involves how one company’s stock is correlated with the stocks of other companies.  If you can properly estimate the correlations among various stocks, then, using Markowitz’s approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.
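As a minimal sketch of Markowitz’s idea, assume two hypothetical stocks with made-up expected returns, volatilities, and a correlation between them.  A simple grid search over weights finds the minimum-risk portfolio:

```python
import numpy as np

mu = np.array([0.08, 0.12])      # assumed expected annual returns
sigma = np.array([0.15, 0.25])   # assumed annual volatilities
rho = 0.3                        # assumed correlation between the two stocks

# Build the covariance matrix from the volatilities and correlation.
cov = np.outer(sigma, sigma) * np.array([[1.0, rho], [rho, 1.0]])

# Sweep long-only weights and keep the minimum-variance portfolio.
best_w, best_var = None, float("inf")
for w1 in np.linspace(0.0, 1.0, 101):
    w = np.array([w1, 1.0 - w1])
    var = w @ cov @ w
    if var < best_var:
        best_w, best_var = w, var

print(f"min-variance weights: {best_w}, volatility: {best_var ** 0.5:.3f}")
```

Because the correlation is well below 1, the minimum-variance mix (about 82 percent in the less volatile stock here) carries less risk than holding the less volatile stock alone, which is the diversification benefit Markowitz formalized.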

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks.  Graham was not concerned about any correlations among the cheap stocks.  As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha.  (Beta as a concept has some obvious problems.)  But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17).  Moreover, alpha is excess return relative to an index or benchmark.  We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors.  Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data).  (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return.  Greiner explains:

The stock market is not a repeatable event.  Every day is an out-of-sample environment.  The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets.  (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable.  It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is termed factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)
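The correlation requirement in the list can be illustrated with a toy check.  The two return series below are synthetic and built to share a common driver, standing in for, say, low-P/E and low-P/B portfolio returns:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly returns for two alpha sources that share a common driver.
common = rng.standard_normal(60)
low_pe = 0.010 + 0.02 * common + 0.01 * rng.standard_normal(60)
low_pb = 0.008 + 0.02 * common + 0.01 * rng.standard_normal(60)

corr = np.corrcoef(low_pe, low_pb)[0, 1]
print(f"correlation between the two alpha sources: {corr:.2f}")

# A high correlation means the two factors are largely the same bet,
# so a model using both must account for the overlap.
```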



Risk means more things can happen than will happen.  – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed.  The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed.  Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management.  Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk.  Clearly there have been countless situations (e.g., very overvalued stocks) when stock prices were not volatile but risk was high.  Similarly, there have been many situations (e.g., very undervalued stocks) when volatility was high but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear.  In these situations, a spike in volatility is a symptom of risk.  At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values.  So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear.  Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich.  Close the doors.  Be fearful when others are greedy.  Be greedy when others are fearful.  – Warren Buffett

So how do you figure out risk exposures?  It is often a difficult thing to do.  Greiner defines ELE events as extinction-level events, or extreme-extreme events.  If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.”  (page 33)

But financial and economic history, even considered broadly, is not a certain guide to the future.  Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.  

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing.  Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value—whether net asset value or earnings power.  This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience.  When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences.  This is what quants do.  They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen.  We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety.  Do we not wear seat belts because of the odd chance of the tractor-trailer collision?  Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan.  Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism.  The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes.  In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow.  The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks.  Eventually, the interconnectedness of the associations among sticks results in an avalanche.  Likewise, so behaves the market.  (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so.  Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation.  Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out.  Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful.  Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty.  Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event.  When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before.  This is not really very helpful.  The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards.  That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner:

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor.  If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe.  Do the same for all other factors.  The numerical B/P for a stock is then termed exposure.  Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta.  The variance and covariance of the beta time series act as proxies for the variance of the stocks.  These are the components of the covariance matrix.  On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, calculate expected returns, multiply the stock weight vector by the exposure matrix times the beta vector for a given date. The exposure matrix is N x M, where N is the number of stocks and M is the number of factors.  The covariance matrix is M x M, and the exposed risks, predicted through the model, are derived from it.
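The eight steps above can be sketched end-to-end with synthetic data.  Everything here (exposures, returns, dates) is randomly generated stand-in data, not real financial-statement inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_factors, n_months = 50, 3, 60

factor_returns = []  # one vector of factor betas per month (steps 4-6)
for t in range(n_months):
    X = rng.standard_normal((n_stocks, n_factors))  # z-scored exposures (step 3)
    r = 0.05 * rng.standard_normal(n_stocks)        # next-period stock returns
    # Cross-sectional regression of returns on exposures (steps 4-5).
    betas, *_ = np.linalg.lstsq(X, r, rcond=None)
    factor_returns.append(betas)

F = np.array(factor_returns)       # 60 x 3 time series of factor returns (step 6)
cov = np.cov(F, rowvar=False)      # factor covariance matrix (step 7)

# Predicted portfolio risk for equal weights (step 8): w' X cov X' w.
w = np.full(n_stocks, 1.0 / n_stocks)
X_now = rng.standard_normal((n_stocks, n_factors))  # current exposure matrix
port_var = w @ X_now @ cov @ X_now.T @ w
print(f"factor covariance shape: {cov.shape}, predicted variance: {port_var:.6f}")
```

The on-diagonal elements of cov are the variances of the factor returns, and the off-diagonal elements are the covariances between them, exactly as step 7 describes.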

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry.  Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in.  This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 in one industry and 0.31 in another.  Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries.  However, FactSet’s MAC model does include this operation.  (page 45)



Value investors like Ben Graham know that price variability is not risk.  Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power).  Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good.  Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky.  In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power.  Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value.  (pages 55-56)

Saying volatility represents risk is to mistake the often emotional opinions of Mr. Market with the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, if the variables are random (serially uncorrelated, independent, and identically distributed), then we have a normal distribution.  The problem in modeling stock returns is that the mean return varies with time and the errors are not random:

Of course there are many, many types of distributions.  For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete.  Some of these are asymmetric about the mean (first moment) and some are not.  Some have fat tails and some do not.  You can even have distributions with infinite second moments (infinite variance).  There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance.  Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause.  Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly.  (page 58)

Even though market prices have been known to have non-normal distributions and to maintain statistical dependence, most people modeling market prices have downplayed this information.  It’s just been much easier to assume that market returns follow a random walk resulting in random errors, which can be easily modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave.  Market returns tend to have fatter tails.  But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of more accurate distributions are not fully clear.

You can make an analogy with behavioral finance.  Thousands of experiments have established that many people behave less than fully rationally, especially when making decisions under uncertainty.  However, in many situations the most useful economic models are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH?  It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful.  In a world of Econs, I believe that the EMH would be true.  And it would not have been possible to do research in behavioral finance without the rational model as a starting point.  Without the rational framework, there are no anomalies from which we can detect misbehavior.  Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research.  We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.  (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed.  Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’  There are definitely anomalies:  sometimes the market overreacts, and sometimes it underreacts.  But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right.  The price is often wrong, and sometimes very wrong, says Thaler.  However, that doesn’t mean that you can beat the market.  It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution.  (page 60)
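The skewness and kurtosis Greiner refers to are the third and fourth standardized moments, and they are easy to compute directly.  The series below are simulated, with a Student-t distribution standing in for fat-tailed equity returns:

```python
import numpy as np

rng = np.random.default_rng(1)

def skew_kurt(x):
    # Sample skewness (3rd standardized moment) and excess kurtosis (4th minus 3).
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0

normal = rng.standard_normal(100_000)            # random-walk-style returns
fat_tailed = rng.standard_t(df=3, size=100_000)  # heavier tails than normal

print("normal:  skew %.2f, excess kurtosis %.2f" % skew_kurt(normal))
print("t(df=3): skew %.2f, excess kurtosis %.2f" % skew_kurt(fat_tailed))
```

The normal sample shows roughly zero for both moments, while the t-distributed sample shows large excess kurtosis: the “highly peaked,” fat-tailed shape that actual index returns resemble more closely.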

One problem with beta is that it has covariance in the numerator.  And if two variables are linearly independent, then their covariance is always zero.  But if beta is very low or zero, that does not tell you whether the portfolio is truly independent of the benchmark.  Greiner explains that you can determine linear dependence as follows:  When the benchmark has moved X in the past, has the portfolio consistently moved 0.92*X?  If yes, then the portfolio and the benchmark are linearly dependent.  Then we could express the return of the portfolio as a simple multiple of the benchmark, which seems to give beta some validity.  However, again, you could have linear dependence of 0.92*X, but the beta might be much lower or even zero, in which case beta is meaningless.
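A toy case makes this caveat concrete: a series that depends entirely on the benchmark, but nonlinearly, can still show a beta near zero:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)  # symmetric "benchmark" returns
y = x ** 2                        # fully determined by x, but nonlinearly

# Beta is covariance over benchmark variance; for symmetric x it comes out
# near zero here, even though y has no independence from x at all.
beta = np.cov(x, y)[0, 1] / x.var()
print(f"beta = {beta:.3f}")
```

So a near-zero beta tells you only that there is no linear relationship; it says nothing about whether the portfolio is truly independent of the benchmark.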

Another example would be a market sell-off in which most stocks become highly correlated.  Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund.  The distribution of returns is more like a Frechet distribution than a normal distribution in two ways:  it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74).  The Magellan Fund returns also have a fat tail on the right side of the distribution.  This is where the Frechet misses.  But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution.  Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:


Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.



September 3, 2017

Robert G. Hagstrom has written quite a few excellent books on business and investing.  His best book may be The Warren Buffett Way (Wiley, 2014), which I summarized here:

The Detective and the Investor (Texere, 2002) is another outstanding book by Hagstrom, which I wrote about here:

The first edition of The Warren Buffett Way was published in 1994.  A few years later, Hagstrom published The Nascar Way: The Business That Drives the Sport (Wiley, 1998).  NASCAR is the National Association of Stock Car Auto Racing.

Following the tenets of The Warren Buffett Way, Hagstrom explains that he first came across NASCAR from a business and investing point of view:

First, I look for companies that are simple and understandable and have a consistent operating history.  Second, for a company to succeed, it must have a management team that is honest and candid and forever works to rationally allocate the capital of the business.  Third, the most valuable companies tend to generate a high return on invested capital and consistently produce cash earnings for their shareholders.  (pages ix-x)

Specifically, Hagstrom had come across the International Speedway Corporation.  Running a racetrack is a simple and understandable business.  Hagstrom also learned that the company generated high returns on invested capital.  And the managers, Bill France, Jr. and Jim France, have done an excellent job managing the business, including capital allocation.

Although the book was written nearly 20 years ago, if you enjoy business and investing, or car racing, it’s a good read.

In the process of learning about International Speedway, Hagstrom had to learn about NASCAR.  Here’s the wiki entry on NASCAR:

NASCAR is second to the National Football League in terms of television viewers in the United States.  NASCAR’s races are broadcast in over 150 different countries.  As of 2004, the company holds 17 of the top 20 regularly attended single-day sporting events in the world.

Hagstrom says he aims, in the book, to capture some of the excitement of “this superb, uniquely American sport.”

This blog post briefly outlines the book based on its chapters:

  • Riding with Elmo
  • Rules of the Road
  • It Takes Money to Race
  • Prime Time
  • The Meanest Mile
  • True American Heroes
  • Forty-Two Teams on the Same Field at the Same Time



Hagstrom introduces the sport:

Stock car drivers do things in cars that would make the rest of us faint.  Try to imagine driving 100 miles an hour, then 120, then 160.  Imagine keeping up that pace for three and a half hours; that’s how long it will take to drive 500 miles… Now imagine forty-one other cars around you, all doing the same thing, just inches away from you, scraping against the side of your car and nudging your bumper as they try to pass you.  And you can never slack off.

…To win a stock car race means that you are willing to drive faster than anybody else on the track.  It means that you drive as fast as your nerves will let you go – and then faster.  (page 3)

Hagstrom got the chance to ride with Elmo Langley, NASCAR’s pace car official (in the late 90’s), before the Southern 500 at Darlington Raceway in South Carolina.  Harold Brasington, a retired racer, began building the track in 1948 and spent a year on it.  Upon purchasing the property, he agreed not to disturb the minnow pond at the west end.  This led Brasington to make that corner of the track tighter and more steeply banked.  The overall result is an egg-shaped track.  Crews have always found it difficult to set up the cars’ handling to be effective at both ends of the track.

At Darlington, notes Hagstrom, drivers must find the right balance of aggressiveness and patience in order to succeed.  If you’re too aggressive, you’ll crash.  If you’re too smooth, you’ll get passed.

Hagstrom also observes that specialized engine builders learned to take the standard 358-cubic-inch V8 motor and make it into a 700-horsepower engine.  Nearly everything else about the car is also carefully engineered for each particular racetrack.  (Today, in 2017, computers receive huge amounts of data from these racing machines.)

The original idea of “stock car” racing was that the cars look—on the outside at least—just like American cars that anybody could buy.  Historically, NASCAR fans have followed with great passion not only specific drivers, but specific car brands, such as Chevrolets, Fords, or Pontiacs.

Darlington Raceway has a special charm:

According to many regulars, there is no more beautiful place to entertain clients and guests than Darlington Raceway.  The hospitality village itself is outlined in white picket fences that surround beautifully appointed white and yellow striped tents.  Flower boxes hang at each entrance.  (page 9)

In addition to the hospitality tents, there are air-conditioned corporate suites high above the track.  A catered dinner is served on linen-covered tables.  As of the late 90’s, there were four corporate suites renting for $100,000 a year to PepsiCo, RJR Nabisco, Unocal, and Anheuser-Busch.

Hagstrom writes that the makeup of race fans has changed over the years.  As of the late 90’s, 30 percent of stock car fans had an annual income of over $50,000, and 38 percent were female.  Hagstrom says stock car racing at the beginning was for “the rowdiest and roughest,” but today’s stock car races are family events.

Hagstrom continues:

The infield, the open area inside the track itself, has become the last bastion of stock car racing’s most passionate fans.  They travel hundreds of miles in their recreational vehicles, campers, pickup trucks, and vans, and they are equipped to make the infield their home for three days.  They are determined not to miss one minute of racing:  the qualifying runs and the practices on Friday, the support race and then more practicing on Saturday, and the featured race on Sunday, with all its festivities.  (page 11)

In the 1950’s and 1960’s, writes Hagstrom, the infield was quite a bit like the Wild West.  The local sheriff even set up a temporary jail there.  In recent decades, however, track owners have substantially improved their infields in order to charge higher prices.

NASCAR has a number of rules:

NASCAR rules are designed to promote close, competitive racing, which the fans want, in a way that maintains parity and does not unduly favor the well-financed teams.  The paramount force behind all the rules, however, is the safety of both the drivers and the fans.  Everyone in NASCAR is aware of the potential for injury with so many machines running in close quarters at such high speeds, and so the rules and regulations are vigorously enforced.

A NASCAR-sanctioned motorsports event, like the Southern 500, is officiated by NASCAR and conducted in accordance with its rules.  These rules cover not only the race, but all periods leading up to and following it, including registration, inspections, time trials, qualifying races, practices, and postrace inspections.  (page 13)

Corporate sponsorship is the foundation of the sport, says Hagstrom.  There are many different types of businesses—including many Fortune 500 companies—that are NASCAR corporate sponsors.  The highest form of advertising in motorsports is to sponsor a team.  As with other sports, the greatest leverage NASCAR has had in selling itself to advertisers is based increasingly on its television audience.

As far as levels of competition, the highest level in NASCAR today (2017) is the Monster Energy NASCAR Cup Series.  (The highest level used to be the Winston Cup.)

Two neat things about NASCAR racing, historically:

  • pay is based on performance
  • drivers are humble and grateful

Hagstrom explains:

…It is the sound of humility and gratitude and enthusiasm.  It is the sound of athletes who tell you—and who mean it—that they are no bigger than the fans who come out and support them.  It is the sound of autographs being signed, of smiling pictures being snapped, and of kids collecting heroes.  It is what is best about American sports… (page 18)



Hagstrom recounts the history of the sport:

Stock car racing was born in the South, the boisterous legacy of daredevil moonshine drivers who tore up and down the back roads of Appalachia during the 1930s and 1940s.

For years, hardscrabble farmers in the mountains had been making their own whiskey, just as they made their own tools, clothes, and furniture.  But it wasn’t until Prohibition in 1919 that mountaineers discovered that the sippin’ whiskey they made for themselves was worth cash money to the folks in town.  For many mountain families, bootlegging was their only source of income in the winter months.

…By the 1940s, the government began sending federal revenue agents into the Appalachian Mountains to stop illegal whiskey manufacturing.  To avoid the revenuers, the mountaineers hid their stills and began to work only at night—hence the term “moonshiner.”  Drivers would begin their delivery runs after midnight and be safely home before daybreak…

In a stepped-up game of cat and mouse, the revenuers searched for stills by day and staked out the roads at night.  To stay ahead, moonshine drivers constantly tinkered with their cars, trying to eke out a few extra horsepower and to improve the suspension so the car would handle better.  It wasn’t easy barreling over hills and valleys in the middle of the night, dirt kicking up everywhere, and your car loaded down with twenty-five cases of white lightning… (pages 21-22)

Sometime in the mid-1930s “in a cow pasture in the town of Stockbridge, Georgia,” a few moonshiners started arguing about who had the fastest car and who was the better driver.  Someone made a quarter-mile dirt track in a farmer’s field.

After a few races, more and more people started to show up to watch.  The farmer fenced off the area and started charging admission.  The drivers’ pay also increased until it became more profitable to win a race than to run moonshine.  Hagstrom:

After driving over 100 miles an hour on the dirt roads of North Carolina in the middle of the night while being chased by revenuers, the moonshiners looked at these smooth, level, quarter-mile racetracks, crossed their arms, rocked back on their heels, and grinned.

The Flock brothers—Tim, Bob, and Fonty—drove for their uncle, Peachtree Williams, who had one of the biggest stills in Georgia.  Buddy Shuman also ran whiskey and drove stock cars.  But the most famous bootlegger ever to drive stock cars was Junior Johnson.  Junior ran whiskey for his daddy, Glenn Johnson, who had the biggest and most profitable moonshine operation in Wilkes County, North Carolina.  (page 23)

For instance, Junior had perfected the power slide, which allowed him to speed up into the turns rather than slow down.

But modern stock car racing owes its success to one man:  William Henry Getty France, or “Big Bill” France.  Big Bill France raced in the Maryland suburbs of Washington, D.C., and he worked as a mechanic in garages and service stations.

France and his wife, Annie B., went to Florida intending to live in Miami.  But after stopping at Daytona Beach, France decided to settle there.  He opened up his own gas station.  Before long, his garage was a favorite hangout of mechanics and race car drivers.

Daytona Beach had hard-packed sandy beaches 500 yards wide and 25 miles long, and it was already known as the Speed Capital of the World.  In 1936 and 1937, writes Hagstrom, the city fathers of Daytona Beach put together races, partly out of concern that racers were leaving for the Bonneville Salt Flats of Utah.  But these races were poorly managed.  So they asked Bill France to manage the race in 1938.

France was already well-liked by most mechanics and race car drivers in the area.  He was also a natural promoter.  And because he had been a racer, he knew what worked and what didn’t in putting on a race.

France convinced a local restaurateur, Charlie Reese, to pay for the race as long as France did all the work.  They would split the profits.  The race was a great success.

Soon thereafter, France heard about an oval dirt track for rent in Charlotte, North Carolina.  France decided to sponsor a 100-mile National Championship race there.  But local reporters hesitated to cover the race because there was no sanctioning body and no official rules.

France couldn’t convince AAA to sanction the race, so he organized his own sanctioning body, the National Championship Stock Car Circuit (NCSCC).  The NCSCC would sponsor monthly races at various tracks, with winners determined by a cumulative point system and rewarded from a winners’ fund.  France found someone to run his service station in Daytona Beach—and he got his wife Annie B to handle accounting—so that he could focus completely on setting up the system he envisioned.

1947 was the first full year for the National Championship Stock Car Circuit.  It was a great success.  The points fund, at $3,000, was divided among the top finishers.  The bootlegger Fonty Flock won first place.

The problem was that stock car racing at the time didn’t have a good reputation.  France knew it needed a central authority to govern all drivers, all car owners, and all track owners.  So France invited the most influential people from racing to Daytona Beach for a year-end meeting about the future of stock car racing.

France described his vision to his colleagues, including a national point system and winners’ fund.  Hagstrom adds:

…The rules, he declared, would have to be consistent, enforceable, and ironclad.  Cheating would not be allowed.  The regulations would be designed to ensure close competition, for they all knew that close side-by-side racing was what fans cheered for.  Finally, he argued, the organizing body should promote a racing division dedicated solely to standard street stock cars, the same cars that could be bought at automobile dealerships.  Fans would love these races, France argued, because they could identify with the cars.  (page 29)

The group voted to form a national organization of stock car racing.  France was elected president.  And they decided to incorporate the entity.  The National Association for Stock Car Auto Racing (NASCAR) was incorporated on February 15, 1948.

A technical committee set the rules for engine size, safety, and fair competition.  Only American-made cars were allowed.  NASCAR also decided to guarantee purses for the races it sanctioned.  And they established a national point system.

NASCAR today does a very detailed set of inspections.  And the rules are still designed to ensure parity and safety.  As a part of parity, costs are strictly controlled.



Hagstrom writes:

Sponsorship is a form of marketing in which companies attach their name, brand, or logo to an event for the purpose of achieving future profit.  It is not the same as advertising.  Both strategies seek the same end result—corporate profit—but go about it in different ways.  Advertising is a direct and overt message to consumers.  If successful, it stimulates a near-term purchase.  Sponsorship, on the other hand, generates a more subtle message that, if successful, creates a lasting bond between consumers and the company.  (page 49)

Corporate entertainment can be an effective marketing tool.

If the goal of sponsorship is to increase sales, that can be measured over specific time periods.  The same goes for other goals of sponsorship, including increasing worker productivity.

It’s more difficult to measure the impact of stock car racing sponsorship on corporate images over time.  But historically, consumers have been extremely positive towards nearly all companies that sponsor stock car racing.

Hagstrom says it is impossible to attend a NASCAR race without feeling a great deal of emotion.  The cars are so powerful and quick, and the competition is so close and intense, that you cannot avoid being impressed if you’ve never been to a race before.

In one survey (in the late 90’s), stock car racing fans were able to identify more than 200 different companies or brands connected with stock car racing.  Of all the companies mentioned, only 1 percent were incorrectly named by the fans, notes Hagstrom.  This is simply incredible.

Drivers know that their teams couldn’t race without corporate sponsors.  And fans know that ticket prices would be much higher without corporate sponsors.



Hagstrom writes:

The reason NASCAR events do well all season long is the same reason the other sports do so well during the playoffs:  the thrill of seeing the sport’s best athletes compete in a one-time event.  By the time baseball, basketball, and football get to the playoffs, the very best teams are facing each other.  Each game in a playoff series takes on an intensity that increases geometrically;  as the stakes rise, so does the excitement.  So too does the sense of urgency.  Fans know that playoffs and championship games will be played only once, and they had better not miss them.  (page 83)

Although a variety of camera angles and close-up views allow fans to follow NASCAR races on television better than ever before, there is still nothing like seeing a NASCAR race live.




Although Darlington Raceway is credited with being NASCAR’s first superspeedway, world-famous Daytona is the track most responsible for launching the sport of stock car racing into the modern era.  Ask any driver his reaction on seeing Daytona for the first time and you will hear words like “amazing,” “incredible,” and “intimidating.”…  

Without question, Daytona was built for speed.  It’s 2.5 miles long, with big sweeping turns banked at 31 degrees.  Fireball Roberts, a famous NASCAR driver of the era, put it this way:  “This is the track where you can step on the accelerator and let it roll.  You can flatfoot it all the way.”  (pages 107-108)

Hagstrom then describes Talladega (as of the late 90’s):

Talladega stretched the imagination.  At 2.66 miles long, it was the longest and soon the fastest speedway.  It was here that Bill Elliott drove the fastest lap in NASCAR history—212 mph.  Drivers, once they built experience, began racing here at speeds in excess of 200 mph.  Because Talladega is wide (one lane wider than Daytona), racing three abreast became the norm.  The intensity of competition ratcheted up several levels.

At last, racers had found a track that was built for speeds faster than most were comfortable driving.  NASCAR had finally answered its own question: Just how fast is fast enough?  (pages 108-109)

Track owners have the following sources of revenue:

  • General admission and luxury suites
  • Television and radio broadcast fees
  • Sponsorship fees and advertising
  • Concession, program, and merchandise sales
  • Hospitality tents and souvenir trailers

Expenses include:

  • Sanctioning fee
  • Prize money
  • Operating costs

Selling tickets has been the key for decades.  As of the late 90’s, grandstand seats, suites, and infield parking accounted for 70 percent of a track’s revenue, notes Hagstrom.  Other sources of revenue include concessions, souvenirs, signage, and broadcast rights.
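To make the revenue mix concrete, here is a minimal sketch.  The dollar amounts below are invented for illustration; only the roughly 70-percent-from-ticketing proportion comes from Hagstrom’s figures.

```python
# Illustrative track revenue mix.  Dollar amounts (in $ millions) are
# hypothetical; only the ~70%-from-ticketing share reflects Hagstrom's
# late-90's figure.
revenue = {
    "tickets_suites_parking": 35.0,   # grandstand seats, suites, infield parking
    "broadcast_fees": 6.0,            # television and radio
    "sponsorship_advertising": 4.0,
    "concessions_merchandise": 5.0,
}

total = sum(revenue.values())
ticket_share = revenue["tickets_suites_parking"] / total
print(f"ticketing share of revenue: {ticket_share:.0%}")  # → ticketing share of revenue: 70%
```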



From 1973 to 1975, Dale Earnhardt was living hand to mouth and trying to save money to race.  Earnhardt finally got a chance to race at the World 600 at the Charlotte Motor Speedway.  Still, it typically takes years before a driver can compete at NASCAR’s highest level.  In 1978, Earnhardt came in fourth place at Atlanta International Raceway.  He earned Rookie of the Year in 1979 and won the championship in 1980.  By the late 90’s, Earnhardt was arguably the greatest stock car racer of all time.

NASCAR race fans are probably the most passionate fans in the world, or at least have been historically.  Without fans buying tickets—and souvenirs and other products—stock car racing as it is would not exist.  But, unlike most other major sports historically, NASCAR fans can walk up to their favorite athletes and talk with them.

Hagstrom writes:

There is much about NASCAR racing that draws people to it.  For one thing, it is easy to identify with the activity.  Almost every adult in America knows how to drive a car, and most can remember the teenage thrill of driving fast.  Many fans own cars that, except for the paint job, look just like cars on the racetrack.  Unlike other sports, you don’t have to be a certain size, weight, or height to be a race driver.  So it’s not too much of a stretch for fans to imagine themselves behind the wheel of those powerful cars.

Something in the human psyche is attracted to danger, and that too is part of the appeal.  Today’s race cars are many times safer than today’s ordinary passenger vehicles;  nonetheless, there is always the sense that something spectacular could happen at any moment.  Finally, racing is inherently exciting in a way that many other sports are not.  The noise, the vibration, the speed all combine to affect observers in a powerful, almost visceral way.

All those factors, however, would not be enough to explain the loyalty of the NASCAR fans were it not for one other critical ingredient:  the intense emotional bond that exists between fans and their drivers.  That bond rests on a foundation of courtesy, humility, and respect that runs both ways.  The drivers’ attitude toward their fans is the unique factor that sets NASCAR apart and makes its drivers genuine heroes.  (pages 152-153)



To win at the highest level, teams need not only a great driver, but a fast car and an excellent crew.  The crew chief is a crucial position.

…In all matters relating to the technical aspects of the car, including building it in the shop and monitoring how it performs at the track, the decisions rest with the crew chief.  The crew chief hires the race shop personnel, including a shop foreman, engine builders, fabricators, machinists, engineers, mechanics, gear/transmission specialists, a parts manager, and a transport driver.  (page 163)

Note again that Hagstrom was writing in the late 90’s.  In 2017, computers are far more powerful and are, accordingly, more essential in car racing generally.

On race day, the race crew is essential.  As of the late 90’s, they could change all four tires and refuel the car in twenty seconds.

In qualifying runs, typically there are more teams than available slots.  Qualifying often depends on tenths of a second.  Fine-tuning the race car requires a high degree of skill and teamwork.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.
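The ranking step described above can be sketched generically.  The fund’s actual criteria are not disclosed here, so the two factors below—an EV/EBIT cheapness rank and a change-in-return-on-capital “improving fundamentals” rank—are illustrative assumptions, not the Boole Fund’s actual model.

```python
# Illustrative composite ranking of microcap stocks: cheapness plus
# improving fundamentals.  The specific factors (EV/EBIT, change in
# return on capital) are assumptions for illustration only.

def composite_rank(stocks):
    """Return stocks sorted from most to least attractive, combining a
    cheapness rank (lower EV/EBIT is better) with an improvement rank
    (larger increase in return on capital is better)."""
    # Rank by cheapness: position 0 = cheapest
    by_value = sorted(stocks, key=lambda s: s["ev_ebit"])
    value_rank = {s["ticker"]: i for i, s in enumerate(by_value)}

    # Rank by improving fundamentals: position 0 = most improved
    by_improve = sorted(stocks, key=lambda s: -s["roc_change"])
    improve_rank = {s["ticker"]: i for i, s in enumerate(by_improve)}

    # Equal-weight the two ranks into one composite score (lower = better)
    return sorted(stocks,
                  key=lambda s: value_rank[s["ticker"]] + improve_rank[s["ticker"]])

# Hypothetical data for three tickers
stocks = [
    {"ticker": "AAA", "ev_ebit": 4.0, "roc_change": 0.10},
    {"ticker": "BBB", "ev_ebit": 9.0, "roc_change": 0.02},
    {"ticker": "CCC", "ev_ebit": 5.0, "roc_change": 0.08},
]
print([s["ticker"] for s in composite_rank(stocks)])  # → ['AAA', 'CCC', 'BBB']
```

The design choice worth noting is that combining ordinal ranks, rather than raw values, keeps any one extreme data point from dominating the composite score.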

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
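One simple sizing scheme consistent with the numbers above—largest position roughly 15-20% at cost, average position 8-10%—is to weight each holding in proportion to its rank.  The linear weighting rule below is an assumption for illustration, not the fund’s actual formula.

```python
# Illustrative rank-proportional position sizing.  The linear rule is
# an assumption; the fund's actual sizing formula is not disclosed.

def rank_weights(n_positions):
    """Weight position i (0 = best rank) in proportion to (n - i),
    normalized so the weights sum to 100% of the portfolio."""
    raw = [n_positions - i for i in range(n_positions)]
    total = sum(raw)
    return [r / total for r in raw]

weights = rank_weights(12)  # a hypothetical 12-stock portfolio
print(f"largest: {weights[0]:.1%}, average: {sum(weights)/len(weights):.1%}")
# → largest: 15.4%, average: 8.3%
```

With 12 positions, the top-ranked stock gets 12/78 ≈ 15.4% and the average weight is 1/12 ≈ 8.3%—both inside the ranges stated above.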

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:


Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Observations on History

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 27, 2017

The Lessons of History—first published in 1968—is the result of a lifetime of work by the outstanding historians Will and Ariel Durant.  Top investor Ray Dalio, founder and leader of Bridgewater Associates, has recommended the book as a succinct set of observations on history.


History may not repeat itself, but it does rhyme.  – Mark Twain



There are three biological lessons of history:

  • Life is competition.
  • Life is selection.
  • Life must breed.

The Durants write that our acquisitiveness, greediness, and pugnacity are remnants of our evolutionary history as hunters and gatherers.  In those days, we had to eat as much as possible when we managed to get food from a successful hunt.  We also had to hoard food (and other goods) whenever possible.

As for selection, there is a virtually infinite variety of random differences among people.  And the environment itself can often be random and unpredictable.  People who win the Ovarian Lottery—to use Warren Buffett’s term—don’t just have talents; rather, they have talents well-suited to a specific environment.  Here’s Buffett:

As my friend Bill Gates says, if I’d been born in some different place or some different time I’d have been some animal’s lunch.  I’d have been running real fast, and the animal would have been chasing me and I’d say “I allocate capital” and the animal would say “well, those are the kind that taste the best”.  I’ve been in the right place at the right time, and I’m lucky, I think a fair amount of that luck should be shared with others.

In 2013, someone from a group of students asked Warren Buffett how his understanding of markets affected his political views.  Buffett replied:

I wouldn’t say knowledge of markets has.  My political views were formed by this process.  Just imagine that it is 24 hours before you are born.  A genie comes and says to you in the womb, “You look like an extraordinarily responsible, intelligent, potential human being.  Going to emerge in 24 hours and it is an enormous responsibility I am going to assign to you—determination of the political, economic and social system into which you are going to emerge.  You set the rules, any political system, democracy, parliamentary, anything you wish, can set the economic structure, communistic, capitalistic, set anything in motion and I guarantee you that when you emerge this world will exist for you, your children and grandchildren.  What’s the catch?  One catch—just before you emerge you have to go through a huge bucket with 7 billion slips, one for each human.  Dip your hand in and that is what you get—you could be born intelligent or not intelligent, born healthy or disabled, born black or white, born in the US or in Bangladesh, etc.  You have no idea which slip you will get.  Not knowing which slip you are going to get, how would you design the world?  Do you want men to push around females?  It’s a 50/50 chance you get female.  If you think about the political world, you want a system that gets what people want.  You want more and more output because you’ll have more wealth to share around.  The US is a great system, turns out $50,000 GDP per capita, 6 times the amount when I was born in just one lifetime.  But not knowing what slip you get, you want a system that once it produces output, you don’t want anyone to be left behind.  You want to incentivize the top performers, don’t want equality in results, but do want something that those who get the bad tickets still have a decent life.  You also don’t want fear in people’s minds—fear of lack of money in old age, fear of cost of health care.  I call this the “Ovarian Lottery”.  My sisters didn’t get the same ticket.  
Expectations for them were that they would marry well, or if they work, would work as a nurse, teacher, etc.  If you are designing the world knowing 50/50 male or female, you don’t want this type of world for women – you could get female.  Design your world this way; this should be your philosophy.  I look at Forbes 400, look at their figures and see how it’s gone up in the last 30 years.  Americans at the bottom are also improving, and that is great, but we don’t want that degree of inequality.  Only governments can correct that.  Right way to look at it is the standpoint of how you would view the world if you didn’t know who you would be.  If you’re not willing to gamble with your slip out of 100 random slips, you are lucky!  The top 1% of 7 billion people.  Everyone is wired differently.  You can’t say you do everything yourself.  We all have teachers, and people before us who led us to where we are.  We can’t let people fall too far behind.  You all definitely got good slips.


As for the third biological lesson of history, that life must breed, Will and Ariel Durant explain:

Nature has no use for organisms, variations, or groups that cannot reproduce abundantly.  She has a passion for quantity as prerequisite to the selection of quality; she likes large litters, and relishes the struggle that picks the surviving few; doubtless she looks on approvingly at the upstream race of a thousand sperms to fertilize one ovum.  She is more interested in the species than in the individual, and makes little difference between civilization and barbarism.  She does not care that a high birth rate has usually accompanied a culturally low civilization, and a low birth rate a civilization culturally high; and she (meaning Nature as the process of birth, variation, competition, selection, and survival) sees to it that a nation with a low birth rate shall be periodically chastened by some more virile and fertile group.  (page 21)



Will and Ariel Durant sum it up:

“Racial” antipathies have some roots in ethnic origin, but they are also generated, perhaps predominantly, by differences of acquired culture—of language, dress, habits, morals, or religion.  There is no cure for such antipathies except a broadened education.  A knowledge of history may teach us that civilization is a co-operative product, that nearly all peoples have contributed to it; it is our common heritage and debt; and the civilized soul will reveal itself in treating every man or woman, however lowly, as a representative of one of these creative and contributory groups.  (page 31)



Will and Ariel Durant:

Evolution in man during recorded time has been social rather than biological: it has proceeded not by heritable variations in the species, but mostly by economic, political, intellectual, and moral innovation transmitted to individuals and generations by imitation, custom, or education.  Custom and tradition within a group correspond to type and heredity in the species, and to instincts in the individual; they are ready adjustments to typical and frequently repeated situations.  New situations, however, do arise, requiring novel, unstereotyped responses; hence development, in the higher organisms, requires a capacity for experiment and innovation—the social correlates of variation and mutation.  Social evolution is an interplay of custom with origination.  (page 34)

Occasionally some new challenge or situation has required the new (or sometimes very old) ideas of an innovator—whether scientist, inventor, or leader (business, political, spiritual).



Will and Ariel Durant note that what are today considered vices may once have been virtues—i.e., advantages for survival.  They observe that the transition from hunting to agriculture called for new virtues:

We may reasonably assume that the new regime demanded new virtues, and changed some old virtues into vices.  Industriousness became more vital than bravery, regularity and thrift more profitable than violence, peace more victorious than war.  Children were economic assets; birth control was made immoral.  On the farm, the family was the unit of production under the discipline of the father and the seasons, and paternal authority had a firm economic base.  (page 38)

Gradually and then rapidly, write the Durants, the Industrial Revolution changed the economic form and moral superstructure of European and American life.  Many left the farm to work as individuals in factories, often operating machines.

The Durants point out that much written history is, as Voltaire said, “a collection of the crimes, follies, and misfortunes” of humankind.  However, this written history typically does not include many good and noble deeds that actually occurred:

We must remind ourselves again that history as usually written (peccavimus) is quite different from history as usually lived: the historian records the exceptional because it is interesting—because it is exceptional.  If all those individuals who had no Boswell had found their numerically proportionate place in the pages of historians we should have a duller but juster view of the past and of man.  Behind the red facade of war and politics, misfortune and poverty, adultery and divorce, murder and suicide, were millions of orderly homes, devoted marriages, men and women kindly and affectionate, troubled and happy with children.  Even in recorded history we find so many instances of goodness, even of nobility, that we can forgive, though not forget, the sins.  The gifts of charity have almost equaled the cruelties of battlefields and jails.  How many times, even in our sketchy narratives, have we seen men helping one another… (page 41)



Religion has helped with educating the young.  And religion has given meaning and dignity to even the lowliest existence, write the Durants.  Religion gives many people hope.  However, religion has stumbled at important times:

The majestic dream broke under the attacks of nationalism, skepticism, and human frailty.  The Church was manned with men, who often proved biased, venal, or extortionate.  France grew in wealth and power, and made the papacy her political tool.  Kings became strong enough to compel a pope to dissolve that Jesuit order which had so devotedly supported the popes.  The Church stooped to fraud, as with pious legends, bogus relics, and dubious miracles… More and more the hierarchy spent its energies in promoting orthodoxy rather than morality, and the Inquisition almost fatally disgraced the Church.  Even while preaching peace the Church fomented religious wars in sixteenth-century France and the Thirty Years’ War in seventeenth-century Germany.  It played only a modest part in the outstanding advance of modern morality—the abolition of slavery.  It allowed the philosophers to take the lead in the humanitarian movements that have alleviated the evils of our time.  (page 45)



Will and Ariel Durant open the chapter:

Unquestionably the economic interpretation illuminates much history.  The money of the Delian Confederacy built the Parthenon; the treasury of Cleopatra’s Egypt revitalized the exhausted Italy of Augustus, gave Virgil an annuity and Horace a farm.  The Crusades, like the wars of Rome with Persia, were attempts of the West to capture trade routes to the East; the discovery of America was a result of the failure of the Crusades.  The banking house of the Medici financed the Florentine Renaissance; the trade and industry of Nuremberg made Durer possible.  The French Revolution came not because Voltaire wrote brilliant satires and Rousseau sentimental romances, but because the middle classes had risen to economic leadership, needed legislative freedom for their enterprise and trade, and itched for social acceptance and political power.  (pages 52-53)

Bankers have often risen to the top of the economic pyramid, since they have been able to direct the flow of capital.

The Durants note the importance of the profit motive in moving the economy forward:

The experience of the past leaves little doubt that every economic system must sooner or later rely upon some form of profit motive to stir individuals and groups to productivity.  Substitutes like slavery, police supervision, or ideological enthusiasm prove too unproductive, too expensive, and too transient.  (page 54)

Wealth tends naturally to concentrate in the hands of the most able.  Periodically it must be redistributed.

…The government of the United States, in 1933-52 and 1960-65, followed Solon’s peaceful methods, and accomplished a moderate and pacifying redistribution; perhaps someone had studied history.  The upper classes in America cursed, complied, and resumed the concentration of wealth.

We conclude that the concentration of wealth is natural and inevitable, and is periodically alleviated by violent or peaceable partial redistribution.  (page 57)



Capitalism—especially in America—has unleashed amazing productivity and will continue to do so for a long time:

The struggle of socialism against capitalism is part of the historic rhythm in the concentration and dispersion of wealth.  The capitalist, of course, has fulfilled a creative function in history: he has gathered the savings of the people into productive capital by the promise of dividends or interest; he has financed the mechanization of industry and agriculture, and the rationalization of distribution; and the result has been such a flow of goods from producer to consumer as history has never seen before.  He has put the liberal gospel of liberty to his use by arguing that businessmen left relatively free from transportation tolls and legislative regulation can give the public a greater abundance of food, homes, comfort, and leisure than has ever come from industries managed by politicians, manned by governmental employees, and supposedly immune to the laws of supply and demand.  In free enterprise the spur of competition and the zeal and zest of ownership arouse the productiveness and inventiveness of men; nearly every economic ability sooner or later finds its niche and reward in the shuffle of talents and the natural selection of skills; and a basic democracy rules the process insofar as most of the articles to be produced, and the services to be rendered, are determined by public demand rather than by governmental decree.  Meanwhile competition compels the capitalist to exhaustive labor, and his products to ever-rising excellence.  (pages 58-59)

Throughout most of history, socialist structures or centralized control by government have guided economies.  The Durants offer many examples, including that of Egypt:

In Egypt under the Ptolemies (323 B.C. – 30 B.C.) the state owned the soil and managed agriculture: the peasant was told what land to till, what crops to grow; his harvest was measured and registered by government scribes, was threshed on royal threshing floors, and was conveyed by a living chain of fellaheen into the granaries of the king.  The government owned the mines and appropriated the ore.  It nationalized the production and sale of oil, salt, papyrus, and textiles.  All commerce was controlled and regulated by the state; most retail trade was in the hands of state agents selling state-produced goods.  Banking was a government monopoly, but its operation might be delegated to private firms.  Taxes were laid upon every person, industry, process, product, sale, and legal document.  To keep track of taxable transactions and income, the government maintained a swarm of scribes and a complex system of personal and property registration.  The revenue of this system made the Ptolemaic the richest state of the time.  Great engineering enterprises were completed, agriculture was improved, and a large proportion of the profits went to develop and adorn the country and to finance its cultural life.  About 290 B.C. the famous Museum and Library of Alexandria were founded.  Science and literature flourished; at uncertain dates in this Ptolemaic era some scholars made the “Septuagint” translation of the Pentateuch into Greek.  (pages 59-60)

The Durants then tell the story of Rome under Diocletian:

…Faced with increasing poverty and restlessness among the masses, and with imminent danger of barbarian invasion, he issued in A.D. 301 an Edictum de pretiis, which denounced monopolists for keeping goods from the market to raise prices, and set maximum prices and wages for all important articles and services.  Extensive public works were undertaken to put the unemployed to work, and food was distributed gratis, or at reduced prices, to the poor.  The government—which already owned most mines, quarries, and salt deposits—brought nearly all major industries and guilds under detailed control.  “In every large town,” we are told, “the state became a powerful employer, … standing head and shoulders above the private industrialists, who were in any case crushed by taxation.”  When businessmen predicted ruin, Diocletian explained that the barbarians were at the gate, and that individual liberty had to be shelved until collective liberty could be made secure.  The socialism of Diocletian was a war economy, made possible by fear of foreign attack.  Other factors equal, internal liberty varies inversely as external danger.

The task of controlling men in economic detail proved too much for Diocletian’s expanding, expensive, and corrupt bureaucracy.  To support this officialdom—the army, the court, public works, and the dole—taxation rose to such heights that men lost incentive to work or earn, and an erosive contest began between lawyers finding devices to evade taxes and lawyers formulating laws to prevent evasion.  Thousands of Romans, to escape the taxgatherer, fled over the frontiers to seek refuge among the barbarians.  Seeking to check this elusive mobility, and to facilitate regulation and taxation, the government issued decrees binding the peasant to his field and the worker to his shop until all his debts and taxes had been paid.  In this and other ways medieval serfdom began.  (pages 60-61)

The Durants then recount several attempts at socialism in China, including under the philosopher-king Wang Mang:

Wang Mang (r. A.D. 9-23) was an accomplished scholar, a patron of literature, a millionaire who scattered his riches among his friends and the poor.  Having seized the throne, he surrounded himself with men trained in letters, science, and philosophy.  He nationalized the land, divided it into equal tracts among the peasants, and put an end to slavery.  Like Wu Ti, he tried to control prices by the accumulation or release of stockpiles.  He made loans at low interest to private enterprise.  The groups whose profits had been clipped by his legislation united to plot his fall; they were helped by drought and flood and foreign invasion.  The rich Liu family put itself at the head of a general rebellion, slew Wang Mang, and repealed his legislation.  Everything was as before.  (page 62)

Later, the Durants tell of the longest-lasting socialist government: the Incas in what is now Peru.  Everyone was an employee of the state.  It seems all were happy, given the promise of security and food.

There was also a Portuguese colony in which 150 Jesuits organized 200,000 Indians in a socialist society (c. 1620 – 1750).  Every able-bodied person was required to work eight hours a day.  The Jesuits served as teachers, physicians, and judges.  The penal system did not include capital punishment.  The Jesuits also provided for recreation, including choral performances.  All were peaceful and happy, write the Durants.  And they defended themselves well when attacked.  The socialist experiment ended when the Spanish in America sought to occupy the Portuguese colony immediately, because it was rumored to contain gold.  The Portuguese government under Pombal—then at odds with the Jesuits—ordered the priests and the natives to leave the settlements, say the Durants.

The Durants conclude the chapter:

… [Marx] interpreted the Hegelian dialectic as implying that the struggle between capitalism and socialism would end in the complete victory of socialism; but if the Hegelian formula of thesis, antithesis, and synthesis is applied to the Industrial Revolution as thesis, and to capitalism versus socialism as antithesis, the third condition would be a synthesis of capitalism and socialism; and to this reconciliation the Western world visibly moves.  (page 66)

Note that the Durants were writing in 1968.



Will and Ariel Durant:

Alexander Pope thought that only a fool would dispute over forms of government.  History has a good word to say for all of them, and for government in general.  Since men love freedom, and the freedom of individuals in society requires some regulation of conduct, the first condition of freedom is its own limitation; make it absolute and it dies in chaos.  So the prime task of government is to establish order; organized central force is the sole alternative to incalculable and disruptive forces in private hands.  (page 68)

It’s difficult to say when people were happiest.  Since I believe strongly that the most impactful technological breakthroughs ever—including but not limited to AI and genetics—are going to occur in the next 20-80 years, I would argue that we as humans are a long way away from the happiness we can achieve in the future.  (I also think Steven Pinker is right—in The Better Angels of Our Nature—that people are becoming less violent, slowly but surely.)

But if you had to pick a historical period, I would defer to the great historians to make this selection.  The Durants:

…”If,” said Gibbon, “a man were called upon to fix the period during which the condition of the human race was most happy and prosperous, he would without hesitation name that which elapsed from the accession of Nerva to the death of Marcus Aurelius.  Their united reigns are possibly the only period of history in which the happiness of a great people was the sole object of government.”  In that brilliant age, when Rome’s subjects complimented themselves on being under her rule, monarchy was adoptive: the emperor transmitted his authority not to his offspring but to the ablest man he could find; he adopted this man as his son, trained him in the functions of government, and gradually surrendered to him the reins of power.  The system worked well, partly because neither Trajan nor Hadrian had a son, and the sons of Antoninus Pius died in childhood.  Marcus Aurelius had a son, Commodus, who succeeded him because the philosopher failed to name another heir; soon chaos was king.  (page 69)

The Durants then write that most monarchs overall do not have a great record.

Hence most governments have been oligarchies—ruled by a minority, chosen either by birth, as in aristocracies, or by a religious organization, as in theocracies, or by wealth, as in democracies.  It is unnatural (as even Rousseau saw) for a majority to rule, for a majority can seldom be organized for united and specific action, and a minority can.  If the majority of abilities is contained in a minority of men, minority government is as inevitable as the concentration of wealth; the majority can do no more than periodically throw out one minority and set up another.  The aristocrat holds that political selection by birth is the sanest alternative to selection by money or theology or violence.  Aristocracy withdraws a few men from the exhausting and coarsening strife of economic competition, and trains them from birth, through example, surroundings, and minor office, for the tasks of government; these tasks require a special preparation that no ordinary family or background can provide.  Aristocracy is not only a nursery of statesmanship, it is also a repository and vehicle of culture, manners, standards, and tastes, and serves thereby as a stabilizing barrier to social fads, artistic crazes, or neurotically rapid changes in the moral code… (page 70)

When aristocracies became too selfish and myopic, slowing progress, the new rich combined with the poor to overthrow them, say the Durants.

The Durants point out that most revolutions probably would have occurred without violence through gradual economic development.  They mention the rise of America as an example.  They also note that the English aristocracy was gradually replaced by the money-controlling business class in England.  The Durants then generalize:

The only real revolution is in the enlightenment of the mind and the improvement of character, the only real emancipation is individual, and the only real revolutionists are philosophers and saints.  (page 72)

A bit later, the Durants discuss the battles between the poor and the rich in Athenian democracy around the time of Plato’s death (347 B.C.).

…The poor schemed to despoil the rich by legislation, taxation, and revolution; the rich organized themselves for protection against the poor.  The members of some oligarchic organizations, says Aristotle, took a solemn oath: “I will be an adversary of the people” (i.e., the commonalty), “and in the Council I will do it all the evil that I can.”  “The rich have become so unsocial,” wrote Isocrates about 366 B.C., “that those who own property had rather throw their possessions into the sea than lend aid to the needy, while those who are in poorer circumstances would less gladly find a treasure than seize the possessions of the rich.”  (pages 74-75)

Much of this class warfare became violent.  And Greece was divided when Philip of Macedon attacked in 338 B.C.

The Durants continue:

Plato’s reduction of political evolution to a sequence of monarchy, aristocracy, democracy, and dictatorship found another illustration in the history of Rome.  During the third and second centuries before Christ a Roman oligarchy organized a foreign policy and a disciplined army, and conquered and exploited the Mediterranean world.  The wealth so won was absorbed by the patricians, and the commerce so developed raised to luxurious opulence the upper middle class.  Conquered Greeks, Orientals, and Africans were brought to Italy to serve as slaves on the latifundia; the native farmers, displaced from the soil, joined the restless, breeding proletariat in the cities, to enjoy the monthly dole of grain that Caius Gracchus had secured for the poor in 123 B.C.  Generals and proconsuls returned from the provinces loaded with spoils for themselves and the ruling class; millionaires multiplied; mobile money replaced land as the source or instrument of political power; rival factions competed in the wholesale purchase of candidates and votes; in 53 B.C. one group of voters received ten million sesterces for its support.  When money failed, murder was available: citizens who had voted the wrong way were in some instances beaten close to death and their houses were set on fire.  Antiquity had never known so rich, so powerful, and so corrupt a government.  The aristocrats engaged Pompey to maintain their ascendancy; the commoners cast their lot with Caesar; ordeal of battle replaced the auctioning of victory; Caesar won, and established a popular dictatorship.  Aristocrats killed him, but ended by accepting the dictatorship of his grandnephew and stepson Augustus (27 B.C.).  Democracy ended, monarchy was restored; the Platonic wheel had come full turn.  (pages 75-76)

The Durants describe American democracy as the most universal ever seen so far.  But the advance of technology—to the extent that it makes the economy more complex—tends to concentrate power even more:

Every advance in the complexity of the economy puts an added premium upon superior ability, and intensifies the concentration of wealth, responsibility, and political power.  (page 77)

Will and Ariel Durant conclude that democracy has done less harm and more good than any other form of government:

…It gave to human existence a zest and camaraderie that outweighed its pitfalls and defects.  It gave to thought and science and enterprise the freedom essential to their operation and growth.  It broke down the walls of privilege and class, and in each generation it raised up ability from every rank and place.  Under its stimulus Athens and Rome became the most creative cities in history, and America in two centuries has provided abundance for an unprecedentedly large proportion of its population.  Democracy has now dedicated itself resolutely to the spread and lengthening of education, and to the maintenance of public health.  If equality of educational opportunity can be established, democracy will be real and justified.  For this is the vital truth beneath its catchwords: that though men cannot be equal, their access to education and opportunity can be made more nearly equal.  The rights of man are not rights to office and power, but the rights of entry into every avenue that may nourish and test a man’s fitness for office and power.  A right is not a gift of God or nature but a privilege which it is good that the individual should have.  (pages 78-79)



As mentioned earlier, I happen to agree with Steven Pinker’s thesis in The Better Angels of Our Nature:  we humans are slowly but surely becoming less violent as economic and technological progress continues.  But it could still take a very long time before wars stop entirely (if ever).

Will and Ariel Durant were writing in 1968, so they didn’t know that the subsequent 50 years would be (arguably) less violent overall.  In any case, they offer interesting insights into war:

The causes of war are the same as the causes of competition among individuals: acquisitiveness, pugnacity, and pride; the desire for food, land, materials, fuels, mastery.  The state has our instincts without our restraints.  The individual submits to restraints laid upon him by morals and laws, and agrees to replace combat with conference, because the state guarantees him basic protection in his life, property, and legal rights.  The state itself acknowledges no substantial restraints, either because it is strong enough to defy any interference with its will or because there is no superstate to offer it basic protection, and no international law or moral code wielding effective force.  (page 81)

The Durants write that, after freeing themselves from papal control, many modern European states—if they foresaw a war—would encourage their people to hate the people of the opposing country.  Today, we know from psychology that when people develop extreme hatreds, they nearly always dehumanize and devalue the human beings they hate and minimize their virtues.  Such extreme hatreds, if unchecked, often lead to tragic consequences.  (The Durants note that wars between European states in the sixteenth century still permitted each side to respect the other’s civilization and achievements.)

Again bearing in mind when the Durants were writing (1968), the historical precedent seemed to indicate that the United States should attack emerging communist powers before they became powerful enough to overcome the United States.  The Durants:

…There is something greater than history.  Somewhere, sometime, in the name of humanity, we must challenge a thousand evil precedents, and dare to apply the Golden Rule to nations, as the Buddhist King Ashoka did (262 B.C.), or at least do what Augustus did when he bade Tiberius desist from further invasion of Germany (A.D. 9)… “Magnanimity in politics,” said Edmund Burke, “is not seldom the truest wisdom, and a great empire and little minds go ill together.”  (pages 84-85)

Perhaps the humanist will agree with Pinker (as I do) that eventually, however slowly, we will move towards the cessation of war (at least between humans).  If this happens, it may be due largely to unprecedented progress in technology (including but not limited to AI and genetics):  we will gain control of our own evolution and wealth per capita will advance to unimaginable levels.

At the same time, we shouldn’t assume that aliens are necessarily peace-loving.  Perhaps humanity will have to unite in self-defense, say the Durants.



Will and Ariel Durant give again their definition of civilization:

We have defined civilization as “social order promoting cultural creation.”  It is political order secured through custom, morals, and law, and economic order secured through a continuity of production and exchange; it is cultural creation through freedom and facilities for the origination, expression, testing, and fruition of ideas, letters, manners, and arts.  It is an intricate and precarious web of human relationships, laboriously built and readily destroyed.  (page 87)

The Durants later add:

History repeats itself in the large because human nature changes with geological leisureliness, and man is equipped to respond in stereotyped ways to frequently occurring situations and stimuli like hunger, danger, and sex.  But in a developed and complex civilization individuals are more differentiated and unique than in primitive society, and many situations contain novel circumstances requiring modifications of instinctive response; custom recedes, reasoning spreads; the results are less predictable.  There is no certainty that the future will repeat the past.  Every year is an adventure.  (page 88)

Growth happens when people meet challenges.

If we ask what makes a creative individual, we are thrown back from history to psychology and biology—to the influence of the environment and the gamble and secret of the chromosomes.  (page 91)

Decay of the civilization or group happens when the political or intellectual leaders fail to meet the challenges of change.

The challenges may come from a dozen sources… A change in the instruments or routes of trade—as by the conquest of the ocean or the air—may leave old centers of civilization becalmed and decadent, like Pisa or Venice after 1492.  Taxes may mount to the point of discouraging capital investment and productive stimulus.  Foreign markets and materials may be lost to more enterprising competition; excess of imports over exports may drain [wealth and reserves].  The concentration of wealth may disrupt the nation in class or race war.  The concentration of population and poverty in great cities may compel a government to choose between enfeebling the economy with a dole and running the risk of riot and revolution.  (page 92)

All great individuals so far have died.  (Future technology may allow us to fix that, perhaps this century.)  But great civilizations don’t really die, say the Durants:

…Greek civilization is not really dead; only its frame is gone and its habitat has changed and spread; it survives in the memory of the race, and in such abundance that no one life, however full and long, could absorb it all.  Homer has more readers now than in his own day and land.  The Greek poets and philosophers are in every library and college; at this moment Plato is being studied by a hundred thousand discoverers of the “dear delight” of philosophy overspreading life with understanding thought.  This selective survival of creative minds is the most real and beneficent of immortalities.

Nations die.  Old regions grow arid, or suffer other change.  Resilient man picks up his tools and his arts, and moves on, taking his memories with him.  If education has deepened and broadened those memories, civilization migrates with him, and builds somewhere another home.  In the new land he need not begin entirely anew, nor make his way without friendly aid; communication and transport bind him, as in a nourishing placenta, with his mother country.  Rome imported Greek civilization and transmitted it to Western Europe; America profited from European civilization and prepares to pass it on, with a technique of transmission never equaled before.

Civilizations are the generations of the racial soul.  As life overrides death with reproduction, so an aging culture hands its patrimony down to its heirs across the years and the seas.  Even as these lines are being written, commerce and print, wires and waves and invisible Mercuries of the air are binding nations and civilizations together, preserving for all what each has given to the heritage of mankind.  (pages 93-94)



If progress is increasing our control of the environment, then obviously progress continues to be made, primarily because scientists, inventors, entrepreneurs, and other leaders continue to push science, technology, and education forward.  The Durants also point out that people are living much longer than ever before.  (Looking forward today from 2017, the human lifespan may double or triple at a minimum; and we may eventually develop the capacity to live virtually forever.)

Will and Ariel Durant then sum up all they have learned:

History is, above all else, the creation and recording of that heritage; progress is its increasing abundance, preservation, transmission, and use.  To those of us who study history not merely as a warning reminder of man’s follies and crimes, but also as an encouraging remembrance of generative souls, the past ceases to be a depressing chamber of horrors; it becomes a celestial city, a spacious country of the mind, wherein a thousand saints, statesmen, inventors, scientists, poets, artists, musicians, lovers, and philosophers still live and speak, teach and carve and sing.  The historian will not mourn because he can see no meaning in human existence except that which man puts into it; let it be our pride that we ourselves may put meaning into our lives, and sometimes a significance that transcends death.  If a man is fortunate he will, before he dies, gather up as much as he can of his civilized heritage and transmit it to his children.  And to his final breath he will be grateful for this inexhaustible legacy, knowing that it is our nourishing mother and our lasting life.  (page 102)




An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly when you focus on cheap micro caps.  Performance can be further boosted by isolating cheap micro caps that show improving fundamentals.  We rank micro caps based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
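The rank-based sizing described above can be sketched in a few lines of code.  This is a hypothetical illustration only: the linear weighting scheme, the weight bounds, and the function name are my assumptions, not the Boole Fund's actual methodology.

```python
# Hypothetical sketch of rank-based position sizing (not the fund's actual model).
# Better-ranked stocks (rank 1 = best) get larger weights, scaled linearly
# between an assumed maximum and minimum, then normalized to sum to 1.

def position_weights(ranks, max_weight=0.20, min_weight=0.05):
    """ranks: dict mapping ticker -> rank (1 = cheapest with improving fundamentals)."""
    n = len(ranks)
    raw = {}
    for ticker, rank in ranks.items():
        # rank 1 gets max_weight, rank n gets min_weight, linear in between
        frac = (rank - 1) / (n - 1) if n > 1 else 0.0
        raw[ticker] = max_weight - frac * (max_weight - min_weight)
    total = sum(raw.values())
    return {ticker: w / total for ticker, w in raw.items()}
```

For example, `position_weights({"A": 1, "B": 2, "C": 3, "D": 4})` returns normalized weights of 0.40, 0.30, 0.20, and 0.10: the best-ranked stock gets the largest slice, and the weights always sum to one.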

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:


Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Future of the Mind

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 20, 2017

This week’s blog post covers another book by the theoretical physicist Michio Kaku—The Future of the Mind (First Anchor Books, 2015).

Most of the wealth we humans have created is a result of technological progress (in the context of some form of capitalism plus the rule of law).  Most future wealth will result directly from breakthroughs in physics, artificial intelligence, genetics, and other sciences.  This is why AI is fascinating in general (not just for investing).  AI—in combination with other technologies—may eventually turn out to be the most transformative technology of all time.



Physicists have been quite successful historically because of their ability to gather data, to measure ever more precisely, and to construct testable, falsifiable mathematical models to predict the future based on the past.  Kaku explains:

When a physicist first tries to understand something, first he collects data and then he proposes a “model,” a simplified version of the object he is studying that captures its essential features.  In physics, the model is described by a series of parameters (e.g., temperature, energy, time).  Then the physicist uses the model to predict its future evolution by simulating its motions.  In fact, some of the world’s largest supercomputers are used to simulate the evolution of models, which can describe protons, nuclear explosions, weather patterns, the big bang, and the center of black holes.  Then you create a better model, using more sophisticated parameters, and simulate it in time as well.  (page 42)

Kaku then writes that he’s taken bits and pieces from fields such as neurology and biology in order to come up with a definition of consciousness:

Consciousness is a process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., find mates, food, shelter).

Kaku emphasizes that humans use the past to predict the future, whereas most animals are focused only on the present or the immediate future.

Kaku writes that one can rate different levels of consciousness based on the definition.  The lowest level of consciousness is Level 0, where an organism has limited mobility and creates a model using feedback loops in only a few parameters (e.g., temperature).  Kaku gives the thermostat as an example.  If the temperature gets too hot or too cold, the thermostat registers that fact and then adjusts the temperature accordingly using an air conditioner or heater.  Kaku says each feedback loop is “one unit of consciousness,” so the thermostat—with only one feedback loop—would have consciousness of Level 0:1.
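Kaku's thermostat example can be made concrete with a toy feedback loop.  This sketch is purely illustrative (the setpoint and step size are arbitrary assumptions).  It senses the temperature, compares it with a target, and acts to reduce the error: one loop, one "unit of consciousness."

```python
# Toy model of Kaku's Level 0:1 thermostat: a single feedback loop.
# The setpoint and step size are arbitrary illustrative values.

def thermostat_step(temp, setpoint=20.0, step=1.0):
    """One pass of the loop: sense the temperature, act to move it toward the setpoint."""
    if temp > setpoint:
        return temp - step   # too hot: run the air conditioner
    if temp < setpoint:
        return temp + step   # too cold: run the heater
    return temp              # at the setpoint: do nothing

temp = 25.0
for _ in range(10):
    temp = thermostat_step(temp)  # repeated feedback drives temp to the setpoint
```

Running the loop repeatedly is the whole "mind" of the system: there is nothing else to it, which is why Kaku places it at the very bottom of his scale.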

Organisms that are mobile and have a central nervous system have Level I consciousness.  There’s a new set of parameters—relative to Level 0—based on changing locations.  Reptiles are an example of Level I consciousness.  The reptilian brain may have a hundred feedback loops based on its senses, and the totality of those feedback loops gives the reptile a “mental picture” of where it is in relation to various objects (including prey), notes Kaku.

Animals exemplify Level II consciousness.  The number of feedback loops jumps exponentially, says Kaku.  Many animals have complex social structures.  Kaku explains that the limbic system includes the hippocampus (for memories), the amygdala (for emotions), and the thalamus (for sensory information).

You could estimate an animal’s specific level of Level II consciousness by counting the total number of distinct emotions and social behaviors it displays.  So, writes Kaku, if there are ten wolves in the wolf pack, and each wolf interacts with all the others using fifteen distinct emotions and gestures, then a first approximation would be that wolves have Level II:150 consciousness.  (Of course, there are caveats, since evolution is never clean and precise, says Kaku.)
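The wolf-pack arithmetic amounts to a simple count.  A sketch, taking Kaku's back-of-the-envelope formula at face value (it is only a first approximation, as noted above):

```python
# First-approximation tally of Level II "units of consciousness," following
# the wolf-pack example in the text: pack members x distinct signals.
# This is the book's rough illustration, not a scientific measure.

def level_two_units(pack_size, distinct_signals):
    return pack_size * distinct_signals

units = level_two_units(10, 15)  # 10 wolves, 15 emotions/gestures -> 150
print(f"Level II:{units}")       # prints "Level II:150"
```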



Kaku observes that there is a continuum of consciousness from the most basic organisms up to humans.  Kaku quotes Charles Darwin:

The difference between man and the higher animals, great as it is, is certainly one of degree and not of kind.

Kaku defines human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.  This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

Kaku explains that we as humans have so many feedback loops that we need a “CEO”—an expanded prefrontal cortex that can analyze all the data logically and make decisions.  More precisely, Kaku writes that neurologist Michael Gazzaniga has identified area 10, in the lateral prefrontal cortex, which is twice as big in humans as in apes.  Area 10 is where memory, planning, abstract thinking, rule learning, and the selection of relevant information take place.  Kaku says he will refer to this region, roughly speaking, as the dorsolateral prefrontal cortex.

Most animals, by contrast, do not think and plan, but rely on instinct.  For instance, notes Kaku, animals do not plan to hibernate, but react instinctively when the temperature drops.  Predators plan, but only for the immediate future.  Primates plan a few hours ahead.

Humans, too, rely on instinct and emotion.  But humans also analyze and evaluate information, and run mental simulations of the future—even hundreds or thousands of years into the future.  This, writes Kaku, is how we as humans try to make the best decision in pursuit of a goal.  Of course, the ability to simulate various future scenarios gives humans a great evolutionary advantage for things like evading predators and finding food and mates.

As humans, we have so many feedback loops, says Kaku, that it would be a chaotic sensory overload if we didn’t have the “CEO” in the dorsolateral prefrontal cortex.  We think in terms of chains of causality in order to predict future scenarios.  Kaku explains that the essence of humor is simulating the future but then having an unexpected punch line.

Children play games largely in order to simulate specific adult situations.  When adults play various games like chess, bridge, or poker, they mentally simulate various scenarios.

Kaku explains the mystery of self-awareness:

Self-awareness is creating a model of the world and simulating the future in which you appear.

As humans, we constantly imagine ourselves in various future scenarios.  In a sense, we are continuously running “thought experiments” about our lives in the future.

Kaku writes that the medial prefrontal cortex appears to be responsible for creating a coherent sense of self out of the various sensations and thoughts bombarding our brains.  Furthermore, the left brain fits everything together in a coherent story even when the data don’t make sense.  Dr. Michael Gazzaniga was able to show this by running experiments on split-brain patients.

Kaku speculates that humans can reach better conclusions if the brain receives a great deal of competing data.  With enough data and with practice and experience, the brain can often reach correct conclusions.

At the beginning of the next section—Mind Over Matter—Kaku quotes Harvard psychologist Steven Pinker:

The brain, like it or not, is a machine.  Scientists have come to that conclusion, not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain.



DARPA is the Pentagon’s Defense Advanced Research Projects Agency.  Kaku writes that DARPA has been central to some of the most important technological breakthroughs of the twentieth century.

President Dwight Eisenhower set up DARPA originally as a way to compete with the Russians after they launched Sputnik into orbit in 1957.  Over the years, some of DARPA’s projects became so large that they were spun off as separate entities, including NASA.

DARPA’s “only charter is radical innovation.”  DARPA scientists have always pushed the limits of what is physically possible.  One of DARPA’s early projects was Arpanet, a telecommunications network to connect scientists during and after World War III.  After the breakup of the Soviet bloc, the National Science Foundation decided to declassify Arpanet and give away the codes and blueprints for free.  This would eventually become the internet.

DARPA helped create Project 57, which was a top-secret project for guiding ballistic missiles to specific targets.  This technology later became the foundation for the Global Positioning System (GPS).

DARPA has also been a key player in other technologies, including cell phones, night-vision goggles, telecommunications advances, and weather satellites, says Kaku.

Kaku writes that, with a budget over $3 billion, DARPA has recently focused on the brain-machine interface.  Kaku quotes former DARPA official Michael Goldblatt:

Imagine if soldiers could communicate by thought alone… Imagine the threat of biological attack being inconsequential.  And contemplate, for a moment, a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-through.  As impossible as these visions sound or as difficult as you might think the task would be, these visions are the everyday work of the Defense Sciences Office [a branch of DARPA].  (page 74)

Goldblatt, notes Kaku, thinks the long-term legacy of DARPA will be human enhancement.  Goldblatt’s daughter has cerebral palsy and has been confined to a wheelchair all her life.  Goldblatt is highly motivated not only to help millions of people in the future and create a legacy, but also to help his own daughter.



Cathy Hutchinson became a quadriplegic after suffering a massive stroke.  But in May 2012, scientists from Brown University placed a tiny chip on top of her brain—called Braingate—which is connected by wires to a computer.  (The chip has ninety-six electrodes for picking up brain impulses.)  Her brain could then send signals through the computer to control a mechanical robotic arm.  She reported her great excitement and said she knows she will get robotic legs eventually, too.  This might happen soon, says Kaku, since the field of cyber prosthetics is advancing fast.

Scientists at Northwestern placed a chip with 100 electrodes on the brain of a monkey.  The signals were carefully recorded while the monkey performed various tasks involving the arms.  Each task would involve a specific firing of neurons, which the scientists eventually were able to decipher.

Next, the scientists took the signal sequences from the chip and, instead of sending them to a mechanical arm, sent them to the monkey’s own arm.  Eventually the monkey learned to control its own arm via the computer chip.  (The reason 100 electrodes are enough is that they were placed on the output neurons.  By the time the signals reached the electrodes, the monkey’s brain had already done the complex processing involving millions of neurons.)

This device is one of many that Northwestern scientists are testing.  These devices, which continue to be developed, can help people with spinal cord injuries.

Kaku observes that much of the funding for these developments comes from a DARPA project called Revolutionizing Prosthetics, a $150 million effort since 2006.  Retired U.S. Army colonel Geoffrey Ling, a neurologist with several tours of duty in Iraq and Afghanistan, is a central figure behind Revolutionizing Prosthetics.  Dr. Ling was appalled by the suffering caused by roadside bombs.  In the past, many of these brave soldiers would have died.  Today, many more can be saved.  However, more than 1,300 of them have lost limbs after returning from the Middle East.

Dr. Ling, with funding from the Pentagon, instructed his staff to figure out how to replace lost limbs within five years.  Ling:

They thought we were crazy.  But it’s in insanity that things happen.

Kaku continues:

Spurred into action by Dr. Ling’s boundless enthusiasm, his crew has created miracles in the laboratory.  For example, Revolutionary Prosthetics funded scientists at the Johns Hopkins Applied Physics Laboratory, who have created the most advanced mechanical arm on Earth, which can duplicate nearly all the delicate motions of the fingers, hand, and arm in three dimensions.  It is the same size and has the same strength and agility as a real arm.  Although it is made of steel, if you covered it up with flesh-colored plastic, it would be nearly indistinguishable from a real arm.

This arm was attached to Jan Sherman, a quadriplegic who had suffered from a genetic disease that damaged the connection between her brain and her body, leaving her completely paralyzed from the neck down.  At the University of Pittsburgh, electrodes were placed directly on top of her brain, which were then connected to a computer and then to a mechanical arm.  Five months after surgery to attach the arm, she appeared on 60 Minutes.  Before a national audience, she cheerfully used her new arm to wave, greet the host, and shake his hand.  She even gave him a fist bump to show how sophisticated the arm was.

Dr. Ling says, ‘In my dream, we will be able to take this into all sorts of patients, patients with strokes, cerebral palsy, and the elderly.’  (page 84)

Dr. Miguel Nicolelis of Duke University is pursuing novel applications of the brain-machine interface (BMI).  Dr. Nicolelis has demonstrated that BMI can be done across continents.  He put a chip on a monkey’s brain.  The chip was connected to the internet.  When the monkey was walking on a treadmill in North Carolina, the signals were sent to a robot in Kyoto, Japan, which performed the same walking motions.

Dr. Nicolelis is also working on the problem that today’s prosthetic hands lack a sense of touch.  To overcome this challenge, he is trying to create a direct feedback loop: messages would go from the brain to the mechanical arm, and then directly back to the brain, bypassing the spinal cord altogether.  This is a brain-machine-brain interface (BMBI).

Dr. Nicolelis connected the motor cortex of rhesus monkeys to mechanical arms.  The mechanical arms have sensors, and send signals back to the brain via electrodes connected to the somatosensory cortex (which registers the sensation of touch).  Dr. Nicolelis invented a new code to represent different surfaces.  After a month of practice, the brain learns the new code and can thus distinguish among different surfaces.

Dr. Nicolelis told Kaku that something like the holodeck from Star Trek—where you wander in a virtual world, but feel sensations when you bump into virtual objects—will be possible in the future.  Kaku writes:

The holodeck of the future might use a combination of two technologies.  First, people in the holodeck would wear internet contact lenses, so that they would see an entirely new virtual world everywhere they looked.  The scenery in your contact lens would change instantly with the push of a button.  And if you touched any object in this world, signals sent into the brain would simulate the sensation of touch, using BMBI technology.  In this way, objects in the virtual world you see inside your contact lens would feel solid.  (page 87)

Scientists have begun to explore an “Internet of the mind,” or brain-net.  In 2013, scientists went beyond animal studies and demonstrated the first human brain-to-brain communication.

This milestone was achieved at the University of Washington, with one scientist sending a brain signal (move your right arm) to another scientist.  The first scientist wore an EEG helmet and played a video game.  He fired a cannon by imagining moving his right arm, but was careful not to move it physically.

The signal from the EEG helmet was sent over the Internet to another scientist, who was wearing a transcranial magnetic helmet carefully placed over the part of his brain that controlled his right arm.  When the signal reached the second scientist, the helmet would send a magnetic pulse into his brain, which made his right arm move involuntarily, all by itself.  Thus, by remote control, one human brain could control the movement of another.

This breakthrough opens up a number of possibilities, such as exchanging nonverbal messages via the Internet.  You might one day be able to send the experience of dancing the tango, bungee jumping, or skydiving to the people on your e-mail list.  Not just physical activity, but emotions and feelings as well might be sent via brain-to-brain communication.

Nicolelis envisions a day when people all over the world could participate in social networks not via keyboards, but directly through their minds.  Instead of just sending e-mails, people on the brain-net would be able to telepathically exchange thoughts, emotions, and ideas in real time.  Today a phone call conveys only the information of the conversation and the tone of voice, nothing more.  Video conferencing is a bit better, since you can read the body language of the person on the other end.  But a brain-net would be the ultimate in communications, making it possible to share the totality of mental information in a conversation, including emotions, nuances, and reservations.  Minds would be able to share their most intimate thoughts and feelings.  (pages 87-88)

Kaku gives more details of what would be needed to create a brain-net:

Creating a brain-net that can transmit such information would have to be done in stages.  The first step would be inserting nanoprobes into important parts of the brain, such as the left temporal lobe, which governs speech, and the occipital lobe, which governs vision.  Then computers would analyze these signals and decode them.  This information in turn could be sent over the Internet by fiber-optic cables.  

More difficult would be to insert these signals into another person’s brain, where they could be processed by the receiver.  So far, progress in this area has focused only on the hippocampus, but in the future it should be possible to insert messages directly into other parts of the brain corresponding to our sense of hearing, light, touch, etc.  So there is plenty of work to be done as scientists try to map the cortices of the brain involved in these senses.  Once these cortices have been mapped… it should be possible to insert words, thoughts, memories, and experiences into another brain.  (page 89)

Dr. Nicolelis’ next goal is the Walk Again Project: a complete exoskeleton that can be controlled by the mind, which Nicolelis calls a “wearable robot.”  The aim is to allow the paralyzed to walk just by thinking.  There are several challenges to overcome:

First, a new generation of microchips must be created that can be placed in the brain safely and reliably for years at a time.  Second, wireless sensors must be created so the exoskeleton can roam freely.  The signals from the brain would be received wirelessly by a computer the size of a cell phone that would probably be attached to your belt.  Third, new advances must be made in deciphering and interpreting signals from the brain via computers.  For the monkeys, a few hundred neurons were necessary to control the mechanical arms.  For a human, you need, at minimum, several thousand neurons to control an arm or leg.  And fourth, a power supply must be found that is portable and powerful enough to energize the entire exoskeleton.  (page 92)



One interesting possibility is that long-term memory evolved in humans because it was useful for simulating and predicting future scenarios.

Indeed, brain scans done by scientists at Washington University in St. Louis indicate that areas used to recall memories are the same as those involved in simulating the future.  In particular, the link between the dorsolateral prefrontal cortex and the hippocampus lights up when a person is engaged in planning for the future and remembering the past.  In some sense, the brain is trying to ‘recall the future,’ drawing upon memories of the past in order to determine how something will evolve into the future.  This may also explain the curious fact that people who suffer from amnesia… are often unable to visualize what they will be doing in the future or even the very next day.  (page 113)

Some claim that Alzheimer’s disease may be the disease of the century.  As of Kaku’s writing, there were 5.3 million Americans with Alzheimer’s, and that number is expected to quadruple by 2050.  Five percent of people aged sixty-five to seventy-four have it, but more than 50 percent of those over eighty-five have it, even if they have no obvious risk factors.

One possible way to try to combat Alzheimer’s is to create antibodies or a vaccine that might specifically target misshapen protein molecules associated with the disease.  Another approach might be to create an artificial hippocampus.  Yet another approach is to see if specific genes can be found that improve memory.  Experiments on mice and fruit flies have been underway.

If the genetic fix works, it could be administered by a simple shot in the arm.  If it doesn’t work, another possible approach is to insert the proper proteins into the body.  Instead of a shot, it would be a pill.  But scientists are still trying to understand the process of memory formation.

Eventually, writes Kaku, it will be possible to record the totality of stimulation entering into a brain.  In this scenario, the Internet may become a giant library not only for the details of human lives, but also for the actual consciousness of various individuals.  If you want to see how your favorite hero or historical figure felt as they confronted the major crises of their lives, you’ll be able to do so.  Or you could share the memories and thoughts of a Nobel Prize-winning scientist, perhaps gleaning clues about how great discoveries are made.



What made Einstein Einstein?  It’s very difficult to say, of course.  Partly, it may be that he was the right person at the right time.  Also, it wasn’t just raw intelligence, but perhaps more a powerful imagination and an ability to stick with problems for a very long time.  Kaku:

The point here is that genius is perhaps a combination of being born with certain mental abilities and also the determination and drive to achieve great things.  The essence of Einstein’s genius was probably his extraordinary ability to simulate the future through thought experiments, creating new physical principles via pictures.  As Einstein himself once said, ‘The true sign of intelligence is not knowledge, but imagination.’  And to Einstein, imagination meant shattering the boundaries of the known and entering the domain of the unknown.  (page 133)

The brain remains “plastic” even into adult life.  People can always learn new skills.  Kaku notes that the Canadian psychologist Dr. Donald Hebb made an important discovery about the brain:

the more we exercise certain skills, the more certain pathways in our brains become reinforced, so the task becomes easier.  Unlike a digital computer, which is just as dumb today as it was yesterday, the brain is a learning machine with the ability to rewire its neural pathways every time it learns something.  This is a fundamental difference between the digital computer and the brain.  (page 134)
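Hebb's idea, often summarized as "neurons that fire together wire together," can be sketched in a few lines.  This is a toy illustration, not anything from the text: the class name, learning rate, and saturation rule are all my own assumptions.  A connection is strengthened only when the neurons on both sides are active together, so a practiced pathway carries its signal more easily each time.

```python
class HebbianSynapse:
    """Toy model of one reinforceable pathway in the brain."""

    def __init__(self, weight=0.1, rate=0.05):
        self.weight = weight    # current pathway strength
        self.rate = rate        # how quickly practice reinforces it

    def practice(self, pre_active, post_active):
        # Hebb's rule: strengthen only when both neurons fire together.
        if pre_active and post_active:
            self.weight += self.rate * (1.0 - self.weight)  # saturates at 1.0

synapse = HebbianSynapse()
for _ in range(50):                  # repeated practice of the same skill
    synapse.practice(True, True)

# After practice the pathway is much stronger, so "the task becomes easier."
print(synapse.weight > 0.9)          # True
```

This also captures the contrast Kaku draws with a digital computer: nothing in the program changes; only the connection strengths do.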

Scientists also believe that the ability to delay gratification and the ability to focus attention may be more important than IQ for success in life.

Furthermore, traditional IQ tests only measure “convergent” intelligence related to the left brain and not “divergent” intelligence related to the right brain.  Kaku quotes Dr. Ulrich Kraft:

‘The left hemisphere is responsible for convergent thinking and the right hemisphere for divergent thinking.  The left side examines details and processes them logically and analytically but lacks a sense of overriding, abstract connections.  The right side is more imaginative and intuitive and tends to work holistically, integrating pieces of an informational puzzle into a whole.’  (page 138)

Kaku suggests that a better test of intelligence might measure a person’s ability to imagine different scenarios related to a specific future challenge.

Another avenue of intelligence research is genes.  We are 98.5 percent identical genetically to chimpanzees.  But we live twice as long and our mental abilities have exploded in the past six million years.  Scientists have even isolated just a handful of genes that may be responsible for our intelligence.  This is intriguing, to say the least.

In addition to having a larger cerebral cortex, our brains have many folds in them, vastly increasing their surface area.  (The brain of Carl Friedrich Gauss was found to be especially folded and wrinkled.)

Scientists have also focused on the ASPM gene.  It has mutated fifteen times in the last five or six million years.  Kaku:

Because these mutations coincide with periods of rapid growth in intellect, it is tantalizing to speculate that ASPM is among the handful of genes responsible for our increased intelligence.  If this is true, then perhaps we can determine whether these genes are still active today, and whether they will continue to shape human evolution in the future.  (page 154)

Scientists have also learned that nature takes numerous shortcuts in creating the brain.  Many neurons are connected randomly, so a detailed blueprint isn’t needed.  Neurons organize themselves in a baby’s brain in reaction to various specific experiences.  Also, nature uses modules that repeat over and over again.

It is possible that we will be able to boost our intelligence in the future, which will increase the wealth of society (probably significantly).  Kaku:

It may be possible in the coming decades to use a combination of gene therapy, drugs, and magnetic devices to increase our intelligence.  (page 162)

…raising our intelligence may help speed up technological innovation.  Increased intelligence would mean a greater ability to simulate the future, which would be invaluable in making scientific discoveries.  Often, science stagnates in certain areas because of a lack of fresh new ideas to stimulate new avenues of research.  Having an ability to simulate different possible futures would vastly increase the rate of scientific breakthroughs.

These scientific discoveries, in turn, could generate new industries, which would enrich all of society, creating new markets, new jobs, and new opportunities.  History is full of technological breakthroughs creating entirely new industries that benefited not just the few, but all of society (think of the transistor and the laser, which today form the foundation of the world economy).  (page 164)



Kaku explains that the brain, as a neural network, may need to dream in order to function well:

The brain, as we have seen, is not a digital computer, but rather a neural network of some sort that constantly rewires itself after learning new tasks.  Scientists who work with neural networks noticed something interesting, though.  Often these systems would become saturated after learning too much, and instead of processing more information they would enter a “dream” state, whereby random memories would sometimes drift and join together as the neural networks tried to digest all the new material.  Dreams, then, might reflect “house cleaning,” in which the brain tries to organize its memories in a more coherent way.  (If this is true, then possibly all neural networks, including all organisms that can learn, might enter a dream state in order to sort out their memories.  So dreams probably serve a purpose.  Some scientists have speculated that this might imply that robots that learn from experience might also eventually dream as well.)

Neurological studies seem to back up this conclusion.  Studies have shown that retaining memories can be improved by getting sufficient sleep between the time of activity and a test.  Neuroimaging shows that the areas of the brain that are activated during sleep are the same as those involved in learning a new task.  Dreaming is perhaps useful in consolidating this new information.  (page 172)

In 1977, Dr. Allan Hobson and Dr. Robert McCarley made history by seriously challenging Freud’s theory of dreams, proposing the “activation synthesis theory” of dreams:

The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert.  As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state.

As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves.  These waves travel up the brain stem into the visual cortex, stimulating it to create dreams.  Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams.  (pages 174-175)



There seem to be certain parts of the brain that are associated with religious experiences and also with spirituality.  Dr. Mario Beauregard of the University of Montreal commented:

If you are an atheist and you live a certain kind of experience, you will relate it to the magnificence of the universe.  If you are a Christian, you will associate it with God.  Who knows.  Perhaps they are the same thing.

Kaku explains how human consciousness involves delicate checks and balances similar to the competing points of view that a good CEO considers:

We have proposed that a key function of human consciousness is to simulate the future, but this is not a trivial task.  The brain accomplishes it by having these feedback loops check and balance one another.  For example, a skillful CEO at a board meeting tries to draw out the disagreement among staff members and to sharpen competing points of view in order to sift through the various arguments and then make a final decision.  In the same way, various regions of the brain make diverging assessments of the future, which are given to the dorsolateral prefrontal cortex, the CEO of the brain.  These competing assessments are then evaluated and weighted until a balanced final decision is made.  (page 205)

The most common mental disorder is depression, afflicting twenty million people in the United States.  One way scientists are trying to cure depression is deep brain stimulation (DBS), which involves inserting small probes into the brain and delivering electrical stimulation.  Kaku:

In the past decade, DBS has been used on forty thousand patients for motor-related diseases, such as Parkinson’s and epilepsy, which cause uncontrolled movements of the body.  Between 60 and 100 percent of the patients report significant improvement in controlling their shaking hands.  More than 250 hospitals in the United States now perform DBS treatment.  (page 208)

Dr. Helen Mayberg and colleagues at Washington University School of Medicine have discovered an important clue to depression:

Using brain scans, they identified an area of the brain, called Brodmann area 25 (also called the subcallosal cingulate region), in the cerebral cortex that is consistently hyperactive in depressed individuals for whom all other forms of treatment have been unsuccessful. 

…Dr. Mayberg had the idea of applying DBS directly to Brodmann area 25… her team took twelve patients who were clinically depressed and had shown no improvement after exhaustive use of drugs, psychotherapy, and electroshock therapy.

They found that eight of these chronically depressed individuals showed immediate progress.  Their success was so astonishing, in fact, that other groups raced to duplicate these results and apply DBS to other mental disorders…

Dr. Mayberg says, ‘Depression 1.0 was psychotherapy… Depression 2.0 was the idea that it’s a chemical imbalance.  This is Depression 3.0.  What has captured everyone’s imagination is that, by dissecting a complex behavior disorder into its component systems, you have a new way of thinking about it.’

Although the success of DBS in treating depressed individuals is remarkable, much more research needs to be done…



Kaku introduces the potential challenge of handling artificial intelligence as it evolves:

Given the fact that computer power has been doubling every two years for the past fifty years under Moore’s law, some say it is only a matter of time before machines eventually acquire self-awareness that rivals human intelligence.  No one knows when this will happen, but humanity should be prepared for the moment when machine consciousness leaves the laboratory and enters the real world.  How we deal with robot consciousness could decide the future of the human race.  (page 216)
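The growth implied by the claim above is easy to check with arithmetic: doubling every two years for fifty years is twenty-five doublings, a factor of 2^25, or roughly 33 million.

```python
# Compound growth implied by Moore's law as stated in the text:
# one doubling every two years, sustained for fifty years.
years = 50
doublings = years // 2                # 25 doublings
growth_factor = 2 ** doublings
print(growth_factor)                  # 33554432: about a 33-million-fold gain
```

That compounding, more than any single breakthrough, is what drives the expectation that machine intelligence will keep closing the gap.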

Kaku observes that AI has gone through three cycles of boom and bust.  In the 1950s, machines were built that could play checkers and solve algebra problems.  Robot arms could recognize and pick up blocks.  In 1965, Dr. Herbert Simon, one of the founders of AI, made a prediction:

Machines will be capable, within 20 years, of doing any work a man can do.

In 1967, another founder of AI, Dr. Marvin Minsky, remarked:

…within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved. 

But in the 1970s, not much progress in AI had been made.  In 1974, both the U.S. and British governments significantly cut back their funding for AI.  This was the beginning of the first AI winter.

But as computer power steadily increased in the 1980s, a new gold rush occurred in AI, fueled mainly by Pentagon planners hoping to put robot soldiers on the battlefield.  Funding for AI hit a billion dollars by 1985, with hundreds of millions of dollars spent on projects like the Smart Truck, which was supposed to be an intelligent, autonomous truck that could enter enemy lines, do reconnaissance by itself, perform missions (such as rescuing prisoners), and then return to friendly territory.  Unfortunately, the only thing that the Smart Truck did was get lost.  The visible failures of these costly projects created yet another AI winter in the 1990s.  (page 217)

Kaku continues:

But now, with the relentless march of computer power, a new AI renaissance has begun, and slow but substantial progress has been made.  In 1997, IBM’s Deep Blue computer beat world chess champion Garry Kasparov.  In 2005, a robot car from Stanford won the DARPA Grand Challenge for a driverless car.  Milestones continue to be reached.

This question remains:  Is the third try a charm?

Scientists now realize that they vastly underestimated the problem, because most human thought is actually subconscious.  The conscious part of our thoughts, in fact, represents only the tiniest portion of our computations.

Dr. Steven Pinker says, ‘I would pay a lot for a robot that would put away the dishes or run simple errands, but I can’t, because all of the little problems that you’d need to solve to build a robot to do that, like recognizing objects, reasoning about the world, and controlling hands and feet, are unsolved engineering problems.’  (pages 217-218)

Kaku asked Dr. Minsky when he thought machines would equal and then surpass human intelligence.  Minsky replied that he’s confident it will happen, but that he doesn’t make predictions about specific dates any more.

If you remove a single transistor from a Pentium chip, the computer will immediately crash, writes Kaku.  But the human brain can perform quite well even with half of it missing:

This is because the brain is not a digital computer at all, but a highly sophisticated neural network of some sort.  Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task.  The brain has no programming, no operating system, no Windows, no central processor.  Instead, its neural networks are massively parallel, with one hundred billion neurons firing at the same time in order to accomplish a single goal: to learn.

In light of this, AI researchers are beginning to reexamine the ‘top-down approach’ they have followed for the past fifty years (e.g., putting all the rules of common sense on a CD).  Now AI researchers are giving the ‘bottom-up approach’ a second look.  This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones.  Neural networks must learn the hard way, by bumping into things and making mistakes.  (page 220)

Dr. Rodney Brooks, former director of the MIT Artificial Intelligence Laboratory, introduced a totally new approach to AI.  Why not build small, insectlike robots that learn how to walk by trial and error, just as nature learns?  Brooks told Kaku that he used to marvel at the mosquito, with a microscopic brain of a few neurons, which can, nevertheless, maneuver in space better than any robot airplane.  Brooks built a series of tiny robots called ‘insectoids’ or ‘bugbots,’ which learn by bumping into things.  Kaku comments:

At first, it may seem that this requires a lot of programming.  The irony, however, is that neural networks require no programming at all.  The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision.  So programming is nothing; changing the network is everything.  (page 221)
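The quoted point, that the network simply strengthens whichever pathway led to a right decision, can be illustrated with a toy learner.  Everything here (the two pathways, the reward rule, the numbers) is my own sketch, not Brooks's actual insectoid code.

```python
import random

random.seed(1)

# Two candidate "pathways" (say, turn left vs. turn right at an obstacle).
# No program encodes the answer; the strengths start out equal.
strengths = {"left": 1.0, "right": 1.0}
correct = "right"                      # the environment's hidden answer

def choose():
    # Pick a pathway with probability proportional to its current strength.
    total = sum(strengths.values())
    return "left" if random.random() < strengths["left"] / total else "right"

for _ in range(200):                   # bump into things, make mistakes
    action = choose()
    if action == correct:              # a right decision reinforces that pathway
        strengths[action] += 0.1

print(strengths["right"] > strengths["left"])   # True: the behavior is learned
```

As the quote says: programming is nothing; changing the network is everything.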

The Mars Curiosity rover is one result of this bottom-up approach.

Scientists have realized that emotions are central to human cognition.  Humans usually need some emotional input, in addition to logic and reason, in order to make good decisions.  Robots are now being programmed to recognize various human emotions and also to exhibit emotions themselves.  Robots also need a sense of danger and some feeling of pain in order to avoid injuring themselves.  Eventually, as robots become ever more conscious, there will be many ethical questions to answer.

Biologists used to debate the question, “What is life?”  But, writes Kaku, the physicist and Nobel laureate Francis Crick observed that the question is not well-defined now that we are advancing in our understanding of DNA.  There are many layers and complexities to the question, “What is life?”  Similarly, there are likely to be many layers and complexities to the question of what constitutes “emotion” or “consciousness.”

Moreover, as Rodney Brooks argues, we humans are machines.  Eventually the robot machines we are building will be just as alive as we are.  Kaku summarizes a conversation he had with Brooks:

This evolution in human perspective started with Nicolaus Copernicus when he realized that the Earth is not the center of the universe, but rather goes around the sun.  It continued with Darwin, who showed that we were similar to the animals in our evolution.  And it will continue into the future… when we realize that we are machines, except that we are made of wetware and not hardware.  (page 248)

Kaku then quotes Brooks directly:

We don’t like to give up our specialness, so you know, having the idea that robots could really have emotions, or that robots could be living creatures—I think is going to be hard for us to accept.  But we’re going to come to accept it over the next fifty years.

Brooks also thinks we will successfully create robots that are safe for humans:

The robots are coming, but we don’t have to worry too much about that.  It’s going to be a lot of fun.

Furthermore, Brooks argues that we are likely to merge with robots.  After all, we’ve already done this to an extent.  Over twenty thousand people have cochlear implants, giving them the ability to hear.

Similarly, at the University of Southern California and elsewhere, it is possible to take a patient who is blind and implant an artificial retina.  One method places a mini video camera in eyeglasses, which converts an image into digital signals.  These are sent wirelessly to a chip placed in the person’s retina.  The chip activates the retina’s nerves, which then send messages down the optic nerve to the occipital lobe of the brain.  In this way, a person who is totally blind can see a rough image of familiar objects.  Another design has a light-sensitive chip placed on the retina itself, which then sends signals directly to the optic nerve.  This design does not need an external camera.  (page 249)

This means, says Kaku, that eventually we’ll be able to enhance our ordinary senses and abilities.  We’ll merge with our robot creations.



Kaku highlights three approaches to the brain:

Because the brain is so complex, there are at least three distinct ways in which it can be taken apart, neuron by neuron.  The first is to simulate the brain electronically with supercomputers, which is the approach being taken by the Europeans.  The second is to map out the neural pathways of living brains, as in BRAIN [Brain Research Through Advancing Innovative Neurotechnologies Initiative].  (This task, in turn, can be further subdivided, depending on how these neurons are analyzed – either anatomically, neuron by neuron, or by function and activity.)  And third, one can decipher the genes that control the development of the brain, which is an approach pioneered by billionaire Paul Allen of Microsoft.  (page 253)

Dr. Henry Markram is a central figure in the Human Brain Project.  Kaku quotes Dr. Markram:

To build this—the supercomputers, the software, the research—we need around one billion dollars.  This is not expensive when one considers that the global burden of brain disease will exceed twenty percent of the world gross domestic product very soon.

Dr. Markram also said:

It’s essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.  

How does the human genome go from twenty-three thousand genes to one hundred billion neurons?

The answer, Dr. Markram believes, is that nature uses shortcuts.  The key to his approach is that certain modules of neurons are repeated over and over again once Mother Nature finds a good template.  If you look at microscopic slices of the brain, at first you see nothing but a random tangle of neurons.  But upon closer examination, patterns of modules that are repeated over and over appear.  

(Modules, in fact, are one reason why it is possible to assemble large skyscrapers so rapidly.  Once a single module is designed, it is possible to repeat it endlessly on the assembly line.  Then you can rapidly stack them on top of one another to create the skyscraper.  Once the paperwork is all signed, an apartment building can be assembled using modules in a few months.)

The key to Dr. Markram’s Blue Brain project is the “neocortical column,” a module that is repeated over and over in the brain.  In humans, each column is about two millimeters tall, with a diameter of half a millimeter, and contains sixty thousand neurons.  (As a point of comparison, rat neural modules contain about ten thousand neurons each.)  It took ten years, from 1995 to 2005, for Dr. Markram to map the neurons in such a column and to figure out how it worked.  Once that was deciphered, he then went to IBM to create massive iterations of these columns.  (page 257)
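The figures in this passage imply a rough scale for the simulation task.  Treating all of the brain's roughly one hundred billion neurons as if they sat in sixty-thousand-neuron columns (an oversimplification, since many neurons lie outside the neocortex) gives on the order of a couple of million columns to replicate:

```python
# Back-of-the-envelope arithmetic from the figures in the text.
neurons_per_column = 60_000           # human neocortical column
neurons_in_brain = 100_000_000_000    # ~10^11 neurons

columns_needed = neurons_in_brain // neurons_per_column
print(columns_needed)                 # 1666666: about 1.7 million columns
```

Which is why, once a single column was deciphered, the remaining work was mostly a matter of massive iteration on supercomputers.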

Kaku quotes Dr. Markram again:

…I think, quite honestly, that if the planet understood how the brain functions, we would resolve conflicts everywhere.  Because people would understand how trivial and how deterministic and how controlled conflicts and reactions and misunderstandings are.

The slice-and-dice approach:

The anatomical approach is to take apart the cells of an animal brain, neuron by neuron, using the “slice-and-dice” method.  In this way, the full complexity of the environment, the body, and memories are already encoded in the model.  Instead of approximating a human brain by assembling a huge number of transistors, these scientists want to identify each neuron of the brain.  After that, perhaps each neuron can be simulated by a collection of transistors so that you’d have an exact replica of the human brain, complete with memory, personality, and connection to the senses.  Once someone’s brain is fully reverse engineered in this way, you should be able to have an informative conversation with that person, complete with memories and a personality.  (page 259)

There is a parallel project called the Human Connectome Project.

Most likely, this effort will be folded into the BRAIN project, which will vastly accelerate this work.  The goal is to produce a neuronal map of the human brain’s pathways that will elucidate brain disorders such as autism and schizophrenia.  (pages 260-261)

Kaku notes that one day automated microscopes will continuously take the photographs, while AI machines continuously analyze them.

The third approach:

Finally, there is a third approach to map the brain.  Instead of analyzing the brain by using computer simulations or by identifying all the neural pathways, yet another approach was taken with a generous grant of $100 million from Microsoft billionaire Paul Allen.  The goal was to construct a map or atlas of the mouse brain, with the emphasis on identifying the genes responsible for creating the brain.

…A follow-up project, the Allen Human Brain Atlas, was announced… with the hope of creating an anatomically and genetically complete 3-D map of the human brain.  In 2011, the Allen Institute announced that it had mapped the biochemistry of two human brains, finding one thousand anatomical sites with one hundred million data points detailing how genes are expressed in the underlying biochemistry.  The data confirmed that 82 percent of our genes are expressed in the brain.  (pages 261-262)

Kaku says the Human Genome Project was very successful in sequencing all the genes in the human genome.  But it’s just the first step in a long journey to understand how these genes work.  Similarly, once scientists have reverse engineered the brain, that will likely be only the first step in understanding how the brain works.

Once the brain is reverse-engineered, this will help scientists understand and cure various diseases.  Kaku observes that, with human DNA, a single misspelling out of three billion base pairs can cause uncontrolled flailing of your limbs and convulsions, as in Huntington’s disease.  Similarly, perhaps just a few disrupted connections in the brain can cause certain illnesses.

Successfully reverse engineering the brain will also help with AI research.  For instance, writes Kaku, humans can recognize a familiar face from different angles in 0.1 seconds.  But a computer has trouble with this.  There’s also the question of how long-term memories are stored.

Finally, if human consciousness can be transferred to a computer, does that mean that immortality is possible?



Kaku talked with Dr. Ray Kurzweil, who told him it’s important for an inventor to anticipate changes.  Kurzweil has made a number of predictions, at least some of which have been roughly accurate.  Kurzweil predicts that the “singularity” will occur around the year 2045.  By then, machines will not only have surpassed humans in intelligence; they will also have created next-generation robots even smarter than themselves.

Kurzweil holds that this process of self-improvement can be repeated indefinitely, leading to an explosion—thus the term “singularity”—of ever-smarter and ever more capable robots.  Moreover, humans will have merged with their robot creations and will, at some point, become immortal.

Robots of ever-increasing intelligence and ability will require more power.  Of course, there will be breakthroughs in energy technology, likely including nuclear fusion and perhaps even antimatter and/or black holes.  So the cost to produce prodigious amounts of energy will keep coming down.  At the same time, because Moore’s law cannot continue forever, super robots eventually will need ever-increasing amounts of energy.  At some point, this will probably require traveling—or sending nanobot probes—to numerous other stars or to other areas where the energy of antimatter and/or of black holes can be harnessed.

Kaku notes that most people in AI agree that a “singularity” will occur at some point.  But it’s extremely difficult to predict the exact timing.  It could happen sooner than Kurzweil predicts or it could end up taking much longer.

Kurzweil wants to bring his father back to life.  Eventually something like this will be possible.  Kaku:

…I once asked Dr. Robert Lanza of the company Advanced Cell Technology how he was able to bring a long-dead creature “back to life,” making history in the process.  He told me that the San Diego Zoo asked him to create a clone of a banteng, an oxlike creature that had died out about twenty-five years earlier.  The hard part was extracting a usable cell for the purpose of cloning.  However, he was successful, and then he FedExed the cell to a farm, where it was implanted into a female cow, which then gave birth to this animal.  Although no primate has ever been cloned, let alone a human, Lanza feels it’s a technical problem, and that it’s only a matter of time before someone clones a human.  (page 273)

The hard part of cloning a human would be bringing back their memories and personality, says Kaku.  One possibility would be creating a large data file containing all known information about a person’s habits and life.  Such a file could be remarkably accurate.  Even for people dead today, scores of questions could be asked to friends, relatives, and associates.  This could be turned into hundreds of numbers, each representing a different trait that could be ranked from 0 to 10, writes Kaku.
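
The kind of personality file Kaku describes amounts to a simple data structure: a vector of trait scores.  Here is a minimal sketch in Python, in which the trait names, scores, and similarity measure are all invented for illustration:

```python
# A hypothetical "personality file": hundreds of traits, each scored 0-10
# from interviews with friends and relatives.  Trait names and values here
# are invented for illustration.
personality = {
    "humor": 7,
    "patience": 4,
    "curiosity": 9,
    "risk_tolerance": 3,
}

def similarity(a, b):
    """Crude closeness of two trait files: 1.0 means identical scores."""
    shared = a.keys() & b.keys()
    if not shared:
        return 0.0
    # Normalize each 0-10 gap to [0, 1] and average across shared traits.
    return 1.0 - sum(abs(a[t] - b[t]) for t in shared) / (10.0 * len(shared))

# How close does a reconstruction come to the original file?
reconstruction = {"humor": 6, "patience": 5, "curiosity": 9, "risk_tolerance": 2}
print(round(similarity(personality, reconstruction), 2))
```

A real file would run to hundreds of traits, but the principle is the same: each person becomes a point in a high-dimensional trait space, and reconstructions can be scored by how close they come.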

When technology has advanced enough, it will become possible—perhaps via the Connectome Project—to recreate a person’s brain, neuron for neuron.  If it becomes possible for you to have your connectome completed, then your doctor—or robodoc—would have all your neural connections on a hard drive.  Then, says Kaku, at some point, you could be brought back to life, using either a clone or a network of digital transistors (inside an exoskeleton or surrogate of some sort).

Dr. Hans Moravec, former director of the Artificial Intelligence Laboratory at Carnegie Mellon University, has pioneered an intriguing idea:  transferring your mind into an immortal robotic body while you’re still alive.  Kaku explains what Moravec told him:

First, you lie on a stretcher, next to a robot lacking a brain.  Next, a robotic surgeon extracts a few neurons from your brain, and then duplicates these neurons with some transistors located in the robot.  Wires connect your brain to the transistors in the robot’s empty head.  The neurons are then thrown away and replaced by the transistor circuit.  Since your brain remains connected to these transistors via wires, it functions normally and you are fully conscious during this process.  Then the super surgeon removes more and more neurons from your brain, each time duplicating these neurons with transistors in the robot.  Midway through the operation, half your brain is empty; the other half is connected by wires to a large collection of transistors inside the robot’s head.  Eventually all the neurons in your brain have been removed, leaving a robot brain that is an exact duplicate of your original brain, neuron for neuron.  (page 280)

When you wake up, you are likely to have a few superhuman powers, perhaps including a form of immortality.  This technology is likely far in the future, of course.

Kaku then observes that there is another possible path to immortality that does not involve reverse engineering the brain.  Instead, super smart nanobots could periodically repair your cells.  Kaku:

…Basically, aging is the buildup of errors, at the genetic and cellular level.  As cells get older, errors begin to build up in their DNA and cellular debris also starts to accumulate, which makes the cells sluggish.  As cells begin slowly to malfunction, skin begins to sag, bones become frail, hair falls out, and our immune system deteriorates.  Eventually, we die.

But cells also have error-correcting mechanisms.  Over time, however, even these error-correcting mechanisms begin to fail, and aging accelerates.  The goal, therefore, is to strengthen natural cell-repair mechanisms, which can be done via gene therapy and the creation of new enzymes.  But there is also another way: using “nanobot” assemblers.

One of the linchpins of this futuristic technology is something called the “nanobot,” or an atomic machine, which patrols the bloodstream, zapping cancer cells, repairing the damage from the aging process, and keeping us forever young and healthy.  Nature has already created some nanobots in the form of immune cells that patrol the body in the blood.  But these immune cells attack viruses and foreign bodies, not the aging process.

Immortality is within reach if these nanobots can reverse the ravages of the aging process at the molecular and cellular level.  In this vision, nanobots are like immune cells, tiny police patrolling your bloodstream.  They attack any cancer cells, neutralize viruses, and clean out the debris and mutations.  Then the possibility of immortality would be within reach using our own bodies, not some robot or clone.  (pages 281-282)

Kaku writes that his personal philosophy is simple: If something is possible based on the laws of physics, then it becomes an engineering and economics problem to build it.  A nanobot is an atomic machine with arms and clippers that grabs molecules, cuts them at specific points, and then splices them back together.  Such a nanobot would be able to create almost any known molecule.  It may also be able to self-reproduce.

The late Richard Smalley, a Nobel laureate in chemistry, argued that quantum forces would prevent nanobots from being able to function.  Eric Drexler, a founder of nanotechnology, pointed out that ribosomes in our own bodies cut and splice DNA molecules at specific points, enabling the creation of new DNA strands.  Eventually Drexler admitted quantum forces do get in the way sometimes, while Smalley acknowledged that if ribosomes can cut and splice molecules, perhaps there are other ways, too.

Ray Kurzweil is convinced that nanobots will shape society itself.  Kaku quotes Kurzweil:

…I see it, ultimately, as an awakening of the whole universe.  I think the whole universe right now is basically made up of dumb matter and energy and I think it will wake up.  But if it becomes transformed into this sublimely intelligent matter and energy, I hope to be a part of that.



Kaku writes that it’s well within the laws of physics for the mind to be in the form of pure energy, able to explore the cosmos.  Isaac Asimov said his favorite science-fiction short story was “The Last Question.”  In this story, humans have placed their physical bodies in pods, while their minds roam as pure energy.  But they cannot keep the universe itself from dying in the Big Freeze.  So they create a supercomputer to figure out if the Big Freeze can be avoided.  The supercomputer responds that there is not enough data.  Eons later, when stars are darkening, the supercomputer finds a solution: It takes all the dead stars and combines them, producing an explosion.  The supercomputer says, “Let there be light!”

And there was light.  Humanity, with its supercomputer, had become capable of creating a new universe.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
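
The ranking-and-sizing approach described above can be sketched in a few lines of Python.  This is a simplified illustration, not the Boole Fund’s actual model: the metrics (EV/EBIT for cheapness, a “fundamental trend” score for improvement), the sample tickers, and the linear weighting scheme are all assumptions.

```python
# Illustrative only: the metric names, sample values, and weighting scheme
# below are assumptions for this sketch, not the Boole Fund's actual model.
stocks = [
    {"ticker": "AAA", "ev_ebit": 4.0, "fundamental_trend": 0.8},
    {"ticker": "BBB", "ev_ebit": 6.5, "fundamental_trend": 0.9},
    {"ticker": "CCC", "ev_ebit": 3.2, "fundamental_trend": 0.2},
    {"ticker": "DDD", "ev_ebit": 9.0, "fundamental_trend": 0.5},
]

def composite_rank(universe):
    # Combined score = rank on cheapness (low EV/EBIT is better)
    # plus rank on improving fundamentals (higher trend is better).
    by_cheap = sorted(universe, key=lambda s: s["ev_ebit"])
    by_trend = sorted(universe, key=lambda s: -s["fundamental_trend"])
    score = {s["ticker"]: by_cheap.index(s) + by_trend.index(s) for s in universe}
    return sorted(universe, key=lambda s: score[s["ticker"]])

def position_weights(ranked, top_weight=0.18, floor=0.08):
    # Largest position gets top_weight at cost; weights step down linearly
    # to the floor, loosely echoing the sizing ranges described above.
    n = len(ranked)
    if n == 1:
        return {ranked[0]["ticker"]: top_weight}
    step = (top_weight - floor) / (n - 1)
    return {s["ticker"]: round(top_weight - i * step, 4) for i, s in enumerate(ranked)}

ranked = composite_rank(stocks)
print([s["ticker"] for s in ranked])
print(position_weights(ranked))
```

The point of a scheme like this is that both the ranking and the sizing are mechanical, which is what lets a quantitative value strategy be applied consistently through multi-year stretches of underperformance.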

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:


Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Physics of the Future

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 13, 2017

Science and technology are moving forward faster than ever before:

…this is just the beginning.  Science is not static.  Science is exploding exponentially all around us.  (page 12)

Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future.  His book, Physics of the Future (Anchor Books, 2012), is a result.

Kaku explains why his predictions may carry more weight than those of other futurists:

  • His book is based on interviews with more than 300 top scientists.
  • Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
  • Prototypes of all the technologies mentioned in the book already exist.
  • As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.

The ancients had little understanding of the forces of nature, so they invented the gods of mythology.  Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.

We are on the verge of becoming a planetary, or Type I, civilization.  This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.

But there are still some things, like face to face meetings, that appear not to have changed much.  Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years.  People still like to see tourist attractions in person.  People still like live performances.  Many people still prefer taking courses in-person rather than online.  (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)

Here are the chapters from Kaku’s book that I cover:

  • Future of the Computer
  • Future of Artificial Intelligence
  • Future of Medicine
  • Nanotechnology
  • Future of Energy
  • Future of Space Travel
  • Future of Humanity



Kaku quotes Helen Keller:

No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.

According to Moore’s law, computer power doubles every eighteen months.  Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly.  Also, exponential growth is often not noticeable for the first few decades.  But eventually things can change dramatically.
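
A doubling every eighteen months is easy to state but hard to feel.  A few lines of arithmetic make the point: after y years the growth factor is 2^(12y/18).

```python
# Moore's law: computing power doubles roughly every eighteen months,
# so after `years` years the growth factor is 2 ** (12 * years / 18).
def growth_factor(years, doubling_months=18):
    return 2 ** (years * 12 / doubling_months)

# Barely noticeable at first, then dramatic:
for years in (3, 15, 30):
    print(years, "years ->", round(growth_factor(years)), "x")
```

Three years buys a factor of 4; fifteen years buys a factor of about a thousand; thirty years, about a million.  That is the pattern our linear intuitions keep missing.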

Even the near future may be quite different, writes Kaku:

…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control.  They will, to a degree, recognize the human voice and face and converse in a formal language.  They will be able to create entire virtual worlds that we can only dream of today.  Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper.  Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders.  (pages 25-26)

In order to discuss the future of science and technology, Kaku has divided each chapter into three parts:  the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).

In the near future, we can surf the internet via special glasses or contact lenses.  We can navigate with a handheld device or just by moving our hands.  We can connect to our office via the lens.  It’s likely that when we encounter a person, we will see their biography on our lens.

Also, we will be able to travel by driverless cars.  This will allow us to use commute time to access the internet via our lenses or to do other work.  Kaku notes that the term car accident may disappear from the language once driverless cars become advanced and ubiquitous enough.  Instead of nearly 40,000 people dying in car accidents in the United States each year, there may be zero deaths from car accidents.  Moreover, most traffic jams will be avoided when driverless cars can work together to keep traffic flowing freely.

At home, you will have a room with screens on every wall.  If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.

You won’t need to carry a computer with you.  Computers will be embedded nearly everywhere.  You’ll have constant access to computers and the internet via your glasses or contact lenses.

As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person.  This includes the moon, Mars, and other currently exotic locations.

Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland.  Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill.  Kaku found that he could run, hide, sprint, or lie down.  Everything he saw was very realistic.  In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.

Your doctor – likely a human face appearing on your wall – will have all your genetic information.  Also, you’ll be able to pass a tiny probe over your body and diagnose any illness.  (MRI machines will be as small as a phone.)  As well, tiny chips or sensors will be embedded throughout your environment.  Most forms of cancer will be identified and destroyed before a tumor ever forms.  Kaku says the word tumor will disappear from the human language.

Furthermore, we’ll probably be able to slow down and even reverse the aging process.  We’ll be able to regrow organs based on computerized access to our genes.  We’ll likely be able to reengineer our genes.

In the medium term (2030 to 2070):

  • Moore’s law may reach an end.  Computing power will still continue to grow exponentially, however, just not as fast as before.
  • When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail.  You’ll be able to download informative lectures about anything you see.  In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
  • If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers.  You’ll be able to see through hills and other obstacles.
  • If you’re a surgeon, you’ll see in great detail everything inside the body.  You’ll have access to all medical records, etc.
  • Universal translators will allow any two people to converse.
  • True 3-D images will surround us when we watch a movie.  3-D holograms will become a reality.

In the far future (2070 to 2100):

We will be able to control computers directly with our minds.

John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain.  Through trial and error, the paralyzed person learns to move the cursor on a computer screen.  Eventually they can read and write e-mails, and play computer games.  Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.

Similarly, paralyzed people will be able to control mechanical arms and legs from their brains.  Experiments with monkeys have already achieved this.

Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain.  MRI machines themselves will go from being several tons to being smaller than phones and as thin as a dime.

Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy.  In this way, we’ll be able to control objects just by thinking.  Astronauts on earth will be able to control superhuman robotic bodies on the moon.



AI pioneer Herbert Simon, in 1965, said:

Machines will be capable, in twenty years, of doing any work a man can do.

Unfortunately not much progress was made.  In 1974, the first AI winter began as the U.S. and British governments cut off funding.

Progress was again made in the 1980s.  But because the field was overhyped, another backlash occurred and a second AI winter began.  Many people left the field as funding disappeared.

The human brain is a type of neural network.  Neural networks follow Hebb’s rule:  every time a correct decision is made, those neural pathways are reinforced.  Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.

Furthermore, the neural network of a human brain is a massive parallel processor, which makes it different from most computers.  Thus, even though digital computers send signals at the speed of light, whereas neuron signals only travel about 200 miles per hour, the human brain is still faster (on many tasks) due to its massive parallel processing.

Finally, neurons are not limited to simply firing or not firing:  they can also transmit continuous signals (anywhere between 0 and 1), not just discrete signals (only 0 and 1).
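
Hebb’s rule—pathways that take part in a correct decision get strengthened—can be illustrated with a toy update rule.  This is a sketch of the principle only, not a model of real neurons; the input pattern and learning rate are arbitrary:

```python
# Toy Hebbian update: when pre- and post-synaptic activity coincide,
# the connection strengthens ("fire together, wire together").
# The input pattern and learning rate are arbitrary illustrations.
def hebbian_update(weights, pre, post, lr=0.1):
    return [w + lr * x * post for w, x in zip(weights, pre)]

w = [0.0, 0.0, 0.0]
# Repeatedly pair input pattern [1, 0, 1] with an active output (post = 1):
# the pathways used in the "correct decision" are reinforced each time.
for _ in range(5):
    w = hebbian_update(w, [1, 0, 1], post=1)
print(w)  # the unused middle connection stays at 0
```

After a few repetitions, the two active connections have grown while the inactive one has not—learning from experience rather than from explicit programming.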

Interestingly, robots are superfast at the kind of calculation that humans find laborious.  But robots still are not good at visual pattern recognition, movement, and common sense.  Robots can see far more detail than humans, but they have trouble making sense of what they see.  Also, robots don’t understand many things that we humans know by common sense.

There have been massive projects to try to give robots common sense by brute force – by programming in thousands of common sense things.  But so far, these projects haven’t worked.

There are two ways to give a robot the ability to learn:  top-down and bottom-up.  An example of the top-down approach is STAIR (Stanford artificial intelligence robot).  Everything is programmed into STAIR from the beginning.  For STAIR to understand an image, it must compare the image to all the images already programmed into it.

LAGR (learning applied to ground robots) uses the bottom-up approach.  It learns everything from scratch, by bumping into things.  LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
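
The bottom-up idea—building a map from collisions and refining it pass by pass—can be sketched with a toy occupancy grid.  The world layout, the one-row-per-pass exploration, and the symbols are all invented for illustration:

```python
# Toy occupancy grid in the spirit of the bottom-up approach: the robot
# starts with no map ("?") and fills it in only from what it bumps into.
WORLD = [
    "....#",
    ".##.#",
    ".....",
]

def learn_map(world, passes):
    rows, cols = len(world), len(world[0])
    belief = [["?"] * cols for _ in range(rows)]
    # Each pass, the robot explores one more row, recording open cells
    # ('.') and the obstacles ('#') it collides with.
    for r in range(min(passes, rows)):
        for c in range(cols):
            belief[r][c] = world[r][c]
    return ["".join(row) for row in belief]

print(learn_map(WORLD, passes=2))  # third row still unknown after two passes
```

Nothing about the world is programmed in up front; the map exists only because the robot has been there, which is the essential contrast with the top-down, everything-preprogrammed approach of STAIR.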

Robots will become ever more helpful in medicine:

For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia.  Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar.  But the da Vinci robotic system can vastly decrease all these.  The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery.  Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body.  There are 800 hospitals in Europe and North and South America that use this system;  48,000 operations were performed in 2006 alone using this robot.  Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.

In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today.  In fact, in the future, only rarely will the surgeon slice the skin at all.  Noninvasive surgery will become the norm.

Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread.  Micromachines smaller than the period at the end of this sentence will do much of the mechanical work.  (pages 93-94)

But to make robots intelligent, scientists must learn more about how the human brain works.

The human brain has roughly three levels.  The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc.  At the next level, there is the monkey brain, or the limbic system, located at the center of our brain.  Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.

The third level of the human brain is the front and outer part – the cerebral cortex.  This level defines humanity and is responsible for the ability to think logically and rationally.

Scientists still have a way to go in understanding in sufficient detail how the human brain works.

By midcentury, scientists will be able to reverse engineer the brain.  In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer.  Kaku quotes Fred Hapgood from MIT:

Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.

By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku.  However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.

For example, says Kaku, the Human Genome Project is like a dictionary with no definitions.  We can spell out each gene in the human body.  But we still don’t know what each gene does exactly.  Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans.  But scientists still can’t fully translate this map into the worm’s behavior.

Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.

When will machines become conscious?  Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future.  If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they do become conscious.  On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.

When will robots exceed humans?  Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.

What if superintelligent robots can make even smarter copies of themselves?  They might thereby gain the ability to evolve exponentially.  Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.

The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially.  The inventor Ray Kurzweil has become a spokesman for the singularity.  But he thinks humans will merge with this digital superintelligence.  Kaku quotes Kurzweil:

It’s not going to be an invasion of intelligent machines coming over the horizon.  We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.

Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us.  The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).

One problem is that the military is the largest funder of AI research.  On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).

Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.

One option invented by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is for an army of “bugbots” with minimal programming that would learn from experience.  Such an army might turn into a practical way to explore the solar system and beyond.  One by-product of Brooks’ idea is the Mars Rover.

Some researchers including Brooks and Marvin Minsky have lamented the fact that AI scientists have often followed too closely the current dominant AI paradigm.  AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.

Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics.  Thus, they have sought a single, unifying equation underlying all intelligence.  But, says Minsky, there is no such thing:

Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness.  Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task.  He calls this ‘the society of minds’:  that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years.  (page 123)

Brooks predicts that, by 2100, there will be very intelligent robots.  But we will be part robot and part connected with robots.

He sees this progressing in stages.  Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions.  For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf.  These artificial cochleas work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…

Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain.  One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons.  Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision.  These groups, for the first time in history, have been able to restore a degree of sight to the blind…  (pages 124-125)

Scientists have also successfully created a robotic hand.  One patient, Robin Ekenstam, had his right hand amputated.  Scientists have given him a robotic hand with four motors and forty sensors.  The doctors connected Ekenstam’s nerves to the chips in the artificial hand.  As a result, Ekenstam is able to use the artificial hand as if it were his own hand.  He feels sensations in the artificial fingers when he picks stuff up.  In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.

Furthermore, the brain is extremely plastic because it is a neural network.  So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.

And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities.  Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats.  Similarly, genetic engineering will become possible.  As Brooks commented:

We will no longer find ourselves confined by Darwinian evolution.

Another way people will merge with robots is with surrogates and avatars.  For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.

Robot pioneer Hans Moravec has described one way this could happen:

…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot.  The operation starts when we lie beside a robot without a brain.  A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull.  As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot.  Not only do we have a robotic body, we have also the benefits of a robot:  immortality in superhuman bodies that are perfect in appearance.  (pages 130-131)



Kaku quotes Nobel Laureate James Watson:

No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?

Nobel Laureate David Baltimore:

I don’t really think our bodies are going to have any secrets left within this century.  And so, anything that we can manage to think about will probably have a reality.

Kaku mentions biologist Robert Lanza:

Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit.  In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before.  Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah.  There, the fertilized cell was implanted into a female cow.  Ten months later he got the news that his latest creation had just been born.  On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out.  Another day, he could be working on cloning human embryo cells.  He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells.  (page 138)

Austrian physicist and philosopher Erwin Schrödinger, one of the founders of quantum theory, wrote an influential book, What is Life?  He speculated that all life was based on a code of some sort, and that this code was carried on a molecule.

Physicist Francis Crick, inspired by Schrödinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule.  In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix.  When unraveled, a single strand of DNA stretches about 6 feet long.  On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code.  By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life.  (page 140)
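As a sanity check on those numbers, the length of DNA follows from the standard ~0.34 nm spacing between base pairs (a figure from textbook structural biology, not from Kaku's book).  A single genome copy works out to about 3 feet; the roughly 6-foot figure corresponds to the two copies carried in each cell:

```python
# Back-of-envelope check of the "~6 feet of DNA" figure.
BASE_PAIRS = 3.0e9       # approximate length of the human genome
RISE_PER_BP_M = 0.34e-9  # ~0.34 nm between base pairs in B-form DNA
COPIES_PER_CELL = 2      # diploid cells carry two genome copies

length_m = BASE_PAIRS * RISE_PER_BP_M * COPIES_PER_CELL
length_ft = length_m / 0.3048
print(f"DNA per cell: {length_m:.2f} m (~{length_ft:.1f} ft)")
```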

Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form.  David Baltimore:

Biology is today an information science.

Kaku writes:

The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule.  Atom for atom, we know how to build the molecules of life from scratch.  And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.

Welcome to bioinformatics:

…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms.  For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA.  In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes.  (page 143)
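A toy sketch of that case-versus-control comparison can make the idea concrete.  The sequences and the "disease" position below are invented for illustration; real genome-wide scans use statistical association tests over millions of variants, but the core comparison is the same:

```python
# Toy sketch: flag the genome position where cases and controls differ most.
cases    = ["ACGTA", "ACGTA", "ACCTA", "ACGTA"]   # individuals with the disease
controls = ["ACGAA", "ACCAA", "ACGAA", "ACCAA"]   # healthy individuals

def mismatch_rate(seqs_a, seqs_b, pos):
    """Fraction of case/control pairs that disagree at this position."""
    diffs = sum(a[pos] != b[pos] for a in seqs_a for b in seqs_b)
    return diffs / (len(seqs_a) * len(seqs_b))

rates = [mismatch_rate(cases, controls, p) for p in range(len(cases[0]))]
suspect = max(range(len(rates)), key=rates.__getitem__)
print(f"most divergent position: {suspect} (rate {rates[suspect]:.2f})")
```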

You’ll talk to your doctor – likely a software program – on the wall screen.  Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form.  If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.

If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed.  (There are over 91,000 people in the United States waiting for an organ transplant.)

…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells.  The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells.  (page 144)

Eventually cloning will be possible for humans.

The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep.  By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original.  (page 150)

Successes in animal studies will be translated to human studies.  First, diseases caused by a single mutated gene will be cured.  Then, diseases caused by multiple mutated genes will be cured.

At some point, there will be “designer children.”  Kaku quotes Harvard biologist E. O. Wilson:

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become. 

The “smart mouse” gene was isolated in 1999.  Mice that have it are better able to navigate mazes and remember things.  Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn.  This supports Hebb’s rule:  learning occurs when certain neural pathways are reinforced.
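Hebb's rule can be sketched in a few lines.  The weight update dw = eta * pre * post is the textbook form ("cells that fire together wire together"); the learning rate and the activation pairs below are illustrative values, not data from the mouse studies:

```python
# Minimal Hebbian update: a connection strengthens when pre- and
# post-synaptic activity coincide.
ETA = 0.1  # learning rate (illustrative)

def hebbian_step(w, pre, post):
    """Return the weight after one Hebbian update: dw = ETA * pre * post."""
    return w + ETA * pre * post

w = 0.0
for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:  # paired activations
    w = hebbian_step(w, pre, post)
print(f"weight after training: {w:.1f}")
```

Only the two trials where both neurons fire together reinforce the pathway; the others leave the weight unchanged.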

It will take decades to iron out side effects and unwanted consequences of genetic engineering.  For instance, scientists now believe that there is a healthy balance between forgetting and remembering.  It’s important to remember key lessons and specific skills.  But it’s also important not to remember too much.  People need a certain optimism in order to make progress and evolve.

Scientists now know what aging is:  Aging is the accumulation of errors at the genetic and cellular level.  These errors have various causes.  For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku.  Errors can also accumulate as ‘junk’ molecular debris.

The buildup of genetic errors is a by-product of the second law of thermodynamics:  entropy always increases.  However, there’s an important loophole, notes Kaku.  Entropy can be reduced in one place as long as it is increased at least as much somewhere else.  This means that aging is reversible.  Kaku quotes Richard Feynman:

There is nothing in biology yet found that indicates the inevitability of death.  This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

Kaku continues:

…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding.  His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals.  In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…

…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM.  By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers.  Scientists will be able to scan millions of genomes of two groups of people, the young and the old.  By comparing the two groups, one can identify where aging takes place at the genetic level.  A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated.  (pages 168-169)

Scientists think aging is only 35 percent determined by genes.  Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria.  This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging.  Soon we could live to 150.  By 2100, we could live well beyond that.

If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent.  This is called calorie restriction.  Every organism studied so far exhibits this phenomenon.

…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging.  In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time.  Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long.  (page 170)

Now scientists have shown that caloric restriction also works for primates:  less diabetes, less cancer, less heart disease, and better health and longer life.

In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells.  SIR2 is activated when it detects that the energy reserves of a cell are low.  The SIR2 gene has a counterpart in mice and people called the SIRT genes, which produce proteins called sirtuins.  Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.

Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair.  If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine.  (page 171)

Kaku reports what William Haseltine, biotech pioneer, told him:

The nature of life is not mortality.  It’s immortality.  DNA is an immortal molecule.  That molecule first appeared perhaps 3.5 billion years ago.  That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that.  First to extend our lives two- or three-fold.  And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely.  And I don’t think that will be an unnatural process.  (page 173)

Kaku concludes that extending life span in the future will likely result from a combination of activities:

  • growing new organs as they wear out or become diseased, via tissue engineering and stem cells
  • ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
  • using gene therapy to alter genes that may slow down the aging process
  • maintaining a healthy lifestyle (exercise and a good diet)
  • using nanosensors to detect diseases like cancer years before they become a problem

Kaku quotes Richard Dawkins:

I believe that by 2050, we shall be able to read the language [of life].  We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.

Dawkins believes, writes Kaku, that once the missing gene has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor.  After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.




For the most part, nanotechnology is still a very young science.  But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes.  MEMS are tiny machines so small they can easily fit on the tip of a needle.  They are created using the same etching technology used in the computer business.  Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them.  (pages 207-208)

Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that can detect the sudden braking of your car.  This has already saved thousands of lives.

One day nanomachines may be able to replace surgery entirely.  Cutting the skin may become completely obsolete.  Nanomachines will also be able to find and kill cancer cells in many cases.  These nanomachines can be guided by magnets.

DNA fragments can be embedded on a tiny chip using transistor etching technology.  The DNA fragments can bind to specific gene sequences.  Then, using a laser, thousands of genes can be read at one time, rather than one by one.  Prices for these DNA chips continue to plummet due to Moore’s law.

Small electronic chips will be able to do the work that is now done by an entire laboratory.  These chips will be embedded in our bathrooms.  Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks.  In the future, they may cost pennies and take just a few minutes.

In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite.  They won the Nobel Prize for their work.  Graphene is a single sheet of carbon, no more than one atom thick.  And it can conduct electricity.  It’s also the strongest material ever tested.  (Kaku notes that an elephant balanced on a pencil – on graphene – would not tear it.)

Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor:  one atom thick and ten atoms across.  (The smallest transistors currently are about 30 nanometers.  Novoselov’s transistors are 30 times smaller.)

The real challenge now is how to connect molecular transistors.

The most ambitious proposal is to use quantum computers, which actually compute on individual atoms.  Quantum computers are extremely powerful.  The CIA has looked at them for their code-breaking potential.

Quantum computers actually exist.  Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.”  When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
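A toy amplitude-level simulation can make the "pulse" idea concrete.  This is not how real quantum hardware is programmed; it just tracks the two complex amplitudes of a single qubit and shows that a full pulse flips the state while a half pulse leaves an equal superposition:

```python
import math

# A qubit as complex amplitudes (alpha, beta) over basis states
# |0> ("pointing down") and |1> ("pointing up").
def rotate(state, theta):
    """Apply an X-axis rotation by angle theta; theta = pi is a full 0<->1 flip."""
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * a - 1j * s * b, -1j * s * a + c * b)

state = (1 + 0j, 0 + 0j)           # start in |0>
flipped = rotate(state, math.pi)   # a "pi pulse" flips it to |1>
half = rotate(state, math.pi / 2)  # a half pulse: equal mix of |0> and |1>
print(abs(flipped[1]) ** 2, abs(half[0]) ** 2)
```

Squaring an amplitude gives the probability of measuring that basis state, which is why the half pulse yields 50/50 odds.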

The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations.  (When atoms are “coherent,” they vibrate in phase with one another.)  Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.

Scientists are working on programmable matter the size of grains of sand.  These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object.  In fact, many common consumer products may be replaced by software programs sent over the internet.  If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.

In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything.  This would be the crowning achievement of engineering, says Kaku.  One problem is the sheer number of atoms that would need to be re-arranged.  But this could be solved by self-replicating nanobots.

A version of this “replicator” already exists.  Mother Nature can take the food we eat and create a baby in nine months.  DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku.  Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms.  (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)



Kaku writes that in this century, we will harness the power of the stars.  In the short term, this means solar and hydrogen will replace fossil fuels.  In the long term, it means we’ll tap the power of fusion and even solar energy from outer space.  Also, cars and trains will be able to float using magnetism.  This can drastically reduce our use of energy, since most energy today is used to overcome friction.

Currently, fossil fuels meet about 80 percent of the world’s energy needs.  Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.

Electric vehicles will reduce the use of fossil fuels.  But we also have to transform the way electricity is generated.  Solar power will keep getting cheaper.  But much more clean energy will be required in order to gradually replace fossil fuels.

Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases.  However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.

Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve.  This increases the odds that terrorists could acquire nuclear weapons.

Within a few decades, global warming will become even more obvious.  The signs are already clear, notes Kaku:

  • The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
  • Greenland’s ice shelves continue to shrink.  (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
  • Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off.  (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
  • For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
  • Temperatures started to be reliably recorded in the late 1700s;  1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded.  Levels of carbon dioxide are rising dramatically.
  • As the earth heats up, tropical diseases are gradually migrating northward.

It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide.  But we must be careful about unintended side effects on ecosystems.

Eventually fusion power may solve most of our energy needs.  Fusion powers the sun and lights up all the stars.

Anyone who can successfully master fusion power will have unleashed unlimited eternal energy.  And the fuel for these fusion plants comes from ordinary seawater.  Pound for pound, fusion power releases 10 million times more power than gasoline.  An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum.  (page 272)

It’s extremely difficult to heat hydrogen gas to tens of millions of degrees.  But scientists will probably master fusion power within the next few decades.  And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.

One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point.  If the resulting shock waves are powerful enough, they can compress and heat fuel to the point of creating nuclear fusion.  This approach is called inertial confinement fusion.

The other main approach used by scientists to try to create fusion is magnetic confinement fusion.  A huge, hollow doughnut-shaped device made of steel and surrounded by magnetic coils is used to attempt to squeeze hydrogen gas enough to heat it to millions of degrees.

What is most difficult in this approach is squeezing the hydrogen gas uniformly.  Otherwise, it bulges out in complex ways.  Scientists are using supercomputers to try to control this process.  (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion.  So stars form easily.)

Most of the energy we burn is used to overcome friction.  Kaku observes that a layer of ice between major cities would drastically cut the need for energy to overcome friction.

In 1911, scientists discovered that cooling mercury to four degrees (Kelvin) above absolute zero causes it to lose all electrical resistance.  Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy.  The disadvantage is you have to cool it to near absolute zero using liquid helium, which is very expensive.

But in 1986, scientists learned that certain ceramics become superconductors at 92 degrees (Kelvin) above absolute zero.  Some ceramic superconductors have been created at 138 degrees (Kelvin) above absolute zero.  This is important because nitrogen liquefies at 77 degrees (Kelvin).  Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.

Remember that most energy is used to overcome friction.  Even for electricity, up to 30 percent can be lost during transmission.  But experimental evidence suggests that electricity in a superconducting loop can last 100,000 years or perhaps billions of years.  Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating friction.

Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.

The reason the magnet floats is simple.  Magnetic lines of force cannot penetrate a superconductor.  This is the Meissner effect.  (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.)  When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic.  This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float.  (page 289)

Room temperature superconductors will allow trains and cars to move without any friction.  This will revolutionize transportation.  Compressed air could get a car going.  Then the car could float almost forever as long as the surface is flat.

Even without room temperature superconductors, some countries have produced magnetic levitating (maglev) trains.  A maglev train does lose energy to air friction.  In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.

Later this century, space solar power may become feasible, because there is 8 times more sunlight in space than on the surface of the earth.  A reduced cost of space travel may make it practical to send hundreds of solar satellites into space.  One challenge is that these solar satellites would have to orbit 22,000 miles up, much farther than satellites in near-earth orbits of 300 miles.  But the main problem is the cost of booster rockets.  (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)



Kaku quotes Carl Sagan:

We have lingered long enough on the shores of the cosmic ocean.  We are ready at last to set sail for the stars.

Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:

So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition.  This, in turn, will generate more interest in one day sending a probe to these distant planets.  There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms.  (page 297)

Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars.  But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.

For instance, the ocean under the ice of the moon Europa is estimated to hold twice the total volume of the earth’s oceans.  And Europa’s interior is continually heated by tidal forces caused by Jupiter’s gravity.

It had been thought that life required sunlight.  But in 1977, life was found on earth, deep under water in the Galapagos Rift.  Energy from undersea volcano vents provided enough energy for life.  Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcano vents.  Some of the most primitive forms of DNA have been found on the bottom of the ocean.

In the future, new types of space satellite may be able to detect not only radiation from colliding black holes, but also even new information about the Big Bang – a singularity involving extreme density and temperature.  Kaku:

At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty.  In one scenario, our universe is a huge bubble of some sort that is continually expanding.  We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper).  But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath.  Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation).  Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion.  (page 301)

Space travel is very expensive.  It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon.  It costs much more to send a person to Mars.

Robotic missions are far cheaper than manned missions.  And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.

Kaku next describes a mission to Mars:

Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission.  But getting to Mars will be much more difficult than reaching the moon.  In contrast to the moon, Mars represents a quantum leap in difficulty.  It takes only three days to reach the moon.  It takes six months to a year to reach Mars.

In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like.  Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.

Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station.  To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars.  With no air, soil, or water, everything must be brought from earth.  It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars.  The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth.  Any rip in a space suit would create rapid depressurization and death.  (page 312)

Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth.  The temperature never goes above the melting point of ice.  And the dust storms are ferocious and often engulf the entire planet.

Eventually astronauts may be able to terraform Mars to make it more hospitable for life.  The simplest approach would be to inject methane gas into the atmosphere, which might be able to trap sunlight thereby raising the temperature of Mars above the melting point of ice.  (Methane gas is an even more potent greenhouse gas than carbon dioxide.)  Once the temperature rises, the underground permafrost may begin to thaw.  Riverbeds would fill with water, and lakes and oceans might form again.  This would release more carbon dioxide, leading to a positive feedback loop.
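The arithmetic of such a positive feedback loop can be sketched with invented numbers.  As long as each degree of warming echoes back less than one additional degree of outgassing-driven warming, the loop converges to an amplified but finite total rather than running away:

```python
# Toy positive-feedback loop (illustrative numbers only): each degree of
# warming releases gas that causes k more degrees of warming.
k = 0.5             # fraction of each degree "echoed" back as further warming
initial_push = 1.0  # degrees from the initial methane injection

total, step = 0.0, initial_push
for _ in range(50):
    total += step
    step *= k       # each round of outgassing is weaker than the last
print(f"total warming: {total:.2f} degrees")  # converges toward 1/(1-k) = 2.0
```

With k = 0.5, a 1-degree push is amplified into roughly 2 degrees of eventual warming; with k at or above 1, the series diverges, which is what "runaway" feedback means.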

Another possible way to terraform Mars would be to deflect a comet towards the planet.  Comets are made mostly of water ice.  A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.

The polar regions of Mars are made of frozen carbon dioxide and ice.  It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps.  This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.

Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars.  They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide.  They could also be genetically modified to maximize their growth on Mars.  These algae pools could accelerate terraforming in several ways.  First, they could convert carbon dioxide into oxygen.  Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun.  Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet.  Fourth, the algae can be harvested for food.  Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen.  (page 315)

Scientists have also considered the possibility of building solar satellites around Mars, causing the temperature to rise and the permafrost to begin melting, setting off a positive feedback loop.

2070 to 2100:  A Space Elevator and Interstellar Travel

Near the end of the century, scientists may finally be able to construct a space elevator.  With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky.  Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.

One challenge is to create a carbon nanotube cable that is 50,000 miles long.  Another challenge is that space satellites in orbit travel at 18,000 miles per hour.  If a satellite collided with the space elevator, it would be catastrophic.  So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.

Another challenge is turbulent weather on earth.  The space elevator must be flexible enough, perhaps anchored to an aircraft carrier or oil platform.  Moreover, there must be an escape pod in case the cable breaks.
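The orbital mechanics behind the cable length can be sketched.  A cable anchored at the equator stays taut only if it extends well past geostationary altitude, where an orbit's period matches Earth's rotation; the portion beyond that altitude is flung outward and keeps the whole cable under tension.  Here is a back-of-the-envelope calculation of that altitude from standard constants (a sketch, not a figure from the book):

```python
import math

# Geostationary orbit: gravity supplies exactly the centripetal
# acceleration for an orbit whose period equals one sidereal day.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of Earth, kg
T = 86164.0     # sidereal day, s

# From G*M/r^2 = (2*pi/T)^2 * r  =>  r = (G*M*T^2 / (4*pi^2))^(1/3)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)
altitude_km = (r - 6.371e6) / 1000   # subtract Earth's radius

print(f"geostationary radius: {r / 1000:.0f} km")
print(f"altitude above surface: {altitude_km:.0f} km")   # about 36,000 km
```

Geostationary altitude is roughly 22,000 miles up, which is why proposals call for a cable of 50,000 miles or more: the extra length beyond geostationary orbit acts as the counterweight.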

Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt.  The next goal would be travelling to a star.  A conventional chemical rocket would take 70,000 years to reach the nearest star.  But there are several proposals for an interstellar craft:

  • solar sail
  • nuclear rocket
  • ramjet fusion
  • nanoships

Although light has no mass, it has momentum and so can exert pressure.  The pressure is super tiny.  But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft.  The solar sail would likely be miles wide.  The craft would have to circle the sun for a few years, gaining more and more momentum.  Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
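How tiny is the pressure?  The standard radiation-pressure formula for a perfectly reflecting sail is F = 2IA/c.  The sail area and spacecraft mass below are illustrative assumptions, not numbers from the book:

```python
# Radiation-pressure force on a perfectly reflecting sail: F = 2*I*A/c
I = 1361.0       # solar intensity near Earth's orbit, W/m^2
c = 2.998e8      # speed of light, m/s
A = 1609.0 ** 2  # assumed sail one mile on a side, m^2
mass = 1000.0    # assumed spacecraft mass, kg

force = 2 * I * A / c   # newtons
accel = force / mass    # m/s^2

print(f"force: {force:.1f} N")
print(f"acceleration: {accel * 1000:.1f} mm/s^2")
```

An acceleration measured in millimeters per second squared is negligible next to a rocket's, but applied continuously for years it compounds, which is why the craft would circle the sun gaining momentum before spiraling outward.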

Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could generate enough power.  One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters.  It would have been powered by 1,000 hydrogen bombs.  (This also would have been a good way to get rid of atomic bombs meant only for warfare.)  Unfortunately, the Nuclear Test Ban Treaty in 1963 meant the scientists couldn’t test Orion.  So the project was set aside.

A ramjet engine scoops in air in the front, mixes it with fuel, which then ignites and creates thrust.  In 1960, Robert Bussard had the idea of scooping not air but hydrogen gas, which is everywhere in outer space.  The hydrogen gas would be squeezed and heated by electric and magnetic fields until the hydrogen fused into helium, releasing enormous amounts of energy via nuclear fusion.  With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.

Bussard calculated that a 1,000-ton ramjet fusion engine could reach 77 percent of the speed of light after one year.  This would allow it to reach the Andromeda galaxy, which is 2,000,000 light-years away, in just 23 years as measured by the astronauts on the starship.  (We know from Einstein’s theory of relativity that time slows down significantly for those traveling at such a high percentage of the speed of light.  But meanwhile, on earth, millions of years will have passed.)
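Both figures follow from the standard relativistic-rocket formulas of special relativity.  The sketch below assumes a constant 1 g of proper acceleration with no deceleration phase (these are textbook results, not calculations from the book, so they only roughly match the figures Kaku cites):

```python
import math

C_OVER_G_YEARS = 0.9687   # c divided by 1 g, expressed in years

def speed_after(tau_years):
    """Fraction of c reached after tau_years of ship time at 1 g:
    v/c = tanh(a * tau / c)."""
    return math.tanh(tau_years / C_OVER_G_YEARS)

def ship_years(distance_ly):
    """Ship time in years to cover distance_ly light-years at a constant
    1 g proper acceleration, with no deceleration phase:
    tau = (c/a) * acosh(1 + a*d/c^2)."""
    return C_OVER_G_YEARS * math.acosh(1 + distance_ly / C_OVER_G_YEARS)

print(f"speed after 1 ship year: {speed_after(1.0):.0%} of c")  # about 77%
print(f"ship time to Andromeda:  {ship_years(2e6):.0f} years")
```

The 77-percent figure drops straight out of the first formula; the ship time to Andromeda comes out on the order of 15 to 30 years depending on whether the craft also decelerates to arrive, the same ballpark as Bussard's estimate.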

Note that there are still engineering questions about the ramjet fusion engine.  For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space.  Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.

Another possibility is antimatter rocket ships.  If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel.  Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.

Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars.  These nanoships might become cheap enough to produce and to fuel.  They might even be self-replicating.

Millions of nanoships could gather intelligence like a “swarm” does.  For instance, a single ant is super simple.  But a colony of ants can create a complex ant hill.  A similar concept is the “smart dust” considered by the Pentagon.  Billions of particles, each a sensor, could be used to gather a great deal of information.

Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light.  Moreover, scientists may be able to create one or a few self-replicating nanoprobes.  Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.



Kaku writes:

All the technological revolutions described here are leading to a single point:  the creation of a planetary civilization.  This transition is perhaps the greatest in human history.  In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos.  Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate.  (pages 378-379)

In 1964, Russian astrophysicist Nikolai Kardashev was interested in probing outer space for signals sent from advanced civilizations.  So he proposed three types of civilization:

  • A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
  • A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
  • A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).
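Sagan later made the scale continuous by logarithmic interpolation, K = (log10(P) − 6) / 10 with P in watts; that is where fractional types such as 0.7 come from.  A quick check (the present-day power figure below is an assumed order of magnitude):

```python
import math

def kardashev(power_watts):
    """Sagan's continuous version of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's total power consumption is on the order of 2e13 W (assumption).
print(f"humanity today: Type {kardashev(2e13):.2f}")   # roughly Type 0.73
# Sagan's normalization puts Type I at 10^16 W, close to the
# planetary figure listed above.
print(f"10^16 W:        Type {kardashev(1e16):.1f}")
```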

Kaku explains:

The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations.  Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies.  (page 381)

Carl Sagan has calculated that we are a Type 0.7 civilization, not quite Type I yet.  There are signs, says Kaku, that humanity will reach Type I in a matter of decades.

  • The internet allows a person to connect with virtually anyone else on the planet effortlessly.
  • Many families around the world have middle-class ambitions:  a suburban house and two cars.
  • The criterion for being a superpower is not weapons, but economic strength.
  • Entertainers increasingly consider the global appeal of their products.
  • People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
  • The news is becoming planetary.
  • Soccer and the Olympics are emerging to dominate planetary sports.
  • The environment is debated on a planetary scale.  People realize they must work together to control global warming and pollution.
  • Tourism is one of the fastest-growing industries on the planet.
  • War has rarely occurred between two democracies.  A vibrant press, oppositional parties, and a solid middle class tend to ensure that.
  • Diseases will be controlled on a planetary basis.

A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun goes supernova.  Or we may be able to keep the sun from exploding.  (Or we might be able to change the orbit of our planet.)  Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere.  Also, we probably will have colonized not just the entire solar system, but nearby stars.

By the time we become a Type III civilization, we will have explored most of the galaxy.  We may have done this using self-replicating robot probes.  Or we may have mastered Planck energy (10^19 billion electron volts).  At this energy, space-time itself becomes unstable.  The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time.  By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time.  As a result, a Type III civilization might be able to colonize the entire galaxy.

It’s possible that a more advanced civilization has already visited or detected us.  For instance, they may have used tiny self-replicating probes that we haven’t noticed yet.  It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type I.

Kaku writes that many people are not aware of the historic transition humanity is now making.  But this could change if we discover evidence of intelligent life somewhere in outer space.  Then we would consider our level of technological evolution relative to theirs.

Consider the SETI Institute.  This is from their website:

SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.

Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets.  Whether evolution will give rise to intelligent, technological civilizations is open to speculation.  However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.

Finding evidence of other technological civilizations however, requires significant effort.  Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.

Work at the Center is divided into two areas:  Research and Development (R&D) and Projects.  R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects.  The algorithms and technology developed in the lab are first field-tested and then implemented during observing.  The observing results are used to guide the development of new hardware, software, and observing facilities.  The improved SETI observing Projects in turn provide new ideas for Research and Development.  This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.

Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is.  A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible.  If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:



Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Bogle on Index Funds

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 6, 2017

Ultra-low-cost index funds tend to be exceptionally good long-term investments.  It’s not just that, on an annual basis, index funds typically do better than 60-80% of all funds.  It’s that index funds very consistently do better.  Consistently outperforming 60-80% of all funds annually virtually guarantees that index funds will beat at least 90-95% of all funds over the course of several decades or more.  It’s just a matter of simple arithmetic, as Bogle has noted.  Moreover, the past several decades illustrate this result (see Brute Facts below).
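That arithmetic can be illustrated with a toy Monte Carlo.  All the parameters below (costs, return, noise) are assumptions chosen for illustration, not Bogle's data; the point is only that a small, persistent annual cost edge compounds into near-certain long-run outperformance:

```python
import random

random.seed(0)

def fraction_beaten(years=30, n_funds=1000, market=0.07,
                    fund_cost=0.02, index_cost=0.0005, noise=0.06):
    """Toy model: every fund earns the market return minus its costs
    plus idiosyncratic noise each year; the index fund earns the
    market return minus a tiny fee.  Returns the fraction of funds
    the index fund beats on cumulative wealth."""
    index_wealth = (1 + market - index_cost) ** years
    beaten = 0
    for _ in range(n_funds):
        wealth = 1.0
        for _ in range(years):
            wealth *= 1 + market - fund_cost + random.gauss(0, noise)
        if wealth < index_wealth:
            beaten += 1
    return beaten / n_funds

# With these assumptions the index beats well over 90% of funds over 30
# years, even though in any single year it beats a much smaller share.
print(f"index beats {fraction_beaten():.0%} of funds over 30 years")
```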

If you’re a long-term investor, then by investing in index funds, you are likely to beat at least 90-95% of all investors, net of costs, over time.  Investing in index funds is the best long-term investment for the vast majority of investors, as Warren Buffett—one of the greatest investors ever—has often noted.  See:

Jack Bogle’s Don’t Count on It! (Wiley, 2011) is a collection of his writings on a variety of topics including capitalism, entrepreneurship, indexing, idealism, and heroes.  It’s a long book (586 pages), but well worth reading.  Below is my brief summary of Chapter 18 (pages 369-392).



The main reason that index funds generally beat at least 90-95% of all investors over time is ultra-low costs.  Bogle:

…we don’t need to accept the EMH [Efficient Market Hypothesis] to be index believers.  For there is a second reason for the triumph of indexing, and it is not only more compelling but unarguably universal.  I call it the CMH—the Cost Matters Hypothesis—and not only is it all that is needed to explain why indexing must and does work, but it in fact enables us to quantify with some precision how well it works.  Whether or not the markets are efficient, the explanatory power of the CMH holds.  (page 371)

Bogle further explains:

…The mathematical expectation of the speculator is not zero;  it is a loss equal to the amount of transaction costs incurred.

So, too, the mathematical expectation of the long-term investor also is a shortfall to whatever returns our financial markets are generous enough to provide.  Indeed the shortfall can be described as precisely equal to the costs of our system of financial intermediation—the sum total of all those advisory fees, marketing expenditures, sales loads, brokerage commissions, transaction costs, custody and legal fees, and securities processing expenses.  Intermediation costs in the U.S. equity market may well total as much as $250 billion a year or more.  If today’s $13 trillion stock market were to provide, say, a 7 percent annual return ($910 billion), costs would consume more than a quarter of it, leaving less than three-quarters of the return for the investors—those who put up 100 percent of the capital.  We don’t need the EMH to explain the dire odds that investors face in their quest to beat the stock market.  We need only the CMH.  Whether markets are efficient or inefficient, investors as a group must fall short of the market return by the amount of the costs they incur.  (page 372)
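Bogle's "more than a quarter" is simple division over his stated estimates:

```python
market_value = 13e12          # Bogle's figure for the U.S. equity market, $
gross_return = 0.07           # his assumed annual return
intermediation_costs = 250e9  # his estimate of annual costs, $

gross_dollars = market_value * gross_return        # $910 billion
cost_share = intermediation_costs / gross_dollars  # fraction consumed

print(f"gross return: ${gross_dollars / 1e9:.0f} billion")
print(f"costs consume {cost_share:.0%} of the return")   # about 27%
```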



Bogle recounts:

Our introduction of First Index Investment Trust was greeted by the investment community with derision.  It was dubbed ‘Bogle’s Folly,’ and described as un-American, inspiring a widely circulated poster showing Uncle Sam calling on the world to ‘Help Stamp Out Index Funds’… Fidelity Chairman Edward C. Johnson led the skeptics, assuring the world that Fidelity had no intention of following Vanguard’s lead:  ‘I can’t believe that the great mass of investors are going to be satisfied with just receiving average returns.  The name of the game is to be the best.’  (Fidelity now runs some $38 billion in indexed assets.)  (pages 375-376)

Of course, all investors would like to get the best returns if possible.  Yet, by definition, investors on the whole will get average results.  But that is before costs.

After costs, the average investor will get less than the market returns.  And the amount of the shortfall will precisely equal the costs.



Bogle examines the long-term performance of mutual funds:

…In 1970, there were 355 equity mutual funds, and we have now had more than three decades over which to measure their success.  We’re first confronted with an astonishing—and important—revelation:  Only 147 funds survived the period.  Fully 208 of those funds vanished from the scene, an astonishing 60 percent failure rate…

Now let’s look at the records of the survivors—doubtless the superior funds of the initial group.  Yet fully 104 of them fell short of the 11.3 percent average annual return achieved by the unmanaged S&P 500 Index.  Just 43 funds exceeded the index return.  If, reasonably enough, we describe a return that comes within plus or minus a single percentage point of the market as statistical noise, 52 of the surviving funds provided a return roughly equivalent to that of the market.  A total of 72 funds, then, were clear losers (i.e., by more than a percentage point), with only 23 clear winners above that threshold.

If we widen the ‘noise’ threshold to plus or minus two percentage points, we find that 43 of the 50 funds outside that range were inferior and only 7 superior—a tiny 2 percent of the 355 funds that began the period…

But I believe the evidence actually overrates the long-term achievement of the seven putatively successful funds.  Is the obvious credibility of those superior records in fact credible?  I’m not so sure.  Those winning funds have much in common.  First, each was relatively unknown (and relatively unowned by investors) at the start of the period.  Their assets were tiny, with the smallest at $1.9 million, the median at $9.8 million, and the largest at $59 million.  Second, their best returns were achieved during their first decade, and resulted in enormous asset growth, typically from those little widows’ mites at the start of the period to $5 billion or so at the peak, before performance started to deteriorate.  (One fund actually peaked at $105 billion!)  Third, despite their glowing early records, most have lagged the market fairly consistently during the past decade, sometimes by a substantial amount… The pattern for five of the seven funds is remarkably consistent:  a peak in relative return in the early 1990s, followed by annual returns of the next decade that lagged the market’s return by about three percentage points per year—roughly, S&P 500 +12 percent, mutual fund +9 percent.

In the field of fund management it seems apparent that ‘nothing fails like success’… For the vicious circle of investing—good past performance draws large dollars of inflow, and having large dollars to manage crimps the very ingredients that were largely responsible for the good performance—is almost inevitable in any winning field.  So even if an investor was smart enough or lucky enough to have selected one of the few winning funds at the outset, selecting such funds by hindsight—after their early success—was also largely a loser’s game.  Whatever the case, the brute evidence of the past three decades makes a powerful case against the quest to find the needle in the haystack.  Investors would be better served by simply owning, through an index fund, the market haystack itself.  (pages 378-380)

Bogle continues:

In the field of investment management, relying on past performance simply has not worked.  The past has not been prologue, for there is little persistence in fund performance.  A recent study of equity mutual fund risk-adjusted returns during 1983-2003 reflected a randomness in performance that is virtually perfect.  A comparison of fund returns in the first half to the second half of the first decade, in the first half to the second half of the second decade, and in the first full decade to the second full decade makes the point clear.  Averaging the three periods shows that 25 percent of the top-quartile funds in the first period found themselves in the top quartile in the second—precisely what chance would dictate.  Almost the same number of top-quartile funds—23 percent—tumbled to the bottom quartile, again a close-to-random outcome.  In the bottom quartile, 28 percent of the funds mired there during the first half remained there in the second, while slightly more—29 percent—had actually jumped to the top quartile.

…Simply picking the top-performing funds of the past fails to be a winning strategy.  What is more, even when funds succeed in outpacing their peers, they still have a way to go to match the return of the stock market index itself.  (pages 381-382)



Bogle writes:

…What do the proponents of active management point to?  Themselves!  ‘We can do it better.’  ‘We have done it better.’  ‘Just buy the (inevitably superior performing) funds that we advertise.’  It turns out, then, that the big idea that defines active management is that there is no big idea.  Its proponents offer only a few good anecdotes of the past and promises for the future.

Also, it turns out that there is in fact one big idea that can be generalized without contradiction.  Cost is the single statistical construct that is highly correlated with future investment success.  The higher the cost, the lower the return.  Equity fund expense ratios have a negative correlation coefficient of -0.61 with equity fund returns.  In the fund business, you get what you don’t pay for.  You get what you don’t pay for!

If we simply aggregate funds by quartile, this correlation jumps right out at us.  During the decade ended November 30, 2003, the lowest-cost quartile of funds provided an average annual return of 10.7 percent;  the second-lowest, 9.8 percent;  the second-highest, 9.5 percent;  and the highest quartile, 7.7 percent—the difference of fully three percentage points per year between the high and low quartiles, equal to a 30 percent increase in annual return!  The same pattern holds irrespective of manager style or market capitalization.  But of course, with index funds carrying by far the lowest costs in the industry, there are few, if any, promotions by active managers of the undeniable relationship between cost and value.  (pages 385-386)



Bogle explains why index funds have succeeded in beating nearly all other funds over the course of several decades or more:

The reasons for that success are the essence of simplicity:  (1) the broadest possible diversification, often subsuming the entire U.S. stock market;  (2) a focus on the long-term, with minimal, indeed nominal, portfolio turnover (say, 3% to 5% annually);  and (3) rock-bottom cost, with neither advisory fees nor sales loads, and minimal operating expenses….

…While fund costs essentially represent the difference between success and failure for investors who seek to accumulate assets, they have gone up as index fees have come down.  The initial expense ratio of our 500 Index Fund was 0.43 percent, compared to 1.40 percent for the average equity fund.  Today, it is 0.18 percent or less, while the ratio for the average equity fund has risen to 1.58 percent.  Add in turnover costs and sales commissions and the all-in cost of the average fund is at least 2.5 percent, suggesting a future annual index fund advantage of at least 2.3 percent per year.  (page 387)



Bogle concludes:

Now think of this in personal terms.  What difference would an index fund make in your own retirement plan over, say, 40 years?  Well, let’s postulate a future long-term annual return of 8 percent on stocks.  If we assume that mutual fund costs continue at their present level of at least 2.5 percent a year, an average mutual fund might return 5.5 percent.  Extending this tax-deferred compounding out in time on your investment of $3,000 each year over 40 years, an investment in the stock market itself would grow to $840,000, with the market index fund not far behind.  Your actively managed mutual fund would produce $430,000—only a little more than one-half as much.

Looked at from a different perspective, your retirement plan has earned a value of $840,000 before costs, and donated $410,000 of that total to the mutual fund industry.  You have kept the remainder – $430,000.  The financial system has consumed 48 percent of the return, and you have achieved but 52 percent of your earning potential.  Yet it was you who provided 100 percent of the initial capital;  the industry provided none.  Confronted by the issue in this way, would an intelligent investor consider this split to represent a fair shake?  Merely to ask the question is to answer it:  ‘No.’  (pages 391-392)
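Bogle's $840,000-versus-$430,000 comparison is reproducible with a simple compounding loop (assuming, as his numbers imply, contributions at the start of each year):

```python
def future_value(annual_contribution, rate, years):
    """Future value of contributions made at the start of each year
    (an annuity-due), compounded annually."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

market = future_value(3000, 0.08, 40)   # the market's 8 percent
fund = future_value(3000, 0.055, 40)    # 8 percent minus ~2.5 percent of costs

print(f"market index: ${market:,.0f}")   # about $840,000
print(f"managed fund: ${fund:,.0f}")     # about $430,000
print(f"investor's share of the gross result: {fund / market:.0%}")
```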





A Man for All Markets

(Image:  Zen Buddha Silence by Marilyn Barbone.)

July 30, 2017

A Man for All Markets: From Las Vegas to Wall Street, How I Beat the Dealer and the Market (Random House, 2017) is the autobiography of Edward O. Thorp, a remarkable person.  Here’s the beginning:

Join me in my odyssey through the worlds of science, gambling, and the securities markets.  You will see how I overcame risks and reaped rewards in Las Vegas, Wall Street, and life.  On the way, you will meet interesting people from blackjack card counters to investment experts, from movie stars to Nobel Prize winners.  And you’ll learn about options and other derivatives, hedge funds, and why a simple investment approach beats most investors in the long run, including experts.

The simple approach to which Thorp refers is investing in ultra-low-cost index funds.  Thorp’s view here is similar to Warren Buffett’s:



Even as a young child, Thorp loved learning.  And he especially loved testing ideas by doing experiments:

A trait that showed up about this time was my tendency not to accept anything I was told until I had checked it out for myself.

From the beginning, I loved learning through experimentation and exploration how my world worked.

Thorp also demonstrated awesome powers of concentration:

When I was reading or just thinking, my concentration was so complete that I lost all awareness of my surroundings.

Thorp was influenced by a few great teachers, including Jack Chasson:

Jack was twenty-seven then, with wavy brown hair and the classic good looks of a Greek god.  He had a ready, warm smile and a way of saying something that boosted the self-esteem of everyone he met… my first great teacher…



Thorp became fascinated by radio and electronics.  The ability to hear voices from the air amazed him:

The mechanical world of wheels, pulleys, pendulums, and gears was ordinary.  I could see, touch, and watch it in action.  But this new world was one of invisible waves that traveled through space.  You had to figure out through experiments that it was actually there and then use logic to grasp how it worked.

Eventually this led Thorp to think things through for himself and also to design experiments:

I was learning to work things out for myself, not limited to prompting from teachers, parents, or the school curriculum.  I relished the power of pure thought, combined with the logic and predictability from science.  I loved visualizing an idea and then making it happen.

Learning, thinking, and experimenting were all great fun for Thorp, leading him to contemplate becoming a scientist at a university:

An academic life was becoming my dream.  I liked all the science experiments I was doing and the knowledge they led to.  If I could have a career continuing this kind of playing, I would be very happy.  And the way to have that kind of life was by joining the academic world where they had the laboratories, the kinds of experiments and projects I enjoyed, and maybe the chance to work with other people like me.

In the summer of 1948, Thorp read through a list of 60 great novels, mostly American literature but also including some foreign authors like Dostoyevski and Stendhal.  Thorp’s teacher Jack Chasson had given him the list and then lent him the books from his personal library.



Thorp was the number one student in his chemistry class, but lost that position when he was cheated.  When the mistake was not corrected, Thorp changed his major to physics:

This rash decision, which led me to change my school and my major subject, would change my whole path in life.  In hindsight, it turned out for the best, as my interests and my future were in physics and mathematics.

In graduate school, after transferring from Berkeley to UCLA, Thorp completed all the course work for the PhD and was halfway through his thesis on the structure of atomic nuclei.  All he had to do was finish the thesis work and pass a final oral exam.  But he would have to learn much more mathematics in order to finish the complex quantum mechanical calculations.  Thorp realized that he could earn a PhD in mathematics much sooner than he would likely be able to earn a PhD in physics.  So he got the PhD in mathematics.

While in graduate school, Thorp had become re-acquainted with Vivian Sinetar.  Thorp says he was lucky she was still single, despite family pressure to marry.  Also, Vivian, whose parents were immigrant Hungarian Jews, would be the first in her family to marry outside the Jewish faith.  Fortunately, Vivian’s parents liked Thorp even though he was an academic rather than a doctor or a lawyer.



Ed Thorp and his wife Vivian spent one Christmas vacation in Las Vegas because the city had turned itself into a bargain vacation spot (to attract gamblers).  The city was different at that time:

Back then the long, straight, uncrowded highway had a dozen or so one-story hotel-casino complexes scattered on either side with hundreds of yards of sand and tumbleweeds separating them.

Just before this trip to Vegas, Thorp had learned from a colleague what is now called basic strategy for blackjack.  This strategy gave the player the smallest statistical disadvantage – 0.62 percent – of any casino game.  Thorp thought he would have fun by risking a few dollars trying out basic strategy.

Before this trip, Thorp had already realized that roulette could be beaten.  Why not blackjack?

The belief that casinos must come out ahead in the long run was supported by conventional wisdom, which argued that if blackjack could be beaten, the casinos would have to either change the rules or drop the game.  Neither had happened.  But, confident from my experiments that I could predict roulette, I wasn’t willing to accept these claims about blackjack.  I decided to check for myself if the player could systematically win.



Thorp explains:

It wasn’t the money that drew me to blackjack.  Though we could certainly use extra dollars, Vivian and I expected to lead the usual low-budget academic life.  What intrigued me was the possibility that merely by sitting in a room and thinking, I could figure out how to win.

Back from vacation, Thorp went to the section of the UCLA library where the mathematical and statistical research journals were kept.

I started with the fact that the strategy I had used in the casino assumed that every card had the same chance of being dealt as any other during play.  This cut the casino’s edge to just 0.62 percent, the best odds of any game being offered.  But I realized that the odds as the game progressed actually depended on which cards were still left in the deck and that the edge would shift as play continued, sometimes favoring the casino and sometimes the player.  The player who kept track could vary his bets accordingly. 

The player would keep his bets small when the casino had the advantage, which was most of the time.  But the player would bet much more when the odds were in his favor.  Over a large number of hands, the casino would win most of the small bets, but the player would win most of the big bets.  As long as the deal was fair—otherwise the player should learn to quit right away—Thorp’s strategy would be profitable over time.

Thorp began to do calculations to see how the player’s advantage changed based on which cards had already been played.  Thorp figured out that what mattered was the proportion of each type of card left as a percentage of the total number of cards left.

When Thorp started teaching mathematics at MIT, he had access to an IBM 704 computer, which he used to test his blackjack approximations.  Next he used the computer to figure out how the odds changed when all four of a specific card were missing from the remaining deck.  The math also showed that if removing a specific group of cards shifted the odds in one direction, adding an equal number of the same cards would move the odds the other way by the same amount.
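The bookkeeping behind this can be illustrated with a toy version of what Thorp later called the Ten-Count: track the ratio of non-ten cards to ten-value cards remaining in the deck.  (The code below is purely illustrative; it is not Thorp’s actual tables or calculations.)

```python
# Toy illustration of the Ten-Count idea (not Thorp's actual tables).
# A full deck has 36 "other" cards and 16 ten-value cards (10, J, Q, K),
# a ratio of 36/16 = 2.25.  As the ratio falls below 2.25 the remaining
# deck is rich in tens, which favors the player; above 2.25 it favors
# the house.

def ten_count_ratio(cards_seen):
    """cards_seen: list of ranks already dealt, e.g. [10, 5, 'K', 2]."""
    tens_seen = sum(1 for c in cards_seen if c in (10, 'J', 'Q', 'K'))
    others_seen = len(cards_seen) - tens_seen
    tens_left = 16 - tens_seen
    others_left = 36 - others_seen
    if tens_left == 0:
        return float('inf')   # no tens left: maximally bad for the player
    return others_left / tens_left

# Fresh deck: neutral.
print(ten_count_ratio([]))                 # 2.25
# Several small cards gone: deck is ten-rich, so the player bets more.
print(ten_count_ratio([2, 3, 4, 5, 6]))   # 31/16 = 1.9375
```

This is the mechanism behind varying the bets: a single number, updated as cards appear, summarizes whether the remaining deck favors the player or the house.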

Eventually Thorp was able to calculate the player advantage based on the specific cards that had been played.  He decided to publish his results in Proceedings of the National Academy of Sciences.  But he needed a member of the academy to approve and forward his work.  The only mathematics member of the academy at MIT was Claude Shannon.  Shannon was famous for the invention of information theory, the foundation of modern computing.

To Thorp’s surprise, Shannon was fascinated by Thorp’s ideas.  A few minutes became an hour and a half of animated dialogue.  Shannon said Thorp had likely made a theoretical breakthrough.  But he suggested the paper be titled “A Favorable Strategy for Twenty-One” instead of “A Winning Strategy for Blackjack.”

Then Shannon asked Thorp if he was working on anything else involving games of chance.  Thorp told him about his idea that roulette was predictable.  This led to several more hours of excited conversation.  Shannon and Thorp decided to work together to build a small, wearable computer that could be used to beat roulette.



Thorp had decided to present his blackjack system at the annual meeting of the American Mathematical Society in Washington, DC.  He made this decision because earlier mathematicians (in some cases centuries earlier) seemed to have proven that no casino game could be beaten.  Dick Stewart of The Boston Globe heard about Thorp’s upcoming talk and called him to ask about it.  The newspaper also sent a photographer to take Thorp’s picture.  The next morning Stewart’s article and Thorp’s picture were on the front page.

When the day of the meeting arrived, instead of the usual scholarly audience of forty or fifty, there were hundreds of curious people, including many with sunglasses, pinkie rings, or cigars.  Thorp writes:

In the abstract, life is a mixture of chance and choice.  Chance can be thought of as the cards you are dealt in life.  Choice is how you play them.  I chose to investigate blackjack.  As a result, chance offered me a new set of unexpected opportunities.

Thorp was deluged by offers to back a casino test, ranging from a few thousand dollars to $100,000.  Many were curious about whether Thorp’s system would really work.  Thorp felt he owed his readers proof.

The most promising offer came from two New York multimillionaires.  Thorp called them Mr. X and Mr. Y.  Initially, Thorp was concerned about the dangers of a bankroll provided by strangers.  But Mr. X kept calling, so Thorp finally decided to meet him.

Emanuel “Manny” Kimmel (Mr. X) arrived at Thorp’s residence in a Cadillac with two good-looking young blondes.  Kimmel introduced the two women as his “nieces.”  Kimmel dealt blackjack to Thorp for a couple of hours while asking him about his research.  Then they agreed to plan a trip to Nevada.  As Manny was leaving, he pulled several pearl necklaces from his pocket and offered a strand to Vivian.  The pearls stayed in Thorp’s family and are now worn by his daughter.

Kimmel and his friend (Mr. Y) gave Thorp a bankroll of $100,000.  (This is the equivalent of $800,000 in 2017 dollars.)  But Thorp insisted on starting with only $10,000 in order to prove first that his system worked.

This plan, of betting only at a level at which I was emotionally comfortable and not advancing until I was ready, enabled me to play my system with a calm and disciplined accuracy.  This lesson from the blackjack tables would prove invaluable throughout my investment lifetime as the stakes grew ever larger.

Thorp’s system worked.  But the blackjack player had to understand randomness and odds over a very long series of bets.  The casino would win most of the small bets, and there would also be times when the player was unlucky on bigger bets despite favorable odds.  Over a long enough run, though, the system came out ahead.

…the Ten-Count System had shown moderately heavy losses mixed with ‘lucky’ streaks of the most dazzling brilliance.  I learned later that this was a characteristic of a random series of favorable bets.  And I would see it again and again in real life in both the gambling and the investment worlds.

Note:  Thorp’s system worked as long as the deal was fair most of the time.  But the player had to learn to spot signs of cheating.  The player also had to quit games where losses were happening fast.  (Fast losses usually meant cheating.)

Cheating was so relentless during those days in Las Vegas that I spent as much time learning about the many ways it was being done as I did playing.  Everywhere we went, we reached a point where we were cheated, barred from play, or the dealer reshuffled the cards after every hand.



Thorp and Shannon created a wearable computer that would allow the player to win at roulette.

Thorp was now in a position to win a good deal of money—compared to his salary as a mathematics professor—by playing blackjack and roulette.  But introspection revealed to him that he would enjoy life more as an academic than as a gambler:

I was at a point in life where I could choose between two very different futures.  I could roam the world as a professional gambler winning millions per year.  Switching between blackjack and roulette, I could spend some of the winnings as perfect camouflage by also betting on other games offering a small casino edge, like craps or baccarat.

My other choice was to continue my academic life.  The path I would take was determined by my character, namely, What makes me tick?  As the Greek philosopher Heraclitus said, ‘Character is destiny.’



Thorp writes:

Gambling is investing simplified.  The striking similarities between the two suggested to me that, just as some gambling games could be beaten, it might also be possible to do better than the market averages.  Both can be analyzed using mathematics, statistics, and computers.  Each requires money management, choosing the proper balance between risk and return.  Betting too much, even though each individual bet is in your favor, can be ruinous… On the other hand, playing safe and betting too little means you leave money on the table.  The psychological makeup to succeed at investing also has similarities to that for gambling.  Great investors are often good at both.
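Thorp’s answer to the bet-sizing question was the Kelly criterion, which he used in both his gambling and his investing.  A minimal sketch (the even-money example below is illustrative, not from the book):

```python
def kelly_fraction(p, b):
    """Kelly bet fraction for a bet that wins b-to-1 with probability p.
    f* = (b*p - q) / b, where q = 1 - p.  A negative result means the
    bet has a negative edge: don't bet."""
    q = 1.0 - p
    return (b * p - q) / b

# A 2% edge on an even-money bet (p = 0.51, b = 1):
print(kelly_fraction(0.51, 1.0))   # ≈ 0.02 -> bet about 2% of bankroll
```

Betting more than the Kelly fraction raises the chance of ruin even when each bet is favorable; betting much less leaves money on the table.  That is exactly the trade-off Thorp describes above.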

Thorp made several mistakes when he started investing.  The first stock he bought dropped 50%.  Thorp decided to wait until he could get even.  This happened after four years.  One thing Thorp learned from this experience is to avoid anchoring.

Learn about the anchoring effect here:

Thorp’s second mistake was investing based on momentum.  It didn’t work.  Thorp learned not to expect momentum to continue unless you have good reasons to think it will.

Thorp’s third mistake was to buy silver on margin.  Initially silver rose and Thorp used the profits to buy even more silver on margin.  Then the silver price dropped, which wiped Thorp out because he was on margin. After that, silver started going up again, but Thorp had already lost his whole investment due to his use of margin.  This experience taught Thorp about proper risk management.

Thorp learned how to invest in undervalued warrants while hedging the position:

To form a hedge, take two securities whose prices tend to move together, such as a warrant and the common stock it can be used to purchase, but which are comparatively mispriced.  Buy the relatively underpriced security and sell short the relatively overpriced security.  If the proportions in the position are chosen well, then even though prices fluctuate, the gains and losses on the two sides will approximately offset or hedge each other.  If the relative mispricing between the two securities disappears as expected, close the position in both and collect a profit.
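Here is a minimal numeric sketch of the offsetting behavior Thorp describes.  The hedge ratio, position sizes, and price moves are all hypothetical, chosen only to show the mechanics:

```python
# Hypothetical warrant hedge: suppose the warrant tends to move about
# $0.50 for every $1.00 move in the stock (a hedge ratio of 0.5).
# Buying 200 underpriced warrants and shorting 100 shares roughly
# cancels market moves, leaving the position to profit as the
# relative mispricing disappears.

hedge_ratio = 0.5                                     # assumed sensitivity
warrants_bought = 200
shares_shorted = int(warrants_bought * hedge_ratio)   # 100 shares

def pnl(stock_move):
    """Combined profit/loss of the hedge for a given stock price move."""
    warrant_move = hedge_ratio * stock_move
    long_side = warrants_bought * warrant_move        # long warrants
    short_side = -shares_shorted * stock_move         # short stock
    return long_side + short_side

for move in (-2.0, 0.0, 3.0):
    print(move, pnl(move))   # market moves net out to 0 either way
```

With the market exposure hedged away, the remaining source of profit (or loss) is the convergence of the two prices, which is the mispricing the position was built to capture.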

Thorp figured out a formula for pricing warrants and options.  (A warrant is essentially a call option on a stock, except that warrants are usually issued by the company itself.)  Thorp began investing portfolios for friends and acquaintances.



Ralph Waldo Gerard was an early investor with Thorp.  Previously, Gerard had invested in the Buffett Partnership.  Gerard was related to the father of value investing, Benjamin Graham (Buffett’s teacher and mentor).

Gerard invited Thorp and his wife to his home for dinner with Susie and Warren Buffett.  Buffett is arguably the most successful investor of all time.  But Thorp learned that Buffett had to work extremely hard in order to find a few excellent long-term investments.

By contrast, Thorp’s quantitative, statistical investment strategy seemed much easier than analyzing in detail thousands of companies.  Thorp’s approach would give him more free time to enjoy family and to pursue his academic career.

Later, Buffett invited Gerard and Thorp to his home in Emerald Bay, California for an afternoon of bridge.  Thorp:

Bridge is what mathematicians call a game of imperfect information.  The bidding, which precedes the play of the cards, gives some information about the four concealed hands held by two pairs of players who are opposing each other.  As the cards are played, players use the bidding and the cards they have seen so far to make inferences about who has the remaining unplayed cards.  The stock market also is a game of imperfect information and even resembles bridge in that both have their deceptions.  As in bridge, you do better in the market if you get more information sooner and put it to better use.  It’s no surprise that Buffett, arguably the greatest investor in history, is a bridge addict.

Thorp was impressed by Buffett and made a prediction:

Impressed by Warren’s mind and his methods, as well as his record as an investor, I told Vivian that I believed he would eventually become the richest man in America.  Buffett was an extraordinarily smart evaluator of underpriced companies, so he could compound money much faster than the average investor.  He also could continue to rely mainly on his own talent even as his capital grew to an enormous amount.  Warren furthermore understood the power of compound interest and, clearly, planned to apply it over a long time.

Thorp partnered with a New York stockbroker, Jay Regan, who had studied philosophy at Dartmouth.  Together, they launched Convertible Hedge Associates—later renamed Princeton Newport Partners.  They aimed to raise $5 million, but reached only $1.4 million.  They went ahead anyway.



Princeton Newport Partners (PNP) specialized in hedging convertible securities—warrants, options, convertible bonds and preferreds, and other types of derivative securities.  PNP not only hedged each individual position but also hedged the portfolio against changes in interest rates and changes in the overall market level.  PNP’s near-total reliance on quantitative methods—using mathematical formulas, economic models, and computers—made them the earliest “quants.”

Thorp was motivated to reduce risk:

Influenced by having been born during the Great Depression and by my early investment experiences, I made reducing risk a central feature of my investing approach.

The hedges protected us against losses but at the expense of giving up some of the gains in the big up-markets.

In 1973-1974, each $1,000 invested in the S&P 500 would have shrunk to $618, whereas each $1,000 invested in PNP grew to $1,160.

Thorp’s wife, Vivian, not only raised their three children.  She was also active in local politics, helping reelect a decent congressman.  And Vivian organized and ran a large phone bank that helped elect the first black man to a California statewide office.  Moreover, she influenced many people one on one.

One time, a woman complained to Vivian about “those Jews.”  Vivian was Jewish and had lost several relatives in Nazi concentration camps during World War II.  Ed Thorp:

When she told us about meeting the woman, we expected to hear how she tore her to shreds.  Explaining why she did not, Vivian pointed out that the woman would have learned nothing and simply would have become an enemy.  Vivian patiently educated this basically good person and they became friends for the rest of their lives.

Thorp’s PhD thesis had been in pure mathematics and this continued to be his focus for fifteen years.  Although Thorp loved teaching and research, eventually he resigned his full professorship at the University of California, Irvine.  He felt a sense of loss, but it turned out to be for the best.  Thorp continued his friendships and research collaborations.  He continued to present his work at meetings and publish it in the mathematical, financial, and gambling literature.



Thorp and his colleagues continued to solve problems for valuing derivatives before academics did.  This gave PNP a large edge from 1967 to 1988, when PNP closed.

Hedging with derivatives was a key source of profits for PNP during its entire nineteen years.  Such hedging also became a core strategy for many later hedge funds like Citadel, Stark, and Elliott, which each went on to manage billions.

Some risks cannot be hedged:

There is another kind of risk on Wall Street from which computers and formulas can’t protect you.  That’s the danger of being swindled or defrauded.  Being cheated at cards in the casinos in the 1960s was valuable preparation for the far greater scale of dishonesty I would encounter in the investment world.  The financial press reveals new skulduggery on a daily basis.



PNP’s dream for the 1980s was to expand their expertise into new areas.

Of the scores of indicators we systematically analyzed, several correlated strongly with past performance.  Among them were earnings yield (annual earnings divided by price), dividend yield, book value divided by price, momentum, short interest…, earnings surprise…, purchases and sales by company officers, directors, and large shareholders, and the ratio of total company sales to the market price of the company.  We studied each of these separately, then worked out how to combine them.  When the historical patterns persisted as prices unfolded into the future, we created a trading system called MIDAS (multiple indicator diversified asset system) and used it to run a separate long/short hedge fund (long the “good” stocks, short the “bad” ones).  The power of MIDAS was that it applied to the entire multitrillion-dollar stock market, with the possibility of investing very large sums.

From November 1, 1979 through January 1, 1988, PNP’s capital expanded from $28.6 million to $273 million.  The partnership earned 22.8 percent per year before fees, which meant 18.2 percent per year for limited partners.

Furthermore, PNP invented excellent new products that could allow the fund to manage billions.  They included:

  • State-of-the-art convertible, warrant, and option computerized analytic models and trading systems
  • Statistical arbitrage
  • Expert investments based on interest rates
  • OSM Partners, a “fund of hedge funds”



In the 1970s, less established companies had to scramble for funding.  A young financial innovator named Michael Milken had an idea:

Milken’s group underwrote issues of low-rated, high-yielding bonds—the so-called junk bonds—some of which were convertible or came with warrants to purchase stock… Filling a gaping need and hungry demand in the business community, Milken’s group became the greatest financing engine in Wall Street history.

Such innovation outraged the old line establishment of corporate America, who were initially transfixed like deer in the headlights as a horde of entrepreneurs, funded with seemingly unlimited Drexel-generated cash, began a wave of unfriendly takeovers.  Many old firms were vulnerable because the officers and directors had done a poor job of investing the shareholders’ equity.  With subpar returns on capital, the stocks were cheap…

The officers and directors of America’s big corporations were happy with the way things had been.  They enjoyed their hunting lodges and private jets, made charitable donations for their personal aggrandizement and objectives, and granted themselves generous salaries, retirement plans, bonuses of cash, stock, and stock options, and golden parachutes.  All these things were designed by and for themselves and paid for with corporate dollars, the expenses routinely ratified by a scattered and fragmented shareholder base.  Economists call this conflict of interest between management, or agents, and the shareholders, who are the real owners, the agency problem.  It continues today, one example being the massive continuing grants of stock options by management to itself…

Rudolph Giuliani, U.S. Attorney for the Southern District of New York, was on a campaign to prosecute real and alleged Wall Street criminals.  As a part of his effort to prosecute Michael Milken at Drexel Burnham and Robert Freeman at Goldman Sachs, Giuliani went after Thorp’s partner Jay Regan, who knew both Milken and Freeman well.

Giuliani went after the Princeton office of PNP.  The Newport office, where Thorp and forty others worked, did not have any knowledge of the alleged acts in the Princeton office.  No one at the Newport office was implicated in, or charged with, any wrongdoing in this (or any other) matter.

To apply more pressure, the U.S. Attorney began contacting the limited partners of PNP, subpoenaing them to come to New York and testify before the grand jury.  Thorp explains that the limited partners were passive participants in PNP, so the subpoenas had no real value for Giuliani’s case.  It seems Giuliani wanted to disturb and upset these limited partners so that they might withdraw from PNP.

In the end, convictions for racketeering and tax fraud against a few PNP defendants were thrown out by the Second Circuit Court of Appeals.  Thorp writes:

In January 1992, having achieved their real goal, which was to convict Milken and Freeman, the prosecutors dropped the remaining charges against four of the five PNP defendants and a related charge against the Drexel trader.  Princeton’s head trader and the Drexel defendant were still facing fines and three-month prison terms for their remaining counts.  In September 1992, a federal judge vacated these sentences as well.

Thorp later explains:

The old establishment financiers were lucky in that prosecutors would find numerous violations of securities laws within the Milken group and among its allies, associates, and clients.  However, it is difficult to judge how relatively bad these were, compared with the incessant violations that have always been, and continue to be, endemic in business and finance, because only a few of the many violators  are caught, and when they are prosecuted it may be for only a tiny fraction of their offenses.  This contrasts with the case of Drexel, where the searchlight of government was focused to reveal as many violations as possible.  It’s like the case of the man who was cited three times in a single year for driving while intoxicated.  His neighbor would also drink and drive, but was never pulled over.  Who is the greater criminal?  Now suppose I tell you that the caught man did it only three times and was apprehended every time, whereas his neighbor did it a hundred times and was never caught.  How could this happen?  What if I tell you that the two men are bitter business rivals and that the traffic cop’s boss, the police chief, gets large campaign contributions from the man who got no traffic citations.  Now who is the greater criminal?

Thorp considered launching a partnership that would be similar to PNP.  But he loved the quantitative analysis part of the business, not operations and marketing.  So he decided to wind down the Newport office.



Although the closing of PNP erased billions in future wealth for Thorp and his colleagues, Thorp and his wife had more than enough money to be free to spend their time exactly as they wanted.

Around this time, Thorp uncovered what would prove to be the largest financial fraud in history.  He had been hired to examine some hedge fund investments.  Thorp approved them with one exception: Bernard Madoff Investment.

Madoff claimed to use a split-strike conversion strategy:  he would buy a stock, sell a call option struck above the current price, and use the proceeds to pay for a put option struck below it.

I explained that, according to financial theory, the long-run impact on portfolio returns from many properly priced options with zero net proceeds should also be zero.  So we expect, over time, that the client’s portfolio return should be roughly the same as the return on equities.  The returns Madoff reported were too large to be believed.  Moreover, in months when stocks are down, the strategy should produce a loss—but Madoff wasn’t reporting any losses.  After checking the client’s account statements I found that losing months for the strategy were magically converted to winners by short sales of S&P Index futures.  In the same way, months that should have produced very large wins were ‘smoothed out.’


…At my suggestion, the client then hired my firm to conduct a detailed analysis of their individual transactions to prove or disprove my suspicions that they were fake.  After analyzing about 160 individual option trades, we found that for half of them no trades occurred on the exchange where Madoff said that they supposedly took place.  For many of the remaining half that did trade, the quantity reported by Madoff just for my client’s two accounts exceeded the entire volume reported for everyone.  To check the minority of remaining trades, those that did not conflict with the prices and volumes reported by the exchanges, I asked an official at Bear Stearns to find out in confidence who all the buyers and sellers of the options were.  We could not connect any of them to Madoff’s firm.

Thorp had proved Madoff’s investment operation was a fraud.  Madoff was running a Ponzi scheme.
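Thorp’s red flag shows up in a toy payoff model of the split-strike (collar) position.  The strikes and the zero-net-premium assumption below are illustrative, not Madoff’s claimed parameters:

```python
# Hedged sketch of a split-strike (collar) payoff: long stock at S0,
# short a call struck above, long a put struck below, with the call
# premium assumed to exactly pay for the put (zero net cost).

def collar_payoff(s_end, s0=100.0, put_strike=95.0, call_strike=105.0):
    """Payoff at expiry per share, with premiums netting to zero."""
    stock = s_end - s0
    put = max(put_strike - s_end, 0.0)      # protection kicks in below 95
    call = -max(s_end - call_strike, 0.0)   # give up gains above 105
    return stock + put + call

for s_end in (85.0, 100.0, 120.0):
    print(s_end, collar_payoff(s_end))      # -5.0, 0.0, 5.0
```

The collar caps both sides, but it still loses money in every down month (here, down to a floor of $5 per share).  A fund running this strategy and reporting no losing months is, as Thorp concluded, reporting fiction.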

In 1991, Thorp was seeking a buyer for his firm’s statistical arbitrage software.  This led him to meet Bruce Kovner, a successful commodities trader.

About this time he realized large oil tankers were in such oversupply that the older ones were selling for little more than scrap value.  Kovner formed a partnership to buy one.  I was one of the limited partners.  Here was an interesting option.  We were largely protected against loss because we could always sell the tanker for scrap, recovering most of our investment;  but we had a substantial upside:  Historically, the demand for tankers had fluctuated widely and so had their price.  Within a few years, our refurbished 475,000-ton monster, the Empress Des Mers, was profitably plying the world’s sea-lanes stuffed with oil.  I liked to think of my ownership as a twenty-foot section just forward of the bridge… The Empress Des Mers operated profitably into the twenty-first century, when the saga finally ended.  Having generated a return on investment of 30 percent annualized, she was sold for scrap in 2004, fetching almost $23 million, far more than her purchase price of $6 million.

Thorp discusses traders who always try to save a tiny amount on each trade.  The problem is that the trader may do this successfully twenty times in a row, but then miss a trade that goes up so much that it wipes out the savings on the previous twenty trades.

What the hagglers and the traders do reminds me of the behavioral psychology distinction between two extremes on a continuum of types:  satisficers and maximizers.  When a maximizer goes shopping, looks for a handyman, buys gas, or plans a trip, he searches for the best (maximum) possible deal.  Time and effort don’t matter much.  Missing the very best deal leads to regret and stress.  On the other hand, the satisficer, so-called because he is satisfied with a result that is close to the best, factors in the costs of searching and decision making, as well as the risk of losing a near-optimal opportunity and perhaps never finding anything as good again.

This is reminiscent of the so-called secretary or marriage problem in mathematics.  Assume that you will interview a series of people, from which you will choose one.  Further, you must consider them one at a time, and having once rejected someone, you cannot reconsider.  The optimal strategy is to wait until you have seen about 37 percent of the prospects, then choose the next one you see who is better than anybody among this first 37 percent that you passed over.  If no one is better you are stuck with the last person on the list.
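The 37 percent rule is easy to check by simulation.  A quick sketch (the candidate count and number of trials are arbitrary choices):

```python
import random

def simulate_secretary(n=100, trials=20000, seed=0):
    """Estimate P(best candidate chosen) under the 37% stopping rule:
    pass on the first 37% of candidates, then take the first one who
    beats everyone seen so far (or the last candidate if none does)."""
    rng = random.Random(seed)
    cutoff = int(n * 0.37)
    wins = 0
    for _ in range(trials):
        candidates = list(range(n))        # higher number = better
        rng.shuffle(candidates)
        benchmark = max(candidates[:cutoff])
        chosen = candidates[-1]            # stuck with the last otherwise
        for c in candidates[cutoff:]:
            if c > benchmark:
                chosen = c
                break
        if chosen == n - 1:                # did we get the very best?
            wins += 1
    return wins / trials

print(simulate_secretary())   # ≈ 0.37, matching the 1/e optimum
```

The success probability of roughly 1/e ≈ 37 percent is the well-known optimum for this problem, which is why the cutoff fraction and the success rate coincide.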




…Some exchanges, such as NASDAQ, let HF [High Frequency] traders peek at customer orders ahead of everyone else for thirty milliseconds before the order goes to the exchange.  Seeing an order to buy, for instance, the HF traders can buy first, pushing the stock price up, then resell to the customer at a profit.  Seeing someone’s order to sell, the HF trader sells first, causing the stock to fall, and then buys it back at the lower price.  How is this different from the crime of front-running, described in Wikipedia as ‘the illegal practice of a stock broker executing orders on a security for its own account while taking advantage of advance knowledge of pending orders from its customers’?

Some securities industry spokesmen argue that harvesting this wealth from investors somehow makes the markets more efficient and that ‘markets need liquidity.’  Nobel Prize-winning economist Paul Krugman disagrees sharply, arguing that high-frequency trading is simply a way of taking wealth from ordinary investors, serves no useful purpose, and wastes national wealth because the resources consumed create no social good.

Since the more the rest of us trade, the more we as a group lose to the computers, here’s one more reason to buy and hold rather than trade, unless you have a big enough edge.



Thorp discusses a statistical arbitrage investment project:

The idea of the project was to study how the historical returns of securities were related to various characteristics, or indicators.  Among the scores of fundamental and technical measures we considered were the ratio of earnings per share to price per share, known as the earnings yield, the liquidation or “book” value of the company compared with its market price, and the total market value of the company (its “size”).  Today our approach is well known and widely explored but back in 1979 it was denounced by massed legions of academics who believed market prices already had fully adjusted to such information.  Many practitioners disagreed.  The time was right for our project because the necessary high-quality databases and the powerful new computers with which to explore them were just becoming affordable.

The idea for statistical arbitrage was based on the discovery (by one of Thorp’s researchers) that the stocks that had gone up the most over the previous two weeks did the worst as a group over the ensuing few weeks, while the stocks that had gone down the most over the previous two weeks did the best.
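A minimal sketch of that reversal signal follows.  This is not Thorp’s actual system; the tickers and returns are made up, and a real implementation would also control risk, costs, and position sizing:

```python
# Short-term reversal sketch: rank stocks by trailing two-week return,
# go long the biggest losers and short the biggest winners,
# dollar-neutral.

def reversal_portfolio(two_week_returns, n_side=2):
    """two_week_returns: dict of ticker -> trailing 2-week return.
    Returns (longs, shorts) for a long-losers / short-winners book."""
    ranked = sorted(two_week_returns, key=two_week_returns.get)
    longs = ranked[:n_side]        # worst recent performers
    shorts = ranked[-n_side:]      # best recent performers
    return longs, shorts

returns = {'A': -0.08, 'B': 0.05, 'C': -0.03, 'D': 0.09, 'E': 0.01}
longs, shorts = reversal_portfolio(returns)
print(longs, shorts)   # ['A', 'C'] ['B', 'D']
```

Because the book is long and short in equal measure, its profit depends on the losers rebounding relative to the winners, not on the direction of the overall market.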

In 1994, Thorp launched a new investment partnership, Ridgeline Partners.  Limited partners gained 18 percent per year over eight and a quarter years.

We charged Ridgeline Partners 1 percent per year plus 20 percent of net new profits.  We voluntarily reduced fees during a period when we felt disappointed in our performance.  We gave back more than $1 million to the limited partners.  Some of today’s greedy hedge fund managers might say our return of fees was economically irrational, but our investors were happy and we nearly always had a waiting list.  Ridgeline was closed a large part of the time to new investors, and current partners were often restricted from adding capital.  To maintain higher returns, we sometimes even reduced our size by returning capital to partners.

Instead of charging more fees, Thorp says he sought to treat limited partners as he would wish to be treated if he were in their place.  Thorp closed the fund down in the fall of 2002 because returns had declined due to more hedge funds using statistical arbitrage programs.  More importantly, Ed and Vivian wanted time to travel, read, and learn, and to be with their family.




The consensus of industry studies of hedge fund returns to investors seems to be that, considering the level of risk, hedge funds on average once gave their investors extra return, but this has faded as the industry expanded.  Later analyses say average results are worse than portrayed.  Funds voluntarily report their results to the industry databases.  Winners tend to participate much more than losers.  One study showed that this doubled the reported average annual return for funds as a group from an actual 6.3 percent during 1996-2014 to a supposed 12.6 percent.

The study goes on to point out that if returns over the years are given weights that correspond to the dollars invested, then the returns are ‘only marginally higher than risk-free [U.S. Treasury Bonds] rates of return.’  Another reason that industry reports look better than what investors actually experienced is that they combine the higher-percentage returns from the earlier years, when the total invested in hedge funds was smaller, with the lower-percentage returns later, when the industry managed much more money.

It’s difficult to get an edge picking stocks.  Hedge funds are little businesses just like companies that trade on the exchanges.  Should one be any better at picking hedge funds than we are at picking stocks?

Thorp points out that you will rarely find an investment that is better than an ultra-low-cost index fund over time.  Also, some hedge funds and mutual funds create spectacular records early on but mediocre results when assets under management have grown:

One method that leads to this has also been used to launch new mutual funds.  Fund managers sometimes start a new fund with a small amount of capital.  They then stuff it with hot IPOs (initial public offerings) that brokers give them as a reward for the large volume of business they have been doing through their established funds.  During this process of ‘salting the mine,’ the fund is closed to the public.  When it establishes a stellar track record, the public rushes in, giving the fund managers a huge capital base from which they reap large fees.  The brokers who supplied the hot IPOs are rewarded by a flood of additional business from the triumphant managers of the new fund.  The available volume of hot IPOs is too small to help returns much once the fund gets big, so the track record declines to mediocrity.  However, the fund promoters can use more hot IPOs to incubate yet another spectacularly performing new fund;  and so it goes on.

Like Buffett, Thorp predicts the gradual disappearance of any excess returns produced by hedge funds as a group.  Here is Buffett’s view:




Call any investment that mimics the whole market of listed U.S. securities ‘passive’ and notice that since each of these passive investments acts just like the market, so does a pool of all of them.  If these passive investors together own, say, 15 percent of every stock, then ‘everybody else’ owns 85 percent and, taken as a group, their investments also are like one giant index fund.  But ‘everybody else’ means all the active investors, each of whom has his own recipe for how much to own of each stock and none of whom has indexed.  As Nobel Prize winner Bill Sharpe says, it follows from the laws of arithmetic that the combined holdings of all the active investors also replicates the index.

Reducing risk through diversification is one reason to own an index fund.  An even more important reason is to reduce your costs.  Ultra-low costs are why index funds necessarily outperform the vast majority of investors, especially over the course of several decades.  Thorp explains:

Investors who don’t index pay on average an extra 1 percent a year in trading costs and another 1 percent to what Warren Buffett calls ‘helpers’—the money managers, salespeople, advisers, and fiduciaries that permeate all areas of investing.  As a result of these costs, active investors as a group trail the index by 2 percent or so, whereas the passive investor who selects a no-load (no sales fee), low-expense-ratio (low overhead and low management fee) index fund can pay less than 0.25 percent in fees and trading costs.  From the gambling perspective, the return to an active investor is that of a passive investor plus the extra gain or loss from paying (on average) 2 percent a year to toss a fair coin in some (imaginary) casino.  Taxable active investors do even worse, because a high portfolio turnover means short-term capital gains, which currently are taxed at a higher rate than gains from securities, the sales of which have been deferred for a year.
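Thorp's cost-drag point is easy to verify with compound interest.  The sketch below uses assumed round numbers (a 7% gross market return, 2% all-in active costs, 0.25% index-fund costs) over thirty years:

```python
# Illustrative sketch of the cost-drag arithmetic.  The 7% gross return,
# 2% active cost, and 0.25% index-fund cost are assumed round numbers.
def terminal_wealth(start, gross, annual_cost, years):
    """Compound `start` dollars at the gross return net of annual costs."""
    return start * (1 + gross - annual_cost) ** years

active = terminal_wealth(10_000, 0.07, 0.02, 30)     # roughly $43,200
passive = terminal_wealth(10_000, 0.07, 0.0025, 30)  # roughly $71,000
```

A cost difference of 1.75 percentage points per year compounds into the passive investor ending with roughly two-thirds more wealth after three decades.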

Furthermore, notes Thorp, one way an investor could mimic an index fund is simply to buy a portfolio of at least twenty stocks.  If the choices are randomized, then the returns from this portfolio should track the index over time.  Consider, for instance, that the Dow Jones Industrial Average—comprised of thirty stocks—has closely tracked the S&P 500 Index over time.

Moreover, the portfolio of twenty stocks could be even lower cost than an ultra-low-cost index fund because the 20-stock portfolio likely would not require any trading at all, whereas a broad market ultra-low-cost index fund would have to make minor adjustments over time in order to keep tracking the index.
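A quick way to see why a randomized basket tends to track the index is to simulate it.  This sketch uses hypothetical return data (not a backtest): it draws one year of returns for a 500-stock universe and compares an equal-weighted random 20-stock portfolio to the equal-weighted universe average:

```python
import random
import statistics

random.seed(42)  # deterministic for illustration

# Hypothetical one-year returns for a 500-stock universe
# (mean 7%, standard deviation 20% -- assumed numbers).
universe = [random.gauss(0.07, 0.20) for _ in range(500)]

index_return = statistics.mean(universe)  # equal-weighted "index"
picks = random.sample(universe, 20)       # randomized 20-stock portfolio
portfolio_return = statistics.mean(picks)

# With 20 independent picks, the expected tracking error in any single
# year is about 0.20 / sqrt(20), i.e., roughly 4.5 percentage points.
```

Any single year can deviate by a few percentage points, but averaged over many years the random portfolio's return converges toward the index's, which is the sense in which it "tracks" it.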



Thorp observes that for a perfectly efficient market, one you can’t beat, we expect:

  • All information to be instantly available to many participants.
  • Many participants to be financially rational.
  • Many participants to be able instantly to evaluate all available relevant information and determine the current fair price of every security.
  • New information to cause prices immediately to jump to the new fair price, preventing anyone from gaining an excess market return by trading at intermediate prices during the transition.

Supporters of the EMH (Efficient Market Hypothesis) typically argue that these conditions hold as an approximation.

In the real world of investing, Thorp writes that the market is somewhat inefficient.  In particular:

  • Some information is instantly available to the minority that happen to be listening at the right time and place.  Much information starts out known only to a limited number of people, then spreads to a wider group in stages.  This spreading could take from minutes to months, depending on the situation.  The first people to act on the information capture the gains.  The others get nothing or lose.  (Note:  The use of early information by insiders can be either legal or illegal, depending on the type of information, how it is obtained, and how it’s used.)
  • Each of us is financially rational only in a limited way.  We vary from those who are almost totally irrational to some who strive to be financially rational in nearly all their actions.  In real markets the rationality of the participants is limited.
  • Participants typically have only some of the relevant information for determining the fair price of a security.  For each situation, both the time to process the information and the ability to analyze it generally vary widely.
  • The buy and sell orders that come in response to an item of information sometimes arrive in a flood within a few seconds, causing the price to gap or nearly gap to the new level.  More often, however, the reaction to news is spread out over minutes, hours, days, or months, as the academic literature documents.

These realities tell us how to beat the market, says Thorp:

  • Get good information early.  How do you know if your information is good enough or early enough?  If you are not sure, then it probably isn’t.
  • Be a disciplined rational investor.  Follow logic and analysis rather than sales pitches, whims, or emotion.  Assume you may have an edge only when you can make a rational affirmative case that withstands your attempts to tear it down.  Don’t gamble unless you are highly confident you have the edge.  As Buffett says, ‘Only swing at the fat pitches.’
  • Find a superior method of analysis.  Ones that you have seen pay off for me include statistical arbitrage, convertible hedging, the Black-Scholes formula, and card counting at blackjack.  Other winning strategies include superior security analysis by the gifted few and the methods of the better hedge funds.
  • When securities are known to be mispriced and people take advantage of this, their trading tends to eliminate the mispricing.  This means the earliest traders gain the most and their continued trading tends to reduce or eliminate the mispricing.  When you have identified an opportunity, invest ahead of the crowd.

Thorp sums it up:

Note that market inefficiency depends on the observer’s knowledge.  Most market participants have no demonstrable advantage.  For them, just as the cards in blackjack or the numbers at roulette seem to appear at random, the market appears to be completely efficient.

To beat the market, focus on investments well within your knowledge and ability to evaluate, your ‘circle of competence.’  Be sure your information is current, accurate, and essentially complete.  Be aware that information flows down a ‘food chain,’ with those who get it first ‘eating’ and those who get it late being eaten.  Finally, don’t bet on an investment unless you can demonstrate by logic, and if appropriate by track record, that you have an edge.



Thorp wraps up his book by sharing some of what he learned on his odyssey through science, mathematics, gambling, hedge funds, finance, and investing:

Education has made all the difference for me.  Mathematics taught me to reason logically and to understand numbers, tables, charts, and calculations as second nature.  Physics, chemistry, astronomy, and biology revealed wonders of the world, and showed me how to build models and theories to describe and to predict.  This paid off for me in both gambling and investing.

Education builds software for your brain.  When you’re born, think of yourself as a computer with a basic operating system and not much else.  Learning is like adding programs, big and small, to this computer, from drawing a face to riding a bicycle to reading to mastering calculus.  You will use these programs to make your way in the world.  Much of what I’ve learned came from schools and teachers.  Even more valuable, I learned at an early age to teach myself.  This paid off later on because there weren’t any courses in how to beat blackjack, build a computer for roulette, or launch a market-neutral hedge fund.

I found that most people don’t understand the probability calculations needed to figure out gambling games or to solve problems in everyday life.  We didn’t need that skill to survive as a species in the forests and jungles.  When a lion roared, you instinctively climbed the nearest tree and thought later about what to do next.  Today we often have the time to think, calculate, and plan ahead, and here’s where math can help us make decisions…

Thorp later writes that economists have found one factor that explains a nation’s future economic growth more than any other:  its output of scientists and engineers.  Therefore it’s crucial to have the best education system we can.  It’s essential that we strive to keep talented American-born scientists and engineers in the United States, and that we also seek to keep gifted foreign-born scientists and engineers after they have received advanced degrees in the United States.  Thorp:

To starve education is to eat our seed corn.  No tax today, no technology tomorrow.

Thorp concludes:

Life is like reading a novel or running a marathon.  It’s not so much about reaching a goal but rather about the journey itself and the experiences along the way.  As Benjamin Franklin famously said, ‘Time is the stuff life is made of,’ and how you spend it makes all the difference.

…Whatever you do, enjoy your life and the people who share it with you, and leave something good of yourself for the generations to follow.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:



Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Warren Buffett on Jack Bogle


July 23, 2017

Warren Buffett has long maintained that most investors—large and small—would be best off by simply investing in ultra-low-cost index funds.  Buffett explains his reasoning again in the 2016 Letter to Berkshire Shareholders (see pages 21-25):

Passive investors will essentially match the market over time.  So, argues Buffett, active investors will match the market over time before costs (including fees and expenses).  After costs, active investors will, in aggregate, trail the market by the total amount of costs.  Thus, the net returns of most active investors will trail the market over time.  Buffett:

There are, of course, some skilled individuals who are highly likely to out-perform the S&P over long stretches.  In my lifetime, though, I’ve identified—early on—only ten or so professionals that I expected would accomplish this feat.

There are no doubt many hundreds of people—perhaps thousands—whom I have never met and whose abilities would equal those of the people I’ve identified.   The job, after all, is not impossible.  The problem simply is that the great majority of managers who attempt to over-perform will fail.  The probability is also very high that the person soliciting your funds will not be the exception who does well.

As for those active managers who produce a solid record over 5-10 years, many of them will have had a fair amount of luck.  Moreover, good records attract assets under management.  But large sums are always a drag on performance.



Long Bets is a non-profit seeded by Jeff Bezos.  As Buffett describes in his 2016 Letter to Shareholders, “proposers” can post a proposition that will be proved right or wrong at some date in the future.  They wait for someone to take the other side of the bet.  Each side names a charity that will be the beneficiary if its side wins and writes a brief essay defending its position.


Subsequently, I publicly offered to wager $500,000 that no investment pro could select a set of at least five hedge funds—wildly-popular and high-fee investing vehicles—that would over an extended period match the performance of an unmanaged S&P-500 index fund charging only token fees.  I suggested a ten-year bet and named a low-cost Vanguard S&P fund as my contender.  I then sat back and waited expectantly for a parade of fund managers—who could include their own fund as one of the five—to come forth and defend their occupation.  After all, these managers urged others to bet billions on their abilities.  Why should they fear putting a little of their own money on the line?

What followed was the sound of silence.  Though there are thousands of professional investment managers who have amassed staggering fortunes by touting their stock-selecting prowess, only one man—Ted Seides—stepped up to my challenge.  Ted was a co-manager of Protégé Partners, an asset manager that had raised money from limited partners to form a fund-of-funds—in other words, a fund that invests in multiple hedge funds.

I hadn’t known Ted before our wager, but I like him and admire his willingness to put his money where his mouth was…

For Protégé Partners’ side of our ten-year bet, Ted picked five funds-of-funds whose results were to be averaged and compared against my Vanguard S&P index fund.  The five he selected had invested their money in more than 100 hedge funds, which meant that the overall performance of the funds-of-funds would not be distorted by the good or poor results of a single manager.

Here are the results so far after nine years (from 2008 thru 2016):

Net return after 9 years:

Fund of Funds A: 8.7%
Fund of Funds B: 28.3%
Fund of Funds C: 62.8%
Fund of Funds D: 2.9%
Fund of Funds E: 7.5%
S&P 500 Index Fund: 85.4%

Compound annual return:

All Funds of Funds: 2.2%
S&P 500 Index Fund: 7.1%

To see a more detailed table of the results, go to page 22 of the Berkshire 2016 Letter:

Buffett continues:

The compounded annual increase to date for the index fund is 7.1%, which is a return that could easily prove typical for the stock market over time.  That’s an important fact:  A particularly weak nine years for the market over the lifetime of this bet would have probably helped the relative performance of the hedge funds, because many hold large ‘short’ positions.  Conversely, nine years of exceptionally high returns from stocks would have provided a tailwind for index funds.

Instead we operated in what I would call a ‘neutral’ environment.  In it, the five funds-of-funds delivered, through 2016, an average of only 2.2%, compounded annually.  That means $1 million invested in those funds would have gained $220,000.  The index fund would meanwhile have gained $854,000.
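The dollar figures follow directly from the compound rates Buffett quotes; a quick check (the rounding to $220,000 is Buffett's):

```python
def gain_on_million(annual_rate, years=9):
    """Dollar gain on $1,000,000 compounded at annual_rate for years."""
    return 1_000_000 * ((1 + annual_rate) ** years - 1)

hedge_funds = gain_on_million(0.022)  # about $216,000; Buffett rounds to $220,000
index_fund = gain_on_million(0.071)   # about $854,000
```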

Bear in mind that every one of the 100-plus managers of the underlying hedge funds had a huge financial incentive to do his or her best.  Moreover, the five funds-of-funds managers that Ted selected were similarly incentivized to select the best hedge-fund managers possible because the five were entitled to performance fees based on the results of the underlying funds.

I’m certain that in almost all cases the managers at both levels were honest and intelligent people.  But the results for their investors were dismal—really dismal.  And, alas, the huge fixed fees charged by all of the funds and funds-of-funds involved—fees that were totally unwarranted by performance—were such that their managers were showered with compensation over the nine years that have passed.  As Gordon Gekko might have put it: ‘Fees never sleep.’

The underlying hedge-fund managers in our bet received payments from their limited partners that likely averaged a bit under the prevailing hedge-fund standard of ‘2 and 20,’ meaning a 2% annual fixed fee, payable even when losses are huge, and 20% of profits with no clawback (if good years were followed by bad ones).  Under this lopsided arrangement, a hedge fund operator’s ability to simply pile up assets under management has made many of these managers extraordinarily rich, even as their investments have performed poorly.

Still, we’re not through with fees.  Remember, there were the fund-of-funds managers to be fed as well. These managers received an additional fixed amount that was usually set at 1% of assets.  Then, despite the terrible overall record of the five funds-of-funds, some experienced a few good years and collected ‘performance’ fees.  Consequently, I estimate that over the nine-year period roughly 60%—gulp!—of all gains achieved by the five funds-of-funds were diverted to the two levels of managers.  That was their misbegotten reward for accomplishing something far short of what their many hundreds of limited partners could have effortlessly—and with virtually no cost—achieved on their own.
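The two layers of fees Buffett describes stack multiplicatively.  Here is a sketch with assumed round numbers (a ‘2 and 20’ hedge-fund layer plus a 1% fund-of-funds layer; real fee terms vary and are often more complex):

```python
def net_to_investor(gross, hf_fixed=0.02, hf_perf=0.20, fof_fixed=0.01):
    """Annual net return to the limited partner after a '2 and 20'
    hedge-fund fee plus a 1% fund-of-funds fee (illustrative only)."""
    after_hf = gross - hf_fixed      # fixed fee comes off first
    if after_hf > 0:
        after_hf *= (1 - hf_perf)    # performance fee taken only on profits
    return after_hf - fof_fixed      # fund-of-funds layer on top

# A 6% gross year nets the investor about 2.2%:
# 6% - 2% = 4%; minus 20% of profits = 3.2%; minus 1% = 2.2%.
```

Note that in a losing year the investor still pays both fixed fees, so the fee drag is heaviest exactly when performance is worst.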

In my opinion, the disappointing results for hedge-fund investors that this bet exposed are almost certain to recur in the future.  I laid out my reasons for that belief in a statement that was posted on the Long Bets website when the bet commenced (and that is still posted there)…

Even if you take the smartest 10% of all active investors, most of them will trail the market, net of costs, over the course of a decade or two.  Most investors (even the smartest) who think they can beat the market are wrong.  Buffett’s bet against Protégé Partners is yet another example of this.



If a statue is ever erected to honor the person who has done the most for American investors, the hands-down choice should be Jack Bogle.  For decades, Jack has urged investors to invest in ultra-low-cost index funds.  In his crusade, he amassed only a tiny percentage of the wealth that has typically flowed to managers who have promised their investors large rewards while delivering them nothing—or, as in our bet, less than nothing—of added value.

In his early years, Jack was frequently mocked by the investment-management industry.  Today, however, he has the satisfaction of knowing that he helped millions of investors realize far better returns on their savings than they otherwise would have earned.  He is a hero to them and to me.




How Great Leaders Build Sustainable Businesses


July 16, 2017

Ian Cassel and Sean Iddings are successful microcap investors who co-authored the book Intelligent Fanatics Project: How Great Leaders Build Sustainable Businesses (Iddings Cassel Publishing, 2016).  Ian Cassel is the founder of

If a microcap company is led by an intelligent fanatic, then it has a good chance of becoming a much larger company over time.  So, for a long-term investor, it makes sense to look for an intelligent fanatic who is currently leading a microcap company.  Cassel:

I want to find Reed Hastings in 2002, when Netflix (NFLX) was a $150 million market cap microcap (now worth $38 billion).  I want to find Bruce Cozadd in 2009, when Jazz Pharmaceuticals (JAZZ) was a $50 million market cap microcap (now worth $9 billion).

All great companies started as small companies, and intelligent fanatics founded most great companies.  So how do we find these rare intelligent fanatics early?  We find them by studying known intelligent fanatics and their businesses.  We look for common elements and themes, to help us in our search for the next intelligent fanatic-led business.

The term intelligent fanatic is originally from Charlie Munger.  Cassel defines the term:

CEO or management team with large ideas and fanatical drive to build their moat.  Willing and able to think and act unconventionally.  A learning machine that adapts to constant change.  Focused on acquiring the best talent.  Able to create a sustainable corporate culture and incentivize their operations for continual progress.  Their time horizon is in five- or ten-year increments, not quarterly, and they invest in their business accordingly.  Regardless of the industry, they are able to create a moat [i.e., a sustainable competitive advantage].

Cassel and Iddings give eight examples of intelligent fanatics:

  • Father of Sales and Innovation: John H. Patterson—National Cash Register
  • Retail Maverick: Simon Marks—Marks & Spencer
  • Original Warehouse Pioneer: Sol Price—Fedmart and Price Club
  • King of Clever Systems: Les Schwab—Les Schwab Tire Centers
  • Low-Cost Airline Wizard: Herb Kelleher—Southwest Airlines
  • Cult of Convenience: Chester Cadieux—QuikTrip
  • Leader of Steel: Kenneth Iverson—Nucor
  • Human Capital Allocators: 3G Partners—Garantia…

Cassel and Iddings conclude by summarizing the intelligent fanatic model.



Patterson purchased control of National Manufacturing Company, the originator of the cash register, in 1885, five years after the company had been formed.  Prospects did not appear good at all:

Everything was against a business selling cash registers at that time.  There was virtually no demand for cash registers.  Store owners could not justify the cost of the machine, which in today’s dollars would be roughly $1,000.  Patterson’s peers mocked his purchase of such a poor business, yet Patterson had a bold vision of what the cash register market could be, and he knew it would make a significant impact.

Patterson had had a great experience with the cash register.  His store in Coalton, Ohio, had immediately turned losses into profits simply by buying and installing a cash register.  It is hard to imagine now but employee theft at retail operations was common, given the primitive form of record keeping in those days.  Patterson knew the power of the cash register and needed to help merchants understand its value, too.

Patterson believed in staying ahead of what the current market was demanding:

We have made a policy to be just a short distance ahead, for the cash register has always had to make its market.  We had to educate our first customers;  we have to educate our present-day customer;  and our thought has always been to keep just so far ahead that education of the buyer will always be necessary.  Thus the market will be peculiarly our own—our customers will feel that we are their natural teachers and leaders.

…We are always working far ahead.  If the suggestions at the tryout demonstrate that the model will be much more valuable with changes or improvements, then send them out again to be tried.  And we keep up this process until every mechanical defect has been overcome and the model includes every feasible suggestion.

Few people at the time believed that the cash register would be widely adopted.  But Patterson predicted at least one cash register for every four hundred citizens in a town.  He was basically right.

Patterson started out working at the store on the family farm.  He was frustrated by the poor recordkeeping.  The employee books never reconciled.

Patterson then got a job as a toll collector at the Dayton office on the Miami and Erie Canal.  There were always arguments, with the bargemen complaining about higher tolls at certain locations.  Patterson solved the issue by developing a system of receipts, all of which would be sent to toll headquarters.

Patterson had extra time as a toll collector, so he started selling coal and wood out of his office.  He learned that he could differentiate himself by selling quality coal delivered on time and in the right quantity.  He also used the best horses, the best scales, and the best carts.  He made sure everything was quality and high-class.  His main challenge was that he never seemed to have enough cash since he was always reinvesting in the business, including advertising.

Eventually Patterson and his brother owned three coal mines, a store, and a chain of retail coal yards.  He had trouble with his mine store in Coalton, Ohio.  Revenues were high, but there were no profits and debt was growing.  He discovered that some clerks were only charging for half the coal.  Patterson bought two cash registers and hired new clerks.  Six months later, the debt was almost zero and there were profits.

Patterson then entered a venture to take one-third of the profits for operating the Southern Coal and Iron Company.  Unfortunately, this proved to be a disaster.  Patterson lost three years of his life and half his investment.

Meanwhile, Patterson had purchased stock in the cash register manufacturer National Manufacturing Company.  Patterson was also on the board of the company.  Patterson came up with a plan to increase sales, but the controlling shareholder and CEO, George Phillips, did not agree.  Patterson sold most of his stock.

But Patterson still believed in the idea of the cash register.  He was able to buy shares in National Manufacturing Company from George Phillips.  Patterson became the butt of Dayton jokes for buying such a bad business.  Patterson even tried to give his shares back to Phillips, but Phillips wouldn’t take them even as a gift.  So Patterson formed the National Cash Register Company.

Patterson started advertising directly to prospects through the mail.  He then sent highly qualified salesmen to those same prospects.  Patterson decided to pay his salesmen solely on commission and with no cap on how much they could make.  This was unconventional at the time, but it created effective incentives.  Patterson also bought expensive clothes for his salesmen, and at least one fine gown for the salesman’s wife.  As a result, the salesmen became high-quality and they also wanted a better standard of living.

Moreover, Patterson systematized the sales pitches of his salesmen.  This meant even salesmen with average ability could and did evolve into great salesmen.  Patterson also designated specific territories for the salesmen so that the salesmen wouldn’t be competing against one another.

Patterson made sure that salesmen and also manufacturing workers were treated well.  When he built new factories, he put in wall-to-wall glass windows, good ventilation systems, and dining rooms where employees could get decent meals at or below cost.  Patterson also made sure his workers had the best tools.  These were unusual innovations at the time.

Patterson also instituted a profit-sharing plan for all employees.

National Cash Register now had every worker aligned with common goals:  to increase efficiency, cut costs, and improve profitability.  (page 16)

Patterson was always deeply involved in the research and development of the cash register.  He often made sketches of new ideas in a memo book.  He got a few of these ideas patented.

NCR’s corporate culture and strategies were so powerful that John H. Patterson produced more successful businessmen than the best university business departments of the day.  More than a dozen NCR alumni went on to head large corporations, and many more went on to hold high corporate positions.  (page 21)

Cassel and Iddings sum it up:

Patterson was a perpetual beginner.  He bought NCR without knowing much of anything about manufacturing – except that he wanted to improve every business owner’s operations.  From his experiences, he took what he knew to be right and paid no attention to convention.  John Patterson not only experimented with improving the cash register machine but also believed in treating employees extremely well.  Many corporations see their employees as an expense line item;  intelligent fanatics see employees as a valuable asset.

When things failed or facts changed, Patterson showed an ability to pivot…

…He was able to get every one of his workers to think like owners, through his profit-sharing plan.  Patterson was always looking to improve production, so he made sure that every employee had a voice in improving the manufacturing operations.  (page 22)



Marks & Spencer was started by Michael Marks as a small outdoor stall in Leeds.  By 1923, when Michael’s son Simon was in charge, the company had grown significantly.  But Simon Marks was worried that efficient American competitors were going to wage price wars and win.

So Marks went to the U.S. to study his competitors.  (Walmart founder Sam Walton would do this four decades later.)  When Marks returned to Britain, he delivered a comprehensive report to his board:

I learned the value of more commodious and imposing premises.  I learned the value of checking lists to control stocks and sales.  I learned that new accounting machines could help to reduce the time formidably, to give the necessary information in hours instead of weeks.  I learned the value of counter footage and how in the chain store operation each foot of counter space had to pay wages, rent, overhead expenses, and profit.  There could be no blind spots insofar as goods are concerned.  This meant a much more exhaustive study of the goods we were selling and the needs of the public.  It meant that the staff who were operating with me had to be reeducated and retrained.  (page 26)

Cassel and Iddings:

…Simon Marks had been left a company with a deteriorating moat and a growing list of competitors.  He had the prescience and boldness to take a comfortable situation, a profitable, growing Marks & Spencer, and to take risks to build a long-term competitive edge.  From that point on, it could have been observed that Simon Marks had only one task – to widen Marks & Spencer’s moat every day for the rest of his life and to provide investors with uncommon profits.  (page 28)

Simon Marks convinced manufacturers that the retailer and manufacturer, by working together without the wholesale middleman, could sell at lower prices.  Marks made sure to maintain the highest quality at the lowest prices, making up for low profit margins with high volume.

Simon Marks was rare.  He was able to combine an appreciation for science and technology with an industry that had never cared to utilize it, all the while maintaining ‘a continuing regard for the individual, either as a customer or employee, and with a deep responsibility for his welfare and well-being.’  Marks & Spencer’s tradition of treating employees well stretched all the way back to Michael Marks’s Penny Bazaars in the covered stalls of Northern England… To Simon Marks, a happy and contented staff was the most valuable asset of any business.

Simon Marks established many policies to better Marks & Spencer’s labor relations, leading to increased employee efficiency and productivity…  (page 41)

Marks introduced in-store dining rooms to provide free or low-cost meals to employees.  Marks even put hair salons in stores so the female workforce could get their hair done during lunch.  He also provided free or reduced-cost health insurance.  Finally, he set up the Marks & Spencer Benevolent Trust to provide for the retirement of employees.  These moves were ahead of their time and led to low employee turnover and high employee satisfaction.



Sol Price founded Price Club in 1976.  The company lost $750,000 during its first year.  But by 1979, revenues reached $63 million, with $1.1 million in after-tax profits.

The strategy was to sell a limited number of items – 1,500 to 5,000 items versus 20,000+ offered by discounters – at a small markup from wholesale, to a small group of members (government workers and credit union customers).

Before founding Price Club, Sol Price founded and built FedMart from one location in 1954 into a company with $361 million in revenue by 1975.

…Thus, when Sol Price founded Price Club, other savvy retailers, familiar with this track record, were quick to pay close attention.  These retailers made it their obligation to meet Price, to learn as much as possible, and to clone Price’s concept.  They knew that the market opportunity was large and that Sol Price was an intelligent fanatic with a great idea.  An astute investor could have done the same and partnered with Price early in 1980 by buying Price Club stock.

One savvy retailer who found Sol Price early in the development of Price Club was Bernard Marcus, cofounder of Home Depot.  After getting fired from the home improvement company Handy Dan, Marcus met with Price, in the late 1970s.  Marcus was looking for some advice from Price about a potential legal battle with his former employer.  Sol Price had a similar situation at FedMart.  He told Marcus to forget about a protracted legal battle and to start his own business.  (pages 46-47)

Marcus borrowed many ideas from Price Club when he cofounded Home Depot.  Later, Sam Walton copied as much as he could from Price Club when he launched Sam’s Club.  Walton:

I guess I’ve stolen – I actually prefer the word borrowed – as many ideas from Sol Price as from anyone else in the business.

Bernie Brotman tried to set up a deal to franchise Price Clubs in the Pacific Northwest.  But Sol Price and his son, Robert Price, were reluctant to franchise Price Club.  Brotman’s son, Jeff Brotman, convinced Jim Sinegal, a long-time Price Club employee, to join him and start Costco, in 1983.

Brotman and Sinegal cloned Price Club’s business model and, in running Costco, copied many of Sol Price’s strategies.  A decade later, Price Club merged with Costco, and many Price Club stores are still in operation today under the Costco name.  (pages 49-50)

Back in 1936, Sol Price graduated with a bachelor’s degree in philosophy.  He got his law degree in 1938.  Sol Price worked for Weinberger and Miller, a local law firm in San Diego.  He represented many small business owners and learned a great deal about business.

Thirteen years later, Price founded FedMart after noticing a nonprofit company, Fedco, doing huge volumes.  Price set up FedMart as a nonprofit, but created a separate for-profit company, Loma Supply, to manage the stores.  Basically, everything was marked up 5% from cost, which was the profit Loma Supply got.

FedMart simply put items on the shelves and let the customers pick out what they wanted.  This was unusual at the time, but it helped FedMart minimize costs and thus offer cheaper prices for many items.

By 1959, FedMart had grown to five stores and had $46.3 million in revenue and nearly $500,000 in profits.  FedMart went public that year and raised nearly $2 million for expansion.

In 1962, Sam Walton had opened the first Walmart, John Geisse had opened the first Target, and Harry Cunningham had opened the first Kmart, all with slight variations on Sol Price’s FedMart business model.  (page 55)

By the early 1970s, Sol Price wasn’t enjoying managing FedMart as much.  He remarked that they were good at founding the business, but not running it.

While traveling in Europe with his wife, Sol Price carefully observed the operations of different European retailers.  In particular, he noticed a hypermarket retailer in Germany named Wertkauf, run by Hugo Mann.  Price sought a partnership deal with Hugo Mann.  But Mann saw the deal as a way to buy FedMart.  After Mann owned 64% of FedMart, Sol Price was fired from the company he had built.  But Price didn’t let that discourage him.

Like other intelligent fanatics, Sol Price did not sit around and mourn his defeat.  At the age of 60, he formed his next venture less than a month after getting fired from FedMart.  The Price Company was the name of this venture, and even though Sol Price had yet to figure out a business plan, he was ready for the next phase of his career.

…What the Prices [Sol and his son, Robert] ended up with was a business model similar to some of the concepts Sol had observed in Europe.  The new venture would become a wholesale business selling merchandise to other businesses, with a membership system similar to that of the original FedMart but closer to the ‘passport’ system used by Makro, in the Netherlands, in a warehouse setting.  The business would attract members with its extremely low prices.  (pages 56-57)

During the first 45 days, the company lost $420,000.

Instead of doing nothing or admitting defeat, however, Sol Price figured out the problem and quickly pivoted.

Price Club had incorrectly assumed that variety and hardware stores would be large customers and that the location would be ideal for business customers.  A purchasing manager, however, raised the idea of allowing members of the San Diego City Credit Union to shop at Price Club.  After finding out that Price Club could operate as a retail shop, in addition to selling to businesses, the company allowed credit union members to shop at Price Club.  The nonbusiness customers did not pay the $25 annual business membership fee but got a paper membership pass and paid an additional 5% on all goods.  Business members paid the wholesale price.  The idea worked and sales turned around, from $47,000 per week at the end of August to $151,000 for the week of November 21.  The Price Club concept was now proven.  (page 57)

Sol Price’s idea was to have the smallest markup from cost possible and to make money on volume.  This was unconventional.

Price also sought to treat his employees well, giving them the best wages and providing the best working environment.  By treating employees well, he created happy employees who in turn treated customers well.

Instead of selling tens of thousands of different items, Sol Price thought that focusing on only a few thousand items would lead to greater efficiency and lower costs.  Focusing also let Price buy each item in larger quantities, which strengthened his bargaining power with suppliers.  This approach gave customers the best deal.  Customers would typically buy a larger quantity of each good, but would generally save money by paying a lower price per unit.

Sol Price also saved money by not advertising.  Because his customers were happy, he relied on unsolicited testimonials for advertising.  (Costco, in turn, has not only benefitted from unsolicited testimonials, but also from unsolicited media coverage.)

Jim Sinegal commented:

The thing that was most remarkable about Sol was not just that he knew what was right.  Most people know the right thing to do.  But he was able to be creative and had the courage to do what was right in the face of a lot of opposition.  It’s not easy to stick to your guns when you have a lot of investors saying that you’re not charging customers enough and you’re paying employees too much.  (page 60)

Over a thirty-eight-year period, spanning FedMart and then Price Club until the Costco merger in 1993, Sol Price generated roughly a 40% CAGR in shareholder value.
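It is easy to read past a number like that, so here is the compounding arithmetic spelled out.  This is a rough, illustrative sketch of my own (the function name is mine, not from the book):

```python
# Rough compounding arithmetic; illustrative only, not from the book.

def cagr_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a constant compound annual growth rate."""
    return (1.0 + cagr) ** years

# A 40% CAGR sustained for 38 years:
multiple = cagr_multiple(0.40, 38)
print(f"40% CAGR over 38 years is roughly a {multiple:,.0f}x total multiple")
# roughly 357,000x
```

In other words, a dollar compounding at 40% for thirty-eight years grows to several hundred thousand dollars, which is why long-tenured intelligent fanatics can create such extraordinary wealth.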



Les Schwab knew how to motivate his people through clever systems and incentives.  Schwab realized that allowing his employees to become highly successful would help make Les Schwab Tire Centers successful.

Schwab split his profits with his first employees, fifty-fifty, which was unconventional in the 1950s.  Schwab would reinvest his portion back into the business.  Even early on, Schwab was already thinking about massive future growth.

As stores grew and turned into what Schwab called ‘supermarket’ tire centers, the number of employees needed to manage the operations increased, from a manager with a few helpers to six or seven individuals.  Schwab, understanding the power of incentives, asked managers to appoint their best worker as an assistant manager and give him 10% of the store’s profits.  Schwab and the manager each would give up 5%.

…Les Schwab was never satisfied with his systems, especially the employee incentives, and always strove to develop better programs.

…Early on, it was apparent that Les Schwab’s motivation was not to get rich but to provide opportunities for young people to become successful, as he had done in the beginning.  This remained his goal for decades.  Specifically, his goal was to share the wealth.  The company essentially has operated with no employees, only partners.  Even the hourly workers were treated like partners.  (pages 64-66)

When Schwab was around fifteen years old, he lost his mother to pneumonia and then his father to alcohol.  Schwab started selling newspapers.  Later, as a circulation manager, he devised a clever incentive scheme for the deliverers.  Schwab always wanted to help others succeed, which in turn would help the business succeed.

The desire to help others succeed can be a powerful force.  Les Schwab was a master at creating an atmosphere for others to succeed through clever programs.  Les always told his managers to make all their people successful, because he believed that the way a company treated employees would directly affect how employees would treat the customer.  Schwab also believed that the more he shared with employees, the more the business would succeed, and the more resources would eventually be available to give others opportunities to become successful.  In effect, he was compounding his giving through expansion of the business, which was funded from half of his profits.

Once in these programs, it would be hard for employees and the company as a whole not to become successful, because the incentives were so powerful.  Schwab’s incentive system evolved as the business grew, and unlike most companies, those systems evolved for the better as he continued giving half his profits to employees.  (page 71)

Like other intelligent fanatics, Schwab believed in running a decentralized business.  This required good communication and ongoing education.



The airline industry has been perhaps the worst industry ever.  Since deregulation in 1978, the U.S. airline industry alone has lost $60 billion.

Southwest Airlines is nearing its forty-third consecutive year of profitability.  That means it has made a profit every year of its corporate life except during the first fifteen months of start-up losses.  Given such an incredible track record in a horrible industry, luck cannot be the only factor.  There had to be at least one intelligent fanatic behind its success.

…In 1973, the upstart Texas airline, Southwest Airlines, with only three airplanes, turned the corner and reached profitability.  This was a significant achievement, considering that the company had to overcome three and a half years of legal hurdles by two entrenched and better-financed competitors:  Braniff International Airways had sixty-nine aircraft and $256 million in revenues, and Texas International had forty-five aircraft with $32 million in revenues by 1973.  (page 83)

As a young man, Herb Kelleher ended up living with his mother after his older siblings moved out and his father passed away.  Kelleher says he learned how to treat people from his mother:

She used to sit up talking to me till three, four in the morning.  She talked a lot about how you should treat people with respect.  She said that positions and titles signify absolutely nothing.  They’re just adornments;  they don’t represent the substance of anybody… She taught me that every person and every job is worth as much as any other person or any other job.

Kelleher ended up applying these lessons at Southwest Airlines.  The idea of treating both employees and customers well was central.

Kelleher did not graduate with a degree in business, but with a bachelor’s degree in English and philosophy.  He was thinking of becoming a journalist.  He ended up becoming a lawyer, which helped him get into business later.

When Southwest was ready to enter the market in Texas as a discount airline, its competitors were worried.

With their large resources, competitors did everything in their power to prevent Southwest from getting off the ground, and they were successful in temporarily delaying Southwest’s first flight.  The incumbents filed a temporary restraining order that prohibited the aeronautics commission from issuing Southwest a certificate to fly.  The case went to trial in the Austin state court, which ruled that the market could not support another carrier.

Southwest proceeded to appeal the lower court decision that the market could not support another carrier.  The intermediate appellate court sided with the lower court and upheld the ruling.  In the meantime, Southwest had yet to make a single dollar in revenues and had already spent a vast majority of the money it had raised.  (page 89)

The board was understandably frustrated.  At this point, Kelleher said he would represent the company one last time and pay every cent of legal fees out of his own pocket.  Kelleher convinced the Texas Supreme Court to rule in Southwest’s favor.  Meanwhile, Southwest hired Lamar Muse, an experienced, iconoclastic entrepreneur with an extensive network of contacts, as CEO.

Herb Kelleher was appointed CEO in 1982 and ran Southwest until 2001.  He led Southwest from $270 million to $5.7 billion in revenues, every year being profitable.  This is a significant feat, and no other airline has been able to match that kind of record in the United States.  No one could match the iron discipline that Herb Kelleher instilled in Southwest Airlines from the first day and maintained so steadfastly through the years.  (page 91)

Before deregulation, flying was expensive.  Herb Kelleher had the idea of offering lower fares.  To achieve this, Southwest did four things.

  • First, they operated out of less-costly and less-congested airports. Smaller airports are usually closer to downtown locations, which appealed to businesspeople.
  • Second, Southwest only operated the Boeing 737. This gave the company bargaining power in new airplane purchases and the ability to make suggestions in the manufacture of those planes to improve efficiency.  Also, operating costs were lower because everyone only had to learn to operate one type of plane.
  • Third, Southwest reduced the time planes spent on the ground between flights to 10 minutes, versus an industry norm of 45 minutes to an hour.
  • Fourth, Southwest treated employees well and was thus able to retain qualified, hardworking employees. This cut down on turnover costs.

Kelleher built an egalitarian culture at Southwest where each person is treated like everyone else.  Also, Southwest was the first airline to share profits with employees.  This makes employees think and act like owners.  As well, employees are given autonomy to make their own decisions, as an owner would.  Not every decision will be perfect, but inevitable mistakes are used as learning experiences.

Kelleher focused the company on being entrepreneurial even as it grew.  But relentless cost-cutting did not include eliminating employees.

Southwest Airlines is the only airline – and one of the few corporations in any industry – that has been able to run for decades without ever imposing a furlough.  Cost reductions are found elsewhere, and that has promoted a healthy morale within the Southwest Airlines corporate culture.  Employees have job security.  A happy, well-trained labor force that only needs to be trained on one aircraft promotes more-efficient and safer flights.  Southwest is the only airline that has a nearly perfect safety record.  (page 95)

Kelleher once told the following story:

What I remember is a story about Thomas Watson.  This is what we have followed at Southwest Airlines.  A vice president of IBM came in and said, ‘Mr. Watson, I’ve got a tremendous idea…. And I want to set up this little division to work on it.  And I need ten million dollars to get it started.’  Well, it turned out to be a total failure.  And the guy came back to Mr. Watson and he said that this was the original proposal, it cost ten million, and that it was a failure.  ‘Here is my letter of resignation.’  Mr. Watson said, ‘Hell, no!  I just spent ten million on your education.  I ain’t gonna let you leave.’  That is what we do at Southwest Airlines.  (page 96)

One example is Matt Buckley, a manager of cargo in 1985.  He thought of a service to compete with Federal Express.  Southwest let him try it.  But it turned out to be a mistake.  Buckley:

Despite my overpromising and underproducing, people showed support and continued to reiterate, ‘It’s okay to make mistakes;  that’s how you learn.’  In most companies, I’d probably have been fired, written off, and sent out to pasture.  (page 97)

Kelleher believed that any worthwhile endeavor entails some risk.  You have to experiment and then adjust quickly when you learn what works and what doesn’t.

Kelleher also created a culture of clear communication with employees, so that employees would understand in more depth how to minimize costs and why it was essential.

Communication with employees at Southwest is not much different from the clear communication Warren Buffett has had with shareholders and with his owned operations, through Berkshire Hathaway’s annual shareholder letters.  Intelligent fanatics are teachers to every stakeholder.  (page 99)



Warren Buffett:

Back when I had 10,000 bucks, I put 2,000 of it into a Sinclair service station, which I lost, so my opportunity cost on it’s about 6 billion right now.  A fairly big mistake – it makes me feel good when Berkshire goes down, because the cost of my Sinclair station goes down too.

Chester Cadieux ran into an acquaintance from school, Burt B. Holmes, who was setting up a bantam store – an early version of a convenience store.  Cadieux invested $5,000 out of the total $15,000.

At the time, in 1958, there were three thousand bantam stores open.  They were open longer hours than supermarkets, so customers were willing to pay higher prices.

Cadieux’s competitive advantage over larger rivals was his focus on employees and innovation.  Both characteristics were rooted in Chester’s personal values and were apparent early in QuikTrip’s history.  He would spend a large part of his time – roughly two months out of the year – in direct communication with QuikTrip employees.  Chester said, ‘Without fail, each year we learned something important from a question or comment voiced by a single employee.’  Even today, QuikTrip’s current CEO and son of Chester Cadieux, Chet Cadieux, continues to spend four months of his year meeting with employees.  (page 104)

Cassel and Iddings:

Treat employees well and incentivize them properly, and employees will provide exceptional service to the customers.  Amazing customer service leads to customer loyalty, and this is hard to replicate, especially by competitors who don’t value their employees.  Exceptional employees and a quality corporate culture have allowed QuikTrip to stay ahead of competition from convenience stores, gas retailers, quick service restaurants, cafes, and hypermarkets.  (page 106)

Other smart convenience store operators have borrowed many ideas from Chester Cadieux.  Sheetz, Inc. and Wawa, Inc. – both convenience store chains headquartered in Pennsylvania – have followed many of Cadieux’s ideas.  Cadieux, in turn, has also picked up a few ideas from Sheetz and Wawa.

Sheetz, Wawa, and QuikTrip all have similar characteristics, which can be traced back to Chester Cadieux and his leadership values at QuikTrip.  When three stores in the same industry, separated only by geography, utilize the same strategies, have similar core values, and achieve similar success, then there must be something to their business models.  All could have been identified early, when their companies were much smaller, with qualitative due diligence.  (pages 107-108)

One experience that shaped Chester Cadieux was when he was promoted to first lieutenant at age twenty-four.  He was the senior intercept controller at his radar site, and he had to lead a team of 180 personnel:

…he had to deal with older, battle-hardened sergeants who did not like getting suggestions from inexperienced lieutenants.  Chester said he learned ‘how to circumvent the people who liked to be difficult and, more importantly, that the number of stripes on someone’s sleeves was irrelevant.’  The whole air force experience taught him how to deal with people, as well as the importance of getting the right people on his team and keeping them.  (page 109)

When Cadieux partnered with Burt Holmes on their first QuikTrip convenience store, it seemed that everything went wrong.  They hadn’t researched what the most attractive location would be.  And Cadieux stocked the store like a supermarket.  Cadieux and Holmes were slow to realize that they should have gone to Dallas and learned all they could about 7-Eleven.

QuikTrip was on the edge of bankruptcy during the first two or three years.  Then the company had a lucky break when an experienced convenience store manager, Billy Neale, asked to work for QuikTrip.  Cadieux:

You don’t know what you don’t know.  And when you figure it out, you’d better sprint to fix it, because your competitors will make it as difficult as possible in more ways than you could ever have imagined.

Cadieux was smart enough to realize that QuikTrip survived partly by luck.  But he was a learning machine, always learning as much as possible.  One idea Cadieux picked up was to sell gasoline.  He waited nine years until QuikTrip had the financial resources to do it.  Cadieux demonstrated that he was truly thinking longer term.

QuikTrip has always adapted to the changing needs of its customers, demographics, and traffic patterns, and has constantly looked to stay ahead of competition.  This meant that QuikTrip has had to reinvest large sums of capital into store updates, store closures, and new construction.  From QuikTrip’s inception, in 1958, to 2008, the company closed 418 stores;  in 2008, QuikTrip had only five hundred stores in operation.  (pages 112-113)

QuikTrip shows its long-term focus by its hiring process.  Cadieux:

Leaders are not necessarily born with the highest IQs, or the most drive to succeed, or the greatest people skills.  Instead, the best leaders are adaptive – they understand the necessity of pulling bright, energetic people into their world and tapping their determination and drive.  True leaders never feel comfortable staying in the same course for too long or following conventional wisdom – they inherently understand the importance of constantly breaking out of routines in order to recognize the changing needs of their customers and employees.  (page 114)

QuikTrip interviews about three out of every one hundred applicants and then chooses one from among those three.  Only 70% of new hires make it out of training, and only 50% of those remaining make it past the first six months on the job.  But QuikTrip’s turnover rate is roughly 13%, compared to the industry average of 59%.  These new hires are paid $50,000 a year.  And QuikTrip offers a generous stock ownership plan.  Employees also get medical benefits and a large amount of time off.
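The selectivity of that funnel becomes vivid if you multiply the stages through.  A hypothetical sketch of my own, using only the percentages quoted above:

```python
# Hypothetical arithmetic on the hiring funnel described above.
applicants = 100.0
hired = applicants * (3 / 100) * (1 / 3)   # ~3 interviewed per 100 applicants, one of the three chosen
after_training = hired * 0.70              # 70% make it out of training
after_six_months = after_training * 0.50   # 50% of those last six months

print(f"Per {applicants:.0f} applicants: {hired:.2f} hired, "
      f"{after_six_months:.2f} still on the job at six months")
# i.e. roughly 1 hire per 100 applicants, and only about 1 in 300
# applicants is still working at QuikTrip six months later.
```

That extreme selectivity up front is presumably part of why turnover afterward is so much lower than the industry average.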

Cadieux’s main goal was to make employees successful, thereby making customers and eventually shareholders happy.



Ken Iverson blazed a new trail in steel production with the mini mill, thin-slab casting, and other innovations.  He also treated his employees like partners.  Both approaches were too unconventional for the old, slow-moving, integrated steel mills to compete with.  Ken Iverson harnessed the superpower of incentives and effective corporate culture.  He understood how to manage people and had a clear goal.  (page 120)

In its 1975 annual report, Nucor listed all of its employees on the front cover, which showed who really ran the company.  Every annual report since then has listed all employees on the cover.  Iverson:

I have no desire to be perfect.  In fact, none of the people I’ve seen do impressive things in life are perfect… They experiment.  And they often fail.  But they gain something significant from every failure.  (page 124)

Iverson studied aeronautical engineering at Cornell through the V-12 Navy College Training Program.  Iverson spent time in the Navy, and then earned a master’s degree in mechanical engineering from Purdue University.  Next he worked as an assistant to the chief research physicist at International Harvester.

Iverson’s supervisor told him that he could achieve more at a small company.  So Iverson started working as the chief engineer at a small company called Illium Corp.  Taking chances was encouraged there.  Iverson built a pipe machine for $5,000;  it worked, saving the company $245,000.

Iverson had a few other jobs.  He helped Nuclear Corporation of America find a good acquisition – Vulcraft Corporation.  After the acquisition, Vulcraft made Iverson vice president.  The company tripled its sales and profits over the ensuing three years, while the rest of Nuclear was on the verge of bankruptcy.  When Nuclear’s president resigned, Iverson became president of Nuclear.

Nuclear Corporation changed its name to Nucor.  Iverson cut costs.  Although few could have predicted it, Nucor was about to take over the steel industry.  Iverson:

At minimum, pay systems should drive specific behaviors that make your business competitive.  So much of what other businesses admire in Nucor – our teamwork, extraordinary productivity, low costs, applied innovation, high morale, low turnover – is rooted in how we pay our people.  More than that, our pay and benefit programs tie each employee’s fate to the fate of our business.  What’s good for the company is good – in hard dollar terms – for the employee.  (page 127)

The basic incentive structure had already been in place at Vulcraft.  Iverson had the sense not to change it, but rather to improve it constantly.  Iverson:

As I remember it, the first time a production bonus was over one hundred percent, I thought that I had created a monster.  In a lot of companies, I imagine many of the managers would have said, ‘Whoops, we didn’t set that up right.  We’d better change it.’  Not us.  We’ve modified it some over the years.  But we’ve stayed with that basic concept ever since.  (pages 127-128)

Nucor paid its employees much more than what competitors paid.  But Nucor’s employees produced much, much more.  As a result, net costs were lower for Nucor.  In 1996, Nucor’s total cost was less than $40 per ton of steel produced versus at least $80 per ton of steel produced for large integrated U.S. steel producers.

Nucor workers were paid a lower base salary – 65% to 70% of the average – but had opportunities to get large bonuses if they produced a great deal.

Officer bonuses (8% to 24%) were tied to the return on equity.

Nonproduction headquarters staff, engineers, secretaries, and so on, as well as department managers, could earn 25% to 82% of base pay based on their division’s return on assets employed.  So, if a division did not meet required returns, those employees received nothing, but they received a significant amount if it did.  There were a few years when all employees received no bonuses and a few years when employees maxed out their bonuses.

An egalitarian incentive structure leads all employees to feel equal, regardless of base pay grade or the layer of management an employee is part of.  Maintenance workers want producers to be successful and vice versa.  (pages 129-130)

All production workers, including managers, wear hard hats of the same color.  Everyone is made to feel they are working for the common cause.  Nucor has only had one year of losses, in 2009, over a fifty-year period.  This is extraordinary for the highly cyclical steel industry.

Iverson, like Herb Kelleher, believed that experimentation – trial and error – was essential to continued innovation.  Iverson:

About fifty percent of the things we try really do not work out, but you can’t move ahead and develop new technology and develop a business unless you are willing to take risks and adopt technologies as they occur.  (page 132)



3G Partners refers to the team of Jorge Paulo Lemann, Carlos Alberto “Beto” Sicupira, and Marcel Hermann Telles.  They have developed the ability to buy underperforming companies and dramatically improve productivity.

When the 3G partners took control of Brahma, buying a 40% stake in 1989, it was the number two beer company in Brazil and was quickly losing ground to number one, Antarctica.  The previously complacent management and company culture generated low productivity – approximately 1,200 hectoliters of beverage produced per employee.  There was little emphasis on profitability or achieving more efficient operations.  During Marcel Telles’s tenure, productivity per employee multiplied seven times, to 8,700 hectoliters per employee.  Efficiency and profitability were top priorities of the 3G partners, and the business eventually held the title of the most efficient and profitable brewer in the world.  Through efficiency of operations and a focus on profitability, Brahma maintained a 20% return on capital, a 32% compound annual growth rate in pretax earnings, and a 17% CAGR in revenues over the decade from 1990 to 1999… Shareholder value creation stood at an astounding 42% CAGR over that period.

…Subsequent shareholder returns generated at what eventually became Anheuser-Busch InBev (AB InBev) have been spectacular, driven by operational excellence.  (pages 138-140)
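The compounding behind those numbers can be checked with a short sketch.  One caveat: the ten-year span for the productivity figure is an assumption on my part; the text ties only the financial metrics to 1990–1999.

```python
# Sketch of the compounding behind the quoted Brahma figures. Assumes the
# productivity gain (1,200 -> 8,700 hectoliters per employee) accrued over
# roughly the same decade as the other metrics.

def cagr(begin: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by growing from begin to end."""
    return (end / begin) ** (1.0 / years) - 1.0

# A ~7.25x productivity gain over ten years is about 22% per year:
print(round(cagr(1_200, 8_700, 10), 3))   # 0.219

# And 42% CAGR in shareholder value over a decade compounds to roughly 33x:
print(round(1.42 ** 10, 1))               # 33.3
```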

Jorge Paulo Lemann – who, like Sicupira and Telles, was born in Rio de Janeiro – started playing tennis when he was seven.  His goal was to become a great tennis player.  He was semi-pro for a year after college.  Lemann:

In tennis you win and lose.  I’ve learned that sometimes you lose.  And if you lose, you have to learn from the experience and ask yourself, ‘What did I do wrong?  What can I do better?  How am I going to win next time?’

Tennis was very important and gave me the discipline to train, practice, and analyze… In tennis you have to take advantage of opportunities.

So my attitude in business was always to make an effort, to train, to be present, to have focus.  Occasionally an opportunity passed and you have to grab those opportunities.  (pages 141-142)

In 1967, Lemann started working for Libra, a brokerage.  Lemann owned 13% of the company and wanted to create a meritocratic culture.  But others disagreed with him.

In 1971, Lemann founded Garantia, a brokerage.  He aimed to create a meritocratic culture like the one at Goldman Sachs.  Lemann would seek out top talent and then base their compensation on performance.  Marcel Telles and Beto Sicupira joined in 1972 and 1973, respectively.

Neither Marcel Telles nor Beto Sicupira started off working in the financial markets or high up at Garantia.  Both men started at the absolute bottom of Garantia, just like any other employee…

Jorge Paulo Lemann initially had a 25% interest in Garantia, but over the first seven years increased it to 50%, slowly buying out the other initial investors.  However, Lemann also wanted to provide incentives to his best workers, so he began selling his stake to new partners.  By the time Garantia was sold, Lemann owned less than 30% of the company.  (page 144)

Garantia transformed itself into an investment bank.  It was producing a gusher of cash.  The partners decided to invest in underperforming companies and then introduce the successful, meritocratic culture at Garantia.  In 1982, they invested in Lojas Americanas.

Buying control of outside businesses gave Lemann the ability to promote his best talent into those businesses.  Beto Sicupira was appointed CEO and went about turning the company around.  The first and most interesting tactic Beto utilized was to reach out to the best retailers in the United States, sending them all letters and asking to meet them and learn about their companies;  neither Beto nor his partners had any retailing experience.  Most retailers did not respond to this query, but one person did:  Sam Walton of Walmart.

The 3G partners met in person with the intelligent fanatic Sam Walton and learned about his business.  Beto was utilizing one of the most important aspects of the 3G management system:  benchmarking from the best in the industry.  The 3G partners soaked up everything from Walton, and because the young Brazilians were a lot like him, Sam Walton became a mentor and friend to all of them.  (pages 146-147)

In 1989, Lemann noticed an interesting pattern:

I was looking at Latin America and thinking, Who was the richest guy in Venezuela?  A brewer (the Mendoza family that owns Polar).  The richest guy in Colombia?  A brewer (the Santo Domingo Group, the owner of Bavaria).  The richest in Argentina?  A brewer (the Bembergs, owners of Quilmes).  These guys can’t all be geniuses… It’s the business that must be good.  (page 148)

3G always set high goals.  When they achieved one ambitious goal, they would set the next one.  They were never satisfied.  When 3G took over Brahma, the first goal was to be the best brewer in Brazil.  The next goal was to be the best brewer in the world.

3G has always had a truly long-term vision:

Marcel Telles spent considerable time building Brahma, with a longer-term vision.  The company spent a decade improving the efficiency of its operations and infusing it with the Garantia culture.  When the culture was in place, a large talent pipeline was developed, so that the company could acquire its largest rival, Antarctica.  By taking their time in building the culture of the company, management was ensuring that the culture could sustain itself well beyond the 3G partners’ tenure.  This long-term vision remains intact and can be observed in a statement from AB InBev’s 2014 annual report:  ‘We are driven by our passion to create a company that can stand the test of time and create value for our shareholders, not only for the next ten or twenty years but for the next one hundred years.  Our mind-set is truly long term.’  (pages 149-150)

3G’s philosophy of innovation was similar to a venture capitalist approach.  Ten people would be given a small amount of capital to try different things.  A few months later, two out of ten would have good ideas and so they would get more funding.

Here are the first five commandments (out of eighteen) that Lemann created at Garantia:

  • A big and challenging dream makes everyone row in the same direction.
  • A company’s biggest asset is good people working as a team, growing in proportion to their talent, and being recognized for that. Employee compensation has to be aligned with shareholders’ interests.
  • Profits are what attract investors, people, and opportunities, and keep the wheels spinning.
  • Focus is of the essence. It’s impossible to be excellent at everything, so concentrate on the few things that really matter.
  • Everything has to have an owner with authority and accountability. Debate is good, but in the end, someone has to decide.

Garantia had an incentive system similar to those created by other intelligent fanatics.  Base salary was below the market average.  But high goals were set for productivity and costs.  And if those goals were achieved, bonuses could amount to many times the base salary.

The main metric that employees are tested against is economic value added – employee performance in relation to the cost of capital.  The company’s goal is to achieve 15% economic value added, so the better the company performs as a whole, the larger is the bonus pool to be divided among employees.  And, in a meritocratic culture, the employees with the best results are awarded the highest bonuses.  (page 154)

Top performers were also given a chance to purchase stock in the company at a 10% discount.
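A minimal sketch of how such an economic-value-added bonus pool might work.  The book says only that employees were measured on economic value added against a 15% target; the standard EVA formula below (after-tax operating profit minus a capital charge) and the 10% sharing rate are my assumptions, used purely for illustration.

```python
# Hedged sketch of an economic-value-added bonus pool. The formula and the
# sharing rate are assumptions; the source gives only the 15% EVA target.

def economic_value_added(nopat: float, invested_capital: float,
                         cost_of_capital: float) -> float:
    """EVA = after-tax operating profit minus a charge for the capital used."""
    return nopat - cost_of_capital * invested_capital

def bonus_pool(eva: float, sharing_rate: float = 0.10) -> float:
    """Hypothetical rule: a fixed share of positive EVA funds the bonus pool."""
    return max(0.0, eva) * sharing_rate

# A division earning $20M after tax on $100M of capital, charged 15% for it:
eva = economic_value_added(nopat=20.0, invested_capital=100.0, cost_of_capital=0.15)
print(eva)               # positive EVA of about $5M
print(bonus_pool(eva))   # a share of that EVA funds bonuses
```

The meritocratic part then follows: the pool grows with company-wide EVA, and the employees with the best individual results take the largest slices of it.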

The 3G partners believe that a competitive atmosphere in a business attracts high-caliber people who thrive on challenging one another.  Carlos Brito said, ‘That’s why it’s important to hire people better than you.  They push you to be better.’  (page 155)



Cassel and Iddings quote Warren Buffett’s 2010 Berkshire Hathaway shareholder letter:

Our final advantage is the hard-to-duplicate culture that permeates Berkshire.  And in businesses, culture counts.  (page 159)

One study found the following common elements among outperformers:

What elements of those cultures enabled the top companies to adapt and to sustain performance?  The common answers were the quality of the leadership, the maintenance of an entrepreneurial environment, prudent risk taking, innovation, flexibility, and open communication throughout the company hierarchy.  The top-performing companies maintained a small-company feel and had a long-term horizon.  On the other hand, the lower-performing companies were slower to adapt to change.  Interviewees described these companies as bureaucratic, with very short-term horizons.  (pages 159-160)

Cassel and Iddings discuss common leadership attributes of intelligent fanatics:

Leading by Example

Intelligent fanatics create a higher cause that all employees have the chance to become invested in, and they provide an environment in which it is natural for employees to become heavily invested in the company’s mission.

…At Southwest, for example, the company created an employee-first and family-like culture where fun, love, humor, and creativity were, and continue to be, core values.  Herb Kelleher was the perfect role model for those values.  He expressed sincere appreciation for employees and remembered their names, and he showed his humor by dressing up for corporate gatherings and even by settling a dispute with another company through an arm wrestling contest.  (page 162)

Unblemished by Industry Dogma

Industries are full of unwritten truths and established ways of thinking.  Industry veterans often get accustomed to a certain way of doing or thinking about things and have trouble approaching problems from a different perspective.  This is the consistency and commitment bias Charlie Munger has talked about in his speech ‘The Psychology of Human Misjudgment.’  Succumbing to the old guard prevents growth and innovation. 

…All of our intelligent fanatic CEOs were either absolute beginners, with no industry experience, or had minimal experience.  Their inexperience allowed them to be open to trying something new, to challenge the old guard.  The CEOs developed new ways of operating that established companies could not compete with.  Our intelligent fanatics show us that having industry experience can be a detriment.  (pages 165-166)

Teaching by Example

Jim Sinegal learned from Sol Price that ‘if you’re not spending ninety percent of your time teaching, you’re not doing your job.’

Founder Ownership Creates Long-Term Focus

The only way to succeed in dominating a market for decades is to have a long-term focus.  Intelligent fanatics have what investor Tom Russo calls the capacity to suffer short-term pain for long-term gain…. As Jeff Bezos put it, ‘If we have a good quarter, it’s because of work we did three, four, five years ago.  It’s not because we did a good job this quarter.’  They build the infrastructure to support a larger business, which normally takes significant up-front investment that will lower profitability in the short term.  (page 168)

Keep It Simple

Jorge Paulo Lemann:

All the successful people I ever met were fanatics about focus.  Sam Walton, who built Walmart, thought only about stores day and night.  He visited store after store.  Even Warren Buffett, who today is my partner, is a man super focused on his formula.  He acquires different businesses but always within the same formula, and that’s what works.  Today our formula is to buy companies with a good name and to come up with our management system.  But we can only do this when we have people available to go to the company.  We cannot do what the American private equity firms do.  They buy any company, send someone there, and constitute a team.  We only know how to do this with our team, people within our culture.  Then, focus is also essential.  (page 171)

Superpower of Incentives

Intelligent fanatics are able to create systems of financial incentive that attract high-quality talent, and they provide a culture and higher cause that immerses employees in their work.  They are able to easily communicate the why and the purpose of the company so that employees themselves can own the vision.  (page 173)

…All of this book’s intelligent fanatic CEOs unleashed their employees’ fullest potential by getting them to think and act as owners.  They did this two ways:  they provided a financial incentive, aligning employees with the actual owners, and they gave employees intrinsic motivation to think like owners.  In every case, CEOs communicated the importance of each and every employee to the organization and provided incentives that were simple to understand.  (page 174)


Intelligent fanatics and their employees are unstoppable in their pursuit of staying ahead of the curve.  They test out many ideas, like a scientist experimenting to find the next breakthrough.  In the words of the head of Amazon Web Services (AWS), Andy Jassy, ‘We think of (these investments) as planting seeds for very large trees that will be fruitful over time.’

Not every idea will work out as planned.  Jeff Bezos, the founder and CEO of Amazon, said, ‘A few big successes compensate for dozens and dozens of things that did not work.’  Bezos has been experimenting for years and often has been unsuccessful…  (page 176)

Productive Paranoia

Jim Collins describes successful leaders as being ‘paranoid, neurotic freaks.’  Although paranoia can be debilitating for most people, intelligent fanatics use their paranoia to prepare for financial or competitive disruptions.  They also are able to promote this productive paranoia within their company culture, so the company can maintain itself by innovating and preparing for the worst.  (page 178)

Decentralized Organizations

Intelligent fanatics focus a lot of their mental energy on defeating bureaucracies before they form.

…Intelligent fanatics win against internal bureaucracies by maintaining the leanness that helped their companies succeed in the first place… Southwest was able to operate with 20% fewer employees per aircraft and still be faster than its competitors.  It took Nucor significantly fewer workers to produce a ton of steel, allowing them to significantly undercut their competitors’ prices.  (pages 181-182)

Dominated a Small Market Before Expanding

Intelligent fanatics pull back on the reins in the beginning so they can learn their lessons while they are small.  Intelligent fanatic CEOs create a well-oiled machine before pushing the accelerator to the floor.  (page 183)

Courage and Perseverance in the Face of Adversity

Almost all successful people went through incredible hardship, obstacles, and challenges.  The power to endure is the winner’s quality.  The difference between intelligent fanatics and others is perseverance…

Take, for instance, John Patterson losing more than half his money in the Southern Coal and Iron Company, or Sol Price getting kicked out of FedMart by Hugo Mann.  Herb Kelleher had to fight four years of legal battles to get Southwest Airlines’ first plane off the ground.  Another intelligent fanatic, Sam Walton, got kicked out of his first highly successful Ben Franklin store due to a small clause in his building’s lease and had to start over.  Most people would give up, but intelligent fanatics are different.  They have the uncanny ability to quickly pick themselves up from a large mistake and move on.  They possess the courage to fight harder than ever before…  (page 184)



Intelligent fanatics demonstrate the qualities all employees should emulate, both within the organization and outside it, with customers.  This allows employees to do their jobs effectively by giving them autonomy.  All employees have to do is adjust their internal compass to the company’s true north to solve a problem.  Customers are happier, employees are happier, and if you make those two groups happy, then shareholders are happier.

…Over time, the best employees rise to the top and can quickly fill the holes left as other employees retire or move on.  Employees are made to feel like partners, so the success of the organization is very important to them.  Partners are more open to sharing new ideas or to offering criticism, because their net worth is tied to the long-term success of the company.

Companies with a culture of highly talented, driven people continually challenge themselves to offer best-in-class service and products.  Great companies are shape-shifters and can maneuver quickly as they grow and as the markets in which they compete change.  (pages 187-188)



An equal-weighted group of micro caps generally far outperforms an equal-weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly when the focus is on cheap micro caps.  Performance can be boosted further by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
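One simple way to turn ranks into position sizes consistent with those ranges.  This is an illustration only, not the fund’s actual sizing rule: weights decline linearly with rank and are normalized so the portfolio sums to 100%.

```python
# Illustrative only: the post gives ranges (10-20 positions; largest 15-20%
# at cost; average 8-10%), not a formula. One rank-based scheme that lands
# inside those ranges: weights proportional to (n - rank), normalized.

def rank_weights(n_positions):
    """Weight the i-th ranked position in proportion to (n - i), normalized."""
    raw = [n_positions - i for i in range(n_positions)]
    total = sum(raw)
    return [r / total for r in raw]

weights = rank_weights(12)
print(round(weights[0] * 100, 1))     # largest position: 15.4 (% of portfolio)
print(round(100 / len(weights), 1))   # average position:  8.3 (%)
```

With twelve positions, the top-ranked holding comes out around 15% at cost and the average around 8% – both inside the stated ranges.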

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:




Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.