Ben Graham Was a Quant


(Image:  Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011).  In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors.  Having a clearly defined quantitative investment strategy that you stick with over the long term—both when the strategy is in favor and when it’s not—is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches.  Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors.  See:  http://boolefund.com/warren-buffett-jack-bogle/

An index fund tries to copy an index, which is itself typically based on companies of a certain size.  By contrast, quantitative value investing is based on metrics that indicate undervaluation.

 

QUANTITATIVE VALUE INVESTING

Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years.  I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession.  I think we can do it successfully with a few techniques and simple principles.  The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches.  The thing that I have been emphasizing in my own work for the last few years has been the group approach.  To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company.

I am just finishing a 50-year study—the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group.  I found the results were very good for 50 years.  They certainly did twice as well as the Dow Jones.  And so my enthusiasm has been transferred from the selective to the group approach.  What I want is an earnings ratio twice as good as the bond interest ratio typically for most years.  One can also apply a dividend criterion or an asset value criterion and get good results.  My research indicates the best results come from simple earnings criterions.

Imagine—there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work.  It seems too good to be true.  But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up. 

See:  http://www.cfapubs.org/doi/pdf/10.2470/rf.v1977.n1.4731

Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.
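As an illustration of how Graham's simple criteria become a screen, here is a minimal sketch in Python.  The universe is made up and the 5% AAA bond yield is hypothetical; the rule itself is the one Graham mentions above, an earnings yield at least twice the bond yield:

```python
# A minimal Graham-style screen (illustrative universe; the 5% AAA
# bond yield is hypothetical): keep stocks whose earnings yield (E/P)
# is at least twice the bond yield.

def graham_screen(universe, aaa_bond_yield):
    """Return tickers whose earnings yield is >= 2x the bond yield."""
    return [ticker for ticker, (eps, price) in universe.items()
            if price > 0 and eps / price >= 2 * aaa_bond_yield]

universe = {
    "AAA Corp": (2.00, 15.00),  # E/P ~ 13.3%
    "BBB Corp": (1.00, 25.00),  # E/P = 4.0%
    "CCC Corp": (0.50, 4.00),   # E/P = 12.5%
}

cheap = graham_screen(universe, aaa_bond_yield=0.05)  # threshold: 10%
```

With a 5% bond yield the threshold is a 10% earnings yield, so the screen keeps "AAA Corp" and "CCC Corp" and drops "BBB Corp."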

Greiner says there are three groups of quants:

  • The first group is focused on EMH, and on creating ETFs and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors.  They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not.  This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others.  Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct.  – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that.  – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well.  Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient.  The difference between these propositions is night and day. 

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.

 

DESPERATELY SEEKING ALPHA

Greiner refers to the July 12, 2010 issue of Barron’s.  Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December of 2009.  87 of the 248 funds were gone completely, while the other funds had been downgraded.  Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less risk-free return) to calculate a slope and an intercept, which are called beta and alpha.  Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return.  Alpha, on the other hand, is the intercept, and this implies how much return the portfolio is obtaining over and above the market, separate from any other factors.  (page 16)
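That regression is easy to sketch.  The example below (illustrative return series, not real data) runs ordinary least squares of portfolio excess returns on market excess returns; the slope is beta and the intercept is alpha:

```python
# Sketch of the CAPM regression: OLS of portfolio excess returns on
# market excess returns.  Slope = beta, intercept = alpha.  The
# return series below are illustrative, not real data.

def capm_alpha_beta(portfolio, market, risk_free=0.0):
    y = [p - risk_free for p in portfolio]
    x = [m - risk_free for m in market]
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / n
    var_x = sum((xi - mean_x) ** 2 for xi in x) / n
    beta = cov_xy / var_x
    alpha = mean_y - beta * mean_x
    return alpha, beta

market = [0.01, 0.02, -0.01, 0.03]
portfolio = [0.005 + 1.2 * m for m in market]  # built to have known alpha/beta
alpha, beta = capm_alpha_beta(portfolio, market)
```

Because the sample portfolio is constructed as 1.2 times the market plus 0.5%, the regression recovers beta of 1.2 and alpha of 0.005.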

But risk involves more than just individual company risk.  It also involves how one company’s stock is correlated with the stocks of other companies.  If you can properly estimate the correlations among various stocks, then using Markowitz’s approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.
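For intuition, here is the two-asset special case of Markowitz's idea, a sketch with assumed volatilities and an assumed correlation; the closed-form minimum-variance weight shows how diversification can push portfolio volatility below that of either stock alone:

```python
import math

# Two-asset Markowitz sketch (assumed inputs): the minimum-variance
# weight on asset 1, given volatilities s1, s2 and correlation rho.

def min_variance_weights(s1, s2, rho):
    cov = rho * s1 * s2
    w1 = (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)
    return w1, 1.0 - w1

def portfolio_vol(w1, w2, s1, s2, rho):
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Hypothetical 30% and 20% volatilities with a 0.2 correlation:
w1, w2 = min_variance_weights(0.30, 0.20, 0.2)
vol = portfolio_vol(w1, w2, 0.30, 0.20, 0.2)
```

With these assumed inputs the minimum-variance mix has roughly 18% volatility, below the 20% of the less-volatile stock on its own.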

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks.  Graham was not concerned about any correlations among the cheap stocks.  As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha.  (Beta as a concept has some obvious problems.)  But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17).  Moreover, alpha is excess return relative to an index or benchmark.  We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors.  Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data).  (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return.  Greiner explains:

The stock market is not a repeatable event.  Every day is an out-of-sample environment.  The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets.  (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable.  It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is coined factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)
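The low-correlation requirement above can be checked directly.  This sketch (hypothetical factor-return series) computes the Pearson correlation between two candidate alpha sources, say the return streams of a low-P/E sort and a low-P/B sort; a correlation near 1 means the two factors carry largely the same information and must be accounted for in the model:

```python
import math

# Pearson correlation between two hypothetical alpha-source return
# series.  The two series below are deliberately similar, mimicking
# the overlap between low-P/E and low-P/B portfolios.

def pearson_corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    sa = math.sqrt(sum((x - ma) ** 2 for x in a) / n)
    sb = math.sqrt(sum((y - mb) ** 2 for y in b) / n)
    return cov / (sa * sb)

low_pe_returns = [0.02, -0.01, 0.03, 0.01, -0.02]       # hypothetical
low_pb_returns = [0.015, -0.012, 0.028, 0.008, -0.015]  # hypothetical
corr = pearson_corr(low_pe_returns, low_pb_returns)     # close to 1
```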

 

RISKY BUSINESS

Risk means more things can happen than will happen.  – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed.  The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed.  Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management.  Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk.  Clearly there have been countless situations (e.g., very overvalued stocks) when stock prices were not volatile but risk was clearly high.  Similarly, there have been many situations (e.g., very undervalued stocks) when volatility was high but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear.  In these situations, a spike in volatility is a symptom of risk.  At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values.  So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear.  Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich.  Close the doors.  Be fearful when others are greedy.  Be greedy when others are fearful.  – Warren Buffett

So how do you figure out risk exposures?  It is often a difficult thing to do.  Greiner defines ELE events as extinction-level events, or extreme-extreme events.  If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.”  (page 33)

But even considering financial and economic history, in general it is not a certain guide to the future.  Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.  

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing.  Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value—whether net asset value or earnings power.  This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience.  When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences.  This is what quants do.  They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen.  We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety.  Do we not wear seat belts because of the odd chance of the tractor-trailer collision?  Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan.  Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism.  The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes.  In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow.  The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks.  Eventually, the interconnectedness of the associations among sticks results in an avalanche.  Likewise, so behaves the market.  (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so.  Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation.  Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out.  Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful.  Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty.  Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event.  When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before.  This is not really very helpful.  The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards.  That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner:

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor.  If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe.  Do the same for all other factors.  The numerical B/P for a stock is then termed exposure.  Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta.  The variance and covariance of the beta time series act as proxies for the variance of the stocks.  These are the components of the covariance matrix.  On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, calculate expected returns, multiply the stock weight vector by the exposure matrix times the beta vector for a given date. The exposure matrix is N x M, where N is the number of stocks and M is the number of factors.  The covariance matrix is M x M, and the exposed risks, predicted through the model, are derived from it.
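Steps 4 through 7 can be sketched for the single-factor case.  The panels below are illustrative; a real model would use many factors, many more dates, and a full covariance matrix with off-diagonal terms:

```python
# A one-factor sketch of steps 4-7 (illustrative panels, not real
# data).  Each date, regress that date's stock returns
# cross-sectionally on the stocks' factor exposures (e.g., z-scored
# B/P); the slope is that date's "factor return."  The variance of
# the resulting time series is an on-diagonal entry of the factor
# covariance matrix.

def cross_sectional_slope(exposures, returns):
    """OLS slope of returns on exposures for a single date (step 4)."""
    n = len(exposures)
    mx = sum(exposures) / n
    my = sum(returns) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(exposures, returns)) / n
    var = sum((x - mx) ** 2 for x in exposures) / n
    return cov / var

def factor_return_series(exposure_panel, return_panel):
    """One factor return per date (steps 5-6)."""
    return [cross_sectional_slope(x, r)
            for x, r in zip(exposure_panel, return_panel)]

def variance(series):
    """Variance of the factor-return time series (step 7)."""
    m = sum(series) / len(series)
    return sum((b - m) ** 2 for b in series) / len(series)

# Three dates, three stocks; returns built to load 0.03 on the factor.
exposure_panel = [[1.0, 0.0, -1.0], [0.5, -0.5, 0.0], [2.0, 1.0, 0.0]]
return_panel = [[0.03 * x for x in row] for row in exposure_panel]
betas = factor_return_series(exposure_panel, return_panel)  # each ~0.03
```

Because the toy returns load exactly 0.03 on the factor every date, the factor-return series is constant and its variance is zero; with real data the betas would wander, and their variance and covariances would fill the risk model's covariance matrix.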

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry.  Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in.  This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 in one industry and 0.31 in another.  Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries.  However, FactSet’s MAC model does include this operation.  (page 45)

 

BETA IS NOT “SHARPE” ENOUGH

Value investors like Ben Graham know that price variability is not risk.  Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power).  Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good.  Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky.  In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power.  Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value.  (pages 55-56)

Saying that volatility represents risk is to mistake the often emotional opinions of Mr. Market for the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, when errors are purely random (independent, identically distributed, and not serially correlated), their aggregate tends toward a normal distribution.  The problem in modeling stock returns is that the return mean varies with time and the errors are not random:

Of course there are many, many types of distributions.  For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete.  Some of these are asymmetric about the mean (first moment) and some are not.  Some have fat tails and some do not.  You can even have distributions with infinite second moments (infinite variance).  There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance.  Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause.  Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly.  (page 58)

Even though market prices have been known to have non-normal distributions and to maintain statistical dependence, most people modeling market prices have downplayed this information.  It’s just been much easier to assume that market returns follow a random walk resulting in random errors, which can be easily modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave.  Market returns tend to have fatter tails.  But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of more accurate distributions are not fully clear.
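The fat-tail point can be made concrete with two textbook distributions.  A standard normal puts about 0.27% of its mass beyond three standard units, while a fat-tailed distribution such as the Cauchy (one of the distributions Greiner names) puts roughly 20% out there.  This sketch computes both tail probabilities from their closed-form CDFs:

```python
import math

# Tail mass beyond +/- 3 "units" for a standard normal versus a
# standard Cauchy (a textbook fat-tailed distribution).  Closed-form
# CDFs: normal via the error function, Cauchy via the arctangent.

def normal_two_sided_tail(k):
    phi = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))  # P(Z <= k)
    return 2.0 * (1.0 - phi)

def cauchy_two_sided_tail(k):
    return 1.0 - 2.0 * math.atan(k) / math.pi

normal_tail = normal_two_sided_tail(3.0)  # ~0.0027
cauchy_tail = cauchy_two_sided_tail(3.0)  # ~0.2048
```

The Cauchy puts more than 50 times as much probability in the tails as the normal, which is why a model calibrated to normal errors badly understates the chance of extreme moves.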

You can make an analogy with behavioral finance.  Thousands of experiments have now established that people often behave less than fully rationally, especially when making decisions under uncertainty.  However, the most useful economic models in many situations are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH?  It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful.  In a world of Econs, I believe that the EMH would be true.  And it would not have been possible to do research in behavioral finance without the rational model as a starting point.  Without the rational framework, there are no anomalies from which we can detect misbehavior.  Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research.  We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.  (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed.  Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’  There are definitely anomalies:  sometimes the market overreacts, and sometimes it underreacts.  But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right.  The price is often wrong, and sometimes very wrong, says Thaler.  However, that doesn’t mean that you can beat the market.  It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution.  (page 60)
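Both moments are easy to compute.  The sketch below calculates sample skewness and excess kurtosis (the fourth standardized moment minus 3, so a normal distribution scores zero) for an illustrative return series with one large loss; the result is negative skew and positive excess kurtosis, the pattern Greiner describes:

```python
# Sample skewness (third standardized moment) and excess kurtosis
# (fourth standardized moment minus 3) for an illustrative return
# series.  The single large loss fattens the left tail.

def skew_and_excess_kurtosis(returns):
    n = len(returns)
    m = sum(returns) / n
    devs = [r - m for r in returns]
    m2 = sum(d ** 2 for d in devs) / n  # variance
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0    # normal distribution -> 0
    return skew, excess_kurt

# Illustrative daily returns with one large loss (fat left tail):
returns = [0.01, 0.005, -0.002, 0.008, -0.09, 0.004, 0.006, -0.001]
skew, kurt = skew_and_excess_kurtosis(returns)  # skew < 0, kurt > 0
```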

One problem with beta is that it has covariance in the numerator, and covariance captures only linear dependence.  If two variables are linearly independent, their covariance is zero; but a zero or very low beta does not tell you that the portfolio is truly independent of the benchmark, because the two can still be related nonlinearly.  Greiner explains that you can check for linear dependence as follows:  When the benchmark has moved X in the past, has the portfolio consistently moved, say, 0.92*X?  If yes, then the portfolio and the benchmark are linearly dependent, the return of the portfolio can be expressed as a simple multiple of the benchmark’s return, and beta has some validity.  But when the dependence is nonlinear, beta can be near zero even though the portfolio is anything but independent of the benchmark, in which case beta is meaningless.
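A quick illustration of why a low beta need not mean independence: if the portfolio's return is an exact nonlinear function of the benchmark's (the data below are contrived for the demonstration), the covariance, and hence beta, comes out zero even though the two are perfectly dependent:

```python
# Contrived demonstration: portfolio returns are an exact (nonlinear)
# function of benchmark returns, yet the covariance, the numerator of
# beta, is zero.  Beta would wrongly suggest independence.

def covariance(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

benchmark = [-0.02, -0.01, 0.0, 0.01, 0.02]
portfolio = [r * r for r in benchmark]  # fully determined by the benchmark

beta_numerator = covariance(portfolio, benchmark)  # ~0 despite dependence
```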

Another example would be a market sell-off in which most stocks become highly correlated.  Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund.  The distribution of returns is more like a Frechet distribution than a normal distribution in two ways:  it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74).  The Magellan Fund returns also have a fat tail on the right side of the distribution.  This is where the Frechet misses.  But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution.  Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  http://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The goal of the Boole Microcap Fund is to outperform the Russell Microcap Index over time, net of fees.  The Boole Fund has low fees. 

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.
