Valaris (VAL)—The Cheapest Stock I’ve Ever Seen

(Image: Zen Buddha Silence, by Marilyn Barbone)

March 8, 2020

We continue with examples of Boole’s quantitative investment process in action.

A few weeks ago, we looked at Ranger Energy Services (RNGR): https://boolefund.com/ranger-energy-services-rngr/

Before that, we looked at Macro Enterprises (Canada: MCR.V): https://boolefund.com/macro-enterprises-mcr-v/

This week, we are going to look at Valaris (VAL)—the largest offshore oil driller in the world and the cheapest stock I’ve ever seen—which comes out near the top of the quantitative screen employed by the Boole Microcap Fund.  That result comes from a four-step process.

Step One

First we screen for cheapness based on five metrics.  Here are the numbers for Valaris:

    • EV/EBITDA = 4.01
    • P/E = 0.14
    • P/B = 0.01
    • P/CF = 0.07
    • P/S = 0.03

These figures—especially P/E and P/B—make Valaris one of the top ten cheapest companies out of over two thousand that we ranked.  (Note: This assumes a medium-case recovery with EBITDA at $1,510 million and net income at $700 million.  The current market capitalization is $99 million.)
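Here is a minimal sketch of how these ratios can be computed.  The EBITDA, net income, and market capitalization figures come from this post (mid case); the debt, cash, book value, cash flow, and sales inputs are illustrative placeholders chosen only to roughly reproduce the ratios above, not Valaris’s actual financials.

```python
# Minimal sketch of Step One: the five cheapness metrics.
# EBITDA, net income, and market cap are from the post (mid case);
# the remaining inputs are illustrative placeholders, not Valaris's
# actual balance-sheet figures.

def cheapness_metrics(market_cap, total_debt, cash, ebitda,
                      net_income, book_value, cash_flow, sales):
    ev = market_cap + total_debt - cash   # enterprise value
    return {
        "EV/EBITDA": ev / ebitda,
        "P/E": market_cap / net_income,
        "P/B": market_cap / book_value,
        "P/CF": market_cap / cash_flow,
        "P/S": market_cap / sales,
    }

# Figures in $ millions.
metrics = cheapness_metrics(
    market_cap=99, total_debt=6_100, cash=100,        # debt and cash are placeholders
    ebitda=1_510, net_income=700,                     # mid-case figures from the post
    book_value=9_500, cash_flow=1_400, sales=3_300,   # placeholders
)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
```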

Step Two

Next we calculate the Piotroski F-Score, which is a measure of the fundamental strength of the company.  For more on the Piotroski F-Score, see my blog post here: https://boolefund.com/piotroski-f-score/

Valaris has a Piotroski F-Score of 7.  (The best score possible is 9, while the worst score is 0.)  This is a very good score.
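For reference, here is a compact sketch of the standard nine-signal F-Score.  It follows Piotroski’s published definition; our exact implementation may differ in data sources and details.

```python
# Sketch of the standard nine-signal Piotroski F-Score.
# `cur` and `prev` are dicts of fundamentals for the current and
# prior fiscal years; each true signal adds one point.

def piotroski_f_score(cur, prev):
    signals = [
        cur["roa"] > 0,                                        # 1. positive return on assets
        cur["cfo"] > 0,                                        # 2. positive operating cash flow
        cur["roa"] > prev["roa"],                              # 3. improving ROA
        cur["cfo"] > cur["net_income"],                        # 4. cash flow exceeds net income (low accruals)
        cur["lt_debt_to_assets"] < prev["lt_debt_to_assets"],  # 5. falling long-term leverage
        cur["current_ratio"] > prev["current_ratio"],          # 6. improving liquidity
        cur["shares_out"] <= prev["shares_out"],               # 7. no new share issuance
        cur["gross_margin"] > prev["gross_margin"],            # 8. improving gross margin
        cur["asset_turnover"] > prev["asset_turnover"],        # 9. improving asset turnover
    ]
    return sum(signals)   # 0 (worst) to 9 (best)
```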

Step Three

Then we rank the company based on low debt, high insider ownership, and shareholder yield.

We measure debt levels by looking at the ratio of total liabilities (TL) to total assets (TA).  Valaris has TL/TA of 45.0%, which is reasonable.

Insider ownership is important because it aligns the interests of the people running the company with those of other shareholders.  At Valaris, insider ownership is approximately 5%.  That isn’t a high percentage, but it still represents roughly $5 million of insider ownership.

Shareholder yield is the dividend yield plus the buyback yield.  The company has no dividend and is not buying back shares.  Thus, shareholder yield is practically zero.

Each component of the ranking has a different weight.  The overall combined ranking of Valaris places it in the top 5 stocks on our screen, or the top 0.2% of the more than two thousand companies we ranked.
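To make this step concrete, here is an illustrative sketch of how the Step Three inputs could be combined into a single score.  The weights and scaling below are assumptions for illustration only, not the fund’s actual weighting scheme.

```python
# Illustrative sketch of Step Three: combining low debt, insider
# ownership, and shareholder yield into one score.  The weights and
# scaling below are assumptions, not the fund's actual scheme.

def shareholder_yield(dividends_paid, net_buybacks, market_cap):
    """Dividend yield plus buyback yield."""
    return (dividends_paid + net_buybacks) / market_cap

def step_three_score(tl_ta, insider_pct, sh_yield, weights=(0.4, 0.3, 0.3)):
    low_debt = 1.0 - tl_ta                    # lower TL/TA scores higher
    w_debt, w_insider, w_yield = weights      # assumed weights
    return w_debt * low_debt + w_insider * insider_pct + w_yield * sh_yield

# Valaris inputs from the post: TL/TA = 45%, insiders ~5%, no dividends or buybacks.
print(step_three_score(tl_ta=0.45, insider_pct=0.05, sh_yield=0.0))
```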

Step Four

The final step is to study the company’s financial statements, presentations, and quarterly conference calls in order to (i) check for non-recurring items, hidden liabilities, and bad accounting; and (ii) estimate intrinsic value—how much the business is worth—using low, mid, and high scenarios.

See the company presentation (dated February 2020): https://s23.q4cdn.com/956522167/files/doc_presentations/2020/02/02212020-Valaris-Investor-Presentation.pdf

Valaris is the largest offshore oil driller in the world, with a presence on six continents and in nearly all major offshore markets.  The company has a large and diverse customer base that includes major, national, and independent E&P companies.

Valaris has 16 drillships, 10 semisubmersibles, and 50 jackups.  It has one of the highest-quality fleets: 11 of its 16 drillships are of the highest specification, and 13 of its 50 jackups are heavy-duty ultra-harsh- and harsh-environment rigs.  Customers prefer high-spec assets.

Valaris is also one of the best-capitalized drillers.  The company has a market capitalization of $99 million and $2.5 billion in contracted revenue backlog (excluding bonus opportunities).  It has $1.7 billion in liquidity, including $100 million in cash and $1.6 billion in available credit.  And it has only $858 million in debt maturities through 2024.  Valaris is one of only two public offshore drillers with no guaranteed or secured debt in its capital structure.  With its fleet’s asset value at $9.1 billion (according to third-party estimates), Valaris has ample flexibility to raise additional capital if needed.

In April 2019, Ensco plc (ESV) and Rowan Companies plc (RDC) merged in an all-stock transaction.  The combination (renamed Valaris) has brought together two world-class operators with common cultures.  Both companies have strong track records of safety and operational excellence.  And both companies have a strategic focus on innovative technologies that increase efficiencies and lower costs.  Ensco was rated #1 in customer satisfaction for nine straight years according to a leading independent survey.

As a result of the merger, Valaris has already achieved cost savings of $135 million pre-tax per year.  The company expects to achieve additional savings of $130 million a year.  The full savings will be $265 million a year, which is $100 million more a year than the company initially projected.

Intrinsic value scenarios:

    • Low case: If oil prices languish below $55 (WTI) for the next 3 to 5 years, Valaris should survive, thanks to its large fleet, globally diverse customer base, industry-leading performance, low cost structure, and well-capitalized balance sheet.  In this scenario, Valaris is likely worth at least 10 percent of its current (depressed) book value of $48.15 per share.  That’s $4.82, about 860% higher than today’s $0.50.
    • Mid case: If oil prices range between $55 and $75 over the next 3 to 5 years—which is likely based on long-term supply and demand—then Valaris is probably worth at least its current (depressed) book value of $48.15 a share, roughly 9,530% higher than today’s $0.50.
    • High case: EBITDA under a full recovery is approximately $4 billion.  Fair value can be conservatively estimated at 6x EV/EBITDA.  That implies an EV (enterprise value) of $24 billion and a market cap of $17.6 billion, which works out to $88.94 a share—over 17,685% higher than today’s $0.50.  (The arithmetic is worked through in the sketch below.)
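Here is that high-case arithmetic as a short sketch.  The EBITDA figure, the 6x multiple, and the $0.50 share price come from the scenarios above; net debt and share count are implied by the post’s numbers ($24 billion EV versus $17.6 billion market cap, and a $99 million market cap at $0.50 per share), not taken from Valaris’s filings.

```python
# High-case valuation sketch.  EBITDA, the 6x multiple, and the $0.50
# share price come from the post; net debt and share count are implied
# by the post's figures, not taken from company filings.

ebitda_full_recovery = 4_000          # $ millions, full-recovery EBITDA (from the post)
ev_multiple = 6                       # conservative EV/EBITDA (from the post)
net_debt = 6_400                      # $ millions, implied by $24B EV vs. $17.6B market cap
shares_out = 99 / 0.50                # ~198 million shares, implied by $99M market cap at $0.50

ev = ev_multiple * ebitda_full_recovery      # $24,000 million enterprise value
market_cap = ev - net_debt                   # $17,600 million
per_share = market_cap / shares_out          # about $88.9 per share
upside = per_share / 0.50 - 1                # about 17,680% above today's $0.50

print(f"Fair value: ${per_share:.2f} per share, upside {upside:.0%}")
```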

Bottom Line

Valaris (VAL) is the cheapest stock I’ve ever encountered.  Assuming the return of normal circumstances within the next 3 to 5 years, the potential upside is roughly 9,530%.  We are “trembling with greed” to buy VAL for the Boole Microcap Fund.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Preparing to Buy Beaten-Down Stocks: Covid-19 Is Going Global

(Image: Zen Buddha Silence, by Marilyn Barbone)

March 1, 2020

No one can predict what the stock market will do, even if a recession appears to be unfolding.  That said, 2009 to 2019 was a bull market for U.S. stocks, and it wouldn’t be surprising if the S&P 500 Index experienced a bear market.  Covid-19, the disease associated with the novel coronavirus SARS-CoV-2, may cause a U.S. recession—even if only a mild one—which could in turn cause a bear market for U.S. stocks.  In fact, a bear market for stocks may already have begun.

Periodically there are bear markets. The important thing for a long-term value investor is to be prepared to buy the most undervalued stocks in the event of a bear market.  It’s important to be able to buy when everybody else is selling.

Let’s now look at an article by The Economist, “Covid-19 is now in 50 countries, and things will get worse.”  Link: https://www.economist.com/briefing/2020/02/29/covid-19-is-now-in-50-countries-and-things-will-get-worse

The article states:

Studies suggest that the number of people who have left China carrying the disease is significantly higher than would be inferred from the cases so far reported to have cropped up elsewhere, strongly suggesting that the virus’s spread has been underestimated.  Some public-health officials still talk in terms of the window for containment coming closer and closer to closing.  In reality, it seems to have slammed shut.

The article continues:

As of the morning of February 27th, stock markets had fallen by 8% in America, 7.4% in Europe and 6.2% in Asia over the past seven days.  The industries, commodities and securities that are most sensitive to global growth, cross-border commerce and densely packed public spaces got whacked particularly hard, with the prices of oil and shares in airlines, cruise-ship owners, casinos and hotel companies all tumbling.  Investors have taken refuge in assets that are perceived to be safe: yields on ten-year Treasury bonds reached an all-time low of 1.3%.  The place least hit was China, where a huge sell-off took place some time ago.  Investors, like some public-health officials, are starting to think that the epidemic there is, for now, under control… But if economic models developed for other diseases hold good, the rich world stands a distinct chance of slipping into recession as the epidemic continues.  That will bring China, and everyone else, a fresh set of problems.

The article adds:

How the virus will spread in the weeks and months to come is impossible to tell. Diseases can take peculiar routes, and dally in unlikely reservoirs, as they hitchhike around the world.  Two cases in Lebanon lead to worries about the camps in which millions of people displaced from Syria are now crowded together and exposed to the winter weather.  But regardless of exactly how the virus spreads, spread it will.  The World Health Organisation (WHO) has not yet pronounced covid-19 a pandemic—which is to say, a large outbreak of disease affecting the whole world.  But that is what it now is.

Part of the WHO’s reticence is that the P-word frightens people, paralyses decision making and suggests that there is no further possibility of containment.  It is indeed scary—not least because, ever since news of the disease first emerged from Wuhan, the overwhelming focus of attention outside China has been the need for a pandemic to be avoided.  That many thousands of deaths now seem likely, and millions possible, is a terrible thing.  But covid-19 is the kind of disease with which, in principle, the world knows how to deal.

The course of an epidemic is shaped by a variable called the reproductive rate, or R.  It represents, in effect, the number of further cases each new case will give rise to.  If R is high, the number of newly infected people climbs quickly to a peak before, for want of new people to infect, starting to fall back again… If R is low the curve rises and falls more slowly, never reaching the same heights.  With SARS-COV-2 now spread around the world, the aim of public-health policy, whether at the city, national or global scale, is to flatten the curve, spreading the infections out over time.

If R is low and fewer people are infected, that gives health-care systems more time to develop better treatments, which would mean a lower death rate.
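To illustrate the point, here is a toy SIR-style simulation showing how a lower reproductive rate flattens and delays the epidemic peak.  The parameter values are purely illustrative and are not estimates for covid-19.

```python
# Toy SIR model: a lower reproductive rate R flattens and delays the
# peak of new infections.  Parameters are illustrative only, not
# estimates for covid-19.

def epidemic_curve(r0, population=1_000_000, initial_infected=100,
                   infectious_days=10, days=365):
    s, i, r = population - initial_infected, initial_infected, 0
    beta = r0 / infectious_days        # daily transmission rate
    gamma = 1 / infectious_days        # daily recovery rate
    new_cases = []
    for _ in range(days):
        infections = beta * i * s / population
        recoveries = gamma * i
        s, i, r = s - infections, i + infections - recoveries, r + recoveries
        new_cases.append(infections)
    return new_cases

for r0 in (2.5, 1.5):
    curve = epidemic_curve(r0)
    peak_day = max(range(len(curve)), key=curve.__getitem__)
    print(f"R = {r0}: peak of {curve[peak_day]:,.0f} new cases/day on day {peak_day}")
```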

The virus seems to be transmitted mainly through droplets that infected people cough or sneeze into the air.  So transmission can be reduced through good hygiene, physical barriers, and limits on the various ways that people mingle.  These measures are routinely used to lessen the spread of the influenza virus, which kills hundreds of thousands of people a year.  The article continues:

Influenza, like many other respiratory diseases, thrives in cold and humid air.  If covid-19 behaves the same way, spreading less as the weather gets warmer and drier, flattening the curve will bring an extra benefit.  As winter turns to spring then summer, the reproductive rate will drop of its own accord.  Dragging out the early stage of the pandemic means fewer deaths before the summer hiatus and provides time to stockpile treatments and develop new drugs and vaccines—efforts towards both of which are already under way.

Ben Cowling, an epidemiologist at the University of Hong Kong, says that the intensity of the measures countries employ to flatten the curve will depend on how deadly SARS-COV-2 turns out to be.  It is already clear that, for the majority of people who get sick, covid-19 is not too bad, especially among the young: a cough and a fever.  In older people and those with chronic health problems such as heart disease or diabetes, the infection risks becoming severe and sometimes fatal.  How often it will do so, though, is not known.

Epidemiologists estimate the fatality rate of covid-19 at 0.5-1%.  This is higher than the 0.1% fatality rate of seasonal flu in America, but lower than the 10% fatality rate of SARS, a disease caused by another coronavirus, which broke out in 2003.

However, the fatality rate depends not only on the disease itself, but also on the quality of care received.  This means that poorer countries are at greater risk than richer countries.  Moreover, the economic effects of covid-19 will be worse in poorer countries.  The article says:

As the pandemic unfolds, the reproductive rate in different parts of the world will differ according both to the policies put in place and the public’s willingness to follow them.  Few countries will be able to impose controls as strict as China’s.  In South Korea the government has invoked the power to forcibly stop any public activities, such as mass protests; schools, airports and military bases are closed.  Japan is urging companies to introduce staggered working hours and virtual meetings, limiting both crowding on public transport and mingling at work.  Other developed countries are mostly not going that far, as yet.  Something that is acceptable in one country might result in barely any compliance, or even mass protests in another.

The article again:

Some hints of what may be to come can be gleaned from an economic model of an influenza pandemic created by Warwick McKibbin and Alexandra Sidorenko, both then at Australian National University, in 2006.  Covid-19 is not flu: it seems to hit people in the prime of their working life less often, which is good, but to take longer to recover from, which isn’t.  But the calculations in their model—which were being updated for covid-19 as The Economist went to press—give some sense of what may be to come…

Mr McKibbin says the moderate scenario in that paper looks closest to covid-19, which suggests a 2% hit to global growth.  That corresponds to calculations by Oxford Economics, a consultancy, which put the possible costs of covid-19 at 1.3% of GDP.  Such a burden would not be evenly spread.  Oxford Economics sees America and Europe both being tipped into recession—particularly worrying for Europe, which has little room to cut interest rates in response, and where the country currently most exposed, Italy, is already a cause for economic concern.  But poor countries would bear the biggest losses from a pandemic, relative to their economies’ size.

The article concludes:

As the world climbs the epidemic curve, biomedical researchers and public-health experts will rush to understand covid-19 better.  Their achievements are already impressive; there is realistic talk of evidence on new drugs within months and some sort of vaccine within a year.  Techniques of social distancing are already being applied.  But they will need help from populations that neither dismiss the risks nor panic.

 


Saudi America

(Image: Zen Buddha Silence, by Marilyn Barbone)

February 16, 2020

Journalist Bethany McLean has written an excellent book, Saudi America: The Truth About Fracking and How It’s Changing the World.  (McLean is the co-author of The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron.)

In January 2020, U.S. crude oil production approached 13 million barrels a day, outproducing both Saudi Arabia and Russia.  The EIA (U.S. Energy Information Administration) predicts that U.S. crude oil production will average 13.2 million barrels a day in 2020 and 13.6 million barrels a day in 2021.  McLean writes:

Few saw this coming.  This remarkable transformation in the U.S. was brought about by American entrepreneurs who figured out how to literally force open rocks often more than a mile below the surface of the earth, to produce gas, and then oil.  Those rocks—called shale, or source rock, or tight rock, and once thought to be impermeable—were opened by combining two technologies: horizontal drilling, in which the drill bit can travel well over two miles horizontally, and hydraulic fracturing, in which fluid is pumped into the earth at a high enough pressure to crack open hydrocarbon bearing rocks, while a so-called proppant, usually sand, holds the rock open a sliver of an inch so the hydrocarbons can flow.

McLean later adds:

The country’s newest hot spot, Texas’s Permian Basin, now ranks second only to Saudi Arabia’s legendary Ghawar oilfield in production per day, according to oil company ConocoPhillips.  Stretching through northern Appalachia, the Marcellus shale could be the second largest natural gas field in the world, according to geologists at Penn State.  Shale gas now accounts for over half of total U.S. production, according to the EIA, up from almost nothing a decade ago.

McLean again:

“It [shale] is monstrous,” says Will Fleckenstein, who drilled his first horizontal well in 1990 and is now a professor of petroleum engineering at the Colorado School of Mines.  In part due to ongoing improvements in technology, he says, “It is impossible to overstate the hydrocarbons that it is technically and economically feasible to produce.”

McLean continues:

CME Group executive director and senior economist Erik Norland calls fracking “one of the top five things reshaping geopolitics.”  Ever since President Franklin D. Roosevelt met the first Saudi king, Abdul-Aziz al Saud, aboard the USS Quincy in the Suez Canal in 1945, we’ve had a devil’s bargain: our protection in exchange for their oil.  The superficial analysis boils down to a simple question: If America doesn’t need Saudi oil, does America need Saudi Arabia?

Still, there are reasons to doubt the sustainability of U.S. fracking, as McLean explains:

The fracking of oil, in particular, rests on a financial foundation that is far less secure than most people realize.

The most vital ingredient in fracking isn’t chemicals, but capital, with companies relying on Wall Street’s willingness to fund them.  If it weren’t for historically low interest rates, it’s not clear there would even have been a fracking boom.

“You can make an argument that the Federal Reserve is entirely responsible for the fracking boom,” one private equity titan told me.  That view is echoed by Amir Azar, a fellow at Columbia University’s Center on Global Energy Policy.  “The real catalyst of the shale revolution was… the 2008 financial crisis and the era of unprecedentedly low interest rates it ushered in,” he wrote in a recent report.  Another investor puts it this way: “If companies were forced to live within the cash flow they produce, U.S. oil would not be a factor in the rest of the world, and would have grown at a quarter to half the rate that it has.”

Here’s the outline:

PART ONE: SHALE REVOLUTION

    • America’s Most Reckless Billionaire
    • The Brain Trust
    • Debt
    • Skeptics
    • Bust
    • It Changes the World, But It Ends in Tears

PART TWO: SAUDI AMERICA

    • America First
    • Permania
    • Game of Thrones
    • A New Era?
    • Make America Great Again
    • Losing the Race
    • Epilogue

 

Part One: Shale Revolution

AMERICA’S MOST RECKLESS BILLIONAIRE

McLean writes about Aubrey McClendon, co-founder of Chesapeake Energy Corporation and worth over $2 billion in early 2008:

If McClendon did die broke, it wouldn’t have been out of character.  During his years as an oil and gas tycoon, he fed on risk, and was as fearless as he was reckless.  He built an empire that at one point produced more gas than any American company except ExxonMobil.  Once, when an investor asked on a conference call, “When is enough?”, McClendon answered bluntly: “I can’t get enough.”

Many think that without McClendon’s salesmanship and his astonishing ability to woo investors, the world would be a far different place today.  Stories abound about how at industry conferences, executives from oil majors like Exxon would find themselves speaking to mostly empty seats, while people literally fought for space in the room where McClendon was holding forth.

“In retrospect, it was kind of like Camelot,” says Henry Hood, Chesapeake’s former general counsel, who worked at Chesapeake initially as a consultant from 1993 until the spring of 2013.  “There was a period of time that will never be duplicated with a company that will never be duplicated.”

Forbes once called McClendon “America’s Most Reckless Billionaire,” which for many in the industry defined the man, notes McLean.  McLean adds:

You might think of McClendon as a bit of J.R. Ewing, the fictional character in the television series Dallas, mixed with Michael Milken, the junk bond king who pioneered an industry and arguably changed the world, but spent several years in prison after pleading guilty to securities fraud.

McLean continues:

McClendon, who was always immensely popular, was the president and co-valedictorian of his senior class.  He headed to Duke, where he was the rush chairman of his fraternity… “He was super competitive and aggressive,” recalls someone who knew him at Duke.  “If he had a few drinks, he’d want to wrestle.  He was big and strong and a little bit out of control.”

[…]

In college, he also met Hood, who recalls a driven if rambunctious young man.  “Aubrey was thoughtful, tall, and handsome, but incredibly clumsy,” Hood recalls.  “He always had an ink stain on his shirt from the pen in his front pocket.  We called him ‘Aubspill,’ because he was always spilling.  In basketball, he was always throwing elbows like a bull in a China shop.”  Another variation of the nickname, Hood recalls, was “‘Aubkill,’ because McClendon’s outsized competitive instinct made him dangerous in physical activities.”

Initially, McClendon was going to be an accountant.  But then he read an article in the Wall Street Journal.

“It was about two guys who had drilled a big well in the Anadarko Basin that had blown out, and it was alleged to be the biggest blowout in the history of the country,” McClendon told Rolling Stone.  “They sold their stake to Washington Gas and Light and got a $100 million check.  I thought, ‘These are two dudes who just drilled a well and it happened to hit.’  So that really piqued my interest.”

McLean observes that the geologist M. King Hubbert predicted that U.S. oil production would peak sometime between 1965 and 1970.  American oil production did peak at 9.6 million barrels a day in 1970.  Interestingly, before then, the Texas Railroad Commission controlled the international price of oil by setting production at a certain level and maintaining spare capacity.  But after U.S. production peaked, OPEC—created in 1960 by Iran, Iraq, Kuwait, Saudi Arabia, and Venezuela—started gaining the power to control the price of oil.

When McClendon graduated from Duke in 1981, energy prices began declining.  McLean writes:

But McClendon was never one to be deterred.  He thought there was opportunity in assembling packages of drilling rights—for gas, not oil—either to be sold to bigger companies or to be drilled… In order to drill, you just have to persuade someone to give you a lease.  McClendon became what’s known in the oil and gas business as a land man, the person who negotiates the leases that allow for drilling.

That, it turned out, would make him the perfect person for the new world of fracking, which is not so much about finding the single gusher as it is about assembling the rights to drill multiple wells.  “Landmen were always the stepchild of the industry,” he later told Rolling Stone.  “Geologists and engineers were the important guys—but it dawned on me pretty early that all their fancy ideas aren’t worth very much if we don’t have a lease.  If you’ve got the lease and I don’t, you win.”

In 1983, McClendon partnered with another Oklahoman: Tom Ward.  Six years later, they formed Chesapeake Energy with a $50,000 investment.  McLean:

In some ways, they were an odd couple.  Bespectacled and balding, Ward came across as more of a typical businessman, whereas McClendon, with his flowing hair and Hollywood good looks, was the dynamo of the duo.  They divided the responsibilities, with McClendon happily playing the front man, raising money, and talking to the markets, while Ward stayed in the background running the business.  They operated out of separate buildings, with separate staffs…

Neither Ward nor McClendon were technological pioneers.  That distinction, most people agree, goes to a man named George Mitchell, who drew on research done by the government to experiment on the Barnett Shale, an area of tight rock in the Fort Worth basin of North Texas.  Using a combination of horizontal drilling and hydraulic fracturing, Mitchell’s team cracked the code for getting gas out of rock that was thought to be impermeable.

At the time, however, most companies—including ExxonMobil—and most observers ignored what Mitchell was doing, believing that it would be too expensive.  McLean continues:

McClendon, however, was the pioneer in the other essential part of the business: raising money.  “As oxygen is to life, capital is to the oil and gas business,” says Andrew Wilmot, a Dallas-based mergers and acquisitions advisor to the oil and gas industry at Purposed Ventures.  “This industry needs capital to fire on all cylinders, and the founder and father of raising capital for shale in the U.S. is Aubrey McClendon.”… “I never let Aubrey McClendon in the door for a meeting,” says an analyst who works for a big investment firm.  “Because we would have bought a ton of stock and it would not have ended well.  He was that good.”

In the early 1990s, Bear Stearns helped Chesapeake sell high-yield debt in a first-of-its-kind sort of deal.  This was no small achievement.  After all, Chesapeake didn’t have much of a track record, and there was less than zero interest in the oil and gas business from the investment community.  “I watched him convince people in these meetings,” says a banker who was there.  “He was so good, so sharp, with such an ability to draw people in.”

On February 12, 1993—a day McClendon would later describe as the best one of his career—he and Ward took Chesapeake public.  They did so despite the fact that their accounting firm, Arthur Andersen, had issued a “going concern” warning, meaning its bean-counters worried that Chesapeake might go out of business.  So McClendon and Ward simply switched accounting firms.  “Tom and I were thirty-three-year-old land men at the time, and most people didn’t think we had a clue of what we were doing, and probably in hindsight they were at least partially right,” McClendon told one interviewer in 2006.  Their IPO reduced their ownership stake to 60 percent, but both men kept for themselves an important perk, one that would play a key role in the Chesapeake story: They got the right to take a personal 2.5 percent stake in every well Chesapeake drilled.

McLean adds:

In the years following its IPO, Chesapeake was one of the best-performing stocks on Wall Street, climbing from $1.33 per share (split adjusted) to almost $27 per share.

There was an area called Austin Chalk where some highly successful wells had been drilled.  McClendon “went all in”: Chesapeake leased more than a million acres.  Unfortunately, much of this land turned out not to be productive.  Chesapeake took a $200 million charge against earnings, wiping out the previous three years of profits.  By 1998, Chesapeake stock was selling for $0.75.  McLean:

As he would do again and again, McClendon survived by borrowing yet more money to acquire more properties.  “Simply put, low prices cure low prices as consumers are motivated to consume more and producers are compelled to produce less,” he wrote in Chesapeake’s 1998 annual report.  McClendon essentially made a giant bet that gas prices would rise on their own, and he billed the properties he acquired “low risk.”  Luckily, he was right.  Within a few years, prices were soaring again, and McClendon had gotten out of a jam, for now.

Wall Street loved Chesapeake because of the fees it paid.

From 2001 to 2012, Chesapeake sold $16.4 billion in stock and $15.5 billion of debt, and paid Wall Street more than $1.1 billion in fees, according to Thomson Reuters Deals Intelligence.  McClendon was like no other client.

McClendon developed a reputation for overpaying.  There were stories of him offering ten times as much as other bidders.  But he was extremely hard-working, and would frequently send emails at 4:00 a.m.

McLean writes:

But with the sort of price increase that the market was experiencing at the time, it didn’t seem to matter what McClendon had paid.  Gas prices steadily marched upward, and by their peak in June 2008, they had more than doubled in just a few years.  Chesapeake’s stock moved in lockstep, recovering its losses from the 1990s, and more.  It hit over $65 a share in the summer of 2008, giving the company a market value of more than $35 billion.  That made McClendon’s shares worth some $2 billion.

McLean again:

To Wall Street investors, McClendon was delivering on what they wanted most: consistency and growth.  His pitch was that fracking had transformed the production of gas from a hit or miss proposition to one that operated with an on and off switch.  It was manufacturing, not wildcatting.  He became a flag waver for natural gas…

For a man steeped in the industry’s history of booms and busts, McClendon had by now convinced himself that gas prices would never fall.  In August 2008, he predicted that gas would stay in the $8 to $9 range for the foreseeable future.

In the spring and summer of 2008, Chesapeake raised $2.5 billion by selling stock and $2 billion by selling debt.  Moreover, with Chesapeake stock near an all-time high, McClendon continued to buy shares on margin.  Someone who knew him said McClendon always had to be “all in” because it was “so important to him to win.”

McLean notes that McClendon forgot that just as low prices cure low prices, so high prices cure high prices.

 

THE BRAIN TRUST

McLean writes about EOG Resources, which others called “the Brain Trust.”  EOG was a natural gas company.  But CEO Mark Papa realized that natural gas prices would stay low for decades, so he shifted EOG’s focus to oil production.

First, EOG began drilling for oil in the Bakken, a formation of about 200,000 square miles below parts of Montana, North Dakota, and Saskatchewan.  Then EOG began drilling for oil in the Eagle Ford shale under Texas.  McLean:

In the spring of 2010, EOG announced to a crowd of Wall Street investors at the Houston Four Seasons that the Eagle Ford contained over nine hundred million barrels of oil, enough to rival the Bakken.

McLean reports that money was pouring into the American energy business, which was a rare area of growth after the Great Recession.  Oil prices kept increasing, and no one thought they would fall again.  Soon the boom changed the U.S. economy.

 

DEBT

McLean writes:

As gas prices began to fall in 2008, so did Chesapeake’s stock, from its peak of $70 in the summer of 2008 to $16 by October.  With that steep slide, the value of the shares he’d pledged to banks in exchange for loans also fell—and the banks called his margin loans.  Rowland, whose office was about fifty feet away from McClendon’s, recalls him walking in and saying, “Marc, they’re selling me out.”  “It was a one minute conversation,” says Rowland.  “He went from a $2 billion net worth to a negative $500 million.  There wasn’t any sweat in his eye or anything like that.  It was just the way he was.”

Indeed, from October 8 to October 10, McClendon had to sell 94 percent of his Chesapeake stock.  “I would not have wished the past month on my worst enemy,” he said in a meeting.

The Chesapeake board of directors, which was one of the highest paid in the industry, gave McClendon a $75 million bonus.  This brought McClendon’s total pay for the year to $112 million, which made him the highest paid CEO in corporate America that year.  McLean:

Underlying all of McClendon’s enterprises was a vast and tangled web of debt.  That 2.5 percent stake in the profits from Chesapeake’s wells that McClendon and Ward had kept for themselves at the IPO had come with a hitch: McClendon had to pay his share of the costs to drill the wells.  Over time, according to a series of investigative pieces done by Reuters, he quietly borrowed over $1.5  billion from various banks and private equity firms, using the well interests as collateral.  Reuters, which entitled one piece “The Lavish and Leveraged Life of Aubrey McClendon,” also reported that much of what McClendon owned, from his stake in the Oklahoma City Thunder to his wine collection to his venture capital and hedge fund investments, was also mortgaged.

As for Chesapeake, the company continued to bleed cash.  McLean:

From 2002 to the end of 2012, there was never a year in which Chesapeake reported positive free cash flow (meaning the cash it generated from operations less its capital expenditures).  Over the decade ending in 2012, Chesapeake burned through almost $30 billion.

McLean says that Chesapeake loaded up on debt and sold stock to investors in order to cover its costs.  The company also sold some of its land.  Chesapeake was going to need higher natural gas prices in order to survive.

In January 2013, Chesapeake announced that McClendon would retire.  Right after leaving, McClendon started American Energy Partners, an umbrella company for a smorgasbord of new companies.  Within only a few years, McClendon had raised $15 billion in capital and his companies employed eight hundred people.  Meanwhile, the price of gas rose, hitting $6.

 

SKEPTICS

Short-sellers weren’t just skeptical of McClendon.  They were skeptical of the entire industry, which seemed to consume more capital than it produced.  The famous short-seller David Einhorn made a detailed case for shorting the shale industry at the 2015 Ira W. Sohn Investment Research Conference.  McLean:

Einhorn found that from 2006 to 2014, the fracking firms had spent $80 billion more than they had received from selling oil and gas.  Even when oil was at $100 a barrel, none of them generated excess cash flow—in fact, in 2014, when oil was at $100 for part of the year, the group burned through $20 billion.

McLean continues:

A key reason for the terrible financial results is that fracked oil wells in particular show an incredibly steep decline rate.  According to an analysis by the Kansas City Federal Reserve, the average well in the Bakken declines 69 percent in its first year and more than 85 percent in its first three years, while a conventional well might decline by 10 percent a year.  One energy analyst calculated that to maintain production of 1 million barrels per day, shale requires up to 2,500 wells, while production in Iraq can do it with fewer than 100.  For a fracking operation to show growth requires huge investment each year to offset the decline from the previous years’ wells.  To Einhorn, this was clearly a vicious circle.
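A rough back-of-the-envelope illustration of this treadmill appears below.  The initial well rate and the year-two and year-three decline rates are assumptions chosen only to be loosely consistent with the Bakken figures quoted above, not actual well data.

```python
# Rough illustration of the shale decline "treadmill."  Initial output
# and the later-year decline rates are assumptions loosely consistent
# with the Bakken figures quoted above, not actual well data.

initial_rate = 500                      # barrels/day from a new well (assumed)
annual_decline = [0.69, 0.35, 0.25]     # year-over-year declines (assumed profile)

def output_after_years(years):
    rate = initial_rate
    for decline in annual_decline[:years]:
        rate *= 1 - decline
    return rate

# With a ~69% first-year decline, most of this year's production must
# be replaced by next year's drilling just to keep output flat.
for year in range(4):
    print(f"Year {year}: {output_after_years(year):.0f} b/d "
          f"({1 - output_after_years(year) / initial_rate:.0%} below initial)")
```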

Furthermore, the shale revolution wouldn’t have been possible without the Federal Reserve’s ultra-low interest rate policy.  McLean:

Amir Azar, a fellow at Columbia’s Center on Global Energy Policy, wrote that by 2014, the industry’s net debt exceeded $175 billion, a 250 percent increase from its 2005 level.  But interest expense increased at less than half the rate debt did, because interest rates kept falling.

 

BUST

In late November 2014, at a meeting in Vienna, Saudi Arabia—led by its oil minister Ali Al-Naimi—and the rest of OPEC decided to leave production where it was rather than cut it.  The marginal cost to produce a barrel of oil in Saudi Arabia is “at most” $10, according to Al-Naimi, whereas it was as much as five times that figure for U.S. frackers.  Many people interpreted OPEC’s decision not to cut as an attempt to drive U.S. frackers out of business by causing the oil price to collapse.

By February 2016, a barrel of oil sold for just $26.  Soon came the reckoning:

One after another, debt-laden companies began to declare bankruptcy, with some two hundred of them eventually going bust.

Acquisitions had not worked out well:

As one investor put it: “All of the acquisitions of shale assets done by the majors and by international companies have been disasters.  The wildcatters made a lot of money, but the companies haven’t.”

 

IT CHANGES THE WORLD, BUT IT ENDS IN TEARS

McLean:

Even in those dark days, McClendon remained a true believer.  Rather than back down he doubled down.  He announced deal after deal… Even those who were skeptical of him were amazed.  “Look at this guy who mortgaged it all to start a new company in the teeth of a terrible decline,” says one financier.  “If he went out, he was going to go out in a blaze of glory.”

McLean continues:

Those who know McClendon and who backed him believe he might have survived the financial hell, maybe even raised the capital for a third go round.  But he could not escape the legal hell he also found himself in.

…In the spring of 2014, a year after McClendon had left Chesapeake, the state of Michigan brought criminal charges against the firm for conspiring with other companies to rig the bids in a 2010 state auction for oil and gas rights… McClendon and the CEO of a Canadian company named Encana had divvied up the state, agreeing not to bid on leases in each other’s allocated counties, according to the charges.  “Should we throw in 50/50 together here rather than trying to bash each other’s brains out on lease buying?” McClendon once asked an Encana executive, according to evidence filed in the case.

A raft of civil lawsuits were filed alleging that Chesapeake had used a similar strategy in other hot plays.

 

Part Two: Saudi America

AMERICA FIRST

McLean writes:

Citigroup chief economist Ed Morse said that the U.S. had the potential to become the “new Middle East,” and Leonardo Maugeri, a former director at Italian energy firm Eni who became a fellow at the Harvard Kennedy School’s Belfer Center for Science and International Affairs, coined the phrase “Saudi America” when his 2012 report predicted that the U.S. could one day rival Saudi Arabia’s fabled oil production.

McLean adds:

During the Obama Administration, U.S. and European politicians began pushing for America to accelerate the granting of permits for new LNG facilities so that the U.S. could export natural gas to Europe, weakening Russia’s ability to use its energy supplies as a political weapon… Ambassadors from Hungary, Poland, Slovakia, and the Czech Republic sent a letter asking Congress to allow the faster sale of more natural gas to Europe.

McLean describes how the U.S. ban on oil exports was eventually repealed.

Studies came out from various industry-friendly organizations arguing that free trade in oil would create nearly a million jobs, add billions to the economy, and would lower the trade deficit.

[…]

…Repeal of the export ban was ultimately tucked into the sprawling $1.1 trillion year-end 2015 spending bill.

 

PERMANIA

McLean:

While on the campaign trail, then-candidate Donald Trump began to talk about energy independence.  Upon election, he installed one of the most energy heavy cabinets in modern history, from ExxonMobil CEO Rex Tillerson as Secretary of State; to former Oklahoma Attorney General Scott Pruitt as head of the Environmental Protection Agency, which he had sued more than a dozen times to protect the interests of energy companies; to former oil and gas consultant Ryan Zinke as Secretary of the Interior.

Einhorn and other skeptics of fracking ended up being wrong.  McLean writes:

As it turns out, the demise of fracking that seemed so inevitable wasn’t inevitable after all.  There are a few reasons why that was the case, but it all begins with the Permian—Permania, some are now calling it.

Fly into Midland, and all you see across the flat dry land are windmills and drilling rigs.  “Outside of Saudi Arabia, the whole oil story today is West Texas,” one investor tells me.

McLean explains:

What set the Permian apart from other plays is geological luck.  Its oil- and gas-bearing rocks are laid down in horizontal bands… That means that one lease can give you multiple layers of hydrocarbons, and also that you can drill more efficiently, because you only have to use one expensive rig to access multiple layers.

Furthermore, unlike the Bakken and the Marcellus, the Permian already had infrastructure in place including pipelines.

In 2010, the Permian was producing nearly a million barrels a day.  In 2017, it was producing 2.5 million barrels a day.  Today (February 2020), the Permian is producing over 4.8 million barrels a day, making it the largest oil field in the world.  See: https://www.eia.gov/petroleum/drilling/#tabs-summary-2

As for overall oil production, the EIA predicts that the U.S. will produce 13.2 million barrels a day in 2020 and 13.6 million barrels a day in 2021.  See: https://www.eia.gov/outlooks/steo/report/us_oil.php

This makes the U.S. the world’s biggest energy power, since Saudi Arabia produces a maximum of 11 million barrels a day and is currently producing just under 10 million barrels a day in order to move oil prices higher.  Russia produces just under 11 million barrels a day.

McLean writes:

Until recently, the history of shale drilling was that operators would watch what others did, and if someone’s new technique got more oil and gas out of the ground, then everyone else would start doing that.

But now drillers are more data-driven, looking for repeatable ways to get oil out of the ground at the lowest cost.  Horizontal wells used to be about a mile long; now one company’s well extends almost four miles.  And while the average fracked well used 4 million pounds of sand in 2011, it now uses 12 million pounds.

McLean adds:

At the same time, other things are getting more minute and efficient.  Drillers are also executing smaller, more complex, and more frequent fractures.  These more precise fracks reduce the risk that the wells “communicate”—that one leaks into another, rendering them inoperable—so the wells can be drilled more closely together…

According to a 2016 paper by researchers at the Federal Reserve, not only are rigs drilling more wells, but each well is producing far more.  The extraction from the new wells in their first month of production has roughly tripled since 2008.  Break-even cost—the estimate of what it costs to get a barrel of oil out of the ground—has plunged.  Before the bust, it was supposedly around $70; analysts say it’s less than $50 now…

McLean:

The dramatic rebound made skeptics look spectacularly wrong—no one more so than David Einhorn.

Then McLean makes an important point:

And yet, even today, it is unclear if we will look back and see fracking as the beginning of a huge and lasting shift—or if we will look back wistfully, realizing that what we thought was transformative was merely a moment in time.  Because in its current financial form, the industry is still unsustainable, still haunted by McClendon’s twin ghosts of heavy debt and lack of cash flow.  “The industry has burned up cash whether the oil price was at $100, as in 2014, or at about $50, as it was during the past three months,” one analyst calculated in mid-2017.  According to his analysis, the biggest sixty firms in aggregate had used up an average of $9 billion per quarter from mid-2012 to mid-2017.

The availability of capital, made possible by low interest rates, has been central to the success of the shale industry, notes McLean:

Wall Street’s willingness to fund money-losing shale operators is, in turn, a reflection of ultra-low interest rates.  That poses a twofold risk to shale companies.  In his paper for Columbia’s Center on Global Energy Policy, Amir Azar noted that if interest rates rose, it would wipe out a significant portion of the improvement in break-even costs.

But low interest rates haven’t just meant lower borrowing costs for debt-laden companies.  The lack of return elsewhere also led pension funds, which need to be able to pay retirees, to invest massive amounts of money with hedge funds that invest in high yield debt, like that of energy firms, and with private equity firms—which, in turn, shoveled money into shale companies, because in a world devoid of growth, shale at least was growing.  Which explains why Lambert, the portfolio manager of Nassau Re, says “Pension funds were the enablers of the U.S. energy revolution.”

McLean points out that private equity firms raised over $100 billion—almost five times the usual amount—in 2016 to invest in natural resources.  Many private equity investors have done well because other investors have been willing to invest in energy companies based upon acreage owned rather than based upon a multiple of profits.  McLean:

“I view it as a greater fool business model,” one private equity executive tells me.  “But it’s one that has worked for a long time.”

Some believe that the U.S. is squandering its shale reserves by focusing on growth-at-all-costs.  McLean:

“Our view is that there’s only five years of drilling inventory left in the core,” one prominent investor tells me.  “If I’m OPEC, I would be laughing at shale.  In five years, who cares?  It’s a crazy system, where we’re taking what is a huge gift and what should be real for many years, not five years, and wasting it…”

That said, McLean notes that some longtime skeptics believe the industry is moving in the right direction by focusing on generating positive free cash flow.

 

GAME OF THRONES

McLean writes:

OPEC wasn’t as impervious to lower prices as Ali Al-Naimi, the Bedouin shepherd-turned-oil-minister, seemed to have suggested in the fall of 2014.  True, Saudi Arabia and other OPEC members spend a lot less money to get a barrel of oil out of the ground than do U.S. frackers.  But it turns out that Al-Naimi’s measurement was flawed.  That’s because the wealthy (and not-so-wealthy) oil-rich states have long relied on their countries’ natural resources to support patronage systems, in which revenue from selling natural resources underwrites generous social programs, subsidies, and infrastructure spending.  That, in turn, has helped subdue potential political upheavals…

This is an expensive way to run a country, and so the patronage system gave rise to the notion of the fiscal break-even price, which essentially is the average oil price that an oil state needs to balance its budget each year.  And it really does all come down to oil: McKinsey noted in a December 2015 analysis that Saudi Arabia gets about 90 percent of its government revenue from oil.
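As a back-of-the-envelope illustration of the fiscal break-even concept described above, consider the sketch below.  All of the figures are made-up placeholders, not actual Saudi budget data.

```python
# Back-of-the-envelope fiscal break-even: the oil price at which an
# oil state's budget balances.  All figures are made-up placeholders,
# not actual Saudi data.

govt_spending = 260e9          # annual government spending, $ (placeholder)
non_oil_revenue = 60e9         # annual non-oil revenue, $ (placeholder)
barrels_sold_per_year = 3.3e9  # barrels of oil sold per year (placeholder)

fiscal_breakeven = (govt_spending - non_oil_revenue) / barrels_sold_per_year
print(f"Fiscal break-even: ${fiscal_breakeven:.0f} per barrel")   # about $61 here
```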

McLean adds:

At the same time, a Game of Thrones was starting in Saudi Arabia.  In late 2014, King Abdullah bin Abdulaziz Al Saud, who had led Saudi Arabia for a decade, passed away, leading to the elevation of his brother, King Salman bin Abdulaziz, to the throne.  With that came the accession of the new king’s son, Mohammed bin Salman.  The new deputy crown prince, who has been nicknamed MBS, was then just thirty years old.  His father gave him unprecedented control over a huge portion of the economy, including oil.

McLean continues:

Transforming the system is exactly what Saudi Arabia is trying to do.  MBS is viewed as the architect of a breathtakingly audacious plan called Saudi Vision 2030.

Part of the plan is to increase the number of Saudis in private employment.  The funding for the plan is to come from the recent IPO of Saudi Aramco.  The IPO valued the company at about $1.7 trillion, making Saudi Aramco the largest public company in the world.  However, the IPO raised only about $25.6 billion, roughly a third of what MBS hoped for.  That’s because the company is listed only on the Tadawul, Saudi Arabia’s stock exchange, instead of being listed in New York or London as originally planned.  Due in part to weak oil prices, there wasn’t sufficient interest in Saudi Aramco’s IPO from global investors.  See: https://www.nytimes.com/2019/12/06/business/energy-environment/saudi-aramco-ipo.html

 

A NEW ERA?

McLean writes about the U.S. lowering oil imports significantly:

Limiting America’s dependence on unstable regions of the world may seem like an unalloyed positive, but the larger effects are quite murky.  “It’s a very, very difficult question as to whether it makes for a safer world or a less safe world,” says Norland… He worries that economic hardship can lead to increased terrorism and even civil war—a horror anywhere, but one that is especially horrible in countries like Angola, where a civil war that began in 1975 just ended.

McLean continues:

The shale revolution also figures into America’s often vexing relationship with China, the world’s second-largest economy.  China has overtaken the U.S. as the world’s biggest oil importer, but what’s even more astounding is that shipments of U.S. crude oil to China, which were nothing before the lifting of the export ban, hit almost $10 billion in 2017.  China is also on track to become the biggest importer of U.S. LNG… This could help reduce our trade deficit, and could be a healthy change from a world where the U.S. and China compete for scarce energy supplies.

Meanwhile, Saudi Arabia is worried that the oil for security bargain between Saudi Arabia and the U.S. is breaking down, as U.S. oil imports from Saudi Arabia continue to decline.  However, notes McLean:

There are a lot of reasons it’s in America’s interest to have a stable Middle East, whether it’s fighting terrorism, resisting the spread of nuclear weapons, protecting Israel—or keeping the global economy functioning.  Even if America doesn’t need Middle Eastern oil, its allies in Europe do, and China certainly does.  This isn’t just altruism.  In a world where over 40 percent of the S&P 500’s revenues come from outside the U.S., the American economy is dependent on the global economy.

 

MAKE AMERICA GREAT AGAIN

McLean says:

What is obvious, even today, is the enormous impact of shale gas on the domestic economy.  “The U.S. has the lowest cost energy prices of any OECD nation,” noted the Energy Center’s Stephen Arbogast.  Prices are hovering at less than half of prices in much of the rest of the world.  That means the energy intensive manufacturing in the U.S. “enjoys a significant advantage versus Europe, Japan, or China, all of whom depend upon imported oil and LNG for marginal supplies.”

McLean adds:

Since 2010, over three hundred chemical industry projects worth $181 billion have been announced in the U.S., according to the American Chemistry Council, a trade group representing chemical companies.

Furthermore, cheap natural gas is gradually replacing coal:

Cheap natural gas is what’s destroying coal, not tree-hugging liberals.  It’s now cheaper to build a power plant that uses natural gas than to build one fueled by coal.  As a result, the market share for coal in power generation has fallen from 50 percent in 2005 to around 30 percent today.

McLean points out that the replacement of coal by natural gas is probably positive for the environment.  In 2017, U.S. carbon dioxide emissions hit a twenty-five-year low.  However, there has been a huge rise in emissions of methane from the natural gas supply chain, and methane is a more potent greenhouse gas than carbon dioxide over short time horizons.

Meanwhile, instead of trying to improve the natural gas supply chain, the Trump Administration has been busy trying to bail out the coal industry.  McLean:

In the fall of 2017, the Department of Energy, under Rick Perry, announced a plan that would in effect force regional electricity grids to purchase large amounts of coal.  The ostensible reason is to ensure a supply of fuel that can be stored and called upon in the event of a disruption—but in addition to distorting markets and likely causing an increase in energy prices, the plan ran counter to the Department’s own study, which reported that increased reliance on natural gas and renewables was not reducing the reliability of the grid.

That said, the Trump Administration is taking other actions that would increase the supply of natural gas:

Trump’s initial proposal was to unlock our “$50 trillion in untapped shale, oil, and natural gas reserves,” and to further that goal, he signed an executive order to ease regulations on offshore drilling and eventually allow more to occur, particularly in the Arctic Ocean.  His administration has also proposed allowing drilling in the Arctic National Wildlife Refuge for the first time in forty years.  The effect of all this would be to make it harder to control methane leaks, and it would also likely crater prices, thereby making the economics of drilling even less attractive than they already are.  If this comes about at a time of rising interest rates and the end of the era of cheap capital, we may soon begin talking about how the Trump Administration killed the shale revolution.

 

LOSING THE RACE

No one can predict how long it will take for the world to transition to a place where we no longer use oil and gas.  McLean writes:

Most scholars think the transition will take decades.  But there are those who say it might come much more quickly.  Most prominently, in a 2015 study, Stanford University engineering professor Mark Jacobson and colleagues have argued that it was technically feasible for all fifty states to run on clean renewable energy by 2050, with an 80 percent conversion possible by 2030.  Of course, there’s also the opposite argument.  Fatih Birol, the executive director of the International Energy Agency, a Paris-based nongovernmental organization, doesn’t think we’ll see peak demand for oil any time soon, mainly because very few countries have any sort of fuel economy standards for trucks, and the use of renewables in freight transportation lags passenger vehicles enormously.  And while passenger cars make up about 25 percent of oil demand, other modes of transportation, like shipping, aviation, and freight account for almost 30 percent… OPEC, too, projects that demand will keep increasing through 2040.

McLean says that not knowing the precise timing doesn’t mean the U.S. government, led by Trump, should pretend that the transition isn’t coming.

President Trump has proposed slashing the budget for a division of the Department of Energy called the Office of Energy Efficiency and Renewable Energy… The proposed spending cuts caused the last seven heads of the office, including three who served under Republican presidents, to write a letter to Congress.  “We are unified that cuts of this magnitude… will do serious harm to this office’s critical work and America’s energy future,” they wrote.  Trump has also imposed tariffs on foreign-made solar panels, which could decrease installation volumes in coming years; Bloomberg called the tariffs “the biggest blow to renewables yet.”

Meanwhile, China and other countries, even including Saudi Arabia, are far ahead of the U.S. when it comes to the transition to clean energy.

 

EPILOGUE

McLean writes:

I found that it’s a fool’s errand to make bold predictions about what’s to come, but the most honest answer I found about the future came from research firm IHS Markit.

The firm has three scenarios.  The first, called Rivalry, is the base case.  Rivalry, IHS says, means “intense competition among energy sources plus evolutionary social and technology change.  Gas loosens oil’s grip on transport demand.  Renewables become increasingly competitive with gas, coal, and nuclear in power generation.”

The second scenario, called Autonomy, is a much faster-than-expected transition away from fossil fuels.  “Revolutionary changes in market, technology, and social forces decentralize the global energy supply and demand system.”

The last scenario is called Vertigo.  Vertigo means “economic and geopolitical uncertainty drive volatility and boom-bust cycles with economic concerns slowing the transition to a less carbon-intensive economy.”

McLean adds:

The potential for vertigo helps explain why Charlie Munger, the famous investor and thinker and longtime Warren Buffett sidekick, believes that we should conserve what we have, instead of drilling frenetically.  Munger argues that for all the eventual certainty of renewables, there is still no substitute for hydrocarbons in several essential aspects of modern life, namely transportation and agriculture.

Part of why the U.S. can feed its population is because yield per acre has increased dramatically in the modern era—but that’s in large part due to pesticides, nitrogenous fertilizers and other agricultural products that are made through use of hydrocarbons.

McLean continues:

As history shows, even oil and gas executives don’t have a clue what’s going to happen next.  Charlie Munger might be right.  Or shale oil and gas might do what shale oil and gas have done since the revolution began, and surprise to the upside.  EOG might discover ways to get oil economically out of other places we never thought we could get oil economically.  Or there could be a battery breakthrough tomorrow that renders oil obsolete more quickly than anyone ever dreamed.

McLean observes that we should recognize that America’s oil and gas resources are very different from each other.  America’s natural gas is ultra-low cost and will last at least a century.  As for oil, it’s not clear what the ultimate supply is, nor is it clear at what price oil is recoverable.

Furthermore, notes McLean, we should keep in mind the extent to which both oil and (to a lesser extent) gas drillers have been dependent upon low-cost capital, which may be less available when interest rates eventually rise.  McLean:

For the first time in perhaps forever, at least some long-term investors are aligned with conservationists, and they are trying to send a message that isn’t drill, baby, drill—but rather drill thoughtfully and profitably, so that more people benefit from America’s resources for longer, and it isn’t only executives getting a payday.

McLean adds:

In a recent letter to the firm’s clients, JP Morgan chief strategist Michael Cembalest wrote that one thing he considered critical for our future was “the ability to develop natural gas-powered vehicles and trains with lower fuel costs than gasoline- or diesel-powered counterparts, and with greater geopolitical fuel security.”

…The energy grid of the future will likely consist of mostly renewables but with the ability to rapidly add backup power from natural gas when wind, solar, and hydropower generation is low.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Ranger Energy Services (RNGR)

(Image:  Zen Buddha Silence by Marilyn Barbone.)

January 26, 2020

We continue with examples of Boole’s quantitative investment process in action.

Last week, we looked at Macro Enterprises (Canada: MCR.V): https://boolefund.com/macro-enterprises-mcr-v/

This week, we are going to look at Ranger Energy Services (RNGR), which may be an even better opportunity.

Ranger Energy Services comes out near the top of the quantitative screen employed by the Boole Microcap Fund.  This results from four steps.

Step One

First we screen for cheapness based on five metrics.  Here are the numbers for Ranger:

    • EV/EBITDA = 2.96
    • P/E = 17.51
    • P/B = 0.53
    • P/CF = 2.20
    • P/S = 0.31

These figures—especially EV/EBITDA, P/B, and P/CF—make Ranger Energy Services one of the top ten cheapest companies out of over two thousand that we ranked.
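For readers who want the mechanics, here is a minimal sketch of how these five ratios are computed.  The inputs are rough approximations back-solved from the ratios above (market cap of $107 million and enterprise value of $148 million are cited later in this article; the debt, cash, and income figures are illustrative assumptions, not numbers from Ranger's filings).

    # Minimal sketch of the five cheapness ratios used in Step One.
    # All inputs are in $ millions and are approximations, not filed figures.
    def cheapness_metrics(market_cap, total_debt, cash, ebitda,
                          net_income, book_value, operating_cash_flow, revenue):
        enterprise_value = market_cap + total_debt - cash
        return {
            "EV/EBITDA": enterprise_value / ebitda,
            "P/E": market_cap / net_income,
            "P/B": market_cap / book_value,
            "P/CF": market_cap / operating_cash_flow,
            "P/S": market_cap / revenue,
        }

    # Illustrative inputs chosen to roughly reproduce the ratios above.
    print(cheapness_metrics(market_cap=107, total_debt=48, cash=7, ebitda=50,
                            net_income=6.1, book_value=202,
                            operating_cash_flow=48.6, revenue=345))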

Step Two

Next we calculate the Piotroski F-Score, which is a measure of the fundamental strength of the company.  For more on the Piotroski F-Score, see my blog post here: https://boolefund.com/piotroski-f-score/

Ranger has a Piotroski F-Score of 8.  (The best score possible is 9, while the worst score is 0.)  This is an excellent score.
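For context, here is a rough sketch of the nine binary signals that make up the F-Score; each signal that passes adds one point.  This is a simplified rendering of Piotroski's checklist, not the exact code we use, and the field names are placeholders.

    # Sketch of the nine Piotroski signals; each passing test adds one point.
    # "cur" and "prior" are dicts of the current and prior fiscal year's figures.
    def piotroski_f_score(cur, prior):
        score = 0
        roa_cur = cur["net_income"] / cur["total_assets"]
        roa_prior = prior["net_income"] / prior["total_assets"]
        score += roa_cur > 0                                    # 1. positive return on assets
        score += cur["cfo"] > 0                                 # 2. positive operating cash flow
        score += roa_cur > roa_prior                            # 3. improving ROA
        score += cur["cfo"] > cur["net_income"]                 # 4. cash flow exceeds earnings
        score += (cur["lt_debt"] / cur["total_assets"]
                  < prior["lt_debt"] / prior["total_assets"])   # 5. falling leverage
        score += cur["current_ratio"] > prior["current_ratio"]  # 6. improving liquidity
        score += cur["shares_out"] <= prior["shares_out"]       # 7. no dilution
        score += cur["gross_margin"] > prior["gross_margin"]    # 8. improving gross margin
        score += (cur["revenue"] / cur["total_assets"]
                  > prior["revenue"] / prior["total_assets"])   # 9. improving asset turnover
        return int(score)

A score of 8, as Ranger has, means the company passes eight of these nine tests.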

Step Three

Then we rank the company based on low debt, high insider ownership, and shareholder yield.

We measure debt levels by looking at total liabilities (TL) to total assets (TA).  Ranger has TL/TA of 34.3%, which is fairly low.

Insider ownership is important because that means that the people running the company have interests that are aligned with the interests of other shareholders.  At Ranger, insiders own 60% of the shares.  This puts Ranger Energy Services in the top 2.2% of the more than two thousand companies we ranked according to insider ownership.

Shareholder yield is the dividend yield plus the buyback yield.  The company has no dividend.  Also, while it has bought back some shares—which is good because the shares appear quite undervalued (see Step Four)—this has been offset by the issuance and exercise of stock options.  Thus overall, the shareholder yield is zero.
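In formula terms, shareholder yield is simply cash returned to shareholders (dividends plus buybacks, net of share issuance) divided by market capitalization.  A minimal sketch with purely illustrative numbers:

    # Shareholder yield = dividend yield + net buyback yield.
    def shareholder_yield(dividends_paid, buybacks, share_issuance, market_cap):
        return (dividends_paid + buybacks - share_issuance) / market_cap

    # Illustrative: no dividend, and buybacks roughly offset by option-related issuance.
    print(shareholder_yield(dividends_paid=0.0, buybacks=5.0,
                            share_issuance=5.0, market_cap=107.0))  # -> 0.0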

Each component of the ranking has a different weight.  The overall combined ranking of Ranger Energy Services places it in the top 5 stocks on our screen, or the top 0.2% of the more than two thousand companies we ranked.

Step Four

The final step is to study the company’s financial statements, presentations, and quarterly conference calls to (i) check for non-recurring items, hidden liabilities, and bad accounting; and (ii) estimate intrinsic value—how much the business is worth—using scenarios for low, mid, and high cases.

See the company presentation: http://investors.rangerenergy.com/~/media/Files/R/Ranger-Energy-IR/reports-and-presentations/events-and-presentation-august-2019-final.pdf

Ranger operations are reported in three segments:

Completion & Other Services

    • Primary operations include wireline (plug & perf and pump down)
    • Well testing, snubbing, fluid hauling, and tank rental
    • Locations in Permian and DJ Basins

Well Service Rigs & Related Services

    • Primary operations include well completion support, workovers, well maintenance and P&A
    • Related equipment rentals include power swivels, well control packages, hydraulic catwalks, pipe racks and pipe handling tools
    • Locations in the Permian, DJ, Bakken, Eagle Ford, Haynesville, Gulf Coast, SCOOP/STACK

Processing Solutions

    • Primary operations include the rental of Modular Mechanical Refrigeration Units (“MRUs”) and other natural gas processing equipment
    • Locations in the Permian, Bakken, Utica, San Joaquin and Piceance

Ranger Energy Services has a market cap of $107 million and an enterprise value of $148 million.  The company recently signed new multi-year contracts with oil majors including Chevron and Conoco.  Ranger continues to gain market share due to its high spec rigs and relatively low debt levels.  Moreover, the company continues to pay down its debt—its target debt level is $0.

Intrinsic value scenarios:

    • Low case: Ranger is probably not worth less than book value, which is $12.98 per share.  That’s about 90% higher than today’s share price of $6.83.
    • Mid case: The company can achieve EBITDA of $75 million, and is likely worth at least EV/EBITDA of 5.0.  That translates into a share price of $21.41, which is 214% higher than today’s $6.83.
    • High case: Ranger has said that in a recovery, it could do EBITDA of $100 million.  The company may easily be worth at least EV/EBITDA of 6.0.  That translates into a share price of $35.83, which is about 425% higher than today’s $6.83.
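To make the arithmetic behind the mid and high cases explicit, here is a back-of-the-envelope sketch.  Net debt (about $41 million) and share count (about 15.7 million) are approximations implied by the market cap, enterprise value, and share price cited in this article.

    # Rough check of the mid and high intrinsic value cases ($ in millions, except per share).
    net_debt = 148 - 107          # enterprise value minus market cap
    shares_out = 107 / 6.83       # market cap divided by share price, ~15.7 million shares

    def value_per_share(ebitda, ev_multiple):
        equity_value = ebitda * ev_multiple - net_debt
        return equity_value / shares_out

    print(round(value_per_share(75, 5.0), 2))    # mid case: ~21.4 per share
    print(round(value_per_share(100, 6.0), 2))   # high case: ~35.7 per share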

Bottom Line

Ranger Energy Services is one of the top 5 most attractive stocks out of more than two thousand microcap stocks that we ranked using our quantitative screen.  Moreover, all the intrinsic value estimates—including the low case—are far above the current stock price.  As a result, we are “trembling with greed” to accumulate this stock for the Boole Microcap Fund.

Sources

In addition to company financial statements and presentations, I used information from the following two analyses of Ranger Energy Services:

(Note: If you have trouble accessing the www.valueinvestorsclub.com analysis, you can create a guest account, which is free.)

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Macro Enterprises (MCR.V)

(Image:  Zen Buddha Silence by Marilyn Barbone.)

January 19, 2020

The Boole Microcap Fund follows a quantitative investment process.  This year (2020), I am going to give examples of Boole’s investment process in action.

The first example is Macro Enterprises Inc. (Canada: MCR.V).  Macro Enterprises builds oil and natural gas pipelines, constructs energy-related infrastructure facilities, and performs maintenance and integrity work on existing pipelines.  The company operates primarily in western Canada and is headquartered in Fort St. John, British Columbia.

Macro Enterprises comes out near the top of the quantitative screen employed by the Boole Microcap Fund.  This results from four steps.

Note: All values in Canadian dollars unless otherwise noted.

Step One

First we screen for cheapness based on five metrics.  Here are the numbers for Macro Enterprises:

    • EV/EBITDA = 1.37
    • P/E = 3.72
    • P/B = 1.08
    • P/CF = 3.51
    • P/S = 0.26

These figures—especially EV/EBITDA, P/E, and P/S—make Macro Enterprises one of the top ten cheapest companies out of over two thousand that we ranked.

Step Two

Next we calculate the Piotroski F-Score, which is a measure of the fundamental strength of the company.  For more on the Piotroski F-Score, see my blog post here: https://boolefund.com/piotroski-f-score/

Macro Enterprises has a Piotroski F-Score of 7.  (The best score possible is 9, while the worst score is 0.)  This is a very good score.

Step Three

Then we rank the company based on low debt, high insider ownership, and shareholder yield.

Warren Buffett, arguably the greatest investor of all time, explains why low debt is important:

At rare and unpredictable intervals… credit vanishes and debt becomes financially fatal.

We measure debt levels by looking at total liabilities (TL) to total assets (TA).  Macro Enterprises has TL/TA of 27.17%, which is fairly low.

Insider ownership is important because that means that the people running the company have interests that are aligned with the interests of other shareholders.  Macro’s founder and CEO, Frank Miles, owns more than 30% of the shares outstanding.  Other insiders own about 3%.  This puts Macro Enterprises in the top 7% of the more than two thousand companies we ranked according to insider ownership.

Shareholder yield is the dividend yield plus the buyback yield.  The company has no dividend.  Also, while it has bought back a modest number of shares, this has been offset by the issuance and exercise of stock options.  Thus overall, the shareholder yield is zero.

Each component of the ranking has a different weight.  The overall combined ranking of Macro Enterprises places it in the top 5 stocks on our screen, or the top 0.2% of the more than two thousand companies we ranked.

Step Four

The final step is to study the company’s financial statements, presentations, and quarterly conference calls to (i) check for non-recurring items, hidden liabilities, and bad accounting; and (ii) estimate intrinsic value—how much the business is worth—using scenarios for low, mid, and high cases.

Macro Enterprises has been in operation for 25 years.  Over that time, it has earned a reputation for safety and reliability while becoming one of the largest pipeline construction companies in western Canada.  The company has a market cap of $121 million and an enterprise value of $98 million.

Macro has built a record backlog of $870+ million in net revenue over the next few years.  That is more than 7x the company’s current market cap.  Presently the company has at least a 16% EBITDA margin.  This translates into a net profit margin of at least 11%.  That means the company will earn at least 80% of its current market cap over the next few years.  (Peak net profit margins were around 15%—at these levels, the company would earn more than 100% of its market cap over the next few years.)
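To spell out the arithmetic behind that claim (figures are approximate):

    # Rough check of the backlog-to-earnings claim above (all $ in millions).
    backlog_revenue = 870
    market_cap = 121
    for net_margin in (0.11, 0.15):                  # current margin vs. roughly peak margin
        profit = backlog_revenue * net_margin
        print(net_margin, round(profit), round(profit / market_cap, 2))
    # ~11% margin -> ~$96M of profit, or ~0.8x the market cap
    # ~15% margin -> ~$131M of profit, or ~1.1x the market cap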

If you look at the $870+ million backlog, there are two large projects.  There’s the $375 million Trans Mountain Project, of which Macro’s interest is 50%.  And there’s the $900 million Coastal GasLink Project, of which Macro’s interest is 40%.  Importantly, both of these projects are largely cost-plus—as opposed to fixed price—which greatly reduces the company’s execution risk.  Macro can be expected to add new profitable projects to its backlog.

Furthermore, Macro performs maintenance and integrity work on existing pipelines.  The company has four master service agreements with large pipeline operators to conduct such work, which is a source of recurring, higher-margin revenue.

Intrinsic value scenarios:

    • Low case: Macro is probably not worth less than book value, which is $3.61 per share.  That’s about 7% lower than today’s share price of $3.89.
    • Mid case: The company is probably worth at least EV/EBITDA of 5.0.  That translates into a share price of $10.39, which is 167% higher than today’s $3.89.
    • High case: Macro may easily be worth at least EV/EBITDA of 8.0.  That translates into a share price of $16.17, which is about 316% higher than today’s $3.89.

Bottom Line

Macro Enterprises is one of the top 5 most attractive stocks out of more than two thousand microcap stocks that we ranked using our quantitative screen.  Moreover, the mid case and high case intrinsic value estimates are far above the current stock price.  As a result, we are “trembling with greed” to buy this stock for the Boole Microcap Fund.

Sources

In addition to company financial statements and presentations, I used information from the following three analyses of Macro Enterprises:

(Note: If you have trouble accessing the www.valueinvestorsclub.com analyses, you can create a guest account, which is free.)

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

This Time Is Different

(Image:  Zen Buddha Silence by Marilyn Barbone)

July 14, 2019

For a value investor who patiently searches for individual stocks that are cheap, predictions about the economy or the stock market are irrelevant.  In fact, most of the time, such predictions are worse than irrelevant because they could cause the value investor to miss some individual bargains.

Warren Buffett puts it best:

  • Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.
  • We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.
  • Market forecasters will fill your ear but never fill your wallet.
  • Forecasts may tell you a great deal about the forecaster; they tell you nothing about the future.
  • Stop trying to predict the direction of the stock market, the economy, interest rates, or elections.
  • [On economic forecasts:] Why spend time talking about something you don’t know anything about?  People do it all the time, but why do it?
  • I don’t invest a dime based on macro forecasts.

(Illustration by Eti Swinford)

No one has ever been able to predict the stock market with any sort of reliability.  Ben Graham—with a 200 IQ—was as smart or smarter than any value investor who’s ever lived.  And here’s what Graham said near the end of his career:

If I have noticed anything over these 60 years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

No one can predict the stock market, although anyone can get lucky once or twice in a row.  But if you’re patient, you can find individual stocks that are cheap.  Consider the career of Henry Singleton.

When he was managing Teledyne, Singleton built one of the best track records of all time as a capital allocator.  A dollar invested with Singleton would grow to $180.94 by the time of Singleton’s retirement 29 years later.  $10,000 invested with Singleton would have become $1.81 million.

Did Singleton ever worry about whether the stock market was too high when he was deciding how to allocate capital?  Not ever.  Not one single time.  Singleton:

I don’t believe all this nonsense about market timing.  Just buy very good value and when the market is ready that value will be recognized.

Had Singleton ever brooded over the level of the stock market, his phenomenal track record as a capital allocator would have suffered.

Top value investor Seth Klarman expresses the matter as follows:

In reality, no one knows what the market will do; trying to predict it is a waste of time, and investing based upon that prediction is a speculative undertaking.

If you’re not convinced that focusing on individual bargains—regardless of the economy or the market—is the wise approach, then let’s consider whether “this time is different.”  Why is this phrase important?  Because if things are never different, then you can bet on historical trends—and mean reversion; you can bet that P/E ratios will return to normal, which (if true) implies that the stock market today will probably fall.

Quite a few leading value investors—who have excellent track records of ignoring the crowd and being right—agree that the U.S. stock market today is very overvalued, at least based on historical trends.  (This group includes Rob Arnott, Jeremy Grantham, John Hussman, Frank Martin, Russell Napier, and Andrew Smithers.)

However, this time the crowd appears to be right and leading value investors wrong.  This time really is different.  Grantham admits:

[It] can be very dangerous indeed to assume that things are never different.

Here Grantham presents his views: https://www.barrons.com/articles/grantham-dont-expect-p-e-ratios-to-collapse-1493745553

Leading value investor Howard Marks:

The thing I find most interesting about investing is how paradoxical it is: how often the things that seem most obvious—on which everyone agrees—turn out not to be true.

 

THIS TIME IS DIFFERENT

The main reason it is not possible to predict the economy or the stock market is that both evolve over time.  As Howard Marks says:

Economics and markets aren’t governed by immutable laws like the physical sciences…

…sometimes things really are different…

Link: https://www.oaktreecapital.com/docs/default-source/memos/this-time-its-different.pdf

In 1963, Graham gave a lecture, “Securities in an Insecure World.”  Link: http://jasonzweig.com/wp-content/uploads/2015/03/BG-speech-SF-1963.pdf

In the lecture, Graham admits that the Graham P/E—based on ten-year average earnings of the Dow components—was much too conservative.  (The Graham P/E is now called the CAPE—cyclically adjusted P/E.)  Graham:

The action of the stock market since then would appear to demonstrate that these methods of valuations are ultra-conservative and much too low, although they did work out extremely well through the stock market fluctuations from 1871 to about 1954, which is an exceptionally long period of time for a test.  Unfortunately in this kind of work, where you are trying to determine relationships based upon past behavior, the almost invariable experience is that by the time you have had a long enough period to give you sufficient confidence in your form of measurement just then new conditions supersede and the measurement is no longer dependable for the future.

Because of the U.S. government’s more aggressive policy with respect to preventing a depression, Graham concluded that the U.S. stock market should have a fair value 50 percent higher.  (Graham explains this change in the 1962 edition of Security Analysis.)

Similar logic can be applied to the S&P 500 Index today—at just over 3,013.  Fed policy, moral hazard, lower interest rates, an aging population, slower growth, productivity, and higher profit margins (based in part on political and monopoly power) are all factors in the S&P 500 being quite high.

The great value investor John Templeton observed that when people say, “this time is different,” 20 percent of the time they’re right.

By traditional standards, the U.S. stock market looks high.  For instance, the CAPE is at 29+.  (The CAPE—formerly the Graham P/E—is the cyclically adjusted P/E ratio based on 10-year average earnings.)  The historical average CAPE is 16.6.
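For reference, the CAPE is just the index level divided by the average of the last ten years of inflation-adjusted earnings.  A minimal sketch, with placeholder earnings chosen only to land near the current reading (they are not actual S&P 500 data):

    # Sketch of the CAPE (cyclically adjusted P/E) calculation.
    # "real_earnings_last_10_years" holds ten annual earnings figures, each adjusted
    # to today's dollars with the CPI; the values below are placeholders.
    def cape(index_level, real_earnings_last_10_years):
        avg_real_earnings = sum(real_earnings_last_10_years) / len(real_earnings_last_10_years)
        return index_level / avg_real_earnings

    print(round(cape(3013, [85, 90, 95, 98, 100, 103, 105, 108, 112, 116]), 1))  # ~29.8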

If the stock market followed the pattern of history, then there would be mean reversion in stock prices, i.e., there would probably be a large drop in stock prices, at least until the CAPE approached 16.6.  (Typically the CAPE would overshoot on the downside and so would go below 16.6.)

But that assumes that the CAPE will still average 16.6 going forward.  Since 1996, according to Rob Arnott, the CAPE has been above 16.6 about 96% of the time, and above 24 about two-thirds of the time.  See: https://www.researchaffiliates.com/en_us/publications/articles/645-cape-fear-why-cape-naysayers-are-wrong.html

Here are some reasons why the average CAPE going forward could be 24 (or even higher) instead of 16.6.

  • Interest rates have gotten progressively lower over the past couple of decades, especially since 2009.  This may continue.  The longer interest rates stay low, the higher stock prices will be.
  • Perhaps the government has tamed the business cycle (at least to some extent).  Monetary and fiscal authorities may continue to be able to delay or avoid a recession.
  • Government deficits might not cause interest rates to rise, in part because the U.S. can print its own currency.
  • Government debt might not cause interest rates to rise.  (Again, the U.S. can print its own currency.)
  • Just because the rate of unemployment is low doesn’t mean that the rate of inflation will pick up.
  • Inflation may be structurally lower—and possibly also less volatile—than in the past.
  • Profit margins may be permanently higher than in the past.

Let’s consider each point in some detail.

 

LOWER INTEREST RATES

The longer rates stay low, the higher stock prices will be.

Warren Buffett pointed out recently that if 3% on 30-year bonds makes sense, then stocks are ridiculously cheap: https://www.cnbc.com/2019/05/06/warren-buffett-says-stocks-are-ridiculously-cheap-if-interest-rates-stay-at-these-levels.html

 

BUSINESS CYCLE TAMED

The current economic recovery is the longest recovery in U.S. history.  Does that imply that a recession is overdue?  Not necessarily.  GDP has been less volatile due in part to the actions of the government, including Fed policy.

Perhaps the government is finally learning how to tame the business cycle.  Perhaps a recession can be avoided for another 5-10 years or even longer.

 

GOVERNMENT DEFICITS MAY NOT CAUSE RATES TO RISE

Traditional economic theory says that perpetual government deficits will eventually cause interest rates to rise.  However, according to Modern Monetary Theory (MMT), a country that can print its own currency doesn’t need to worry about deficits.

Per MMT, the government first spends money and then later takes money back out in the form of taxes.  Importantly, every dollar the government spends ends up as a dollar of income for someone else.  So deficits are benign.  (Deficits can still be too big under MMT, particularly if they are not used to increase the nation’s productive capacity, or if there is a shortage of labor, raw materials, and factories.)

Interview with Stephanie Kelton, one of the most influential proponents of MMT: https://theglobepost.com/2019/03/28/stephanie-kelton-mmt/

 

MOUNTING GOVERNMENT DEBT MAY NOT CAUSE RATES TO RISE

Traditional economic theory says that government debt can get so high that people lose confidence in the country’s bonds and currency.  Stephanie Kelton:

The national debt is nothing more than a historical record of all of the dollars that were spent into the economy and not taxed back, and are currently being saved in the form of Treasury securities.

One key, again, is that the country in question must be able to print its own currency.

Kelton again:

MMT is advancing a different way of thinking about money and a different way of thinking about the role of taxes and deficits and debt in our economy.  I think it’s probably also safe to say that MMT has, I think, a superior understanding of monetary operations.  That means that we take banking and the Federal Reserve and Treasury operations and so forth very seriously, whereas more conventional approaches historically have rarely even found room in their models for things like money and finance and debt.

Let’s be clear.  MMT may be wrong, at least in part.  Many great economists—including Paul Krugman, Ken Rogoff, Larry Summers, and Janet Yellen—do not agree with MMT’s assertion that deficits and debt don’t matter for a country that can print its own currency.

 

UNEMPLOYMENT AND INFLATION

In traditional economic theory, the Phillips curve holds that there is an inverse relationship between the rate of unemployment and the rate of inflation.  As unemployment falls, wages increase, which causes inflation.  But if you look at the non-employment rate (rather than the unemployment rate), the labor market isn’t really tight.  The labor force participation rate is at its lowest level in more than 40 years.  That explains in part why wages and inflation have not increased.

 

INFLATION STRUCTURALLY LOWER

As Howard Marks has noted, inflation may be structurally lower than in the past, due to automation, the shift of manufacturing to low-cost countries, and the abundance of free/cheap stuff in the digital age.

Link again: https://www.oaktreecapital.com/docs/default-source/memos/this-time-its-different.pdf

 

PROFIT MARGINS PERMANENTLY HIGHER

Profit margins on sales and corporate profits as a percentage of GDP have both been trending higher.  This is due partly to “increased monopoly, political, and brand power,” according to Jeremy Grantham.  Link again: https://www.barrons.com/articles/grantham-dont-expect-p-e-ratios-to-collapse-1493745553

Furthermore, lower interest rates and higher leverage (since 1997) have contributed to higher profit margins, asserts Grantham.

I would add that software and related technologies have become much more important in the U.S. and global economy.  Companies in these fields tend to have much higher profit margins—even after accounting for lower rates, higher leverage, and increased monopoly and political power.

 

IGNORE FORECASTS AND DON’T TRY TO TIME THE MARKET; INSTEAD FOCUS ON INDIVIDUAL BUSINESSES

The most important point is that it’s not possible to predict the stock market, but it is possible—if you’re patient—to find individual stocks that are undervalued.  This is especially true if your assets are small enough to invest in microcap stocks.  In 1999, when the overall U.S. stock market was close to its highest valuation in history, Warren Buffett said:

If I was running $1 million, or $10 million for that matter, I’d be fully invested.

No matter how high the S&P 500 Index gets, there are hundreds of microcap stocks that are almost completely ignored, with no analyst coverage and with no large investors paying attention.  That’s why Buffett said during the stock bubble in 1999 that he’d be fully invested if he were managing a small enough sum.

Microcap stocks offer the highest potential returns because there are thousands of them and they are largely ignored.  That’s not to say that there are no cheap small caps, mid caps, or large caps.  Even when the broad market is high, there are at least a few undervalued large caps.  But the number of undervalued micro caps is always much greater than the number of undervalued large caps.

So it’s best to focus on micro caps in order to maximize long-term returns.  But whether you invest in micro caps or in large caps, what matters is not the stock market or the economy, but the price of the individual business.

If and when you find a business selling at a cheap stock price, then it’s best to buy regardless of economic and market conditions—and regardless of economic and market forecasts.  As Seth Klarman puts it:

Investors must learn to assess value in order to know a bargain when they see one.  Then they must exhibit the patience and discipline to wait until a bargain emerges from their searches and buy it, regardless of the prevailing direction of the market or their own views about the economy at large.

For example, if you find a conservatively financed business whose stock is trading at 20 percent of liquidation value, it makes sense to buy it regardless of how high the overall stock market is and regardless of what’s happening—or what might happen—in the economy.  Seth Klarman again:

We don’t buy ‘the market’.  We invest in discrete situations, each individually compelling.

Ignore forecasts!

(Illustration by Maxim Popov)

Peter Lynch:

Nobody can predict interest rates, the future direction of the economy, or the stock market.  Dismiss all such forecasts and concentrate on what’s actually happening to the companies in which you’ve invested.

Now, every year there are “pundits” who make predictions about the stock market.  Therefore, as a matter of pure chance, there will always be people in any given year who are “right.”  But there’s zero evidence that any of those who were “right” at some point in the past have been correct with any sort of reliability.

Howard Marks has asked: of those who correctly predicted the bear market in 2008, how many of them predicted the recovery in 2009 and since then?  The answer: very few.  Marks points out that most of those who got 2008 right were already disposed to bearish views in general.  So when a bear market finally came, they were “right,” but the vast majority missed the recovery starting in 2009.

There are always naysayers making bearish predictions.  But anyone who owned an S&P 500 Index fund from 2007 to present (mid 2019) would have done dramatically better than most of those who listened to naysayers.  Buffett:

Ever-present naysayers may prosper by marketing their gloomy forecasts.  But heaven help them if they act on the nonsense they peddle.

Buffett himself made a 10-year wager against a group of talented hedge fund (and fund of hedge fund) managers.  Buffett’s investment in an S&P 500 Index fund trounced the super-smart hedge funds.  See: http://berkshirehathaway.com/letters/2017ltr.pdf

Some very able investors have stayed largely in cash since 2011.  Meanwhile, the S&P 500 Index has increased close to 140 percent.  Moreover, many smart investors have tried to short the U.S. stock market since 2011.  Not surprisingly, some of these short sellers are down 50 percent or more.

This group of short sellers includes the value investor John Hussman, whose Hussman Strategic Growth Fund (HSGFX) is down nearly 54 percent since the end of 2011.  Compare that to a low-cost S&P 500 Index fund like the Vanguard 500 Index Fund Investor Shares (VFINX), which is up 140 percent since the end of 2011.

If you invested $10,000 in HSGFX at the end of 2011, you would have about $4,600 today.  If instead you had invested $10,000 in VFINX at the end of 2011, you would have about $24,000 today.  In other words, if you had invested with one of the “ever-present naysayers,” you would have about 20 percent of the value you otherwise would have gotten from a simple index fund.  HSGFX would have to gain roughly 400 percent relative to VFINX just to get back to even.
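A quick check of those figures:

    # Performance gap between the two hypothetical $10,000 investments described above.
    hsgfx_value = 4_600.0
    vfinx_value = 24_000.0
    print(round(hsgfx_value / vfinx_value, 2))       # ~0.19, i.e. about a fifth of the index result
    print(round(vfinx_value / hsgfx_value - 1, 2))   # ~4.2, i.e. roughly a 420% relative gain needed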

Please don’t misunderstand.  John Hussman is a brilliant and patient investor.  (Also, I made a very similar mistake from 2011 to 2013.)  But Hussman, along with many other highly intelligent value investors—including Rob Arnott, Frank Martin, Russell Napier, and Andrew Smithers—has missed the strong possibility that this time really may be different, i.e., that the average CAPE (cyclically adjusted P/E) going forward may be 24 or higher instead of 16.6.

The truth—fair value—may be somewhere in-between a CAPE of 16.6 and a CAPE of 24.  But even in that case, HSGFX is unlikely to increase 400 percent relative to the S&P 500 Index.

Jeremy Grantham again:

[It] can be very dangerous indeed to assume that things are never different.

As John Maynard Keynes is (probably incorrectly) reported to have said:

When the information changes, I alter my conclusions.  What do you do, sir?

 

WARREN BUFFETT: U.S. STOCKS VS. GOLD

In his 2018 letter to Berkshire Hathaway shareholders, Warren Buffett writes about “The American Tailwind.”  See pages 13-14: http://www.berkshirehathaway.com/letters/2018ltr.pdf

Buffett begins this discussion by pointing out that he first invested in American business when he was 11 years old in 1942.  That was 77 years ago.  Buffett “went all in” and invested $114.75 in three shares of Cities Service preferred stock.

Buffett then asks the reader to travel back two 77-year periods prior to his purchase.  The year is 1788, one year before George Washington became the first president of the United States.

Buffett asks:

Could anyone then have imagined what their new country would accomplish in only three 77-year lifetimes?

Buffett continues:

During the two 77-year periods prior to 1942, the United States had grown from four million people – about 1⁄2 of 1% of the world’s population – into the most powerful country on earth.  In that spring of 1942, though, it faced a crisis: The U.S. and its allies were suffering heavy losses in a war that we had entered only three months earlier.  Bad news arrived daily.

Despite the alarming headlines, almost all Americans believed on that March 11th that the war would be won.  Nor was their optimism limited to that victory.  Leaving aside congenital pessimists, Americans believed that their children and generations beyond would live far better lives than they themselves had led.

The nation’s citizens understood, of course, that the road ahead would not be a smooth ride.  It never had been.  Early in its history our country was tested by a Civil War that killed 4% of all American males and led President Lincoln to openly ponder whether “a nation so conceived and so dedicated could long endure.”  In the 1930s, America suffered through the Great Depression, a punishing period of massive unemployment.

Nevertheless, in 1942, when I made my purchase, the nation expected post-war growth, a belief that proved to be well-founded.  In fact, the nation’s achievements can best be described as breathtaking.

Let’s put numbers to that claim: If my $114.75 had been invested in a no-fee S&P 500 index fund, and all dividends had been reinvested, my stake would have grown to be worth (pre-taxes) $606,811 on January 31, 2019 (the latest data available before the printing of this letter).  That is a gain of 5,288 for 1.  Meanwhile, a $1 million investment by a tax-free institution of that time – say, a pension fund or college endowment – would have grown to about $5.3 billion.

[…]

Those who regularly preach doom because of government budget deficits (as I regularly did myself for many years) might note that our country’s national debt has increased roughly 400-fold during the last of my 77-year periods.  That’s 40,000%!   Suppose you had foreseen this increase and panicked at the prospect of runaway deficits and a worthless currency.  To “protect” yourself, you might have eschewed stocks and opted instead to buy 3 1⁄4 ounces of gold with your $114.75.

And what would that supposed protection have delivered?  You would now have an asset worth about $4,200, less than 1% of what would have been realized from a simple unmanaged investment in American business.  The magical metal was no match for the American mettle.

Our country’s almost unbelievable prosperity has been gained in a bipartisan manner.  Since 1942, we have had seven Republican presidents and seven Democrats.  In the years they served, the country contended at various times with a long period of viral inflation, a 21% prime rate, several controversial and costly wars, the resignation of a president, a pervasive collapse in home values, a paralyzing financial panic and a host of other problems.  All engendered scary headlines; all are now history.
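A rough sense of the compounding behind those numbers:

    # Implied compound annual growth rates in Buffett's example (approximate).
    years = 77                                           # 1942 to early 2019
    stock_cagr = (606_811 / 114.75) ** (1 / years) - 1   # ~11.8% per year for the index investment
    gold_cagr = (4_200 / 114.75) ** (1 / years) - 1      # ~4.8% per year for gold
    print(round(stock_cagr, 3), round(gold_cagr, 3))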

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Art of Value Investing

(Image:  Zen Buddha Silence by Marilyn Barbone.)

March 11, 2018

The Art of Value Investing (Wiley, 2013) is an excellent book by John Heins and Whitney Tilson.  Heins and Tilson have been running the monthly newsletter, Value Investor Insight, for a decade now.  Over that time, they have interviewed many of the best value investors in the world.  The Art of Value Investing is a collection of quotations carefully culled from those interviews.

I’ve selected and discussed the best quotes from the following areas:

  • Margin of Safety
  • Humility, Flexibility, and Patience
  • “Can’t Lose”: Shorting the U.S. Stock Market
  • “Can’t Lose”: Shorting the Japanese Yen
  • Courage
  • Cigar Butts
  • Opportunities in Micro Caps
  • Predictable Human Irrationality
  • Long-Term Time Horizon
  • Screening and Quantitative Models

 

MARGIN OF SAFETY

(Ben Graham, by Equim43)

Ben Graham, the father of value investing, stressed having a margin of safety by buying well below the probable intrinsic value of a stock.  This is essential because the future is uncertain.  Also, mistakes are inevitable.  (Good value investors tend to be right 60 percent of the time and wrong 40 percent of the time.)  Jean-Marie Eveillard:

Whenever Ben Graham was asked what he thought would happen to the economy or to company X’s or Y’s profits, he always used to deadpan, ‘The future is uncertain.’  That’s precisely why there’s a need for a margin of safety in investing, which is more relevant today than ever.

Value investing legend Seth Klarman:

People should be highly skeptical of anyone’s, including their own, ability to predict the future, and instead pursue strategies that can survive whatever may occur.  

The central idea in value investing is to figure out what a business is worth (approximately), and then pay a lot less to acquire part ownership of that business via stock.  Howard Marks:

If I had to identify a single key to consistently successful investing, I’d say it’s ‘cheapness.’  Buying at low prices relative to intrinsic value (rigorously and conservatively derived) holds the key to earning dependably high returns, limiting risk and minimizing losses.  It’s not the only thing that matters—obviously—but it’s something for which there is no substitute.

 

HUMILITY, FLEXIBILITY, AND PATIENCE

(Image by Wilma64)

Successful value investing, to a large extent, is about having the right mindset.  Matthew McLennan identifies humility, flexibility, and patience as key traits:

Starting with the first recorded and reliable history that we can find—a history of the Peloponnesian war by a Greek author named Thucydides—and following through a broad array of key historical global crises, you see recurring aspects of human nature that have gotten people into trouble:  hubris, dogma, and haste.  The keys to our investing approach are the symmetrical opposite of that:  humility, flexibility, and patience.

On the humility side, one of the things that Jean-Marie Eveillard firmly ingrained in the culture here is that the future is uncertain.  That results in investing with not only a price margin of safety, but in companies with conservative balance sheets and prudent and proven management teams….

In terms of flexibility, we’ve been willing to be out of the biggest sectors of the market…

The third thing in terms of temperament we think we value more than most other investors is patience.  We have a five-year average holding period….We like to plant seeds and then watch the trees grow, and our portfolio is often kind of a portrait of inactivity.

It’s hard to overstate the importance of humility in investing.  Many of the biggest investing mistakes have occurred when intelligent investors who have succeeded in the past have developed high conviction in an idea that happens to be wrong.  Kyle Bass explains this point clearly:

You obviously need to develop strong opinions and to have the conviction to stick with them when you believe you’re right, even when everybody else may think you’re an idiot.  But where I’ve seen ego get in the way is by not always being open to questions and to input that could change your mind.  If you can’t ever admit you’re wrong, you’re more likely to hang on to your losers and sell your winners, which is not a recipe for success.

It often happens in investing that ideas that seem obvious or even irrefutable turn out to be wrong.  The very best investors—such as Warren Buffett, Charlie Munger, Seth Klarman, Howard Marks, Jeremy Grantham, George Soros, and Ray Dalio—have developed enough humility to admit when they’re wrong, even when all the evidence seems to indicate that they’re right.

Here are two great examples of how seemingly irrefutable ideas can turn out to be wrong:

  • shorting the U.S. stock market;
  • shorting the Japanese yen.

 

“CAN’T LOSE”: SHORTING THE U.S. STOCK MARKET

(Illustration by Eti Swinford)

Professor Russell Napier is the author of Anatomy of the Bear (Harriman House, 4th edition, 2016).  Napier was a top-rated analyst for many years and has been studying and writing about global macro strategy for institutional investors since 1995.

Napier has maintained (at least since 2012) that the U.S. stock market is significantly overvalued based on the Q-ratio and also the CAPE (cyclically adjusted P/E).  Moreover, Napier points out that every major U.S. secular bear market bottom in the last 100 years or so has seen the CAPE approach single digits.  The catalyst for the major drop has always been either inflation or deflation, states Napier.

Napier continues to argue that U.S. stocks are overvalued and that deflation will cause the U.S. stock market to drop significantly, similar to previous secular bear markets.

Many highly intelligent value investors—at least since 2012 or 2013—have maintained high cash balances and/or short positions because they essentially agree with Napier’s argument.

However, no one has ever been able to predict the stock market.  But if you follow the advice of most great value investors, you just focus on investing in individual businesses that you can understand.  There’s no need to try to predict the unpredictable.

That’s not to say there won’t be a large drop in the S&P 500 Index at some point.  But Napier was arguing—starting even before 2012—that the S&P 500 Index was overvalued at levels around 1200-1500 and that it would fall possibly as low as 400.  It’s now roughly six years later and the S&P 500 Index has recently exceeded 2700-2800.  Moreover, Jeremy Grantham, an expert on bubbles and fully aware of arguments by bears like Napier, has recently suggested the S&P 500 Index could exceed 3400-3700 before any serious break.

If the market exceeds 3400 or 3700 and then falls to 1700-2000, Napier still wouldn’t be right because he originally suggested a fall from 1200-1500 towards levels near 400.  Napier is one of the smartest market historians in the world.  This demonstrates that no one has ever been able to predict the stock market.  That’s what great value investors—including Ben Graham, Henry Singleton, Warren Buffett, Charlie Munger, Peter Lynch, and Seth Klarman—have always maintained.

The basic reason the stock market can’t be predicted is that the economy changes and evolves over time.

  • For example, Fed policy in recent decades has been to keep interest rates quite low for years in order to prevent deflation.  Very low rates cause stocks to be much higher than otherwise.
  • Profit margins are arguably higher to the extent that software (and related technologies) has become much more important in the U.S. and global economy.  The five largest U.S. companies are Google, Apple, Microsoft, Facebook, and Amazon, all technology companies.  Lower corporate taxes are likely giving a further boost to profit margins.

Jeremy Grantham, co-founder of GMO, is one of the most astute value investors who tracks fair value of the S&P 500 Index.  Grantham used to think, back in 2012-2013, that the U.S. secular bear market was not over.  Then he partially revised his view and predicted that the S&P 500 Index was likely to exceed 2250-2300.  This level would have made the S&P 500’s value two standard deviations above the historical mean, indicating that it was back in bubble territory according to GMO’s definition.

Recently, in June 2017, Grantham has revised his view again.  See: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/viewpoints—i-do-indeed-believe-the-us-market-will-revert-toward-its-old-means-just-very-slowly

Grantham says mean reversion for profit margins and for the CAPE (cyclically adjusted P/E) is likely, but will probably take 20 years rather than 7 years (which previously was sufficient for mean reversion).  That’s because the factors that support margins and the CAPE are themselves changing very slowly.  Those factors include Fed policy including moral hazard, lower interest rates, an aging population, slower growth, productivity, and increased political and monopoly power for corporations.

In January 2018, Grantham updated his view yet again: https://www.gmo.com/docs/default-source/research-and-commentary/strategies/asset-allocation/viewpoints—bracing-yourself-for-a-possible-near-term-melt-up.pdf?sfvrsn=4

Grantham now asserts that a market melt-up is likely over the next 6 months to 2 years.  Grantham suggests that the S&P 500 Index will exceed 3400 or 3700.  Prices are already high, but few of the usual signs of euphoria are present, which is why Grantham thinks the S&P 500 Index is not quite back to bubble territory.

The historian has to emphasize the big picture: In general are investors getting clearly carried away?  Are prices accelerating?  Is the market narrowing?  And, are at least some of the other early warnings from the previous great bubbles falling into place?

(Image by joshandandreaphotography)

As John Maynard Keynes is (probably incorrectly) reported to have said:

When the information changes, I alter my conclusions.  What do you do, sir?

There are some very smart value investors—such as Frank Martin and John Hussman—who still basically agree with Russell Napier’s views.  They may eventually be right.

But no one has ever been able to predict the stock market.  Ben Graham—with a 200 IQ—was as smart or smarter than any value investor who’s ever lived.  And here’s what Graham said near the end of his career:

If I have noticed anything over these sixty years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

In 1963, Graham gave a lecture, “Securities in an Insecure World.”  Link: https://www8.gsb.columbia.edu/rtfiles/Heilbrunn/Schloss%20Archives%20for%20Value%20Investing/Articles%20by%20Benjamin%20Graham/DOC005.PDF

In the lecture, Graham admits that the Graham P/E—based on ten-year average earnings of the Dow components—was much too conservative.  Graham:

The action of the stock market since then would appear to demonstrate that these methods of valuations are ultra-conservative and much too low, although they did work out extremely well through the stock market fluctuations from 1871 to about 1954, which is an exceptionally long period of time for a test.  Unfortunately in this kind of work, where you are trying to determine relationships based upon past behavior, the almost invariable experience is that by the time you have had a long enough period to give you sufficient confidence in your form of measurement just then new conditions supersede and the measurement is no longer dependable for the future.

Graham goes on to note that, in the 1962 edition of Security Analysis, Graham and Dodd addressed this issue.  Because of the U.S. government’s more aggressive policy with respect to preventing a depression, Graham and Dodd concluded that the U.S. stock market should have a fair value 50 percent higher.

Similar logic can be applied to the S&P 500 Index today, at about 2,783.  Fed policy (including moral hazard), lower interest rates, an aging population, slower growth in GDP and productivity, and increased political and monopoly power for corporations are all factors in the S&P 500 being quite high.  But Grantham is most likely right that there won’t be a true bubble until there are more signs of investors getting carried away.  Grantham reminds readers that a bubble is “Excellent Fundamentals Euphorically Extrapolated.”  With the global economy doing nicely, the “excellent fundamentals” part of that condition is in place.

None of this suggests that an investor should attempt market timing.  Value investors can still find individual stocks that are undervalued, even though there are fewer today than a few years ago.  But trying to time the market itself has almost never worked except by luck.  This has been observed not only by Graham, but also by Peter Lynch, Seth Klarman, Henry Singleton, and Warren Buffett.  Peter Lynch is one of the best investors; Klarman is even better; Buffett is arguably the best; and Singleton was even smarter than Buffett.

(Illustration by Maxim Popov)

Peter Lynch:

Nobody can predict interest rates, the future direction of the economy, or the stock market.  Dismiss all such forecasts and concentrate on what’s actually happening to the companies in which you’ve invested.

Seth Klarman:

In reality, no one knows what the market will do; trying to predict it is a waste of time, and investing based upon that prediction is a speculative undertaking.

Now, every year there are “pundits” who make predictions about the stock market.  Therefore, as a matter of pure chance, there will always be people in any given year who are “right.”  But there’s zero evidence that any of those who were “right” at some point in the past have been correct with any sort of reliability.

Howard Marks has asked: of those who correctly predicted the bear market in 2008, how many of them predicted the recovery in 2009 and since then?  The answer: very few.  Marks points out that most of those who got 2008 right were already disposed to bearish views in general.  So when a bear market finally came, they were “right,” but the vast majority missed the recovery starting in 2009.

There are always naysayers making bearish predictions.  But anyone who owned an S&P 500 index fund from 2007 to present (early 2018) would have done dramatically better than most of those who listened to naysayers.  Buffett:

Ever-present naysayers may prosper by marketing their gloomy forecasts.  But heaven help them if they act on the nonsense they peddle.

Buffett himself made a 10-year wager against a group of talented hedge fund (and fund of hedge fund) managers.  The S&P 500 Index fund trounced the super-smart hedge funds.  See: http://berkshirehathaway.com/letters/2017ltr.pdf

Some very able investors have stayed largely in cash since 2011-2012.  The S&P 500 Index has more than doubled since then.  Moreover, many have tried to short the U.S. stock market since 2011-2012.  Some are down 50 percent or more, while the S&P 500 Index has more than doubled.  The net result of that combination is to end up with only 15-25% of what a simple S&P 500 index fund would have produced.

Henry Singleton, a business genius (reportedly within 100 points of being a chess grandmaster) who was easily one of the best capital allocators in American business history, never relied on financial forecasts—despite operating through a secular bear market from 1968 to 1982:

I don’t believe all this nonsense about market timing. Just buy very good value and when the market is ready that value will be recognized.

Warren Buffett puts it best:

  • Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.
  • We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.
  • Market forecasters will fill your ear but never fill your wallet.
  • Forecasts may tell you a great deal about the forecaster; they tell you nothing about the future.
  • Stop trying to predict the direction of the stock market, the economy, interest rates, or elections.
  • [On economic forecasts:] Why spend time talking about something you don’t know anything about?  People do it all the time, but why do it?
  • I don’t invest a dime based on macro forecasts.

 

“CAN’T LOSE”: SHORTING THE JAPANESE YEN

Another good example of a “can’t lose” investment idea that has turned out not to be right:  shorting the Japanese yen.  Many macro experts have been quite certain that the Japanese yen versus the U.S. dollar would eventually exceed 200.  They thought this would happen years ago.  Some called it the “trade of the decade.”  But the yen versus the U.S. dollar is still around 110.  A simple S&P 500 index fund has done far better than the “trade of the decade.”

(Illustration by Shalom3)

Some have tried to short Japanese government bonds (JGB’s), rather than shorting the yen currency.  But that hasn’t worked for decades.  In fact, shorting JGB’s has become known as the widowmaker trade.

Seth Klarman on humility:

In investing, certainty can be a serious problem, because it causes one not to reassess flawed conclusions.  Nobody can know all the facts.  Instead, one must rely on shreds of evidence, kernels of truth, and what one suspects to be true but cannot prove.

Klarman on the vital importance of doubt:

It is much harder psychologically to be unsure than to be sure;  certainty builds confidence, and confidence reinforces certainty.  Yet being overly certain in an uncertain, protean, and ultimately unknowable world is hazardous for investors.  To be sure, uncertainty breeds doubt, which can be paralyzing.  But uncertainty also motivates diligence, as one pursues the unattainable goal of eliminating all doubt.  Unlike premature or false certainty, which induces flawed analysis and failed judgments, a healthy uncertainty drives the quest for justifiable conviction.

My own painful experiences:  shorting the U.S. stock market and shorting the Japanese yen.  In each case, I believed that the evidence was overwhelming.  By far the biggest mistake I’ve ever made was shorting the U.S. stock market in 2011-2013.  At the time, I agreed with Russell Napier’s arguments.  I was completely wrong.

After that, I shorted the Japanese yen because I was convinced the argument was virtually irrefutable.  Wrong.  Perhaps the yen will collapse some day, but if it’s 10-20 years in the future—or even later—then an index fund or a quantitative value fund would be a far better and safer investment.

Spencer Davidson:

Over a long career you learn a certain humility and are quicker to attribute success to luck rather than your own brilliance.  I think that makes you a better investor, because you’re less apt to make the big mistake and you’re probably quicker to capitalize on good fortune when it shines upon you.

Jeffrey Bronchick:

It’s important not to get carried away with yourself when times are good, and to be able to admit your mistakes and move on when they’re not so good.  If you are intellectually honest—and not afraid to be visibly and sometimes painfully judged by your peers—investing is not work, it’s fun.

Patiently waiting for pessimism or temporary bad news to create low stock prices (somewhere), and then buying stocks well below probable intrinsic value, does not in general require genius.  But it does require the humility to focus only on areas where you can do well.  As Warren Buffett has remarked:

What counts for most people in investing is not how much they know, but rather how realistically they define what they don’t know.

 

COURAGE

(Courage concept by Travelling-light)

Humility is essential for success in investing.  But you also need the courage to think and act independently.  You have to be able to develop an investment thesis based on the facts and good reasoning without worrying if many others disagree.  Most of the best value investments are contrarian, meaning that your view differs from the consensus.  Ben Graham:

In the world of securities, courage becomes the supreme virtue after adequate knowledge and a tested judgment are at hand.

Graham again:

You’re neither right nor wrong because the crowd disagrees with you.  You’re right because your data and reasoning are right.

Or as Carlo Cannell says:

Going against the grain is clearly not for everyone—and it doesn’t tend to help you in your social life—but to make the really large money in investing, you have to have the guts to make the bets that everyone else is afraid to make.

Joel Greenblatt identifies two chief reasons why contrarian value investing is hard:

Value investing strategies have worked for years and everyone’s known about them.  They continue to work because it’s hard for people to do, for two main reasons.  First, the companies that show up on the screens can be scary and not doing so well, so people find them difficult to buy.  Second, there can be one-, two- or three-year periods when a strategy like this doesn’t work.  Most people aren’t capable of sticking it out through that.

Contrarian value investing requires buying what is out-of-favor, neglected, or hated.  It also requires the ability to endure multi-year periods of trailing the market, which most investors just can’t do.  Furthermore, while you’re buying what everyone hates and while you’re trailing the market, you also have to put up with people calling you an idiot.  In a word, you must have the ability to suffer.  Eveillard:

If you are a value investor, you’re a long-term investor.  If you are a long-term investor, you’re not trying to keep up with a benchmark on a short-term basis.  To do that, you accept in advance that every now and then you will lag behind, which is another way of saying you will suffer.  That’s very hard to accept in advance because, the truth is, human nature shrinks from pain.  That’s why not so many people invest this way.  But if you believe as strongly as I do that value investing not only makes sense, but that it works, there’s really no credible alternative.

 

CIGAR BUTTS

(Photo by Leung Cho Pan)

Warren Buffett has remarked that buying baskets of statistically cheap cigar butts—50-cent dollars—is a more dependable way to generate good returns than buying high-quality businesses.  Rich Pzena perhaps expressed it best:

When I talk about the companies I invest in, you’ll be able to rattle off hundreds of bad things about them—but that’s why they’re cheap!  The most common comment I get is ‘Don’t you read the paper?’  Because if you read the paper, there’s no way you’d buy these stocks.

They’re priced where they are for good reason, but I invest when I believe the conditions that are causing them to be priced that way are probably not permanent.  By nature, you can’t be short-term oriented with this investment philosophy.  If you’re going to worry about short-term volatility, you’re just not going to be able to buy the cheapest stocks.  With the cheapest stocks, the outlooks are uncertain.

Many investors incorrectly assume that high growth in the past will continue into the future, or that a high-quality company is automatically a good investment.  Behavioral finance expert and value investor James Montier:

There’s a great chapter [in Dan Ariely’s Predictably Irrational] about the ways in which we tend to misjudge price and use it as an indicator of something or other.  That links back to my whole thesis that the most common error we as investors make is overpaying for the hope of growth.  Dan did an experiment involving wine, in which he told people, ‘Here’s a $10 bottle of wine and here’s a $90 bottle of wine.  Please rate them and tell me which tastes better.’  Not surprisingly, nearly everyone thought the $90 wine tasted much better than the $10 wine.  The only snag was that the $90 wine and the $10 wine were actually the same $10 wine.

 

OPPORTUNITIES IN MICRO CAPS

(Illustration by Mopic)

Micro-cap stocks are the most inefficiently priced.  That’s because, for most professional investors, assets under management are too large.  These investors cannot even consider micro caps.  The Boole Microcap Fund is designed to take advantage of this inefficiency: https://boolefund.com/best-performers-microcap-stocks/

James Vanasek on the opportunity in micro caps:

We’ll invest in companies with up to $1 billion or so in market cap, but have been most successful in ideas that start out in the $50 million to $300 million range.  Fewer people are looking at them and the industries the companies are in can be quite stable.  Given that, if you find a company doing well, it’s more likely it can sustain that advantage over time.

Because very few professional investors can even contemplate investing in micro caps, there’s far less competition.  Carlo Cannell:

My basic premise is that the efficient markets hypothesis breaks down when there is inconsistent, imperfect dissemination of information.  Therefore it makes sense to direct our attention to the 14,000 or so publicly traded companies in the U.S. for which there is little or no investment sponsorship by Wall Street, meaning three or fewer sell-side analysts who publish research…

You’d be amazed how little competition we have in this neglected universe.  It is just not in the best interest of the vast majority of the investing ecosphere to spend 10 minutes on the companies we spend our lives looking at.

Robert Robotti adds:

We focus on smaller-cap companies that are largely ignored by Wall Street and face some sort of distress, of their own making or due to an industry cycle.  These companies are more likely to be inefficiently priced and if you have conviction and a long-term view they can produce not 20 to 30 percent returns, but multiples of that.

 

PREDICTABLE HUMAN IRRATIONALITY

Value investors recognize that the stock market is not always efficient, largely because humans are often less than fully rational.  As Seth Klarman explains:

Markets are inefficient because of human nature—innate, deep-rooted, permanent.  People don’t consciously choose to invest with emotion—they simply can’t help it.

Quantitative value investor James O’Shaughnessy:

Because of all the foibles of human nature that are well documented by behavioral research—people are always going to overshoot and undershoot when pricing securities.  A review of financial markets all the way back to the South Sea Company nearly 300 years ago proves this out.

Bryan Jacoboski:

The very reason price and value diverge in predictable and exploitable ways is because people are emotional beings.  That’s why the distinguishing attribute among successful investors is temperament rather than brainpower, experience, or classroom training.  They have the ability to be rational when others are not.

Overconfidence is extremely deep-rooted in human psychology.  When asked, the vast majority of us rate ourselves as above average across a wide variety of dimensions such as looks, smarts, driving skill, academic ability, future well-being, and even luck (!).

In a field such as investing, it’s vital to become aware of our natural overconfidence.  Charlie Munger likes this quote from Demosthenes:

Nothing is easier than self-deceit.  For what each man wishes, that also he believes to be true.

But becoming aware of our overconfidence is usually not enough.  We also have to develop systems—such as checklists—that can automatically reduce both the frequency and the severity of mistakes.

(Image by Aleksey Vanin)

Charlie Munger reminds value investors not only to develop and use a checklist, but also to follow the advice of mathematician Carl Jacobi:

Invert, always invert.

In other words, instead of thinking about how to succeed, Munger advises value investors to figure out all the ways you can fail.  This is a powerful concept in a field like investing, where overconfidence frequently causes failure.  Munger:

It is occasionally possible for a tortoise, content to assimilate proven insights of his best predecessors, to outrun hares which seek originality or don’t wish to be left out of some crowd folly which ignores the best work of the past.  This happens as the tortoise stumbles on some particularly effective way to apply the best previous work, or simply avoids the standard calamities.  We try more to profit by always remembering the obvious than from grasping the esoteric.  It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent.

When it comes to checklists, it’s helpful to have a list of cognitive biases.  Here’s my list: https://boolefund.com/cognitive-biases/

Munger’s list is more comprehensive: https://boolefund.com/the-psychology-of-misjudgment/

Recency bias is one of the most important biases to be aware of as an investor.  Jed Nussdorf:

It is very hard to avoid recency bias, when what just happened inordinately informs your expectation of what will happen next.  One of the best things I’ve read on that is The Icarus Syndrome, by Peter Beinart.  It’s not about investing, but describes American hubris in foreign policy, in many cases resulting from doing what seemed to work in the previous 10 years even if the setting was materially different or conditions had changed.  One big problem is that all the people who succeed in the recent past become the ones in charge going forward, and they think they have it all figured out based on what they did before.  It’s all quite natural, but can result in some really bad decisions if you don’t constantly challenge your core beliefs.

Availability bias is closely related to recency bias and vividness bias.  You’re at least 15-20 times more likely to be struck by lightning in the United States than to be bitten by a shark.  But often people don’t realize this because shark attacks tend to be much more vivid in people’s minds.  Similarly, your odds of dying in a car accident are about 1 in 5,000, while your odds of dying in a plane crash are about 1 in 11 million.  Nonetheless, many people view flying as more dangerous.

John Dorfman on investors overreacting to recent news:

Investors overreact to the latest news, which has always been the case, but I think it’s especially true today with the Internet.  Information spreads so quickly that decisions get made without particularly deep knowledge about the companies involved.  People also overemphasize dramatic events, often without checking the facts.

 

LONG-TERM TIME HORIZON

(Illustration by Marek)

Because so many investors worry and think about the shorter term, value investors continue to gain a large advantage by focusing on the longer term (especially three to five years).  In a year or less, a given stock can do almost anything.  But over a five-year period, a stock tracks intrinsic business value to a large extent.  Jeffrey Ubben:

It’s still true that the biggest players in the public markets—particularly mutual funds and hedge funds—are not good at taking short-term pain for long-term gain.  The money’s very quick to move if performance falls off over short periods of time.  We don’t worry about headline risk—once we believe in an asset, we’re buying more on any dips because we’re focused on the end game three or four years out.

Mario Cibelli:

One of the last great arbitrages left is to be long-term-oriented when there is a large class of shareholders who have no tolerance for short-term setbacks.  So it’s interesting when stocks get beaten-up because a company misses earnings or the market reacts to a short-term business development.  It’s crazy to me when someone says something is cheap but doesn’t buy it because they think it won’t go anywhere for the next 6 to 12 months.  We have a pretty high tolerance for taking that pain if we see glory longer term.

Whitney Tilson wrote about a great story that value investor Bill Miller told.  Miller recalled that, early in his career, he was visiting an institutional money manager, to whom he was pitching R.J. Reynolds, then trading at four times earnings.  Miller:

“When I finished, the chief investment officer said: ‘That’s a really compelling case but we can’t own that.  You didn’t tell me why it’s going to outperform the market in the next nine months.’  I said I didn’t know if it was going to do that or not but that there was a very high probability it would do well over the next three to five years.

“He said: ‘How long have you been in this business?  There’s a lot of performance pressure, and performing three to five years down the road doesn’t cut it.  You won’t be in business then.  Clients expect you to perform right now.’

“So I said: ‘Let me ask you, how’s your performance?’

“He said: ‘It’s terrible, that’s why we’re under a lot of performance pressure.’

“I said: ‘If you bought stocks like this three years ago, your performance would be good right now and you’d be buying RJR to help your performance over the next three years.’”

Link: http://www.tilsonfunds.com/Patience%20can%20find%20a%20virtue%20in%20market%20inefficiency-FT-6-9-06.pdf

Many investors are so focused on shorter periods of time (a year or less) that they forget that the value of any business is ALL of its (discounted) future free cash flow, which often means looking out 10-20 years or more.  David Herro:

I would assert the biggest reason quality companies sell at discounts to intrinsic value is time horizon.  Without short-term visibility, most investors don’t have the conviction or courage to hold a stock that’s facing some sort of challenge, either internally or externally generated.  It seems kind of ridiculous, but what most people in the market miss is that intrinsic value is the sum of ALL future cash flows discounted back to the present.  It’s not just the next six months’ earnings or the next year’s earnings.  To truly invest for the long term, you have to be able to withstand underperformance in the short term, and the fact of the matter is that most people can’t.

As Mason Hawkins observes, a company may be lagging now precisely because it’s making longer-term investments that will probably increase business value in the future:

Classic opportunities for us get back to time horizon.  A company reports a bad quarter, which disappoints Wall Street with its 90-day focus, but that might be for explainable temporary reasons or even because the company is making very positive long-term investments in the business.  Many times that investment increases the likely value of the company five years from now, but disappoints people who want the stock up tomorrow.

Whitney George:

We evaluate businesses over a full business cycle and probably our biggest advantage is an ability to buy things when most people can’t because the short-term outlook is lousy or very hard to judge.  It’s a good deal easier to know what’s likely to happen than to know precisely when it’s going to happen.

In general, humans are impatient and often discount multi-year investment gains far too much.  John Maynard Keynes: 

Human nature desires quick results, there is a particular zest in making money quickly, and remoter gains are discounted by the average man at a very high rate.

 

SCREENING AND QUANTITATIVE MODELS

(Word cloud by Arloofs)

Automating the investment process, including screening, is far more straightforward now than it used to be, thanks to enormous advances in computing over the past two decades.

Will Browne:

We often start with screens on all aspects of valuation.  There are characteristics that have been proven over long periods to be associated with above-average rates of return:  low P/Es, discounts to book value, low debt/equity ratios, stocks with recent significant price declines, companies with patterns of insider buying and—something we’re paying a lot more attention to—stocks with high dividend yields.

Stephen Goddard:

Our basic screening process weights three factors equally:  return on tangible capital, the multiple of EBIT to enterprise value, and free cash flow yield.  We rank the universe we’ve defined on each factor individually from most attractive to least, and then combine the rankings and focus on the top 10%.
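Here is a minimal sketch of that kind of equal-weighted composite ranking, assuming the three factors have already been computed for each stock.  The tickers, numbers, and column names below are hypothetical, and with only a handful of rows the “top 10%” cut is purely illustrative:

```python
import pandas as pd

# Hypothetical factor data; in practice these come from financial statements.
df = pd.DataFrame({
    "ticker":          ["AAA", "BBB", "CCC", "DDD"],
    "ret_on_tang_cap": [0.32, 0.08, 0.18, 0.25],   # return on tangible capital
    "ebit_to_ev":      [0.15, 0.05, 0.22, 0.10],   # EBIT / enterprise value
    "fcf_yield":       [0.12, 0.02, 0.09, 0.07],   # free cash flow yield
})

factors = ["ret_on_tang_cap", "ebit_to_ev", "fcf_yield"]

# Rank each factor from most attractive (rank 1) to least, then weight the
# three rankings equally by averaging them.
df["composite_rank"] = df[factors].rank(ascending=False).mean(axis=1)

# Focus on the most attractive decile of the combined ranking.
cutoff = df["composite_rank"].quantile(0.10)
print(df[df["composite_rank"] <= cutoff].sort_values("composite_rank"))
```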

Carlo Cannell:

[We] basically spend our time trying to uncover the assorted investment misfits in the market’s underbrush that are largely neglected by the investment community.  One of the key metrics we assign to our companies is an analyst ratio, which is simply the number of analysts who follow the company.  The lower the better—as of the end of last year, about 65 percent of the companies in our portfolio had virtually no analyst coverage.

For some time now, it has been clear that simple quant models outperform experts in a wide variety of areas: https://boolefund.com/simple-quant-models-beat-experts-in-a-wide-variety-of-areas/

Quantitative value investor James O’Shaughnessy:

Models beat human forecasters because they reliably and consistently apply the same criteria time after time.  Models never vary.  They are never moody, never fight with their spouse, are never hung over from a night on the town, and never get bored.  They don’t favor vivid, interesting stories over reams of statistical data.  They never take anything personally.  They don’t have egos.  They’re not out to prove anything.  If they were people, they’d be the death of any party.

People, on the other hand, are far more interesting.  It’s far more natural to react emotionally or to personalize a problem than it is to dispassionately review broad statistical occurrences—and so much more fun!  It’s much more natural for us to look at the limited set of our personal experiences and then generalize from this small sample to create a rule-of-thumb heuristic.  We are a bundle of inconsistencies, and although this tends to make us interesting, it plays havoc with our ability to successfully invest.

Buffett maintains (correctly) that the vast majority of investors, large or small, should invest in low-cost broad market index funds: https://boolefund.com/quantitative-microcap-value/

If you invest in a quantitative value fund focused on cheap micro caps with improving fundamentals, then you can reasonably expect to do about 7% (+/- 3%) better than the S&P 500 Index over time: https://boolefund.com/best-performers-microcap-stocks/

Will Browne:

When you have a model you believe in, that you’ve used for a long time and which is more empirical than intuitive, sticking with it takes the emotion away when markets are good or bad.  That’s been a central element of our success.  It’s the emotional dimension that drives people to make lousy, irrational decisions.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Deep Value: Profiting from Mean Reversion

(Image:  Zen Buddha Silence by Marilyn Barbone.)

November 12, 2017

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.  Sometimes it seems that there are misconceptions about deep value investing.

  • First, deep value stocks have on occasion been called cheap relative to future growth.  But it’s often more accurate to say that deep value stocks are cheap relative to normalized earnings or cash flows.
  • Second, the cheapness of deep value stocks has often been said to be relative to “net tangible assets.”  However, in many cases, even including stocks at a discount to tangible assets, mean reversion relates to the future normalized earnings or cash flows that the assets can produce.
  • Third, typically more than half of deep value stocks underperform the market.  And deep value stocks are more likely to be distressed than average stocks.  Do these facts imply that a deep value investment strategy is riskier than average?  No…

Have you noticed these misconceptions?  I’m curious to hear your take.  Please let me know.

Here are the sections in this blog post:

  • Introduction
  • Mean Reversion as “Return to Normal” instead of “Growth”
  • Revenues, Earnings, Cash Flows, NOT Asset Values
  • Is Deep Value Riskier?
  • A Long Series of Favorable Bets
  • “Cigar Butts” vs. See’s Candies
  • Microcap Cigar Butts

 

INTRODUCTION

Deep value stocks tend to fit two criteria:

  • Deep value stocks trade at depressed multiples.
  • Deep value stocks have depressed fundamentals – they have generally been doing terribly in terms of revenues, earnings, or cash flows, and often the entire industry is doing poorly.

The essence of deep value investing is systematically buying stocks at low multiples in order to profit from future mean reversion.

  • Low multiples include low P/E (price-to-earnings), low P/B (price-to-book), low P/CF (price-to-cash flow), and low EV/EBIT (enterprise value-to-earnings before interest and taxes).
  • Mean reversion implies that, in general, deep value stocks are underperforming their economic potential.  On the whole, deep value stocks will experience better future economic performance than is implied by their current stock prices.

If you look at deep value stocks as a group, it’s a statistical fact that many will experience better revenues, earnings, or cash flows in the future than what is implied by their stock prices.  This is due largely to mean reversion.  The future economic performance of these deep value stocks will be closer to normal levels than their current economic performance.

Moreover, the stock price increases of the good future performers will outweigh the languishing stock prices of the poor future performers.  This causes deep value stocks, as a group, to outperform the market over time.

Two important notes:

  1. Generally, for deep value stocks, mean reversion implies a return to more normal levels of revenues, earnings, or cash flows.  It does not often imply growth above and beyond normal levels.
  2. For most deep value stocks, mean reversion relates to future economic performance and not to tangible asset value per se.

(1) Mean Reversion as Return to More Normal Levels

One of the best papers on deep value investing is by Josef Lakonishok, Andrei Shleifer, and Robert Vishny (1994), “Contrarian Investment, Extrapolation, and Risk.”  Link: http://scholar.harvard.edu/files/shleifer/files/contrarianinvestment.pdf

LSV (Lakonishok, Shleifer, and Vishny) correctly point out that deep value stocks are better identified by using more than one multiple.  LSV Asset Management currently manages $105 billion using deep value strategies that rely simultaneously on several metrics for cheapness, including low P/E and low P/CF.

  • In Quantitative Value (Wiley, 2012), Tobias Carlisle and Wesley Gray find that low EV/EBIT outperformed every other measure of cheapness, including composite measures.
  • However, James O’Shaughnessy, in What Works on Wall Street (McGraw-Hill, 2011), demonstrates – with great thoroughness – that, since the mid-1920’s, composite approaches (low P/S, P/E, P/B, EV/EBITDA, P/FCF) have been the best performers.
  • Any single metric may be more easily arbitraged away by a powerful computerized approach.  Walter Schloss once commented that low P/B was working less well because many more investors were using it.  (In recent years, low P/B hasn’t worked.)

LSV explain why mean reversion is the essence of deep value investing.  Investors, on average, are overly pessimistic about stocks at low multiples.  Investors underestimate the mean reversion in future economic performance for these out-of-favor stocks.

However, in my view, the paper would be clearer if it used (in some but not all places) “return to more normal levels of economic performance” in place of “growth.”  Often it’s a return to more normal levels of economic performance – rather than growth above and beyond normal levels – that defines mean reversion for deep value stocks.

(2) Revenues, Earnings, Cash Flows NOT Net Asset Values

Buying at a low price relative to tangible asset value is one way to implement a deep value investing strategy.  Many value investors have successfully used this approach.  Examples include Ben Graham, Walter Schloss, Peter Cundill, John Neff, and Marty Whitman.

Warren Buffett used this approach in the early part of his career.  Buffett learned this method from his teacher and mentor, Ben Graham.  Graham called this the “net-net” approach.  You take current assets minus ALL liabilities (what Graham called net current asset value).  If the stock price is below that level—Graham typically wanted to pay less than two-thirds of it—and if you buy a basket of such “net-net’s,” you can’t help but do well over time.  These are extremely cheap stocks, on average.  (The only catch is that there must be enough net-net’s in existence to form a basket, which is not always the case.)
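Here is a minimal sketch of the net-net test, with hypothetical balance-sheet numbers.  The two-thirds threshold reflects Graham’s usual margin of safety; the function names are mine:

```python
def net_current_asset_value(current_assets: float, total_liabilities: float) -> float:
    """Graham's net current asset value: current assets minus ALL liabilities."""
    return current_assets - total_liabilities

def is_net_net(market_cap: float, current_assets: float,
               total_liabilities: float, margin: float = 2 / 3) -> bool:
    """True if the whole company sells for less than `margin` of its NCAV."""
    ncav = net_current_asset_value(current_assets, total_liabilities)
    return ncav > 0 and market_cap < margin * ncav

# Hypothetical company: $90M current assets, $40M total liabilities, $30M market cap.
# NCAV = $50M, and $30M < (2/3) x $50M, so it would qualify as a net-net.
print(is_net_net(market_cap=30e6, current_assets=90e6, total_liabilities=40e6))
```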

Buffett on “cigar butts”:

…I call it the cigar butt approach to investing.  You walk down the street and you look around for a cigar butt someplace.  Finally you see one and it is soggy and kind of repulsive, but there is one puff left in it.  So you pick it up and the puff is free – it is a cigar butt stock.  You get one free puff on it and then you throw it away and try another one.  It is not elegant.  But it works.  Those are low return businesses.

Link: http://intelligentinvestorclub.com/downloads/Warren-Buffett-Florida-Speech.pdf

But most net-net’s are NOT liquidated.  Rather, there is mean reversion in their future economic performance – whether revenues, earnings, or cash flows.  That’s not to say there aren’t some bad businesses in this group.  For net-net’s, when economic performance returns to more normal levels, typically you sell the stock.  You don’t (usually) buy and hold net-net’s.

Sometimes net-net’s are acquired.  But in many of these cases, the acquirer is focused mainly on the earnings potential of the assets.  (Non-essential assets may be sold, though.)

In sum, the specific deep value method of buying at a discount to net tangible assets has worked well in general ever since Graham started doing it.  And net tangible assets do offer additional safety.  That said, when these particular cheap stocks experience mean reversion, often it’s because revenues, earnings, or cash flows return to “more normal” levels.  Actual liquidation is rare.

 

IS DEEP VALUE RISKIER?

According to a study done by Joseph Piotroski from 1976 to 1996 – discussed below – although a basket of deep value stocks clearly beats the market over time, only 43% of deep value stocks outperform the market, while 57% underperform.  By comparison, an average stock has a 50% chance of outperforming the market and a 50% chance of underperforming.

Let’s assume that the average deep value stock has a 57% chance of underperforming the market, while an average stock has only a 50% chance of underperforming.  This is a realistic assumption not only because of Piotroski’s findings, but also because the average deep value stock is more likely to be distressed (or to have problems) than the average stock.

Does it follow that the reason deep value investing does better than the market over time is that deep value stocks are riskier than average stocks?

It is widely accepted that deep value investing does better than the market over time.  But there is still disagreement about how risky deep value investing is.  Strict believers in the EMH (Efficient Markets Hypothesis) – such as Eugene Fama and Kenneth French – argue that value investing must be unambiguously riskier than simply buying an S&P 500 Index fund.  On this view, the only way to do better than the market over time is by taking more risk.

Now, it is generally true that the average deep value stock is more likely to underperform the market than the average stock.  And the average deep value stock is more likely to be distressed than the average stock.

But LSV show that a deep value portfolio does better than an average portfolio, especially during down markets.  This means that a basket of deep value stocks is less risky than a basket of average stocks.

  • A “portfolio” or “basket” of stocks refers to a group of stocks.  Statistically speaking, there must be at least 30 stocks in the group.  In the case of LSV’s study – like most academic studies of value investing – there are hundreds of stocks in the deep value portfolio.  (The results are similar over time whether you have 30 stocks or hundreds.)

Moreover, a deep value portfolio only has slightly more volatility than an average portfolio, not nearly enough to explain the significant outperformance.  In fact, when looked at more closely, deep value stocks as a group have slightly more volatility mainly because of upside volatility – relative to the broad market – rather than because of downside volatility.  This is captured not only by the clear outperformance of deep value stocks as a group over time, but also by the fact that deep value stocks do much better than average stocks in down markets.

Deep value stocks, as a group, not only outperform the market, but are less risky.  Ben Graham, Warren Buffett, and other value investors have been saying this for a long time.  After all, the lower the stock price relative to the value of the business, the less risky the purchase, on average.  Less downside implies more upside.

 

A LONG SERIES OF FAVORABLE BETS

Let’s continue to assume that the average deep value stock has a 57% chance of underperforming the market.  And the average deep value stock has a greater chance of being distressed than the average stock.  Does that mean that the average individual deep value stock is riskier than the average stock?

No, because the expected return on the average deep value stock is higher than the expected return on the average stock.  In other words, on average, a deep value stock has more upside than downside.

Put very crudely, in terms of expected value:

[(43% x upside) – (57% x downside)] > [avg. return]

43% times the upside, minus 57% times the downside, is greater than the return from the average stock (or from the S&P 500 Index).

The crucial issue relates to making a long series of favorable bets.  Since we’re talking about a long series of bets, let’s again consider a portfolio of stocks.

  • Recall that a “portfolio” or “basket” of stocks refers to a group of at least 30 stocks.

A portfolio of average stocks will simply match the market over time.  That’s an excellent result for most investors, which is why most investors should just invest in index funds: https://boolefund.com/warren-buffett-jack-bogle/

A portfolio of deep value stocks will, over time, do noticeably better than the market.  Year in and year out, approximately 57% of the deep value stocks will underperform the market, while 43% will outperform.  But the overall outperformance of the 43% will outweigh the underperformance of the 57%, especially over longer periods of time.  (57% and 43% are used for illustrative purposes here.  The actual percentages vary.)

Say that you have an opportunity to make the same bet 1,000 times in a row, and that the bet is as follows:  You bet $1.  You have a 60% chance of losing $1, and a 40% chance of winning $2.  This is a favorable bet because the expected value is positive: 40% x $2 = $0.80, while 60% x $1 = $0.60.  If you made this bet repeatedly over time, you would average $0.20 profit on each bet, since $0.80 – $0.60 = $0.20.

If you make this bet 1,000 times in a row, then roughly speaking, you will lose 60% of them (600 bets) and win 40% of them (400 bets).  But your profit will be about $200.  That’s because 400 x $2 = $800, while 600 x $1 = $600.  $800 – $600 = $200.
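A quick simulation makes the arithmetic concrete.  This is just the bet described above, repeated 1,000 times; any single run will land near $200, plus or minus some randomness:

```python
import random

def simulate(n_bets: int = 1000, p_win: float = 0.40,
             win_amt: float = 2.0, lose_amt: float = 1.0, seed: int = 42) -> float:
    """Total profit from n_bets bets: win win_amt with probability p_win, else lose lose_amt."""
    rng = random.Random(seed)
    return sum(win_amt if rng.random() < p_win else -lose_amt for _ in range(n_bets))

# Expected profit per bet: 0.40 * $2 - 0.60 * $1 = $0.20, or about $200 over 1,000 bets.
print(simulate())
```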

Systematically investing in deep value stocks is similar to the bet just described.  You may lose 57% of the bets and win 43% of the bets.  But over time, you will almost certainly profit because the average upside is greater than the average downside.  Your expected return is also higher than the market return over the long term.

 

“CIGAR BUTTS” vs. SEE’S CANDIES

In his 1989 Letter to Shareholders, Buffett writes about his “Mistakes of the First Twenty-Five Years,” including a discussion of “cigar butt” (deep value) investing:

My first mistake, of course, was in buying control of Berkshire.  Though I knew its business – textile manufacturing – to be unpromising, I was enticed to buy because the price looked cheap.  Stock purchases of that kind had proved reasonably rewarding in my early years, though by the time Berkshire came along in 1965 I was becoming aware that the strategy was not ideal. 

If you buy a stock at a sufficiently low price, there will usually be some hiccup in the fortunes of the business that gives you a chance to unload at a decent profit, even though the long-term performance of the business may be terrible.  I call this the ‘cigar butt’ approach to investing.  A cigar butt found on the street that has only one puff left in it may not offer much of a smoke, but the ‘bargain purchase’ will make that puff all profit. 

Link: http://www.berkshirehathaway.com/letters/1989.html

Buffett has made it clear that cigar butt (deep value) investing does work.  In fact, fairly recently, Buffett bought a basket of cigar butts in South Korea.  The results were excellent.  But he did this in his personal portfolio.

This highlights a major reason why Buffett evolved from investing in cigar butts to investing in higher quality businesses:  size of investable assets.  When Buffett was managing a few hundred million dollars or less, which includes when he managed an investment partnership, Buffett achieved outstanding results in part by investing in cigar butts.  But when investable assets swelled into the billions of dollars at Berkshire Hathaway, Buffett began investing in higher quality companies.

  • Cigar butt investing works best for micro caps.  But micro caps won’t move the needle if you’re investing many billions of dollars.

The idea of investing in higher quality companies is simple:  If you can find a business with a sustainably high ROE – based on a sustainable competitive advantage – and if you can hold that stock for a long time, then your returns as an investor will approximate the ROE (return on equity).  This assumes that the company can continue to reinvest all of its earnings at the same ROE, which is extremely rare when you look at multi-decade periods.

  • The quintessential high-quality business that Buffett and Munger purchased for Berkshire Hathaway is See’s Candies.  They paid $25 million for $8 million in tangible assets in 1972.  Since then, See’s Candies has produced over $2 billion in (pre-tax) earnings, while only requiring a bit over $40 million in reinvestment.
  • See’s turns out more than $80 million in profits each year.  That’s over 100% ROE (return on equity), which is extraordinary.  But that’s based mostly on assets in place.  The company has not been able to reinvest most of its earnings.  Instead, Buffett and Munger have invested the massive excess cash flows in other good opportunities – averaging over 20% annual returns on these other investments (for most of the period from 1972 to present).
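To make the ROE arithmetic above concrete, here is a small sketch of the full-reinvestment case (which, as the See’s example shows, is rare in practice over long periods).  With everything reinvested at a constant ROE, equity (and, at a constant multiple, the stock price) compounds at roughly that ROE; the inputs are hypothetical:

```python
def compound_at_roe(equity: float, roe: float, years: int, payout_ratio: float = 0.0) -> float:
    """Grow equity when a fraction (1 - payout_ratio) of earnings is reinvested at `roe`."""
    for _ in range(years):
        earnings = equity * roe
        equity += earnings * (1.0 - payout_ratio)
    return equity

# Hypothetical: $10 million of equity earning a sustained 20% ROE, fully reinvested.
ending = compound_at_roe(10e6, 0.20, years=20)
print(round(ending / 1e6, 1))   # ~383.4, i.e., roughly 38x over 20 years, or about 20% per year
```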

Furthermore, buying and holding stock in a high-quality business brings enormous tax advantages over time because you never have to pay taxes until you sell.  Thus, as a high-quality business – with sustainably high ROE – compounds value over many years, a shareholder who never sells receives the maximum benefit of this compounding.

Yet it’s extraordinarily difficult to find a business that can sustain ROE at over 20% – including reinvested earnings – for decades.  Buffett has argued that cigar butt (deep value) investing produces more dependable results than investing exclusively in high-quality businesses.  Very often investors buy what they think is a higher-quality business, only to find out later that they overpaid because the future performance does not match the high expectations that were implicit in the purchase price.  Indeed, this is what LSV show in their famous paper (discussed above) in the case of “glamour” (or “growth”) stocks.

 

MICROCAP CIGAR BUTTS

Buffett has said that you can do quite well as an investor, if you’re investing smaller amounts, by focusing on cheap micro caps.  In fact, Buffett has maintained that he could get 50% per year if he could invest only in cheap micro caps.

Investing systematically in cheap micro caps can often lead to higher long-term results than the majority of approaches that invest in high-quality stocks.

First, micro caps, as a group, far outperform every other category.  See the historical performance here: https://boolefund.com/best-performers-microcap-stocks/

Second, cheap micro caps do even better.  Systematically buying at low multiples works over the course of time, as clearly shown by LSV and many others.

Finally, if you apply the Piotroski F-Score to screen cheap micro caps for improving fundamentals, performance is further boosted:  The biggest improvements in performance are concentrated in cheap micro caps with no analyst coverage.  See: https://boolefund.com/joseph-piotroski-value-investing/
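For reference, here is a minimal sketch of the nine F-Score signals, one point each, computed from two consecutive years of fundamentals.  The field names are placeholders I have chosen, not a standard API:

```python
def piotroski_f_score(cur: dict, prev: dict) -> int:
    """Piotroski F-Score (0-9) from current-year and prior-year fundamentals."""
    score = 0
    # Profitability
    score += cur["roa"] > 0                                        # positive return on assets
    score += cur["cfo_to_assets"] > 0                              # positive operating cash flow
    score += cur["roa"] > prev["roa"]                              # improving ROA
    score += cur["cfo_to_assets"] > cur["roa"]                     # cash flow exceeds accrual earnings
    # Leverage, liquidity, dilution
    score += cur["lt_debt_to_assets"] < prev["lt_debt_to_assets"]  # falling leverage
    score += cur["current_ratio"] > prev["current_ratio"]          # improving liquidity
    score += cur["shares_out"] <= prev["shares_out"]               # no new shares issued
    # Operating efficiency
    score += cur["gross_margin"] > prev["gross_margin"]            # improving gross margin
    score += cur["asset_turnover"] > prev["asset_turnover"]        # improving asset turnover
    return int(score)

# Hypothetical two years of data for one company (score = 9):
cur = dict(roa=0.06, cfo_to_assets=0.09, lt_debt_to_assets=0.15, current_ratio=1.8,
           shares_out=50e6, gross_margin=0.34, asset_turnover=1.1)
prev = dict(roa=0.04, cfo_to_assets=0.05, lt_debt_to_assets=0.20, current_ratio=1.5,
            shares_out=50e6, gross_margin=0.31, asset_turnover=1.0)
print(piotroski_f_score(cur, prev))
```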

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

 

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Ben Graham Was a Quant

(Image:  Zen Buddha Silence by Marilyn Barbone.)

September 10, 2017

Dr. Steven Greiner has written an excellent book, Ben Graham Was a Quant (Wiley, 2011).  In the Preface, Greiner writes:

The history of quantitative investing began when Ben Graham put his philosophy into easy-to-understand screens.

Graham was, of course, very well aware that emotions derail most investors.  Having a clearly defined quantitative investment strategy that you stick with over the long term—both when the strategy is in favor and when it’s not—is the best chance most investors have of doing as well as or better than the market.

  • An index fund is one of the simplest quantitative approaches.  Warren Buffett and Jack Bogle have consistently and correctly pointed out that a low-cost broad market index fund is the best long-term investment strategy for most investors.  See:  https://boolefund.com/warren-buffett-jack-bogle/

An index fund tries to copy an index, which is itself typically based on companies of a certain size.  By contrast, quantitative value investing is based on metrics that indicate undervaluation.

 

QUANTITATIVE VALUE INVESTING

Here is what Ben Graham said in an interview in 1976:

I have lost most of the interest I had in the details of security analysis which I devoted myself to so strenuously for many years.  I feel that they are relatively unimportant, which, in a sense, has put me opposed to developments in the whole profession.  I think we can do it successfully with a few techniques and simple principles.  The main point is to have the right general principles and the character to stick to them.

I have a considerable amount of doubt on the question of how successful analysts can be overall when applying these selectivity approaches.  The thing that I have been emphasizing in my own work for the last few years has been the group approach.  To try to buy groups of stocks that meet some simple criterion for being undervalued – regardless of the industry and with very little attention to the individual company

I am just finishing a 50-year study—the application of these simple methods to groups of stocks, actually, to all the stocks in the Moody’s Industrial Stock Group.  I found the results were very good for 50 years.  They certainly did twice as well as the Dow Jones.  And so my enthusiasm has been transferred from the selective to the group approach.  What I want is an earnings ratio twice as good as the bond interest ratio typically for most years.  One can also apply a dividend criterion or an asset value criterion and get good results.  My research indicates the best results come from simple earnings criterions.

Imagine—there seems to be almost a foolproof way of getting good results out of common stock investment with a minimum of work.  It seems too good to be true.  But all I can tell you after 60 years of experience, it seems to stand up under any of the tests I would make up. 

See:  http://www.cfapubs.org/doi/pdf/10.2470/rf.v1977.n1.4731

Greiner points out that a quantitative investment approach is a natural extension of Graham’s simple quantitative methods.

Greiner says there are three groups of quants:

  • The first group is focused on EMH, and on creating ETF’s and tracking portfolios.
  • The second group is focused on financial statement data, and economic data. They look for relationships between returns and fundamental factors.  They typically have a value bias and use Ben Graham-style portfolios.
  • The third group is focused on trading. (Think of D.E. Shaw or Renaissance Technologies.)

Greiner’s book is focused on the second group.

Greiner also distinguishes three elements of a portfolio:

  • The return forecast (the alpha in the simplest sense)
  • The volatility forecast (the risk)
  • The weights of the securities in the portfolio

Greiner writes that, while many academics believe in efficient markets, many practicing investors do not.  This certainly includes Ben Graham, Warren Buffett, Charlie Munger, and Jeremy Grantham, among others.  Greiner includes a few quotations:

I deny emphatically that because the market has all the information it needs to establish a correct price, that the prices it actually registers are in fact correct.  – Ben Graham

The market is incredibly inefficient and people have a hard time getting their brains around that.  – Jeremy Grantham

Here’s Buffett in his 1988 Letter to the Shareholders of Berkshire Hathaway:

Amazingly, EMT was embraced not only by academics, but by many investment professionals and corporate managers as well.  Observing correctly that the market was frequently efficient, they went on to conclude incorrectly that it was always efficient.  The difference between these propositions is night and day. 

Greiner sums it up well:

Sometimes markets (and stocks) completely decouple from their underlying logical fundamentals and financial statement data due to human fear and greed.

 

DESPERATELY SEEKING ALPHA

Greiner refers to the July 12, 2010 issue of Barron’s.  Barron’s reported that, of 248 funds with five-star ratings as of December 1999, only four had kept that status as of December 2009.  Eighty-seven of the 248 funds were gone completely, while the others had been downgraded.  Greiner’s point is that “the star ratings have no predictive ability” (page 15).

Greiner reminds us that William Sharpe and Jack Treynor held that every investment has two separate risks:

  • market risk (systematic risk or beta)
  • company-specific risk (unsystematic risk or idiosyncratic risk)

Sharpe’s CAPM defines both beta and alpha:

Sharpe’s CAPM uses regressed portfolio return (less the risk-free return) to calculate a slope and an intercept, which are called beta and alpha.  Beta is the risk term that specifies how much of the market’s risk is accounted for in the portfolio’s return.  Alpha, on the other hand, is the intercept, and it implies how much return the portfolio is obtaining over and above the market, separate from any other factors.  (page 16)
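A minimal sketch of that regression, assuming you already have a short series of portfolio, market, and risk-free returns (the numbers below are made up):

```python
import numpy as np

# Hypothetical monthly returns (decimal form).
port = np.array([0.021, -0.013, 0.034, 0.008, -0.025, 0.019])   # portfolio
mkt  = np.array([0.015, -0.010, 0.028, 0.005, -0.030, 0.012])   # market
rf   = 0.001                                                     # risk-free rate per month

# Regress excess portfolio returns on excess market returns:
#   (port - rf) = alpha + beta * (mkt - rf) + error
beta, alpha = np.polyfit(mkt - rf, port - rf, 1)   # slope, intercept

print(f"beta  = {beta:.2f}")
print(f"alpha = {alpha:.4f} per month")
```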

But risk involves more than just individual company risk.  It also involves how one company’s stock is correlated with the stocks of other companies.  If you can properly estimate the correlations among various stocks, then, using Markowitz’s approach, you can maximize return for a given level of risk, or minimize risk for a given level of return.

Ben Graham’s approach, by contrast, was just to make sure you have a large enough group of quantitatively cheap stocks.  Graham was not concerned about any correlations among the cheap stocks.  As long as you have enough cheap stocks in the basket, Graham’s approach has been shown to work well over time.

The focus here, writes Greiner, is on finding alpha.  (Beta as a concept has some obvious problems.)  But if you think you’ve found alpha, you have to be careful that it isn’t a risk factor “masquerading as alpha” (page 17).  Moreover, alpha is excess return relative to an index or benchmark.  We’re talking about long-only investing and relative returns.

Greiner describes some current modeling of alpha:

In current quantitative efforts in modeling for alpha, the goal is primarily to define the factors that influence stocks or drive returns the most, and construct portfolios strongly leaning on those factors.  Generally, this is done by regressing future returns against historical financial-statement data… For a holding period of a quarter to several years, the independent variables are financial-statement data (balance-sheet, income-statement, and cash-flow data).  (page 19)

However, the nonlinear, chaotic behavior of the stock market means that there is still no standardized way to prove definitively that a certain factor causes the stock return.  Greiner explains:

The stock market is not a repeatable event.  Every day is an out-of-sample environment.  The search for alpha will continue in quantitative terms simply because of this precarious and semi-random nature of the stock markets.  (page 21)

Greiner then says that an alpha signal generated by some factor must have certain characteristics, including the following:

  • It must come from real economic variables. (You don’t want spurious correlations.)
  • The signal must be strong enough to overcome trading costs.
  • It must not be dominated by a single industry or security.
  • The time series of return obtained from the alpha source should offer little correlation with other sources of alpha. (You could use low P/E and low P/B in the same model, but you would have to account for their correlation; see the sketch after this list.)
  • It should not be misconstrued as a risk factor. (If a factor is not a risk factor and it explains the return – or the variance of the return – then it must be an alpha factor.)
  • Return to this factor should have low variance. (If the factor’s return time series is highly volatile, then the relationship between the factor and the return is unstable.  It’s hard to harness the alpha in that case.)
  • The cross-sectional beta (or factor return) for the factor should also have low variance over time and maintain the same sign the majority of the time (this is termed factor stationarity). (Beta cannot jump around and still be useful.)
  • The required turnover to implement the factor in a real strategy cannot be too high. (Graham liked three-year holding periods.)
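As a rough sketch of checking two of these properties, the code below computes the correlation between two candidate factor-return series and the variance of each.  The series are made up (stand-ins for, say, a low-P/E factor and a low-P/B factor), so only the mechanics matter here.

```python
import numpy as np

# Made-up monthly return series for two candidate alpha factors (illustrative only).
low_pe_factor = np.array([0.012, -0.004, 0.020, 0.007, -0.011, 0.015])
low_pb_factor = np.array([0.010, -0.006, 0.018, 0.009, -0.013, 0.014])

correlation = np.corrcoef(low_pe_factor, low_pb_factor)[0, 1]
print(f"correlation between factor returns: {correlation:.2f}")    # high => overlapping sources of alpha
print(f"low-P/E factor return variance: {np.var(low_pe_factor, ddof=1):.6f}")   # low variance => more stable signal
print(f"low-P/B factor return variance: {np.var(low_pb_factor, ddof=1):.6f}")
```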

 

RISKY BUSINESS

Risk means more things can happen than will happen.  – Elroy Dimson

In other words, as Greiner says, your actual experience does not include all the risks to which you were exposed.  The challenge for the investor is to be aware of all the possible risks to which your portfolio is exposed.  Even something improbable shouldn’t come as a complete surprise if you’ve done a comprehensive job at risk management.  Of course, Warren Buffett excels at thinking this way, not only as an investor in businesses, but also because Berkshire Hathaway includes large insurance operations.

Greiner points out that the volatility of a stock is not in itself risk, though it may be a symptom of risk.  There have been countless situations (e.g., very overvalued stocks) in which stock prices were not volatile but risk was clearly high.  Similarly, there have been many situations (e.g., very undervalued stocks) in which volatility was high but risk was quite low.

When stock markets begin falling, stocks become much more correlated and often become disconnected from fundamentals when there is widespread fear.  In these situations, a spike in volatility is a symptom of risk.  At the same time, as fear increases and the selling of stocks increases, most stocks are becoming much safer with respect to their intrinsic values.  So the only real risks during market sell-offs relate to stockholders who are forced to sell or who sell out of fear.  Sell-offs are usually buying opportunities for quantitative value investors.

I will tell you how to become rich.  Close the doors.  Be fearful when others are greedy.  Be greedy when others are fearful.  – Warren Buffett

So how do you figure out risk exposures?  It is often a difficult thing to do.  Greiner defines ELE events as extinction-level events, or extreme-extreme events.  If an extreme-extreme event has never happened before, then it may not be possible to estimate the probability unless you have “God’s risk model.”  (page 33)

But financial and economic history, even considered as a whole, is not a certain guide to the future.  Greiner quotes Ben Graham:

It is true that one of the elements that distinguishes economics, finance and security analysis from other practical disciplines is the uncertain validity of past phenomena as a guide to the present and future.  

Yet Graham does hold that past experience, while not a certain guide to the future, is reliable enough when it comes to value investing.  Value investing has always worked over time because the investor systematically buys stocks well below probable intrinsic value—whether net asset value or earnings power.  This approach creates a solid margin of safety for each individual purchase (on average) and for the portfolio (over time).

Greiner details how quants think about modeling the future:

Because the future has not happened yet, there isn’t a full set of outcomes in a normal distribution created from past experience.  When we use statistics like this to predict the future, we are making an assumption that the future will look like today or similar to today, all things being equal, and also assuming extreme events do not happen, altering the normal occurrences.  This is what quants do.  They are clearly aware that extreme events do happen, but useful models don’t get discarded just because some random event can happen.  We wear seat belts because they will offer protection in the majority of car accidents, but if an extreme event happens, like a 40-ton tractor-trailer hitting us head on at 60 mph, the seat belts won’t offer much safety.  Do we not wear seat belts because of the odd chance of the tractor-trailer collision?  Obviously we wear them.

… in reality, there are multiple possible causes for every event, even those that are extreme, or black swan.  Extreme events have different mechanisms (one or more) that trigger cascades and feedbacks, whereas everyday normal events, those that are not extreme, have a separate mechanism.  The conundrum of explanation arises only when you try to link all observations, both from the world of extreme random events and from normal events, when, in reality, these are usually from separate causes.  In the behavioral finance literature, this falls under the subject of multiple-equilibria… highly nonlinear and chaotic market behavior occurs in which small triggers induce cascades and contagion, similar to the way very small changes in initial conditions bring out turbulence in fluid flow.  The simple analogy is the game of Pick-Up Sticks, where the players have to remove a single stick from a randomly connected pile, without moving all the other sticks.  Eventually, the interconnectedness of the associations among sticks results in an avalanche.  Likewise, so behaves the market.  (pages 35-36)

If 95 percent of events can be modeled using a normal distribution, for example, then of course we should do so.  Although Einstein’s theories of relativity are accepted as correct, that does not mean that Newton’s physics is not useful as an approximation.  Newtonian mechanics is still very useful for many engineers and scientists for a broad range of non-relativistic phenomena.

Greiner argues that Markowitz, Merton, Sharpe, Black, and Scholes are associated with models that are still useful, so we shouldn’t simply toss those models out.  Often the normal (Gaussian) distribution is a good enough approximation of the data to be very useful.  Of course, we must be careful in the many situations when the normal distribution is NOT a good approximation.

As for financial and economic history, although it’s reliable enough most of the time when it comes to value investing, it still involves a high degree of uncertainty.  Greiner quotes Graham again, who (as usual) clearly understood a specific topic before the modern terminology – in this case hindsight bias – was even invented:

The applicability of history almost always appears after the event.  When it is all over, we can quote chapter and verse to demonstrate why what happened was bound to happen because it happened before.  This is not really very helpful.  The Danish philosopher Kierkegaard made the statement that life can only be judged in reverse but it must be lived forwards.  That certainly is true with respect to our experience in the stock market.

Building your risk model can be summarized in the following steps, writes Greiner:

  1. Define your investable universe: that universe of stocks that you’ll always be choosing from to invest in.
  2. Define your alpha model (whose factors become your risk model common factors); this could be the CAPM, the Fama-French, or Ben Graham’s method, or some construct of your own.
  3. Calculate your factor values for your universe. These become your exposures to the factor.  If you have B/P as a factor in your model, calculate the B/P for all stocks in your universe.  Do the same for all other factors.  The numerical B/P for a stock is then termed exposure.  Quite often these are z-scored, too.
  4. Regress your returns against your exposures (just as in CAPM, you regress future returns against the market to obtain a beta or the Fama-French equation to get 3 betas). These regression coefficients or betas to your factors become your factor returns.
  5. Do this cross-sectionally across all the stocks in the universe for a given date. This will produce a single beta for each factor for that date.
  6. Move the date one time step or period and do it all over. Eventually, after, say, 60 months, there would be five years of cross-sectional regressions that yield betas that will also have a time series of 60 months.
  7. Take each beta’s time series and compute its variance. Then, compute the covariance between each factor’s beta.  The variance and covariance of the beta time series act as proxies for the variance of the stocks.  These are the components of the covariance matrix.  On-diagonal components are the variance of the factor returns, the variance of the betas, and off-diagonal elements are the covariance between factor returns.
  8. Going forward, to calculate expected returns, multiply the stock weight vector by the exposure matrix times the vector of factor returns (betas) for a given date. The exposure matrix is N x M, where N is the number of stocks and M is the number of factors.  The covariance matrix is M x M, and the exposed risks, predicted through the model, are derived from it.
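Here is a minimal sketch of steps 3 through 8 using simulated data.  Everything is made up (the universe, the exposures, the returns), and the sketch ignores stock-specific risk and industry factors, so it only shows the mechanics of building the factor covariance matrix and using it for a risk forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 50, 3, 60                                   # stocks, factors, monthly dates

factor_returns = []                                   # step 6: one set of factor returns per date
for t in range(T):
    X = rng.standard_normal((N, M))                   # step 3: z-scored exposures (e.g., B/P, E/P, FCF/P)
    r = 0.05 * rng.standard_normal(N)                 # next-period stock returns for this date
    beta_t, *_ = np.linalg.lstsq(X, r, rcond=None)    # steps 4-5: cross-sectional regression
    factor_returns.append(beta_t)

F = np.cov(np.array(factor_returns), rowvar=False)    # step 7: M x M covariance of factor returns

# Step 8: predicted portfolio variance from current exposures, weights, and the factor covariance.
X_now = rng.standard_normal((N, M))                   # today's N x M exposure matrix
w = np.full(N, 1.0 / N)                               # equal weights
predicted_variance = w @ X_now @ F @ X_now.T @ w
print("predicted monthly volatility:", np.sqrt(predicted_variance))
```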

Greiner explains that the convention in risk management is to rename regression coefficients factor returns, and to rename actual financial statement variables (B/P, E/P, FCF/P) exposures.

Furthermore, not all stocks in the same industry have the same exposure to that industry.  Greiner:

This would mean that, rather than assigning a stock to a single sector or industry group, it could be assigned fractionally across all industries that it has a part in.  This might mean that some company that does 60 percent of its business in one industry and 40 percent in another would result ultimately in a predicted beta from the risk model that is also, say, 0.76 one industry and 0.31 another.  Although this is novel, and fully known and appreciated, few investment managers do it because of the amount of work required to separate out the stock’s contribution to all industries.  However, FactSet’s MAC model does include this operation.  (page 45)
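A small sketch of the fractional-exposure idea with made-up numbers: rather than a 0/1 industry assignment, the stock carries a 60/40 split across two industries, and its industry-driven return is the exposure-weighted sum of the industry factor returns.

```python
import numpy as np

# Made-up fractional industry exposures for one stock, and illustrative industry factor returns.
exposure = np.array([0.60, 0.40])                     # 60% of the business in industry A, 40% in industry B
industry_factor_returns = np.array([0.021, -0.008])   # one period's factor returns for the two industries

industry_return = exposure @ industry_factor_returns  # exposure-weighted industry contribution
print(f"industry contribution to the stock's return: {industry_return:.4f}")
```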

 

BETA IS NOT “SHARPE” ENOUGH

Value investors like Ben Graham know that price variability is not risk.  Instead, risk is the potential for loss due to an impairment of intrinsic value (net asset value or earnings power).  Greiner writes:

[Graham] would argue that an investor (but not a speculator) does not lose money simply because the price of a stock declines, assuming that the business itself has had no changes, it has cash flow, and its business prospects look good.  Graham contends that if one selects a group of candidate stocks with his margin of safety, then there is still a good chance that even though this portfolio’s market value may drop below cost occasionally over some period of time, this fact does not make the portfolio risky.  In his definition, then, risk is there only if there is a reason to liquidate the portfolio in a fire sale or if the underlying securities suffer a serious degradation in their earnings power.  Lastly, he believes there is risk in buying a security by paying too much for its underlying intrinsic value.  (pages 55-56)

To say that volatility represents risk is to mistake the often emotional opinions of Mr. Market for the fundamentals of the business in which you have a stake as a shareholder.

As a reminder, if the variables are fully random (serially uncorrelated, and independent and identically distributed), then the normal distribution is the standard model.  The problem in modeling stock returns is that the return mean varies with time and the errors are not random:

Of course there are many, many types of distributions.  For instance there are binomial, Bernoulli, Poisson, Cauchy or Lorentzian, Pareto, Gumbel, Student’s-t, Frechet, Weibull, and Levy Stable distributions, just to name a few, all of which can be continuous or discrete.  Some of these are asymmetric about the mean (first moment) and some are not.  Some have fat tails and some do not.  You can even have distributions with infinite second moments (infinite variance).  There are many distributions that need three or four parameters to define them rather than just the two consisting of mean and variance.  Each of these named distributions has come about because some phenomena had been found whose errors or outcomes were not random, were not given merely by chance, or were explained by some other cause.  Investment return is an example of data that produces some other distribution than normal and requires more than just the mean and variance to describe its characteristics properly.  (page 58)

Even though market prices have been known to have non-normal distributions and to maintain statistical dependence, most people modeling market prices have downplayed this information.  It’s just been much easier to assume that market returns follow a random walk resulting in random errors, which can be easily modeled using a normal distribution.

Unfortunately, observes Greiner, a random walk is a very poor approximation of how market prices behave.  Market returns tend to have fatter tails.  But so much of finance theory depends on the normal distribution that it would be a great deal of work to redo it, especially given that the benefits of more accurate distributions are not fully clear.

You can make an analogy with behavioral finance.  Thousands of experiments have now established that many people behave less than fully rationally, especially when making decisions under uncertainty.  However, the most useful economic models in many situations are still based on the assumption that people behave with full rationality.

Despite many market inefficiencies, the EMH – a part of rationalist economics – is still very useful, as Richard Thaler explains in Misbehaving: The Making of Behavioral Economics:

So where do I come down on the EMH?  It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful.  In a world of Econs, I believe that the EMH would be true.  And it would not have been possible to do research in behavioral finance without the rational model as a starting point.  Without the rational framework, there are no anomalies from which we can detect misbehavior.  Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research.  We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.  (pages 250-251)

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed.  Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’  There are definitely anomalies:  sometimes the market overreacts, and sometimes it underreacts.  But it remains the case that most active money managers fail to beat the market…

Thaler then notes that he has much less faith in the second component of EMH – that the price is right.  The price is often wrong, and sometimes very wrong, says Thaler.  However, that doesn’t mean that you can beat the market.  It’s extremely difficult to beat the market, which is why the ‘no-free-lunch’ component of EMH is mostly true.

Greiner describes equity returns:

… the standard equity return distribution (and index return time series) typically has negative skewness (third moment of the distribution) and is leptokurtotic (‘highly peaked,’ the fourth moment), neither of which are symptomatic of random walk phenomena or a normal distribution description and imply an asymmetric return distribution.  (page 60)
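Both moments are easy to check directly.  A minimal sketch, using a short made-up return series rather than a real index:

```python
import numpy as np
from scipy.stats import skew, kurtosis

# Made-up daily returns, purely to show the calculation.
returns = np.array([0.003, -0.012, 0.005, 0.001, -0.045, 0.008, 0.002, -0.003, 0.015, -0.001])

print(f"skewness:        {skew(returns):.2f}")      # typically negative for equity index returns
print(f"excess kurtosis: {kurtosis(returns):.2f}")  # positive ('leptokurtotic') means fatter tails than normal
```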

One problem with beta is that it has covariance in the numerator.  If two variables are independent, their covariance is zero; but a low or zero beta does not by itself tell you whether the portfolio is truly independent of the benchmark.  Greiner explains that you can check for linear dependence as follows:  When the benchmark has moved X in the past, has the portfolio consistently moved 0.92*X?  If yes, then the portfolio and the benchmark are linearly dependent, and the portfolio’s return can be expressed as a simple multiple of the benchmark’s, which seems to give beta some validity.  However, a portfolio can depend on the benchmark in ways that covariance does not capture (the relationship may be nonlinear or unstable), in which case beta can be low or even zero and is therefore misleading.

Another example would be a market sell-off in which most stocks become highly correlated.  Using beta as a signal for correlation in this case likely would not work at all.

Greiner examines ten years’ worth of returns of the Fidelity Magellan mutual fund.  The distribution of returns is more like a Frechet distribution than a normal distribution in two ways:  it is more peaked in the middle than a normal distribution, and it has a fatter left tail than a normal distribution (see Figure 3.6, page 74).  The Magellan Fund returns also have a fat tail on the right side of the distribution.  This is where the Frechet misses.  But overall, the Frechet distribution matches the Magellan Fund returns better than a normal distribution.  Greiner’s point is that the normal distribution is often not the most accurate distribution for describing investment returns.
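The same kind of comparison can be sketched by fitting two candidate distributions to a return series and comparing log-likelihoods.  In the sketch below I substitute a Student’s-t (one of the fat-tailed distributions Greiner lists) for the Frechet, and the “returns” are simulated fat-tailed numbers rather than the Magellan data, so this shows only the mechanics of the comparison.

```python
import numpy as np
from scipy import stats

# Simulated fat-tailed monthly returns as a stand-in for a real fund's return series.
rng = np.random.default_rng(1)
returns = 0.04 * rng.standard_t(df=3, size=120) + 0.01

# Fit a normal and a Student's-t to the same data, then compare log-likelihoods.
norm_params = stats.norm.fit(returns)
t_params = stats.t.fit(returns)

ll_norm = np.sum(stats.norm.logpdf(returns, *norm_params))
ll_t = np.sum(stats.t.logpdf(returns, *t_params))
print(f"normal log-likelihood:      {ll_norm:.1f}")
print(f"Student's-t log-likelihood: {ll_t:.1f}")   # higher means the fat-tailed fit describes the data better
```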

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

The Future of the Mind

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 20, 2017

This week’s blog post covers another book by the theoretical physicist Michio Kaku—The Future of the Mind (First Anchor Books, 2015).

Most of the wealth we humans have created is a result of technological progress (in the context of some form of capitalism plus the rule of law).  Most future wealth will result directly from breakthroughs in physics, artificial intelligence, genetics, and other sciences.  This is why AI is fascinating in general (not just for investing).  AI—in combination with other technologies—may eventually turn out to be the most transformative technology of all time.

 

A PHYSICIST’S DEFINITION OF CONSCIOUSNESS

Physicists have been quite successful historically because of their ability to gather data, to measure ever more precisely, and to construct testable, falsifiable mathematical models to predict the future based on the past.  Kaku explains:

When a physicist first tries to understand something, first he collects data and then he proposes a “model,” a simplified version of the object he is studying that captures its essential features.  In physics, the model is described by a series of parameters (e.g., temperature, energy, time).  Then the physicist uses the model to predict its future evolution by simulating its motions.  In fact, some of the world’s largest supercomputers are used to simulate the evolution of models, which can describe protons, nuclear explosions, weather patterns, the big bang, and the center of black holes.  Then you create a better model, using more sophisticated parameters, and simulate it in time as well.  (page 42)

Kaku then writes that he’s taken bits and pieces from fields such as neurology and biology in order to come up with a definition of consciousness:

Consciousness is a process of creating a model of the world using multiple feedback loops in various parameters (e.g., in temperature, space, time, and in relation to others), in order to accomplish a goal (e.g., find mates, food, shelter).

Kaku emphasizes that humans use the past to predict the future, whereas most animals are focused only on the present or the immediate future.

Kaku writes that one can rate different levels of consciousness based on the definition.  The lowest level of consciousness is Level 0, where an organism has limited mobility and creates a model using feedback loops in only a few parameters (e.g., temperature).  Kaku gives the thermostat as an example.  If the temperature gets too hot or too cold, the thermostat registers that fact and then adjusts the temperature accordingly using an air conditioner or heater.  Kaku says each feedback loop is “one unit of consciousness,” so the thermostat – with only one feedback loop – would have consciousness of Level 0:1.
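As a toy illustration of a single feedback loop in Kaku’s sense (this sketch is mine, not Kaku’s), a thermostat measures one parameter and acts on it:

```python
# One feedback loop: sense a single parameter (temperature), then act on it.
def thermostat_step(temperature, setpoint=21.0, band=1.0):
    """One pass through the loop: heat, cool, or do nothing."""
    if temperature < setpoint - band:
        return "heat"
    if temperature > setpoint + band:
        return "cool"
    return "idle"

for reading in [17.5, 20.8, 23.9]:
    print(reading, "->", thermostat_step(reading))
```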

Organisms that are mobile and have a central nervous system have Level I consciousness.  There’s a new set of parameters—relative to Level 0—based on changing locations.  Reptiles are an example of Level I consciousness.  The reptilian brain may have a hundred feedback loops based on its senses.  The totality of these feedback loops gives the reptile a “mental picture” of where it is in relation to various objects (including prey), notes Kaku.

Animals exemplify Level II consciousness.  The number of feedback loops jumps exponentially, says Kaku.  Many animals have complex social structures.  Kaku explains that the limbic system includes the hippocampus (for memories), the amygdala (for emotions), and the thalamus (for sensory information).

You could rank the specific level of Level II consciousness of an animal by listing the total number of distinct emotions and social behaviors.  So, writes Kaku, if there are ten wolves in the wolf pack, and each wolf interacts with all the others with fifteen distinct emotions and gestures, then a first approximation would be that wolves have Level II:150 consciousness.  (Of course, there are caveats, since evolution is never clean and precise, says Kaku.)

 

LEVEL III CONSCIOUSNESS: SIMULATING THE FUTURE

Kaku observes that there is a continuum of consciousness from the most basic organisms up to humans.  Kaku quotes Charles Darwin:

The difference between man and the higher animals, great as it is, is certainly one of degree and not of kind.

Kaku defines human consciousness:

Human consciousness is a specific form of consciousness that creates a model of the world and then simulates it in time, by evaluating the past to simulate the future.  This requires mediating and evaluating many feedback loops in order to make a decision to achieve a goal.

Kaku explains that we as humans have so many feedback loops that we need a “CEO”—an expanded prefrontal cortex that can analyze all the data logically and make decisions.  More precisely, Kaku writes that neurologist Michael Gazzaniga has identified area 10, in the lateral prefrontal cortex, which is twice as big in humans as in apes.  Area 10 is where memory, planning, abstract thinking, learning rules, and picking out what is relevant take place.  Kaku says he will refer to this region, roughly speaking, as the dorsolateral prefrontal cortex.

Most animals, by contrast, do not think and plan, but rely on instinct.  For instance, notes Kaku, animals do not plan to hibernate, but react instinctually when the temperature drops.  Predators plan, but only for the immediate future.  Primates plan a few hours ahead.

Humans, too, rely on instinct and emotion.  But humans also analyze and evaluate information, and run mental simulations of the future—even hundreds or thousands of years into the future.  This, writes Kaku, is how we as humans try to make the best decision in pursuit of a goal.  Of course, the ability to simulate various future scenarios gives humans a great evolutionary advantage for things like evading predators and finding food and mates.

As humans, we have so many feedback loops, says Kaku, that it would be a chaotic sensory overload if we didn’t have the “CEO” in the dorsolateral prefrontal cortex.  We think in terms of chains of causality in order to predict future scenarios.  Kaku explains that the essence of humor is simulating the future but then having an unexpected punch line.

Children play games largely in order to simulate specific adult situations.  When adults play various games like chess, bridge, or poker, they mentally simulate various scenarios.

Kaku explains the mystery of self-awareness:

Self-awareness is creating a model of the world and simulating the future in which you appear.

As humans, we constantly imagine ourselves in various future scenarios.  In a sense, we are continuously running “thought experiments” about our lives in the future.

Kaku writes that the medial prefrontal cortex appears to be responsible for creating a coherent sense of self out of the various sensations and thoughts bombarding our brains.  Furthermore, the left brain fits everything together in a coherent story even when the data don’t make sense.  Dr. Michael Gazzaniga was able to show this by running experiments on split-brain patients.

Kaku speculates that humans can reach better conclusions if the brain receives a great deal of competing data.  With enough data and with practice and experience, the brain can often reach correct conclusions.

At the beginning of the next section—Mind Over Matter—Kaku quotes Harvard psychologist Steven Pinker:

The brain, like it or not, is a machine.  Scientists have come to that conclusion, not because they are mechanistic killjoys, but because they have amassed evidence that every aspect of consciousness can be tied to the brain.

 

DARPA

DARPA is the Pentagon’s Defense Advanced Research Projects Agency.  Kaku writes that DARPA has been central to some of the most important technological breakthroughs of the twentieth century.

President Dwight Eisenhower set up DARPA originally as a way to compete with the Russians after they launched Sputnik into orbit in 1957.  Over the years, some of DARPA’s projects became so large that they were spun off as separate entities, including NASA.

DARPA’s “only charter is radical innovation.”  DARPA scientists have always pushed the limits of what is physically possible.  One of DARPA’s early projects was Arpanet, a telecommunications network designed to keep scientists connected during and after a possible World War III.  After the breakup of the Soviet bloc, the National Science Foundation decided to declassify Arpanet and give away the codes and blueprints for free.  This would eventually become the internet.

DARPA helped create Project 57, which was a top-secret project for guiding ballistic missiles to specific targets.  This technology later became the foundation for the Global Positioning System (GPS).

DARPA has also been a key player in other technologies, including cell phones, night-vision goggles, telecommunications advances, and weather satellites, says Kaku.

Kaku writes that, with a budget over $3 billion, DARPA has recently focused on the brain-machine interface.  Kaku quotes former DARPA official Michael Goldblatt:

Imagine if soldiers could communicate by thought alone… Imagine the threat of biological attack being inconsequential.  And contemplate, for a moment, a world in which learning is as easy as eating, and the replacement of damaged body parts as convenient as a fast-food drive-through.  As impossible as these visions sound or as difficult as you might think the task would be, these visions are the everyday work of the Defense Sciences Office [a branch of DARPA].  (page 74)

Goldblatt, notes Kaku, thinks the long-term legacy of DARPA will be human enhancement.  Goldblatt’s daughter has cerebral palsy and has been confined to a wheelchair all her life.  Goldblatt is highly motivated not only to help millions of people in the future and create a legacy, but also to help his own daughter.

 

TELEKINESIS

Cathy Hutchinson became a quadriplegic after suffering a massive stroke.  But in May 2012, scientists from Brown University placed a tiny chip on top of her brain—called Braingate—which is connected by wires to a computer.  (The chip has ninety-six electrodes for picking up brain impulses.)  Her brain could then send signals through the computer to control a mechanical robotic arm.  She reported her great excitement and said she knows she will get robotic legs eventually, too.  This might happen soon, says Kaku, since the field of cyber prosthetics is advancing fast.

Scientists at Northwestern placed a chip with 100 electrodes on the brain of a monkey.  The signals were carefully recorded while the monkey performed various tasks involving the arms.  Each task would involve a specific firing of neurons, which the scientists eventually were able to decipher.

Next, the scientists took the signal sequences from the chip and instead of sending them to a mechanical arm, they sent the signals to the monkey’s own arm.  Eventually the monkey learned to control its own arm via the computer chips.  (The reason 100 electrodes is enough is because they were placed on the output neurons.  So the monkey’s brain had already done the complex processing involving millions of neurons by the time the signals reached the electrodes.)

This device is one of many that Northwestern scientists are testing.  These devices, which continue to be developed, can help people with spinal cord injuries.

Kaku observes that much of the funding for these developments comes from a DARPA project called Revolutionizing Prosthetics, a $150 million effort since 2006.  Retired U.S. Army colonel Geoffrey Ling, a neurologist with several tours of duty in Iraq and Afghanistan, is a central figure behind Revolutionizing Prosthetics.  Dr. Ling was appalled by the suffering caused by roadside bombs.  In the past, many of these brave soldiers would have died.  Today, many more can be saved.  However, more than 1,300 of them have returned from the Middle East having lost limbs.

Dr. Ling, with funding from the Pentagon, instructed his staff to figure out how to replace lost limbs within five years.  Ling:

They thought we were crazy.  But it’s in insanity that things happen.

Kaku continues:

Spurred into action by Dr. Ling’s boundless enthusiasm, his crew has created miracles in the laboratory.  For example, Revolutionary Prosthetics funded scientists at the Johns Hopkins Applied Physics Laboratory, who have created the most advanced mechanical arm on Earth, which can duplicate nearly all the delicate motions of the fingers, hand, and arm in three dimensions.  It is the same size and has the same strength and agility as a real arm.  Although it is made of steel, if you covered it up with flesh-colored plastic, it would be nearly indistinguishable from a real arm.

This arm was attached to Jan Sherman, a quadriplegic who had suffered from a genetic disease that damaged the connection between her brain and her body, leaving her completely paralyzed from the neck down.  At the University of Pittsburgh, electrodes were placed directly on top of her brain, which were then connected to a computer and then to a mechanical arm.  Five months after surgery to attach the arm, she appeared on 60 Minutes.  Before a national audience, she cheerfully used her new arm to wave, greet the host, and shake his hand.  She even gave him a fist bump to show how sophisticated the arm was.

Dr. Ling says, ‘In my dream, we will be able to take this into all sorts of patients, patients with strokes, cerebral palsy, and the elderly.‘  (page 84)

Dr. Miguel Nicolelis of Duke University is pursuing novel applications of the brain-machine interface (BMI).  Dr. Nicolelis has demonstrated that BMI can be done across continents.  He put a chip on a monkey’s brain.  The chip was connected to the internet.  When the monkey was walking on a treadmill in North Carolina, the signals were sent to a robot in Kyoto, Japan, which performed the same walking motions.

Dr. Nicolelis is also working on the problem that today’s prosthetic hands lack a sense of touch.  He is trying to overcome this challenge with an interface in which messages go from the brain to the mechanical arm, and then directly back to the brain, bypassing the spinal cord altogether.  This is a brain-machine-brain interface (BMBI).

Dr. Nicolelis connected the motor cortex of rhesus monkeys to mechanical arms.  The mechanical arms have sensors, and send signals back to the brain through electrodes connected to the somatosensory cortex (which registers the sensation of touch).  Dr. Nicolelis invented a new code to represent different surfaces.  After a month of practice, the brain learns the new code and can thus distinguish among different surfaces.

Dr. Nicolelis told Kaku that something like the holodeck from Star Trek—where you wander in a virtual world, but feel sensations when you bump into virtual objects—will be possible in the future.  Kaku writes:

The holodeck of the future might use a combination of two technologies.  First, people in the holodeck would wear internet contact lenses, so that they would see an entirely new virtual world everywhere they looked.  The scenery in your contact lens would change instantly with the push of a button.  And if you touched any object in this world, signals sent into the brain would simulate the sensation of touch, using BMBI technology.  In this way, objects in the virtual world you see inside your contact lens would feel solid.  (page 87)

Scientists have begun to explore an “Internet of the mind,” or brain-net.  In 2013, scientists went beyond animal studies and demonstrated the first human brain-to-brain communication.

This milestone was achieved at the University of Washington, with one scientist sending a brain signal (move your right arm) to another scientist.  The first scientist wore an EEG helmet and played a video game.  He fired a cannon by imagining moving his right arm, but was careful not to move it physically.

The signal from the EEG helmet was sent over the Internet to another scientist, who was wearing a transcranial magnetic helmet carefully placed over the part of his brain that controlled his right arm.  When the signal reached the second scientist, the helmet would send a magnetic pulse into his brain, which made his right arm move involuntarily, all by itself.  Thus, by remote control, one human brain could control the movement of another.

This breakthrough opens up a number of possibilities, such as exchanging nonverbal messages via the Internet.  You might one day be able to send the experience of dancing the tango, bungee jumping, or skydiving to the people on your e-mail list.  Not just physical activity, but emotions and feelings as well might be sent via brain-to-brain communication.

Nicolelis envisions a day when people all over the world could participate in social networks not via keyboards, but directly through their minds.  Instead of just sending e-mails, people on the brain-net would be able to telepathically exchange thoughts, emotions, and ideas in real time.  Today a phone call conveys only the information of the conversation and the tone of voice, nothing more.  Video conferencing is a bit better, since you can read the body language of the person on the other end.  But a brain-net would be the ultimate in communications, making it possible to share the totality of mental information in a conversation, including emotions, nuances, and reservations.  Minds would be able to share their most intimate thoughts and feelings.  (pages 87-88)

Kaku gives more details of what would be needed to create a brain-net:

Creating a brain-net that can transmit such information would have to be done in stages.  The first step would be inserting nanoprobes into important parts of the brain, such as the left temporal lobe, which governs speech, and the occipital lobe, which governs vision.  Then computers would analyze these signals and decode them.  This information in turn could be sent over the Internet by fiber-optic cables.  

More difficult would be to insert these signals into another person’s brain, where they could be processed by the receiver.  So far, progress in this area has focused only on the hippocampus, but in the future it should be possible to insert messages directly into other parts of the brain corresponding to our sense of hearing, light, touch, etc.  So there is plenty of work to be done as scientists try to map the cortices of the brain involved in these senses.  Once these cortices have been mapped… it should be possible to insert words, thoughts, memories, and experiences into another brain.  (page 89)

Dr. Nicolelis’ next goal is the Walk Again Project, which is creating a complete exoskeleton that can be controlled by the mind.  Nicolelis calls it a “wearable robot.”  The aim is to allow the paralyzed to walk just by thinking.  There are several challenges to overcome:

First, a new generation of microchips must be created that can be placed in the brain safely and reliably for years at a time.  Second, wireless sensors must be created so the exoskeleton can roam freely.  The signals from the brain would be received wirelessly by a computer the size of a cell phone that would probably be attached to your belt.  Third, new advances must be made in deciphering and interpreting signals from the brain via computers.  For the monkeys, a few hundred neurons were necessary to control the mechanical arms.  For a human, you need, at minimum, several thousand neurons to control an arm or leg.  And fourth, a power supply must be found that is portable and powerful enough to energize the entire exoskeleton.  (page 92)

 

MEMORIES AND THOUGHTS

One interesting possibility is that long-term memory evolved in humans because it was useful for simulating and predicting future scenarios.

Indeed, brain scans done by scientists at Washington University in St. Louis indicate that areas used to recall memories are the same as those involved in simulating the future.  In particular, the link between the dorsolateral prefrontal cortex and the hippocampus lights up when a person is engaged in planning for the future and remembering the past.  In some sense, the brain is trying to ‘recall the future,’ drawing upon memories of the past in order to determine how something will evolve into the future.  This may also explain the curious fact that people who suffer from amnesia… are often unable to visualize what they will be doing in the future or even the very next day.  (page 113)

Some claim that Alzheimer’s disease may be the disease of the century.  As of Kaku’s writing, there were 5.3 million Americans with Alzheimer’s, and that number is expected to quadruple by 2050.  Five percent of people aged sixty-five to seventy-four have it, but more than 50 percent of those over eighty-five have it, even if they have no obvious risk factors.

One possible way to try to combat Alzheimer’s is to create antibodies or a vaccine that might specifically target misshapen protein molecules associated with the disease.  Another approach might be to create an artificial hippocampus.  Yet another approach is to see if specific genes can be found that improve memory.  Experiments on mice and fruit flies have been underway.

If the genetic fix works, it could be administered by a simple shot in the arm.  If it doesn’t work, another possible approach is to insert the proper proteins into the body.  Instead of a shot, it would be a pill.  But scientists are still trying to understand the process of memory formation.

Eventually, writes Kaku, it will be possible to record the totality of stimulation entering into a brain.  In this scenario, the Internet may become a giant library not only for the details of human lives, but also for the actual consciousness of various individuals.  If you want to see how your favorite hero or historical figure felt as they confronted the major crises of their lives, you’ll be able to do so.  Or you could share the memories and thoughts of a Nobel Prize-winning scientist, perhaps gleaning clues about how great discoveries are made.

 

ENHANCING OUR INTELLIGENCE

What made Einstein Einstein?  It’s very difficult to say, of course.  Partly, it may be that he was the right person at the right time.  Also, it wasn’t just raw intelligence, but perhaps more a powerful imagination and an ability to stick with problems for a very long time.  Kaku:

The point here is that genius is perhaps a combination of being born with certain mental abilities and also the determination and drive to achieve great things.  The essence of Einstein’s genius was probably his extraordinary ability to simulate the future through thought experiments, creating new physical principles via pictures.  As Einstein himself once said, ‘The true sign of intelligence is not knowledge, but imagination.’  And to Einstein, imagination meant shattering the boundaries of the known and entering the domain of the unknown.  (page 133)

The brain remains “plastic” even into adult life.  People can always learn new skills.  Kaku notes that the Canadian psychologist Dr. Donald Hebb made an important discovery about the brain:

the more we exercise certain skills, the more certain pathways in our brains become reinforced, so the task becomes easier.  Unlike a digital computer, which is just as dumb today as it was yesterday, the brain is a learning machine with the ability to rewire its neural pathways every time it learns something.  This is a fundamental difference between the digital computer and the brain.  (page 134)

Scientists also believe that the ability to delay gratification and the ability to focus attention may be more important than IQ for success in life.

Furthermore, traditional IQ tests only measure “convergent” intelligence related to the left brain and not “divergent” intelligence related to the right brain.  Kaku quotes Dr. Ulrich Kraft:

‘The left hemisphere is responsible for convergent thinking and the right hemisphere for divergent thinking.  The left side examines details and processes them logically and analytically but lacks a sense of overriding, abstract connections.  The right side is more imaginative and intuitive and tends to work holistically, integrating pieces of an informational puzzle into a whole.’  (page 138)

Kaku suggests that a better test of intelligence might measure a person’s ability to imagine different scenarios related to a specific future challenge.

Another avenue of intelligence research is genes.  We are 98.5 percent identical genetically to chimpanzees.  But we live twice as long and our mental abilities have exploded in the past six million years.  Scientists have even isolated just a handful of genes that may be responsible for our intelligence.  This is intriguing, to say the least.

In addition to having a larger cerebral cortex, our brains have many folds in them, vastly increasing their surface area.  (The brain of Carl Friedrich Gauss was found to be especially folded and wrinkled.)

Scientists have also focused on the ASPM gene.  It has mutated fifteen times in the last five or six million years.  Kaku:

Because these mutations coincide with periods of rapid growth in intellect, it is tantalizing to speculate that ASPM is among the handful of genes responsible for our increased intelligence.  If this is true, then perhaps we can determine whether these genes are still active today, and whether they will continue to shape human evolution in the future.  (page 154)

Scientists have also learned that nature takes numerous shortcuts in creating the brain.  Many neurons are connected randomly, so a detailed blueprint isn’t needed.  Neurons organize themselves in a baby’s brain in reaction to various specific experiences.  Also, nature uses modules that repeat over and over again.

It is possible that we will be able to boost our intelligence in the future, which will increase the wealth of society (probably significantly).  Kaku:

It may be possible in the coming decades to use a combination of gene therapy, drugs, and magnetic devices to increase our intelligence.  (page 162)

…raising our intelligence may help speed up technological innovation.  Increased intelligence would mean a greater ability to simulate the future, which would be invaluable in making scientific discoveries.  Often, science stagnates in certain areas because of a lack of fresh new ideas to stimulate new avenues of research.  Having an ability to simulate different possible futures would vastly increase the rate of scientific breakthroughs.

These scientific discoveries, in turn, could generate new industries, which would enrich all of society, creating new markets, new jobs, and new opportunities.  History is full of technological breakthroughs creating entirely new industries that benefited not just the few, but all of society (think of the transistor and the laser, which today form the foundation of the world economy).  (page 164)

 

DREAMS

Kaku explains that the brain, as a neural network, may need to dream in order to function well:

The brain, as we have seen, is not a digital computer, but rather a neural network of some sort that constantly rewires itself after learning new tasks.  Scientists who work with neural networks noticed something interesting, though.  Often these systems would become saturated after learning too much, and instead of processing more information they would enter a “dream” state, whereby random memories would sometimes drift and join together as the neural networks tried to digest all the new material.  Dreams, then, might reflect “house cleaning,” in which the brain tries to organize its memories in a more coherent way.  (If this is true, then possibly all neural networks, including all organisms that can learn, might enter a dream state in order to sort out their memories.  So dreams probably serve a purpose.  Some scientists have speculated that this might imply that robots that learn from experience might also eventually dream as well.)

Neurological studies seem to back up this conclusion.  Studies have shown that retaining memories can be improved by getting sufficient sleep between the time of activity and a test.  Neuroimaging shows that the areas of the brain that are activated during sleep are the same as those involved in learning a new task.  Dreaming is perhaps useful in consolidating this new information.  (page 172)

In 1977, Dr. Allan Hobson and Dr. Robert McCarley made history – seriously challenging Freud’s theory of dreams—by proposing the “activation synthesis theory” of dreams:

The key to dreams lies in nodes found in the brain stem, the oldest part of the brain, which squirts out special chemicals, called adrenergics, that keep us alert.  As we go to sleep, the brain stem activates another system, the cholinergic, which emits chemicals that put us in a dream state.

As we dream, cholinergic neurons in the brain stem begin to fire, setting off erratic pulses of electrical energy called PGO (pontine-geniculate-occipital) waves.  These waves travel up the brain stem into the visual cortex, stimulating it to create dreams.  Cells in the visual cortex begin to resonate hundreds of times per second in an irregular fashion, which is perhaps responsible for the sometimes incoherent nature of dreams.  (pages 174-175)

 

ALTERED STATE OF CONSCIOUSNESS

There seem to be certain parts of the brain that are associated with religious experiences and also with spirituality.  Dr. Mario Beauregard of the University of Montreal commented:

If you are an atheist and you live a certain kind of experience, you will relate it to the magnificence of the universe.  If you are a Christian, you will associate it with God.  Who knows.  Perhaps they are the same thing.

Kaku explains how human consciousness involves delicate checks and balances similar to the competing points of view that a good CEO considers:

We have proposed that a key function of human consciousness is to simulate the future, but this is not a trivial task.  The brain accomplishes it by having these feedback loops check and balance one another.  For example, a skillful CEO at a board meeting tries to draw out the disagreement among staff members and to sharpen competing points of view in order to sift through the various arguments and then make a final decision.  In the same way, various regions of the brain make diverging assessments of the future, which are given to the dorsolateral prefrontal cortex, the CEO of the brain.  These competing assessments are then evaluated and weighted until a balanced final decision is made.  (page 205)

The most common mental disorder is depression, afflicting twenty million people in the United States.  One way scientists are trying to cure depression is deep brain stimulation (DBS)—inserting small probes into the brain and causing an electrical shock.  Kaku:

In the past decade, DBS has been used on forty thousand patients for motor-related diseases, such as Parkinson’s and epilepsy, which cause uncontrolled movements of the body.  Between 60 and 100 percent of the patients report significant improvement in controlling their shaking hands.  More than 250 hospitals in the United States now perform DBS treatment.  (page 208)

Dr. Helen Mayberg and colleagues at Washington University School of Medicine have discovered an important clue to depression:

Using brain scans, they identified an area of the brain, called Brodmann area 25 (also called the subcallosal cingulate region), in the cerebral cortex that is consistently hyperactive in depressed individuals for whom all other forms of treatment have been unsuccessful. 

…Dr. Mayberg had the idea of applying DBS directly to Brodmann area 25… her team took twelve patients who were clinically depressed and had shown no improvement after exhaustive use of drugs, psychotherapy, and electroshock therapy.

They found that eight of these chronically depressed individuals showed immediate progress.  Their success was so astonishing, in fact, that other groups raced to duplicate these results and apply DBS to other mental disorders…

Dr. Mayberg says, ‘Depression 1.0 was psychotherapy… Depression 2.0 was the idea that it’s a chemical imbalance.  This is Depression 3.0.  What has captured everyone’s imagination is that, by dissecting a complex behavior disorder into its component systems, you have a new way of thinking about it.’

Although the success of DBS in treating depressed individuals is remarkable, much more research needs to be done…

 

THE ARTIFICIAL MIND AND SILICON CONSCIOUSNESS

Kaku introduces the potential challenge of handling artificial intelligence as it evolves:

Given the fact that computer power has been doubling every two years for the past fifty years under Moore’s law, some say it is only a matter of time before machines eventually acquire self-awareness that rivals human intelligence.  No one knows when this will happen, but humanity should be prepared for the moment when machine consciousness leaves the laboratory and enters the real world.  How we deal with robot consciousness could decide the future of the human race.  (page 216)

Kaku observes that AI has gone through three cycles of boom and bust.  In the 1950s, machines were built that could play checkers and solve algebra problems.  Robot arms could recognize and pick up blocks.  In 1965, Dr. Herbert Simon, one of the founders of AI, made a prediction:

Machines will be capable, within 20 years, of doing any work a man can do.

In 1967, another founder of AI, Dr. Marvin Minsky, remarked:

…within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved. 

But in the 1970s, not much progress in AI had been made.  In 1974, both the U.S. and British governments significantly cut back their funding for AI.  This was the beginning of the first AI winter.

But as computer power steadily increased in the 1980s, a new gold rush occurred in AI, fueled mainly by Pentagon planners hoping to put robot soldiers on the battlefield.  Funding for AI hit a billion dollars by 1985, with hundreds of millions of dollars spent on projects like the Smart Truck, which was supposed to be an intelligent, autonomous truck that could enter enemy lines, do reconnaissance by itself, perform missions (such as rescuing prisoners), and then return to friendly territory.  Unfortunately, the only thing that the Smart Truck did was get lost.  The visible failures of these costly projects created yet another AI winter in the 1990s.  (page 217)

Kaku continues:

But now, with the relentless march of computer power, a new AI renaissance has begun, and slow but substantial progress has been made.  In 1997, IBM’s Deep Blue computer beat world chess champion, Garry Kasparov.  In 2005, a robot car from Stanford won the DARPA Grand Challenge for a driverless car.  Milestones continue to be reached.

This question remains:  Is the third try a charm?

Scientists now realize that they vastly underestimated the problem, because most human thought is actually subconscious.  The conscious part of our thoughts, in fact, represents only the tiniest portion of our computations.

Dr. Steve Pinker says, ‘I would pay a lot for a robot that would put away the dishes or run simple errands, but I can’t, because all of the little problems that you’d need to solve to build a robot to do that, like recognizing objects, reasoning about the world, and controlling hands and feet, are unsolved engineering problems.’  (pages 217-218)

Kaku asked Dr. Minsky when he thought machines would equal and then surpass human intelligence.  Minsky replied that he’s confident it will happen, but that he doesn’t make predictions about specific dates any more.

If you remove a single transistor from a Pentium chip, the computer will immediately crash, writes Kaku.  But the human brain can perform quite well even with half of it missing:

This is because the brain is not a digital computer at all, but a highly sophisticated neural network of some sort.  Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task.  The brain has no programming, no operating system, no Windows, no central processor.  Instead, its neural networks are massively parallel, with one hundred billion neurons firing at the same time in order to accomplish a single goal: to learn.

In light of this, AI researchers are beginning to reexamine the ‘top-down approach’ they have followed for the past fifty years (e.g., putting all the rules of common sense on a CD).  Now AI researchers are giving the ‘bottom-up approach’ a second look.  This approach tries to follow Mother Nature, which has created intelligent beings (us) via evolution, starting with simple animals like worms and fish and then creating more complex ones.  Neural networks must learn the hard way, by bumping into things and making mistakes.  (page 220)

Dr. Rodney Brooks, former director of the MIT Artificial Intelligence Laboratory, introduced a totally new approach to AI.  Why not build small, insectlike robots that learn how to walk by trial and error, just as nature learns?  Brooks told Kaku that he used to marvel at the mosquito, with a microscopic brain of a few neurons, which can, nevertheless, maneuver in space better than any robot airplane.  Brooks built a series of tiny robots called ‘insectoids’ or ‘bugbots,’ which learn by bumping into things.  Kaku comments:

At first, it may seem that this requires a lot of programming.  The irony, however, is that neural networks require no programming at all.  The only thing that the neural network does is rewire itself, by changing the strength of certain pathways each time it makes a right decision.  So programming is nothing; changing the network is everything.  (page 221)

The Mars Curiosity rover is one result of this bottom-up approach.
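
To make the “changing the network is everything” idea concrete, here is a minimal sketch in Python (my own illustration, not code from Kaku’s book): a toy network with two competing pathways strengthens whichever pathway produced a correct decision and weakens whichever produced a wrong one.  The pathway count, learning rate, and number of trials are arbitrary.

    import random

    weights = [0.5, 0.5]     # strengths of two competing pathways
    LEARNING_RATE = 0.1
    CORRECT_ACTION = 1       # assume pathway 1 leads to the "right" decision

    for trial in range(100):
        # choose a pathway with probability proportional to its strength
        action = 0 if random.random() < weights[0] / sum(weights) else 1
        if action == CORRECT_ACTION:
            weights[action] += LEARNING_RATE   # reinforce the successful pathway
        else:
            weights[action] = max(0.05, weights[action] - LEARNING_RATE)  # weaken it

    print("final pathway strengths:", weights)

After enough trials the correct pathway dominates, even though no rule was ever programmed in; all of the “knowledge” lives in the connection strengths.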

Scientists have realized that emotions are central to human cognition.  Humans usually need some emotional input, in addition to logic and reason, in order to make good decisions.  Robots are now being programmed to recognize various human emotions and also to exhibit emotions themselves.  Robots also need a sense of danger and some feeling of pain in order to avoid injuring themselves.  Eventually, as robots become ever more conscious, there will be many ethical questions to answer.

Biologists used to debate the question, “What is life?”  But, writes Kaku, the physicist and Nobel Laureate Francis Crick observed that the question is no longer well defined now that our understanding of DNA has advanced.  There are many layers and complexities to the question, “What is life?”  Similarly, there are likely to be many layers and complexities to the question of what constitutes “emotion” or “consciousness.”

Moreover, as Rodney Brooks argues, we humans are machines.  Eventually the robot machines we are building will be just as alive as we are.  Kaku summarizes a conversation he had with Brooks:

This evolution in human perspective started with Nicolaus Copernicus when he realized that the Earth is not the center of the universe, but rather goes around the sun.  It continued with Darwin, who showed that we were similar to the animals in our evolution.  And it will continue into the future… when we realize that we are machines, except that we are made of wetware and not hardware.  (page 248)

Kaku then quotes Brooks directly:

We don’t like to give up our specialness, so you know, having the idea that robots could really have emotions, or that robots could be living creatures—I think is going to be hard for us to accept.  But we’re going to come to accept it over the next fifty years.

Brooks also thinks we will successfully create robots that are safe for humans:

The robots are coming, but we don’t have to worry too much about that.  It’s going to be a lot of fun.

Furthermore, Brooks argues that we are likely to merge with robots.  After all, we’ve already done this to an extent.  Over twenty thousand people have cochlear implants, giving them the ability to hear.

Similarly, at the University of Southern California and elsewhere, it is possible to take a patient who is blind and implant an artificial retina.  One method places a mini video camera in eyeglasses, which converts an image into digital signals.  These are sent wirelessly to a chip placed in the person’s retina.  The chip activates the retina’s nerves, which then send messages down the optic nerve to the occipital lobe of the brain.  In this way, a person who is totally blind can see a rough image of familiar objects.  Another design has a light-sensitive chip placed on the retina itself, which then sends signals directly to the optic nerve.  This design does not need an external camera.  (page 249)

This means, says Kaku, that eventually we’ll be able to enhance our ordinary senses and abilities.  We’ll merge with our robot creations.
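
As a rough sketch of the signal chain described above (my own illustration with an arbitrary grid size, not a description of the actual USC device): an image from the external camera is reduced to a coarse grid of brightness values, and each grid cell becomes the stimulation level for one electrode on the retinal chip.

    def image_to_electrode_pattern(image, grid_size=16):
        """Downsample a brightness image (list of rows, values 0-255) to a
        grid_size x grid_size pattern of electrode stimulation levels.
        Assumes the image is at least grid_size pixels in each dimension."""
        rows, cols = len(image), len(image[0])
        pattern = []
        for i in range(grid_size):
            row = []
            for j in range(grid_size):
                # average the block of pixels that maps to this electrode
                r0, r1 = i * rows // grid_size, (i + 1) * rows // grid_size
                c0, c1 = j * cols // grid_size, (j + 1) * cols // grid_size
                block = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
                row.append(sum(block) / len(block))
            pattern.append(row)
        return pattern

    # a synthetic 64x64 test image that brightens toward the right edge
    img = [[4 * c for c in range(64)] for _ in range(64)]
    print(len(image_to_electrode_pattern(img)), "rows of electrode levels")

The person sees only as many “pixels” as there are electrodes, which is why the recovered image is rough rather than sharp.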

 

REVERSE ENGINEERING THE BRAIN

Kaku highlights three approaches to the brain:

Because the brain is so complex, there are at least three distinct ways in which it can be taken apart, neuron by neuron.  The first is to simulate the brain electronically with supercomputers, which is the approach being taken by the Europeans.  The second is to map out the neural pathways of living brains, as in BRAIN [Brain Research Through Advancing Innovative Neurotechnologies Initiative].  (This task, in turn, can be further subdivided, depending on how these neurons are analyzed – either anatomically, neuron by neuron, or by function and activity.)  And third, one can decipher the genes that control the development of the brain, which is an approach pioneered by billionaire Paul Allen of Microsoft.  (page 253)

Dr. Henry Markram is a central figure in the Human Brain Project.  Kaku quotes Dr. Markram:

To build this—the supercomputers, the software, the research—we need around one billion dollars.  This is not expensive when one considers that the global burden of brain disease will exceed twenty percent of the world gross domestic product very soon.

Dr. Markram also said:

It’s essential for us to understand the human brain if we want to get along in society, and I think that it is a key step in evolution.  

How does the human genome go from twenty-three thousand genes to one hundred billion neurons?

The answer, Dr. Markram believes, is that nature uses shortcuts.  The key to his approach is that certain modules of neurons are repeated over and over again once Mother Nature finds a good template.  If you look at microscopic slices of the brain, at first you see nothing but a random tangle of neurons.  But upon closer examination, patterns of modules that are repeated over and over appear.  

(Modules, in fact, are one reason why it is possible to assemble large skyscrapers so rapidly.  Once a single module is designed, it is possible to repeat it endlessly on the assembly line.  Then you can rapidly stack them on top of one another to create the skyscraper.  Once the paperwork is all signed, an apartment building can be assembled using modules in a few months.)

The key to Dr. Markram’s Blue Brain project is the “neocortical column,” a module that is repeated over and over in the brain.  In humans, each column is about two millimeters tall, with a diameter of half a millimeter, and contains sixty thousand neurons.  (As a point of comparison, rat neural modules contain about ten thousand neurons each.)  It took ten years, from 1995 to 2005, for Dr. Markram to map the neurons in such a column and to figure out how it worked.  Once that was deciphered, he then went to IBM to create massive iterations of these columns.  (page 257)
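
A quick back-of-envelope calculation (my rough illustration, and crude, since not all of the brain’s neurons actually sit in neocortical columns) shows why the module shortcut matters: once a single sixty-thousand-neuron column has been deciphered, a full-scale simulation needs on the order of a million or two copies of that module, rather than a hundred billion individually modeled neurons.

    # Rough scale estimate using the figures quoted above (illustrative only).
    NEURONS_IN_BRAIN = 100e9      # ~one hundred billion neurons
    NEURONS_PER_COLUMN = 60_000   # neurons per human neocortical column

    columns_needed = NEURONS_IN_BRAIN / NEURONS_PER_COLUMN
    print(f"~{columns_needed:,.0f} column modules")   # prints ~1,666,667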

Kaku quotes Dr. Markram again:

…I think, quite honestly, that if the planet understood how the brain functions, we would resolve conflicts everywhere.  Because people would understand how trivial and how deterministic and how controlled conflicts and reactions and misunderstandings are.

The slice-and-dice approach:

The anatomical approach is to take apart the cells of an animal brain, neuron by neuron, using the “slice-and-dice” method.  In this way, the full complexity of the environment, the body, and memories are already encoded in the model.  Instead of approximating a human brain by assembling a huge number of transistors, these scientists want to identify each neuron of the brain.  After that, perhaps each neuron can be simulated by a collection of transistors so that you’d have an exact replica of the human brain, complete with memory, personality, and connection to the senses.  Once someone’s brain is fully reverse engineered in this way, you should be able to have an informative conversation with that person, complete with memories and a personality.  (page 259)

There is a parallel project called the Human Connectome Project.

Most likely, this effort will be folded into the BRAIN project, which will vastly accelerate this work.  The goal is to produce a neuronal map of the human brain’s pathways that will elucidate brain disorders such as autism and schizophrenia.  (pages 260-261)

Kaku notes that one day automated microscopes will continuously photograph the brain slices, while AI machines continuously analyze them.

The third approach:

Finally, there is a third approach to map the brain.  Instead of analyzing the brain by using computer simulations or by identifying all the neural pathways, yet another approach was taken with a generous grant of $100 million from Microsoft billionaire Paul Allen.  The goal was to construct a map or atlas of the mouse brain, with the emphasis on identifying the genes responsible for creating the brain.

…A follow-up project, the Allen Human Brain Atlas, was announced… with the hope of creating an anatomically and genetically complete 3-D map of the human brain.  In 2011, the Allen Institute announced that it had mapped the biochemistry of two human brains, finding one thousand anatomical sites with one hundred million data points detailing how genes are expressed in the underlying biochemistry.  The data confirmed that 82 percent of our genes are expressed in the brain.  (pages 261-262)

Kaku says the Human Genome Project was very successful in sequencing all the genes in the human genome.  But it’s just the first step in a long journey to understand how these genes work.  Similarly, once scientists have reverse engineered the brain, that will likely be only the first step in understanding how the brain works.

Once the brain is reverse-engineered, this will help scientists understand and cure various diseases.  Kaku observes that, with human DNA, a single misspelling out of three billion base pairs can cause uncontrolled flailing of your limbs and convulsions, as in Huntington’s disease.  Similarly, perhaps just a few disrupted connections in the brain can cause certain illnesses.

Successfully reverse engineering the brain also will help with AI research.  For instance, writes Kaku, humans can recognize a familiar face from different angles in 0.1 seconds.  But a computer has trouble with this.  There’s also the question of how long-term memories are stored.

Finally, if human consciousness can be transferred to a computer, does that mean that immortality is possible?

 

THE FUTURE

Kaku talked with Dr. Ray Kurzweil, who told him it’s important for an inventor to anticipate changes.  Kurzweil has made a number of predictions, at least some of which have been roughly accurate.  Kurzweil predicts that the “singularity” will occur around the year 2045.  By then, machines will not only have surpassed humans in intelligence; they will also have created next-generation robots even smarter than themselves.

Kurzweil holds that this process of self-improvement can be repeated indefinitely, leading to an explosion—thus the term “singularity”—of ever-smarter and ever more capable robots.  Moreover, humans will have merged with their robot creations and will, at some point, become immortal.

Robots of ever-increasing intelligence and ability will require more power.  Of course, there will be breakthroughs in energy technology, likely including nuclear fusion and perhaps even antimatter and/or black holes.  So the cost to produce prodigious amounts of energy will keep coming down.  At the same time, because Moore’s law cannot continue forever, gains in computing efficiency will eventually slow, and super robots will need ever-increasing amounts of energy.  At some point, this will probably require traveling—or sending nanobot probes—to numerous other stars or to other areas where the energy of antimatter and/or of black holes can be harnessed.

Kaku notes that most people in AI agree that a “singularity” will occur at some point.  But it’s extremely difficult to predict the exact timing.  It could happen sooner than Kurzweil predicts or it could end up taking much longer.

Kurzweil wants to bring his father back to life.  Kaku thinks something like this may eventually be possible.  Kaku:

…I once asked Dr. Robert Lanza of the company Advanced Cell Technology how he was able to bring a long-dead creature “back to life,” making history in the process.  He told me that the San Diego Zoo asked him to create a clone of a banteng, an oxlike creature that had died out about twenty-five years earlier.  The hard part was extracting a usable cell for the purpose of cloning.  However, he was successful, and then he FedExed the cell to a farm, where it was implanted into a female cow, which then gave birth to this animal.  Although no primate has ever been cloned, let alone a human, Lanza feels it’s a technical problem, and that it’s only a matter of time before someone clones a human.  (page 273)

The hard part of cloning a human would be bringing back their memories and personality, says Kaku.  One possibility would be creating a large data file containing all known information about a person’s habits and life.  Such a file could be remarkably accurate.  Even for people who are dead today, scores of questions could be put to friends, relatives, and associates.  This could be turned into hundreds of numbers, each representing a different trait ranked from 0 to 10, writes Kaku.
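
A hypothetical sketch of what such a data file might look like (the trait names, scores, and comparison rule below are my assumptions, not Kaku’s): each trait is scored from 0 to 10, and two profiles can be compared trait by trait.

    # Hypothetical personality profile: trait name -> score from 0 to 10.
    profile = {
        "humor": 7,
        "patience": 4,
        "risk_tolerance": 6,
        "curiosity": 9,
        # ...a real file, as Kaku describes it, would hold hundreds of traits
    }

    def profile_distance(a, b):
        """Average absolute difference across traits present in both profiles."""
        shared = set(a) & set(b)
        return sum(abs(a[t] - b[t]) for t in shared) / len(shared)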

When technology has advanced enough, it will become possible—perhaps via the Connectome Project—to recreate a person’s brain, neuron for neuron.  If it becomes possible for you to have your connectome completed, then your doctor—or robodoc—would have all your neural connections on a hard drive.  Then, says Kaku, at some point, you could be brought back to life, using either a clone or a network of digital transistors (inside an exoskeleton or surrogate of some sort).

Dr. Hans Moravec, former director of the Artificial Intelligence Laboratory at Carnegie Mellon University, has pioneered an intriguing idea:  transferring your mind into an immortal robotic body while you’re still alive.  Kaku explains what Moravec told him:

First, you lie on a stretcher, next to a robot lacking a brain.  Next, a robotic surgeon extracts a few neurons from your brain, and then duplicates these neurons with some transistors located in the robot.  Wires connect your brain to the transistors in the robot’s empty head.  The neurons are then thrown away and replaced by the transistor circuit.  Since your brain remains connected to these transistors via wires, it functions normally and you are fully conscious during this process.  Then the super surgeon removes more and more neurons from your brain, each time duplicating these neurons with transistors in the robot.  Midway through the operation, half your brain is empty; the other half is connected by wires to a large collection of transistors inside the robot’s head.  Eventually all the neurons in your brain have been removed, leaving a robot brain that is an exact duplicate of your original brain, neuron for neuron.  (page 280)

When you wake up, you are likely to have a few superhuman powers, perhaps including a form of immortality.  This technology is likely far in the future, of course.

Kaku then observes that there is another possible path to immortality that does not involve reverse engineering the brain.  Instead, super smart nanobots could periodically repair your cells.  Kaku:

…Basically, aging is the buildup of errors, at the genetic and cellular level.  As cells get older, errors begin to build up in their DNA and cellular debris also starts to accumulate, which makes the cells sluggish.  As cells begin slowly to malfunction, skin begins to sag, bones become frail, hair falls out, and our immune system deteriorates.  Eventually, we die.

But cells also have error-correcting mechanisms.  Over time, however, even these error-correcting mechanisms begin to fail, and aging accelerates.  The goal, therefore, is to strengthen natural cell-repair mechanisms, which can be done via gene therapy and the creation of new enzymes.  But there is also another way: using “nanobot” assemblers.

One of the linchpins of this futuristic technology is something called the “nanobot,” or an atomic machine, which patrols the bloodstream, zapping cancer cells, repairing the damage from the aging process, and keeping us forever young and healthy.  Nature has already created some nanobots in the form of immune cells that patrol the body in the blood.  But these immune cells attack viruses and foreign bodies, not the aging process.

Immortality is within reach if these nanobots can reverse the ravages of the aging process at the molecular and cellular level.  In this vision, nanobots are like immune cells, tiny police patrolling your bloodstream.  They attack any cancer cells, neutralize viruses, and clean out the debris and mutations.  Then the possibility of immortality would be within reach using our own bodies, not some robot or clone.  (pages 281-282)

Kaku writes that his personal philosophy is simple: If something is possible based on the laws of physics, then it becomes an engineering and economics problem to build it.  A nanobot is an atomic machine with arms and clippers that grabs molecules, cuts them at specific points, and then splices them back together.  Such a nanobot would be able to create almost any known molecule.  It may also be able to self-reproduce.

The late Richard Smalley, a Nobel Laureate in chemistry, argued that quantum forces would prevent nanobots from being able to function.  Eric Drexler, a founder of nanotechnology, pointed out that ribosomes in our own bodies cut and splice DNA molecules at specific points, enabling the creation of new DNA strands.  Eventually Drexler admitted that quantum forces do sometimes get in the way, while Smalley acknowledged that if ribosomes can cut and splice molecules, perhaps there are other ways, too.

Ray Kurzweil is convinced that nanobots will shape society itself.  Kaku quotes Kurzweil:

…I see it, ultimately, as an awakening of the whole universe.  I think the whole universe right now is basically made up of dumb matter and energy and I think it will wake up.  But if it becomes transformed into this sublimely intelligent matter and energy, I hope to be a part of that.

 

THE MIND AS PURE ENERGY

Kaku writes that it’s well within the laws of physics for the mind to be in the form of pure energy, able to explore the cosmos.  Isaac Asimov said that, of all the science-fiction short stories he wrote, his favorite was “The Last Question.”  In this story, humans have placed their physical bodies in pods, while their minds roam as pure energy.  But they cannot keep the universe itself from dying in the Big Freeze.  So they create a supercomputer to figure out if the Big Freeze can be avoided.  The supercomputer responds that there is not enough data.  Eons later, when stars are darkening, the supercomputer finds a solution: It takes all the dead stars and combines them, producing an explosion.  The supercomputer says, “Let there be light!”

And there was light.  Humanity, with its supercomputer, had become capable of creating a new universe.

 

BOOLE MICROCAP FUND

An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:  https://boolefund.com/best-performers-microcap-stocks/

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.

 

If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail: jb@boolefund.com

 

Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.