Physics of the Future

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 13, 2017

Science and technology are moving forward faster than ever before:

…this is just the beginning.  Science is not static.  Science is exploding exponentially all around us.  (page 12)

Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future.  His book, Physics of the Future (Anchor Books, 2012), is a result.

Kaku explains why his predictions may carry more weight than those of other futurists:

  • His book is based on interviews with more than 300 top scientists.
  • Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
  • Prototypes of all the technologies mentioned in the book already exist.
  • As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.

The ancients had little understanding of the forces of nature, so they invented the gods of mythology.  Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.

We are on the verge of becoming a planetary, or Type I, civilization.  This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.

But there are still some things, like face-to-face meetings, that appear not to have changed much.  Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years.  People still like to see tourist attractions in person.  People still like live performances.  Many people still prefer taking courses in person rather than online.  (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)

Here are the chapters from Kaku’s book that I cover:

  • Future of the Computer
  • Future of Artificial Intelligence
  • Future of Medicine
  • Nanotechnology
  • Future of Energy
  • Future of Space Travel
  • Future of Humanity


FUTURE OF THE COMPUTER

Kaku quotes Helen Keller:

No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.

According to Moore’s law, computer power doubles every eighteen months.  Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly.  Also, exponential growth is often not noticeable for the first few decades.  But eventually things can change dramatically.
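To make the exponential concrete, here is a quick back-of-the-envelope calculation (a toy sketch, assuming the eighteen-month doubling period Kaku cites):

```python
# Moore's law: computing power doubles roughly every 18 months.
# A toy illustration of how quickly that compounds over a few decades.

def power_after(years, doubling_months=18):
    """Relative computing power after `years`, starting from 1."""
    return 2 ** (years * 12 / doubling_months)

for years in (3, 15, 30):
    print(f"After {years:2d} years: ~{power_after(years):,.0f}x")
```

After three years the gain is a barely noticeable 4x; after thirty years it is over a million-fold, which is why exponential growth looks flat at first and then changes everything.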

Even the near future may be quite different, writes Kaku:

…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control.  They will, to a degree, recognize the human voice and face and converse in a formal language.  They will be able to create entire virtual worlds that we can only dream of today.  Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper.  Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders.  (pages 25-26)

In order to discuss the future of science and technology, Kaku has divided each chapter into three parts:  the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).

In the near future, we can surf the internet via special glasses or contact lenses.  We can navigate with a handheld device or just by moving our hands.  We can connect to our office via the lens.  It’s likely that when we encounter a person, we will see their biography on our lens.

Also, we will be able to travel by driverless cars.  This will allow us to use commute time to access the internet via our lenses or to do other work.  Kaku notes that the term car accident may disappear from the language once driverless cars become advanced and ubiquitous enough.  Instead of nearly 40,000 people dying in the United States in car accidents each year, there may be zero deaths from car accidents.  Moreover, most traffic jams will be avoided when driverless cars can work together to keep traffic flowing freely.

At home, you will have a room with screens on every wall.  If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.

You won’t need to carry a computer with you.  Computers will be embedded nearly everywhere.  You’ll have constant access to computers and the internet via your glasses or contact lenses.

As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person.  This includes the moon, Mars, and other currently exotic locations.

Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland.  Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill.  Kaku found that he could run, hide, sprint, or lie down.  Everything he saw was very realistic.  In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.

Your doctor – likely a human face appearing on your wall – will have all your genetic information.  Also, you’ll be able to pass a tiny probe over your body and diagnose any illness.  (MRI machines will be as small as a phone.)  As well, tiny chips or sensors will be embedded throughout your environment.  Most forms of cancer will be identified and destroyed before a tumor ever forms.  Kaku says the word tumor will disappear from the human language.

Furthermore, we’ll probably be able to slow down and even reverse the aging process.  We’ll be able to regrow organs based on computerized access to our genes.  We’ll likely be able to reengineer our genes.

In the medium term (2030 to 2070):

  • Moore’s law may reach an end.  Computing power will still continue to grow, but likely with a longer doubling time than before.
  • When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail.  You’ll be able to download informative lectures about anything you see.  In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
  • If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers.  You’ll be able to see through hills and other obstacles.
  • If you’re a surgeon, you’ll see in great detail everything inside the body.  You’ll have access to all medical records, etc.
  • Universal translators will allow any two people to converse.
  • True 3-D images will surround us when we watch a movie.  3-D holograms will become a reality.

In the far future (2070 to 2100):

We will be able to control computers directly with our minds.

John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain.  Through trial and error, the paralyzed person learns to move the cursor on a computer screen.  Eventually they can read and write e-mails, and play computer games.  Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.

Similarly, paralyzed people will be able to control mechanical arms and legs from their brains.  Experiments with monkeys have already achieved this.

Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain.  MRI machines themselves will go from being several tons to being smaller than phones and as thin as a dime.

Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy.  In this way, we’ll be able to control objects just by thinking.  Astronauts on earth will be able to control superhuman robotic bodies on the moon.



FUTURE OF ARTIFICIAL INTELLIGENCE

AI pioneer Herbert Simon, in 1965, said:

Machines will be capable, in twenty years, of doing any work a man can do.

Unfortunately, not much progress was made.  In 1974, the first AI winter began as the U.S. and British governments cut off funding.

Progress again was made in the 1980s.  But because it was overhyped, another backlash occurred and a second AI winter began.  Many people left the field as funding disappeared.

The human brain is a type of neural network.  Neural networks follow Hebb’s rule:  every time a correct decision is made, those neural pathways are reinforced.  Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.
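Hebb’s rule can be sketched in a few lines of Python.  This is a toy model of my own, not anything from Kaku’s book: a single artificial neuron whose weights grow whenever input and output are active together (the classical Hebbian update, delta_w = learning_rate × input × output):

```python
# A toy Hebbian neuron: each time input and output are active together,
# the connecting weight is strengthened -- the pathway is "reinforced".

def hebbian_update(weights, inputs, learning_rate=0.1):
    # Output is a simple thresholded weighted sum (fires if sum > 0.5).
    activation = sum(w * x for w, x in zip(weights, inputs))
    output = 1.0 if activation > 0.5 else 0.0
    # Hebb's rule: delta_w = learning_rate * input * output.
    return [w + learning_rate * x * output for w, x in zip(weights, inputs)]

weights = [0.4, 0.4, 0.0]
pattern = [1, 1, 0]          # present the same stimulus repeatedly
for _ in range(5):
    weights = hebbian_update(weights, pattern)
print(weights)  # weights on the active inputs grow; the unused one stays 0
```

Repeated exposure to the same pattern strengthens exactly the pathways that carried it, which is the sense in which the network "learns from experience."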

Furthermore, the neural network of a human brain is a massively parallel processor, which makes it different from most computers.  Thus, even though digital computers send signals at the speed of light, whereas neuron signals only travel about 200 miles per hour, the human brain is still faster (on many tasks) due to its massive parallel processing.

Finally, neurons are not purely digital:  although a neuron either fires or doesn’t, it can also transmit continuous signals (anywhere between 0 and 1), not just discrete ones (only 0 or 1).

What’s interesting is that robots are superfast at the kinds of calculations humans find mentally taxing.  But robots are still not good at visual pattern recognition, movement, and common sense.  Robots can see far more detail than humans, but robots have trouble making sense of what they see.  Also, many things in our experience that we as humans know by common sense, robots don’t understand.

There have been massive projects to try to give robots common sense by brute force – by programming in thousands of common sense things.  But so far, these projects haven’t worked.

There are two ways to give a robot the ability to learn:  top-down and bottom-up.  An example of the top-down approach is STAIR (Stanford artificial intelligence robot).  Everything is programmed into STAIR from the beginning.  For STAIR to understand an image, it must compare the image to all the images already programmed into it.

The LAGR (learning applied to ground robots) uses the bottom-up approach.  It learns everything from scratch, by bumping into things.  LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
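The bottom-up idea can be sketched as a toy simulation (my own illustration, not an actual LAGR algorithm): the robot starts with an empty mental map, and every collision teaches it where an obstacle is.

```python
# A toy "bottom-up" learner in the spirit of LAGR: the robot knows nothing
# at first, bumps into obstacles, and records them in a mental map that is
# refined with every pass through the environment.

WORLD = {(1, 1), (2, 3), (3, 0)}   # hypothetical obstacle positions, unknown to the robot

def explore(path, mental_map):
    """Walk a path of grid cells; every bump adds an obstacle to the map."""
    for cell in path:
        if cell in WORLD:
            mental_map.add(cell)    # learned the hard way, by bumping into it
    return mental_map

mental_map = set()
mental_map = explore([(0, 0), (1, 1), (1, 2)], mental_map)   # first pass
mental_map = explore([(2, 3), (3, 3)], mental_map)           # second pass
print(sorted(mental_map))
```

After two passes the map contains every obstacle the robot has actually hit; nothing was programmed in from the start, which is the contrast with STAIR’s top-down approach.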

Robots will become ever more helpful in medicine:

For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia.  Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar.  But the da Vinci robotic system can vastly decrease all these.  The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery.  Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body.  There are 800 hospitals in Europe and North and South America that use this system;  48,000 operations were performed in 2006 alone using this robot.  Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.

In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today.  In fact, in the future, only rarely will the surgeon slice the skin at all.  Noninvasive surgery will become the norm.

Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread.  Micromachines smaller than the period at the end of this sentence will do much of the mechanical work.  (pages 93-94)

But to make robots intelligent, scientists must learn more about how the human brain works.

The human brain has roughly three levels.  The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc.  At the next level, there is the monkey brain, or the limbic system, located at the center of our brain.  Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.

The third level of the human brain is the front and outer part – the cerebral cortex.  This level defines humanity and is responsible for the ability to think logically and rationally.

Scientists still have a way to go in understanding in sufficient detail how the human brain works.

By midcentury, scientists will be able to reverse engineer the brain.  In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer.  Kaku quotes Fred Hapgood from MIT:

Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.

By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku.  However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.

For example, says Kaku, the Human Genome Project is like a dictionary with no definitions.  We can spell out each gene in the human body.  But we still don’t know what each gene does exactly.  Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans.  But scientists still can’t fully translate this map into the worm’s behavior.

Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.

When will machines become conscious?  Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future.  If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they do become conscious.  On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.

When will robots exceed humans?  Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.

What if superintelligent robots can make even smarter copies of themselves?  They might thereby gain the ability to evolve exponentially.  Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.

The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially.  The inventor Ray Kurzweil has become a spokesman for the singularity.  But he thinks humans will merge with this digital superintelligence.  Kaku quotes Kurzweil:

It’s not going to be an invasion of intelligent machines coming over the horizon.  We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.

Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us.  The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).

One problem is that the military is the largest funder of AI research.  On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).

Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.

One option invented by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is for an army of “bugbots” with minimal programming that would learn from experience.  Such an army might turn into a practical way to explore the solar system and beyond.  One by-product of Brooks’ idea is the Mars Rover.

Some researchers including Brooks and Marvin Minsky have lamented the fact that AI scientists have often followed too closely the current dominant AI paradigm.  AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.

Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics.  Thus, they have sought a single, unifying equation underlying all intelligence.  But, says Minsky, there is no such thing:

Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness.  Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task.  He calls this ‘the society of minds’:  that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years.  (page 123)

Brooks predicts that, by 2100, there will be very intelligent robots.  But we will be part robot and part connected with robots.

He sees this progressing in stages.  Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions.  For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf.  These artificial cochlea work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…  

Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain.  One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons.  Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision.  These groups, for the first time in history, have been able to restore a degree of sight to the blind…  (pages 124-125)

Scientists have also successfully created a robotic hand.  One patient, Robin Ekenstam, had his right hand amputated.  Scientists have given him a robotic hand with four motors and forty sensors.  The doctors connected Ekenstam’s nerves to the chips in the artificial hand.  As a result, Ekenstam is able to use the artificial hand as if it were his own hand.  He feels sensations in the artificial fingers when he picks stuff up.  In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.

Furthermore, the brain is extremely plastic because it is a neural network.  So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.

And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities.  Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats.  Similarly, genetic engineering will become possible.  As Brooks commented:

We will no longer find ourselves confined by Darwinian evolution.

Another way people will merge with robots is with surrogates and avatars.  For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.

Robot pioneer Hans Moravec has described one way this could happen:

…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot.  The operation starts when we lie beside a robot without a brain.  A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull.  As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot.  Not only do we have a robotic body, we have also the benefits of a robot:  immortality in superhuman bodies that are perfect in appearance.  (pages 130-131)



FUTURE OF MEDICINE

Kaku quotes Nobel Laureate James Watson:

No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?

Nobel Laureate David Baltimore:

I don’t really think our bodies are going to have any secrets left within this century.  And so, anything that we can manage to think about will probably have a reality.

Kaku mentions biologist Robert Lanza:

Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit.  In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before.  Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah.  There, the fertilized cell was implanted into a female cow.  Ten months later he got the news that his latest creation had just been born.  On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out.  Another day, he could be working on cloning human embryo cells.  He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells.  (page 138)

Austrian physicist and philosopher Erwin Schrödinger, one of the founders of quantum theory, wrote an influential book, What is Life?  He speculated that all life was based on a code of some sort, and that this was encoded on a molecule.

Physicist Francis Crick, inspired by Schrodinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule.  In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix.  When unraveled, a single strand of DNA stretches about 6 feet long.  On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code.  By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life.  (page 140)

Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form.  David Baltimore:

Biology is today an information science.

Kaku writes:

The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule.  Atom for atom, we know how to build the molecules of life from scratch.  And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.

Welcome to bioinformatics:

…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms.  For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA.  In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes.  (page 143)
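The basic bioinformatics idea Kaku describes, comparing patient genomes to pinpoint damaged DNA, can be illustrated with a toy scan.  The sequences below are made up for illustration; real scans involve billions of bases and far subtler statistics.

```python
# Toy bioinformatics scan: compare short "genomes" of patients with a
# disease against a reference sequence, counting mismatches per position.
# A position where most patients differ is a candidate site for the
# damaged DNA.  (Illustrative sequences only.)

from collections import Counter

reference = "ATCGGCTA"
patients = ["ATCGGCTA", "ATCAGCTA", "ATCAGCTA", "ATCAGGTA"]

mismatches = Counter()
for genome in patients:
    for pos, (ref_base, base) in enumerate(zip(reference, genome)):
        if base != ref_base:
            mismatches[pos] += 1

candidate, count = mismatches.most_common(1)[0]
print(f"Most suspect position: {candidate} ({count}/{len(patients)} patients differ)")
```

Here three of the four patients share a mutation at the same position, so that site stands out immediately; scaled up to millions of genomes, the same counting logic is what the powerful computers Kaku mentions are doing.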

You’ll talk to your doctor – likely a software program – on the wall screen.  Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form.  If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.

If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed.  (There are over 91,000 people in the United States waiting for an organ transplant.)

…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells.  The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells.  (page 144)

Eventually cloning will be possible for humans.

The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep.  By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original.  (page 150)

Successes in animal studies will be translated to human studies.  First, diseases caused by a single mutated gene will be cured.  Then, diseases caused by multiple mutated genes will be cured.

At some point, there will be “designer children.”  Kaku quotes Harvard biologist E. O. Wilson:

Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become. 

The “smart mouse” gene was isolated in 1999.  Mice that have it are better able to navigate mazes and remember things.  Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn.  This supports Hebb’s rule:  learning occurs when certain neural pathways are reinforced.

It will take decades to iron out side effects and unwanted consequences of genetic engineering.  For instance, scientists now believe that there is a healthy balance between forgetting and remembering.  It’s important to remember key lessons and specific skills.  But it’s also important not to remember too much.  People need a certain optimism in order to make progress and evolve.

Scientists now know what aging is:  Aging is the accumulation of errors at the genetic and cellular level.  These errors have various causes.  For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku.  Errors can also accumulate as ‘junk’ molecular debris.

The buildup of genetic errors is a by-product of the second law of thermodynamics:  entropy always increases.  However, there’s an important loophole, notes Kaku.  Entropy can be reduced in one place as long as it is increased at least as much somewhere else.  This means that aging is reversible.  Kaku quotes Richard Feynman:

There is nothing in biology yet found that indicates the inevitability of death.  This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.

Kaku continues:

…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding.  His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals.  In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…

…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM.  By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers.  Scientists will be able to scan millions of genomes of two groups of people, the young and the old.  By comparing the two groups, one can identify where aging takes place at the genetic level.  A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated.  (pages 168-169)

Scientists think aging is only 35 percent determined by genes.  Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria.  This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging.  Soon we could live to 150.  By 2100, we could live well beyond that.

If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent.  This is called calorie restriction.  Every organism studied so far exhibits this phenomenon.

…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging.  In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time.  Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long.  (page 170)

Now scientists have shown that caloric restriction also works for primates:  less diabetes, less cancer, less heart disease, and better health and longer life.

In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells.  SIR2 is activated when it detects that the energy reserves of a cell are low.  The SIR2 gene has a counterpart in mice and people called the SIRT genes, which produce proteins called sirtuins.  Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.

Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair.  If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine.  (page 171)

Kaku reports what William Haseltine, biotech pioneer, told him:

The nature of life is not mortality.  It’s immortality.  DNA is an immortal molecule.  That molecule first appeared perhaps 3.5 billion years ago.  That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that.  First to extend our lives two- or three-fold.  And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely.  And I don’t think that will be an unnatural process.  (page 173)

Kaku concludes that extending life span in the future will likely result from a combination of activities:

  • growing new organs as they wear out or become diseased, via tissue engineering and stem cells
  • ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
  • using gene therapy to alter genes that may slow down the aging process
  • maintaining a healthy lifestyle (exercise and a good diet)
  • using nanosensors to detect diseases like cancer years before they become a problem

Kaku quotes Richard Dawkins:

I believe that by 2050, we shall be able to read the language [of life].  We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.

Dawkins believes, writes Kaku, that once the missing genome has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor.  After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.




For the most part, nanotechnology is still a very young science.  But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes.  MEMS are tiny machines so small they can easily fit on the tip of a needle.  They are created using the same etching technology used in the computer business.  Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them.  (pages 207-208)

Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that can detect the sudden deceleration of your car.  This has already saved thousands of lives.

One day nanomachines may be able to replace surgery entirely.  Cutting the skin may become completely obsolete.  Nanomachines will also be able to find and kill cancer cells in many cases.  These nanomachines can be guided by magnets.

DNA fragments can be embedded on a tiny chip using transistor etching technology.  The DNA fragments can bind to specific gene sequences.  Then, using a laser, thousands of genes can be read at one time, rather than one by one.  Prices for these DNA chips continue to plummet due to Moore’s law.

Small electronic chips will be able to do the work that is now done by an entire laboratory.  These chips will be embedded in our bathrooms.  Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks.  In the future, they may cost pennies and take just a few minutes.

In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite.  They won the Nobel Prize for their work.  Graphene is a single sheet of carbon, no more than one atom thick.  And it can conduct electricity.  It’s also the strongest material ever tested.  (Kaku notes that an elephant balanced on a pencil – on graphene – would not tear it.)

Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor:  one atom thick and ten atoms across.  (The smallest transistors currently are about 30 nanometers.  Novoselov’s transistors are 30 times smaller.)

The real challenge now is how to connect molecular transistors.

The most ambitious proposal is to use quantum computers, which actually compute on individual atoms.  Quantum computers are extremely powerful.  The CIA has looked at them for their code-breaking potential.

Quantum computers actually exist.  Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.”  When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
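This "up means 1, down means 0" picture can be sketched in a few lines of Python.  This is only an illustration of the idea, not how real quantum hardware is programmed; the matrices below are the standard NOT (X) and Hadamard gates from textbook quantum computing:

```python
import numpy as np

# Basis states for one qubit: "spin down" read as 0, "spin up" read as 1.
down = np.array([1.0, 0.0])  # |0>
up = np.array([0.0, 1.0])    # |1>

# A pulse that exactly flips the spin acts like a classical NOT gate:
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(X @ up)  # flips the "1" state into the "0" state

# Unlike a classical bit, a qubit can also sit in a superposition of both:
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
superposition = H @ down
print(superposition)  # equal amplitudes on "0" and "1"
```

Decoherence, in this picture, is the environment randomly scrambling those delicate amplitudes before the computation finishes.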

The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations.  (When atoms are “coherent,” they vibrate in phase with one another.)  Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.

Scientists are working on programmable matter the size of grains of sand.  These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object.  In fact, many common consumer products may be replaced by software programs sent over the internet.  If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.

In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything.  This would be the crowning achievement of engineering, says Kaku.  One problem is the sheer number of atoms that would need to be re-arranged.  But this could be solved by self-replicating nanobots.

A version of this “replicator” already exists.  Mother Nature can take the food we eat and create a baby in nine months.  DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku.  Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms.  (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)



Kaku writes that in this century, we will harness the power of the stars.  In the short term, this means solar and hydrogen will replace fossil fuels.  In the long term, it means we’ll tap the power of fusion and even solar energy from outer space.  Also, cars and trains will be able to float using magnetism.  This can drastically reduce our use of energy, since most energy today is used to overcome friction.

Currently, fossil fuels meet about 80 percent of the world’s energy needs.  Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.

Electric vehicles will reduce the use of fossil fuels.  But we also have to transform the way electricity is generated.  Solar power will keep getting cheaper.  But much more clean energy will be required to gradually replace fossil fuels.

Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases.  However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.

Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve.  This increases the odds that terrorists could acquire nuclear weapons.

Within a few decades, global warming will become even more obvious.  The signs are already clear, notes Kaku:

  • The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
  • Greenland’s ice sheet continues to shrink.  (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
  • Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off.  (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
  • For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
  • Temperatures started to be reliably recorded in the late 1700s;  1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded.  Levels of carbon dioxide are rising dramatically.
  • As the earth heats up, tropical diseases are gradually migrating northward.

It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide.  But we must be careful about unintended side effects on ecosystems.

Eventually fusion power may solve most of our energy needs.  Fusion powers the sun and lights up all the stars.

Anyone who can successfully master fusion power will have unleashed unlimited eternal energy.  And the fuel for these fusion plants comes from ordinary seawater.  Pound for pound, fusion power releases 10 million times more power than gasoline.  An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum.  (page 272)

It’s extremely difficult to heat hydrogen gas to tens of millions of degrees.  But scientists will probably master fusion power within the next few decades.  And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.

One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point.  If the resulting shock waves are powerful enough, they can compress and heat the fuel to the point of creating nuclear fusion.  This approach is called inertial confinement fusion.

The other main approach used by scientists to try to create fusion is magnetic confinement fusion.  A huge, hollow doughnut-shaped device made of steel and surrounded by magnetic coils is used to attempt to squeeze hydrogen gas enough to heat it to millions of degrees.

What is most difficult in this approach is squeezing the hydrogen gas uniformly.  Otherwise, it bulges out in complex ways.  Scientists are using supercomputers to try to control this process.  (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion.  So stars form easily.)

Most of the energy we burn is used to overcome friction.  Kaku observes that a layer of ice between major cities would drastically cut the need for energy to overcome friction.

In 1911, scientists discovered that cooling mercury to four degrees (Kelvin) above absolute zero causes it to lose all electrical resistance.  Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy.  The disadvantage is that you have to cool it to near absolute zero using liquid helium, which is very expensive.

But in 1986, scientists learned that ceramics become superconductors at 92 degrees (Kelvin) above absolute zero.  Some ceramic superconductors have been created at 138 degrees (Kelvin) above absolute zero.  This is important because nitrogen liquefies at 77 degrees (Kelvin).  Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.

Remember that most energy is used to overcome friction.  Even for electricity, up to 30 percent can be lost during transmission.  But experimental evidence suggests that a current in a superconducting loop can persist for 100,000 years, or perhaps even billions of years.  Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating friction.

Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.

The reason the magnet floats is simple.  Magnetic lines of force cannot penetrate a superconductor.  This is the Meissner effect.  (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.)  When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic.  This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float.  (page 289)

Room temperature superconductors will allow trains and cars to move without any friction.  This will revolutionize transportation.  Compressed air could get a car going.  Then the car could float almost forever as long as the surface is flat.

Even without room temperature superconductors, some countries have produced magnetic levitation (maglev) trains.  A maglev train does lose energy to air friction.  In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.

Later this century, space solar power may become feasible, since sunlight in space is about 8 times more intense than on the surface of the earth.  A reduced cost of space travel may make it practical to send hundreds of solar satellites into orbit.  One challenge is that these solar satellites would have to sit 22,000 miles above the earth, much farther than satellites in near-earth orbits of 300 miles.  But the main problem is the cost of booster rockets.  (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)
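The 22,000-mile figure is the geostationary altitude: a satellite that hovers over one spot on earth must complete one orbit per sidereal day, and Kepler's third law then fixes its distance.  A quick check using standard physical constants (none of these numbers come from the book):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the earth, kg
T = 86164.0          # one sidereal day, s
R_EARTH = 6.371e6    # mean radius of the earth, m

# Kepler's third law for a circular orbit: r^3 = G * M * T^2 / (4 * pi^2)
r = (G * M_EARTH * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_miles = (r - R_EARTH) / 1609.34

print(round(altitude_miles))  # close to 22,000 miles, matching the figure above
```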



Kaku quotes Carl Sagan:

We have lingered long enough on the shores of the cosmic ocean.  We are ready at last to set sail for the stars.

Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:

So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition.  This, in turn, will generate more interest in one day sending a probe to these distant planets.  There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms.  (page 297)

Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars.  But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.

For instance, the ocean under the ice of the moon Europa is estimated to be twice the total volume of the earth’s oceans.  And Europa’s interior is continually heated by tidal forces from Jupiter’s gravity.

It had been thought that life required sunlight.  But in 1977, life was found on earth, deep under water in the Galapagos Rift.  Energy from undersea volcano vents provided enough energy for life.  Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcano vents.  Some of the most primitive forms of DNA have been found on the bottom of the ocean.

In the future, new types of space satellite may be able to detect not only radiation from colliding black holes, but also even new information about the Big Bang – a singularity involving extreme density and temperature.  Kaku:

At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty.  In one scenario, our universe is a huge bubble of some sort that is continually expanding.  We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper).  But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath.  Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation).  Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion.  (page 301)

Space travel is very expensive.  It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon.  It costs much more to send a person to Mars.

Robotic missions are far cheaper than manned missions.  And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.

Kaku next describes a mission to Mars:

Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission.  But Mars represents a quantum leap in difficulty over the moon:  it takes only three days to reach the moon, but six months to a year to reach Mars.

In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like.  Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.

Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station.  To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars.  With no air, soil, or water, everything must be brought from earth.  It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars.  The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth.  Any rip in a space suit would create rapid depressurization and death.  (page 312)

Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth.  The temperature never goes above the melting point of ice.  And the dust storms are ferocious and often engulf the entire planet.

Eventually astronauts may be able to terraform Mars to make it more hospitable for life.  The simplest approach would be to inject methane gas into the atmosphere, which might be able to trap sunlight thereby raising the temperature of Mars above the melting point of ice.  (Methane gas is an even more potent greenhouse gas than carbon dioxide.)  Once the temperature rises, the underground permafrost may begin to thaw.  Riverbeds would fill with water, and lakes and oceans might form again.  This would release more carbon dioxide, leading to a positive feedback loop.

Another possible way to terraform Mars would be to deflect a comet towards the planet.  Comets are made mostly of water ice.  A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.

The polar regions of Mars are made of frozen carbon dioxide and ice.  It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps.  This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.

Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars.  They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide.  They could also be genetically modified to maximize their growth on Mars.  These algae pools could accelerate terraforming in several ways.  First, they could convert carbon dioxide into oxygen.  Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun.  Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet.  Fourth, the algae can be harvested for food.  Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen.  (page 315)

Scientists have also considered the possibility of building solar satellites around Mars, causing the temperature to rise and the permafrost to begin melting, setting off a positive feedback loop.

2070 to 2100:  A Space Elevator and Interstellar Travel

Near the end of the century, scientists may finally be able to construct a space elevator.  With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky.  Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.

One challenge is to create a carbon nanotube cable that is 50,000 miles long.  Another challenge is that space satellites in orbit travel at 18,000 miles per hour.  If a satellite collided with the space elevator, it would be catastrophic.  So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.

Another challenge is turbulent weather on earth.  The space elevator must be flexible enough, perhaps anchored to an aircraft carrier or oil platform.  Moreover, there must be an escape pod in case the cable breaks.

Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt.  The next goal would be travelling to a star.  A conventional chemical rocket would take 70,000 years to reach the nearest star.  But there are several proposals for an interstellar craft:

  • solar sail
  • nuclear rocket
  • ramjet fusion
  • nanoships
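The 70,000-year figure for a conventional rocket is easy to sanity-check.  Assuming a probe coasting at roughly Voyager-class speed (~17 km/s, an assumption; the book does not specify a speed) toward the Alpha Centauri system about 4.24 light-years away:

```python
# Distance to the nearest star divided by a chemical-rocket coasting speed.
LIGHT_YEAR_KM = 9.461e12
SECONDS_PER_YEAR = 3600 * 24 * 365.25

distance_km = 4.24 * LIGHT_YEAR_KM  # Proxima Centauri
speed_km_s = 17.0                   # roughly Voyager 1's speed

years = distance_km / speed_km_s / SECONDS_PER_YEAR
print(round(years))  # on the order of 75,000 years
```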

Although light has no mass, it has momentum and so can exert pressure.  The pressure is super tiny.  But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft.  The solar sail would likely be miles wide.  The craft would have to circle the sun for a few years, gaining more and more momentum.  Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
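Some rough numbers behind "super tiny":  for a perfectly reflective sail, radiation pressure is twice the light's intensity divided by the speed of light.  Using the solar intensity near earth's orbit and a hypothetical square sail 5 km on a side (both assumptions for illustration):

```python
# Radiation pressure on a perfectly reflective sail: P = 2 * I / c.
c = 3.0e8            # speed of light, m/s
intensity = 1361.0   # solar intensity near earth's orbit, W/m^2

pressure = 2 * intensity / c   # N/m^2; reflection doubles the momentum transfer
area = 5_000 ** 2              # a sail "miles wide": 5 km x 5 km, in m^2
force = pressure * area        # total thrust, N

print(round(pressure * 1e6, 1))  # about 9.1 micronewtons per square meter
print(round(force))              # about 227 newtons on the whole sail
```

A tiny but steady push, which is why the craft must spiral around the sun for years to build up speed.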

Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could generate enough power.  One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters.  It would have been powered by 1,000 hydrogen bombs.  (This also would have been a good way to get rid of atomic bombs meant only for warfare.)  Unfortunately, the Nuclear Test Ban Treaty in 1963 meant the scientists couldn’t test Orion.  So the project was set aside.

A ramjet engine scoops in air in the front, mixes it with fuel, which then ignites and creates thrust.  In 1960, Robert Bussard had the idea of scooping not air but hydrogen gas, which is everywhere in outer space.  The hydrogen gas would be squeezed and heated by electric and magnetic fields until the hydrogen fused into helium, releasing enormous amounts of energy via nuclear fusion.  With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.

Bussard calculated that a 1,000-ton ramjet fusion engine could reach 77 percent of the speed of light after one year.  This would allow it to reach the Andromeda galaxy, which is 2,000,000 light-years away, in just 23 years as measured by the astronauts on the starship.  (We know from Einstein’s theory of relativity that time slows down significantly for those traveling at such a high percentage of the speed of light.  But meanwhile, on earth, millions of years will have passed.)
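The shipboard-versus-earth arithmetic can be sketched with the standard constant-proper-acceleration formulas from special relativity.  This is a back-of-the-envelope under an assumed profile of constant 1 g acceleration the whole way (the book's 23-year figure presumably assumes a different profile), but the qualitative picture is the same:  decades pass aboard the ship while millions of years pass on earth.

```python
import math

# Constant proper acceleration a over coordinate distance d, in units where
# c = 1, distances are in light-years, and times in years (1 g ~ 1.03 ly/yr^2):
#   shipboard time:  tau = (1/a) * acosh(1 + a*d)
#   earth time:      t   = sqrt(d^2 + 2*d/a)
a = 1.03     # 1 g, in light-years per year squared
d = 2.0e6    # distance to Andromeda used in the text, in light-years

tau = (1 / a) * math.acosh(1 + a * d)
t_earth = math.sqrt(d ** 2 + 2 * d / a)

print(round(tau, 1))            # a couple of decades of shipboard time
print(round(t_earth / 1e6, 2))  # about 2.0 million years on earth
```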

Note that there are still engineering questions about the ramjet fusion engine.  For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space.  Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.

Another possibility is antimatter rocket ships.  If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel.  Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.
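Estimates like Smith's trace back to E = mc².  Annihilation also consumes an equal mass of ordinary matter, so the converted mass is doubled (the mission figures themselves are Smith's estimates, not derived here):

```python
# Energy released when a given mass of antimatter annihilates with matter.
C = 3.0e8  # speed of light, m/s

def annihilation_energy_joules(antimatter_kg: float) -> float:
    # Factor of 2: an equal mass of ordinary matter is converted as well.
    return 2 * antimatter_kg * C ** 2

print(f"{annihilation_energy_joules(4e-6):.1e}")  # 4 mg  -> 7.2e+11 J
print(f"{annihilation_energy_joules(0.1):.1e}")   # 100 g -> 1.8e+16 J
```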

Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars.  These nanoships might become cheap enough to produce and to fuel.  They might even be self-replicating.

Millions of nanoships could gather intelligence like a “swarm” does.  For instance, a single ant is super simple.  But a colony of ants can create a complex ant hill.  A similar concept is the “smart dust” considered by the Pentagon.  Billions of particles, each a sensor, could be used to gather a great deal of information.

Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light.  Moreover, scientists may be able to create one or a few self-replicating nanoprobes.  Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.



Kaku writes:

All the technological revolutions described here are leading to a single point:  the creation of a planetary civilization.  This transition is perhaps the greatest in human history.  In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos.  Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate.  (pages 378-379)

In 1964, Russian astrophysicist Nicolai Kardashev was interested in probing outer space for signals sent from advanced civilizations.  So he proposed three types of civilization:

  • A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
  • A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
  • A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).

Kaku explains:

The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations.  Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies.  (page 381)

Carl Sagan has calculated that we are a Type 0.7 civilization, not quite Type I yet.  There are signs, says Kaku, that humanity will reach Type I in a matter of decades.

  • The internet allows a person to connect with virtually anyone else on the planet effortlessly.
  • Many families around the world have middle-class ambitions:  a suburban house and two cars.
  • The criterion for being a superpower is not weapons, but economic strength.
  • Entertainers increasingly consider the global appeal of their products.
  • People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
  • The news is becoming planetary.
  • Soccer and the Olympics are emerging to dominate planetary sports.
  • The environment is debated on a planetary scale.  People realize they must work together to control global warming and pollution.
  • Tourism is one of the fastest-growing industries on the planet.
  • War has rarely occurred between two democracies.  A vibrant press, oppositional parties, and a solid middle class tend to ensure that.
  • Diseases will be controlled on a planetary basis.
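Sagan's Type 0.7 figure comes from interpolating between Kardashev's types.  One common version of his formula is K = (log₁₀ P − 6) / 10, with P the civilization's power consumption in watts (an assumption for illustration; exact thresholds vary by author, as the wattages listed above suggest):

```python
import math

def kardashev_type(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale."""
    return (math.log10(power_watts) - 6) / 10

# Humanity consumes power on the order of 10^13 watts (roughly the
# figure Sagan used), which lands near Type 0.7:
print(round(kardashev_type(2e13), 1))  # → 0.7
print(kardashev_type(1e16))            # → 1.0, Type I on this version of the scale
```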

A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun goes supernova.  Or we may be able to keep the sun from exploding.  (Or we might be able to change the orbit of our planet.)  Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere.  Also, we probably will have colonized not just the entire solar system, but nearby stars.

By the time we become a Type III civilization, we will have explored most of the galaxy.  We may have done this using self-replicating robot probes.  Or we may have mastered Planck energy (10^19 billion electron volts).  At this energy, space-time itself becomes unstable.  The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time.  By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time.  As a result, a Type III civilization might be able to colonize the entire galaxy.

It’s possible that a more advanced civilization has already visited or detected us.  For instance, they may have used tiny self-replicating probes that we haven’t noticed yet.  It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type I.

Kaku writes that many people are not aware of the historic transition humanity is now making.  But this could change if we discover evidence of intelligent life somewhere in outer space.  Then we would consider our level of technological evolution relative to theirs.

Consider the SETI Institute.  This is from their website:

SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.

Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets.  Whether evolution will give rise to intelligent, technological civilizations is open to speculation.  However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.

Finding evidence of other technological civilizations however, requires significant effort.  Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.

Work at the Center is divided into two areas:  Research and Development (R&D) and Projects.  R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects.  The algorithms and technology developed in the lab are first field-tested and then implemented during observing.  The observing results are used to guide the development of new hardware, software, and observing facilities.  The improved SETI observing Projects in turn provide new ideas for Research and Development.  This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.

Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is.  A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible.  If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:



Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Bogle on Index Funds

(Image:  Zen Buddha Silence by Marilyn Barbone.)

August 6, 2017

Ultra-low-cost index funds tend to be exceptionally good long-term investments.  It’s not just that, on an annual basis, index funds typically do better than 60-80% of all funds.  It’s that index funds very consistently do better.  Consistently outperforming 60-80% of all funds annually virtually guarantees that index funds will beat at least 90-95% of all funds over the course of several decades or more.  It’s just a matter of simple arithmetic, as Bogle has noted.  Moreover, the past several decades illustrate this result (see Brute Facts below).
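Bogle’s “simple arithmetic” can be illustrated with a toy Monte Carlo.  The model below is hypothetical (the 2% cost drag, the 4% noise, and the independence of years are all my assumptions, not Bogle’s): each fund’s annual return is the index return minus a fee drag plus random noise.

```python
import random

def beat_rate(years, cost=0.02, sigma=0.04, trials=20_000):
    """Fraction of simulated funds that beat the index over `years`.

    Hypothetical model: each fund's annual excess return over the index
    is Gaussian with mean -cost (the fee drag) and volatility sigma.
    """
    wins = 0
    for _ in range(trials):
        growth = 1.0
        for _ in range(years):
            growth *= 1 + random.gauss(-cost, sigma)
        wins += growth > 1.0
    return wins / trials

random.seed(42)
one_year = beat_rate(1)   # in any single year, a sizable minority of funds win
thirty = beat_rate(30)    # over decades, the cost drag compounds and almost none do
```

Even in this crude sketch, an index that beats roughly 70% of funds each year ends up beating nearly all of them over thirty years, which is the pattern Bogle describes.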

If you’re a long-term investor, then by investing in index funds, you are likely to beat at least 90-95% of all investors, net of costs, over time.  Investing in index funds is the best long-term investment for the vast majority of investors, as Warren Buffett—one of the greatest investors ever—has often noted.  See:

Jack Bogle’s Don’t Count on It! (Wiley, 2011) is a collection of his writings on a variety of topics, including capitalism, entrepreneurship, indexing, idealism, and heroes.  It’s a long book (586 pages), but well worth reading.  Below is my brief summary of Chapter 18 (pages 369-392).



The main reason that index funds generally beat at least 90-95% of all investors over time is ultra-low costs.  Bogle:

…we don’t need to accept the EMH [Efficient Market Hypothesis] to be index believers.  For there is a second reason for the triumph of indexing, and it is not only more compelling but unarguably universal.  I call it the CMH—the Cost Matters Hypothesis—and not only is it all that is needed to explain why indexing must and does work, but it in fact enables us to quantify with some precision how well it works.  Whether or not the markets are efficient, the explanatory power of the CMH holds.  (page 371)

Bogle further explains:

…The mathematical expectation of the speculator is not zero;  it is a loss equal to the amount of transaction costs incurred.

So, too, the mathematical expectation of the long-term investor also is a shortfall to whatever returns our financial markets are generous enough to provide.  Indeed the shortfall can be described as precisely equal to the costs of our system of financial intermediation—the sum total of all those advisory fees, marketing expenditures, sales loads, brokerage commissions, transaction costs, custody and legal fees, and securities processing expenses.  Intermediation costs in the U.S. equity market may well total as much as $250 billion a year or more.  If today’s $13 trillion stock market were to provide, say, a 7 percent annual return ($910 billion), costs would consume more than a quarter of it, leaving less than three-quarters of the return for the investors—those who put up 100 percent of the capital.  We don’t need the EMH to explain the dire odds that investors face in their quest to beat the stock market.  We need only the CMH.  Whether markets are efficient or inefficient, investors as a group must fall short of the market return by the amount of the costs they incur.  (page 372)
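Bogle’s “more than a quarter” figure follows directly from the numbers in the passage:

```python
market_cap = 13e12                    # Bogle's $13 trillion U.S. stock market
gross_return = 0.07 * market_cap      # a 7 percent annual return = $910 billion
intermediation_costs = 250e9          # Bogle's estimate of annual costs
share_consumed = intermediation_costs / gross_return
# costs consume about 27.5% of the gross return -- "more than a quarter"
```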



Bogle recounts:

Our introduction of First Index Investment Trust was greeted by the investment community with derision.  It was dubbed ‘Bogle’s Folly,’ and described as un-American, inspiring a widely circulated poster showing Uncle Sam calling on the world to ‘Help Stamp Out Index Funds’… Fidelity Chairman Edward C. Johnson led the skeptics, assuring the world that Fidelity had no intention of following Vanguard’s lead:  ‘I can’t believe that the great mass of investors are going to be satisfied with just receiving average returns.  The name of the game is to be the best.’  (Fidelity now runs some $38 billion in indexed assets.)  (pages 375-376)

Of course, all investors would like to get the best returns if possible.  Yet, by definition, investors on the whole will get average results.  But that is before costs.

After costs, the average investor will get less than the market returns.  And the amount of the shortfall will precisely equal the costs.



Bogle examines the long-term performance of mutual funds:

…In 1970, there were 355 equity mutual funds, and we have now had more than three decades over which to measure their success.  We’re first confronted with an astonishing—and important—revelation:  Only 147 funds survived the period.  Fully 208 of those funds vanished from the scene, an astonishing 60 percent failure rate…

Now let’s look at the records of the survivors—doubtless the superior funds of the initial group.  Yet fully 104 of them fell short of the 11.3 percent average annual return achieved by the unmanaged S&P 500 Index.  Just 43 funds exceeded the index return.  If, reasonably enough, we describe a return that comes within plus or minus a single percentage point of the market as statistical noise, 52 of the surviving funds provided a return roughly equivalent to that of the market.  A total of 72 funds, then, were clear losers (i.e., by more than a percentage point), with only 23 clear winners above that threshold.

If we widen the ‘noise’ threshold to plus or minus two percentage points, we find that 43 of the 50 funds outside that range were inferior and only 7 superior—a tiny 2 percent of the 355 funds that began the period…

But I believe the evidence actually overrates the long-term achievement of the seven putatively successful funds.  Is the obvious credibility of those superior records in fact credible?  I’m not so sure.  Those winning funds have much in common.  First, each was relatively unknown (and relatively unowned by investors) at the start of the period.  Their assets were tiny, with the smallest at $1.9 million, the median at $9.8 million, and the largest at $59 million.  Second, their best returns were achieved during their first decade, and resulted in enormous asset growth, typically from those little widows’ mites at the start of the period to $5 billion or so at the peak, before performance started to deteriorate.  (One fund actually peaked at $105 billion!)  Third, despite their glowing early records, most have lagged the market fairly consistently during the past decade, sometimes by a substantial amount… The pattern for five of the seven funds is remarkably consistent:  a peak in relative return in the early 1990s, followed by annual returns of the next decade that lagged the market’s return by about three percentage points per year—roughly, S&P 500 +12 percent, mutual fund +9 percent.

In the field of fund management it seems apparent that ‘nothing fails like success’… For the vicious circle of investing—good past performance draws large dollars of inflow, and having large dollars to manage crimps the very ingredients that were largely responsible for the good performance—is almost inevitable in any winning field.  So even if an investor was smart enough or lucky enough to have selected one of the few winning funds at the outset, selecting such funds by hindsight—after their early success—was also largely a loser’s game.  Whatever the case, the brute evidence of the past three decades makes a powerful case against the quest to find the needle in the haystack.  Investors would be better served by simply owning, through an index fund, the market haystack itself.  (pages 378-380)

Bogle continues:

In the field of investment management, relying on past performance simply has not worked.  The past has not been prologue, for there is little persistence in fund performance.  A recent study of equity mutual fund risk-adjusted returns during 1983-2003 reflected a randomness in performance that is virtually perfect.  A comparison of fund returns in the first half to the second half of the first decade, in the first half to the second half of the second decade, and in the first full decade to the second full decade makes the point clear.  Averaging the three periods shows that 25 percent of the top-quartile funds in the first period found themselves in the top quartile in the second—precisely what chance would dictate.  Almost the same number of top-quartile funds—23 percent—tumbled to the bottom quartile, again a close-to-random outcome.  In the bottom quartile, 28 percent of the funds mired there during the first half remained there in the second, while slightly more—29 percent—had actually jumped to the top quartile.

…Simply picking the top-performing funds of the past fails to be a winning strategy.  What is more, even when funds succeed in outpacing their peers, they still have a way to go to match the return of the stock market index itself.  (pages 381-382)



Bogle writes:

…What do the proponents of active management point to?  Themselves!  ‘We can do it better.’  ‘We have done it better.’  ‘Just buy the (inevitably superior performing) funds that we advertise.’  It turns out, then, that the big idea that defines active management is that there is no big idea.  Its proponents offer only a few good anecdotes of the past and promises for the future.

Also, it turns out that there is in fact one big idea that can be generalized without contradiction.  Cost is the single statistical construct that is highly correlated with future investment success.  The higher the cost, the lower the return.  Equity fund expense ratios have a negative correlation coefficient of -0.61 with equity fund returns.  In the fund business, you get what you don’t pay for.  You get what you don’t pay for!

If we simply aggregate funds by quartile, this correlation jumps right out at us.  During the decade ended November 30, 2003, the lowest-cost quartile of funds provided an average annual return of 10.7 percent;  the second-lowest, 9.8 percent;  the second-highest, 9.5 percent;  and the highest quartile, 7.7 percent—the difference of fully three percentage points per year between the high and low quartiles, equal to a 30 percent increase in annual return!  The same pattern holds irrespective of manager style or market capitalization.  But of course, with index funds carrying by far the lowest costs in the industry, there are few, if any, promotions by active managers of the undeniable relationship between cost and value.  (pages 385-386)



Bogle explains why index funds have succeeded in beating nearly all other funds over the course of several decades or more:

The reasons for that success are the essence of simplicity:  (1) the broadest possible diversification, often subsuming the entire U.S. stock market;  (2) a focus on the long-term, with minimal, indeed nominal, portfolio turnover (say, 3% to 5% annually);  and (3) rock-bottom cost, with neither advisory fees nor sales loads, and minimal operating expenses….

…While fund costs essentially represent the difference between success and failure for investors who seek to accumulate assets, they have gone up as index fees have come down.  The initial expense ratio of our 500 Index Fund was 0.43 percent, compared to 1.40 percent for the average equity fund.  Today, it is 0.18 percent or less, while the ratio for the average equity fund has risen to 1.58 percent.  Add in turnover costs and sales commissions and the all-in cost of the average fund is at least 2.5 percent, suggesting a future annual index fund advantage of at least 2.3 percent per year.  (page 387)



Bogle concludes:

Now think of this in personal terms.  What difference would an index fund make in your own retirement plan over, say, 40 years?  Well, let’s postulate a future long-term annual return of 8 percent on stocks.  If we assume that mutual fund costs continue at their present level of at least 2.5 percent a year, an average mutual fund might return 5.5 percent.  Extending this tax-deferred compounding out in time on your investment of $3,000 each year over 40 years, an investment in the stock market itself would grow to $840,000, with the market index fund not far behind.  Your actively managed mutual fund would produce $430,000—only a little more than one-half as much.

Looked at from a different perspective, your retirement plan has earned a value of $840,000 before costs, and donated $410,000 of that total to the mutual fund industry.  You have kept the remainder – $430,000.  The financial system has consumed 48 percent of the return, and you have achieved but 52 percent of your earning potential.  Yet it was you who provided 100 percent of the initial capital;  the industry provided none.  Confronted by the issue in this way, would an intelligent investor consider this split to represent a fair shake?  Merely to ask the question is to answer it:  ‘No.’  (pages 391-392)
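Bogle’s $840,000 and $430,000 figures reproduce almost exactly if we assume the $3,000 deposits are made at the start of each year (an annuity due); that timing assumption is mine, since the passage doesn’t specify it.

```python
def fv_annuity_due(payment, rate, years):
    # Future value of equal deposits made at the start of each year
    return payment * ((1 + rate) ** years - 1) / rate * (1 + rate)

market = fv_annuity_due(3000, 0.08, 40)   # ~$839,000 at 8% gross
fund = fv_annuity_due(3000, 0.055, 40)    # ~$432,000 at 8% minus ~2.5% costs
```

The actively managed fund ends up with only a little more than half of what the market itself would have delivered, exactly as Bogle says.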





A Man for All Markets

(Image:  Zen Buddha Silence by Marilyn Barbone.)

July 30, 2017

A Man for All Markets: From Las Vegas to Wall Street, How I Beat the Dealer and the Market (Random House, 2017) is the autobiography of Edward O. Thorp, a remarkable person.  Here’s the beginning:

Join me in my odyssey through the worlds of science, gambling, and the securities markets.  You will see how I overcame risks and reaped rewards in Las Vegas, Wall Street, and life.  On the way, you will meet interesting people from blackjack card counters to investment experts, from movie stars to Nobel Prize winners.  And you’ll learn about options and other derivatives, hedge funds, and why a simple investment approach beats most investors in the long run, including experts.

The simple approach to which Thorp refers is investing in ultra-low-cost index funds.  Thorp’s view here is similar to Warren Buffett’s:



Even as a young child, Thorp loved learning.  And he especially loved testing ideas by doing experiments:

A trait that showed up about this time was my tendency not to accept anything I was told until I had checked it out for myself.

From the beginning, I loved learning through experimentation and exploration how my world worked.

Thorp also demonstrated awesome powers of concentration:

When I was reading or just thinking, my concentration was so complete that I lost all awareness of my surroundings.

Thorp was influenced by a few great teachers, including Jack Chasson:

Jack was twenty-seven then, with wavy brown hair and the classic good looks of a Greek god.  He had a ready, warm smile and a way of saying something that boosted the self-esteem of everyone he met… my first great teacher…



Thorp became fascinated by radio and electronics.  The ability to hear voices from the air amazed him:

The mechanical world of wheels, pulleys, pendulums, and gears was ordinary.  I could see, touch, and watch it in action.  But this new world was one of invisible waves that traveled through space.  You had to figure out through experiments that it was actually there and then use logic to grasp how it worked.

Eventually this led Thorp to think things through for himself and also to design experiments:

I was learning to work things out for myself, not limited to prompting from teachers, parents, or the school curriculum.  I relished the power of pure thought, combined with the logic and predictability from science.  I loved visualizing an idea and then making it happen.

Learning, thinking, and experimenting were all great fun for Thorp, leading him to contemplate becoming a scientist at a university:

An academic life was becoming my dream.  I liked all the science experiments I was doing and the knowledge they led to.  If I could have a career continuing this kind of playing, I would be very happy.  And the way to have that kind of life was by joining the academic world where they had the laboratories, the kinds of experiments and projects I enjoyed, and maybe the chance to work with other people like me.

In the summer of 1948, Thorp read through a list of 60 great novels, mostly American literature but also including some foreign authors like Dostoyevski and Stendhal.  Thorp’s teacher Jack Chasson had given him the list and then lent him the books from his personal library.



Thorp was the number one student in his chemistry class, but lost that position when he was cheated.  When the mistake was not corrected, Thorp changed his major to physics:

This rash decision, which led me to change my school and my major subject, would change my whole path in life.  In hindsight, it turned out for the best, as my interests and my future were in physics and mathematics.

In graduate school, after transferring from Berkeley to UCLA, Thorp completed all the course work for the PhD and was halfway through his thesis on the structure of atomic nuclei.  All he had to do was finish the thesis work and pass a final oral exam.  But he would have to learn much more mathematics in order to finish the complex quantum mechanical calculations.  Thorp realized that he could earn a PhD in mathematics much sooner than he would likely be able to earn a PhD in physics.  So he got the PhD in mathematics.

While in graduate school, Thorp became re-acquainted with Vivian Sinetar.  Thorp says he was lucky she was still single, despite family pressure to marry.  Vivian, whose parents were Hungarian Jewish immigrants, would be the first in her family to marry outside the Jewish faith.  Fortunately, Vivian’s parents liked Thorp even though he was an academic rather than a doctor or a lawyer.



Ed Thorp and his wife Vivian spent one Christmas vacation in Las Vegas because the city had turned itself into a bargain vacation spot (to attract gamblers).  The city was different at that time:

Back then the long, straight, uncrowded highway had a dozen or so one-story hotel-casino complexes scattered on either side with hundreds of yards of sand and tumbleweeds separating them.

Just before this trip to Vegas, Thorp had learned from a colleague what is now called basic strategy for blackjack.  This strategy gave the player the smallest statistical disadvantage – 0.62 percent – of any casino game.  Thorp thought he would have fun by risking a few dollars trying out basic strategy.

Before this trip, Thorp had already realized that roulette could be beaten.  Why not blackjack?

The belief that casinos must come out ahead in the long run was supported by conventional wisdom, which argued that if blackjack could be beaten, the casinos would have to either change the rules or drop the game.  Neither had happened.  But, confident from my experiments that I could predict roulette, I wasn’t willing to accept these claims about blackjack.  I decided to check for myself if the player could systematically win.



Thorp explains:

It wasn’t the money that drew me to blackjack.  Though we could certainly use extra dollars, Vivian and I expected to lead the usual low-budget academic life.  What intrigued me was the possibility that merely by sitting in a room and thinking, I could figure out how to win.

Back from vacation, Thorp went to the section of the UCLA library that housed the mathematical and statistical research journals.

I started with the fact that the strategy I had used in the casino assumed that every card had the same chance of being dealt as any other during play.  This cut the casino’s edge to just 0.62 percent, the best odds of any game being offered.  But I realized that the odds as the game progressed actually depended on which cards were still left in the deck and that the edge would shift as play continued, sometimes favoring the casino and sometimes the player.  The player who kept track could vary his bets accordingly. 

The player would keep his bets small when the casino had the advantage, which was most of the time.  But the player would bet much more when the odds were in his favor.  Over a large number of hands, the casino would win most of the small bets, but the player would win most of the big bets.  As long as the deal was fair—otherwise the player should learn to quit right away—Thorp’s strategy would be profitable over time.

Thorp began to do calculations to see how the player’s advantage changed based on which cards had already been played.  Thorp figured out that what mattered was the proportion of each type of card left as a percentage of the total number of cards left.

When Thorp started teaching mathematics at MIT, he had access to an IBM 704 computer, which he used to test his blackjack approximations.  Next he used the computer to figure out how the odds changed when all four of a specific card were missing from the remaining deck.  The math also showed that if removing a specific group of cards shifted the odds in one direction, adding an equal number of the same cards would move the odds the other way by the same amount.
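A minimal sketch of the bookkeeping behind Thorp’s Ten-Count: a full single deck holds 36 non-ten cards and 16 ten-valued cards (10, J, Q, K), and the player tracks the ratio of the two as cards are seen.  The specific deck states below are illustrative, not Thorp’s actual bet schedule.

```python
def ten_count_ratio(non_tens_left, tens_left):
    # Thorp's Ten-Count tracks the ratio of non-tens to tens remaining.
    # A fresh single deck has 36 non-tens and 16 tens: ratio 36/16 = 2.25.
    return non_tens_left / tens_left

neutral = ten_count_ratio(36, 16)  # 2.25 -- fresh deck, no information yet
rich = ten_count_ratio(20, 12)     # below 2.25 -- tens-rich remainder favors the player
```

When the ratio falls below 2.25 the remaining deck is rich in tens, which favors the player, and the bet is raised accordingly.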

Eventually Thorp was able to calculate the player advantage based on the specific cards that had been played.  He decided to publish his results in Proceedings of the National Academy of Sciences.  But he needed a member of the academy to approve and forward his work.  The only mathematics member of the academy at MIT was Claude Shannon.  Shannon was famous for the invention of information theory, the foundation of modern computing.

To Thorp’s surprise, Shannon was fascinated by Thorp’s ideas.  A few minutes became an hour and a half of animated dialogue.  Shannon said Thorp had likely made a theoretical breakthrough.  But he suggested the paper be titled “A Favorable Strategy for Twenty-One” instead of “A Winning Strategy for Blackjack.”

Then Shannon asked Thorp if he was working on anything else involving games of chance.  Thorp told him about his idea that roulette was predictable.  This led to several more hours of excited conversation.  Shannon and Thorp decided to work together to build a small, wearable computer that could be used to beat roulette.



Thorp had decided to present his blackjack system at the annual meeting of the American Mathematical Society in Washington, DC.  Thorp made this decision because previous mathematicians (centuries earlier in some cases) seemed to have proven that no casino game could be beaten.  Dick Stewart of The Boston Globe had heard about Thorp’s upcoming talk.  Stewart called Thorp to ask about it.  The newspaper also sent a photographer to take Thorp’s picture.  The next morning Stewart’s article and Thorp’s picture were on the front page.

When the day of the meeting arrived, instead of the usual scholarly audience of forty or fifty, there were hundreds of curious people, including many with sunglasses, pinkie rings, or cigars.  Thorp writes:

In the abstract, life is a mixture of chance and choice.  Chance can be thought of as the cards you are dealt in life.  Choice is how you play them.  I chose to investigate blackjack.  As a result, chance offered me a new set of unexpected opportunities.

Thorp was deluged by offers to back a casino test, ranging from a few thousand dollars to $100,000.  Many were curious about whether Thorp’s system would really work.  Thorp felt he owed his readers proof.

The most promising offer came from two New York multimillionaires.  Thorp called them Mr. X and Mr. Y.  Initially, Thorp was concerned about the dangers of a bankroll provided by strangers.  But Mr. X kept calling, so Thorp finally decided to meet him.

Emmanuel “Manny” Kimmel (Mr. X) arrived at Thorp’s residence in a Cadillac with two good-looking young blondes.  Kimmel introduced the two women as his “nieces.”  Kimmel dealt blackjack to Thorp for a couple of hours while asking him about his research.  Then they agreed to plan a trip to Nevada.  When Manny was leaving, he grabbed several pearl necklaces from his pocket and offered a strand to Vivian.  The pearls stayed in Thorp’s family and are now worn by his daughter.

Kimmel and his friend (Mr. Y) gave Thorp a bankroll of $100,000.  (This is the equivalent of $800,000 in 2017 dollars.)  But Thorp insisted on starting with only $10,000 in order to prove first that his system worked.

This plan, of betting only at a level at which I was emotionally comfortable and not advancing until I was ready, enabled me to play my system with a calm and disciplined accuracy.  This lesson from the blackjack tables would prove invaluable throughout my investment lifetime as the stakes grew ever larger.

Thorp’s system worked.  But the blackjack player had to understand randomness and odds over a very long series of bets.  The casino would win most of the small bets, and there would be times when the player was unlucky on bigger bets despite favorable odds.  Over time, though, Thorp’s system won out.

…the Ten-Count System had shown moderately heavy losses mixed with ‘lucky’ streaks of the most dazzling brilliance.  I learned later that this was a characteristic of a random series of favorable bets.  And I would see it again and again in real life in both the gambling and the investment worlds.

Note:  Thorp’s system worked as long as the deal was fair most of the time.  But the player had to learn to spot signs of cheating.  The player also had to quit games where losses were happening fast.  (Fast losses usually meant cheating.)

Cheating was so relentless during those days in Las Vegas that I spent as much time learning about the many ways it was being done as I did playing.  Everywhere we went, we reached a point where we were cheated, barred from play, or the dealer reshuffled the cards after every hand.



Thorp and Shannon created a wearable computer that would allow the player to win at roulette.

Thorp was now in a position to win a good deal of money—compared to his salary as a mathematics professor—by playing blackjack and roulette.  But introspection revealed to him that he would enjoy life more as an academic than as a gambler:

I was at a point in life where I could choose between two very different futures.  I could roam the world as a professional gambler winning millions per year.  Switching between blackjack and roulette, I could spend some of the winnings as perfect camouflage by also betting on other games offering a small casino edge, like craps or baccarat.

My other choice was to continue my academic life.  The path I would take was determined by my character, namely, What makes me tick?  As the Greek philosopher Heraclitus said, ‘Character is destiny.’



Thorp writes:

Gambling is investing simplified.  The striking similarities between the two suggested to me that, just as some gambling games could be beaten, it might also be possible to do better than the market averages.  Both can be analyzed using mathematics, statistics, and computers.  Each requires money management, choosing the proper balance between risk and return.  Betting too much, even though each individual bet is in your favor, can be ruinous… On the other hand, playing safe and betting too little means you leave money on the table.  The psychological makeup to succeed at investing also has similarities to that for gambling.  Great investors are often good at both.
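On “choosing the proper balance between risk and return”: Thorp is famous for sizing favorable bets with the Kelly criterion.  The formula is not quoted in this excerpt, so the sketch below is supplementary, with illustrative numbers.

```python
def kelly_fraction(p, b=1.0):
    # Kelly criterion: optimal fraction of bankroll to bet, given win
    # probability p and payoff odds b (b = 1 for an even-money bet).
    # Bet nothing when the expected edge is negative.
    edge = p * (b + 1) - 1
    return max(edge / b, 0.0)

small_edge = kelly_fraction(0.52)  # 52% even-money bet -> wager 4% of bankroll
no_edge = kelly_fraction(0.48)     # negative expectation -> wager nothing
```

Betting more than the Kelly fraction raises the risk of ruin even when each bet is favorable; betting less leaves money on the table, which is exactly the trade-off Thorp describes.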

Thorp made several mistakes when he started investing.  The first stock he bought dropped 50%.  Thorp decided to wait until he could get even, which took four years.  One thing Thorp learned from this experience was to avoid anchoring.

Learn about the anchoring effect here:

Thorp’s second mistake was investing based on momentum.  It didn’t work.  Thorp learned not to expect momentum to continue unless you have good reasons to think it will.

Thorp’s third mistake was to buy silver on margin.  Initially silver rose and Thorp used the profits to buy even more silver on margin.  Then the silver price dropped, which wiped Thorp out because he was on margin. After that, silver started going up again, but Thorp had already lost his whole investment due to his use of margin.  This experience taught Thorp about proper risk management.

Thorp learned how to invest in undervalued warrants while hedging the position:

To form a hedge, take two securities whose prices tend to move together, such as a warrant and the common stock it can be used to purchase, but which are comparatively mispriced.  Buy the relatively underpriced security and sell short the relatively overpriced security.  If the proportions in the position are chosen well, then even though prices fluctuate, the gains and losses on the two sides will approximately offset or hedge each other.  If the relative mispricing between the two securities disappears as expected, close the position in both and collect a profit.
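The mechanics Thorp describes can be sketched with hypothetical numbers.  Suppose each warrant moves about half a point for every point the stock moves; then shorting half as many shares as warrants held roughly neutralizes market swings, leaving the closing of the mispricing as profit.  The quantities and price moves below are mine, for illustration only.

```python
def hedge_pnl(warrant_qty, warrant_move, stock_qty, stock_move):
    # Long the underpriced warrant, short the overpriced stock:
    # P&L is the gain on the long leg minus the gain on the shorted leg.
    return warrant_qty * warrant_move - stock_qty * stock_move

up = hedge_pnl(1000, +0.50, 500, +1.00)      # market rises: legs offset, P&L 0
down = hedge_pnl(1000, -0.50, 500, -1.00)    # market falls: legs offset, P&L 0
converge = hedge_pnl(1000, +0.20, 500, 0.0)  # mispricing closes: +200 profit
```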

Thorp figured out a formula for pricing warrants and options.  A warrant is essentially a call option on a stock, except that warrants are issued by the company itself.  Thorp began managing portfolios for friends and acquaintances.



Ralph Waldo Gerard was an early investor with Thorp.  Previously, Gerard had invested in the Buffett Partnership.  Gerard was related to the father of value investing, Benjamin Graham (Buffett’s teacher and mentor).

Gerard invited Thorp and his wife to his home for dinner with Susie and Warren Buffett.  Buffett is arguably the most successful investor of all time.  But Thorp learned that Buffett had to work extremely hard in order to find a few excellent long-term investments.

By contrast, Thorp’s quantitative, statistical investment strategy seemed much easier than analyzing in detail thousands of companies.  Thorp’s approach would give him more free time to enjoy family and to pursue his academic career.

Later, Buffett invited Gerard and Thorp to his home in Emerald Bay, California for an afternoon of bridge.  Thorp:

Bridge is what mathematicians call a game of imperfect information.  The bidding, which precedes the play of the cards, gives some information about the four concealed hands held by two pairs of players who are opposing each other.  As the cards are played, players use the bidding and the cards they have seen so far to make inferences about who has the remaining unplayed cards.  The stock market also is a game of imperfect information and even resembles bridge in that both have their deceptions.  As in bridge, you do better in the market if you get more information sooner and put it to better use.  It’s no surprise that Buffett, arguably the greatest investor in history, is a bridge addict.

Thorp was impressed by Buffett and made a prediction:

Impressed by Warren’s mind and his methods, as well as his record as an investor, I told Vivian that I believed he would eventually become the richest man in America.  Buffett was an extraordinarily smart evaluator of underpriced companies, so he could compound money much faster than the average investor.  He also could continue to rely mainly on his own talent even as his capital grew to an enormous amount.  Warren furthermore understood the power of compound interest and, clearly, planned to apply it over a long time.

Thorp partnered with a New York stockbroker, Jay Regan, who had studied philosophy at Dartmouth.  Together, they launched Convertible Hedge Associates—later renamed Princeton Newport Partners.  They aimed to raise $5 million but reached only $1.4 million.  They went ahead anyway.



Princeton Newport Partners (PNP) specialized in hedging convertible securities—warrants, options, convertible bonds and preferreds, and other types of derivative securities.  PNP not only hedged each individual position but also hedged the portfolio against changes in interest rates and in the overall market level.  PNP’s near-total reliance on quantitative methods—mathematical formulas, economic models, and computers—made them the earliest “quants.”

Thorp was motivated to reduce risk:

Influenced by having been born during the Great Depression and by my early investment experiences, I made reducing risk a central feature of my investing approach.

The hedges protected us against losses but at the expense of giving up some of the gains in the big up-markets.

In 1973-1974, each $1,000 invested in the S&P 500 would have shrunk to $618, whereas each $1,000 invested in PNP grew to $1,160.

Thorp’s wife, Vivian, not only raised their three children.  She was also active in local politics, helping reelect a decent congressman.  And Vivian organized and ran a large phone bank that helped elect the first black man to a California statewide office.  Moreover, she influenced many people one on one.

One time, a woman complained to Vivian about “those Jews.”  Vivian was Jewish and had lost several relatives in Nazi concentration camps during World War II.  Ed Thorp:

When she told us about meeting the woman, we expected to hear how she tore her to shreds.  Explaining why she did not, Vivian pointed out that the woman would have learned nothing and simply would have become an enemy.  Vivian patiently educated this basically good person and they became friends for the rest of their lives.

Thorp’s PhD thesis had been in pure mathematics and this continued to be his focus for fifteen years.  Although Thorp loved teaching and research, eventually he resigned his full professorship at the University of California, Irvine.  He felt a sense of loss, but it turned out to be for the best.  Thorp continued his friendships and research collaborations.  He continued to present his work at meetings and publish it in the mathematical, financial, and gambling literature.



Thorp and his colleagues continued to solve problems for valuing derivatives before academics did.  This gave PNP a large edge from 1967 to 1988, when PNP closed.

Hedging with derivatives was a key source of profits for PNP during its entire nineteen years.  Such hedging also became a core strategy for many later hedge funds like Citadel, Stark, and Elliott, which each went on to manage billions.

Some risks cannot be hedged:

There is another kind of risk on Wall Street from which computers and formulas can’t protect you.  That’s the danger of being swindled or defrauded.  Being cheated at cards in the casinos in the 1960s was valuable preparation for the far greater scale of dishonesty I would encounter in the investment world.  The financial press reveals new skulduggery on a daily basis.



PNP’s dream for the 1980s was to expand their expertise into new areas.

Of the scores of indicators we systematically analyzed, several correlated strongly with past performance.  Among them were earnings yield (annual earnings divided by price), dividend yield, book value divided by price, momentum, short interest…, earnings surprise…, purchases and sales by company officers, directors, and large shareholders, and the ratio of total company sales to the market price of the company.  We studied each of these separately, then worked out how to combine them.  When the historical patterns persisted as prices unfolded into the future, we created a trading system called MIDAS (multiple indicator diversified asset system) and used it to run a separate long/short hedge fund (long the “good” stocks, short the “bad” ones).  The power of MIDAS was that it applied to the entire multitrillion-dollar stock market, with the possibility of investing very large sums.

From November 1, 1979 through January 1, 1988, PNP’s capital expanded from $28.6 million to $273 million.  The partnership earned 22.8 percent per year before fees, which meant 18.2 percent per year for limited partners.

Furthermore, PNP invented excellent new products that could allow the fund to manage billions.  They included:

  • State-of-the-art convertible, warrant, and option computerized analytic models and trading systems
  • Statistical arbitrage
  • Expert investments based on interest rates
  • OSM Partners, a “fund of hedge funds”



In the 1970s, less established companies had to scramble for funding.  A young financial innovator named Michael Milken had an idea:

Milken’s group underwrote issues of low-rated, high-yielding bonds—the so-called junk bonds—some of which were convertible or came with warrants to purchase stock… Filling a gaping need and hungry demand in the business community, Milken’s group became the greatest financing engine in Wall Street history.

Such innovation outraged the old line establishment of corporate America, who were initially transfixed like deer in the headlights as a horde of entrepreneurs, funded with seemingly unlimited Drexel-generated cash, began a wave of unfriendly takeovers.  Many old firms were vulnerable because the officers and directors had done a poor job of investing the shareholders’ equity.  With subpar returns on capital, the stocks were cheap…

The officers and directors of America’s big corporations were happy with the way things had been.  They enjoyed their hunting lodges and private jets, made charitable donations for their personal aggrandizement and objectives, and granted themselves generous salaries, retirement plans, bonuses of cash, stock, and stock options, and golden parachutes.  All these things were designed by and for themselves and paid for with corporate dollars, the expenses routinely ratified by a scattered and fragmented shareholder base.  Economists call this conflict of interest between management, or agents, and the shareholders, who are the real owners, the agency problem.  It continues today, one example being the massive continuing grants of stock options by management to itself…

Rudolph Giuliani, U.S. Attorney for the Southern District of New York, was on a campaign to prosecute real and alleged Wall Street criminals.  As a part of his effort to prosecute Michael Milken at Drexel Burnham and Robert Freeman at Goldman Sachs, Giuliani went after Thorp’s partner Jay Regan, who knew both Milken and Freeman well.

Giuliani went after the Princeton office of PNP.  The Newport office, where Thorp and forty others worked, did not have any knowledge of the alleged acts in the Princeton office.  No one at the Newport office was implicated in, or charged with, any wrongdoing in this (or any other) matter.

To apply more pressure, the U.S. Attorney began contacting the limited partners of PNP, subpoenaing them to come to New York and testify before the grand jury.  Thorp explains that the limited partners were passive participants in PNP, so the subpoenas had no real value for Giuliani’s case.  It seems Giuliani wanted to disturb and upset the limited partners so that they might withdraw from PNP.

In the end, convictions for racketeering and tax fraud against a few PNP defendants were thrown out by the Second Circuit Court of Appeals.  Thorp writes:

In January 1992, having achieved their real goal, which was to convict Milken and Freeman, the prosecutors dropped the remaining charges against four of the five PNP defendants and a related charge against the Drexel trader.  Princeton’s head trader and the Drexel defendant were still facing fines and three-month prison terms for their remaining counts.  In September 1992, a federal judge vacated these sentences as well.

Thorp later explains:

The old establishment financiers were lucky in that prosecutors would find numerous violations of securities laws within the Milken group and among its allies, associates, and clients.  However, it is difficult to judge how relatively bad these were, compared with the incessant violations that have always been, and continue to be, endemic in business and finance, because only a few of the many violators are caught, and when they are prosecuted it may be for only a tiny fraction of their offenses.  This contrasts with the case of Drexel, where the searchlight of government was focused to reveal as many violations as possible.  It’s like the case of the man who was cited three times in a single year for driving while intoxicated.  His neighbor would also drink and drive, but was never pulled over.  Who is the greater criminal?  Now suppose I tell you that the caught man did it only three times and was apprehended every time, whereas his neighbor did it a hundred times and was never caught.  How could this happen?  What if I tell you that the two men are bitter business rivals and that the traffic cop’s boss, the police chief, gets large campaign contributions from the man who got no traffic citations?  Now who is the greater criminal?

Thorp considered launching a partnership that would be similar to PNP.  But he loved the quantitative analysis part of the business, not operations and marketing.  So he decided to wind down the Newport office.



Although the closing of PNP erased billions in future wealth for Thorp and his colleagues, Thorp and his wife had more than enough money to be free to spend their time exactly as they wanted.

Around this time, Thorp discovered what would later be revealed as the largest financial fraud in history.  He had been hired to examine some hedge fund investments.  Thorp approved them with one exception: Bernard Madoff Investment Securities.

Madoff claimed to use a split-strike conversion strategy:  he would buy a stock, sell a call option at a higher strike price, and use the proceeds to buy a put option at a lower strike price.
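Thorp’s later reasoning is easier to see with the payoff written out.  Below is a minimal sketch of the strategy’s payoff at expiration, using invented strikes and premiums (not Madoff’s actual figures); note how both the downside and the upside are capped:

```python
# Payoff at expiration of a split-strike (collar) position:
# long stock, short call at a higher strike, long put at a lower strike.
def split_strike_payoff(stock_at_expiry, cost_basis,
                        call_strike, put_strike,
                        call_premium, put_premium):
    stock_pnl = stock_at_expiry - cost_basis
    call_pnl = call_premium - max(stock_at_expiry - call_strike, 0.0)  # short call
    put_pnl = max(put_strike - stock_at_expiry, 0.0) - put_premium     # long put
    return stock_pnl + call_pnl + put_pnl

# Illustrative numbers: stock bought at 100, call sold at strike 110,
# put bought at strike 90, premiums offsetting (zero net proceeds).
for s in (80, 100, 120):
    print(s, split_strike_payoff(s, 100, 110, 90, 3.0, 3.0))
# payoffs are capped: -10.0 at 80, 0.0 at 100, +10.0 at 120
```

With gains and losses bounded like this, the strategy’s long-run return should roughly track equities, which is why the returns Madoff reported looked impossible to Thorp.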

I explained that, according to financial theory, the long-run impact on portfolio returns from many properly priced options with zero net proceeds should also be zero.  So we expect, over time, that the client’s portfolio return should be roughly the same as the return on equities.  The returns Madoff reported were too large to be believed.  Moreover, in months when stocks are down, the strategy should produce a loss—but Madoff wasn’t reporting any losses.  After checking the client’s account statements I found that losing months for the strategy were magically converted to winners by short sales of S&P Index futures.  In the same way, months that should have produced very large wins were ‘smoothed out.’


…At my suggestion, the client then hired my firm to conduct a detailed analysis of their individual transactions to prove or disprove my suspicions that they were fake.  After analyzing about 160 individual option trades, we found that for half of them no trades occurred on the exchange where Madoff said that they supposedly took place.  For many of the remaining half that did trade, the quantity reported by Madoff just for my client’s two accounts exceeded the entire volume reported for everyone.  To check the minority of remaining trades, those that did not conflict with the prices and volumes reported by the exchanges, I asked an official at Bear Stearns to find out in confidence who all the buyers and sellers of the options were.  We could not connect any of them to Madoff’s firm.

Thorp had proved Madoff’s investment operation was a fraud.  Madoff was running a Ponzi scheme.

In 1991, Thorp was seeking a buyer for his firm’s statistical arbitrage software.  This led him to meet with Bruce Kovner, a successful commodities trader.

About this time he realized large oil tankers were in such oversupply that the older ones were selling for little more than scrap value.  Kovner formed a partnership to buy one.  I was one of the limited partners.  Here was an interesting option.  We were largely protected against loss because we could always sell the tanker for scrap, recovering most of our investment;  but we had a substantial upside:  Historically, the demand for tankers had fluctuated widely and so had their price.  Within a few years, our refurbished 475,000-ton monster, the Empress Des Mers, was profitably plying the world’s sea-lanes stuffed with oil.  I liked to think of my ownership as a twenty-foot section just forward of the bridge… The Empress Des Mers operated profitably into the twenty-first century, when the saga finally ended.  Having generated a return on investment of 30 percent annualized, she was sold for scrap in 2004, fetching almost $23 million, far more than her purchase price of $6 million.

Thorp discusses traders who always try to save a tiny amount on each trade.  The problem is that the trader may do this successfully twenty times in a row, but then miss a trade that goes up so much that it wipes out the savings on the previous twenty trades.

What the hagglers and the traders do reminds me of the behavioral psychology distinction between two extremes on a continuum of types:  satisficers and maximizers.  When a maximizer goes shopping, looks for a handyman, buys gas, or plans a trip, he searches for the best (maximum) possible deal.  Time and effort don’t matter much.  Missing the very best deal leads to regret and stress.  On the other hand, the satisficer, so-called because he is satisfied with a result that is close to the best, factors in the costs of searching and decision making, as well as the risk of losing a near-optimal opportunity and perhaps never finding anything as good again.

This is reminiscent of the so-called secretary or marriage problem in mathematics.  Assume that you will interview a series of people, from which you will choose one.  Further, you must consider them one at a time, and having once rejected someone, you cannot reconsider.  The optimal strategy is to wait until you have seen about 37 percent of the prospects, then choose the next one you see who is better than anybody among this first 37 percent that you passed over.  If no one is better you are stuck with the last person on the list.
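The 37 percent rule can be checked by simulation.  This sketch (the pool size of 100 and the trial count are arbitrary choices) recovers the classical success probability of roughly 1/e ≈ 0.37:

```python
import random

def secretary_trial(n, skip_frac=0.37):
    """One trial of the secretary problem: ranks 1..n arrive in random
    order (1 is best).  Observe the first skip_frac*n candidates, then
    take the first candidate better than all of those (or the last one)."""
    order = list(range(1, n + 1))
    random.shuffle(order)
    k = int(n * skip_frac)
    best_seen = min(order[:k]) if k else float('inf')
    for rank in order[k:]:
        if rank < best_seen:
            return rank == 1   # did we pick the overall best?
    return order[-1] == 1      # forced to take the last candidate

random.seed(1)
trials = 20000
wins = sum(secretary_trial(100) for _ in range(trials))
print(wins / trials)  # close to 1/e, about 0.37
```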




…Some exchanges, such as NASDAQ, let HF [High Frequency] traders peek at customer orders ahead of everyone else for thirty milliseconds before the order goes to the exchange.  Seeing an order to buy, for instance, the HF traders can buy first, pushing the stock price up, then resell to the customer at a profit.  Seeing someone’s order to sell, the HF trader sells first, causing the stock to fall, and then buys it back at the lower price.  How is this different from the crime of front-running, described in Wikipedia as ‘the illegal practice of a stock broker executing orders on a security for its own account while taking advantage of advance knowledge of pending orders from its customers’?

Some securities industry spokesmen argue that harvesting this wealth from investors somehow makes the markets more efficient and that ‘markets need liquidity.’  Nobel Prize-winning economist Paul Krugman disagrees sharply, arguing that high-frequency trading is simply a way of taking wealth from ordinary investors, serves no useful purpose, and wastes national wealth because the resources consumed create no social good.

Since the more the rest of us trade the more we as a group lose to the computers, here’s one more reason to buy and hold rather than trade, unless you have a big enough edge.



Thorp discusses a statistical arbitrage investment project:

The idea of the project was to study how the historical returns of securities were related to various characteristics, or indicators.  Among the scores of fundamental and technical measures we considered were the ratio of earnings per share to price per share, known as the earnings yield, the liquidation or “book” value of the company compared with its market price, and the total market value of the company (its “size”).  Today our approach is well known and widely explored but back in 1979 it was denounced by massed legions of academics who believed market prices already had fully adjusted to such information.  Many practitioners disagreed.  The time was right for our project because the necessary high-quality databases and the powerful new computers with which to explore them were just becoming affordable.

The idea for statistical arbitrage was based on the discovery (by one of Thorp’s researchers) that the stocks that had gone up the most over the previous two weeks did the worst as a group over the ensuing few weeks, while the stocks that had gone down the most over the previous two weeks did the best.
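As an illustration only (this is not Thorp’s actual system), a dollar-neutral signal built on that mean-reversion finding might rank stocks by trailing two-week return, short the recent winners, and go long the recent losers:

```python
def stat_arb_signal(two_week_returns, frac=0.1):
    """Toy mean-reversion signal: short the top fraction of trailing
    two-week returns, go long the bottom fraction, dollar-neutral."""
    ranked = sorted(two_week_returns.items(), key=lambda kv: kv[1])
    n = max(1, int(len(ranked) * frac))
    longs = [t for t, _ in ranked[:n]]     # biggest recent losers
    shorts = [t for t, _ in ranked[-n:]]   # biggest recent winners
    w = 1.0 / n
    return {**{t: +w for t in longs}, **{t: -w for t in shorts}}

# Hypothetical trailing two-week returns for ten tickers.
returns = {'A': 0.12, 'B': -0.08, 'C': 0.03, 'D': -0.15,
           'E': 0.07, 'F': -0.02, 'G': 0.20, 'H': 0.01,
           'I': -0.05, 'J': 0.04}
weights = stat_arb_signal(returns)
print(weights)  # {'D': 1.0, 'G': -1.0}: long the worst, short the best
```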

In 1994, Thorp launched a new investment partnership, Ridgeline Partners.  Limited partners gained 18 percent per year over eight and a quarter years.

We charged Ridgeline Partners 1 percent per year plus 20 percent of net new profits.  We voluntarily reduced fees during a period when we felt disappointed in our performance.  We gave back more than $1 million to the limited partners.  Some of today’s greedy hedge fund managers might say our return of fees was economically irrational, but our investors were happy and we nearly always had a waiting list.  Ridgeline was closed a large part of the time to new investors, and current partners were often restricted from adding capital.  To maintain higher returns, we sometimes even reduced our size by returning capital to partners.

Instead of charging more fees, Thorp says he sought to treat limited partners as he would wish to be treated if he were in their place.  Thorp closed the fund down in the fall of 2002 because returns had declined due to more hedge funds using statistical arbitrage programs.  More importantly, Ed and Vivian wanted time to travel, read, and learn, and to be with their family.




The consensus of industry studies of hedge fund returns to investors seems to be that, considering the level of risk, hedge funds on average once gave their investors extra return, but this has faded as the industry expanded.  Later analyses say average results are worse than portrayed.  Funds voluntarily report their results to the industry databases.  Winners tend to participate much more than losers.  One study showed that this doubled the reported average annual return for funds as a group from an actual 6.3 percent during 1996-2014 to a supposed 12.6 percent.

The study goes on to point out that if returns over the years are given weights that correspond to the dollars invested, then the returns are ‘only marginally higher than risk-free [U.S. Treasury Bonds] rates of return.’  Another reason that industry reports look better than what investors experienced is that they combined higher-percentage returns from the earlier years, when the total invested in hedge funds was smaller, with the lower-percentage returns later, when the funds managed much more money.
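The dollar-weighting point is just arithmetic, and invented numbers make it concrete: equal-weighting the yearly percentages flatters an industry whose assets grew as its returns fell:

```python
# Hypothetical industry history: (assets at start of year, return that year).
history = [
    (10.0, 0.20),   # small industry, high return
    (50.0, 0.10),
    (200.0, 0.05),
    (500.0, 0.02),  # large industry, low return
]

# Simple average of yearly percentages (what glossy reports often show).
simple_avg = sum(r for _, r in history) / len(history)

# Dollar-weighted average (what the money invested actually earned).
dollar_weighted = sum(a * r for a, r in history) / sum(a for a, _ in history)

print(round(simple_avg, 4))       # 0.0925
print(round(dollar_weighted, 4))  # about 0.0355, far lower
```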

It’s difficult to get an edge picking stocks.  Hedge funds are little businesses, just like the companies that trade on the exchanges.  Should we be any better at picking hedge funds than at picking stocks?

Thorp points out that you will rarely find an investment that is better than an ultra-low-cost index fund over time.  Also, some hedge funds and mutual funds create spectacular records early on but mediocre results when assets under management have grown:

One method that leads to this has also been used to launch new mutual funds.  Fund managers sometimes start a new fund with a small amount of capital.  They then stuff it with hot IPOs (initial public offerings) that brokers give them as a reward for the large volume of business they have been doing through their established funds.  During this process of ‘salting the mine,’ the fund is closed to the public.  When it establishes a stellar track record, the public rushes in, giving the fund managers a huge capital base from which they reap large fees.  The brokers who supplied the hot IPOs are rewarded by a flood of additional business from the triumphant managers of the new fund.  The available volume of hot IPOs is too small to help returns much once the fund gets big, so the track record declines to mediocrity.  However, the fund promoters can use more hot IPOs to incubate yet another spectacularly performing new fund;  and so it goes on.

Like Buffett, Thorp predicts the gradual disappearance of any excess returns produced by hedge funds as a group.  Here is Buffett’s view:




Call any investment that mimics the whole market of listed U.S. securities ‘passive’ and notice that since each of these passive investments acts just like the market, so does a pool of all of them.  If these passive investors together own, say, 15 percent of every stock, then ‘everybody else’ owns 85 percent and, taken as a group, their investments also are like one giant index fund.  But ‘everybody else’ means all the active investors, each of whom has his own recipe for how much to own of each stock and none of whom has indexed.  As Nobel Prize winner Bill Sharpe says, it follows from the laws of arithmetic that the combined holdings of all the active investors also replicates the index.
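Sharpe’s arithmetic can be verified with a toy three-stock market; the market caps and the 15 percent passive share below are illustrative:

```python
# If passive investors hold the same fraction of every stock, the
# aggregate holdings of everyone else ("active" investors) must have
# exactly the same weights as the index itself.
market = {'AAA': 600.0, 'BBB': 300.0, 'CCC': 100.0}  # market caps
passive_share = 0.15

passive = {t: v * passive_share for t, v in market.items()}
active = {t: v - passive[t] for t, v in market.items()}

def weights(portfolio):
    total = sum(portfolio.values())
    return {t: v / total for t, v in portfolio.items()}

print(weights(market))  # index weights
print(weights(active))  # identical: active investors in aggregate ARE the index
```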

Reducing risk through diversification is one reason to own an index fund.  An even more important reason is to reduce your costs.  Ultra-low costs are why index funds necessarily outperform the vast majority of investors, especially over the course of several decades.  Thorp explains:

Investors who don’t index pay on average an extra 1 percent a year in trading costs and another 1 percent to what Warren Buffett calls ‘helpers’—the money managers, salespeople, advisers, and fiduciaries that permeate all areas of investing.  As a result of these costs, active investors as a group trail the index by 2 percent or so, whereas the passive investor who selects a no-load (no sales fee), low-expense-ratio (low overhead and low management fee) index fund can pay less than 0.25 percent in fees and trading costs.  From the gambling perspective, the return to an active investor is that of a passive investor plus the extra gain or loss from paying (on average) 2 percent a year to toss a fair coin in some (imaginary) casino.  Taxable active investors do even worse, because a high portfolio turnover means short-term capital gains, which currently are taxed at a higher rate than gains from securities, the sales of which have been deferred for a year.
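Compounded over decades, that roughly 2 percent gap is enormous.  Here is a sketch with assumed numbers (a 7 percent gross market return over 30 years is an illustration, not a forecast):

```python
def grow(principal, rate, years):
    """Compound a starting sum at a fixed annual rate."""
    return principal * (1 + rate) ** years

market_return = 0.07  # assumed gross annual market return

# Passive investor pays ~0.25% per year; active investor gives up ~2%.
passive = grow(100_000, market_return - 0.0025, 30)
active = grow(100_000, market_return - 0.02, 30)

print(round(passive))  # roughly $710,000
print(round(active))   # roughly $432,000
```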

Furthermore, notes Thorp, one way an investor could mimic an index fund is simply to buy a portfolio of at least twenty stocks.  If the choices are randomized, then the returns from this portfolio should track the index over time.  Consider, for instance, that the Dow Jones Industrial Average—composed of thirty stocks—has closely tracked the S&P 500 Index over time.

Moreover, the portfolio of twenty stocks could be even lower cost than an ultra-low-cost index fund because the 20-stock portfolio likely would not require any trading at all, whereas a broad market ultra-low-cost index fund would have to make minor adjustments over time in order to keep tracking the index.
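A rough Monte Carlo sketch (with invented return statistics, not real market data) suggests why tracking improves as the number of randomly chosen holdings grows:

```python
import random

def simulate_tracking(n_stocks, universe=500, years=10, trials=200, seed=0):
    """Average absolute annual gap between an equal-weighted random
    n-stock portfolio and the equal-weighted universe, in a toy model
    where each stock's annual return is drawn from N(7%, 20%)."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        total_gap = 0.0
        for _ in range(years):
            rets = [rng.gauss(0.07, 0.20) for _ in range(universe)]
            market = sum(rets) / universe
            picks = rng.sample(rets, n_stocks)
            total_gap += sum(picks) / n_stocks - market
        gaps.append(abs(total_gap / years))
    return sum(gaps) / len(gaps)

print(simulate_tracking(20))  # small average gap versus the market
print(simulate_tracking(5))   # noticeably larger gap with fewer stocks
```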



Thorp observes that for a perfectly efficient market, one you can’t beat, we expect:

  • All information to be instantly available to many participants.
  • Many participants to be financially rational.
  • Many participants to be able instantly to evaluate all available relevant information and determine the current fair price of every security.
  • New information to cause prices immediately to jump to the new fair price, preventing anyone from gaining an excess market return by trading at intermediate prices during the transition.

Supporters of the EMH (Efficient Market Hypothesis) typically argue that these conditions hold as an approximation.

In the real world of investing, Thorp writes that the market is somewhat inefficient.  In particular:

  • Some information is instantly available to the minority that happen to be listening at the right time and place.  Much information starts out known only to a limited number of people, then spreads to a wider group in stages.  This spreading could take from minutes to months, depending on the situation.  The first people to act on the information capture the gains.  The others get nothing or lose.  (Note:  The use of early information by insiders can be either legal or illegal, depending on the type of information, how it is obtained, and how it’s used.)
  • Each of us is financially rational only in a limited way.  We vary from those who are almost totally irrational to some who strive to be financially rational in nearly all their actions.  In real markets the rationality of the participants is limited.
  • Participants typically have only some of the relevant information for determining the fair price of a security.  For each situation, both the time to process the information and the ability to analyze it generally vary widely.
  • The buy and sell orders that come in response to an item of information sometimes arrive in a flood within a few seconds, causing the price to gap or nearly gap to the new level.  More often, however, the reaction to news is spread out over minutes, hours, days, or months, as the academic literature documents.

These realities tell us how to beat the market, says Thorp:

  • Get good information early.  How do you know if your information is good enough or early enough?  If you are not sure, then it probably isn’t.
  • Be a disciplined rational investor.  Follow logic and analysis rather than sales pitches, whims, or emotion.  Assume you may have an edge only when you can make a rational affirmative case that withstands your attempts to tear it down.  Don’t gamble unless you are highly confident you have the edge.  As Buffett says, ‘Only swing at the fat pitches.’
  • Find a superior method of analysis.  Ones that you have seen pay off for me include statistical arbitrage, convertible hedging, the Black-Scholes formula, and card counting at blackjack.  Other winning strategies include superior security analysis by the gifted few and the methods of the better hedge funds.
  • When securities are known to be mispriced and people take advantage of this, their trading tends to eliminate the mispricing.  The earliest traders gain the most, and the edge shrinks as others follow.  When you have identified an opportunity, invest ahead of the crowd.

Thorp sums it up:

Note that market inefficiency depends on the observer’s knowledge.  Most market participants have no demonstrable advantage.  For them, just as the cards in blackjack or the numbers at roulette seem to appear at random, the market appears to be completely efficient.

To beat the market, focus on investments well within your knowledge and ability to evaluate, your ‘circle of competence.’  Be sure your information is current, accurate, and essentially complete.  Be aware that information flows down a ‘food chain,’ with those who get it first ‘eating’ and those who get it late being eaten.  Finally, don’t bet on an investment unless you can demonstrate by logic, and if appropriate by track record, that you have an edge.



Thorp wraps up his book by sharing some of what he learned on his odyssey through science, mathematics, gambling, hedge funds, finance, and investing:

Education has made all the difference for me.  Mathematics taught me to reason logically and to understand numbers, tables, charts, and calculations as second nature.  Physics, chemistry, astronomy, and biology revealed wonders of the world, and showed me how to build models and theories to describe and to predict.  This paid off for me in both gambling and investing.

Education builds software for your brain.  When you’re born, think of yourself as a computer with a basic operating system and not much else.  Learning is like adding programs, big and small, to this computer, from drawing a face to riding a bicycle to reading to mastering calculus.  You will use these programs to make your way in the world.  Much of what I’ve learned came from schools and teachers.  Even more valuable, I learned at an early age to teach myself.  This paid off later on because there weren’t any courses in how to beat blackjack, build a computer for roulette, or launch a market-neutral hedge fund.

I found that most people don’t understand the probability calculations needed to figure out gambling games or to solve problems in everyday life.  We didn’t need that skill to survive as a species in the forests and jungles.  When a lion roared, you instinctively climbed the nearest tree and thought later about what to do next.  Today we often have the time to think, calculate, and plan ahead, and here’s where math can help us make decisions…

Thorp later writes that economists have found one factor that explains a nation’s future economic growth more than any other:  its output of scientists and engineers.  Therefore it’s crucial to have the best education system we can.  It’s essential that we strive to keep talented American-born scientists and engineers in the United States, and that we also seek to keep gifted foreign-born scientists and engineers after they have received advanced degrees in the United States.  Thorp:

To starve education is to eat our seed corn.  No tax today, no technology tomorrow.

Thorp concludes:

Life is like reading a novel or running a marathon.  It’s not so much about reaching a goal but rather about the journey itself and the experiences along the way.  As Benjamin Franklin famously said, ‘Time is the stuff life is made of,’ and how you spend it makes all the difference.

…Whatever you do, enjoy your life and the people who share it with you, and leave something good of yourself for the generations to follow.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:



Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Warren Buffett on Jack Bogle

(Image:  Zen Buddha Silence by Marilyn Barbone.)

July 23, 2017

Warren Buffett has long maintained that most investors—large and small—would be best off by simply investing in ultra-low-cost index funds.  Buffett explains his reasoning again in the 2016 Letter to Berkshire Shareholders (see pages 21-25):

Passive investors will essentially match the market over time.  So, argues Buffett, active investors will match the market over time before costs (including fees and expenses).  After costs, active investors will, in aggregate, trail the market by the total amount of costs.  Thus, the net returns of most active investors will trail the market over time.  Buffett:

There are, of course, some skilled individuals who are highly likely to out-perform the S&P over long stretches.  In my lifetime, though, I’ve identified—early on—only ten or so professionals that I expected would accomplish this feat.

There are no doubt many hundreds of people—perhaps thousands—whom I have never met and whose abilities would equal those of the people I’ve identified.   The job, after all, is not impossible.  The problem simply is that the great majority of managers who attempt to over-perform will fail.  The probability is also very high that the person soliciting your funds will not be the exception who does well.

As for those active managers who produce a solid record over 5-10 years, many of them will have had a fair amount of luck.  Moreover, good records attract assets under management.  But large sums are always a drag on performance.



Long Bets is a non-profit started by Jeff Bezos.  As Buffett describes in his 2016 Letter to Shareholders, “proposers” can post a proposition at the Long Bets site that will be proved right or wrong at some date in the future.  They wait for someone to take the other side of the bet.  Each side names a charity that will be the beneficiary if its side wins and writes a brief essay defending its position.


Subsequently, I publicly offered to wager $500,000 that no investment pro could select a set of at least five hedge funds—wildly-popular and high-fee investing vehicles—that would over an extended period match the performance of an unmanaged S&P-500 index fund charging only token fees.  I suggested a ten-year bet and named a low-cost Vanguard S&P fund as my contender.  I then sat back and waited expectantly for a parade of fund managers—who could include their own fund as one of the five—to come forth and defend their occupation.  After all, these managers urged others to bet billions on their abilities.  Why should they fear putting a little of their own money on the line?

What followed was the sound of silence.  Though there are thousands of professional investment managers who have amassed staggering fortunes by touting their stock-selecting prowess, only one man—Ted Seides—stepped up to my challenge.  Ted was a co-manager of Protégé Partners, an asset manager that had raised money from limited partners to form a fund-of-funds—in other words, a fund that invests in multiple hedge funds.

I hadn’t known Ted before our wager, but I like him and admire his willingness to put his money where his mouth was…

For Protégé Partners’ side of our ten-year bet, Ted picked five funds-of-funds whose results were to be averaged and compared against my Vanguard S&P index fund.  The five he selected had invested their money in more than 100 hedge funds, which meant that the overall performance of the funds-of-funds would not be distorted by the good or poor results of a single manager.

Here are the results so far after nine years (from 2008 through 2016):

Net return after 9 years:

  • Fund of Funds A: 8.7%
  • Fund of Funds B: 28.3%
  • Fund of Funds C: 62.8%
  • Fund of Funds D: 2.9%
  • Fund of Funds E: 7.5%
  • S&P 500 Index Fund: 85.4%

Compound annual return:

  • All Funds of Funds: 2.2%
  • S&P 500 Index Fund: 7.1%

To see a more detailed table of the results, go to page 22 of the Berkshire 2016 Letter:

Buffett continues:

The compounded annual increase to date for the index fund is 7.1%, which is a return that could easily prove typical for the stock market over time.  That’s an important fact:  A particularly weak nine years for the market over the lifetime of this bet would have probably helped the relative performance of the hedge funds, because many hold large ‘short’ positions.  Conversely, nine years of exceptionally high returns from stocks would have provided a tailwind for index funds.

Instead we operated in what I would call a ‘neutral’ environment.  In it, the five funds-of-funds delivered, through 2016, an average of only 2.2%, compounded annually.  That means $1 million invested in those funds would have gained $220,000.  The index fund would meanwhile have gained $854,000.

Bear in mind that every one of the 100-plus managers of the underlying hedge funds had a huge financial incentive to do his or her best.  Moreover, the five funds-of-funds managers that Ted selected were similarly incentivized to select the best hedge-fund managers possible because the five were entitled to performance fees based on the results of the underlying funds.

I’m certain that in almost all cases the managers at both levels were honest and intelligent people.  But the results for their investors were dismal—really dismal.  And, alas, the huge fixed fees charged by all of the funds and funds-of-funds involved—fees that were totally unwarranted by performance—were such that their managers were showered with compensation over the nine years that have passed.  As Gordon Gekko might have put it: ‘Fees never sleep.’

The underlying hedge-fund managers in our bet received payments from their limited partners that likely averaged a bit under the prevailing hedge-fund standard of ‘2 and 20,’ meaning a 2% annual fixed fee, payable even when losses are huge, and 20% of profits with no clawback (if good years were followed by bad ones).  Under this lopsided arrangement, a hedge fund operator’s ability to simply pile up assets under management has made many of these managers extraordinarily rich, even as their investments have performed poorly.

Still, we’re not through with fees.  Remember, there were the fund-of-funds managers to be fed as well. These managers received an additional fixed amount that was usually set at 1% of assets.  Then, despite the terrible overall record of the five funds-of-funds, some experienced a few good years and collected ‘performance’ fees.  Consequently, I estimate that over the nine-year period roughly 60%—gulp!—of all gains achieved by the five funds-of-funds were diverted to the two levels of managers.  That was their misbegotten reward for accomplishing something far short of what their many hundreds of limited partners could have effortlessly—and with virtually no cost—achieved on their own.

In my opinion, the disappointing results for hedge-fund investors that this bet exposed are almost certain to recur in the future.  I laid out my reasons for that belief in a statement that was posted on the Long Bets website when the bet commenced (and that is still posted there)…
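Buffett's arithmetic, and the "2 and 20" structure he describes, are easy to check with a short script.  The fee model below is a deliberately simplified one-year sketch (no hurdle rate, no high-water mark, illustrative numbers only), not the exact terms of any actual fund:

```python
def ending_value(principal, annual_return, years):
    """Compound `principal` at `annual_return` for `years` years."""
    return principal * (1 + annual_return) ** years

def net_after_2_and_20(capital, gross_return):
    """Simplified one-year '2 and 20': a 2% fixed fee on assets,
    payable even in losing years, plus 20% of any gross profit
    with no clawback."""
    gross_gain = capital * gross_return
    fees = 0.02 * capital + max(0.0, 0.20 * gross_gain)
    return capital + gross_gain - fees

# $1 million compounded at 2.2% for 9 years gains about $216,000
# (Buffett rounds to $220,000); at 7.1% it gains about $854,000.
fof_gain = ending_value(1_000_000, 0.022, 9) - 1_000_000
index_gain = ending_value(1_000_000, 0.071, 9) - 1_000_000
```

Under this model a 10% gross year on $1 million nets the investor $1.06 million: the $20,000 fixed fee plus the $20,000 performance fee consume 40% of the gross gain.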

Even if you take the smartest 10% of all active investors, most of them will trail the market, net of costs, over the course of a decade or two.  Most investors (even the smartest) who think they can beat the market are wrong.  Buffett’s bet against Protégé Partners is yet another example of this.



If a statue is ever erected to honor the person who has done the most for American investors, the hands-down choice should be Jack Bogle.  For decades, Jack has urged investors to invest in ultra-low-cost index funds.  In his crusade, he amassed only a tiny percentage of the wealth that has typically flowed to managers who have promised their investors large rewards while delivering them nothing—or, as in our bet, less than nothing—of added value.

In his early years, Jack was frequently mocked by the investment-management industry.  Today, however, he has the satisfaction of knowing that he helped millions of investors realize far better returns on their savings than they otherwise would have earned.  He is a hero to them and to me.




Tips from a Legendary Growth Investor

(Image:  Zen Buddha Silence by Marilyn Barbone.)

July 9, 2017

Philip A. Fisher is a legendary growth investor.  He is the author of Common Stocks and Uncommon Profits (Wiley, 1996; originally published by Harper & Brothers, 1958).  Growth only creates value when the return on invested capital (ROIC) is higher than the cost of capital.  Fisher focuses on value-creating growth.
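A quick perpetuity sketch (with assumed, illustrative numbers, not a calculation from the book) shows why growth creates value only when ROIC exceeds the cost of capital:

```python
def value_of_reinvested_dollar(roic, wacc):
    """Each $1 reinvested earns `roic` per year forever; capitalize
    that stream at `wacc`.  The dollar creates value only if roic > wacc."""
    return roic / wacc
```

At a 15% ROIC and a 10% cost of capital, each reinvested dollar is worth $1.50; at an 8% ROIC it is worth only $0.80, so growth destroys value.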

Warren Buffett – partly through the influences of both Charlie Munger and Phil Fisher – went from buying statistically cheap stocks to buying stocks where the business could maintain a high ROIC for many years.  Buffett also learned from Fisher the value of scuttlebutt research – interviewing competitors, suppliers, customers, industry experts, and others who might have special insight into the company or industry.  Finally, Buffett learned from Fisher that you should concentrate the investment portfolio on your best ideas.  Buffett once remarked:

I’m 15% Fisher and 85% Benjamin Graham.

Typically, Buffett only buys a stock (or an entire company) when he feels certain about the future earnings.  This means the business in question must have a sustainable competitive advantage in order to keep the ROIC above the cost of capital.  Buffett then looks at the current price and determines if it’s at a discount relative to future earnings power.  Because Buffett is still trying to buy at a discount to intrinsic value (in terms of future earnings power), he’s 85% Graham.

  • That’s not to say Buffett does a precise calculation.  Only that there must be an obvious discount present.  At the 1996 Berkshire Hathaway annual meeting, Munger said:  “Warren talks about these discounted cash flows… I’ve never seen him do one.”  Buffett replied:  “That’s true.  If [the value of the company] doesn’t just scream at you, it’s too close.”  (Janet Lowe, page 145, Warren Buffett Speaks (Wiley, 2007))

Phil Fisher doesn’t think about buying at a discount to future earnings power.  He just knows that if a company can maintain a relatively high ROIC for many years into the future, then all else equal, earnings will march higher over the years and the stock will follow.  So Fisher simply looks for these rare companies that can maintain a high ROIC many years into the future.  Fisher doesn’t try to calculate whether the current price is at a discount to some specific level of future earnings.



Fisher highlights fifteen points that an investor should investigate in order to determine if a prospective investment is worthwhile.  A worthwhile investment can, over a few years, increase several hundred percent, or it can increase proportionately more over a longer period of time.

Point 1.  Does the company have products or services with sufficient market potential to make possible a sizable increase in sales for at least several years?

Fisher writes that sales growth is often uneven on an annual basis.  So the important question is whether the company can grow over several years.  Ideally, a company should be able to grow for decades.  This generally only happens when management is highly capable.

Point 2.  Does the management have a determination to continue to develop products or processes that will still further increase total sales potentials when the growth potentials of currently attractive product lines have largely been exploited?

To grow beyond the next few years, ongoing scientific research and development engineering are required.  Usually such research is most effective when it is clearly related to new products bearing some similarity to existing products.  The main point is that management has to be farsighted enough to develop new products that, if successful, will produce growth many years from today.

Point 3.  How effective are the company’s research and development efforts in relation to its size?

Some well-run companies get twice (or more) the ultimate gain from each research dollar as other companies do.  A good company has technically skilled engineers and scientists, but also leaders who can coordinate the research efforts of people with diverse backgrounds.

Moreover, company leaders have to integrate research, production, and sales.  Otherwise, costs may not be minimized or products may not sell as well as they could.  Non-optimal products are usually vulnerable to more efficient competition.

Point 4.  Does the company have an above-average sales organization?

Fisher writes:

It is the making of a sale that is the most basic single activity of any business.  Without sales, survival is impossible.  It is the making of repeat sales to satisfied customers that is the first benchmark of success.  Yet, strange as it seems, the relative efficiency of a company’s sales, advertising, and distributive organizations receives far less attention from most investors, even the careful ones, than do production, research, finance, or other major subdivisions of corporate activity.  (page 31)

In some successful companies, a large chunk of a salesperson’s time – often over the course of many years – is devoted to training.

Point 5.  Does the company have a worthwhile profit margin?

Marginal companies typically increase their earnings more during good periods, but they also experience more rapid declines during bad periods.  The best long-term investments usually have the best profit margins and the best ROIC in the industry.  Marginal companies are very rarely good long-term investments.

Point 6.  What is the company doing to maintain or improve profit margins?

Fisher observes:

Some companies achieve great success by maintaining capital-improvement or product-engineering departments.  The sole function of such departments is to design new equipment that will reduce costs and thus offset or partially offset the rising trend of wages.  Many companies are constantly reviewing procedures and methods to see where economies can be brought about.  (page 37)

Point 7.  Does the company have outstanding labor and personnel relations?

A company that has above-average profits and that pays above-average wages is likely to have good labor relations.  Furthermore, management should treat employees well in other ways.  Ideally, employees will feel that they are a crucial part of the business mission.

Point 8.  Does the company have outstanding executive relations?

Executives should feel that promotions are based solely on merit.  Some degree of friction is natural, but such friction should be kept to a minimum in order to ensure that executives work together.

Point 9.  Does the company have depth to its management?

Fisher explains:

…companies worthy of investment interest are those that will continue to grow.  Sooner or later a company will reach a size where it just will not be able to take advantage of further opportunities unless it starts developing some executive talent in some depth.  (page 41)

Fisher also points out that executives must be given real authority in order for them to develop.  As well, top executives should be open to suggestions from developing executives.

Point 10.  How good are the company’s cost analysis and accounting controls?

Fisher writes:

No company is going to continue to have outstanding success for a long period of time if it cannot break down its over-all costs with sufficient accuracy and detail to show the cost of each small step in its operation.  Only in this way will a management know what most needs its attention.  Only in this way can management judge whether it is properly solving each problem that does need its attention.  (page 42)

Point 11.  Are there other aspects of the business, somewhat peculiar to the industry involved, which will give the investor important clues as to how outstanding the company may be in relation to its competition?

Typically it is leadership in engineering or in business processes – rather than in patents – that allows a company to maintain its competitive position.

Point 12.  Does the company have a short-range or long-range outlook in regard to profits?

Fisher:

One company will constantly make the sharpest possible deals with suppliers.  Another will at times pay above contract price to a vendor who has had unexpected expense in making delivery, because it wants to be sure of having a dependable source of needed raw materials or high quality components available when the market has turned and supplies may be desperately needed.  The difference in treatment of customers is equally noticeable.  The company that will go to special trouble and expense to take care of the needs of a regular customer caught in an unexpected jam may show lower profits on the particular transaction, but far greater profits over the years.  (page 46)

Point 13.  In the foreseeable future will the growth of the company require sufficient equity financing so that the larger number of shares then outstanding will largely cancel the existing stockholders’ benefit from this anticipated growth?

If the company is well-run and profitable, then a reasonable amount of equity financing need not deter you as an investor.  A stock offering creates cash for the company.  If the ROIC on this cash is high enough, and the price at which the stock offering is made is not too low, then future earnings per share will not suffer.
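Fisher's point can be made concrete with hypothetical numbers (every value below is an assumption for illustration, not from the book):

```python
def eps_after_offering(earnings, shares, amount_raised, offer_price, roic):
    """Hypothetical dilution check: issue new shares at `offer_price`,
    invest the proceeds at `roic`, and compare per-share earnings."""
    new_shares = amount_raised / offer_price
    new_earnings = earnings + amount_raised * roic
    return new_earnings / (shares + new_shares)
```

A company earning $100 on 100 shares (EPS $1.00) that raises $100 at $20 per share and reinvests at a 20% ROIC sees EPS rise to about $1.14; the same raise at $5 per share invested at a 5% ROIC drops EPS to about $0.88.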

Point 14.  Does the management talk freely to investors about its affairs when things are going well but ‘clam up’ when troubles and disappointments occur?

Even the best-run companies will encounter unexpected difficulties at times.  Also, companies that will grow their earnings far into the future will constantly be pursuing technical research projects, some of which won’t work:

By the law of averages, some of these are bound to be costly failures.  Others will have unexpected delays and heartbreaking expenses during the early period of plant shake-down.  For months on end, such extra and unbudgeted costs will spoil the most carefully laid profit forecasts for the business as a whole.  Such disappointments are an inevitable part of even the most successful business.  If met forthrightly and with good judgment, they are merely one of the costs of eventual success.  They are frequently a sign of strength rather than weakness in a company.  (page 48)

It’s crucial when failures or setbacks do occur that management is candid in reporting the bad news.

Point 15.  Does the company have a management of unquestionable integrity?

There are countless ways management could enrich itself at the expense of shareholders.  This includes issuing stock options far beyond what is reasonable and fair.

Managers with high integrity always keep the interests of outside shareholders ahead of their own interests.  Good managers tend to produce positive surprises, while bad managers tend to produce negative surprises.  Over a long period of time, it’s simply not worth investing when you can’t trust management.



Fisher argues that a superbly managed growth company will generally see its stock increase hundreds of percent each decade.  By contrast, a stock that is merely statistically undervalued by 50 percent will generally only double.

You should invest part of your portfolio in larger, more conservative growth companies, and the rest in smaller growth companies.  How much to invest in each category depends on your circumstances and temperament.  If you can leave the investment alone for a long time and you don’t mind shorter term volatility, then it makes sense to invest more in smaller growth companies.



Fisher writes that forecasting business trends is not yet dependable enough for investing purposes.  This is still true.  I wrote last week about why you shouldn’t try market timing:

Yet, says Fisher, often when a new full-scale plant is about to begin production, there will be a buying opportunity.  First, it takes many weeks at least to get the plant running.  And if it’s a revolutionary process, it can take far longer than even the most pessimistic engineer estimates.

Even after the new plant is operating, generally there are difficulties and unexpected expenses.  Often word spreads that the new plant is in trouble, which causes some investors to sell the stock.  A few months later, the company might report a drop in net income due to the unexpected expenses.  Fisher:

Word passes all through the financial community that the management has blundered.

At this point the stock might well prove a sensational buy.  Once the extra sales effort has produced enough volume to make the first production scale plant pay, normal sales effort is frequently enough to continue the upward movement of the sales curve for many years.  Since the same techniques are used, the placing in operation of a second, third, fourth, and fifth plant can nearly always be done without the delays and special expenses that occurred during the prolonged shake-down period of the first plant.  By the time plant Number Five is running at capacity, the company has grown so big and prosperous that the whole cycle can be repeated on another brand new product without the same drain on earnings percentage-wise or the same downward effect on the price of the company’s shares.  The investor has acquired at the right time an investment which can grow for him for many years.  (page 65)

Fisher reiterates that it’s possible to learn how an individual company will perform.  But it’s not possible to forecast the stock market with any useful degree of consistency.  There are too many variables, including the business cycle, interest rates, government policy, and technological innovation.



For an investor, mistakes are inevitable.  Generally speaking, a careful investor may be right as much as 70% of the time.  But that means being wrong 30% of the time.  The important thing is to learn to identify mistakes as quickly as possible.  This is not easy, as Fisher explains:

…there is a complicating factor that makes the handling of investment mistakes more difficult.  This is the ego in each of us.  None of us likes to admit to himself that he has been wrong.  If we have made a mistake in buying a stock but can sell the stock at a small profit, we have somehow lost any sense of having been foolish.  On the other hand, if we sell at a small loss we are quite unhappy about the whole matter.  This reaction, while completely natural and normal, is probably one of the most dangerous in which we can indulge ourselves in the entire investment process.  More money has probably been lost by investors holding a stock they really did not want until they could ‘at least come out even’ than from any other single reason.  If to these actual losses are added the profits that might have been made through the proper reinvestment of these funds if such reinvestment had been made when the mistake was first realized, the cost of self-indulgence becomes truly tremendous.

Furthermore this dislike of taking a loss, even a small loss, is just as illogical as it is natural.  If the real object of common stock investment is the making of a gain of a great many hundreds of per cent over a period of years, the difference between, say, a 20 per cent loss or a 5 per cent profit becomes a comparatively insignificant matter…

While losses should never cause strong self-disgust or emotional upset, neither should they be passed over lightly.  They should always be reviewed with care so that a lesson is learned from each of them.  If the particular elements which caused a misjudgment on a common stock purchase are thoroughly understood, it is unlikely that another poor purchase will be made through misjudging the same investment factors.  (page 78)

The second reason for selling (the first being the discovery of a mistake) is that the company no longer qualifies with respect to the fifteen points.  Usually this is either because there has been a deterioration of management or because the company no longer has the same growth prospects.

Deterioration of management, writes Fisher, is sometimes due to complacency, but it usually is because new top executives are not as good as their predecessors.

A third reason for selling is that a much better investment opportunity has been found.  Attractive investments are extremely hard to find, observes Fisher.  When you do find one, it’s often worth switching (including paying capital gains taxes) if the new opportunity appears to have much more upside than some current investment.

Once you have found a good company, you should rarely sell.  Even if you knew a bear market was about to occur – which can very rarely, if ever, be known – selling and then re-buying is risky and time-consuming if your stock will probably reach a new high in the next bull market.

You can’t know how far a specific stock will decline – if at all – and thus you won’t know when to buy the stock back.  Also, the stock may not necessarily decline at the same rate, or even at the same time, as the general market.  In other words, if your stock is likely to increase at least 400% eventually, say from a price of $20 a share to $100+ a share, then it’s risky and time-consuming to try to sell at $20 and buy it back at $16 or $12.  Many investors who try to do this end up not buying the stock back below where they sold it.  Fisher sums it up:

That which really matters is not to disturb a position that is going to be worth a great deal more later.  (page 83)

This is even more true when you factor in capital gains taxes.
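A rough per-share sketch with assumed numbers (a $10 cost basis, a 20% capital-gains rate, and a hypothetical dip) shows how little is gained even when the sell-and-rebuy maneuver works:

```python
def sell_and_rebuy_final(cost, sell_price, rebuy_price, final_price,
                         tax_rate=0.20):
    """Sell at `sell_price`, pay capital-gains tax on the gain over
    `cost`, reinvest all after-tax proceeds at `rebuy_price`, and
    hold to `final_price`.  Illustrative tax rate only."""
    proceeds = sell_price - tax_rate * max(0.0, sell_price - cost)
    return (proceeds / rebuy_price) * final_price
```

Selling at $20, paying tax, and re-buying at $16 leaves $112.50 per original share at a final price of $100, versus $100 for simply holding; if the dip never comes and you re-buy at $20, you end with only $90.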

Some argue that if a stock has increased a great deal, you should sell it.  This makes no sense, says Fisher.  If the stock is a long-term winner of the sort you’re looking for, then by definition it’s going to increase significantly and frequently be hitting new all-time highs.  Fisher concludes:

If the job has been correctly done when a common stock is purchased, the time to sell it is—almost never.  (page 85)



If you’ve found an excellently managed growth company – a company that can maintain a relatively high ROIC, including on reinvested earnings – then you should prefer low dividends or no dividends.  Fisher:

Actually dividend considerations should be given the least, not the most, weight by those desiring to select outstanding stocks.  Perhaps the most peculiar aspect of this much-discussed subject of dividends is that those giving them the least consideration usually end up getting the best dividend return.  Worthy of repetition here is that over a span of five to ten years, the best dividend results will come not from the high-yield stocks but from those with the relatively low yield.  So profitable are the results of the ventures opened up by exceptional managements that while they still continue the policy of paying out a low proportion of current earnings, the actual number of dollars paid out progressively exceed what could have been obtained from high-yield shares.  Why shouldn’t this natural and logical trend continue in the future?  (pages 94-95)
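Fisher's dividend claim is a compounding effect.  With hypothetical numbers (a 1.5% starting yield growing 15% a year versus a static 5% yield, both on the original purchase price), the crossover arrives within a decade:

```python
def years_until_dividend_exceeds(low_yield, dividend_growth, high_yield):
    """Years until a growing dividend on the original purchase price
    pays more per year than a static high-yield alternative."""
    years, d = 0, low_yield
    while d <= high_yield:
        d *= 1 + dividend_growth
        years += 1
    return years
```

On these assumptions the growing dividend overtakes the static 5% payout in year 9, and the gap widens every year after that.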

At the extreme, for an outstanding company that will grow for decades, it may be best if the company paid no dividends at all.  If you bought Berkshire Hathaway at the beginning of 1965 and held it through the end of 2015, you would have gotten 20.8% annual returns versus 9.7% for the S&P 500 (including dividends).  Your cumulative return for holding Berkshire stock would come to 1,598,284% versus 11,335% for the S&P 500 (including dividends).  Berkshire has never paid a dividend because Buffett and Munger have always been able to find better uses for the cash over the years.
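The Berkshire figures are just compounding at work.  The sketch below converts an annual return into a cumulative percentage gain; note that compounding the rounded 20.8% only approximately reproduces the stated 1,598,284%, since the exact annual rate is slightly higher than the rounded figure:

```python
def cumulative_pct_gain(annual_return, years):
    """Compound an annual return into a cumulative percentage gain."""
    return ((1 + annual_return) ** years - 1) * 100
```

Compounding 20.8% for the 51 years from 1965 through 2015 gives roughly a 1.5 million percent cumulative gain; 9.7% gives roughly 11,000%.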



Don’t buy into promotional companies.

All too often, young promotional companies are dominated by one or two individuals who have great talent for certain phases of business procedure but are lacking in other equally essential talents.  They may be superb salesmen but lack other types of business ability.  More often they are inventors or production men, totally unaware that even the best products need skillful marketing as well as manufacture.  The investor is seldom in a position to convince such individuals of the skills missing in themselves or their young organizations.  Usually he is even less in a position to point out to such individuals where such talents may be found.  (page 97)

Don’t ignore a good stock just because it is traded ‘over the counter.’

The key point here is just to be sure you are investing in the right company.

Don’t buy a stock just because you like the ‘tone’ of its annual report.

Often annual reports are either overly optimistic or they fail to disclose material information needed by the investor.  Very often you need to look beyond the annual report in order to find all important information.

Don’t assume that the high price at which a stock may be selling in relation to earnings is necessarily an indication that further growth in those earnings has largely been already discounted in the price.

If a company can grow profitably in the future like it has in the past, then even with a high P/E, the stock may still be a good buy.  Fisher:

This is why some of the stocks that at first glance appear highest priced may, upon analysis, be the biggest bargains.  (page 105)

Don’t quibble over eighths and quarters.

If you’ve found a well-managed growth company whose stock is likely to increase at least several hundred percent in the future, then obviously it would be a big mistake to miss it just because the price is slightly higher than what you want.



Don’t overstress diversification.

Investors have been so oversold on diversification that fear of having too many eggs in one basket has caused them to put far too little into companies they thoroughly know and far too much in others about which they know nothing at all.  It never seems to occur to them… that buying a company without having sufficient knowledge of it may be even more dangerous than having inadequate diversification.  (pages 108-109)

When Buffett was managing the Buffett Partnerships (1957 to 1970), he put 40% of the portfolio in American Express in the mid-1960s when the stock fell due to the salad oil scandal.  Buffett and Munger have always believed in concentrating on their best ideas.  Buffett:

We believe that a policy of portfolio concentration may well decrease risk if it raises, as it should, both the intensity with which an investor thinks about a business and the comfort-level he must feel with its economic characteristics before buying into it.

Buffett again in a 1998 lecture at the University of Florida:

If you can identify six wonderful businesses, that is all the diversification you need.  And you will make a lot of money.  And I can guarantee that going into the seventh one instead of putting more money into your first one is [going to] be a terrible mistake.  Very few people have gotten rich on their seventh best idea.  So I would say for anyone working with normal capital who really knows the businesses they have gone into, six is plenty, and I [would] probably have half of [it in] what I like best.


Fisher summarizes:

In the field of common stocks, a little bit of a great many can never be more than a poor substitute for a few of the outstanding.  (page 118)

Don’t be afraid of buying on a war scare.

Fisher explains:

Through the entire twentieth century, with a single exception, every time major war has broken out anywhere in the world or whenever American forces have become involved in any fighting whatever, the American stock market has always plunged sharply downward.  This one exception was the outbreak of World War II in September 1939.  At that time, after an abortive rally on thoughts of fat war contracts to a neutral nation, the market soon was following the typical downward course, a course which some months later resembled panic as news of German victories began piling up.  Nevertheless, at the conclusion of all actual fighting – regardless of whether it was World War I, World War II, or Korea – most stocks were selling at levels vastly higher than prevailed before there was any thought of war at all.  (page 118)

Whether stocks end up higher due to inflationary government policies, or whether stocks actually are worth more, depends on circumstances, writes Fisher.  Yet either way, buying stocks after the initial war scare has been the right move.

Don’t forget your Gilbert and Sullivan.

Some investors look at the highest and lowest price at which a stock has traded in each of the past five years.  This is illogical and dangerous, writes Fisher, because what really matters is how the company – and stock – will perform for many years into the future.  A good growth stock will increase at least several hundred percent from its current price as a result of the company’s future economic performance.  Past stock prices are largely irrelevant.

Don’t fail to consider time as well as price in buying a true growth stock.

If you’ve followed a company for some time, you may occasionally notice that certain developments have consistently been followed by stock price increases.  Although it won’t always work, you can use this pattern as a guide to when to buy the stock.

Don’t follow the crowd.

Psychology can cause a stock to be priced almost anywhere in the short term, as the value investor Howard Marks has noted.  Fisher:

These great shifts in the way the financial community appraises the same set of facts at different times are by no means confined to stocks as a whole.  Particular industries and individual companies within those industries constantly change in financial favor, due as often to altered ways of looking at the same facts as to actual background occurrences themselves.  (page 131)



It’s difficult to find good investment ideas.  In your search, you may accidentally exclude a few of the best ideas, while spending a great deal of time on many stocks that won’t turn out to be good ideas.

Note:  Fisher is talking about growth stocks.  If you’re a value investor, then a quantitative investment strategy can work well over time.

One way to find good investment ideas is to see what top investors are doing.

Fisher offers some details about how he approaches potential investment ideas.  In the first stage, he does not seek to talk with anyone in management.  He does not go over old annual reports.  Fisher:

I will, however, glance over the balance sheet to determine the general nature of the capitalization and financial position.  If there is an SEC prospectus I will read with care those parts covering breakdown of total sales by product lines, competition, degree of officer or other major ownership of common stock (this can also usually be obtained from the proxy statement) and all earning statement figures throwing light on depreciation (and depletion, if any), profit margins, extent of research activity, and abnormal or non-recurring costs in prior years’ operations.

Now I am ready really to go to work.  I will use the ‘scuttlebutt’ method I have already described just as much as I possibly can… I will try to see (or reach by telephone) every key customer, supplier, competitor, ex-employee, or scientist in a related field that I know or whom I can approach through mutual friends.  However, suppose I still do not know enough people or do not have a friend of a friend who knows enough of the people who can supply me with the required background?  What do I do then?

Frankly, if I am not even close to getting much of the information I need, I will give up the investigation and go on to something else.  To make big money on investments it is unnecessary to get some answer to every investment that might be considered.  What is necessary is to get the right answer a large proportion of the very small number of times actual purchases are made.  For this reason, if way too little background is forthcoming and the prospects for a great deal more is bleak, I believe the intelligent thing to do is to put the matter aside and go on to something else.  (pages 140-141)

If you’ve finished ‘scuttlebutt’ research with regard to the fifteen points, then the next step is to approach management.  Only ‘scuttlebutt’ can give you enough knowledge to approach management with intelligent questions.

Fisher writes that he may find one worthwhile stock out of every 250 stocks he considers as possibilities.  He finds one good stock out of every 50 he looks at in some detail.  And he invests in about one out of every 2 or 2.5 companies he visits.  By the time Fisher visits a company, he has already uncovered via ‘scuttlebutt’ nearly all the important information.  If he can confirm his investment thesis when he meets with management, as well as ease some of his concerns, then he is ready to make the investment.



Fisher concludes Common Stocks and Uncommon Profits by noting the importance of temperament:

One of the ablest investment men I have ever known told me many years ago that in the stock market a good nervous system is even more important than a good head.  (page 148)

Or as Buffett put it:

Investing is not a game where the guy with the 160 IQ beats the guy with the 130 IQ… Once you have ordinary intelligence, what you need is the temperament to control the urges that get other people into trouble in investing.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:




Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Why You Shouldn’t Try Market Timing

(Image:  Zen Buddha Silence by Marilyn Barbone.)

July 2, 2017

In Investing: The Last Liberal Art (Columbia University Press, 2nd edition, 2013), Robert Hagstrom has an excellent chapter on decision making.  Hagstrom examines Philip Tetlock’s discussion of foxes versus hedgehogs.



Philip Tetlock, professor of psychology at the University of Pennsylvania, spent fifteen years (1988-2003) studying the political forecasts made by 284 experts.  As Hagstrom writes:

All of them were asked about the state of the world; all gave their prediction of what would happen next.  Collectively, they made over 27,450 forecasts.  Tetlock kept track of each one and calculated the results.  How accurate were the forecasts?  Sadly, but perhaps not surprisingly, the predictions of experts are no better than ‘dart-throwing chimpanzees.’  (page 149)

In other words, one could have rolled a six-sided die 27,450 times over the course of fifteen years and achieved the same level of predictive accuracy as this group of top experts.  (The predictions were in the form of:  more of X, no change in X, or less of X.  Rolling a six-sided die is one way to generate random outcomes among three equally likely scenarios.)
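The die-rolling baseline can be simulated directly.  A minimal sketch (the mapping of die faces to outcomes is an illustrative assumption, not from Tetlock’s study):

```python
import random

random.seed(42)  # reproducibility

# The three possible forecast outcomes described above.
OUTCOMES = ["more of X", "no change in X", "less of X"]

def die_forecast():
    """Map a six-sided die roll onto the three equally likely outcomes:
    1-2 -> 'more', 3-4 -> 'no change', 5-6 -> 'less'."""
    roll = random.randint(1, 6)
    return OUTCOMES[(roll - 1) // 2]

# Simulate 27,450 forecasts against randomly drawn "actual" outcomes.
n = 27_450
hits = sum(die_forecast() == random.choice(OUTCOMES) for _ in range(n))
print(f"random-guess accuracy: {hits / n:.1%}")  # close to 1/3
```

Random guessing lands near 33% accuracy, which is the ‘dart-throwing chimpanzee’ benchmark the experts failed to beat.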

In a nutshell, political experts generally achieve high levels of knowledge (about history, politics, etc.), but most of this knowledge does not help in making predictions.  When it comes to predicting the future, political experts suffer from overconfidence, hindsight bias, belief system defenses, and lack of Bayesian process, says Hagstrom.

Although the overall record of political forecasting is dismal, Tetlock was still able to identify a few key differences:

The aggregate success of the forecasters who behaved most like foxes was significantly greater than those who behaved like hedgehogs.  (page 150)

The distinction between foxes and hedgehogs goes back to an essay by Sir Isaiah Berlin entitled, ‘The Hedgehog and the Fox: An Essay on Tolstoy’s View of History.’  Berlin defined hedgehogs as thinkers who viewed the world through the lens of a single defining idea, and foxes as thinkers who were skeptical of grand theories and instead drew on a wide variety of ideas and experiences before making a decision.



Hagstrom clearly explains key differences between Foxes and Hedgehogs:

Why are hedgehogs penalized?  First, because they have a tendency to fall in love with pet theories, which gives them too much confidence in forecasting events.  More troubling, hedgehogs were too slow to change their viewpoint in response to disconfirming evidence.  In his study, Tetlock said Foxes moved 59 percent of the prescribed amount toward alternate hypotheses, while Hedgehogs moved only 19 percent.  In other words, Foxes were much better at updating their Bayesian inferences than Hedgehogs.

Unlike Hedgehogs, Foxes appreciate the limits of their own knowledge.  They have better calibration and discrimination scores than Hedgehogs.  (Calibration, which can be thought of as intellectual humility, measures how much your subjective probabilities correspond to objective probabilities.  Discrimination, sometimes called justified decisiveness, measures whether you assign higher probabilities to things that occur than to things that do not.)  Hedgehogs have a stubborn belief in how the world works, and they are more likely to assign probabilities to things that have not occurred than to things that actually occur.

Tetlock tells us Foxes have three distinct cognitive advantages.

  1. They begin with ‘reasonable starter’ probability estimates. They have better ‘inertial-guidance’ systems that keep their initial guesses closer to short-term base rates.
  2. They are willing to acknowledge their mistakes and update their views in response to new information. They have a healthy Bayesian process.
  3. They can see the pull of contradictory forces, and, most importantly, they can appreciate relevant analogies.

Hedgehogs start with one big idea and follow through – no matter the logical implications of doing so.  Foxes stitch together a collection of big ideas.  They see and understand the analogies and then create an aggregate hypothesis.  I think we can say the fox is the perfect mascot for the College of Liberal Arts Investing.  (pages 150-151)
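The ‘59 percent of the prescribed amount’ figure measures how far a forecaster moves toward the posterior that Bayes’ rule prescribes.  A minimal sketch with hypothetical numbers (the prior and likelihoods are invented for illustration; only the 59%/19% movement ratios come from Tetlock):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' rule."""
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

# Hypothetical: a forecaster is 80% confident in hypothesis H, then sees
# disconfirming evidence that is twice as likely if H is false.
prior = 0.80
posterior = bayes_update(prior, 0.30, 0.60)  # the prescribed answer, ~0.67
prescribed_move = prior - posterior

# Per Tetlock, foxes moved ~59% of the prescribed amount, hedgehogs ~19%.
fox = prior - 0.59 * prescribed_move
hedgehog = prior - 0.19 * prescribed_move
print(f"Bayes: {posterior:.2f}  fox: {fox:.2f}  hedgehog: {hedgehog:.2f}")
```

The fox ends up much closer to the Bayesian answer; the hedgehog barely budges from the original 80%.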



We have two classes of forecasters: Those who don’t know – and those who don’t know they don’t know. – John Kenneth Galbraith

Last year, I wrote about The Most Important Thing, a terrific book by the great value investor Howard Marks.

One of the sections from that blog post, ‘Knowing What You Don’t Know,’ is directly relevant to the discussion of foxes versus hedgehogs.  We can often ‘take the temperature’ of the stock market.  Thus, we can have some idea that the market is high and may fall after an extended period of increases.

But we can never know for sure that the market will fall, and if so, when precisely.  In fact, the market does not even have to fall much at all.  It could move sideways for a decade or two, and still end up at more normal levels.  Thus, we should always focus our energy and time on finding individual securities that are undervalued.

There could always be a normal bear market, meaning a drop of 15-25%.  But that doesn’t conflict with a decade or two of a sideways market.  If we own stocks that are cheap enough, we could still be fully invested.  Even when the market is quite high, there are usually cheap micro-cap stocks, for instance.  Buffett made a comment indicating that he would have been fully invested in 1999 if he were managing a small enough sum to be able to focus on micro caps:

If I was running $1 million, or $10 million for that matter, I’d be fully invested.

There are a few cheap micro-cap stocks today.  Moreover, some oil-related stocks are cheap from a 5-year point of view.

Warren Buffett, when he was running the Buffett Partnership, knew for a period of almost ten years (roughly 1960 to 1969) that the stock market was high (and getting higher) and would either fall or move sideways for many years.  Yet he was smart enough never to predict precisely when the correction would occur.  Because he stayed focused on finding individual companies that were undervalued, Buffett produced an outstanding track record for the Buffett Partnership.  Had he avoided cheap stocks whenever he knew the stock market was high, he would not have produced such an excellent track record.  (For more about the Buffett Partnership, see my earlier post.)

Buffett on forecasting:

We will continue to ignore political and economic forecasts, which are an expensive distraction for many investors and businessmen.

Charlie and I never have an opinion on the market because it wouldn’t be any good and it might interfere with the opinions we have that are good.

Here is what Ben Graham, the father of value investing, said about forecasting the stock market:

…if I have noticed anything over these 60 years on Wall Street, it is that people do not succeed in forecasting what’s going to happen to the stock market.

Howard Marks has tracked (in a limited way) many macro predictions, including U.S. interest rates, the U.S. stock market, and the yen/dollar exchange rate.  He found quite clearly that most forecasts were not correct.

I can elaborate on two examples that I spent much time on (when I should have stayed focused on finding individual companies available at cheap prices):

  • the U.S. stock market
  • the yen/dollar exchange

The U.S. stock market

A secular bear market for U.S. stocks began (arguably) in the year 2000 when the 10-year Graham-Shiller P/E – also called the CAPE (cyclically adjusted P/E) – was over 30, its highest level in U.S. history.  The long-term average CAPE is around 16.  Based on over one hundred years of history, the pattern for U.S. stocks in a secular bear market would be relatively flat or lower until the CAPE approached 10.
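The CAPE divides the current price by the average of the trailing ten years of inflation-adjusted earnings.  A minimal sketch with made-up numbers (the index level and earnings series are hypothetical, chosen only to land in the ‘over 30’ territory described above):

```python
def cape(price, real_eps_history):
    """Cyclically adjusted P/E: current price divided by the average of
    the trailing ten years of inflation-adjusted earnings per share."""
    assert len(real_eps_history) == 10
    return price / (sum(real_eps_history) / len(real_eps_history))

# Hypothetical index level and ten years of real earnings per share.
real_eps = [55, 60, 48, 52, 65, 70, 58, 62, 68, 72]  # 10-yr average: 61
print(round(cape(1860, real_eps), 1))  # -> 30.5
```

Averaging ten years of real earnings smooths out the business cycle, which is why the CAPE is a better gauge of secular over- or under-valuation than a single year’s P/E.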

However, ever since Greenspan started running the Fed in the 1980s, the Fed has usually had a policy of stimulating the economy and stocks by lowering rates or keeping them as low as possible.  This has caused U.S. stocks to be much higher than they would otherwise be.  For instance, with rates today staying near zero, U.S. stocks could easily remain at least twice as high as ‘normal’ indefinitely, assuming the Fed decides to keep rates low for many more years.  Furthermore, as Buffett has noted, very low rates sustained for many decades would eventually mean price/earnings ratios on stocks of 100.

In addition to the current Fed regime, there are several additional reasons why rates may stay low.  As Jeremy Grantham recently wrote:

  • We could be between waves of innovation, which suppresses growth and the demand for capital.
  • Population in the developed world and in China is rapidly aging. With more middle-aged savers and fewer high-consuming young workers, the result could be excess savings that depress all returns on capital.
  • Nearly 100% of all the recovery in total income since 2009 has gone to the top 0.1%.

Grantham discusses all of these possible reasons for low rates in the Q3 2016 GMO Letter.

Grantham gives more detail on income inequality in the Q4 2016 GMO Letter.

(In order to see GMO commentaries, you may have to register but it’s free.)

Around the year 2012 (or even earlier), some of the smartest market historians – including Russell Napier, author of Anatomy of the Bear – started predicting that the S&P 500 Index would fall towards a CAPE of 10 or lower, which is how every previous U.S. secular bear market concluded.  It didn’t happen in 2012, or in 2013, or in 2014, or in 2015, or in 2016.  Moreover, it may not happen in 2017 or even 2018.

Again, there could always be a normal bear market involving a drop of 15-25%.  But that doesn’t conflict with a sideways market for a decade or two.  Grantham suggests total returns of about 2.8% per year for the next 20 years.
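Grantham’s 2.8% per year, compounded over 20 years, works out to a cumulative return of only about 74%.  A quick check, using only the figure quoted above:

```python
def cumulative_return(annual_rate, years):
    """Total return from compounding annual_rate for the given number of years."""
    return (1 + annual_rate) ** years - 1

# Grantham's suggested ~2.8%/year over the next 20 years:
print(f"{cumulative_return(0.028, 20):.0%}")  # -> 74%
```

That is a modest total over two decades, which is consistent with the ‘sideways market’ scenario rather than a crash.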

Grantham, an expert on bubbles, also pointed out that the usual ingredients for a bubble do not exist today.  Normally in a bubble, there are excellent economic fundamentals combined with a euphoric extrapolation of those fundamentals into the future.  Grantham, in the Q3 2016 GMO Letter:

  • Current fundamentals are way below optimal – trend line growth and productivity are at such low levels that the usually confident economic establishment is at an obvious loss to explain why. Capacity utilization is well below peak and has been falling.  There is plenty of available labor hiding in the current low participation rate (at a price).  House building is also far below normal.
  • Classic bubbles have always required that the geopolitical world is at least acceptable, more usually well above average.  Today’s, in contrast, you can easily agree is unusually nerve-wracking.
  • Far from euphoric extrapolations, the current market has been for a long while and remains extremely nervous. Investor trepidation is so great that many are willing to tie up money in ultra-safe long-term government bonds that guarantee zero real return rather than buy the marginal share of stock!  Cash reserves are high and traditional measures of speculative confidence are low.  Most leading commentators are extremely bearish.  The net effect of this nervousness is shown in the last two and a half years of the struggling U.S. market…so utterly unlike the end of the classic bubbles.
  • …They – the bubbles in stocks and houses – all coincided with bubbles in credit…Credit is, needless to say, complex…What is important here is the enormous contrast between the credit conditions that previously have been coincident with investment bubbles and the lack of a similarly consistent and broad-based credit boom today.

The yen/dollar exchange

As for the yen/dollar exchange, some of the smartest macro folks around predicted (in 2010 and later) that shorting the yen vs. the U.S. dollar would be the ‘trade of the decade,’ and that the yen/dollar exchange would exceed 200.  In 2007, the yen/dollar was over 120.  By 2011-2012, the yen/dollar had gone to around 76.  In late 2014 and for most of 2015, the yen/dollar again exceeded 120.  However, in late 2015, the BOJ decided not to try to weaken their currency further by printing even larger amounts of money.  The yen/dollar declined from over 120 to about 106.  Since then, it has remained below 120.

The ‘trade of the decade’ argument was the following:  debt-to-GDP in Japan has reached stratospheric levels (over 400-500%, including over 250% for government debt-to-GDP), government deficits have continued to widen, and the Japanese population is actually shrinking.  Since long-term GDP growth is essentially population growth plus productivity growth, it would become mathematically impossible for the Japanese government to pay back its debt without a significant devaluation of its currency.  If the BOJ could devalue the yen by 67% – which would imply a yen/dollar exchange rate of well over 200 – then Japan could repay the government debt in seriously devalued currency.  In this scenario – a yen devaluation of 67% – Japan effectively would only have to repay 33% of the government debt.  Currency devaluation – inflating away the debts – is what most major economies throughout history have done.
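The devaluation arithmetic can be checked in a few lines (the debt figure is hypothetical; only the 67%/33% relationship comes from the argument above):

```python
def real_debt_burden(nominal_debt, devaluation):
    """Real value of a nominal debt after the currency loses `devaluation`
    (a fraction, e.g. 0.67 for a 67% devaluation) of its purchasing power."""
    return nominal_debt * (1 - devaluation)

# Hypothetical: 1,000 units of government debt, currency devalued 67%.
print(round(real_debt_burden(1000, 0.67)))  # -> 330, i.e. 33% of face value
```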

Although the U.S. dollar may be stronger than the yen or the euro, all three governments want to devalue their currencies over time.  Therefore, even if the yen eventually loses value against the dollar, it’s not at all clear how long that will take.  The yen ‘collapse’ could be delayed by many years.  So if you compare a yen/dollar short position with a micro-cap value investment strategy, it’s likely that the micro-cap value strategy will produce higher returns with less risk.

  • Similar logic applies to market timing. You may get lucky once in a row trying to time the market.  But simply buying cheap stocks – and holding them for at least 3 to 5 years before buying cheaper stocks – is likely to do much better over the course of decades.  Countless extremely intelligent investors throughout history have gone mostly to cash based on a market prediction, only to see the market continue to move higher for many years or even decades.  Again:  even if the market is high, it can go sideways for a decade or two.  If you buy baskets of cheap micro caps for a decade or two, there is virtually no chance of losing money, and there’s an excellent chance of doing well.

Also, the total human economy is likely to be much larger in the future, and there may be some way to help the Japanese government with its debts.  The situation wouldn’t seem so insurmountable if Japan could grow its population.  That might happen in some indirect way if the total economy becomes more open in the future, perhaps involving the creation of a new universal currency.



Financial forecasting cannot be done with any sort of consistency.  Every year, there are many people making financial forecasts, and so purely as a matter of chance, a few will be correct in a given year.  But the ones correct this year are almost never the ones correct the next time around, because what they’re trying to predict can’t be predicted with any consistency.  Howard Marks writes:

I am not going to try to prove my contention that the future is unknowable.  You can’t prove a negative, and that certainly includes this one.  However, I have yet to meet anyone who consistently knows what lies ahead macro-wise…

One way to get to be right sometimes is to always be bullish or always be bearish; if you hold a fixed view long enough, you may be right sooner or later.  And if you’re always an outlier, you’re likely to eventually be applauded for an extremely unconventional forecast that correctly foresaw what no one else did.  But that doesn’t mean your forecasts are regularly of any value…

It’s possible to be right about the macro-future once in a while, but not on a regular basis.  It doesn’t do any good to possess a survey of sixty-four forecasts that includes a few that are accurate; you have to know which ones they are.  And if the accurate forecasts each six months are made by different economists, it’s hard to believe there’s much value in the collective forecasts.

Marks gives one more example:  How many predicted the crisis of 2007-2008?  Of those who did predict it – there were bound to be some from pure chance alone – how many then predicted the recovery starting in 2009 and continuing until today (early 2017)?  The answer is ‘very few.’  The reason, observes Marks, is that those who got 2007-2008 right “did so at least in part because of a tendency toward negative views.”  They probably were negative well before 2007-2008 and, more importantly, they probably stayed negative afterward.  And yet, from a close of 676.53 on March 9, 2009, the S&P 500 Index has increased more than 240% to a close of 2316.10 on February 10, 2017.
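The ‘more than 240%’ figure follows directly from the two closing prices.  A quick check:

```python
def pct_gain(start, end):
    """Percentage gain from a starting price to an ending price."""
    return (end / start - 1) * 100

# S&P 500 closes cited above: 3/9/2009 and 2/10/2017.
print(f"{pct_gain(676.53, 2316.10):.0f}%")  # -> 242%
```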

Marks has a description for investors who believe in the value of forecasts.  They belong to the ‘I know’ school, and it’s easy to identify them:

  • They think knowledge of the future direction of economies, interest rates, markets and widely followed mainstream stocks is essential for investment success.
  • They’re confident it can be achieved.
  • They know they can do it.
  • They’re aware that lots of other people are trying to do it too, but they figure either (a) everyone can be successful at the same time, or (b) only a few can be, but they’re among them.
  • They’re comfortable investing based on their opinions regarding the future.
  • They’re also glad to share their views with others, even though correct forecasts should be of such great value that no one would give them away gratis.
  • They rarely look back to rigorously assess their record as forecasters. (page 121)

Marks contrasts the confident ‘I know’ folks with the guarded ‘I don’t know’ folks.  The latter believe you can’t predict the macro-future, and thus the proper goal for investing is to do the best possible job analyzing individual securities.  If you belong to the ‘I don’t know’ school, eventually everyone will stop asking you where you think the market’s going.

You’ll never get to enjoy that one-in-a-thousand moment when your forecast comes true and the Wall Street Journal runs your picture.  On the other hand, you’ll be spared all those times when forecasts miss the mark, as well as the losses that can result from investing based on overrated knowledge of the future.

No one likes investing on the assumption that the future is unknowable, observes Marks.  But if the future IS largely unknowable, then it’s far better as an investor to acknowledge that fact than to pretend otherwise.

Furthermore, says Marks, the biggest problems for investors tend to happen when investors forget the difference between probability and outcome (i.e., the limits of foreknowledge):

  • when they believe the shape of the probability distribution is knowable with certainty (and that they know it),
  • when they assume the most likely outcome is the one that will happen,
  • when they assume the expected result accurately represents the actual result, or
  • perhaps most important, when they ignore the possibility of improbable outcomes.

Marks sums it up:

Overestimating what you’re capable of knowing or doing can be extremely dangerous – in brain surgery, transocean racing or investing.  Acknowledging the boundaries of what you can know – and working within those limits rather than venturing beyond – can give you a great advantage.  (page 123)

Or as Warren Buffett wrote in the 2014 Berkshire Hathaway Letter to Shareholders:

Anything can happen anytime in markets.  And no advisor, economist, or TV commentator – and definitely not Charlie nor I – can tell you when chaos will occur.  Market forecasters will fill your ear but will never fill your wallet.





Does the Stock Market Overreact?

(Image:  Zen Buddha Silence by Marilyn Barbone.)

June 18, 2017

Richard H. Thaler recently published a book entitled Misbehaving: The Making of Behavioral Economics.  It’s an excellent book.  According to Nobel Laureate Daniel Kahneman, Richard Thaler is “the creative genius who invented the field of behavioral economics.”

Thaler defines “Econs” as the fully rational human beings that traditional economists have always assumed for their models.  “Humans” are often less than fully rational, as demonstrated not only by decades of experiments, but also by the history of various asset prices.

For this blog post, I will focus on Part VI (Finance, pages 203-253).  But first a quotation Thaler has at the beginning of his book:

The foundation of political economy and, in general, of every social science, is evidently psychology.  A day may come when we shall be able to deduce the laws of social science from the principles of psychology.

– Vilfredo Pareto, 1906



Chicago economist Eugene Fama coined the term “efficient market hypothesis,” or EMH for short.  Thaler writes that the EMH has two (related) components:

  • the price is right – the idea is that any asset will sell for its “intrinsic value.” “If the rational valuation of a company is $100 million, then its stock will trade such that the market cap of the firm is $100 million.”
  • no free lunch – EMH holds that all publicly available information is already reflected in current stock prices, thus there is no reliable way to “beat the market” over time.

NOTE:  If prices are always right, then there can never be bubbles in asset prices.  It also implies that there are no undervalued stocks – at least none that an investor could consistently identify – and no way to “beat the market” over a long period of time except by luck.  On this view, Warren Buffett was simply lucky.

Thaler observes that finance did not become a mainstream topic in economics departments until the advent of cheap computing power and good data.  The University of Chicago was the first to develop a comprehensive database of stock prices going back to 1926.  After that, research took off, and by 1970 the EMH was well-established.

Thaler also points out that the famous economist J. M. Keynes was “a true forerunner of behavioral finance.”  Keynes, who was a great value investor, thought that “animal spirits” play an important role in financial markets.

Keynes also observed that professional investors are playing an intricate guessing game, similar to picking out the prettiest faces from a set of photographs:

…It is not a case of choosing those which, to the best of one’s judgment, are really the prettiest, nor even those which average opinion genuinely thinks the prettiest.  We have reached the third degree where we devote our intelligences to anticipating what average opinion expects average opinion to be.  And there are some, I believe, who practice the fourth, fifth, and higher degrees.



On average, investors overreact to recent poor performance for low P/E stocks, which is why the P/E’s are low.  And, on average, investors overreact to recent good performance for high P/E stocks, which is why the P/E’s are high.

Having said that, Thaler is quick to quote a warning by Ben Graham about timing: ‘Undervaluations caused by neglect or prejudice may persist for an inconveniently long time, and the same applies to inflated prices caused by overenthusiasm or artificial stimulus.’  Thaler gives the example of the late 1990s:  for years, Internet stocks just kept going up, while value stocks just kept massively underperforming.

According to Thaler, most academic financial economists overlooked Graham’s work:

It was not so much that anyone had refuted Graham’s claim that value investing worked;  it was more that the efficient market theory of the 1970s said that value investing couldn’t work.  But it did.  Late that decade, accounting professor Sanjoy Basu published a thoroughly competent study of value investing that fully supported Graham’s strategy.  However, in order to get such papers published at the time, one had to offer abject apologies for the results.  (page 221)

Thaler and his research partner Werner De Bondt came up with the following.  Suppose that investors are overreacting.  Suppose that investors are overly optimistic about the future growth of high P/E stocks, thus driving the P/E’s “too high.”  And suppose that investors are excessively pessimistic about low P/E stocks, thus driving the P/E’s “too low.”  Then subsequent high returns from value stocks and low returns from growth stocks represent simple reversion to the mean.  But EMH says that:

  • The price is right:  Stock prices cannot diverge from intrinsic value.
  • No free lunch:  Because all information is already in the stock price, it is not possible to beat the market. Past stock prices and the P/E cannot predict future price changes.

Thaler and De Bondt took all the stocks listed on the New York Stock Exchange, and ranked their performance over three to five years.  They isolated the worst performing stocks, which they called “Losers.”  And they isolated the best performing stocks, which they called “Winners.”  Writes Thaler:

If markets were efficient, we should expect the two portfolios to do equally well.  After all, according to the EMH, the past cannot predict the future.  But if our overreaction hypothesis were correct, Losers would outperform Winners.  (page 223)


The results strongly supported our hypothesis.  We tested for overreaction in various ways, but as long as the period we looked back at to create the portfolios was long enough, say three years, then the Loser portfolio did better than the Winner portfolio.  Much better.  For example, in one test we used five years of performance to form the Winner and Loser portfolios and then calculated the returns of each portfolio over the following five years, compared to the overall market.  Over the five-year period after we formed our portfolios, the Losers outperformed the market by about 30% while the Winners did worse than the market by about 10%.
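The ranking procedure behind the Winner/Loser test can be sketched in a few lines.  The tickers and returns below are made up for illustration; they are not De Bondt and Thaler’s actual data:

```python
# Sketch of the De Bondt-Thaler overreaction test (hypothetical data).
# Rank stocks by trailing five-year return, form "Loser" and "Winner"
# portfolios from the extremes, then compare subsequent performance.

def form_portfolios(past_returns, n=3):
    """past_returns: dict of ticker -> trailing five-year total return."""
    ranked = sorted(past_returns, key=past_returns.get)
    losers = ranked[:n]        # worst past performers
    winners = ranked[-n:]      # best past performers
    return losers, winners

def portfolio_return(tickers, future_returns):
    """Equal-weighted average of subsequent returns."""
    return sum(future_returns[t] for t in tickers) / len(tickers)

# Hypothetical trailing and subsequent five-year returns for nine stocks.
past = {"A": -0.60, "B": -0.45, "C": -0.30, "D": 0.05, "E": 0.10,
        "F": 0.20, "G": 0.90, "H": 1.20, "I": 1.50}
future = {"A": 0.55, "B": 0.40, "C": 0.35, "D": 0.12, "E": 0.10,
          "F": 0.08, "G": -0.05, "H": -0.10, "I": -0.15}

losers, winners = form_portfolios(past)
print("Losers:", portfolio_return(losers, future))    # mean reversion: high
print("Winners:", portfolio_return(winners, future))  # mean reversion: low
```

If the overreaction hypothesis holds, the Loser portfolio’s subsequent return exceeds the Winner portfolio’s, as in this toy data.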



In response to widespread evidence that ‘Loser’ stocks (low P/E) – as a group – outperform ‘Winner’ stocks, defenders of EMH were forced to argue that ‘Loser’ stocks are riskier as a group.

NOTE:  On an individual stock basis, a low P/E stock may be riskier.  But a basket of low P/E stocks generally far outperforms a basket of high P/E stocks.  The question is whether a basket of low P/E stocks is riskier than a basket of high P/E stocks.

According to the CAPM (Capital Asset Pricing Model), the measure of the riskiness of a stock is its correlation with the rest of the market, or “beta.”  If a stock has a beta of 1.0, then its volatility is similar to the volatility of the whole market.  If a stock has a beta of 2.0, then its volatility is double the volatility of the whole market (e.g., if the whole market goes up or down by 10%, then this individual stock will go up or down by 20%).
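Beta is the slope from regressing a stock’s returns on the market’s returns: the covariance of the two return series divided by the variance of the market’s returns.  A minimal sketch with made-up monthly returns:

```python
# Beta = Cov(stock, market) / Var(market), computed from return series.
# The return data below are made up for illustration.

def beta(stock_returns, market_returns):
    n = len(market_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

market = [0.01, -0.02, 0.03, 0.015, -0.01]
stock = [0.02, -0.04, 0.06, 0.03, -0.02]  # moves twice as much as the market

print(beta(stock, market))  # -> 2.0
```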

According to CAPM, if the basket of Loser stocks subsequently outperforms the market while the basket of Winner stocks underperforms, then the Loser stocks must have high betas and the Winner stocks must have low betas.  But Thaler and De Bondt found the opposite.  Loser stocks (value stocks) were much less risky as measured by beta.

Eventually Eugene Fama himself, along with research partner Kenneth French, published a series of papers documenting that, indeed, both value stocks and small stocks earn higher returns than predicted by CAPM.  In short, “the high priest of efficient markets” (as Thaler calls Fama) had declared that CAPM was dead.

But Fama and French were not ready to abandon the EMH (Efficient Market Hypothesis).  They came up with the Fama-French Three Factor Model.  They showed that value stocks are correlated – a value stock will tend to do well when other value stocks are doing well.  And they showed that small-cap stocks are similarly correlated.

The problem, again, is that there is no evidence that a basket of value stocks is riskier than a basket of growth stocks.  And there is no theoretical reason to believe that value stocks, as a group, are riskier.

Thaler asserts that the debate was settled by the paper ‘Contrarian Investment, Extrapolation, and Risk’ published in 1994 by Josef Lakonishok, Andrei Shleifer, and Robert Vishny.  This paper shows clearly that value stocks outperform, and value stocks are, if anything, less risky than growth stocks.  Link to paper:

Lakonishok, Shleifer, and Vishny launched the highly successful LSV Asset Management based on their research:

(Recently Fama and French have introduced a five-factor model, which includes profitability.  Profitability was one of Ben Graham’s criteria.)



If you held a stock forever, it would be worth all future dividends discounted back to the present.  Even if you sold the stock, as long as you held it for a very long time, the distant future sales price (discounted back to the present) would be a negligible part of the intrinsic value of the stock.  The stock price is really the present value of all expected future dividend payments.
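The claim that a stock’s value is the discounted stream of its future dividends can be made concrete in a few lines (the dividend stream and discount rate here are hypothetical):

```python
# Present value of a dividend stream: each dividend discounted back
# to the present at rate r. Dividends and rate below are hypothetical.

def present_value(dividends, r):
    """dividends[t] is the dividend received t+1 years from now."""
    return sum(d / (1 + r) ** (t + 1) for t, d in enumerate(dividends))

# A constant $2 dividend for 50 years, discounted at 7%:
pv = present_value([2.0] * 50, 0.07)
print(round(pv, 2))

# As the horizon grows, the value approaches the perpetuity value d/r:
print(round(2.0 / 0.07, 2))  # -> 28.57
```

Note how little the distant years contribute: the first 50 years of dividends already capture nearly all of the perpetuity value.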

Bob Shiller collected historical data on stock prices and dividends.

Then, starting in 1871, for each year he computed what he called the ‘ex post rational’ forecast of the stream of future dividends that would accrue to someone who bought a portfolio of the stocks that existed at that time.  He did this by observing the actual dividends that got paid out and discounting them back to the year in question.  After adjusting for the well-established trend that stock prices go up over long periods of time, Shiller found that the present value of dividends was… highly stable.  But stock prices, which we should interpret as attempts to forecast the present value of dividends, are highly variable….  (231-232, my emphasis)

Shiller demonstrated that a stock price typically moves around much more than the intrinsic value of the underlying business.

October 1987 provides yet another example of stock prices moving much more than fundamental values.  The U.S. stock market dropped more than 25% from Thursday, October 15, 1987 to Monday, October 19, 1987.  This happened in the absence of any important news, financial or otherwise.  Writes Thaler:

If prices are too variable, then they are in some sense ‘wrong.’  It is hard to argue that the price at the close of trading on Thursday, October 15, and the price at the close of trading the following Monday – which was more than 25% lower – can both be rational measures of intrinsic value, given the absence of news.



It’s important to note that although the assumption of rationality and the EMH have been shown not to be true – at least strictly speaking – behavioral economists have not yet developed a model of human behavior that can supplant rationalist economics.  Therefore, rationalist economics, and not behavioral economics, is still the chief basis on which economists attempt to predict human behavior.

Neuroscientists, psychologists, biologists, and other scientists will undoubtedly learn much more about human behavior in the coming decades.  But even then, human behavior, due to its complexity, may remain partly unpredictable for some time.  Thus, rationalist economic models may continue to be useful.

  • Rationalist models, including game theory, may also be central to understanding and predicting artificially intelligent agents.
  • It’s also possible (as hard as it may be to believe) that human beings will evolve – perhaps partly with genetic engineering and/or with help from AI – and become more rational overall.

The Law of One Price

In an efficient market, the same asset cannot sell simultaneously for two different prices.  Thaler gives the standard example of gold selling for $1,000 an ounce in New York and $1,010 an ounce in London.  If transaction costs were small enough, a smart trader could buy gold in New York and sell it in London.  This would eventually cause the two prices to converge.
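With the numbers in the example, the round trip nets $10 an ounce before costs, so the trade pays only while the spread exceeds transaction costs.  A small sketch (the cost figure is hypothetical):

```python
# Law-of-one-price arbitrage: buy where cheap, sell where dear.
# Profitable only if the spread exceeds round-trip transaction costs.

def arbitrage_profit(buy_price, sell_price, cost_per_ounce, ounces):
    return (sell_price - buy_price - cost_per_ounce) * ounces

# Gold at $1,000 in New York, $1,010 in London, $3/oz total costs:
print(arbitrage_profit(1000, 1010, 3, 100))  # -> 700
```

As traders do this, buying pushes the New York price up and selling pushes the London price down, until the spread no longer covers the costs.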

But there is one obvious example that violates this law of one price:  closed-end funds, which had already been written about by Ben Graham.

For an open-end fund, all trades take place at NAV (net asset value).  Investors purchase a stake in an open-end fund from the fund itself, without there having to be a seller.  So the total amount invested in an open-end fund can vary depending upon what investors do.

But for a closed-end fund, there is an initial amount invested in the fund, say $100 million, and then there can be no further investments and no withdrawals.  A closed-end fund is traded on an exchange.  So an investor can buy partial ownership of a closed-end fund, but this means that a previous owner must sell that stake to the buyer.

According to EMH, closed-end funds should trade at NAV.  But in the real world, many closed-end funds trade at prices different from NAV (sometimes a premium and sometimes a discount).  This is an obvious violation of the law of one price.
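A fund’s premium or discount is simply the gap between its market price and its NAV per share.  A small sketch (the numbers are hypothetical):

```python
# Premium/discount of a closed-end fund: (price - NAV) / NAV.
# Numbers below are hypothetical.

def premium_discount(price, nav_per_share):
    """Positive -> trading at a premium; negative -> at a discount."""
    return (price - nav_per_share) / nav_per_share

# A fund holding $100 million of assets with 10 million shares has
# a NAV of $10 per share. If it trades at $8.50:
print(premium_discount(8.50, 10.0))  # -> -0.15, i.e., a 15% discount
```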

Charles Lee, Andrei Shleifer, and Richard Thaler wrote a paper on closed-end funds in which they identified four puzzles:

  • Closed-end funds are often sold by brokers with a sales commission of 7%. But within six months, the funds typically sell at a discount of more than 10%.  Why do people repeatedly pay $107 for an asset that in six months is worth $90?
  • More generally, why do closed-end funds so often trade at prices that differ from the NAV of their holdings?
  • The discounts and premia vary noticeably across time and across funds. This rules out many simple explanations.
  • When a closed-end fund, often under pressure from shareholders, changes its structure to an open-end fund, its price often converges to NAV.

The various premia and discounts on closed-end funds simply make no sense.  These mispricings would not exist if investors were rational because the only rational price for a closed-end fund is NAV.

Lee, Shleifer, and Thaler discovered that individual investors are the primary owners of closed-end funds.  So Thaler et al. hypothesized that individual investors have more noticeably shifting moods of optimism and pessimism.  Says Thaler:

We conjectured that when individual investors are feeling perky, discounts on closed-end funds shrink, but when they get depressed or scared, the discounts get bigger.  This approach was very much in the spirit of Shiller’s take on social dynamics, and investor sentiment was clearly one example of ‘animal spirits.’  (pages 241-242)

In order to measure investor sentiment, Thaler et al. used the fact that individual investors are more likely than institutional investors to own shares of small companies.  Thaler et al. reasoned that if the investor sentiment of individual investors changes, it would be apparent both in the discounts of closed-end funds and in the relative performance of small companies (vs. big companies).  And this is exactly what Thaler et al. found upon doing the research.  The greater the discounts to NAV for closed-end funds, the larger the difference was in returns between small stocks and large stocks.



Years later, Thaler revisited the law of one price with a Chicago colleague, Owen Lamont.  Owen had spotted a blatant violation of the law of one price involving the company 3Com.  3Com’s main business was in networking computers using Ethernet technology, but through a merger it had acquired Palm, maker of a very popular (at the time) handheld computer, the Palm Pilot.

In the summer of 1999, as most tech stocks seemed to double almost monthly, 3Com stock seemed to be neglected.  So management came up with the plan to divest itself of Palm.  3Com sold about 4% of its stake in Palm to the general public and 1% to a consortium of firms.  As for the remaining 95% of Palm, each 3Com shareholder would receive 1.5 shares of Palm for each share of 3Com they owned.

Once this information was public, one could infer the following:  As soon as the initial shares of Palm were sold and started trading, 3Com shareholders would in a sense have two separate investments.  A single share of 3Com included 1.5 shares of Palm plus an interest in the remaining parts of 3Com – what’s called the “stub value” of 3Com.  Note that the remaining parts of 3Com formed a profitable business in its own right.  So the bottom line is that one share of 3Com should equal the “stub value” of 3Com plus 1.5 times the price of Palm.

When Palm started trading, it ended the day at $95 per share.  So what should one share of 3Com be worth?  It should be worth the “stub value” of 3Com – the remaining profitable businesses of 3Com (Ethernet tech, etc.) – PLUS 1.5 times the price of Palm, or 1.5 x $95, which is $143.

Again, because the “stub value” of 3Com involves a profitable business in its own right, this means that 3Com should trade at X (the stub value) plus $143, so some price over $143.

But what actually happened?  The same day Palm started trading, ending the day at $95, 3Com stock fell to $82 per share.  Thaler writes:

That means that the market was valuing the stub value of 3Com at minus $61 per share, which adds up to minus $23 billion!  You read that correctly.  The stock market was saying that the remaining 3Com business, a profitable business, was worth minus $23 billion.  (page 246)

Thaler continues:

Think of it another way.  Suppose an Econ is interested in investing in Palm.  He could pay $95 and get one share of Palm, or he could pay $82 and get one share of 3Com that includes 1.5 shares of Palm plus an interest in 3Com.
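The arithmetic behind the negative stub value can be checked in a couple of lines, using the figures quoted above:

```python
# Implied "stub value" of 3Com: what the market assigned to everything
# other than the Palm stake. Figures are the ones quoted by Thaler.

def stub_value(parent_price, child_price, shares_per_parent):
    return parent_price - child_price * shares_per_parent

stub = stub_value(parent_price=82.0, child_price=95.0, shares_per_parent=1.5)
print(stub)  # -> -60.5, roughly the minus $61 per share Thaler cites
```

Multiplied across 3Com’s share count, that negative per-share stub is how the market arrived at valuing a profitable business at minus $23 billion.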

Thaler observes that two things are needed for such a blatant violation of the law of one price to emerge and persist:

  • You need some traders who want to own shares of the now publicly traded Palm, traders who appear not to realize the basic math of the situation.  These traders are called “noise traders,” because they are trading not based on real information (or real news), but based purely on “noise.”  (The term “noise traders” was invented by Fischer Black.)
  • There also must be something preventing smart traders from driving prices back to where they are supposed to be.  After all, the sensible investor can buy a share of 3Com for $82, and get 1.5 shares of Palm (worth $143) PLUS an interest in the remaining profitable businesses of 3Com.  Actually, the rational investor would go one step further:  buy 3Com shares (at $82) and then short an appropriate number of Palm shares (at $95).  When the deal is completed and the rational investor gets 1.5 shares of Palm for each share of 3Com owned, he can then use those shares of Palm to repay the shares he borrowed earlier when shorting the publicly traded Palm stock.  This was a CAN’T LOSE investment.  Then why wasn’t everyone trying to do it?

The problem was that there were very few shares of Palm being publicly traded.  Some smart traders made tens of thousands of dollars.  But there wasn’t enough publicly traded Palm stock available for any rational investor to make a huge amount of money.  So the irrational prices of 3Com and Palm were not corrected.

Thaler also tells a story about a young Benjamin Graham.  In 1923, DuPont owned a large number of shares of General Motors.  But the market value of DuPont was about the same as its stake in GM.  DuPont was a highly profitable firm.  So this meant that the stock market was putting the “stub value” of DuPont’s highly profitable business at zero.  Graham bought DuPont and sold GM short.  He made a lot of money when the price of DuPont went up to more rational levels.

In mid-2014, says Thaler, there was a point when Yahoo’s holdings of Alibaba were calculated to be worth more than the whole of Yahoo.

Sometimes, as with the closed-end funds, obvious mispricings can last for a long time, even decades.  Andrei Shleifer and Robert Vishny refer to this as the “limits of arbitrage.”



What are the implications of these examples?  If the law of one price can be violated in such transparently obvious cases as these, then it is abundantly clear that even greater disparities can occur at the level of the overall market.  Recall the debate about whether there was a bubble going on in Internet stocks in the late 1990s….  (page 250)

So where do I come down on the EMH?  It should be stressed that as a normative benchmark of how the world should be, the EMH has been extraordinarily useful.  In a world of Econs, I believe that the EMH would be true.  And it would not have been possible to do research in behavioral finance without the rational model as a starting point.  Without the rational framework, there are no anomalies from which we can detect misbehavior.  Furthermore, there is not as yet a benchmark behavioral theory of asset prices that could be used as a theoretical underpinning of empirical research.  We need some starting point to organize our thoughts on any topic, and the EMH remains the best one we have.  

When it comes to the EMH as a descriptive model of asset markets, my report card is mixed.  Of the two components, using the scale sometimes used to judge claims made by political candidates, I would judge the no-free-lunch component to be ‘mostly true.’  There are definitely anomalies:  sometimes the market overreacts, and sometimes it underreacts.  But it remains the case that most active money managers fail to beat the market…

I have a much lower opinion about the price-is-right component of the EMH, and for many important questions, this is the more important component…

My conclusion:  the price is often wrong, and sometimes very wrong.  Furthermore, when prices diverge from fundamental value by such wide margins, the misallocation of resources can be quite big.  For example, in the United States, where home prices were rising at a national level, some regions experienced especially rapid price increases and historically high price-to-rental ratios.  Had both homeowners and lenders been Econs, they would have noticed these warning signals and realized that a fall in home prices was becoming increasingly likely.  Instead, surveys by Shiller showed that these were the regions in which expectations about the future appreciation of home prices were the most optimistic.  Instead of expecting mean reversion, people were acting as if what goes up must go up even more.  (my emphasis)

Thaler adds that policy-makers should realize that asset prices are often wrong, and sometimes very wrong, instead of assuming that prices are always right.




An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:





The Investment Checklist

(Image:  Zen Buddha Silence by Marilyn Barbone.)

April 23, 2017

Michael Shearn is the author of The Investment Checklist (Wiley, 2012), a very good book about how to research stocks.

For investors who have a long-term investment time horizon, micro-cap value stocks should be a major focus.  I launched the Boole Microcap Fund to create a very low-cost way for investors to invest in undervalued micro-cap stocks.  Boole currently uses a fully quantitative investment strategy.  (Ultimately Boole will use an early form of artificial intelligence, which is a natural extension of a fully quantitative strategy.)

For investors who use a fully quantitative strategy, it’s worthwhile to review good investment checklists like Shearn’s.  Although in practice, a quantitative micro-cap strategy can rely primarily on a few simple metrics – for example, a high EBIT/EV and a high Piotroski F-Score – one must regularly look for ways to improve the formula.



Shearn writes that he came up with his checklist by studying his own mistakes, and also by studying mistakes other investors and executives had made.  Shearn says the checklist helps an investor to focus on what’s important.  Shearn argues that a checklist also helps one to fight against the strong human tendency to seek confirming evidence while ignoring disconfirming evidence.

Shearn explains how the book is organized:

The three most common investing mistakes relate to the price you pay, the management team you essentially join when you invest in a company, and your failure to understand the future economics of the business you’re considering investing in.  (page xv)



There are many ways to find investment ideas.  One of the best ways to generate a large number of potential investment ideas is to look at micro-cap stocks trading at a high EBIT/EV (or, equivalently, a low EV/EBIT) and with a high Piotroski F-Score.

Micro-cap stocks perform best over time.  See:

Low EV/EBIT – equivalently, high EBIT/EV – does better than the other standard measures of cheapness such as low P/E, low P/S, and low P/B.  See:

  • Quantitative Value (Wiley, 2013), by Wesley Gray and Tobias Carlisle
  • Deep Value (Wiley, 2014), by Tobias Carlisle

A high Piotroski F-Score is most effective when applied to cheap micro-cap stocks.  See:

In sum, if you focus on micro-cap stocks trading at a high EBIT/EV and with a high Piotroski F-Score, you should regularly find many potentially good investment ideas.  This is essentially the process used by the Boole Microcap Fund.
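The screen described above can be sketched as a simple filter-and-rank over a universe of stocks.  The company data, thresholds, and field names here are illustrative assumptions, not the Boole Fund’s actual implementation:

```python
# Sketch of a value screen: keep micro caps, then rank by EBIT/EV
# (cheapness) and require a high Piotroski F-Score (fundamental strength).
# All company data below are made up; thresholds are illustrative.

def ebit_ev(ebit, enterprise_value):
    return ebit / enterprise_value

def screen(universe, max_market_cap=300e6, min_f_score=7):
    candidates = [
        c for c in universe
        if c["market_cap"] <= max_market_cap and c["f_score"] >= min_f_score
    ]
    # Cheapest (highest EBIT/EV) first.
    return sorted(candidates,
                  key=lambda c: ebit_ev(c["ebit"], c["ev"]),
                  reverse=True)

universe = [
    {"ticker": "AAA", "market_cap": 150e6, "ebit": 20e6, "ev": 100e6, "f_score": 8},
    {"ticker": "BBB", "market_cap": 250e6, "ebit": 10e6, "ev": 200e6, "f_score": 9},
    {"ticker": "CCC", "market_cap": 900e6, "ebit": 90e6, "ev": 450e6, "f_score": 9},  # too big
    {"ticker": "DDD", "market_cap": 100e6, "ebit": 5e6,  "ev": 90e6,  "f_score": 4},  # weak fundamentals
]

print([c["ticker"] for c in screen(universe)])  # -> ['AAA', 'BBB']
```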

There are, of course, many other good ways to find ideas.  Shearn mentions forced selling, such as when a stock is dropped from an index.  Also, spin-offs typically involve some forced selling.  Moreover, the 52-week low list and other new-low lists often present interesting ideas.

Looking for the areas of greatest distress can lead to good investment opportunities.  For instance, some offshore oil drillers appear to be quite cheap from a three- to five-year point of view assuming oil returns to a market clearing price of $60-70.



A fully quantitative approach can work quite well.  Ben Graham, the father of value investing, often used a fully quantitative approach.  Graham constructed a portfolio of the statistically cheapest stocks, according to various metrics like low P/E or low P/B.

I’ve already noted that the Boole Microcap Fund uses a fully quantitative approach:  micro-cap stocks with a high EBIT/EV and a high Piotroski F-Score.  This particular quantitative strategy has the potential to beat both the Russell Microcap Index and the S&P 500 Index by solid margins over time.

But there are a few ways that you can possibly do better than the fully quantitative micro-cap approach I’ve outlined.  One way is to use the same quantitative approach as a screen, do in-depth research on several hundred candidates, and then build a very concentrated portfolio of the best 5 to 8 ideas.

In practice, it is extremely difficult to make the concentrated approach work.  The vast majority of investors are better off using a fully quantitative approach (which selects the best 20 to 30 ideas, instead of the best 5 to 8 ideas).

The key ingredient to make the concentrated strategy work is passion.  Some investors truly love learning everything possible about hundreds of companies.  If you develop such a passion, and then apply it for many years, it’s possible to do better than a purely quantitative approach, especially if you’re focusing on micro-cap stocks.  Micro-cap stocks are the most inefficiently priced part of the market because most professional investors never look there.  Moreover, many micro-cap companies are relatively simple businesses that are easier for the investor to understand.

I’m quite passionate about value investing, including micro-cap value investing.  But I’m also passionate about fully automated investing, whether via index funds or quantitative value funds.  I know that low-cost broad market index funds are the best long-term investment for most investors.  Low-cost quantitative value funds – especially if focused on micro caps – can do much better than low-cost broad market index funds.

I am more passionate about perfecting a fully quantitative investment strategy – ultimately by using an early form of artificial intelligence – than I am about studying hundreds of micro-cap companies in great detail.  I know that a fully quantitative approach that picks the best 20 to 30 micro-cap ideas is very likely to perform better than my best 5 to 8 micro-cap ideas over time.

Also, once value investing can be done well by artificial intelligence, it won’t be long before the best AI value investor will be better than the best human value investor.  Very few people thought that a computer could beat Garry Kasparov at chess, but IBM’s Deep Blue achieved this feat in 1997.  Similarly, few people thought that a computer could beat human Jeopardy! champions.  But IBM’s Watson trounced Ken Jennings and Brad Rutter at Jeopardy! in 2011.

Although investing is far more complex than chess or Jeopardy!, there is no reason to think that a form of artificial intelligence will not someday be better than the best human investors.  This might not happen for many decades.  But that it eventually will happen is virtually inevitable.  Scientists will figure out, in ever more detail, exactly how the human brain functions.  And scientists will eventually design a digital brain that can do everything the best human brain can do.

The digital brain will get more and more powerful, and faster and faster.  And at some point, the digital brain is likely to gain the ability to accelerate its own evolution (perhaps by re-writing its source code).  Some have referred to such an event – a literal explosion in the capabilities of digital superintelligence, leading to an explosion in technological progress – as the singularity.



If you’re going to try to pick stocks, then, notes Shearn, a good question to ask is: How would you evaluate this business if you were to become its CEO?

If you were to become CEO of a given business, then you’d want to learn everything you could about the industry and about the company.  To really understand a business can easily take 6-12 months or even longer, depending on your prior experience and prior knowledge, and also depending upon the size and complexity of the business.  (Micro-cap companies tend to be much easier to understand.)

You should read at least ten years’ worth of annual reports (if available).  If you’re having difficulty understanding the business, Shearn recommends asking yourself what the customer’s world would look like if the business (or industry) did not exist.

You should understand exactly how the business makes money.  You’d also want to understand how the business has evolved over time.  (Many businesses include their corporate history on their website.)



Shearn writes:

The more you can understand a business from the customer’s perspective, the better position you will be in to value that business, because satisfied customers are the best predictor of future earnings for a business.  As Dave and Sherry Gold, co-founders of dollar store retailer 99 Cent Only Stores, often say, ‘The customer is CEO.’  (page 39)

To gain an understanding of the customers, Shearn recommends that you interview some customers.  Most investors never interview customers.  So if you’re willing to spend the time interviewing customers, you can often gain good insight into the business that many other investors won’t have.

Shearn says it’s important to identify the core customers, since often a relatively small percentage of customers will represent a large chunk of the company’s revenues.  Core customers may also reveal how the business caters to them specifically.  Shearn gives an example:

Paccar is a manufacturer of heavy trucks that is a great example of a company that has built its product around its core customer, the owner operator.  Owner operators buy the truck they drive and spend most of their time in it.  They work for themselves, either contracting directly with shippers or subcontracting with big truck companies.  Owner operators care about quality first, and want amenities, such as noise-proofed sleeper cabins with luxury-grade bedding and interiors.  They also want the truck to look sharp, and Paccar makes its Peterbilt and Kenworth brand trucks with exterior features to please this customer.  Paccar also backs up the driver with service features, such as roadside assistance and a quick spare parts network.  Because owner operators want this level of quality and service, they are less price sensitive, and they will pay 10 percent more for these brands.  (page 42)

Shearn writes that you want to find out how easy or difficult it is to convince customers to buy the products or services.  Obviously a business with a product or service that customers love is preferable as an investment, other things being equal.

A related question is: what is the customer retention rate?  The longer the business retains a customer, the more profitable the business is.  Also, loyal customers make future revenues more predictable, which in itself can lead to higher profits.  Businesses that carefully build long-term relationships with their customers are more likely to do well.  Are sales people rewarded just for bringing in a customer, or are they also rewarded for retaining a customer?

You also need to find out what pain the business alleviates for the customer, and how essential the product or service is to them.  Shearn suggests the question: If the business disappeared tomorrow, what impact would this have on the customer base?



You not only want to find out whether the business has a sustainable competitive advantage; you also want to learn whether the industry is a good one, writes Shearn.  And you want to find out about supplier relations.

Shearn lists common sources of sustainable competitive advantage:

  • Network economics
  • Brand loyalty
  • Patents
  • Regulatory licenses
  • Switching costs
  • Cost advantages stemming from scale, location, or access to a unique asset

If a product or service becomes more valuable if more customers use it, then the business may have a sustainable competitive advantage from network economics.  Facebook becomes more valuable to a wider range of people as more and more people use it.

If customers are loyal to a particular brand and if the business can charge a premium price, this creates a sustainable competitive advantage.  Coca-Cola has a very strong brand.  So does See’s Candies (owned by Berkshire Hathaway).

A patent legally protects a product or service over a 17- to 20-year period.  If a patented product or service has commercial value, then the patent is a source of sustainable competitive advantage.

Regulatory licenses – by limiting competition – can be a source of sustainable competitive advantage.

Switching costs can create a sustainable competitive advantage.  If it has taken time to learn new software, for example, that can create a high switching cost.

There are various cost advantages that can be sustainable.  If there are high fixed costs in a given industry, then as a business grows larger, it benefits from lower per-unit costs.  Sometimes a business has a cost advantage due to its location or its access to a unique asset.

Sustainable Competitive Advantages Are Rare

Even if a business has had a sustainable competitive advantage for some time, that does not guarantee that it will continue to have one going forward.  Any time a business is earning a high ROIC – more specifically, a return on invested capital that is higher than the cost of capital – competitors will try to take some of those excess returns.  That is the essence of capitalism.  High ROIC usually reverts to the mean (average ROIC) due to competition and/or due to changes in technology.

Most Investment Gains Are Made During the Development Phase

Shearn points out that most of the gains from a sustainable competitive advantage come when the business is still developing, rather than when the business is already established.  The biggest gains on Wal-Mart’s stock occurred when the company was developing.  Similarly for Microsoft, Amazon, or Apple.

Pricing Power

Pricing power is usually a function of a sustainable competitive advantage.  Businesses that have pricing power tend to have a few characteristics in common, writes Shearn:

  • They usually have high customer-retention rates
  • Their customers spend only a small percentage of their budget on the business’s product or service
  • Their customers have profitable business models
  • The quality of the product is more important than the price

Nature of Industry and Competitive Landscape

Some industries, like software, may be considered “good” in that the best companies have a sustainable competitive advantage as represented by a sustainably high ROIC.

But an industry with high ROICs, like software, is hyper-competitive.  Competition and/or changes in technology can cause previously unassailable competitive advantages to disappear entirely.

It’s important to examine companies that failed in the past.  Why did they fail?

IMPORTANT:  Stock Price

For a value investor, depending upon the price, a low-quality asset can be a much better investment than a high-quality asset.  This is a point Shearn doesn’t mention but should.  As Howard Marks explains:

A high-quality asset can constitute a good or bad buy, and a low-quality asset can constitute a good or bad buy.  The tendency to mistake objective merit for investment opportunity, and the failure to distinguish between good assets and good buys, gets most investors into trouble.

Supplier Relations

Does the business have a good relationship with its suppliers?  Does the business help suppliers to innovate?  Is the business dependent on only a few suppliers?



Shearn explains why the fundamentals – the things a business has to do in order to be successful – are important:

As an investor, identifying and tracking fundamentals puts you in a position to more quickly evaluate a business.  If you already understand the most critical measures of a company’s operational health, you will be better equipped to evaluate unexpected changes in the business or outside environment.  Such changes often present buying opportunities if they affect the price investors are willing to pay for a business without affecting the fundamentals of the business.  (page 99)

Moreover, there are specific operating metrics for a given business or industry that are important to track.  Monitoring the right metrics can give you insight into any changes that may be significant.  Shearn lists the following industry primers:

  • Reuters Operating Metrics
  • Standard & Poor’s Industry Surveys
  • Fisher Investment guides

Shearn also mentions that internet searches and books are sources for industry metrics.  Furthermore, there are trade associations and trade journals.

Shearn suggests monitoring the appropriate metrics, and writing down any changes that occur over three- to five-year periods.  (Typically a change over just one year is not enough to draw a conclusion.)

Key Risks

Companies list their key risks in the 10-K in the section Risk Factors.  It is obviously important to identify what can go wrong.  Shearn:

…it is important for you to spend some time in this section and investigate whether the business has encountered the risks listed in the past and what the consequences were.  This will help you understand how much impact each risk may have.  (page 106)

You would like to identify how each risk could impact the value of the business.  You may want to use scenario analysis of the value of the business in order to capture specific downside risks.

Shearn advises thinking like an insurance underwriter about the risks for a given business.  What is the frequency of a given risk – in other words, how often has it happened in the past?  And what is the severity of a given risk – if the downside scenario materializes, what impact will that have on the value of the business?  It is important to study what has happened in the past to similar businesses and/or to businesses that were in similar situations.  This allows you to develop a better idea of the frequency – i.e., the base rate – of specific risks.
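The underwriter’s framing above — frequency times severity — can be sketched in a few lines.  All the risk names and numbers below are invented for illustration, not taken from any real 10-K:

```python
# Hypothetical sketch of thinking like an insurance underwriter about
# business risks: expected annual impact = frequency x severity.
# Every figure below is made up for illustration.

risks = {
    # risk name: (estimated annual frequency, severity as fraction of value)
    "key customer loss": (0.10, 0.15),
    "input cost spike":  (0.30, 0.05),
    "product recall":    (0.05, 0.25),
}

for name, (freq, severity) in risks.items():
    expected_impact = freq * severity
    print(f"{name}: expected annual impact = {expected_impact:.2%} of value")
```

Studying base rates for similar businesses is what makes the frequency inputs meaningful; without that homework, the multiplication is just false precision.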

Is the Balance Sheet Strong or Weak?

A strong balance sheet allows the business not only to survive, but in some cases, to thrive by being able to take advantage of opportunities.  A weak balance sheet, on the other hand, can mean the difference between temporary difficulties and insolvency.

You need to figure out if future cash flows will be enough to make future debt payments.

For value investors in general, the advice given by Graham, Buffett, and Munger is best: Avoid companies with high debt.  The vast majority of the very best value investments ever made involved companies with low debt or no debt.  Therefore, it is far simpler just to avoid companies with high debt.

Occasionally there may be equity stub situations where the potential upside is so great that a few value investors may want to consider them carefully.  Then you would have to determine what the liquidity needs of the business are, what the debt-maturity schedule is, whether the interest rates are fixed or variable, what the loan covenants indicate, and whether specific debts are recourse or non-recourse.

Return on Reinvestment or RONIC

It’s not high historical ROIC that counts:

What counts is the ability of a business to reinvest its excess earnings at a high ROIC, which is what creates future value.  (page 129)

You need to determine the RONIC – return on new invested capital.  How much of the excess earnings can the company reinvest and at what rate of return?
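The reason RONIC drives value creation can be made concrete: sustainable earnings growth is roughly the reinvestment rate times the return earned on that newly invested capital.  The inputs below are hypothetical:

```python
# Sketch of why RONIC matters: implied growth = reinvestment rate x RONIC.
# All inputs are hypothetical.

earnings = 100.0          # current owner earnings
reinvestment_rate = 0.50  # fraction of earnings reinvested each year
ronic = 0.20              # return on new invested capital

growth = reinvestment_rate * ronic   # implied earnings growth rate
print(f"implied growth: {growth:.0%}")

# Project earnings forward a few years under these assumptions.
for year in range(1, 6):
    earnings *= 1 + growth
    print(f"year {year}: earnings = {earnings:.1f}")
```

A business reinvesting half its earnings at 20 percent grows about 10 percent a year; the same business reinvesting at its cost of capital creates no value no matter how much it reinvests.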

How to Improve ROIC

Shearn gives two ways a business can improve its ROIC:

  • Using capital more efficiently, such as managing inventory better or managing receivables better, or
  • Increasing profit margins through better operations, rather than through one-time, non-operating boosts to cash earnings.

A supermarket chain has low net profit margins, so it must have very high inventory turnover to generate a high ROIC.  A steel manufacturer, on the other hand, has low asset turnover, so it must achieve a high profit margin in order to generate a high ROIC.
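The supermarket/steel contrast follows from a DuPont-style decomposition: pre-tax ROIC is approximately operating margin times capital turnover.  The numbers below are illustrative only:

```python
# DuPont-style sketch: pre-tax ROIC ~ operating margin x capital turnover.
# Margins and turnover figures are hypothetical.

def roic(operating_margin, capital_turnover):
    """Pre-tax return on invested capital as margin times turnover."""
    return operating_margin * capital_turnover

supermarket = roic(operating_margin=0.03, capital_turnover=5.0)  # thin margin, fast turns
steel_mill  = roic(operating_margin=0.15, capital_turnover=1.0)  # fat margin, slow turns

print(f"supermarket ROIC: {supermarket:.0%}")  # 15%
print(f"steel mill ROIC:  {steel_mill:.0%}")   # 15%
```

Two very different business models can arrive at the same ROIC by different routes, which is why you have to know which lever — margin or turnover — a given business depends on.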



Scenario analysis is useful when there is a wide range of future earnings.  As mentioned earlier, some offshore oil drillers appear very cheap right now on the assumption that oil returns to a market clearing price of $60-70 a barrel within the next few years.  If it takes five years for oil to return to $60-70, then many offshore oil drillers will have lower intrinsic value (a few may not survive).  If it takes three years (or less) for oil to return to $60-70, then some offshore drillers are likely very cheap compared to their normalized earnings.
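A minimal scenario-analysis sketch, in the spirit of the oil-driller example: assign a probability and an intrinsic-value estimate to each outcome, then take the probability-weighted average.  The probabilities and per-share values below are hypothetical:

```python
# Probability-weighted intrinsic value across scenarios.
# Probabilities and per-share values are invented for illustration.

scenarios = [
    # (probability, estimated intrinsic value per share)
    (0.25, 0.0),    # worst case: the business doesn't survive
    (0.50, 30.0),   # oil recovers in roughly five years
    (0.25, 60.0),   # oil recovers in three years or less
]

# Probabilities must sum to one.
assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9

expected_value = sum(p * v for p, v in scenarios)
print(f"probability-weighted value per share: ${expected_value:.2f}")  # $30.00
```

The weighted average is only a starting point; with a real chance of zero in the worst case, position sizing matters as much as the expected value itself.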

Compare Cash Flow from Operations to Net Income

As Shearn remarks, management has much less flexibility in manipulating cash flow from operations than net income, because the latter includes many subjective estimates.  Over the past one to five years, cash flow from operations should closely approximate net income; otherwise there may be earnings manipulation.
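This check is easy to run on several years of data at once.  The figures below are made up; only the mechanics of the comparison are the point:

```python
# Hedged sketch of the cash-flow check: over several years, cumulative
# cash flow from operations (CFO) should roughly track cumulative net
# income.  The annual figures below are hypothetical.

cfo        = [110, 95, 120, 105, 115]   # cash flow from operations by year
net_income = [100, 100, 110, 100, 110]

ratio = sum(cfo) / sum(net_income)
print(f"5-year CFO / net income: {ratio:.2f}")

# A ratio persistently well below 1.0 is a red flag that reported
# earnings may be outrunning actual cash generation.
if ratio < 0.8:
    print("warning: earnings may be overstated")
```

Using cumulative multi-year sums, rather than a single year, smooths out timing differences in working capital that are normal and benign.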


If the accounting is conservative and straightforward, that should give you more confidence in management than if the accounting is liberal and hard to understand.  Shearn lists some ways management can manipulate earnings:

  • Improperly inflating sales
  • Under- or over-stating expenses
  • Manipulating discretionary costs
  • Changing accounting methods
  • Using restructuring charges to increase future earnings
  • Creating reserves by manipulating estimates

Management can book a sale before the revenue is actually earned in order to inflate revenues.

Management can capitalize an expense over several time periods, which shifts some current expenses to later periods thereby boosting short-term earnings.  Expenses commonly capitalized include start-up costs, R&D expenses, software development, maintenance costs, marketing, and customer-acquisition costs.  Shearn says you can find out whether a business routinely capitalizes its costs by reading the footnotes to the financial statements.

Manipulating discretionary costs is common, writes Shearn.  Most companies try to meet their quarterly earnings goals.  By contrast, most great owner-operator businesses – like Warren Buffett’s Berkshire Hathaway or Henry Singleton’s Teledyne – spend absolutely no time worrying about short-term (including quarterly) earnings.

Managers often extend the useful life of particular assets, which reduces quarterly depreciation expenses.

A business reporting a large restructuring loss may add extra expenses in the restructuring charge in order to reduce future expenses (and boost future earnings).

Management can overstate certain reserve accounts in order to draw on those reserves during future bad times (in order to boost earnings during those bad times).  Reserves can be booked for: bad debts, sales returns, inventory obsolescence, warranties, product liability, litigation, or environmental contingencies.

Operating Leverage

If a business has high operating leverage, then it is more difficult to forecast future earnings.  Again, scenario analysis can help in this situation.

High operating leverage means that a relatively small change in revenues can have a large impact on earnings.  A business with high fixed costs has high operating leverage, whereas a business with low fixed costs has low operating leverage.

For example, as Shearn records, in 2008, Boeing reported that revenues decreased 8.3 percent and operating income decreased 33.9 percent.
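The mechanism behind numbers like Boeing’s can be shown in miniature: with high fixed costs, a small revenue decline produces a much larger decline in operating income.  The cost split below is invented; only the mechanism is the point:

```python
# Operating leverage in miniature: high fixed costs amplify revenue
# changes into much larger operating-income changes.  The cost split
# below is hypothetical.

fixed_costs = 70.0            # costs that don't change with volume
variable_cost_ratio = 0.20    # variable costs as a fraction of revenue

def operating_income(revenue):
    return revenue - fixed_costs - variable_cost_ratio * revenue

before = operating_income(100.0)   # 10.0
after  = operating_income(92.0)    # revenue down 8% -> 3.6
print(f"revenue -8.0%, operating income {after / before - 1:.1%}")  # -64.0%
```

The same leverage works in reverse on the upside, which is why cyclical, high-fixed-cost businesses show such violent earnings swings in both directions.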

Working Capital

Shearn explains:

The amount of working capital a business needs depends on the capital intensity and the speed at which a business can turn its inventory into cash.  The shorter the commitment or cycle, the less cash is tied up and the more a business can use the cash for other internal purposes.  (page 163)

Boeing takes a long time to turn sheet metal and various electronics into an airplane.  Restaurants, on the other hand, turn inventories into cash quite quickly.

The Cash Conversion Cycle (CCC) tells you how quickly a company can turn its inventory and receivables into cash and pay its short-term obligations.

CCC = Inventory conversion period (Days)

+ Receivables conversion period (Days)

– Payables conversion period (Days)

When a company has more current liabilities than current assets, that means it has negative working capital.  In this situation, the customers and suppliers are financing the business, so growth is less expensive.  Typically cash flow from operations will exceed net income for a business with negative working capital.

Negative working capital is only good as long as sales are growing, notes Shearn.
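The CCC formula above, plus the negative-working-capital case, can be sketched directly.  The day counts below are illustrative:

```python
# The cash conversion cycle, computed from the three conversion periods.
# Day counts are hypothetical.

def cash_conversion_cycle(inventory_days, receivable_days, payable_days):
    return inventory_days + receivable_days - payable_days

# A slow manufacturer: cash is tied up for months.
print(cash_conversion_cycle(120, 45, 30))   # 135 days

# A restaurant-like business that collects cash before paying suppliers:
# a negative cycle means customers and suppliers finance the business.
print(cash_conversion_cycle(5, 1, 30))      # -24 days
```

A shorter (or negative) cycle means growth consumes less cash, which is one reason cash flow from operations tends to exceed net income for such businesses.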



Sound management is usually essential for a business to do well, although ideally, as Buffett joked, you want a business so good that any idiot can run it, because eventually one will.

Shearn offers good advice on how to judge management:

It is best to evaluate a management team over time.  By not rushing into investment decisions and by taking the time to understand a management team, you can reduce your risk of misjudging them.  Most errors in assessing managers are made when you try to judge their character quickly or when you see only what you want to see and ignore flaws or warning signs.  The more familiar you are with how managers act under different types of circumstances, the better you are able to predict their future actions.  Ideally, you want to understand how managers have operated in both difficult and favorable circumstances.  (pages 174-175)

Types of managers

  • Owner-operator
  • Long-tenured manager
  • Hired hand

An owner-operator is a manager who has a genuine passion for the business and is typically the founder.  Shearn gives examples:

  • Sam Walton, founder of Wal-Mart
  • Dave and Sherry Gold, co-founders of 99 Cent Only Stores
  • Joe Mansueto, founder of Morningstar
  • John Mackey, co-founder of Whole Foods Market
  • Warren Buffett, CEO of Berkshire Hathaway
  • Founders of most family-controlled businesses

Shearn continues:

These passionate leaders run the business for key stakeholders such as customers, employees, and shareholders alike… They typically are paid modestly and have high ownership interests in the business.  (page 177)

(Shearn also defines a second and a third type of owner-operator, depending on the extent to which the owner-operator runs the business for his or her own benefit.)

A long-tenured manager has worked at the business for at least three years.  (A second type of long-tenured manager joined from outside the business, but worked in the same industry.)

A hired hand is a manager who has joined from outside the business, but who has worked in a related industry.  (Shearn defines a second type of hired hand who has worked in a completely unrelated industry.)

The Importance of Tenure in Operating the Business

Out of the 500 businesses in the S&P 500, only 28 have CEOs who have held office for more than 15 years (this is as of the year 2012, when Shearn was writing).  Of these 28 long-term CEOs, 25 of them had total shareholder returns during their tenures that beat the S&P 500 index (including dividends reinvested).

Management Style: Lions and Hyenas

Based on an interview with Seng Hock Tan, Shearn distinguishes between Lion Managers and Hyena Managers.

Lion Manager:

  • Committed to ethical and moral values
  • Thinks long term and maintains a long-term focus
  • Does not take shortcuts
  • Thirsty for knowledge and learning
  • Supports partners and alliances
  • Treats employees as partners
  • Admires perseverance

Hyena Manager:

  • Has little interest in ethics and morals
  • Thinks short term
  • Just wants to win the game
  • Has little interest in knowledge and learning
  • A survivor and an opportunist
  • Treats employees as expenses
  • Admires tactics, resourcefulness, and guile

Operating Background

Shearn observes that it can be risky to have a top executive who does not have a background in the day-to-day operations of the business.

Low Salaries and High Stock Ownership

Ideally, managers will be incentivized through high stock ownership (and comparatively low salaries), so that their rewards come from building long-term business value.  This aligns management incentives with shareholder interests.

You also want managers who are generous to all employees in terms of stock ownership.  This means the managers and employees have similar incentives (which are aligned with shareholder interests).

Finally, you want managers who gradually increase their ownership interest in the business over time.



Obviously you prefer a good manager, not only because the business will tend to do better over time, but also because you won’t have to spend time worrying.

Shearn on a CEO who manages the business for all stakeholders:

If you were to ask investors whether shareholder value is more important than customer service at a business, most would answer that it is.  What they fail to consider is that shareholder value is a byproduct of a business that keeps its customers happy.  In fact, many of the best-performing stocks over the long term are the ones that balance the interests of all stakeholder groups, including customers, employees, suppliers, and other business partners.  These businesses are managed by CEOs who have a purpose greater than solely generating profits for their shareholders.  (pages 210-211)

Shearn mentions John Mackey, co-founder and CEO of Whole Foods Market, who coined the term conscious capitalism to describe businesses designed to benefit all stakeholders.  Shearn quotes Mackey:

Long-term profits come from having a deeper purpose, great products, satisfied customers, happy employees, great suppliers, and from taking a degree of responsibility for the community and environment we live in.  The paradox of profits is that, like happiness, they are best achieved by not aiming directly for them.

Continuous Incremental Improvement


Contrary to popular belief, most successful businesses are built on hundreds of small decisions, instead of on one well-formulated strategic plan.  For example, when most successful entrepreneurs start their business, they do not have a business plan stating what their business will look like in 2, 5, or 10 years.  They instead build their business day by day, focusing on customer needs and letting these customer needs shape the direction of their business.  It is this stream of everyday decisions over time that accounts for great outcomes, instead of big one-time decisions….

Another common theme among businesses that improve day by day is that they operate on the premise that it is best to repeatedly launch a product or service with a limited number of its customers so that it can use customer reactions and feedback to modify it.  They operate on the premise that it is okay to learn from mistakes…

You need to determine if the management team you are investing in works on proving a concept before investing a lot of capital in it or whether it prefers to put a lot of money in all at once hoping for a big payoff.  (page 215)

PIPER = persistent incremental progress eternally repeated

As CEO, Henry Singleton was one of the best capital allocators in American business history.  Under Singleton, Teledyne stock compounded at 17.9 percent over 25 years (or a 53x return, vs. 6.7x for the S&P 500 Index).
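As a sanity check on compounding figures like these: a total multiple over n years implies a compound annual rate of multiple^(1/n) − 1.  (Small gaps versus quoted figures typically come from rounding and dividend treatment in the source.)

```python
# Converting a total return multiple over n years into a compound
# annual growth rate: CAGR = multiple ** (1 / years) - 1.
# Differences from quoted figures may reflect rounding or dividends.

def cagr(multiple, years):
    return multiple ** (1 / years) - 1

print(f"Teledyne, 53x over 25 years:  {cagr(53, 25):.1%}")
print(f"S&P 500, 6.7x over 25 years:  {cagr(6.7, 25):.1%}")
```

The gap between roughly 17 percent and roughly 8 percent a year seems modest until it compounds into a 53x result versus a 6.7x result.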

Singleton believed that the best plan was no plan, as he once explained at an annual meeting:

…we’re subject to a tremendous number of outside influences, and the vast majority of them cannot be predicted.  So my idea is to stay flexible.  I like to steer the boat each day rather than plan ahead way into the future.

Shearn points out that one major problem with a strategic plan is the commitment and consistency principle (see Robert Cialdini’s Influence).  When people make a public statement, they tend to have a very difficult time admitting they were wrong and changing course when the evidence calls for it.  Similarly, notes Shearn, strategic plans can make people blind to other opportunities.

When managers give short-term guidance, it can have similar effects as a strategic plan.  People may make decisions that harm long-term business value just in order to hit short-term (statistically meaningless) numbers.  Also, managers may even start borrowing from the future in order to meet the numbers.  Think of Enron, WorldCom, Tyco, Adelphia, and HealthSouth, says Shearn.

Does management value its employees?


…Try to understand if the management team values its employees because the only way it will obtain positive results is through these people.

When employees feel they are partners with their boss in a mutual effort, rather than merely employees of some business run by managers they never see, morale will increase.  Furthermore, when a business has good employee relations, it typically has many other good attributes, such as good customer relations and the ability to adapt quickly to changing economic circumstances.  (page 225)

Are the CEO and CFO disciplined in making capital allocation decisions?

As Shearn observes, operating a business and allocating capital involve two completely different skill sets.  Many CEOs are not skilled in capital allocation.  Capital allocation includes:

  • Investing in new projects
  • Holding cash on the balance sheet
  • Paying dividends
  • Buying back stock
  • Making acquisitions

Shearn writes:

One of the best capital allocators in corporate history was Henry Singleton, longtime CEO of Teledyne, who cofounded the business in 1960 and served as CEO until 1986.  In John Train’s book The Money Masters, Warren Buffett reported that he believes ‘Henry Singleton has the best operating and capital-deployment record in American business.’  When Teledyne’s stock was trading at extremely high prices in the 1960s, Singleton used the high-priced stock as currency to make acquisitions.  Singleton made more than 130 acquisitions of small, high-margin manufacturing and technology businesses that operated in defensible niches managed by strong management.  When the price-to-earnings ratio of Teledyne fell sharply starting in the 1970s, he repurchased stock.  Between 1972 and 1984, he reduced the share count by more than 90 percent.  He repurchased stock for as low as $6 per share in 1972, which by 1987 traded at more than $400 per share.  (page 249)



Does the CEO love the money or the business?

This question comes from Warren Buffett.  Buffett looks for CEOs who love the business.  CEOs who are passionate about their business are more likely to persevere through many difficulties and over long periods of time.  CEOs who are passionate about their business are more likely to excel over the long term.  As Steve Jobs said in his commencement address to Stanford University students in 2005:

The only way to do great work is to love what you do.  If you haven’t found it yet, keep looking.  Don’t settle.

If someone has stayed in one industry for a long time, odds are they love their work.  If a CEO is very focused on the business, and not worried about appearances or large social or charity events, that’s a good sign the CEO is passionate about the business.  Does the CEO direct philanthropic resources to causes they truly care about, or are they involved in ‘social scene philanthropy’?

Are the Managers Lifelong Learners Who Focus on Continuous Improvement?

Lifelong learners are managers who are never satisfied and continually find ways to improve the way they run a business.  This drive comes from their passion for the business.  It is extremely important for management to constantly improve, especially if a business has been successful for a long period of time.  Look for managers who regard success as a base from which they continue to grow, rather than as a final accomplishment.  (page 263)

How Have They Behaved Under Adversity?


You never truly know someone’s character until you have seen it tested by stress, adversity, or a crisis, because a crisis produces extremes in behavior…  (page 264)

You need to determine how a manager responds to a difficult situation and then evaluate the action they took.  Were they calm and intentional in dealing with a negative situation, or were they reactive instead?  (page 266)

The best managers are those who quickly and openly communicate how they are thinking about the problem and outline how they are going to solve it.  (page 267)

Does Management Think Independently?

…The best managers always maintain a long-term focus, which means that they are often building for years before they see concrete results.  For example, in 2009, Jeff Bezos, founder of online retailer Amazon, talked about the way that some investors congratulate Amazon on success in a single reporting period.  ‘I always tell people, if we have a good quarter, it’s because of the work we did three, four, and five years ago.  It’s not because we did a good job this quarter.’  (page 275)

The best CEOs think independently.



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:




Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Emotions and Biases


April 9, 2017

Meir Statman, an expert in behavioral finance, has written a good book, What Investors Really Want (McGraw-Hill, 2011).

Here is my brief summary of the important points:



Statman argues that investments bring utilitarian benefits, expressive benefits, and emotional benefits.  The utilitarian benefits relate to being able to achieve financial goals, such as financial freedom or the ability to pay for the education of grandchildren.

Expressive benefits can convey to ourselves and others our values and tastes.  For instance, an investor who picks his own stocks is, in effect, saying, ‘I’m smart and can pick winning investments.’  Emotional benefits relate to how an activity makes you feel.  As Statman notes, Christopher Tsai said about his father Gerald Tsai, Jr. – a pioneer of the go-go funds of the 1960s:  “He loved doing transactions.  He loved the excitement of it.”

Statman tells the story of an engineer who learned that Statman is a professor of finance.  The engineer asked where he could buy the Japanese yen.  Statman asked him why, and the engineer said that the yen would zoom past the dollar based on macroeconomic fundamentals.  Statman replied:

Buying and selling Japanese yen, American stocks, French bonds, and all other investments is not like playing tennis against a practice wall, where you can watch the ball hit the wall and place yourself at just the right spot to hit it back when it bounces.  It is like playing tennis against an opponent you’ve never met before.  Are you faster than your opponent?  Will your opponent fool you by pretending to hit the ball to the left side, only to hit it to the right?  (page ix)

Later, Statman continues:

I tried to dissuade my fellow dinner guest from trading Japanese yen but I have probably failed.  Perhaps I failed to help my fellow dinner guest overcome his cognitive error, learn that trading should be framed as playing tennis against a possibly better player, and refrain from trading.  Or I might have succeeded in helping my fellow guest overcome his cognitive error and yet failed to dissuade him from trading because he wanted the expressive and emotional benefits of the trading game, the fun of playing and the thrill of winning.  (page xiii)

Statman explains that, in many fields of life, emotions are helpful in good decision-making.  Yet when it comes to areas such as investing, emotions tend to be harmful.

There is often a tension between what we should do and what we want to do.  And if we are stressed or fatigued, then it becomes even harder to do what we should do instead of what we want to do.

Moreover, our emotional reactions to changing stock prices generally mislead us.  When stocks are going up, we typically feel more confident and want to own more stocks.  When stocks are going down, we tend to feel less confident and want to own fewer stocks.  But this is exactly the opposite of what we should do if we want to maximize our long-term investment results.



Beat-the-market investors have always been searching for investments with returns higher than risks.  But such investments are much rarer than is commonly supposed.  For every investor who beats the market, another must trail the market.  And that is before fees and expenses.  After fees and expenses, there are very few investors who beat the market over the course of several decades.

Statman mentions a study of stock traders.  Those who traded the most trailed the index by more than 7 percent per year on average.  Those who traded the least trailed the index by only one-quarter of 1 percent.  Furthermore, a study of Swedish investors showed that heavy traders lose, on average, nearly 4 percent of their total financial wealth each year.
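The long-run cost of a drag like that is easy to understate.  Here is a hedged sketch of what a 7-percentage-point annual shortfall compounds to, using hypothetical return assumptions:

```python
# What a 7-percentage-point annual drag compounds to over decades.
# The 10% market return assumption is hypothetical.

market_return = 0.10
trading_drag = 0.07
years = 30

market = (1 + market_return) ** years
trader = (1 + market_return - trading_drag) ** years
print(f"market:       {market:.1f}x")
print(f"heavy trader: {trader:.1f}x")
```

Under these assumptions the heavy trader ends up with a small fraction of the buy-and-hold result — the arithmetic of compounding makes a steady annual drag devastating.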



Framing means that people can react differently to a particular choice based on how it is presented.  Framing is everywhere in the world of investments.  Statman explains:

Some frames are quick and intuitive, but frames that come to mind quickly and intuitively are not always correct… The beat-the-market frame that comes to mind quickly and intuitively is that of tennis played against a practice wall, but the correct frame is tennis played against a possibly better player.  Incorrect framing of the beat-the-market game is one cognitive error that fools us into believing that beating the market is easy.  (page 18)

Statman has some advice for overcoming the framing error:

It is not difficult to overcome the framing error.  All we need to do is install an app in our minds as we install apps on our iPhones.  When we are ready to trade it would pipe in, asking, ‘Who is the idiot on the other side of the trade?  Have you considered the likelihood that the idiot is you?’  (page 21)

The broader issue (discussed below) is that most of us, by nature, are overconfident in many areas of life, including investing.  Overconfidence is the most widespread cognitive bias that we have.  Using procedures such as a checklist can help reduce errors from overconfidence.  Also, keeping a journal of every investment decision – what the hypothesis is, what the evidence is, and what ended up happening – can help you to improve over time, hopefully reducing cognitive errors such as overconfidence.



Heuristics are mental shortcuts that often work, but sometimes don’t.  There is a good discussion of the representativeness heuristic on Wikipedia:

Daniel Kahneman and Amos Tversky defined representativeness as:

the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated.

When people rely on representativeness to make judgments, they are likely to judge wrongly because the fact that something is more representative does not actually make it more likely.  The key issue is sample size versus base rate.

Many people mistakenly assume that a small sample – even as small as a single example – is representative of the relevant population.  This mistake is called the law of small numbers.

If you have a small sample, you cannot take it as representative of the entire population.  In other words, a small sample may differ significantly from the base rate.  If you have a large enough sample, then by the law of large numbers, you can conclude that the large sample approximates the base rate (the entire population).

For instance, if you flip a coin ten times and get 8 heads, you cannot conclude that flipping the same coin thousands of times will yield approximately 80% heads.  But if you flip a coin ten thousand times and get 5,003 heads, you can conclude that the base rate for heads is very close to 50%.
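This convergence is easy to demonstrate with a quick simulation (a sketch; the seed and sample sizes are arbitrary):

```python
import random

random.seed(1)

def fraction_heads(n):
    """Flip a fair coin n times; return the observed fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# Ten-flip samples swing widely around the true 50% base rate...
small_samples = [fraction_heads(10) for _ in range(5)]

# ...while one large sample lands very close to it (law of large numbers).
large_sample = fraction_heads(100_000)
```

The small samples routinely stray far from 50%, while the 100,000-flip sample reliably lands within a fraction of a percentage point of it.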

If a mutual fund manager beats the market five years out of six, we conclude that it must be due to skill even though that is far too short a period for such a conclusion.  By randomness alone, there will be many mutual fund managers who beat the market five years out of six.



Our brains are good at finding patterns.  But when the data are highly random, our brains often find patterns that don’t really exist.

For example, there is no way to time the market.  Yet many investors try to time the market, jumping in and out of stocks.  Nearly everyone who tries market timing ends up trailing a simple index fund over time.

Part of the problem is that the brain only notices and remembers the handful of investors who were able to time the market successfully.  What investors should examine is the base rate:  Out of all investors who have tried market timing, how many have succeeded?  A very tiny percentage.



When our sentiment is positive, we expect our investments to bring returns higher than risk.  When our sentiment is negative, we expect our investments to bring returns lower than risk.

People expect the stocks of admired companies to do better than the stocks of spurned companies, but the opposite is true.  That’s a key reason deep value investing works:  on average, people are overly negative on out-of-favor or struggling companies, and people are overly positive on companies currently doing well.

People even expect higher returns if the name of a stock is easier to pronounce!

Finally, many investors think they can get rich from a new technological innovation.  In the vast majority of the cases, this is not true.  For every Ford, for every Microsoft, for every Google, for every Amazon, there are many companies in the same industry that failed.



A sense of control, like optimism, is generally beneficial, helping us to overcome challenges and feel happier.  A sense of control is good in most areas of life, but – like overconfidence – it is generally harmful in areas that involve much randomness, such as investing.

Statman explains:

A sense of control gained through lucky charms or rituals can be useful.  In a golfing experiment, some people were told they were receiving a lucky ball; others received the same ball and were told nothing.  Everyone was instructed to take ten putts.  Players who were told that their ball was lucky made 6.42 putts on average while those with the ordinary ball made only 4.75.  People in another experiment were asked to bring a personal lucky charm to a memory test.  Half of them kept the charm with them, but the charms of the other half were kept in another room.  People who had the charms with them reported that they had greater confidence that they would do well on the test than the people whose charms were kept away, and people who had the charms with them indeed did better on the memory test.

The outcomes of golf and memory tasks are not random; they are tasks that can be improved by concentration and effort.  A sense of control brought about by lucky charms or lucky balls can help improve performance if a sense of control brings real control.  But no concentration or effort can improve performance when outcomes are random, not susceptible to control, as is often true in much of investing and trading.  (page 50)

Statman describes one experiment involving traders who saw an index move up or down.  The task was to raise the index as much as possible by the end of each of four rounds.  Traders were also told that three keys on their keyboard have special effect.

In truth, movements in the index were random and the three keys had no effect on outcomes.  Any sense of control was illusory.  Still, some traders believed that they had much control while others believed that they had little.  It turned out that the traders with the highest sense of control displayed the lowest level of performance.  (page 51)



Statman also discusses cognitive biases.  He remarks that cognitive biases affect each one of us slightly differently.  Some may fall prey to hindsight bias more often.  Some have more trouble with availability.  Others may be more overconfident, and so forth.

Before examining some cognitive biases, it’s worth briefly reviewing Daniel Kahneman’s definition of two different mental systems that we have:

System 1:   Operates automatically and quickly;  makes instinctual decisions based on heuristics.

System 2:   Allocates attention (which has a limited budget) to the effortful mental activities that demand it, including complex computations involving logic, math, or statistics.

Kahneman writes – in Thinking, Fast and Slow – that System 1 and System 2 usually work quite well together:

The division of labor between System 1 and System 2 is highly efficient:  it minimizes effort and optimizes performance.   The arrangement works well most of the time because System 1 is generally very good at what it does: its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate.

Yet in some circumstances – especially if a good judgment requires complex computations such as logic, math, or statistics – System 1 has cognitive biases, or systematic errors that it tends to make.

The systematic errors of System 1 happen predictably in areas such as investing or forecasting.  These areas involve so much randomness that the intuitive statistics of System 1 lead predictably and consistently to errors.



availability bias:   we tend to overweight evidence that comes easily to mind.

Related to the availability bias are vividness bias and recency bias.  We typically overweight facts that are vivid (e.g., plane crashes or shark attacks).   We also overweight facts that are recent (partly because they are more vivid).

Statman comments on the availability bias and on the near-miss effect:

Availability errors compound representativeness errors, misleading us further into the belief that beating the market is easy.  Casinos exploit availability errors.  Slot machines are quiet when players lose, but they jingle cascading coins when players win.  We exaggerate the likelihood of winning because the loud voice of winning is available to our minds more readily than the quiet voice of losing… Scans of the brains of gamblers who experience near-misses show activation of a reward-related brain circuitry, suggesting that near-misses increase the transmission of dopamine.  This makes gambling addiction similar to drug addiction.  (page 29)

Statman pens the following about mutual fund marketing:

Mutual fund companies employ availability errors to persuade us to buy their funds.  Morningstar, a company that rates mutual funds, assigns to each fund a number of stars that indicate its relative performance, one star for the bottom group, three stars for the average group, and five stars for the top group.  Have you ever seen an advertisement for a fund with one or two stars?  But we’ve all seen advertisements for four- and five-star funds.  Availability errors lead us to judge the likelihood of finding winning funds by the proportion of four- and five-star funds available to our minds.  (pages 29-30)



confirmation bias:   we tend to search for, remember, and interpret information in a way that confirms our pre-existing beliefs or hypotheses.

Confirmation bias makes it quite difficult for many of us to improve upon or supplant our existing beliefs or hypotheses.  This bias also tends to make most of us overconfident about our existing beliefs or hypotheses, since all we can see are supporting data.

It’s clear that System 1 (intuition) often errs when it comes to forming and testing hypotheses.  First of all, System 1 always forms a coherent story (including causality), irrespective of whether there are truly any logical connections at all among various things in our experience.  Furthermore, when System 1 is facing a hypothesis, it automatically looks for confirming evidence.

But even System 2 – the logical and mathematical system that we possess and can develop – by nature uses a positive test strategy:

A deliberate search for confirming evidence, known as positive test strategy, is also how System 2 tests a hypothesis.  Contrary to the rules of philosophers of science, who advise testing hypotheses by trying to refute them, people (and scientists, quite often) seek data that are likely to be compatible with the beliefs they currently hold.  (page 81, Thinking, Fast and Slow)

Thus, the habit of always looking for disconfirming evidence of our hypotheses – especially our best-loved hypotheses (Charlie Munger’s term) – is arguably the most important intellectual habit we could develop in the never-ending search for wisdom and knowledge.

Charles Darwin is a wonderful model in this regard.  Darwin was far from being a genius in terms of IQ.  Yet Darwin trained himself always to search for facts and evidence that would contradict his hypotheses.  Charlie Munger explains in “The Psychology of Human Misjudgment” (see Poor Charlie’s Almanack, expanded 3rd edition):

One of the most successful users of an antidote to first conclusion bias was Charles Darwin.  He trained himself, early, to intensively consider any evidence tending to disconfirm any hypothesis of his, more so if he thought his hypothesis was a particularly good one… He provides a great example of psychological insight correctly used to advance some of the finest mental work ever done.  (my emphasis)

As Statman states:

Confirmation errors contribute their share to the perception that winning the beat-the-market game is easy.  We commit the confirmation error when we look for evidence that confirms our intuition, beliefs, claims, and hypotheses, but overlook evidence that disconfirms them… The remedy for confirmation errors is a structure that forces us to consider all the evidence, confirming and disconfirming alike, and guides us to tests that tell us whether our intuition, beliefs, claims, or hypotheses are confirmed by the evidence or disconfirmed by it.

One manifestation of confirmation errors is the tendency to trim disconfirming evidence from stories… The fact that a forecast of an imminent stock market crash was made years before its coming is unappetizing, so we tend to trim it off our stock market stories.  (page 31)



hindsight bias:   the tendency, after an event has occurred, to see the event as having been predictable, despite little or no objective basis for predicting the event prior to its occurrence.

This is a very powerful bias that we have.   Because we view the past as much more predictable than it actually was, we also view the future as much more predictable than it actually is.

Hindsight bias is also called the knew-it-all-along effect or creeping determinism.

Kahneman writes about hindsight bias as follows:

Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events.   Baruch Fischhoff first demonstrated this ‘I-knew-it-all-along’ effect, or hindsight bias, when he was a student in Jerusalem.  Together with Ruth Beyth (another of our students), Fischhoff conducted a survey before President Richard Nixon visited China and Russia in 1972.   The respondents assigned probabilities to fifteen possible outcomes of Nixon’s diplomatic initiatives.   Would Mao Zedong agree to meet with Nixon?   Might the United States grant diplomatic recognition to China?   After decades of enmity, could the United States and the Soviet Union agree on anything significant?

After Nixon’s return from his travels, Fischhoff and Beyth asked the same people to recall the probability that they had originally assigned to each of the fifteen possible outcomes.   The results were clear.   If an event had actually occurred, people exaggerated the probability that they had assigned to it earlier.   If the possible event had not come to pass, the participants erroneously recalled that they had always considered it unlikely.   Further experiments showed that people were driven to overstate the accuracy not only of their original predictions but also of those made by others.   Similar results have been found for other events that gripped public attention, such as the O.J. Simpson murder trial and the impeachment of President Bill Clinton.  The tendency to revise the history of one’s beliefs in light of what actually happened produces a robust cognitive illusion.  (pages 202-3, my emphasis)

Concludes Kahneman:

The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is.  The illusion that one has understood the past feeds the further illusion that one can predict and control the future.  These illusions are comforting.   They reduce the anxiety we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence.  (pages 204-5, my emphasis)

Statman elucidates:

So, if an introverted man marries a shy woman, it must be because, as we have known all along, ‘birds of a feather flock together’ and if he marries an outgoing woman, it must be because, as we have known all along, ‘opposites attract.’  Similarly, if stock prices decline after a prolonged rise, it must be, as we have known all along, that ‘trees don’t grow to the sky’ and if stock prices continue to rise, it must be, as we have equally known all along, that ‘the trend is your friend.’  Hindsight errors are a serious problem for all historians, including stock market historians.  Once an event is part of history, there is a tendency to see the sequence that led to it as inevitable.  In hindsight, poor choices with happy endings are described as brilliant choices, and unhappy endings of well-considered choices are attributed to horrendous choices.  (page 33)

Statman later writes about Warren Buffett’s understanding of hindsight bias:

Warren Buffett understands well the distinction between hindsight and foresight and the temptation of hindsight.  Roger Lowenstein mentioned in his biography of Buffett the events surrounding the increase in the Dow Jones Industrial Index beyond 1,000 in early 1966 and its subsequent decline by spring.  Some of Buffett’s partners called to warn him that the market might decline further.  Such calls, said Buffett, raised two questions:

If they knew in February that the Dow was going to 865 in May, why didn’t they let me in on it then; and

If they didn’t know what was going to happen during the ensuing three months back in February, how do they know in May?

Statman concludes:  We will always be normal, never rational, but we can increase the ratio of smart normal behavior to stupid normal behavior by recognizing our cognitive errors and devising methods to overcome them.

One of the best ways to minimize errors from cognitive bias is to use a fully automated investment strategy.  A low-cost broad market index fund will allow you to beat at least 90% of all investors over several decades.  If you adopt a quantitative value approach, you can do even better.



Overconfidence is such a widespread cognitive bias among people that Kahneman devotes Part 3 of his book, Thinking, Fast and Slow, entirely to this topic.  Kahneman says in his introduction:

The difficulties of statistical thinking contribute to the main theme of Part 3, which describes a puzzling limitation of our mind:  our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in.   We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events.   Overconfidence is fed by the illusory certainty of hindsight.   My views on this topic have been influenced by Nassim Taleb, the author of The Black Swan.  (pages 14-5)

As Statman describes:

Investors overestimate the future returns of their investments relative to the returns of the average investor.  Investors even overestimate their past returns relative to the returns of the average investor.  Members of the American Association of Individual Investors overestimated their own investment returns by an average of 3.4 percentage points relative to their actual returns, and they overestimated their own returns relative to those of the average investor by 5.1 percentage points.  The unrealistic optimism we display in the investment arena is similar to the unrealistic optimism we display in other arenas.  (page 45)

Statman also warns that stockbrokers and stock exchanges have good reasons to promote overconfidence because unrealistically optimistic investors trade far more often.



self-attribution bias:   we tend to attribute good outcomes to our own skill, while blaming bad outcomes on bad luck.

This ego-protective bias prevents us from recognizing and learning from our mistakes.  This bias also contributes to overconfidence.

As with the other cognitive biases, often self-attribution bias makes us happier and stronger.  But we have to learn to slow ourselves down and take extra care in areas – like investing – where overconfidence will hurt us.



In Behavioural Investing (Wiley, 2007), James Montier explains a study done by Paul Slovic (1973).  Eight experienced bookmakers were shown a list of 88 variables found on a typical past performance chart on a horse.  Each bookmaker was asked to rank the pieces of information by importance.

Then the bookmakers were given data for 40 past races and asked to rank the top five horses in each race.  Montier:

Each bookmaker was given the past data in increments of the 5, 10, 20, and 40 variables he had selected as most important.  Hence each bookmaker predicted the outcome of each race four times – once for each of the information sets.  For each prediction the bookmakers were asked to give a degree of confidence ranking in their forecast.  (page 136)

Here are the results:

Accuracy was virtually unchanged, regardless of the number of pieces of information the bookmaker was given (5, 10, 20, then 40).

But confidence skyrocketed as the number of pieces of information increased (5, 10, 20, then 40).

This same result has been found in a variety of areas.  As people get more information, the accuracy of their judgments or forecasts typically does not change at all, while their confidence in the accuracy of their judgments or forecasts tends to increase dramatically.



In The Black Swan, Nassim Taleb writes the following about the narrative fallacy:

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them.  Explanations bind facts together.  They make them all the more easily remembered;  they help them make more sense.  Where this propensity can go wrong is when it increases our impression of understanding.  (page 63-4)

The narrative fallacy is central to many of the biases and misjudgments mentioned by Daniel Kahneman and Charlie Munger.  The human brain, whether using System 1 (intuition) or System 2 (logic), always looks for or creates logical coherence among random data.

Thanks to evolution, System 1 is usually right when it assumes causality.  For example, there was movement in the grass, probably caused by a predator, so run.  And even in the modern world, as long as cause-and-effect is straightforward and not statistical, System 1 is amazingly good at what it does:  its models of familiar situations are accurate, its short-term predictions are usually accurate as well, and its initial reactions to challenges are swift and generally appropriate.  (Kahneman)

Furthermore, System 2, by searching for underlying causes or coherence, has, through careful application of the scientific method over centuries, developed a highly useful set of scientific laws by which to explain and predict various phenomena.

The trouble comes when the data or phenomena in question are ‘highly random’ – or inherently unpredictable (based on current knowledge).  In these areas, System 1 is often very wrong when it creates coherent stories or makes predictions.  And even System 2 assumes necessary logical connections when there may not be any – at least, none that can be discovered for some time.

Note:  The eighteenth century Scottish philosopher (and psychologist) David Hume was one of the first to clearly recognize the human brain’s insistence on always assuming necessary logical connections in any set of data or phenomena.



anchoring effect:   we tend to use any random number as a baseline for estimating an unknown quantity, despite the fact that the unknown quantity is totally unrelated to the random number.

Kahneman and Tversky did one experiment where they spun a wheel of fortune, but they had secretly programmed the wheel so that it would stop on 10 or 65.   After the wheel stopped, participants were asked to estimate the percentage of African countries in the UN.   Participants who saw “10” on the wheel guessed 25% on average, while participants who saw “65” on the wheel guessed 45% on average, a huge difference.

Behavioral finance expert James Montier ran his own experiment on anchoring.   People were asked to write down the last four digits of their phone number.   Then they were asked whether the number of doctors in their capital city is higher or lower than the last four digits of their phone number.   Results:  Those whose last four digits were greater than 7000 on average reported 6762 doctors, while those with telephone numbers below 2000 arrived at an average 2270 doctors.  (Behavioural Investing, page 120)

Those are just two experiments out of many.  The anchoring effect is “one of the most reliable and robust results of experimental psychology” (page 119, Kahneman).  Furthermore, Montier observes that the anchoring effect is one reason why people cling to financial forecasts, despite the fact that most financial forecasts are either wrong, useless, or impossible to time.

When faced with the unknown, people will grasp onto almost anything. So it is little wonder that an investor will cling to forecasts, despite their uselessness.  (Montier, page 120)



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.  See the historical chart here:

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


If you are interested in finding out more, please e-mail me or leave a comment.

My e-mail:




Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.

Our Process and Vision

(Image:  Zen Buddha Silence by Marilyn Barbone.)

April 2, 2017

If you’re investing small sums, you can earn the highest returns by focusing on microcap stocks.  That’s why many top value investors started in micro caps.  For instance, Warren Buffett concentrated on micro caps when he managed his partnership starting in 1957, which produced the highest returns of his career.  And Buffett has repeatedly said that in today’s market, he could get 50% per year if he could invest in micro caps.

Look at this summary of the CRSP Decile-Based Size and Return Data from 1927 to 2020:

Decile   Market Cap-Weighted Returns   Equal Weighted Returns   Number of Firms (year-end 2020)   Mean Firm Size (in millions)
1                  9.67%                       9.47%                        179                        145,103
2                 10.68%                      10.63%                        173                         25,405
3                 11.38%                      11.17%                        187                         12,600
4                 11.53%                      11.29%                        203                          6,807
5                 12.12%                      12.03%                        217                          4,199
6                 11.75%                      11.60%                        255                          2,771
7                 12.01%                      11.99%                        297                          1,706
8                 12.03%                      12.33%                        387                            888
9                 11.55%                      12.51%                        471                            417
10                12.41%                      17.27%                      1,023                             99
9+10              11.71%                      15.77%                      1,494                            199

(CRSP is the Center for Research in Security Prices at the University of Chicago.  The data for the various deciles are available from CRSP.)

The smallest two deciles – 9+10 – comprise microcap stocks, which typically are stocks with market caps below $500 million.  What stands out is the equal weighted returns of the 9th and 10th size deciles from 1927 to 2020:

Microcap equal weighted returns = 15.8% per year

Large-cap equal weighted returns = ~10% per year

In practice, the annual returns from microcap stocks will be 1-2% lower because of the difficulty (due to illiquidity) of entering and exiting positions.  So we should say that an equal weighted microcap approach has returned 14% per year from 1927 to 2020, versus 10% per year for an equal weighted large-cap approach.

Still, if you can do 4% better per year than the S&P 500 Index (on average) – even with only a part of your total portfolio – that really adds up after a couple of decades.
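As a rough illustration of how that 4-point edge "adds up" (the $10,000 stake and 20-year horizon are assumptions; the 14% and 10% rates come from the discussion above):

```python
# Hypothetical illustration: grow an assumed $10,000 stake for 20 years
# at the microcap rate (14%) versus the S&P 500 rate (10%).
start, years = 10_000, 20

microcap_approach = start * 1.14 ** years   # roughly $137,000
sp500_approach    = start * 1.10 ** years   # roughly $67,000
```

Even with identical starting sums, the higher rate ends up at roughly double the index result after two decades.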

  • Most professional investors ignore micro caps as too small for their portfolios.  This causes many micro caps to get very cheap.  And that’s why an equal weighted strategy – applied to micro caps – tends to work well.



By adding a value screen to a microcap strategy, it is possible to add at least 2-3% per year.  There are several ways to measure cheapness, such as low EV/EBIT, low P/E, and low P/CF.



You can further boost performance by screening for improving fundamentals.  One excellent way to do this is the Piotroski F_Score, which works best for cheap micro caps.
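The F_Score itself is a checklist of nine year-over-year fundamental signals, one point each.  A minimal sketch follows; the dictionary field names are assumptions for illustration, not an established API:

```python
# A minimal sketch of the nine-signal Piotroski F_Score (one point per
# passing signal).  Field names in the input dict are assumed for
# illustration.
def piotroski_f_score(f):
    signals = [
        f["roa"] > 0,                                    # 1. positive return on assets
        f["cfo"] > 0,                                    # 2. positive operating cash flow
        f["roa"] > f["roa_prev"],                        # 3. improving ROA
        f["cfo"] > f["roa"] * f["assets"],               # 4. cash flow exceeds net income (low accruals)
        f["leverage"] < f["leverage_prev"],              # 5. falling long-term leverage
        f["current_ratio"] > f["current_ratio_prev"],    # 6. improving liquidity
        f["shares"] <= f["shares_prev"],                 # 7. no share dilution
        f["gross_margin"] > f["gross_margin_prev"],      # 8. improving gross margin
        f["asset_turnover"] > f["asset_turnover_prev"],  # 9. improving asset turnover
    ]
    return sum(signals)
```

A firm improving on every signal scores 9; a deteriorating firm scores near 0.  Piotroski found the high scorers among cheap stocks performed best.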



If you invest in microcap stocks, you can get about 14% a year.  If you also use a simple screen for value, that adds at least 2% a year.  If, in addition, you screen for improving fundamentals, that adds at least another 2% a year.  So that takes you to 18% a year, which compares quite well to the 10% a year you could get from an S&P 500 index fund.

What’s the difference between 18% a year and 10% a year?  If you invest $50,000 at 10% a year for 30 years, you end up with $872,000, which is good.  If you invest $50,000 at 18% a year for 30 years, you end up with $7.17 million, which is much better.
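The arithmetic behind those two figures is plain annual compounding; a quick check:

```python
# Verify the text's two scenarios with simple annual compounding.
def future_value(principal, annual_rate, years):
    """Grow a lump sum with annual compounding."""
    return principal * (1 + annual_rate) ** years

index_outcome    = future_value(50_000, 0.10, 30)   # ≈ $872,000
screened_outcome = future_value(50_000, 0.18, 30)   # ≈ $7.17 million
```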



An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time.

This outperformance increases significantly by focusing on cheap micro caps.  Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals.  We rank microcap stocks based on these and similar criteria.

There are roughly 10-20 positions in the portfolio.  The size of each position is determined by its rank.  Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost).  Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.

The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods.  We also aim to outpace the Russell Microcap Index by at least 2% per year (net).  The Boole Fund has low fees.


