(Image: Zen Buddha Silence by Marilyn Barbone.)
August 13, 2017
Science and technology are moving forward faster than ever before:
…this is just the beginning. Science is not static. Science is exploding exponentially all around us. (page 12)
Michio Kaku has devoted part of his life to trying to understand and predict the technologies of the future. His book, Physics of the Future (Anchor Books, 2012), is a result.
Kaku explains why his predictions may carry more weight than those of other futurists:
- His book is based on interviews with more than 300 top scientists.
- Every prediction is based on the known laws of physics, including the four fundamental forces (gravity, electromagnetism, nuclear strong, and nuclear weak).
- Prototypes of all the technologies mentioned in the book already exist.
- As a theoretical physicist, Kaku is an “insider” who really understands the technologies mentioned.
The ancients had little understanding of the forces of nature, so they invented the gods of mythology. Now, in the twenty-first century, we are in a sense becoming the gods of mythology based on the technological powers we are gaining.
We are on the verge of becoming a planetary, or Type I, civilization. This is inevitable as long as we don’t succumb to chaos or folly, notes Kaku.
But some things, like face-to-face meetings, appear not to have changed much. Kaku explains this using the Cave Man Principle, which refers to the fact that humans have not changed much in 100,000 years. People still like to see tourist attractions in person. People still like live performances. Many people still prefer taking courses in person rather than online. (In the future we will improve ourselves in many ways with genetic engineering, in which case the Cave Man Principle may no longer apply.)
Here are the chapters from Kaku’s book that I cover:
- Future of the Computer
- Future of Artificial Intelligence
- Future of Medicine
- Future of Energy
- Future of Space Travel
- Future of Humanity
FUTURE OF THE COMPUTER
Kaku quotes Helen Keller:
No pessimist ever discovered the secrets of the stars or sailed to the uncharted land or opened a new heaven to the human spirit.
According to Moore’s law, computer power doubles every eighteen months. Kaku writes that it’s difficult for us to grasp exponential growth, since our minds think linearly. Also, exponential growth is often not noticeable for the first few decades. But eventually things can change dramatically.
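Kaku's point about linear intuition can be made concrete with a little arithmetic. The sketch below (my own illustration, not from the book) computes the growth factor implied by a doubling every eighteen months: barely noticeable over a few years, then overwhelming over decades.

```python
# Moore's law as stated above: computing power doubles roughly every
# 18 months (1.5 years). This toy calculation shows why exponential
# growth looks unremarkable at first and then changes everything.

def growth_factor(years, doubling_period_years=1.5):
    """Factor by which computing power grows over the given span."""
    return 2 ** (years / doubling_period_years)

for years in (3, 15, 30):
    print(f"After {years} years: ~{growth_factor(years):,.0f}x the computing power")
```

After 3 years the factor is only 4x, but after 30 years it is over a million, which is why the first few decades of an exponential trend can pass almost unnoticed.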
Even the near future may be quite different, writes Kaku:
…In the coming decade, chips will be combined with supersensitive sensors, so that they can detect diseases, accidents, and emergencies and alert us before they get out of control. They will, to a degree, recognize the human voice and face and converse in a formal language. They will be able to create entire virtual worlds that we can only dream of today. Around 2020, the price of a chip may also drop to about a penny, which is the cost of scrap paper. Then we will have millions of chips distributed everywhere in our environment, silently carrying out our orders. (pages 25-26)
In order to discuss the future of science and technology, Kaku has divided each chapter into three parts: the near future (to 2030), the midcentury (2030 to 2070), and the far future (2070 to 2100).
In the near future, we will be able to surf the internet via special glasses or contact lenses. We will navigate with a handheld device or just by moving our hands. We will connect to our office via the lens. When we encounter a person, we will likely see their biography on our lens.
Also, we will be able to travel by driverless cars. This will allow us to use commute time to access the internet via our lenses or to do other work. Kaku notes that the term car accident may disappear from the language once driverless cars become advanced and ubiquitous enough. Instead of nearly 40,000 people dying in car accidents in the United States each year, there may be zero such deaths. Moreover, most traffic jams will be avoided once driverless cars can work together to keep traffic flowing freely.
At home, you will have a room with screens on every wall. If you’re lonely, your computer will set up a bridge game, arrange a date, plan a vacation, or organize a trip.
You won’t need to carry a computer with you. Computers will be embedded nearly everywhere. You’ll have constant access to computers and the internet via your glasses or contact lenses.
As computing power expands, you’ll probably be able to visit most places via virtual reality before actually going there in person. This includes the moon, Mars, and other currently exotic locations.
Kaku writes about visiting the most advanced version of a holodeck at the Aberdeen Proving Ground in Maryland. Sensors were placed on his helmet and backpack, and he walked on an Omnidirectional Treadmill. Kaku found that he could run, hide, sprint, or lie down. Everything he saw was very realistic. In the future, says Kaku, you’ll be able to experience total immersion in a variety of environments, such as dogfights with alien spaceships.
Your doctor – likely a human face appearing on your wall – will have all your genetic information. Also, you’ll be able to pass a tiny probe over your body and diagnose any illness. (MRI machines will be as small as a phone.) As well, tiny chips or sensors will be embedded throughout your environment. Most forms of cancer will be identified and destroyed before a tumor ever forms. Kaku says the word tumor will disappear from the human language.
Furthermore, we’ll probably be able to slow down and even reverse the aging process. We’ll be able to regrow organs based on computerized access to our genes. We’ll likely be able to reengineer our genes.
In the medium term (2030 to 2070):
- Moore’s law may reach an end. Computing power will still continue to grow exponentially, however, just not as fast as before.
- When you gaze at the sky, you’ll be able to see all the stars and constellations in great detail. You’ll be able to download informative lectures about anything you see. In fact, a real professor will appear right in front of you and you’ll be able to ask him or her questions during or after a lecture.
- If you’re a soldier, you’ll be able to see a detailed map including the current locations of all combatants, supplies, and dangers. You’ll be able to see through hills and other obstacles.
- If you’re a surgeon, you’ll see in great detail everything inside the body. You’ll have access to all medical records, etc.
- Universal translators will allow any two people to converse.
- True 3-D images will surround us when we watch a movie. 3-D holograms will become a reality.
In the far future (2070 to 2100):
We will be able to control computers directly with our minds.
John Donoghue at Brown University, who was confined to a wheelchair as a kid, has invented a chip that can be put in a paralyzed person’s brain. Through trial and error, the paralyzed person learns to move the cursor on a computer screen. Eventually they can read and write e-mails, and play computer games. Patients can also learn to control a motorized wheelchair – this allows paralyzed people to move themselves around.
Similarly, paralyzed people will be able to control mechanical arms and legs from their brains. Experiments with monkeys have already achieved this.
Eventually, as fMRI brain scans become far more advanced, it will be possible to read each thought in a brain. MRI machines themselves will go from being several tons to being smaller than phones and as thin as a dime.
Also in the far future, everything will have a tiny superconductor inside that can generate a burst of magnetic energy. In this way, we’ll be able to control objects just by thinking. Astronauts on earth will be able to control superhuman robotic bodies on the moon.
FUTURE OF ARTIFICIAL INTELLIGENCE
AI pioneer Herbert Simon, in 1965, said:
Machines will be capable, in twenty years, of doing any work a man can do.
Unfortunately, not much progress was made. In 1974, the first AI winter began as the U.S. and British governments cut off funding.
Progress resumed in the 1980s. But because the field was overhyped, another backlash occurred and a second AI winter began. Many people left the field as funding disappeared.
The human brain is a type of neural network. Neural networks follow Hebb’s rule: every time a correct decision is made, those neural pathways are reinforced. Neural networks learn the way a baby learns, by bumping into things and slowly learning from experience.
Furthermore, the neural network of a human brain is a massive parallel processor, which makes it different from most computers. Thus, even though digital computers send signals at the speed of light, whereas neuron signals only travel about 200 miles per hour, the human brain is still faster (on many tasks) due to its massive parallel processing.
Finally, neurons are not strictly digital: they can transmit continuous signals (anywhere between 0 and 1), not just discrete signals (only 0 or 1).
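Hebb's rule, mentioned above, can be sketched in a few lines of code. This is a toy illustration of the principle that pathways used together get stronger; the update rule and numbers are my own illustration, not from Kaku's book.

```python
# A toy illustration of Hebb's rule: connections between neurons that
# fire together are strengthened. The update below is the classic
# Hebbian form (delta_w = rate * pre * post); the specific values are
# illustrative only.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Strengthen a connection in proportion to joint activity."""
    return weight + learning_rate * pre * post

weight = 0.0
# Repeated co-activation of a pre- and postsynaptic neuron:
for _ in range(10):
    weight = hebbian_update(weight, pre=1.0, post=1.0)
print(weight)  # the pathway grows a little stronger each time
```

Each co-activation nudges the weight upward, which is the sense in which a neural network "learns from experience" rather than from explicit programming.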
Interestingly, robots are superfast at the calculations humans find laborious. But robots are still not good at visual pattern recognition, movement, and common sense. Robots can see far more detail than humans, but they have trouble making sense of what they see. And many things that we humans know by common sense, robots simply don’t understand.
There have been massive projects to try to give robots common sense by brute force – by programming in thousands of common sense things. But so far, these projects haven’t worked.
There are two ways to give a robot the ability to learn: top-down and bottom-up. An example of the top-down approach is STAIR (Stanford artificial intelligence robot). Everything is programmed into STAIR from the beginning. For STAIR to understand an image, it must compare the image to all the images already programmed into it.
The LAGR (learning applied to ground robots) uses the bottom-up approach. It learns everything from scratch, by bumping into things. LAGR slowly creates a mental map of its environment and constantly refines that map with each pass.
Robots will become ever more helpful in medicine:
For example, traditional surgery for a heart bypass operation involves opening a foot-long gash in the middle of the chest, which requires general anesthesia. Opening the chest cavity increases the possibility for infection and the length of time for recovery, creates intense pain and discomfort during the healing process, and leaves a disfiguring scar. But the da Vinci robotic system can vastly decrease all these. The da Vinci robot has four robotic arms, one for manipulating a video camera and three for precision surgery. Instead of making a long incision in the chest, it makes only several tiny incisions in the side of the body. There are 800 hospitals in Europe and North and South America that use this system; 48,000 operations were performed in 2006 alone using this robot. Surgery can also be done by remote control over the internet, so a world-class surgeon in a major city can perform surgery on a patient in an isolated rural area on another continent.
In the future, more advanced versions will be able to perform surgery on microscopic blood vessels, nerve fibers, and tissues by manipulating microscopic scalpels, tweezers, and needles, which is impossible today. In fact, in the future, only rarely will the surgeon slice the skin at all. Noninvasive surgery will become the norm.
Endoscopes (long tubes inserted into the body that can illuminate and cut tissue) will be thinner than thread. Micromachines smaller than the period at the end of this sentence will do much of the mechanical work. (pages 93-94)
But to make robots intelligent, scientists must learn more about how the human brain works.
The human brain has roughly three levels. The reptilian brain is near the base of the skull and controls balance, aggression, searching for food, etc. At the next level, there is the monkey brain, or the limbic system, located at the center of our brain. Animals that live in groups have especially well-developed limbic systems, which allow them to communicate via body language, grunts, whines, and gestures, notes Kaku.
The third level of the human brain is the front and outer part – the cerebral cortex. This level defines humanity and is responsible for the ability to think logically and rationally.
Scientists still have a way to go in understanding in sufficient detail how the human brain works.
By midcentury, scientists will be able to reverse engineer the brain. In other words, scientists will be able to take apart the brain, neuron by neuron, and then simulate each individual neuron on a huge computer. Kaku quotes Fred Hapgood from MIT:
Discovering how the brain works – exactly how it works, the way we know how a motor works – would rewrite almost every text in the library.
By midcentury, we should have both the computing power to simulate the brain and decent maps of the brain’s neural architecture, writes Kaku. However, it may take longer to understand fully how the human brain works or to create a machine that can duplicate the human brain.
For example, says Kaku, the Human Genome Project is like a dictionary with no definitions. We can spell out each gene in the human body. But we still don’t know what each gene does exactly. Similarly, scientists in 1986 successfully mapped 302 nerve cells and 6,000 chemical synapses in the tiny worm, C. elegans. But scientists still can’t fully translate this map into the worm’s behavior.
Thus, it may take several additional decades, even after the human brain is accurately mapped, before scientists understand how all the parts of the human brain function together.
When will machines become conscious? Human consciousness involves sensing and recognizing the environment, self-awareness, and planning for the future. If machines move gradually towards consciousness, it may be difficult to pinpoint exactly when they do become conscious. On the other hand, something like the Turing test may help to identify when machines have become practically indistinguishable from humans.
When will robots exceed humans? Douglas Hofstadter has observed that, even if superintelligent robots greatly exceed us, they are still in a sense our children.
What if superintelligent robots can make even smarter copies of themselves? They might thereby gain the ability to evolve exponentially. Some think superintelligent robots might end up turning the entire universe into the ultimate supercomputer.
The singularity is the term used to describe the event when robots develop the ability to evolve themselves exponentially. The inventor Ray Kurzweil has become a spokesman for the singularity. But he thinks humans will merge with this digital superintelligence. Kaku quotes Kurzweil:
It’s not going to be an invasion of intelligent machines coming over the horizon. We’re going to merge with this technology… We’re going to put these intelligent devices in our bodies and brains to make us live longer and healthier.
Kaku believes that “friendly AI” is the most likely scenario, as opposed to AI that turns against us. The term “friendly AI” was coined by Eliezer Yudkowsky, who founded the Singularity Institute for Artificial Intelligence – now called the Machine Intelligence Research Institute (MIRI).
One problem is that the military is the largest funder of AI research. On the other hand, in the future, more and more funding will come from the civilian commercial sector (especially in Japan).
Kaku notes that a more likely scenario than “friendly AI” alone is friendly AI integrated with genetically enhanced humans.
One option invented by Rodney Brooks, former director of the MIT Artificial Intelligence Lab, is for an army of “bugbots” with minimal programming that would learn from experience. Such an army might turn into a practical way to explore the solar system and beyond. One by-product of Brooks’ idea is the Mars Rover.
Some researchers including Brooks and Marvin Minsky have lamented the fact that AI scientists have often followed too closely the current dominant AI paradigm. AI paradigms have included a telephone-switching network, a steam engine, and a digital computer.
Moreover, Minsky has observed that many AI researchers have followed the paradigm of physics. Thus, they have sought a single, unifying equation underlying all intelligence. But, says Minsky, there is no such thing:
Evolution haphazardly cobbled together a bunch of techniques we collectively call consciousness. Take apart the brain, and you find a loose collection of minibrains, each designed to perform a specific task. He calls this ‘the society of minds’: that consciousness is actually the sum of many separate algorithms and techniques that nature stumbled upon over millions of years. (page 123)
Brooks predicts that, by 2100, there will be very intelligent robots. But we will be part robot and part connected with robots.
He sees this progressing in stages. Today, we have the ongoing revolution in prostheses, inserting electronics directly into the human body to create realistic substitutes for hearing, sight, and other functions. For example, the artificial cochlea has revolutionized the field of audiology, giving back the gift of hearing to the deaf. These artificial cochlea work by connecting electronic hardware with biological ‘wetware,’ that is, neurons…
Several groups are exploring ways to assist the blind by creating artificial vision, connecting a camera to the human brain. One method is to directly insert the silicon chip into the retina of the person and attach the chip to the retina’s neurons. Another is to connect the chip to a special cable that is connected to the back of the skull, where the brain processes vision. These groups, for the first time in history, have been able to restore a degree of sight to the blind… (pages 124-125)
Scientists have also successfully created a robotic hand. One patient, Robin Ekenstam, had his right hand amputated. Scientists have given him a robotic hand with four motors and forty sensors. The doctors connected Ekenstam’s nerves to the chips in the artificial hand. As a result, Ekenstam is able to use the artificial hand as if it were his own hand. He feels sensations in the artificial fingers when he picks stuff up. In short, the brain can control the artificial hand, and the artificial hand can send feedback to the brain.
Furthermore, the brain is extremely plastic because it is a neural network. So artificial appendages or sense organs may be attached to the brain at different locations, and the brain learns how to control this new attachment.
And if today’s implants and artificial appendages can restore hearing, vision, and function, then tomorrow’s may give us superhuman abilities. Even the brain might be made more intelligent by injecting new neurons, as has successfully been done with rats. Similarly, genetic engineering will become possible. As Brooks commented:
We will no longer find ourselves confined by Darwinian evolution.
Another way people will merge with robots is with surrogates and avatars. For instance, we may be able to control super robots as if they were our own bodies, which could be useful for a variety of difficult jobs including those on the moon.
Robot pioneer Hans Moravec has described one way this could happen:
…we might merge with our robot creations by undergoing a brain operation that replaces each neuron of our brain with a transistor inside a robot. The operation starts when we lie beside a robot without a brain. A robotic surgeon takes every cluster of gray matter in our brain, duplicates it transistor by transistor, connects the neurons to the transistors, and puts the transistors into the empty robot skull. As each cluster of neurons is duplicated in the robot, it is discarded… After the operation is over, our brain has been entirely transferred into the body of a robot. Not only do we have a robotic body, we have also the benefits of a robot: immortality in superhuman bodies that are perfect in appearance. (pages 130-131)
FUTURE OF MEDICINE
Kaku quotes Nobel Laureate James Watson:
No one really has the guts to say it, but if we could make ourselves better human beings by knowing how to add genes, why wouldn’t we?
Nobel Laureate David Baltimore:
I don’t really think our bodies are going to have any secrets left within this century. And so, anything that we can manage to think about will probably have a reality.
Kaku mentions biologist Robert Lanza:
Today, Lanza is chief science officer of Advanced Cell Technology, with hundreds of papers and inventions to his credit. In 2003, he made headlines when the San Diego Zoo asked him to clone a banteng, an endangered species of wild ox, from the body of one that had died twenty-five years before. Lanza successfully extracted usable cells from the carcass, processed them, and sent them to a farm in Utah. There, the fertilized cell was implanted into a female cow. Ten months later he got the news that his latest creation had just been born. On another day, he might be working on ’tissue engineering,’ which may eventually create a human body shop from which we can order new organs, grown from our own cells, to replace organs that are diseased or have worn out. Another day, he could be working on cloning human embryo cells. He was part of the historic team that cloned the world’s first human embryo for the purpose of generating embryonic stem cells. (page 138)
Austrian physicist and philosopher Erwin Schrödinger, one of the founders of quantum theory, wrote an influential book, What is Life? He speculated that all life was based on a code of some sort, and that this was encoded on a molecule.
Physicist Francis Crick, inspired by Schrödinger’s book, teamed up with geneticist James Watson to prove that DNA was this fabled molecule. In 1953, in one of the most important discoveries of all time, Watson and Crick unlocked the structure of DNA, a double helix. When unraveled, a single strand of DNA stretches about 6 feet long. On it is contained a sequence of 3 billion nucleic acids, called A, T, C, G (adenine, thymine, cytosine, and guanine), that carry the code. By reading the precise sequence of nucleic acids placed along the DNA molecule, one could read the book of life. (page 140)
Eventually everyone will have his or her genome – listing approximately 25,000 genes – cheaply available in digital form. David Baltimore:
Biology is today an information science.
The quantum theory has given us amazingly detailed models of how the atoms are arranged in each protein and DNA molecule. Atom for atom, we know how to build the molecules of life from scratch. And gene sequencing – which used to be a long, tedious, and expensive process – is all automated with robots now.
Welcome to bioinformatics:
…this is opening up an entirely new branch of science, called bioinformatics, or using computers to rapidly scan and analyze the genome of thousands of organisms. For example, by inserting the genomes of several hundred individuals suffering from a certain disease into a computer, one might be able to calculate the precise location of the damaged DNA. In fact, some of the world’s most powerful computers are involved in bioinformatics, analyzing millions of genes found in plants and animals for certain key genes. (page 143)
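The bioinformatics idea in the passage above — feeding the genomes of affected individuals into a computer to locate the damaged DNA — can be sketched in miniature. The sequences, positions, and function name below are made up for illustration; real genome-wide association work is vastly more sophisticated.

```python
# A toy sketch of the comparison described above: find positions where
# every "affected" genome differs from every "unaffected" genome --
# candidate sites of damaged DNA. Sequences here are illustrative only.

def variant_positions(affected, unaffected):
    """Return positions where the two groups share no base in common."""
    length = len(affected[0])
    candidates = []
    for i in range(length):
        affected_bases = {genome[i] for genome in affected}
        unaffected_bases = {genome[i] for genome in unaffected}
        if affected_bases.isdisjoint(unaffected_bases):
            candidates.append(i)
    return candidates

affected = ["ACGTAC", "ACGTAG"]
unaffected = ["ACCTAC", "ACCTAG"]
print(variant_positions(affected, unaffected))  # prints [2]
```

The principle is the same at scale: with millions of sequenced genomes, a computer can flag the loci that consistently separate the sick from the healthy (or, as Kaku notes later, the old from the young).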
You’ll talk to your doctor – likely a software program – on the wall screen. Sensors will be embedded in your bathroom and elsewhere, able to detect cancer cells years before tumors form. If there is evidence of cancer, nanoparticles will be injected into your bloodstream and will deliver cancer-fighting drugs directly to the cancer cells.
If your robodoc cannot cure the disease or the problem, then you will simply grow a new organ or new tissue as needed. (There are over 91,000 people in the United States waiting for an organ transplant.)
…So far, scientists can grow skin, blood, blood vessels, heart valves, cartilage, bone, noses, and ears in the lab from your own cells. The first major organ, the bladder, was grown in 2007, the first windpipe in 2009… Nobel Laureate Walter Gilbert told me that he foresees a time, just a few decades into the future, when practically every organ of the body will be grown from your own cells. (page 144)
Eventually cloning will be possible for humans.
The concept of cloning hit the world headlines in 1997, when Ian Wilmut of the University of Edinburgh was able to clone Dolly the sheep. By taking a cell from an adult sheep, extracting the DNA within its nucleus, and then inserting this nucleus into an egg cell, Wilmut was able to accomplish the feat of bringing back a genetic copy of the original. (page 150)
Successes in animal studies will be translated to human studies. First, diseases caused by a single mutated gene will be cured. Then, diseases caused by multiple mutated genes will be cured.
At some point, there will be “designer children.” Kaku quotes Harvard biologist E. O. Wilson:
Homo sapiens, the first truly free species, is about to decommission natural selection, the force that made us… Soon we must look deep within ourselves and decide what we wish to become.
The “smart mouse” gene was isolated in 1999. Mice that have it are better able to navigate mazes and remember things. Smart mouse genes work by increasing the presence of a specific neurotransmitter, which thereby makes it easier for the mouse to learn. This supports Hebb’s rule: learning occurs when certain neural pathways are reinforced.
It will take decades to iron out side effects and unwanted consequences of genetic engineering. For instance, scientists now believe that there is a healthy balance between forgetting and remembering. It’s important to remember key lessons and specific skills. But it’s also important not to remember too much. People need a certain optimism in order to make progress and evolve.
Scientists now know what aging is: Aging is the accumulation of errors at the genetic and cellular level. These errors have various causes. For instance, metabolism creates free radicals and oxidation, which damage the molecular machinery of cells, writes Kaku. Errors can also accumulate as ‘junk’ molecular debris.
The buildup of genetic errors is a by-product of the second law of thermodynamics: entropy always increases. However, there’s an important loophole, notes Kaku. Entropy can be reduced in one place as long as it is increased at least as much somewhere else. This means that aging is reversible. Kaku quotes Richard Feynman:
There is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.
…The scientific world was stunned when Michael Rose of the University of California at Irvine announced that he was able to increase the lifespan of fruit flies by 70 percent by selective breeding. His ‘superflies,’ or Methuselah flies, were found to have higher quantities of the antioxidant superoxide dismutase (SOD), which can slow down the damage caused by free radicals. In 1991, Thomas Johnson of the University of Colorado at Boulder isolated a gene, which he dubbed age-1, that seems to be responsible for aging in nematodes and increases their lifespan by 110 percent…
…isolating the genes responsible for aging could be accelerated in the future, especially when all of us have our genomes on CD-ROM. By then, scientists will have a tremendous database of billions of genes that can be analyzed by computers. Scientists will be able to scan millions of genomes of two groups of people, the young and the old. By comparing the two groups, one can identify where aging takes place at the genetic level. A preliminary scan of these genes has already isolated about sixty genes on which aging seems to be concentrated. (pages 168-169)
Scientists think aging is only 35 percent determined by genes. Moreover, just as a car ages in the engine, so human aging is concentrated in the engine of the cell, the mitochondria. This has allowed scientists to narrow their search for “age genes” and also to look for ways to accelerate gene repair inside the mitochondria, possibly slowing or reversing aging. Soon we could live to 150. By 2100, we could live well beyond that.
If you lower your daily calorie intake by 30 percent, your lifespan is increased by roughly 30 percent. This is called calorie restriction. Every organism studied so far exhibits this phenomenon.
…Animals given this restricted diet have fewer tumors, less heart disease, a lower incidence of diabetes, and fewer diseases related to aging. In fact, caloric restriction is the only known mechanism guaranteed to increase the lifespan that has been tested repeatedly, over almost the entire animal kingdom, and it works every time. Until recently, the only known species that still eluded researchers of caloric restriction were the primates, of which humans are a member, because they live so long. (page 170)
Now scientists have shown that caloric restriction also works for primates: less diabetes, less cancer, less heart disease, and better health and longer life.
In 1991, Leonard Guarente of MIT, David Sinclair of Harvard, and others discovered the gene SIR2 in yeast cells. SIR2 is activated when it detects that the energy reserves of a cell are low. The SIR2 gene has a counterpart in mice and people called the SIRT genes, which produce proteins called sirtuins. Scientists looked for chemicals that activate the sirtuins and found the chemical resveratrol.
Scientists have found that sirtuin activators can protect mice from an impressive variety of diseases, including lung and colon cancer, melanoma, lymphoma, type 2 diabetes, cardiovascular disease, and Alzheimer’s disease, according to Sinclair. If even a fraction of these diseases can be treated in humans via sirtuins, it would revolutionize all medicine. (page 171)
Kaku reports what William Haseltine, biotech pioneer, told him:
The nature of life is not mortality. It’s immortality. DNA is an immortal molecule. That molecule first appeared perhaps 3.5 billion years ago. That self-same molecule, through duplication, is around today… It’s true that we run down, but we’ve talked about projecting way into the future the ability to alter that. First to extend our lives two- or three-fold. And perhaps, if we understand the brain well enough, to extend both our body and our brain indefinitely. And I don’t think that will be an unnatural process. (page 173)
Kaku concludes that extending life span in the future will likely result from a combination of activities:
- growing new organs as they wear out or become diseased, via tissue engineering and stem cells
- ingesting a cocktail of proteins and enzymes designed to increase cell repair mechanisms, regulate metabolism, reset the biological clock, and reduce oxidation
- using gene therapy to alter genes that may slow down the aging process
- maintaining a healthy lifestyle (exercise and a good diet)
- using nanosensors to detect diseases like cancer years before they become a problem
Kaku quotes Richard Dawkins:
I believe that by 2050, we shall be able to read the language [of life]. We shall feed the genome of an unknown animal into a computer which will reconstruct not only the form of the animal but the detailed world in which its ancestors lived…, including their predators or prey, parasites or hosts, nesting sites, and even hopes and fears.
Dawkins believes, writes Kaku, that once the missing gene has been mathematically created by computer, we might be able to re-create the DNA of this organism, implant it in a human egg, and put the egg in a woman, who will give birth to our ancestor. After all, the entire genome of our nearest genetic neighbor, the long-extinct Neanderthal, has now been sequenced.
For the most part, nanotechnology is still a very young science. But one aspect of nanotechnology is now beginning to affect the lives of everyone and has already blossomed into a $40 billion worldwide industry – microelectromechanical systems (MEMS) – that includes everything from ink-jet cartridges, air bag sensors, and displays to gyroscopes for cars and airplanes. MEMS are tiny machines so small they can easily fit on the tip of a needle. They are created using the same etching technology used in the computer business. Instead of etching transistors, engineers etch tiny mechanical components, creating machine parts so small you need a microscope to see them. (pages 207-208)
Airbags can deploy in 1/25th of a second thanks to MEMS accelerometers that can detect the sudden braking of your car. This has already saved thousands of lives.
One day nanomachines may be able to replace surgery entirely. Cutting the skin may become completely obsolete. Nanomachines will also be able to find and kill cancer cells in many cases. These nanomachines can be guided by magnets.
DNA fragments can be embedded on a tiny chip using transistor etching technology. The DNA fragments can bind to specific gene sequences. Then, using a laser, thousands of genes can be read at one time, rather than one by one. Prices for these DNA chips continue to plummet due to Moore’s law.
Small electronic chips will be able to do the work that is now done by an entire laboratory. These chips will be embedded in our bathrooms. Currently, some biopsies or chemical analyses can cost hundreds of thousands of dollars and take weeks. In the future, they may cost pennies and take just a few minutes.
In 2004, Andre Geim and Kostya Novoselov of the University of Manchester isolated graphene from graphite. They won the Nobel Prize for their work. Graphene is a single sheet of carbon, no more than one atom thick. And it can conduct electricity. It’s also the strongest material ever tested. (Kaku notes that an elephant balanced on a pencil, with the pencil point resting on a sheet of graphene, would not tear it.)
Novoselov’s group used electrons to carve out channels in the graphene, thereby making the world’s smallest transistor: one atom thick and ten atoms across. (The smallest transistors currently are about 30 nanometers. Novoselov’s transistors are 30 times smaller.)
The real challenge now is how to connect molecular transistors.
The most ambitious proposal is to use quantum computers, which actually compute on individual atoms. Quantum computers are extremely powerful. The CIA has looked at them for their code-breaking potential.
Quantum computers actually exist. Atoms pointing up can be interpreted as “1” and pointing down can be interpreted as “0.” When you send an electromagnetic pulse in, some atoms switch directions from “1” to “0”, or vice versa, and this constitutes a calculation.
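The “calculation” described above, an electromagnetic pulse flipping an atom between “1” and “0,” can be sketched as a one-qubit simulation in plain Python. This is an illustrative model, not anything from Kaku’s book; the `rx` function plays the role of the pulse.

```python
import math

def apply_gate(gate, state):
    # multiply a 2x2 gate matrix by a 2-component state vector
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def rx(theta):
    # rotation about the x-axis: a pulse of duration proportional to theta
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -1j * s], [-1j * s, c]]

ket0 = [1, 0]                               # atom pointing "up" = |0>
flipped = apply_gate(rx(math.pi), ket0)     # a full pi-pulse flips it
prob1 = abs(flipped[1]) ** 2
print(prob1)                                # -> 1.0: certain to read "1"

half = apply_gate(rx(math.pi / 2), ket0)    # a half-pulse leaves it
print(abs(half[1]) ** 2)                    # ~0.5: an equal superposition
```

The half-pulse case hints at why quantum computers are powerful: unlike a classical bit, the atom can occupy a blend of “0” and “1” until it is measured.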
The problem now is that the tiniest disturbances from the outside world can easily disrupt the delicate balance of the quantum computer, causing its atoms to “decohere,” throwing off its calculations. (When atoms are “coherent,” they vibrate in phase with one another.) Kaku writes that whoever solves this problem will win a Nobel Prize and become the richest person on earth.
Scientists are working on programmable matter the size of grains of sand. These grains are called “catoms” (for claytronic atoms), and eventually will be able to form almost any object. In fact, many common consumer products may be replaced by software programs sent over the internet. If you have to replace an appliance, for instance, you may just have to press a button and a group of catoms will turn into the object you need.
In the far future, the goal is to create a molecular assembler, or “replicator,” which can be used to create anything. This would be the crowning achievement of engineering, says Kaku. One problem is the sheer number of atoms that would need to be re-arranged. But this could be solved by self-replicating nanobots.
A version of this “replicator” already exists. Mother Nature can take the food we eat and create a baby in nine months. DNA molecules guide the actions of ribosomes – which cut and splice molecules in the right order – using the proteins and amino acids in your food, notes Kaku. Mother Nature often uses enzymes in water solution in order to facilitate the chemical reactions between atoms. (That’s not necessarily a limitation for scientists, since not all chemical reactions involve water or enzymes.)
FUTURE OF ENERGY
Kaku writes that in this century, we will harness the power of the stars. In the short term, this means solar and hydrogen will replace fossil fuels. In the long term, it means we’ll tap the power of fusion and even solar energy from outer space. Also, cars and trains will be able to float using magnetism. This can drastically reduce our use of energy, since most energy today is used to overcome friction.
Currently, fossil fuels meet about 80 percent of the world’s energy needs. Eventually, alternative sources of energy will become much cheaper than fossil fuels, especially if you factor in negative externalities, i.e., pollution and global warming.
Electric vehicles will reduce the use of fossil fuels. But we also have to transform the way electricity is generated. Solar power will keep getting cheaper. But much more clean energy will be required to gradually replace fossil fuels.
Nuclear fission can create a great deal of energy without producing huge amounts of greenhouse gases. However, nuclear fission generates enormous quantities of nuclear waste, which is radioactive for thousands to tens of millions of years.
Another problem with nuclear energy is that the price of uranium enrichment continues to drop as technologies improve. This increases the odds that terrorists could acquire nuclear weapons.
Within a few decades, global warming will become even more obvious. The signs are already clear, notes Kaku:
- The thickness of Arctic ice has decreased by over 50 percent in just the past fifty years.
- Greenland’s ice sheet continues to shrink. (If all of Greenland’s ice melted, sea levels would rise about 20 feet around the world.)
- Large chunks of Antarctica’s ice, which have been stable for tens of thousands of years, are gradually breaking off. (If all of Antarctica’s ice were to melt, sea levels would rise about 180 feet around the world.)
- For every vertical foot the ocean rises, the horizontal spread is about 100 feet.
- Temperatures started to be reliably recorded in the late 1700s; 1995, 2000, 2005, and 2010 ranked among the hottest years ever recorded. Levels of carbon dioxide are rising dramatically.
- As the earth heats up, tropical diseases are gradually migrating northward.
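The rule of thumb above, 100 feet of horizontal spread per vertical foot of rise, can be turned into a quick estimate. This is just a sketch of that ratio; the slope of the coastline is assumed gentle.

```python
# Rough rule of thumb from the text: each vertical foot of sea-level
# rise pushes the shoreline inland about 100 feet on a gentle slope.
def inland_spread_feet(rise_feet, spread_ratio=100):
    return rise_feet * spread_ratio

# If all of Greenland's ice melted (~20 ft of rise), this rule gives:
print(inland_spread_feet(20))   # -> 2000 feet of horizontal retreat
```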
It may be possible to genetically engineer life-forms that can absorb large amounts of carbon dioxide. But we must be careful about unintended side effects on ecosystems.
Eventually fusion power may solve most of our energy needs. Fusion powers the sun and lights up all the stars.
Anyone who can successfully master fusion power will have unleashed unlimited eternal energy. And the fuel for these fusion plants comes from ordinary seawater. Pound for pound, fusion power releases 10 million times more power than gasoline. An 8-ounce glass of water is equal to the energy content of 500,000 barrels of petroleum. (page 272)
It’s extremely difficult to heat hydrogen gas to tens of millions of degrees. But scientists will probably master fusion power within the next few decades. And a fusion plant creates insignificant amounts of nuclear waste compared to nuclear fission.
One way scientists are trying to produce nuclear fusion is by focusing huge lasers onto a tiny point. If the resulting shock waves are powerful enough, they can compress and heat fuel to the point of creating nuclear fusion. This approach is called inertial confinement fusion.
The other main approach used by scientists to try to create fusion is magnetic confinement fusion. A huge, hollow doughnut-shaped device made of steel and surrounded by magnetic coils is used to attempt to squeeze hydrogen gas enough to heat it to millions of degrees.
What is most difficult in this approach is squeezing the hydrogen gas uniformly. Otherwise, it bulges out in complex ways. Scientists are using supercomputers to try to control this process. (When stars form, gravity causes the uniform collapse of matter, creating a sphere of nuclear fusion. So stars form easily.)
Most of the energy we burn is used to overcome friction. Kaku observes that a layer of ice between major cities would drastically cut the need for energy to overcome friction.
In 1911, scientists discovered that cooling mercury to four degrees (Kelvin) above absolute zero causes it to lose all electrical resistance. Thus mercury at that temperature is a superconductor – electrons can pass through with virtually no loss of energy. The disadvantage is that you have to cool it to near absolute zero using liquid helium, which is very expensive.
But in 1986, scientists learned that certain ceramics become superconductors at 92 degrees (Kelvin) above absolute zero. Some ceramic superconductors have been created at 138 degrees (Kelvin) above absolute zero. This is important because nitrogen liquefies at 77 degrees (Kelvin). Thus, liquid nitrogen can be used to cool these ceramics, which is far less expensive.
Remember that most energy is used to overcome friction. Even for electricity, up to 30 percent can be lost during transmission. But experimental evidence suggests that electricity in a superconducting loop can last 100,000 years or perhaps billions of years. Thus, superconductors eventually will allow us to dramatically increase our energy efficiency by virtually eliminating friction.
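To see why eliminating resistance matters, here is a toy transmission-line calculation. The voltage, power, and resistance figures are made up for illustration; only the standard I²R loss formula is taken as given.

```python
# Fraction of transmitted power lost as heat in the line: I^2 * R / P,
# where I = P / V is the current. Numbers below are illustrative only.
def loss_fraction(power_w, volts, line_resistance_ohm):
    current = power_w / volts
    return current**2 * line_resistance_ohm / power_w

# 100 MW sent at 200 kV over a line with 40 ohms of resistance:
print(loss_fraction(100e6, 200e3, 40))  # -> 0.1, i.e. 10% lost as heat

# A superconducting line has essentially zero resistance:
print(loss_fraction(100e6, 200e3, 0))   # -> 0.0
```

The same formula shows why grids use high voltages in the first place: doubling the voltage quarters the current-squared losses.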
Moreover, room temperature superconductors could produce supermagnets capable of lifting cars and trains.
The reason the magnet floats is simple. Magnetic lines of force cannot penetrate a superconductor. This is the Meissner effect. (When a magnetic field is applied to a superconductor, a small electric current forms on the surface and cancels it, so the magnetic field is expelled from the superconductor.) When you place the magnet on top of the ceramic, its field lines bunch up since they cannot pass through the ceramic. This creates a ‘cushion’ of magnetic field lines, which are all squeezed together, thereby pushing the magnet away from the ceramic, making it float. (page 289)
Room temperature superconductors will allow trains and cars to move without any friction. This will revolutionize transportation. Compressed air could get a car going. Then the car could float almost forever as long as the surface is flat.
Even without room temperature superconductors, some countries have produced magnetically levitated (maglev) trains. A maglev train does lose energy to air friction. In a vacuum, a maglev train might be able to travel at 4,000 miles per hour.
Later this century, space solar power may become possible, since there is 8 times more sunlight in space than on the surface of the earth. A reduced cost of space travel may make it feasible to send hundreds of solar satellites into space. One challenge is that these solar satellites would have to be 22,000 miles in space, much farther than satellites in near-earth orbits of 300 miles. But the main problem is the cost of booster rockets. (Companies like Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin are working to reduce the cost of rockets by making them reusable.)
FUTURE OF SPACE TRAVEL
Kaku quotes Carl Sagan:
We have lingered long enough on the shores of the cosmic ocean. We are ready at last to set sail for the stars.
Kaku observes that the Kepler satellite will be replaced by more sensitive satellites:
So in the near future, we should have an encyclopedia of several thousand planets, of which perhaps a few hundred will be very similar to earth in size and composition. This, in turn, will generate more interest in one day sending a probe to these distant planets. There will be an intense effort to see if these earthlike twins have liquid-water oceans and if there are any radio emissions from intelligent life-forms. (page 297)
Since liquid water is probably the fluid in which DNA and proteins were first formed, scientists had believed life in our solar system could only exist on earth or maybe Mars. But recently, scientists realized that life could exist under the ice cover of the moons of Jupiter.
For instance, the ocean under the ice of the moon Europa is estimated to be twice the total volume of the earth’s oceans. And the surface of Europa is continually heated by the tidal forces caused by gravity.
It had been thought that life required sunlight. But in 1977, life was found on earth, deep under water in the Galapagos Rift. Energy from undersea volcano vents provided enough energy for life. Some scientists have even suggested that DNA may have formed not in a tide pool, but deep underwater near such volcano vents. Some of the most primitive forms of DNA have been found on the bottom of the ocean.
In the future, new types of space satellite may be able to detect not only radiation from colliding black holes, but also even new information about the Big Bang – a singularity involving extreme density and temperature. Kaku:
At present, there are several theories of the pre-big bang era coming from string theory, which is my specialty. In one scenario, our universe is a huge bubble of some sort that is continually expanding. We live on the skin of this gigantic bubble (we are stuck on the bubble like flies on flypaper). But our bubble universe coexists in an ocean of other bubble universes, making up the multiverse of universes, like a bubble bath. Occasionally, these bubbles might collide (giving us what is called the big splat theory) or they may fission into smaller bubbles (giving us what is called eternal inflation). Each of these pre-big bang theories predicts how the universe should create gravity radiation moments after the initial explosion. (page 301)
Space travel is very expensive. It costs a great deal of money – perhaps $100,000 per pound – to send a person to the moon. It costs much more to send a person to Mars.
Robotic missions are far cheaper than manned missions. And robotic missions can explore dangerous environments, don’t require costly life support, and don’t have to come back.
Kaku next describes a mission to Mars:
Once our nation has made a firm commitment to go to Mars, it may take another twenty to thirty years to actually complete the mission. But getting to Mars represents a quantum leap in difficulty compared to reaching the moon: it takes only three days to reach the moon, but six months to a year to reach Mars.
In July 2009, NASA scientists gave a rare look at what a realistic Mars mission might look like. Astronauts would take approximately six months or more to reach Mars, then spend eighteen months on the planet, then take another six months for the return voyage.
Altogether about 1.5 million pounds of equipment would need to be sent to Mars, more than the amount needed for the $100 billion space station. To save on food and water, the astronauts would have to purify their own waste and then use it to fertilize plants during the trip and while on Mars. With no air, soil, or water, everything must be brought from earth. It will be impossible to live off the land, since there is no oxygen, liquid water, animals, or plants on Mars. The atmosphere is almost pure carbon dioxide, with an atmospheric pressure only 1 percent that of earth. Any rip in a space suit would create rapid depressurization and death. (page 312)
Although a day on Mars is 24.6 hours, a year on Mars is almost twice as long as a year on earth. The temperature never goes above the melting point of ice. And the dust storms are ferocious and often engulf the entire planet.
Eventually astronauts may be able to terraform Mars to make it more hospitable for life. The simplest approach would be to inject methane gas into the atmosphere, which might be able to trap sunlight thereby raising the temperature of Mars above the melting point of ice. (Methane gas is an even more potent greenhouse gas than carbon dioxide.) Once the temperature rises, the underground permafrost may begin to thaw. Riverbeds would fill with water, and lakes and oceans might form again. This would release more carbon dioxide, leading to a positive feedback loop.
Another possible way to terraform Mars would be to deflect a comet towards the planet. Comets are made mostly of water ice. A comet hitting Mars’ atmosphere would slowly disintegrate, releasing water in the form of steam into the atmosphere.
The polar regions of Mars are made of frozen carbon dioxide and ice. It might be possible to deflect a comet (or moon or asteroid) to hit the ice caps. This would melt the ice while simultaneously releasing carbon dioxide, which may set off a positive feedback loop, releasing even more carbon dioxide.
Once the temperature of Mars rises to the melting point of ice, pools of water may form, and certain forms of algae that thrive on earth in the Antarctic may be introduced on Mars. They might actually thrive in the atmosphere of Mars, which is 95 percent carbon dioxide. They could also be genetically modified to maximize their growth on Mars. These algae pools could accelerate terraforming in several ways. First, they could convert carbon dioxide into oxygen. Second, they would darken the surface color of Mars, so that it absorbs more heat from the sun. Third, since they grow by themselves without any prompting from outside, it would be a relatively cheap way to change the environment of the planet. Fourth, the algae can be harvested for food. Eventually these algae lakes would create soil and nutrients that may be suitable for plants, which in turn would accelerate the production of oxygen. (page 315)
Scientists have also considered the possibility of building solar satellites around Mars, causing the temperature to rise and the permafrost to begin melting, setting off a positive feedback loop.
2070 to 2100: A Space Elevator and Interstellar Travel
Near the end of the century, scientists may finally be able to construct a space elevator. With a sufficiently long cable from the surface of the earth to outer space, centrifugal force caused by the spinning of the earth would be enough to keep the cable in the sky. Although steel likely wouldn’t be strong enough for this project, carbon nanotubes would be.
One challenge is to create a carbon nanotube cable that is 50,000 miles long. Another challenge is that space satellites in orbit travel at 18,000 miles per hour. If a satellite collided with the space elevator, it would be catastrophic. So the space elevator must be equipped with special rockets to move it out of the way of passing satellites.
Another challenge is turbulent weather on earth. The space elevator must be flexible enough, perhaps anchored to an aircraft carrier or oil platform. Moreover, there must be an escape pod in case the cable breaks.
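The physics behind the cable can be checked with a short calculation: gravity and centrifugal acceleration balance exactly at geostationary altitude, and the cable must extend well beyond that point (hence the 50,000-mile length) so the outward pull on the far end keeps it taut. A sketch using standard constants, not figures from the book:

```python
import math

G_M_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1       # seconds for one full rotation of Earth
R_EARTH = 6.378e6            # equatorial radius, m

# A point co-rotating with Earth feels gravity (GM/r^2) inward and
# centrifugal acceleration (omega^2 * r) outward; they balance where
#   GM / r^2 = omega^2 * r   ->   r = (GM / omega^2)^(1/3)
omega = 2 * math.pi / SIDEREAL_DAY
r = (G_M_EARTH / omega**2) ** (1.0 / 3.0)

altitude_km = (r - R_EARTH) / 1000
print(round(altitude_km))    # -> ~35786 km (about 22,000 miles up)
```

Below that altitude every point on the cable is pulled net-inward, which is why the counterweight end must reach far beyond geostationary orbit.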
Also by the end of the century, there will be outposts on Mars and perhaps in the asteroid belt. The next goal would be travelling to a star. A conventional chemical rocket would take 70,000 years to reach the nearest star. But there are several proposals for an interstellar craft:
- solar sail
- nuclear rocket
- ramjet fusion
Although light has no mass, it has momentum and so can exert pressure. The pressure is super tiny. But if the sail is big enough and we wait long enough, sunlight in space – which is 8 times more intense than on earth – could drive a spacecraft. The solar sail would likely be miles wide. The craft would have to circle the sun for a few years, gaining more and more momentum. Then it could spiral out of the solar system and perhaps reach the nearest star in 400 years.
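The “super tiny” pressure can be put in numbers. For a perfectly reflecting sail, radiation pressure is 2I/c, where I is the light intensity. A hedged back-of-envelope; the sail dimensions are illustrative:

```python
C = 2.998e8          # speed of light, m/s
INTENSITY = 1361.0   # solar constant near Earth's orbit, W/m^2

def sail_thrust_newtons(area_m2, reflectivity=1.0):
    # reflected light transfers up to twice the photon momentum: P = 2*I/c
    pressure = (1 + reflectivity) * INTENSITY / C
    return pressure * area_m2

# a square sail one mile (1609 m) on a side:
area = 1609.0 ** 2
print(sail_thrust_newtons(area))  # ~23.5 newtons on ~2.6 million m^2
```

A few tens of newtons on a multi-ton craft is a feeble push, which is why the sail must coast for years, gaining speed continuously, before leaving the solar system.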
Although a nuclear fission reactor does not generate enough power to drive a starship, a series of exploding atomic bombs could generate enough power. One proposed starship, Orion, would have weighed 8 million tons, with a diameter of 400 meters. It would have been powered by 1,000 hydrogen bombs. (This also would have been a good way to get rid of atomic bombs meant only for warfare.) Unfortunately, the Nuclear Test Ban Treaty in 1963 meant the scientists couldn’t test Orion. So the project was set aside.
A ramjet engine scoops in air in the front, mixes it with fuel, which then ignites and creates thrust. In 1960, Robert Bussard had the idea of scooping not air but hydrogen gas, which is everywhere in outer space. The hydrogen gas would be squeezed and heated by electric and magnetic fields until the hydrogen fused into helium, releasing enormous amounts of energy via nuclear fusion. With an inexhaustible supply of hydrogen in space, the ramjet fusion engine could conceivably run forever, notes Kaku.
Bussard calculated that a 1,000-ton ramjet fusion engine could reach 77 percent of the speed of light after one year. This would allow it to reach the Andromeda galaxy, which is 2,000,000 light-years away, in just 23 years as measured by the astronauts on the starship. (We know from Einstein’s theory of relativity that time slows down significantly for those traveling at such a high percentage of the speed of light. But meanwhile, on earth, millions of years will have passed.)
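Kaku’s 23-year figure implicitly assumes the ship keeps accelerating for the whole trip; merely cruising at a fixed 0.77c would still take over a million onboard years. For constant acceleration a over distance d, the standard relativistic-rocket result for onboard (proper) time is tau = (c/a) * acosh(1 + a*d/c^2). A sketch at an assumed 1 g of acceleration (my assumption, not the book’s):

```python
import math

C = 2.998e8             # speed of light, m/s
LY = 9.461e15           # meters per light-year
G_ACCEL = 9.81          # assume a comfortable 1 g, sustained indefinitely

def ship_years(distance_ly, a=G_ACCEL):
    # onboard time, accelerating the entire way:
    #   tau = (c/a) * acosh(1 + a*d/c^2)
    d = distance_ly * LY
    tau_s = (C / a) * math.acosh(1 + a * d / C**2)
    return tau_s / (365.25 * 24 * 3600)

print(round(ship_years(2_000_000)))  # Andromeda: ~15 onboard years
```

Accelerating for the first half and decelerating for the second roughly doubles this, to about 28 onboard years, in the same ballpark as the 23 years Kaku cites.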
Note that there are still engineering questions about the ramjet fusion engine. For instance, the scoop might have to be many miles wide, but that might cause drag effects from particles in space. Once the engineering challenges are solved, the ramjet fusion rocket will definitely be on the short list, says Kaku.
Another possibility is antimatter rocket ships. If antimatter could be produced cheaply enough, or found in space, then it could be the ideal fuel. Gerald Smith of Pennsylvania State University estimates that 4 milligrams of antimatter could take us to Mars, while 100 grams could take us to a nearby star.
Nanoships, tiny starships, might be sent by the thousands to explore outer space, including eventually other stars. These nanoships might become cheap enough to produce and to fuel. They might even be self-replicating.
Millions of nanoships could gather intelligence like a “swarm” does. For instance, a single ant is super simple. But a colony of ants can create a complex ant hill. A similar concept is the “smart dust” considered by the Pentagon. Billions of particles, each a sensor, could be used to gather a great deal of information.
Another advantage of nanoships is that we already know how to accelerate particles to near the speed of light. Moreover, scientists may be able to create one or a few self-replicating nanoprobes. Researchers have already looked at a robot that could make a factory on the surface of the moon and then produce virtually unlimited copies of itself.
FUTURE OF HUMANITY
All the technological revolutions described here are leading to a single point: the creation of a planetary civilization. This transition is perhaps the greatest in human history. In fact, the people living today are the most important ever to walk the surface of the planet, since they will determine whether we attain this goal or descend into chaos. Perhaps 5,000 generations of humans have walked the surface of the earth since we first emerged from Africa about 100,000 years ago, and of them, the ones living in this century will ultimately determine our fate. (pages 378-379)
In 1964, Russian astrophysicist Nikolai Kardashev was interested in probing outer space for signals sent from advanced civilizations. So he proposed three types of civilization:
- A Type I civilization is planetary, consuming the sliver of sunlight that falls on their planet (about 10^17 watts).
- A Type II civilization is stellar, consuming all the energy that their sun emits (about 10^27 watts).
- A Type III civilization is galactic, consuming the energy of billions of stars (about 10^37 watts).
The advantage of this classification is that we can quantify the power of each civilization rather than make vague and wild generalizations. Since we know the power output of these celestial objects, we can put specific numerical constraints on each of them as we scan the skies. (page 381)
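The quantification Kaku mentions can be made explicit with Sagan’s interpolation formula for fractional Kardashev types. Note that this calibration puts Type I at 10^16 W, a slightly different round number than the 10^17 W above, and the 2×10^13 W figure for current world power use is an outside estimate, not from the book:

```python
import math

# Sagan's interpolation for a fractional Kardashev type:
#   K = (log10(P) - 6) / 10, with P in watts
# (so P = 10^16 W gives K = 1.0, P = 10^26 W gives K = 2.0, etc.)
def kardashev_type(power_watts):
    return (math.log10(power_watts) - 6) / 10

# Humanity's current power use, roughly 2e13 W (an outside estimate):
print(round(kardashev_type(2e13), 2))  # -> 0.73, i.e. "Type 0.7"
```

Because the scale is logarithmic, each full type is ten billion times more powerful than the last, which is why decades of growth move us only a few hundredths of a type.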
Carl Sagan has calculated that we are a Type 0.7 civilization, not quite Type I yet. There are signs, says Kaku, that humanity will reach Type I in a matter of decades.
- The internet allows a person to connect with virtually anyone else on the planet effortlessly.
- Many families around the world have middle-class ambitions: a suburban house and two cars.
- The criterion for being a superpower is not weapons, but economic strength.
- Entertainers increasingly consider the global appeal of their products.
- People are becoming bicultural, using English and international customs when dealing with foreigners, but using their local language or customs otherwise.
- The news is becoming planetary.
- Soccer and the Olympics are emerging to dominate planetary sports.
- The environment is debated on a planetary scale. People realize they must work together to control global warming and pollution.
- Tourism is one of the fastest-growing industries on the planet.
- War has rarely occurred between two democracies. A vibrant press, opposition parties, and a solid middle class tend to ensure that.
- Diseases will be controlled on a planetary basis.
A Type II civilization means we can avoid ice ages, deflect meteors and comets, and even move to another star system if our sun goes supernova. Or we may be able to keep the sun from exploding. (Or we might be able to change the orbit of our planet.) Moreover, one way we could capture all the energy of the sun is to have a giant sphere around it – a Dyson sphere. Also, we probably will have colonized not just the entire solar system, but nearby stars.
By the time we become a Type III civilization, we will have explored most of the galaxy. We may have done this using self-replicating robot probes. Or we may have mastered Planck energy (10^19 billion electron volts). At this energy, space-time itself becomes unstable. The fabric of space-time will tear, perhaps creating tiny portals to other universes or to other points in space-time. By compressing space or passing through wormholes, we may gain the ability to take shortcuts through space and time. As a result, a Type III civilization might be able to colonize the entire galaxy.
It’s possible that a more advanced civilization has already visited or detected us. For instance, they may have used tiny self-replicating probes that we haven’t noticed yet. It’s also possible that, in the future, we’ll come across civilizations that are less advanced, or that destroyed themselves before making the transition from Type 0 to Type I.
Kaku writes that many people are not aware of the historic transition humanity is now making. But this could change if we discover evidence of intelligent life somewhere in outer space. Then we would consider our level of technological evolution relative to theirs.
Consider the SETI Institute. This is from their website (www.seti.org):
SETI, the Search for Extraterrestrial Intelligence, is an exploratory science that seeks evidence of life in the universe by looking for some signature of its technology.
Our current understanding of life’s origin on Earth suggests that given a suitable environment and sufficient time, life will develop on other planets. Whether evolution will give rise to intelligent, technological civilizations is open to speculation. However, such a civilization could be detected across interstellar distances, and may actually offer our best opportunity for discovering extraterrestrial life in the near future.
Finding evidence of other technological civilizations however, requires significant effort. Currently, the Center for SETI Research develops signal-processing technology and uses it to search for signals from advanced technological civilizations in our galaxy.
Work at the Center is divided into two areas: Research and Development (R&D) and Projects. R&D efforts include the development of new signal processing algorithms, new search technology, and new SETI search strategies that are then incorporated into specific observing Projects. The algorithms and technology developed in the lab are first field-tested and then implemented during observing. The observing results are used to guide the development of new hardware, software, and observing facilities. The improved SETI observing Projects in turn provide new ideas for Research and Development. This cycle leads to continuing progress and diversification in our ability to search for extraterrestrial signals.
Carl Sagan has introduced another method – based on information processing capability – to measure how advanced a civilization is. A Type A civilization only has the spoken word, while a Type Z civilization is the most advanced possible. If we combine Kardashev’s classification system (based on energy) with Sagan’s (based on information), then we would say that our civilization at present is Type 0.7 H.
BOOLE MICROCAP FUND
An equal weighted group of micro caps generally far outperforms an equal weighted (or cap-weighted) group of larger stocks over time. See the historical chart here: http://boolefund.com/best-performers-microcap-stocks/
This outperformance increases significantly by focusing on cheap micro caps. Performance can be further boosted by isolating cheap microcap companies that show improving fundamentals. We rank microcap stocks based on these and similar criteria.
There are roughly 10-20 positions in the portfolio. The size of each position is determined by its rank. Typically the largest position is 15-20% (at cost), while the average position is 8-10% (at cost). Positions are held for 3 to 5 years unless a stock approaches intrinsic value sooner or an error has been discovered.
The mission of the Boole Fund is to outperform the S&P 500 Index by at least 5% per year (net of fees) over 5-year periods. We also aim to outpace the Russell Microcap Index by at least 2% per year (net). The Boole Fund has low fees.
If you are interested in finding out more, please e-mail me or leave a comment.
My e-mail: email@example.com
Disclosures: Past performance is not a guarantee or a reliable indicator of future results. All investments contain risk and may lose value. This material is distributed for informational purposes only. Forecasts, estimates, and certain information contained herein should not be considered as investment advice or a recommendation of any particular security, strategy or investment product. Information contained herein has been obtained from sources believed to be reliable, but not guaranteed. No part of this article may be reproduced in any form, or referred to in any other publication, without express written permission of Boole Capital, LLC.