Kindle Notes & Highlights
even if by some miracle of divine intervention your forecast turns out to be correct, you can only make money from it if (and only if) it is different from the consensus. This adds a whole new dimension of complexity to the problem.
As Ben Graham said, “Forecasting security prices is not properly a part of security analysis,” but that doesn’t stop the analysts from coming up with daft guesses as to the future price of a stock. On average these targets are about 25 percent higher than the current price. However, they are worthless as forecasts.
We would all be better off if we took Keynes’ suggested response when asked about the future, “We simply do not know.”
If investors want such meaningless information, then someone will provide it to them.
experts seem to use a variety of excuses for forecast failure that allow them to avoid admitting they can’t forecast.
When studying experts’ views on a wide range of world political events over a decade, he found that, across the vast array of predictions, experts who reported they had 80 percent or more confidence in their predictions were actually correct only around 45 percent of the time.
The most common excuses were:
1. The “If only” defense—If only the Federal Reserve had raised rates, then the prediction would have been true. Effectively, the experts claim that they would have been correct if only their advice had been followed.
2. The “ceteris paribus” defense—Something outside of the model of analysis occurred, which invalidated the forecast; therefore it isn’t my fault.
3. The “I was almost right” defense—Although the predicted outcome didn’t occur, it almost did.
4. The “It just hasn’t happened yet” defense—I wasn’t wrong, it just hasn’t occurred yet. This is my personal …
As much as we like to poke fun at faulty predictions, we can’t function without them. Even if we disagree with, say, the analysts’ consensus on Cisco, that consensus gives us a basis that helps us to form our own judgments about whether it is overvalued or undervalued. Without forecasts, the market would no longer be grounded to anything.
When given a number we tend to cling to it, even subconsciously—a trait known as anchoring.
legal experts were influenced by irrelevant anchors when setting jail sentences, even when the experts were fully aware of the irrelevance of the input.
“Analysis should be penetrating not prophetic.”
“I’d prefer to be approximately right rather than precisely wrong.”
The whole investment industry is obsessed with learning more and more about less and less, until we know absolutely everything about nothing.
As Daniel J. Boorstin opined, “The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge.”
So for the computer, more information truly was better.
Could anything be done to help doctors look at the right things? The researchers came up with the idea of using laminated cards with various probabilities marked against diagnostic information.
Decision-making seemed to have improved regardless of the use of the tool.
In fact, the doctors had managed to assimilate the correct cues. That is to say, by showing them the correct items to use for diagnosis, the doctors’ emphasis switched from pseudodiagnostic information to truly informative elements. They started looking at the right things!
The power of simple checklists should not be underestimated.
It is far better to focus on what really matters, rather than succumbing to the siren call of Wall Street’s many noise peddlers. We would be far better off analyzing the five things we really need to know about an investment, rather than trying to know absolutely everything about everything concerned with the investment.
too much information leads us to feel overconfident in the extreme, but it does little to aid us.
we actually find even useless information soothing and we process it mindlessly.
psychologists have explored the role of “placebic” information in people’s behavior. Placebic information is...
when people see information in a format with which they are familiar, they will unquestioningly process it.
Sometimes his idea of value appears plausible and justified by business developments and prospects as you know them. Often, on the other hand, Mr. Market lets his enthusiasm or his fears run away with him, and the value he proposes seems to you little short of silly.
John Maynard Keynes pointed out the irony of the situation: “It is largely the fluctuations which throw up bargains and the uncertainty due to the fluctuations which prevents other people from taking advantage of them.”
Turning off the bubblevision is a great step towards preventing yourself from becoming a slave to the market.
we are too busy looking for information that confirms our hypothesis.
This behavioral pitfall of looking for confirming rather than disconfirming evidence is in direct violation of the principles outlined by Karl Popper, the philosopher of science. He argued that the only way to test a hypothesis was to look for all the information that disagreed with it—a process known as falsification.
Not only do we look for information that agrees with us, but we tend to see all information as supporting our hypothesis.
In effect each participant’s views polarized, becoming much more extreme than before the experiment.
Time and again, psychologists have found that confidence and biased assimilation perform a strange tango. It appears the more sure people were that they had the correct view, the more they distorted new evidence to suit their existing preference, which in turn made them even more confident.
Bayes’ Theorem, a relatively simple formula which shows how the probability that a theory is true is affected by a new piece of evidence.
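For reference, Bayes’ theorem in its standard form (the H and E notation here is mine, not the book’s, with H standing for the theory and E for the new piece of evidence):

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

In words: the updated (posterior) probability of the theory is the prior probability, reweighted by how likely the new evidence would be if the theory were true, relative to how likely that evidence is overall. The conservatism described in these passages amounts to giving the new evidence less weight than this benchmark implies.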
As the researcher concluded, the psychologists may frequently have formed stereotype conclusions rather firmly from their first fragmentary information and then been reluctant to change their conclusions as they received new information. Or put another way, the psychologists made up their minds early on, and then refused to change them. This is a great example of the interaction between conservatism and confirmatory bias from the last chapter.
people underreact to things that should make them change their minds.
people tend to underreact in unstable environments with precise signals (turning points), but overreact in stable environments with noisy signals (trending markets).
So why are analysts and the rest of us so reluctant to alter views? What is the root cause of this conservatism? The answer seems to me to lie in the “sunk cost” fallacy. This is a tendency to allow past unrecoverable expenses to inform current decisions. Brutally put, we tend to hang onto our views too long simply because we spent time and effort in coming up with those views in the first place.
We should all be encouraged to revisit our investment cases and approach them with a blank slate.
As Keynes said, “When the facts change, I change my mind. What do you do, sir?”
Stories essentially govern the way we think. We will abandon evidence in favor of a good story. Taleb calls this tendency to be suckered by stories the narrative fallacy. As he writes in The Black Swan, “The fallacy is associated with our vulnerability to over-interpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world.”
the base rate information was essentially ignored in favor of the anecdotal story.
in the United States the average IPO has underperformed the market by 21 percent per annum in the three years after its listing (covering the period 1980-2007).
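To put that per-annum figure in perspective, a rough back-of-the-envelope calculation (assuming, purely for illustration, that the relative shortfall compounds geometrically over the three years):

(1 - 0.21)^3 = 0.79^3 \approx 0.49

On that assumption, three years after listing the average IPO would be worth only about half of what the same money invested in the broad market would have been worth.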
Stories usually have an emotional content, hence they appeal to the X-system—the quick and dirty way of thinking. If you want to use the more logical system of thought (the C-system), then you must focus on the facts. Generally, facts are emotionally cold, and thus will pass from the X-system to the C-system.
At GMO we define a bubble as a (real) price movement that is at least two standard deviations from trend.
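As a rough illustration of that definition, here is a minimal sketch in Python. It is not GMO’s actual methodology: the log-linear trend fit, the function name flag_bubble, and the reliance on NumPy are all my assumptions.

import numpy as np

def flag_bubble(real_prices, num_std=2.0):
    # Fit a log-linear (exponential) trend to a real (inflation-adjusted)
    # price series and flag observations lying more than num_std standard
    # deviations above that trend. A crude sketch of the "two standard
    # deviations from trend" idea, not a faithful reproduction of GMO's work.
    log_p = np.log(np.asarray(real_prices, dtype=float))
    t = np.arange(len(log_p))
    slope, intercept = np.polyfit(t, log_p, 1)  # least-squares trend in log space
    residuals = log_p - (intercept + slope * t)
    return residuals > num_std * residuals.std()

Calling flag_bubble(real_index_levels) would then return a boolean array marking the observations that look “bubble-like” under this crude definition.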
Taleb defined a black swan as a highly improbable event with three principal characteristics:
1. It is unpredictable.
2. It has massive impact.
3. Ex-post, explanations are concocted that make the event appear less random and more predictable than it was.
Predictable surprises also have three defining characteristics:
1. At least some people are aware of the problem.
2. The problem gets worse over time.
3. Eventually the problem explodes into a crisis, much to the shock of most.
What prevents us from seeing these predictable surprises? At least five major psychological hurdles hamper us. Some of these barriers we have already encountered. Firstly, there is our old friend, over-optimism.
In addition to our over-optimism, we suffer from the illusion of control—the belief that we can influence the outcome of uncontrollable events.
as we saw earlier, simply providing a number can make people feel safer—the illusion of safety.
We have also encountered the third hurdle to spotting predictable surprises. It is self-serving bias—the innate desire to interpret information and act in ways that are supportive of our own self-interest.