The Little Book of Behavioral Investing: How not to be your own worst enemy (Little Book, Big Profits)
even if by some miracle of divine intervention your forecast turns out to be correct, you can only make money from it if (and only if) it is different from the consensus. This adds a whole new dimension of complexity to the problem.
As Ben Graham said, “Forecasting security prices is not properly a part of security analysis,” but that doesn’t stop the analysts from coming up with daft guesses as to the future price of a stock. On average these targets are about 25 percent higher than the current price. However, they are worthless as forecasts.
We would all be better off if we took Keynes’ suggested response when asked about the future, “We simply do not know.”
If investors want such meaningless information, then someone will provide it to them.
experts seem to use a variety of excuses for forecast failure that allow them to avoid admitting they can’t forecast.
When studying experts’ views on a wide range of world political events over a decade, he found that, across the vast array of predictions, experts who reported they had 80 percent or more confidence in their predictions were actually correct only around 45 percent of the time.
The most common excuses were:
1. The “If only” defense—If only the Federal Reserve had raised rates, then the prediction would have been true. Effectively, the experts claim that they would have been correct if only their advice had been followed.
2. The “ceteris paribus” defense—Something outside of the model of analysis occurred, which invalidated the forecast; therefore it isn’t my fault.
3. The “I was almost right” defense—Although the predicted outcome didn’t occur, it almost did.
4. The “It just hasn’t happened yet” defense—I wasn’t wrong, it just hasn’t occurred yet. This is my personal ...
As much as we like to poke fun at faulty predictions, we can’t function without them. Even if we disagree with, say, the analysts’ consensus on Cisco, that consensus gives us a basis that helps us to form our own judgments about whether it is overvalued or undervalued. Without forecasts, the market would no longer be grounded to anything.
When given a number we tend to cling to it, even subconsciously—a trait known as anchoring.
legal experts were influenced by irrelevant anchors when setting jail sentences, even when the experts were fully aware of the irrelevance of the input.
“Analysis should be penetrating not prophetic.”
“I’d prefer to be approximately right rather than precisely wrong.”
The whole investment industry is obsessed with learning more and more about less and less, until we know absolutely everything about nothing.
As Daniel J. Boorstin opined, “The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge.”
So for the computer, more information truly was better.
Could anything be done to help doctors look at the right things? The researchers came up with the idea of using laminated cards with various probabilities marked against diagnostic information.
Decision-making seemed to have improved regardless of the use of the tool.
In fact, the doctors had managed to assimilate the correct cues. That is to say, by showing them the correct items to use for diagnosis, the doctors’ emphasis switched from pseudodiagnostic information to truly informative elements. They started looking at the right things!
The power of simple checklists should not be underestimated.
It is far better to focus on what really matters, rather than succumbing to the siren call of Wall Street’s many noise peddlers. We would be far better off analyzing the five things we really need to know about an investment, rather than trying to know absolutely everything about everything concerned with the investment.
too much information leads us to feel overconfident in the extreme, but it does little to aid us.
we actually find even useless information soothing and we process it mindlessly.
psychologists have explored the role of “placebic” information in people’s behavior.31 Placebic information is...
when people see information in a format with which they are familiar, they will unquestioningly process it.
Sometimes his idea of value appears plausible and justified by business developments and prospects as you know them. Often, on the other hand, Mr. Market lets his enthusiasm or his fears run away with him, and the value he proposes seems to you little short of silly.
John Maynard Keynes pointed out the irony of the situation: “It is largely the fluctuations which throw up bargains and the uncertainty due to the fluctuations which prevents other people from taking advantage of them.”
Turning off the bubblevision is a great step towards preventing yourself from becoming a slave to the market.
we are too busy looking for information that confirms our hypothesis.
This behavioral pitfall of looking for confirming rather than disconfirming evidence is in direct violation of the principles outlined by Karl Popper, the philosopher of science. He argued that the only way to test a hypothesis was to look for all the information that disagreed with it—a process known as falsification.
Not only do we look for information that agrees with us, but we tend to see all information as supporting our hypothesis.
In effect each participant’s views polarized, becoming much more extreme than before the experiment.
Time and again, psychologists have found that confidence and biased assimilation perform a strange tango. It appears the more sure people were that they had the correct view, the more they distorted new evidence to suit their existing preference, which in turn made them even more confident.
Bayes’ Theorem, a relatively simple formula which shows how the probability that a theory is true is affected by a new piece of evidence.
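The mechanics behind that formula fit in a few lines. A minimal sketch in Python, assuming a single hypothesis H and one piece of evidence E (the function name and the example probabilities are illustrative, not from the book):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H|E) from a prior P(H) and the likelihoods
    P(E|H) and P(E|~H), via Bayes' theorem:
    P(H|E) = P(E|H)*P(H) / (P(E|H)*P(H) + P(E|~H)*(1 - P(H)))"""
    numerator = p_e_given_h * prior
    evidence = numerator + p_e_given_not_h * (1 - prior)
    return numerator / evidence

# Evidence four times likelier under the theory than against it
# moves an even-odds prior up to 80 percent:
print(bayes_update(0.5, 0.8, 0.2))  # 0.8
```

Conservatism, in these terms, amounts to updating less than the formula says you should, as if the new evidence were less diagnostic than it really is.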
As the researcher concluded, the psychologists may frequently have formed stereotype conclusions rather firmly from their first fragmentary information and then been reluctant to change their conclusions as they received new information. Or put another way, the psychologists made up their minds early on, and then refused to change them. This is a great example of the interaction between conservatism and confirmatory bias from the last chapter.
people underreact to things that should make them change their minds.
people tend to underreact in unstable environments with precise signals (turning points), but overreact to stable environments with noisy signals (trending markets).
So why are analysts and the rest of us so reluctant to alter our views? What is the root cause of this conservatism? The answer seems to me to lie in the “sunk cost” fallacy. This is a tendency to allow past unrecoverable expenses to inform current decisions. Brutally put, we tend to hang onto our views too long simply because we spent time and effort in coming up with those views in the first place.
We should all be encouraged to revisit our investment cases and approach them with a blank slate.
As Keynes said, “When the facts change, I change my mind. What do you do, sir?”
Stories essentially govern the way we think. We will abandon evidence in favour of a good story. Taleb calls this tendency to be suckered by stories the narrative fallacy. As he writes in The Black Swan, “The fallacy is associated with our vulnerability to over-interpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world.”
the base rate information was essentially ignored in favor of the anecdotal story.
in the United States the average IPO has underperformed the market by 21 percent per annum in the three years after its listing (covering the period 1980-2007).
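Compounded over the three-year window, a shortfall of that size is dramatic. A quick check, under the simplifying assumption that the 21 percent per-annum relative shortfall compounds multiplicatively against the market:

```python
annual_shortfall = 0.21  # per-annum underperformance vs the market, per the text
years = 3

# Wealth relative to the market after three years of compounding:
relative_wealth = (1 - annual_shortfall) ** years
print(f"{relative_wealth:.3f}")  # 0.493 -- roughly half the market's result
```

So under that assumption, the average IPO buyer ended the three years with only about half of what a simple market holding would have delivered.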
Stories usually have an emotional content, hence they appeal to the X-system—the quick and dirty way of thinking. If you want to use the more logical system of thought (the C-system), then you must focus on the facts. Generally, facts are emotionally cold, and thus will pass from the X-system to the C-system.
At GMO we define a bubble as a (real) price movement that is at least two standard deviations from trend.
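That definition is mechanical enough to sketch in code. A minimal illustration, assuming a linear trend fitted by least squares (GMO's actual trend model is not specified here, and the helper name is hypothetical):

```python
import statistics

def two_sigma_outliers(prices, k=2.0):
    """Flag indices where a (real) price series sits more than k
    standard deviations away from its fitted linear trend."""
    n = len(prices)
    x_mean = (n - 1) / 2
    y_mean = sum(prices) / n
    # Ordinary least-squares slope and intercept of price on time:
    sxx = sum((x - x_mean) ** 2 for x in range(n))
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(prices))
    slope = sxy / sxx
    intercept = y_mean - slope * x_mean
    # Deviations from trend, and their population standard deviation:
    residuals = [y - (intercept + slope * x) for x, y in enumerate(prices)]
    sd = statistics.pstdev(residuals)
    return [i for i, r in enumerate(residuals) if abs(r) > k * sd]

prices = [10.0] * 20
prices[10] = 100.0  # one spike far above trend
print(two_sigma_outliers(prices))  # [10]
```

On real data one would typically detrend log real prices over decades; this sketch only shows the two-standard-deviation mechanics.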
Taleb defined a black swan as a highly improbable event with three principal characteristics:
1. It is unpredictable.
2. It has massive impact.
3. Ex-post, explanations are concocted that make the event appear less random and more predictable than it was.
Predictable surprises also have three defining characteristics:
1. At least some people are aware of the problem.
2. The problem gets worse over time.
3. Eventually the problem explodes into a crisis, much to the shock of most.
What prevents us from seeing these predictable surprises? At least five major psychological hurdles hamper us. Some of these barriers we have already encountered. Firstly, there is our old friend, over-optimism.
In addition to our over-optimism, we suffer from the illusion of control—the belief that we can influence the outcome of uncontrollable events.
as we saw earlier, simply providing a number can make people feel safer—the illusion of safety.
We have also encountered the third hurdle to spotting predictable surprises. It is self-serving bias—the innate desire to interpret information and act in ways that are supportive of our own self-interest.