Kindle Notes & Highlights
One particularly powerful tactic for becoming more calibrated is to pretend to bet money.
You express your uncertainty in a way that indicates you have less uncertainty than you really have.
The only desirable answer is to set your range just right, so that you would be indifferent between options A and B.
Then de Finetti proposed that you have another party decide which side of that bet they want to take. If they think you set the price too high, they would sell you such a contract at that price. If they think you set the price too low, they would buy one from you instead. De Finetti pointed out that the best strategy for you would be one where you would be indifferent as to which position you had to take once you set the price. In other words, you would have to set the price where you saw no arbitrage opportunity. He referred to this as coherence in your price.
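As a minimal sketch of this in Python (the $1 payoff contract and the 60% probability below are hypothetical illustrations, not figures from the text): whichever side of the bet you are forced to take, its expected value to you is zero only when the price you quote equals the probability you actually assign to the event.

    # Coherent pricing sketch: a contract pays $1 if the event occurs, $0 otherwise.
    # "prob" is the probability you actually assign; "price" is what you quote.
    def expected_profit(price, prob, side):
        if side == "buyer":          # you are forced to buy at your own price
            return prob * 1.0 - price
        else:                        # you are forced to sell at your own price
            return price - prob * 1.0

    prob = 0.6                       # hypothetical: you think the event is 60% likely
    for price in (0.40, 0.60, 0.80):
        b = expected_profit(price, prob, "buyer")
        s = expected_profit(price, prob, "seller")
        print(f"price {price:.2f}: buy side {b:+.2f}, sell side {s:+.2f}")
    # Only at a price of 0.60 are both sides worth zero to you -- the "coherent" price.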
Another calibration training method involves asking people to identify potential problems for each of their estimates.
It is easy to get lost in how much you don’t know about a problem and forget that there are still some things you do know.
But, again, the lack of having an exact number is not the same as knowing nothing.
The Roman poet Horace seemed to intuitively understand over 2,000 years ago that even when there is a lot of uncertainty, you still have some basis for a range. He said, “There is a measure in everything. There are fixed limits beyond which and short of which right cannot find a resting place.”
Real Risk Analysis: The Monte Carlo
At the suggestion of Metropolis, Ulam named this computer-based method of generating random scenarios after Monte Carlo, a famous gambling hotspot, in honor of Ulam’s uncle, a gambler.
To resolve this problem, Monte Carlo simulations use a brute-force approach made possible with computers. We randomly pick a large number of exact values (thousands of them) according to the ranges we prescribed and compute a large number of exact scenarios from those values.
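A minimal sketch of the approach in Python (the variables, the ranges, and the use of simple uniform sampling are illustrative assumptions, not the book's model): each pass through the loop picks one exact value per uncertain variable and computes one exact scenario, and thousands of passes trace out the distribution of possible outcomes.

    import random

    N = 10_000                       # number of random scenarios

    savings = []
    for _ in range(N):
        # Hypothetical ranges; for simplicity each is sampled uniformly here.
        units_saved = random.uniform(10_000, 30_000)     # e.g. labor hours saved per year
        value_per_unit = random.uniform(15, 40)          # dollars per hour
        implementation = random.uniform(200_000, 500_000)
        savings.append(units_saved * value_per_unit - implementation)

    savings.sort()
    print("median outcome:", round(savings[N // 2]))
    print("chance of a net loss:", sum(s < 0 for s in savings) / N)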
Exhibit 6.1 The Normal Distribution
With the normal distribution, I will briefly mention a related concept called the standard deviation.
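As a short Python sketch of what the exhibit and the standard deviation are getting at (the 90% confidence interval of 15 to 40 is a made-up example): on a normal distribution, about 68% of values fall within one standard deviation of the mean, and a 90% confidence interval spans roughly 3.29 standard deviations, which lets a subjective 90% CI be converted into a mean and a standard deviation.

    import random

    lower, upper = 15, 40                 # hypothetical 90% confidence interval
    mean = (lower + upper) / 2
    sd = (upper - lower) / 3.29           # a 90% CI on a normal spans ~3.29 sd

    draws = [random.gauss(mean, sd) for _ in range(100_000)]
    within_1sd = sum(abs(x - mean) <= sd for x in draws) / len(draws)
    within_ci = sum(lower <= x <= upper for x in draws) / len(draws)
    print(f"within one standard deviation: {within_1sd:.1%}")   # ~68%
    print(f"within the stated 90% CI:      {within_ci:.1%}")    # ~90%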
This is also called a “Bernoulli distribution,” after the seventeenth-century mathematician Jacob Bernoulli, who developed several early concepts about the theory of probability.
Stanford University professor who developed a tool he calls Insight.xla.
probe missions, NASA has been applying both a soft “risk score” and more sophisticated Monte Carlo simulations to assess the risks of cost and schedule overruns and mission failures. The cost and schedule estimates from Monte Carlo simulations, on average, have less than half the error of the traditional accounting estimates.8
The McNamara Fallacy “The first step is to measure whatever can be easily measured. This is okay as far as it goes. The second step is to disregard that which can’t easily be measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can’t be measured easily isn’t important. This is blindness. The fourth step is to say that what can’t easily be measured really doesn’t exist. This is suicide.” —Charles Handy, The Empty Raincoat (1995), describing the Vietnam-era measurement policies of Secretary of Defense Robert McNamara
The solution to the first of these three has existed since the 1950s in a field of mathematics called “decision theory,” an offshoot of game theory.
The Expected Opportunity Loss (EOL)
Expected Opportunity Loss if Approved: $5m × 40% = $2m
Expected Opportunity Loss if Rejected: $40m × 60% = $24m
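The same arithmetic as a minimal Python sketch (the $5m downside, the $40m foregone benefit, and the 60% chance of success are the figures quoted above): the expected opportunity loss of a choice is the chance of being wrong times the cost of being wrong.

    p_success = 0.60                          # chance the investment pays off
    loss_if_approved_and_fails = 5_000_000    # money lost if we approve and it fails
    gain_foregone_if_rejected = 40_000_000    # benefit lost if we reject and it would have succeeded

    # Expected Opportunity Loss = chance of being wrong x cost of being wrong
    eol_approve = (1 - p_success) * loss_if_approved_and_fails   # 0.40 x $5m = $2m
    eol_reject = p_success * gain_foregone_if_rejected           # 0.60 x $40m = $24m

    print(f"EOL if approved: ${eol_approve:,.0f}")
    print(f"EOL if rejected: ${eol_reject:,.0f}")
    # In this framework, the EOL of the alternative you would choose (the smaller
    # one, $2m here) is the expected value of eliminating the uncertainty.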
Computing an “Incremental Probability”
The “Normdist()” function is one of many functions in Excel that are used to compute probabilities in a distribution.
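A minimal sketch in Python, using scipy.stats.norm.cdf as a rough analog of Excel's Normdist() (the mean, standard deviation, and increment bounds are hypothetical): the probability that a normally distributed quantity falls within a given increment is the cumulative probability at the top of the increment minus the cumulative probability at the bottom.

    from scipy.stats import norm

    mean, sd = 27.5, 7.6          # hypothetical normal distribution
    low, high = 20, 25            # the increment of interest

    # Cumulative probability below each bound, like Excel's NORMDIST(x, mean, sd, TRUE)
    p_below_high = norm.cdf(high, loc=mean, scale=sd)
    p_below_low = norm.cdf(low, loc=mean, scale=sd)

    incremental_probability = p_below_high - p_below_low
    print(f"P({low} <= x <= {high}) = {incremental_probability:.1%}")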
Fortunately, there are alternatives. A leading software company specializing in pricing for the B2B market is Zilliant, a client of mine based in Austin, Texas.
The Measurement Inversion
In a decision model with a large number of uncertain variables, the economic value of measuring a variable is usually inversely proportional to how much measurement attention it typically gets.
The solution in this case is simple: Don’t let the champions of an investment be the only ones responsible for measuring their own performance. Executives who approve and evaluate a manager’s initiatives need their own source of measurements.
First, we know that the early part of any measurement usually is the high-value part. Don’t attempt a massive study to measure something if you have a lot of uncertainty about it now.
Lessons from Computing the Value of Information
Value of Measurement Matters. If you don’t compute the value of measurements, you are probably measuring the wrong things, the wrong way.
Be Iterative. The highest-value measurement is the beginning of the measurement, so do it in bits and take stock after each iteration.
Daniel Fahrenheit’s mercury thermometer quantified what was previously considered the “quality” of temperature. These devices revealed not just a number but something fundamental about the nature of the universe the observers lived in. Each one was a keyhole through which some previously secret aspect of the world could be observed.
The question is “Compared to what?” Compared to the unaided human? Compared to no attempt at measurement at all? Keep the purpose of measurement in mind: uncertainty reduction, not necessarily uncertainty elimination.
Instruments deliberately don’t see some things.
Decompose It
Many measurements start by decomposing an uncertain variable into constituent parts to identify directly observable things that are easier to measure.
Decomposition effect: The phenomenon in which the decomposition itself sometimes provides such a reduction in uncertainty that further uncertainty reduction through new observations is not required.
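Reusing the Monte Carlo machinery sketched earlier, here is a minimal Python illustration of decomposition (the fleet-fuel-cost example and every range in it are hypothetical): rather than estimating one wide range directly, you estimate ranges for parts you know more about and let the simulation combine them; sometimes the combined range is already narrow enough to support the decision without any new observation.

    import random

    def sample_ci(lower, upper):
        # Treat each estimate as a 90% CI on a normal distribution (~3.29 sd wide).
        return random.gauss((lower + upper) / 2, (upper - lower) / 3.29)

    N = 10_000
    totals = []
    for _ in range(N):
        trucks = sample_ci(30, 60)              # hypothetical fleet size
        miles_per_truck = sample_ci(40_000, 90_000)
        cost_per_mile = sample_ci(0.45, 0.75)   # dollars per mile
        totals.append(trucks * miles_per_truck * cost_per_mile)

    totals.sort()
    print("90% CI for annual fuel cost: "
          f"${totals[int(0.05 * N)]:,.0f} to ${totals[int(0.95 * N)]:,.0f}")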
The 80 or more major risk/return analyses I’ve done in the past 20 years consisted of a total of over 7,000 individual variables, or an average of almost 90 variables per model.
Internet searching is unproductive unless you are using the right search terms. It takes practice to use Internet searches effectively, but these tips should help.
“table,” “survey,” “control group,” “correlation,” and “standard deviation,” which would tend to appear in more substantive research. Also, terms like “university,” “PhD,” and “national study” tend to appear in more serious (less fluffy) research.
One example is how Amazon.com provides free gift wrapping in order to help track which books are purchased as gifts. At one point Amazon was not tracking the number of items sold as gifts; the company added the gift-wrapping feature to be able to track it.
Another example is how consumers are given coupons so retailers can see, among other things, what new...
Inexpensive personal sensors and apps for smart devices are available for many types of measu...
“A random selection of three people would have been better than a group of 300 chosen by Mr. Kinsey.” In another version of this quote, he
Selection bias: Even when attempting randomness in samples, we can get inadvertent nonrandomness.
Observer bias (or the Heisenberg and Hawthorne bias): Subatomic particles and humans have something in common. The act of observing them causes them both to change behavior. In 1927, the physicist Werner Heisenberg derived a formula showing that there is a limit to how much we can know about a particle’s position and velocity. When we observe particles, we have to interact with them (e.g., bounce light off them), causing their paths to change.
To their surprise, they found that worker productivity improved no matter how they changed the workplace.
The workers were simply responding to the knowledge of being observed; or perhaps, researchers hypothesized, management taking interest in them caused a positive reaction.
It is the mark of an educated mind to rest satisfied with the degree of precision which the nature of the subject admits and not to seek exactness where only an approximation is possible.
but most ranges were something like 1 to 5 grams.
This method is widely taught in basic statistics courses and can be used for computing errors for sample sizes as small as two.
William Sealy Gosset, a chemist and statistician at the Guinness brewery in Dublin, had a measurement problem. Gosset needed a way to measure which types of barley produced the best beer-brewing yields.
By 1908, he had developed a powerful new method he called the “t-statistic,” and he wanted to publish it—but Guinness would not have approved.
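A minimal sketch in Python of the kind of small-sample estimate the t-statistic supports (the five measurements are made-up values, and scipy's t distribution stands in for the printed tables): the interval is widened to reflect the extra uncertainty of estimating the standard deviation from only a few observations.

    from statistics import mean, stdev
    from scipy.stats import t

    samples = [1.8, 2.4, 2.1, 3.0, 2.6]      # hypothetical small sample (n = 5)
    n = len(samples)
    m = mean(samples)
    se = stdev(samples) / n ** 0.5           # standard error of the mean

    # t multiplier for a 90% confidence interval with n - 1 degrees of freedom
    t90 = t.ppf(0.95, df=n - 1)
    print(f"90% CI for the mean: {m - t90 * se:.2f} to {m + t90 * se:.2f}")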

