Kindle Notes & Highlights
Once we figure out that we care about an “intangible” like public image because it impacts specific things like advertising by customer referral, which affects sales, then we have begun to identify how to measure it.
A “thought experiment”: Imagine you are an alien scientist who can clone not just sheep or even people but entire organizations. Let’s say you were investigating a particular fast food chain and studying the effect of a particular intangible, say, “employee empowerment.” You create a pair of the same organization, calling one the “test” group and one the “control” group. Now imagine that you give the test group a little bit more “employee empowerment” while holding the amount in the control group constant. What do you imagine you would actually observe—in any way, directly or indirectly—that would change for the first organization?
The purpose of the measurement is often the key to defining what the measurement is really supposed to be. In the first chapter, I argued that all measurements of any interest to a manager must support at least one specific decision.
Business managers need to realize that some things seem intangible only because they just haven’t defined what they are talking about. Figure out what you mean and you are halfway to measuring it.
Most of these approaches to measurement are just variations on basic methods involving different types of sampling and experimental controls and, sometimes, choosing to focus on different types of questions.
You could engage in a formal office-wide census of this question, but it would be time-consuming and expensive and would probably give you more precision than you need. Suppose, instead, you just randomly pick five people.
Rule of Five: There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.
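Where the 93.75% comes from: each random draw has a 50% chance of landing below the population median, so the median escapes the sample’s range only when all five draws land on the same side of it, which happens with probability 2 × (1/2)^5 = 6.25%. A minimal Python sketch (the population here is invented purely for illustration) checks the rule by simulation:

```python
import random

# Analytic form of the Rule of Five:
# P(median between sample min and max)
#   = 1 - P(all five draws on the same side of the median)
#   = 1 - 2 * (1/2)**5 = 0.9375
analytic = 1 - 2 * 0.5 ** 5

# Empirical check against an arbitrary, invented population
# (e.g., commute times in minutes).
population = [random.lognormvariate(3, 0.5) for _ in range(100_000)]
true_median = sorted(population)[len(population) // 2]

trials = 100_000
hits = sum(
    min(s) <= true_median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)
print(f"analytic: {analytic:.4f}, simulated: {hits / trials:.4f}")
```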
The origin of the word “experiment”: “experiment” comes from the Latin ex-, meaning “of/from,” and periri, meaning “try/attempt.”
If you were attempting to show whether a particular initiative increased sales, skeptics respond: “But lots of factors affect sales. You’ll never know how much that initiative affected it.”
Four Useful Measurement Assumptions:
1. Your problem is not as unique as you think.
2. You have more data than you think.
3. You need less data than you think.
4. An adequate amount of new data is more accessible than you think.
Assume the information you need to answer the question is somewhere within your reach, and that if you just took the time to think about it, you might find it.
You need far less data than you think.
The first few observations usually offer the highest payback in uncertainty reduction.
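One way to see why: the width of a 90% confidence interval for a mean shrinks roughly like 1/√n, so the early observations do most of the work. A small sketch (the standard deviation is an assumed value, for illustration only):

```python
import math

# 90% CI for a mean ≈ mean ± 1.645 * sigma / sqrt(n);
# the width shrinks like 1/sqrt(n), so uncertainty falls fastest
# with the first few observations.
sigma = 10.0  # assumed population standard deviation (illustrative)
for n in [2, 5, 10, 30, 100, 1000]:
    width = 2 * 1.645 * sigma / math.sqrt(n)
    print(f"n={n:5d}  90% CI width ≈ {width:6.2f}")
```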
An innovative public school that teaches primarily through online, remote-learning methods that emphasize an individualized curriculum.
This online tool allows students to “raise hands,” ask questions by either voice or text chat, and interact with the teacher in the instructional session. Everything the teachers or students say or do online is recorded.
Randomly select recordings of sessions and particular slices of time, each a minute or two long, throughout a recorded session. For those randomly chosen time slices, they could sample what the teacher was saying and what the students were doing.
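A sketch of what that sampling could look like in code; the function name and parameters are invented for illustration, not taken from the book:

```python
import random

def sample_time_slices(session_length_min, n_slices=10,
                       slice_len_min=2.0, seed=None):
    """Pick random, minute-scale slices from a recorded session.
    Returns (start, end) pairs in minutes from the session start."""
    rng = random.Random(seed)
    starts = sorted(rng.uniform(0, session_length_min - slice_len_min)
                    for _ in range(n_slices))
    return [(round(s, 1), round(s + slice_len_min, 1)) for s in starts]

# e.g., ten 2-minute slices from a 90-minute recorded class
for start, end in sample_time_slices(90, n_slices=10, seed=42):
    print(f"review recording from minute {start} to minute {end}")
```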
The number of standing ovations.
Above all else, the intuitive experimenter, as the origin of the word “experiment” denotes, makes an attempt. It’s a habit.
What really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong.
The so-called Houston Miracle of the Texas school system in the 1990s: public schools were under a new set of performance metrics to hold educators accountable for results. It is now known that the net effect of this “miracle” was that schools were incentivized to find ways to drop low-achieving students from the rolls. This is hardly the outcome most taxpayers thought they were funding.
If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter.
Applied Information Economics.
Prior to making a measurement, we need to answer the following:
1. What is the decision this measurement is supposed to support?
2. What is the definition of the thing being measured in terms of observable consequences?
3. How, exactly, does this thing matter to the decision being asked?
4. How much do you know about it now (i.e., what is your current level of uncertainty)?
5. What is the value of additional information?
If a measurement matters at all, it is because it must have some conceivable effect on decisions and behavior.
Once the managers realized that many reports simply had no bearing on decisions, they understood that those reports must, therefore, have no value.
When I asked if they could identify a single decision that each report could conceivably affect, they found quite a few that had no effect on any decision. Likewise, the information value of those reports was zero.
Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities. For example: “There is a 60% chance this market will more than double in five years, a 30% chance it will grow at a slower rate, and a 10% chance the market will shrink in the same period.”
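Written down as data, that example is just probabilities assigned to a set of mutually exclusive possibilities that must sum to 1 (a minimal illustration, not from the book):

```python
# The example quoted above, as probabilities over mutually
# exclusive possibilities for the market five years out.
market_in_5_years = {
    "more than doubles": 0.60,
    "grows at a slower rate": 0.30,
    "shrinks": 0.10,
}

# A measurement of uncertainty must cover the possibilities exactly once.
assert abs(sum(market_in_5_years.values()) - 1.0) < 1e-9
```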
“What problem are you trying to solve with this measurement?”
They resolved that improved IT security means a reduction in the frequency and severity of a specific list of undesirable events.
Knowing what you know now about something actually has an important and often surprising impact on how you should measure it or even whether you should measure it.
Unfortunately, extensive studies have shown that very few people are naturally calibrated estimators.
Two Extremes of Subjective Confidence
Overconfidence: When an individual routinely overstates knowledge and is correct less often than he or she expects. For example, when asked to make estimates with a 90% confidence interval, many fewer than 90% of the true answers fall within the estimated ranges.
Underconfidence: When an individual routinely understates knowledge and is correct much more often than he or she expects. For example, when asked to make estimates with a 90% confidence interval, many more than 90% of the true answers fall within the estimated ranges.
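A simple way to detect either extreme is to score a batch of 90% confidence intervals against the true answers: a hit rate well below 90% suggests overconfidence, well above suggests underconfidence. A minimal sketch with invented numbers:

```python
def hit_rate(estimates, truths):
    """Fraction of true values falling inside the stated 90% CIs.
    estimates: list of (lower, upper) bounds; truths: actual values."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(estimates, truths))
    return hits / len(truths)

# Ten trivia-style 90% CIs from one estimator (all numbers made up):
estimates = [(1900, 1920), (300, 600), (10, 50), (1000, 3000), (5, 15),
             (100, 200), (0.5, 2.0), (40, 60), (1200, 1800), (30, 90)]
truths = [1912, 550, 42, 2500, 9, 260, 1.6, 71, 1500, 55]

print(f"hit rate: {hit_rate(estimates, truths):.0%}")
# 80% here: noticeably under 90%, the signature of overconfidence
```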
Methods like the equivalent bet test help estimators give more realistic assessments of their uncertainty.
Ask people to identify pros and cons for the validity of each of their estimates. A pro is a reason why the estimate is reasonable; a con is a reason why it might be overconfident.
Look at each bound on the range as a separate “binary” question. A 90% CI means there is a 5% chance the true value could be greater than the upper bound and a 5% chance it could be less than the lower bound. This means that estimators must be 95% sure that the true value is less than the upper bound.
The “absurdity test” reframes the question from “What do I think this value could be?” to “What values do I know to be ridiculous?”
After a few calibration tests and practice with methods like listing pros and cons, using the equivalent bet, and anti-anchoring, estimators learn to fine-tune their “probability senses.”
But if you are allowed to model your uncertainty with ranges and probabilities, you do not have to state something you don’t know for a fact. If you are uncertain, your ranges and assigned probabilities should reflect that.
Lacking an exact number is not the same as knowing nothing.
The results are clear: The difference in accuracy is due entirely to calibration training, and the calibration training—even though it uses trivia questions—works for real-world predictions.
They had no real motivation to perform well. As I observed in my own workshops, those who did not expect their answers to be used in the subsequent real-world estimation tasks were almost always those who showed little or no improvement.
After a person has been calibrated, I have never heard them offer such challenges. Apparently, the hands-on experience of being forced to assign probabilities, and then seeing that this was a measurable skill in which they could see real improvements, addresses these concerns. Although this was not an objective I envisioned when I first started calibrating people, I came to learn how critical this process was in getting them to accept the entire concept of probabilistic analysis in decision making.
Risk reduction is the basis of computing the value of a measurement.
If a measurement matters to you at all, it is because it must inform some decision that is uncertain and has negative consequences if it turns out wrong.
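In the book’s terms, the expected opportunity loss (EOL) is the chance of being wrong times the cost of being wrong, and a measurement’s value is the EOL reduction it buys. A minimal sketch with invented numbers:

```python
# Expected opportunity loss (EOL) = chance of being wrong * cost of
# being wrong; the value of a measurement is the EOL it eliminates.
# All figures below are invented purely for illustration.
cost_if_wrong = 400_000     # loss if the decision turns out wrong
p_wrong_before = 0.40       # chance of being wrong with current knowledge
p_wrong_after = 0.10        # chance of being wrong after measuring

eol_before = p_wrong_before * cost_if_wrong    # $160,000
eol_after = p_wrong_after * cost_if_wrong      # $40,000
value_of_measurement = eol_before - eol_after  # $120,000

print(f"value of the measurement: ${value_of_measurement:,.0f}")
```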
It is very possible to experience an increase in confidence about decisions and forecasts without actually improving things—or even making them worse.
“Monte Carlo simulation.”
Uncertainty about the costs and benefits of some new investment is really the basis of that investment’s risk.
All risk in any project investment ultimately can be expressed by one method: the ranges of uncertainty on the costs and benefits and probabilities on events that might affect them.
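A minimal Monte Carlo sketch of that method, assuming made-up 90% CIs for cost and benefit and normal distributions (a 90% CI spans 3.29 standard deviations):

```python
import random

def normal_from_90ci(lower, upper):
    """Convert a 90% CI to (mean, sigma); a 90% CI spans 3.29 sigmas."""
    return (lower + upper) / 2, (upper - lower) / 3.29

# Invented 90% CIs for a hypothetical investment, for illustration only.
cost_mu, cost_sigma = normal_from_90ci(150_000, 300_000)
benefit_mu, benefit_sigma = normal_from_90ci(100_000, 500_000)

trials = 100_000
nets = [random.gauss(benefit_mu, benefit_sigma)
        - random.gauss(cost_mu, cost_sigma)
        for _ in range(trials)]

mean_net = sum(nets) / trials
p_loss = sum(n < 0 for n in nets) / trials
print(f"mean net benefit: ${mean_net:,.0f}")
print(f"chance of a net loss: {p_loss:.1%}")  # the investment's risk
```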

