How to Measure Anything: Finding the Value of Intangibles in Business
Kindle Notes & Highlights
12%
once we figure out that we care about an “intangible” like public image because it impacts specific things like advertising by customer referral, which affects sales, then we have begun to identify how to measure it.
12%
“thought experiment.”
12%
Imagine you are an alien scientist who can clone not just sheep or even people but entire organizations. Let’s say you were investigating a particular fast food chain and studying the effect of a particular intangible, say, “employee empowerment.” You create a pair of the same organization, calling one the “test” group and one the “control” group. Now imagine that you give the test group a little bit more “employee empowerment” while holding the amount in the control group constant. What do you imagine you would actually observe—in any way, directly or indirectly—that would change for the first …
12%
The purpose of the measurement is often the key to defining what the measurement is really supposed to be. In the first chapter, I argued that all measurements of any interest to a manager must support at least one specific decision.
12%
Business managers need to realize that some things seem intangible only because they just haven’t defined what they are talking about. Figure out what you mean and you are halfway to measuring it.
13%
Most of these approaches to measurements are just variations on basic methods involving different types of sampling and experimental controls and, sometimes, choosing to focus on different types of questions.
13%
You could engage in a formal office-wide census of this question, but it would be time consuming and expensive and will probably give you more precision than you need. Suppose, instead, you just randomly pick five people.
13%
Rule of Five
There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.
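The 93.75% figure has a short derivation: the range of a five-item random sample misses the population median only when all five draws land on the same side of it, which happens with probability 2 × 0.5⁵ = 0.0625. A minimal Python sketch (not from the book; the standard-normal population and trial counts are arbitrary choices) confirms it by simulation:

```python
import random
import statistics

def rule_of_five_hit_rate(trials=100_000, n=5):
    # Any continuous population works; the rule does not depend on its shape.
    population = [random.gauss(0, 1) for _ in range(10_000)]
    true_median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = random.sample(population, n)
        if min(sample) <= true_median <= max(sample):
            hits += 1
    return hits / trials

print(2 * 0.5 ** 5)               # analytic miss probability: 0.0625
print(rule_of_five_hit_rate())    # simulated hit rate, roughly 0.9375
```

The same argument shows the payoff of additional observations: with n random samples, the range contains the median with probability 1 − 2 × 0.5ⁿ, so the first few observations do most of the work.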
13%
the origin of the word experiment. “Experiment” comes from the Latin ex-, meaning “of/from,” and periri, meaning “try/attempt.”
13%
If you were attempting to show whether a particular initiative increased sales, they respond: “But lots of factors affect sales. You’ll never know how much that initiative affected it.”
13%
Four Useful Measurement Assumptions
1. Your problem is not as unique as you think.
2. You have more data than you think.
3. You need less data than you think.
4. An adequate amount of new data is more accessible than you think.
14%
Assume the information you need to answer the question is somewhere within your reach and if you just took the time to think about it, you might find it.
14%
You need far less data than you think.
14%
that the first few observations are usually the highest payback in uncertainty reduction
14%
an innovative public school that teaches primarily through online, remote-learning methods that emphasize individualized curriculum.
14%
This online tool allows students to “raise hands,” ask questions by either voice or text chat, and interact with the teacher in the instructional session. Everything the teachers or students say or do online is recorded.
14%
select recordings of sessions and particular slices of time, each a minute or two long, throughout a recorded session. For those randomly chosen time slices, they could sample what the teacher was saying and what the students were doing.
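As a purely illustrative sketch of that sampling step (the session length, slice duration, and number of slices below are invented assumptions, not figures from the book), the selection could look like this in Python:

```python
import random

def sample_time_slices(session_minutes=50, slice_minutes=2, n_slices=5, seed=None):
    """Randomly pick n non-overlapping (start, end) windows, in minutes, from one session."""
    rng = random.Random(seed)
    # Candidate start times on a grid of slice_minutes, so chosen windows cannot overlap.
    candidate_starts = range(0, session_minutes - slice_minutes + 1, slice_minutes)
    starts = rng.sample(list(candidate_starts), k=n_slices)
    return sorted((s, s + slice_minutes) for s in starts)

# e.g., five two-minute windows to review from one 50-minute recorded session
print(sample_time_slices(seed=7))
```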
14%
number of standing ovations.
14%
Above all else, the intuitive experimenter, as the origin of the word “experiment” denotes, makes an attempt. It’s a habit.
15%
what really makes a measurement of high value is a lot of uncertainty combined with a high cost of being wrong.
15%
the so-called Houston Miracle of the Texas school system in the 1990s. Public schools were under a new set of performance metrics to hold educators accountable for results. It is now known that the net effect of this “miracle” was that schools were incentivized to find ways to drop low-achieving students from the rolls. This is hardly the outcome most taxpayers thought they were funding.
15%
If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter.
17%
Applied Information Economics.
18%
Prior to making a measurement, we need to answer the following:
1. What is the decision this measurement is supposed to support?
2. What is the definition of the thing being measured in terms of observable consequences?
3. How, exactly, does this thing matter to the decision being asked?
4. How much do you know about it now (i.e., what is your current level of uncertainty)?
5. What is the value of additional information?
18%
If a measurement matters at all, it is because it must have some conceivable effect on decisions and behavior.
18%
Once the managers realized that many reports simply had no bearing on decisions, they understood that those reports must, therefore, have no value.
18%
When I asked if they could identify a single decision that each report could conceivably affect, they found quite a few that had no effect on any decision. Therefore, the information value of those reports was zero.
19%
Measurement of Uncertainty: A set of probabilities assigned to a set of possibilities. For example: “There is a 60% chance this market will more than double in five years, a 30% chance it will grow at a slower rate, and a 10% chance the market will shrink in the same period.”
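Written down literally, that definition is just a set of possibilities paired with probabilities that sum to 1. A tiny sketch restating the book's market example:

```python
import math

# Uncertainty about five-year market growth, expressed as probabilities over possibilities
market_outlook = {
    "more than doubles in five years": 0.60,
    "grows at a slower rate": 0.30,
    "shrinks over the same period": 0.10,
}

# A valid measurement of uncertainty: the probabilities cover the possibilities and sum to 1
assert math.isclose(sum(market_outlook.values()), 1.0)
```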
19%
“What problem are you trying to solve with this measurement?”
20%
They resolved that improved IT security means a reduction in the frequency and severity of a specific list of undesirable events.
21%
Knowing what you know now about something actually has an important and often surprising impact on how you should measure it or even whether you should measure it.
21%
Unfortunately, extensive studies have shown that very few people are naturally calibrated estimators.
21%
Two Extremes of Subjective Confidence
Overconfidence: When an individual routinely overstates knowledge and is correct less often than he or she expects. For example, when asked to make estimates with a 90% confidence interval, many fewer than 90% of the true answers fall within the estimated ranges.
Underconfidence: When an individual routinely understates knowledge and is correct much more often than he or she expects. For example, when asked to make estimates with a 90% confidence interval, many more than 90% of the true answers fall within the estimated ranges.
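Scoring a calibration exercise is straightforward: gather an estimator's 90% confidence intervals, compare them with the true answers once they are known, and check whether roughly 90% land inside. A hedged sketch with invented numbers:

```python
def hit_rate(intervals, truths):
    """Fraction of true values that fall inside the stated (low, high) intervals."""
    hits = sum(low <= truth <= high for (low, high), truth in zip(intervals, truths))
    return hits / len(truths)

# Ten 90% CIs from one estimator, plus the answers learned later (made-up numbers)
estimates = [(10, 50), (2, 8), (100, 300), (1, 4), (20, 60),
             (5, 15), (0, 2), (30, 90), (7, 12), (40, 80)]
answers   = [55, 6, 250, 3, 70, 9, 1, 85, 20, 45]

rate = hit_rate(estimates, answers)
print(f"{rate:.0%} of true answers fell inside the 90% CIs")  # 70% here, below the 90% target
```

With only ten questions any single result is noisy, but a hit rate that stays well below 90% over many questions is the signature of overconfidence.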
22%
Methods like the equivalent bet test help estimators give more realistic assessments of their uncertainty.
23%
asking people to identify pros and cons for the validity of each of their estimates. A pro is a reason why the estimate is reasonable; a con is a reason why it might be overconfident.
23%
look at each bound on the range as a separate “binary” question. A 90% CI means there is a 5% chance the true value could be greater than the upper bound and a 5% chance it could be less than the lower bound. This means that estimators must be 95% sure that the true value is less than the upper bound.
23%
the “absurdity test.” It reframes the question from “What do I think this value could be?” to “What values do I know to be ridiculous?”
23%
After a few calibration tests and practice with methods like listing pros and cons, using the equivalent bet, and anti-anchoring, estimators learn to fine-tune their “probability senses.”
24%
But if you are allowed to model your uncertainty with ranges and probabilities, you do not have to state something you don’t know for a fact. If you are uncertain, your ranges and assigned probabilities should reflect that.
24%
lack of having an exact number is not the same as knowing nothing.
26%
The results are clear: The difference in accuracy is due entirely to calibration training, and the calibration training—even though it uses trivia questions—works for real-world predictions.
26%
the equivalent bet,
26%
they had no real motivation to perform well. As I observed in my own workshops, those who did not expect their answers to be used in the subsequent real-world estimation tasks were almost always those who showed little or no improvement.
27%
after a person has been calibrated, I have never heard them offer such challenges. Apparently, the hands-on experience of being forced to assign probabilities, and then seeing that this was a measurable skill in which they could see real improvements, addresses these concerns. Although this was not an objective I envisioned when I first started calibrating people, I came to learn how critical this process was in getting them to accept the entire concept of probabilistic analysis in decision making.
27%
risk reduction is the basis of computing the value of a measurement,
27%
if a measurement matters to you at all, it is because it must inform some decision that is uncertain and has negative consequences if it turns out wrong.
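One simplified way to see why (this is only a sketch of the idea with invented numbers, not the book's full information-value method): the most a measurement could possibly be worth is roughly the chance the current decision is wrong times the cost of being wrong, so a measurement that cannot change any decision is worth nothing.

```python
# Simplified value-of-information sketch. All numbers are illustrative assumptions:
# we plan to approve a project, but the approval might turn out to be a mistake.
p_wrong = 0.25                  # current chance the "approve" decision is the wrong call
cost_of_being_wrong = 400_000   # loss incurred if we approve and it fails

expected_opportunity_loss = p_wrong * cost_of_being_wrong
print(f"Rough upper bound on what a measurement is worth: ${expected_opportunity_loss:,.0f}")
# If no measurement could change the decision, that upper bound is $0.
```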
28%
it is very possible to experience an increase in confidence about decisions and forecasts without actually improving things—or even making them worse.
28%
“Monte Carlo simulation.”
28%
uncertainty about the costs and benefits of some new investment is really the basis of that investment’s risk.
28%
all risk in any project investment ultimately can be expressed by one method: the ranges of uncertainty on the costs and benefits and probabilities on events that might affect them.
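A Monte Carlo simulation ties the last two points together: draw costs and benefits from their uncertainty ranges many times and see how often the investment loses money. The sketch below is illustrative only; the 90% CIs are invented, and treating them as normal distributions is an assumption:

```python
import random

def ci90_to_normal(low, high):
    """Convert a 90% CI into (mean, sd), treating the range as a normal distribution."""
    mean = (low + high) / 2
    sd = (high - low) / 3.29   # a 90% normal CI spans about 3.29 standard deviations
    return mean, sd

def simulate(trials=100_000, seed=1):
    rng = random.Random(seed)
    benefit_mu, benefit_sd = ci90_to_normal(300_000, 900_000)  # annual benefit, 90% CI
    cost_mu, cost_sd = ci90_to_normal(350_000, 550_000)        # annual cost, 90% CI
    losses = 0
    for _ in range(trials):
        net = rng.gauss(benefit_mu, benefit_sd) - rng.gauss(cost_mu, cost_sd)
        losses += net < 0
    return losses / trials

print(f"Chance the investment loses money: {simulate():.1%}")
```

The output, a probability of loss rather than a single-point forecast, is exactly the kind of risk expression the passage describes.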