How to Measure Anything: Finding the Value of Intangibles in Business
measurement as “uncertainty reduction”
since this uncertainty can change as a result of observations, we treat uncertainty as a feature of the observer, not necessarily the thing being observed.
regardless of whether we have objective frequencies as a guide, we can still describe the uncertainty of a person using probabilities.
Finally, we need to remember that there is another edge to the “uncertainty reduction” sword. Total elimination of uncertainty is not necessary for a measurement, but there must be some expected uncertainty reduction. If a decision maker or analyst engages in what they believe to be measurement activities, but their estimates and decisions actually get worse on average, then they are not actually reducing their error and they are not conducting a measurement according to our definition.
we discussed how ordinal scales can be a valid form of measurement. But they can be misused. It is common for organizations to use ordinal scales in some sort of “weighted score” to evaluate alternatives in a decision. This often involves operations that are technically invalid for ordinal scales (multiplication and addition), and the evidence about the performance of these methods (i.e., whether they measurably improve decisions) is either nonexistent or negative.
If someone asks how to measure “strategic alignment” or “flexibility” or “customer satisfaction,” I simply ask: “What do you mean, exactly?” It is interesting how often people further refine their use of the term in a way that almost answers the measurement question by itself.
Once managers figure out what they mean and why it matters, the issue in question starts to look a lot more measurable. This is usually my first level of analysis when I conduct what I’ve called “clarification workshops.” It’s simply a matter of clients stating a particular, but initially ambiguous, item they want to measure. I then follow up by asking “What do you mean by <fill in the blank>?”
Whether the measurement challenge is about security, the environment, or public image, there are two methods that seem to help with the particularly hard-to-define problems. I use what I call a “clarification chain” or, if that doesn’t work, perhaps a type of thought experiment. The clarification chain is just a short series of connections that should bring us from thinking of something as an intangible to thinking of it as tangible.
if X is something that we care about, then X, by definition, must be detectable in some way. How could we care about things like “quality,” “risk,” “security,” or “public image” if these things were totally undetectable, in any way, directly or indirectly? If we have reason to care about some unknown quantity, it is because we think it corresponds to desirable or undesirable results in some way. Second, if this thing is detectable, then it must be detectable in some amount. If you can observe a thing at all, you can observe more of it or less of it. Once we accept that much, the final step is …
The clarification chain is a variation on an idea described by the early twentieth-century psychologist Edward Lee Thorndike: “[I]f a thing exists, it exists in some amount, if it exists in some amount, it can be measured.”
Clarification Chain
1. If it matters at all, it is detectable/observable.
2. If it is detectable, it can be detected as an amount (or range of possible amounts).
3. If it can be detected as a range of possible amounts, it can be measured.
I may also try a type of “thought experiment.” Imagine you are an alien scientist who can clone not just sheep or even people but entire organizations. Let’s say you were investigating a particular fast food chain and studying the effect of a particular intangible, say, “employee empowerment.” You create a pair of the same organization calling one the “test” group and one the “control” group. Now imagine that you give the test group a little bit more “employee empowerment” while holding the amount in the control group constant. What do you imagine you would actually observe—in any way, …
It also helps to state why we want to measure something in order to understand what is really being measured. The purpose of the measurement is often the key to defining what the measurement is really supposed to be.
Most of the apparently difficult measurements, however, involve indirect deductions and inferences. We need to infer something “unseen” from something “seen.” Eratosthenes couldn’t directly see the curvature of Earth, but he could deduce it from shadows and the knowledge that Earth was roughly spherical. Emily Rosa could not directly measure how therapeutic touch allegedly heals, but she could conduct an experiment to test a prerequisite claim (the therapist would at least have to detect an energy field to support the claim that this healing method worked). And she didn’t lament not having …
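The logic behind a test like Rosa's is an ordinary binomial significance check: if the therapists cannot actually detect an energy field, their correct guesses should follow a fair coin. The sketch below uses illustrative numbers, not the study's actual data, to show how such an inference from the "seen" (guesses) to the "unseen" (a claimed ability) works:

```python
from math import comb

# If a claimed ability does not exist, correct guesses in a
# two-choice test follow Binomial(n, 0.5). We ask: how likely is a
# hit count this low (or lower) under chance alone?

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n_trials = 100   # illustrative number of guesses (hypothetical)
hits = 44        # illustrative correct guesses, below the 50 expected by chance

# One-sided p-value: probability of doing this well or worse by luck alone.
p_value = binom_cdf(hits, n_trials)
print(round(p_value, 3))
```

A result near chance (a large p-value) gives no support to the claimed ability, which is exactly the kind of prerequisite evidence Rosa's design targeted.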
sometimes even small samples can tell you something that improves the odds of making a better bet in real decisions.
Here are a few examples involving inferences about something unseen from something seen:
Measuring when many other, even unknown, variables are involved: We can determine whether the new “quality program” is the reason for the increase in sales as opposed to the economy, competitor mistakes, or a new pricing policy.
Measuring the risk of rare events: The chance of a launch failure of a rocket that has never flown before, another September 11th–type attack, another levee failure in New Orleans, or another major financial crisis.
Measuring subjective preferences and values: We can measure the value of art, free time, or reducing risk to your life by assessing how much people actually pay for these things both in terms of their money and their time.
The Rule of Five is simple, it works, and it can be proven to be statistically valid for a wide variety of problems. With a sample this small, the range might be very wide, but if it is significantly narrower than your previous range, then it counts as a measurement.
Rule of Five
There is a 93.75% chance that the median of a population is between the smallest and largest values in any random sample of five from that population.
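The 93.75% figure follows from a simple argument: a sample of five fails to straddle the median only when all five draws land on the same side of it, which happens with probability 2 × (1/2)^5 = 6.25%. A quick Monte Carlo sketch (using an arbitrary stand-in population) confirms it:

```python
import random

# Monte Carlo check of the Rule of Five: how often does the population
# median fall between the min and max of a random sample of five?
# Analytically: 1 - 2 * (1/2)**5 = 0.9375.
random.seed(0)

population = list(range(100_000))  # any population works for this check
median = 49_999.5                  # median of 0..99999

trials = 100_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) < median < max(sample):
        hits += 1

print(hits / trials)  # close to 0.9375
```

The result does not depend on the population's shape, only on each independent draw having a 50% chance of landing above (or below) the median.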
If you randomly select one sample out of a large population, even a population that numbers thousands or millions, where you initially believed the population proportion could be anything between 0% and 100%, there is a 75% chance that the characteristic you observe in that sample matches the majority. Let’s call this the “Single Sample Majority Rule” or, if you prefer something more fanciful, “The Urn of Mystery Rule.”
The Single Sample Majority Rule (i.e., The Urn of Mystery Rule)
Given maximum uncertainty about a population proportion—such that you believe the proportion could be anything between 0% and 100% with all values being equally likely—there is a 75% chance that a single randomly selected sample is from the majority.
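The 75% figure is the average of max(p, 1 − p) over a uniform prior on the proportion p, i.e. the integral of max(p, 1 − p) from 0 to 1, which equals 3/4. A simulation sketch of the urn setup:

```python
import random

# Monte Carlo check of the Single Sample Majority Rule: draw an unknown
# proportion p uniformly from [0, 1], then draw one item from the urn;
# the chance the item belongs to the majority is
#   integral of max(p, 1 - p) over [0, 1] = 3/4.
random.seed(0)

trials = 200_000
majority_hits = 0
for _ in range(trials):
    p = random.random()                # unknown share of "green" marbles
    drew_green = random.random() < p   # one random draw from the urn
    majority_is_green = p > 0.5        # ties at exactly 0.5 have probability 0
    if drew_green == majority_is_green:
        majority_hits += 1

print(majority_hits / trials)  # close to 0.75
```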
The only valid reason to say that a measurement shouldn’t be made is that the cost of the measurement exceeds its benefits.
Applied Information Economics—a method for assessing uncertainty, risks, and intangibles in any type of big, risky decision you can imagine. A key step in the process (in fact, the reason for the name) is the calculation of the economic value of information.
I’ve been computing the economic value of measurements on every variable in scores of various large business decisions. I found some fascinating patterns through this calculation but, for now, I’ll mention this: Most of the variables in a business case had an information value of zero. Still, something like one to four variables were both uncertain enough and had enough bearing on the outcome of the decision to merit deliberate measurement efforts.
Usually, Only a Few Things Matter—But They Usually Matter a Lot
In most business cases, most of the variables have an “information value” at or near zero. But usually at least some variables have an information value that is so high that some deliberate measurement effort is easily justified.
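The "information value" of a variable can be illustrated with a toy expected-value-of-perfect-information (EVPI) calculation: the expected gain from learning an uncertain quantity before committing to a decision. The decision, names, and numbers below are hypothetical, not from the book's case studies:

```python
import random

# Toy EVPI sketch: an invest/don't-invest decision whose payoff depends
# on an uncertain "demand" variable, here uniform on [0, 100].
random.seed(0)

def payoff(invest, demand):
    # Investing nets demand - 40 (hypothetical cost); not investing nets 0.
    return demand - 40.0 if invest else 0.0

demand_samples = [random.uniform(0, 100) for _ in range(100_000)]
n = len(demand_samples)

# Decide NOW, without information: pick the action with the best average payoff.
ev_invest = sum(payoff(True, d) for d in demand_samples) / n
best_without_info = max(ev_invest, 0.0)

# Decide AFTER learning demand: pick the best action for each outcome.
ev_with_info = sum(max(payoff(True, d), 0.0) for d in demand_samples) / n

evpi = ev_with_info - best_without_info
print(round(evpi, 1))  # analytically 8.0 for these numbers
```

A variable with EVPI at or near zero (e.g., one that never flips the decision) is not worth measuring; the few variables with large EVPI justify deliberate measurement effort.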
If you are betting a lot of money on the outcome of a variable that has a lot of uncertainty, then even a marginal reduction in your uncertainty has a computable monetary value.
When someone says a variable is “too expensive” or “too difficult” to measure, we have to ask “Compared to what?” If the information value of the measurement is literally or virtually zero, of course, no measurement is justified. But if the measurement has any significant value, we must ask: “Is there any measurement method at all that can reduce uncertainty enough to justify the cost of the measurement?” Once we recognize the value of even partial uncertainty reduction, the answer is usually “Yes.”
A variation on the economic objection to measurement concerns how measurement influences not management decisions but the behaviors of others, in ways that may or may not be the intended outcome. For example, performance metrics for a help desk based on how many calls it handles may encourage help desk workers to take a call and conclude it without solving the client’s problem.
For any given set of measurements, there are a large number of possible incentive structures. This kind of objection presumes that since one set of measurements was part of an unproductive incentive program, then any measurements must incentivize unproductive behavior. Nothing could be further from the truth. If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter.
There is no controversy in social science which shows such a large body of qualitatively diverse studies coming out so uniformly in the same direction as this one. When you’re pushing 90 investigations [now closer to 150], predicting everything from the outcome of football games to the diagnosis of liver disease and when you can hardly come up with a half dozen studies showing even a weak tendency in favor of the [human expert], it is time to draw a practical conclusion.10
If we insist on being ignorant of the relative values of various public welfare programs (which is the necessary result of a refusal to measure their value), then we will almost certainly allocate limited resources in a way that solves less valuable problems for more money.
the preference for ignorance over even marginal reductions in ignorance is never the moral high ground.
Four Useful Measurement Assumptions
1. It’s been measured before.
2. You have far more data than you think.
3. You need far less data than you think.
4. Useful, new observations are more accessible than you think.
It’s Been Measured Before
No matter how difficult or “unique” your measurement problem seems to you, assume it has been done already by someone else, perhaps in another field if not your own.
You Have Far More Data than You Think
The information you need to answer the question is somewhere within your reach and, if you just took the time to think about it, you might find it. Few executives are even remotely aware of all the data that are routinely tracked and recorded in their organization. The things you care about measuring are also things that tend to leave tracks, if you are resourceful enough to find them.
The Uniqueness Fallacy. This fallacy is based on the idea that if a situation is unique, there is nothing in general we can learn by examining other situations. Examining different situations may not be perfect, but it is an improvement. As Paul Meehl showed, interpolating from statistical data outperformed experts even when each situation was “unique” in some way (e.g., judging the risk of parole violations or the potential of an applicant for medical school).
Even though each mission really was arguably unique for one or many reasons, the model based on historical data was consistently better than the mission scientists and engineers at predicting overruns of cost and schedule and even mission failures. Again, it is important to remember that the experience of the NASA scientists and engineers, like the statistical models, must be based on historical data. If there is no basis to apply statistical models and scientific evidence, then there can be no basis for experience, either.
You Need Far Less Data than You Think
As we showed with the Rule of Five and the Single Sample Majority Rule, small samples can be informative, especially when you start from a position of minimal information. In fact, mathematically speaking, when you know almost nothing, almost anything will tell you something.
We will find in later chapters that the first few observations are usually the highest payback in uncertainty reduction for a given amount of effort. In fact, it is a common misconception that the higher your uncertainty, the more data you need to significantly reduce it. Again, when you know next to nothing, you don’t need much additional data to tell you something you didn’t know before.
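One way to see why the first few observations pay off most is a Bayesian update from a uniform ("know nothing") prior on a proportion: the width of the posterior's roughly-90% interval shrinks fastest with the earliest samples. The sketch below uses a normal approximation to the Beta posterior (an assumption made to keep the example dependency-free):

```python
from math import sqrt

def beta_interval_width(successes, n, z=1.645):
    """Approximate width of a central 90% credible interval for a
    proportion, using a normal approximation to the Beta(1 + s, 1 + n - s)
    posterior that follows from a uniform prior and n observations."""
    a, b = 1 + successes, 1 + n - successes
    var = a * b / ((a + b) ** 2 * (a + b + 1))  # variance of Beta(a, b)
    return 2 * z * sqrt(var)

# Interval width after 0, 5, 20, and 100 half-successful observations:
# the biggest single drop comes from the first handful of samples.
for n in (0, 5, 20, 100):
    print(n, round(beta_interval_width(n // 2, n), 2))
```

The first five observations cut the interval width by more than the next fifteen do, which is the "highest payback" effect the passage describes.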
Useful, New Observations Are More Accessible than You Think
The scientific method isn’t just about having data. It’s also about getting data.
Don’t assume that the only way to reduce your uncertainty is to use an impractically sophisticated method. Are you trying to get published in a peer-reviewed journal, or are you just trying to reduce your uncertainty about a real-life business decision? Build on the “You need less data than you think” assumption and you may find that you don’t have to gather as much data as you thought.