Kindle Notes & Highlights
by Dan Heath
Read between June 6 - June 11, 2020
not because of the financial returns but because of the moral returns.
The only drawback, really, is linguistic. “Social determinants of health” is one of those ostentatiously bland phrases that seem engineered to deter interest in the topics they name. Kind of like if dating were rebranded “aspirational interpersonal exchange.”
Nothing is easy. The world is complex and there are no quick fixes. But if I can learn to uncross my arms and extend my hands, I can be someone who eases suffering rather than ignores it.
believes they will transform the health system from the inside:
The school is betting that by drawing future doctors closer to the sources of disease and despair, they will be quicker to identify the leverage points that lead to health.
“power of proximity.”
when we allow ourselves to be shielded and disconnected from those who are vulnerable and disfavored, we sustain and contribute to these problems.
Getting proximate is not a guarantee of progress. It’s a start, not a finish. Upstream change often means fumbling our way forward, figuring out what works and what doesn’t, and under what conditions. But in this context, even a defeat is effectively a victory. Because every time we learn something, we fill in one more piece of the map as we hunt for the levers that can move the world.
soporific
This is the model of an early-warning story: Data warns us of a problem we wouldn’t have seen otherwise—say, needing ambulances deployed closer to nursing homes at mealtimes. And that predictive capacity gives us the time to act to prevent problems. Northwell paramedics can’t stop people from suffering cardiac arrest, but they can stop some of those people from dying.
So where does this leave us? Some early-warning systems work wonders: They can keep elevators from failing and customers from churning. Other times, they may cause more harm than benefit, as in the thyroid cancer “epidemic” in South Korea. How do we distinguish between the two? One key factor is the prevalence of false positives: warnings that incorrectly signal trouble.
alarm fatigue,
When everything is cause for alarm, nothing is cause for alarm.
As we design early-warning systems, we should keep these questions in mind: Will the warning give us enough time to act effectively? (If not, why bother?) What rate of false positives can we expect?
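A rough sketch of the base-rate arithmetic behind that false-positive question, using hypothetical numbers (none of these figures come from the book): suppose real trouble occurs in 1% of cases, the warning catches 99% of real trouble, and it falsely fires on 5% of normal cases. Then

\[
P(\text{trouble} \mid \text{alarm}) = \frac{0.01 \times 0.99}{0.01 \times 0.99 + 0.99 \times 0.05} = \frac{0.0099}{0.0594} \approx 0.17,
\]

so roughly five out of six alarms would be false. Arithmetic like this is what produces alarm fatigue and overdiagnosis "epidemics" such as the South Korean thyroid case.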
aggrieved
“When we look at it, we don’t see the dominoes, we see the spaces in between,” said Hockley in the talk, “when someone could have done something or said something to stop the next domino from falling over.”
Downstream efforts restore the previous state.
With upstream efforts, success is not always self-evident.
But because there is a separation between (a) the way we’re measuring success and (b) the actual results we want to see in the world, we run the risk of a “ghost victory”: a superficial success that cloaks failure.
In the first kind of ghost victory, your measures show that you’re succeeding, but you’ve mistakenly attributed that success to your own work.
The second is that you’ve succeeded on your short-term measures, but they didn’t align with your long-term mission.
The third is that your short-term measures became the mission in a way that really undermined the work.
It’s only when you examine them very closely that you can spot the cracks—the signs of separation between apparent and real success.
In his book Thinking, Fast and Slow, the psychologist Daniel Kahneman wrote that our brains, when confronted with complexity, will often perform an invisible substitution, trading a hard question for an easy one.
Choosing the wrong short-term measures can doom upstream work.
They are critical navigational aids.
Getting short-term measures right is frustratingly complex. And it’s critical. In fact, the only thing worse than contending with short-term measures is not having them at all.
There is also a third kind of ghost victory that’s essentially a special case of the second. It occurs when measures become the mission.
some of the success was illusory.
People “gaming” measures is a familiar phenomenon.
It was like the crime rate itself became the boss.”
We cannot be naïve about this phenomenon of gaming. When people are rewarded for achieving a certain number, or punished for missing it, they will cheat. They will skew. They will skim. They will downgrade. In the mindless pursuit of “hitting the numbers,” people will do anything that’s legal without the slightest remorse—even if it grossly violates the spirit of the mission—and they will find ways to look more favorably upon what’s illegal.
Grove made sure to balance quantity measures with quality measures.
Any upstream effort that makes use of short-term measures—which, presumably, is most of them—should devote time to “pre-gaming,” meaning the careful consideration of how the measures might be misused.
Here are four questions to include in your pre-gaming:
The “rising tides” test: Imagine that we succeed on our short-term measures. What else might explain that success, other than our own efforts, and are we tracking those factors?
The misalignment test: Imagine that we’ll eventually learn that our short-term measures do not reliably predict success on our ultimate mission. What would allow us to sniff out that misalignment as early as possible, and what alternate short-term measures might provide potential replacements?
The lazy bureaucrat test: If someone wanted to succeed on these measures with the least effort possible, what would they do?
The unintended consequences test: What if we succeed at our mission—not just the short-term measures but the mission itself—yet cause negative unintended consequences that outweigh the value of our work? What should we be paying attention to that’s offstage from our work?
“As you think about a system, spend part of your time from a vantage point that lets you see the whole system, not just the problem that may have drawn you to focus on the system to begin with,”
In planning upstream interventions, we’ve got to look outside the lines of our own work. Zoom out and pan from side to side.
If we try to eliminate X (an invasive species or a drug or a process or a product), what will fill the void? If we invest more time and energy in a particular problem, what will receive less focus as a result, and how might that inattention affect the system as a whole?
When we fail to anticipate second-order consequences, it’s an invitation to disaster, as the “cobra effect” makes clear. The cobra effect occurs when an attempted solution to a problem makes the problem worse.
The effort to reduce the number of cobras yielded more cobras.
When people were placed closer together so that they’d talk more, they talked less. The cobra strikes again.
“Get your model out there where it can be shot at. Invite others to challenge your assumptions and add their own.… The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error.”
For experimentation to succeed, we need prompt and reliable feedback.
“The only way you’re going to know it’s wrong is by having these feedback mechanisms and these measurement systems in place.”
We succeed by ensuring that we’ll have the feedback we need to navigate.
What would happen if you approached your boss’s boss, unsolicited, with a critique of her work? This is a systems problem. There’s an open loop in the system: The insight from physical therapists is never getting fed back to the surgeons.
Feedback loops spur improvement. And where those loops are missing, they can be created.
But improvement shouldn’t require heroism!

