Upstream: The Quest to Solve Problems Before They Happen
not because of the financial returns but because of the moral returns.
The only drawback, really, is linguistic. “Social determinants of health” is one of those ostentatiously bland phrases that seem engineered to deter interest in the topics they name. Kind of like if dating were rebranded “aspirational interpersonal exchange.”
Nothing is easy. The world is complex and there are no quick fixes. But if I can learn to uncross my arms and extend my hands, I can be someone who eases suffering rather than ignores it.
believes they will transform the health system from the inside:
The school is betting that by drawing future doctors closer to the sources of disease and despair, they will be quicker to identify the leverage points that lead to health.
“power of proximity.”
when we allow ourselves to be shielded and disconnected from those who are vulnerable and disfavored, we sustain and contribute to these problems.
Getting proximate is not a guarantee of progress. It’s a start, not a finish. Upstream change often means fumbling our way forward, figuring out what works and what doesn’t, and under what conditions. But in this context, even a defeat is effectively a victory. Because every time we learn something, we fill in one more piece of the map as we hunt for the levers that can move the world.
soporific
This is the model of an early-warning story: Data warns us of a problem we wouldn’t have seen otherwise—say, needing ambulances deployed closer to nursing homes at mealtimes. And that predictive capacity gives us the time to act to prevent problems. Northwell paramedics can’t stop people from suffering cardiac arrest, but they can stop some of those people from dying.
So where does this leave us? Some early-warning systems work wonders: They can keep elevators from failing and customers from churning. Other times, they may cause more harm than benefit, as in the thyroid cancer “epidemic” in South Korea. How do we distinguish between the two? One key factor is the prevalence of false positives: warnings that incorrectly signal trouble.
alarm fatigue,
When everything is cause for alarm, nothing is cause for alarm.
As we design early-warning systems, we should keep these questions in mind: Will the warning give us enough time to act effectively? (If not, why bother?) What rate of false positives can we expect?
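The base-rate arithmetic behind that second question can be made concrete. The numbers below are purely illustrative (not from the book): when the problem being warned about is rare, even a system with a modest false-positive rate produces mostly false alarms, which is exactly how alarm fatigue sets in.

```python
def alarm_precision(prevalence, sensitivity, false_positive_rate):
    """Return the fraction of triggered alarms that signal a real problem.

    prevalence: how often the real problem occurs
    sensitivity: fraction of real problems the warning catches
    false_positive_rate: fraction of non-problems that still trigger a warning
    """
    true_alarms = prevalence * sensitivity
    false_alarms = (1 - prevalence) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

# A warning that catches 90% of real failures and falsely fires only 5%
# of the time, monitoring a problem present in 1% of cases:
p = alarm_precision(prevalence=0.01, sensitivity=0.90, false_positive_rate=0.05)
print(f"{p:.0%} of alarms are real")  # roughly 15% -- the other 85% are noise
```

So a system that sounds accurate in isolation can still bury its operators in noise; lowering the false-positive rate (or raising the prevalence by alarming only on high-risk cases) is what restores trust in the warning.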
aggrieved
“When we look at it, we don’t see the dominoes, we see the spaces in between,” said Hockley in the talk, “when someone could have done something or said something to stop the next domino from falling over.”
Downstream efforts restore the previous state.
upstream efforts, success is not always self-evident.
But because there is a separation between (a) the way we’re measuring success and (b) the actual results we want to see in the world, we run the risk of a “ghost victory”: a superficial success that cloaks failure.
In the first kind of ghost victory, your measures show that you’re succeeding, but you’ve mistakenly attributed that success to your own work.
The second is that you’ve succeeded on your short-term measures, but they didn’t align with your long-term mission.
the third is that your short-term measures became the mission in a way that really undermined the work.
examine them very closely that you can spot the cracks—the signs of separation between apparent and real success.
In his book Thinking, Fast and Slow, the psychologist Daniel Kahneman wrote that our brains, when confronted with complexity, will often perform an invisible substitution, trading a hard question for an easy one.
Choosing the wrong short-term measures can doom upstream work.
They are critical navigational aids.
Getting short-term measures right is frustratingly complex. And it’s critical. In fact, the only thing worse than contending with short-term measures is not having them at all.
There is also a third kind of ghost victory that’s essentially a special case of the second. It occurs when measures become the mission.
some of the success was illusory.
People “gaming” measures is a familiar phenomenon.
It was like the crime rate itself became the boss.”
We cannot be naïve about this phenomenon of gaming. When people are rewarded for achieving a certain number, or punished for missing it, they will cheat. They will skew. They will skim. They will downgrade. In the mindless pursuit of “hitting the numbers,” people will do anything that’s legal without the slightest remorse—even if it grossly violates the spirit of the mission—and they will find ways to look more favorably upon what’s illegal.
Grove made sure to balance quantity measures with quality measures.
Any upstream effort that makes use of short-term measures—which, presumably, is most of them—should devote time to “pre-gaming,” meaning the careful consideration of how the measures might be misused.
Here are four questions to include in your pre-gaming:
The “rising tides” test: Imagine that we succeed on our short-term measures. What else might explain that success, other than our own efforts, and are we tracking those factors?
The misalignment test: Imagine that we’ll eventually learn that our short-term measures do not reliably predict success on our ultimate mission. What would allow us to sniff out that misalignment as early as possible, and what alternate short-term measures might provide potential replacements?
The lazy bureaucrat test: If someone wanted to succeed on these measures with the least effort possible, what would they do?
The unintended consequences test: What if we succeed at our mission—not just the short-term measures but the mission itself—yet cause negative unintended consequences that outweigh the value of our work? What should we be paying attention to that’s offstage from our work?
“As you think about a system, spend part of your time from a vantage point that lets you see the whole system, not just the problem that may have drawn you to focus on the system to begin with,”
In planning upstream interventions, we’ve got to look outside the lines of our own work. Zoom out and pan from side to side.
If we try to eliminate X (an invasive species or a drug or a process or a product), what will fill the void? If we invest more time and energy in a particular problem, what will receive less focus as a result, and how might that inattention affect the system as a whole?
When we fail to anticipate second-order consequences, it’s an invitation to disaster, as the “cobra effect” makes clear. The cobra effect occurs when an attempted solution to a problem makes the problem worse.
The effort to reduce the number of cobras yielded more cobras.
When people were placed closer together so that they’d talk more, they talked less. The cobra strikes again.
“Get your model out there where it can be shot at. Invite others to challenge your assumptions and add their own.… The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error.”
For experimentation to succeed, we need prompt and reliable feedback.
“The only way you’re going to know it’s wrong is by having these feedback mechanisms and these measurement systems in place.”
We succeed by ensuring that we’ll have the feedback we need to navigate.
What would happen if you approached your boss’s boss, unsolicited, with a critique of her work? This is a systems problem. There’s an open loop in the system: The insight from physical therapists is never getting fed back to the surgeons.
Feedback loops spur improvement. And where those loops are missing, they can be created.
But improvement shouldn’t require heroism!