The Book of Why: The New Science of Cause and Effect
Read between June 4, 2020 and February 19, 2022
2%
None of the letters k, B, or P is in any mathematical way privileged over any of the others. How then can we express our strong conviction that it is the pressure that causes the barometer to change and not the other way around? And if we cannot express even this, how can we hope to express the many other causal convictions that do not have mathematical formulas, such as that the rooster’s crow does not cause the sun to rise? My college professors could not do it and never complained. I would be willing to bet that none of yours ever did either. We now understand why: never were they shown a mathematical language of causes; nor were they shown its benefits. It is in fact an indictment of science that it has neglected to develop such a language for so many generations.
Phillip Hunter’s note: No representation of “common sense”
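To make the symmetry complaint concrete (the equation itself is not quoted in the highlight above, so the form here is only an illustrative assumption): if the barometer reading B tracks the pressure P through a law such as B = kP, the very same relation can be rewritten as P = B/k or k = B/P. The algebra treats k, B, and P interchangeably; nothing in it says that wiggling P moves B rather than the other way around, which is exactly the common-sense fact it fails to represent.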
2%
scientific tools are developed to meet scientific needs. Precisely because we are so good at handling questions about switches, ice cream, and barometers, our need for special mathematical machinery to handle them was not obvious. But as scientific curiosity increased and we began posing causal questions in complex legal, business, medical, and policy-making situations, we found ourselves lacking the tools and principles that mature science should provide.
2%
Despite heroic efforts by the geneticist Sewall Wright (1889–1988), causal vocabulary was virtually prohibited for more than half a century. And when you prohibit speech, you prohibit thought and stifle principles, methods, and tools.
2%
The rooster’s crow is highly correlated with the sunrise; yet it does not cause the sunrise.
2%
participate in the “data economy.” But I hope with this book to convince you that data are profoundly dumb. Data can tell you that the people who took a medicine recovered faster than those who did not take it,...
2%
calculus of causation consists of two languages: causal diagrams, to express what we know, and a symbolic language, resembling algebra, to express what we want to know. The causal diagrams are simply dot-and-arrow pictures that summarize our existing scientific knowledge. The dots represent quantities of interest, called “variables,” and the arrows represent known or suspected causal relationships between those variables—namely, which variable “listens” to which o...
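As a concrete (and entirely illustrative) rendering of the dot-and-arrow idea, a causal diagram can be written down as ordinary data: each variable is a dot, and each arrow runs from a cause to the variable that listens to it. The sketch below is not from the book; the variable names echo the barometer example above, and the plain-Python encoding is just one of many possible representations.

# A minimal sketch of a causal diagram as plain Python data.
# Dots are variables; arrows run from each cause to its effect.
causal_diagram = {
    # effect: list of its direct causes
    "Pressure": [],
    "Barometer": ["Pressure"],
    "Rain": ["Pressure"],
}

def arrows(diagram):
    # Return the arrows of the diagram as (cause, effect) pairs.
    return [(cause, effect) for effect, causes in diagram.items() for cause in causes]

print(arrows(causal_diagram))
# [('Pressure', 'Barometer'), ('Pressure', 'Rain')]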
2%
If you can navigate using a map of one-way streets, then you can understand causal diagrams, and you can solve the type of questions posed at the beginning of this introduction.
3%
Side by side with this diagrammatic “language of knowledge,” we also have a symbolic “language of queries” to express the questions we want answers to. For
3%
counterfactuals are not products of whimsy but reflect the very structure of our world model.
3%
My emphasis on language also comes from a deep conviction that language shapes our thoughts. You cannot answer a question that you cannot ask, and you cannot ask a question that you have no words for.
4%
For this reason, some statisticians to this day find it extremely hard to understand why some knowledge lies outside the province of statistics and why data alone cannot make up for lack of scientific knowledge.
5%
Another advantage causal models have that data mining and deep learning lack is adaptability.
5%
That’s all that a deep-learning program can do: fit a function to data. On the other hand, if she possessed a model of how the drug operated and its causal structure remained intact in the new location, then the estimand she obtained in training would remain valid. It could be applied to the new data to generate a new population-specific prediction function.
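A minimal sketch of what “the estimand she obtained in training would remain valid” could look like in practice, under assumed conditions: suppose the causal model licenses the adjustment estimand sum over z of P(recovery | drug, z) x P(z) for some covariate Z. The formula (the recipe) stays fixed; only the probabilities plugged into it are re-estimated from the new location's data. Every number and name below is invented for illustration, not taken from the book.

def effect_of_drug(p_recovery_given_drug_z, p_z):
    # The estimand: sum over z of P(recovery | drug, z) * P(z).
    return sum(p_recovery_given_drug_z[z] * p_z[z] for z in p_z)

p_rec_given_drug = {"young": 0.9, "old": 0.6}   # assumed stable across locations
p_z_training     = {"young": 0.5, "old": 0.5}   # covariate mix in the training data
p_z_new_city     = {"young": 0.2, "old": 0.8}   # covariate mix in the new location

print(round(effect_of_drug(p_rec_given_drug, p_z_training), 2))  # 0.75
print(round(effect_of_drug(p_rec_given_drug, p_z_new_city), 2))  # 0.66

The same fitted pieces, reweighted by the new population's covariate distribution, give the new population-specific prediction; no new formula has to be learned from scratch.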
5%
think of a cause as something that makes a difference, and the difference it makes must be a difference from what would have happened without it.”
5%
you are smarter than your data.
5%
Data do not understand causes and effects; humans do.
5%
I hope that the new science of causal inference will enable us to better understand how we do it, because there is no better way to understand ourselves than by emulating ourselves. In the age of computers, this new understanding also brings with it the prospect of amplifying our inn...
6%
causal explanations, not dry facts, make up the bulk of our knowledge, and should be the cornerstone of machine intelligence.
6%
Humans acquired the ability to modify their environment and their own abilities at a dramatically faster rate.
6%
In his book Sapiens, historian Yuval Harari posits that our ancestors’ capacity to imagine nonexistent things was the key to everything, for it allowed them to communicate better.
6%
FIGURE 1.1. Perceived
6%
when we seek to emulate human thought on a computer, or indeed when we try to solve unfamiliar scientific problems, drawing an explicit dots-and-arrows picture is extremely useful.
6%
my research on machine learning has taught me that a causal learner must master at least three distinct levels of cognitive ability: seeing, doing, and imagining.
6%
The second, doing, entails predicting the effect(s) of deliberate alterations of the environment and choosing among these alterations to produce a desired outcome.
7%
Counterfactual learners, on the top rung, can imagine worlds that do not exist and infer reasons for observed phenomena.
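For reference, a compact way to keep the three rungs apart is by the kind of query each one answers, in notation Pearl uses elsewhere for these ideas (summarized here, not quoted from this passage):
Rung 1, association (seeing): P(y | x), how likely y is once x is observed.
Rung 2, intervention (doing): P(y | do(x)), how likely y is if x is deliberately made to happen.
Rung 3, counterfactual (imagining): P(y_x | x', y'), how likely y would have been under x, given that x' and y' are what actually occurred.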
7%
Good predictions need not have good explanations.
7%
Nevertheless, deep learning has succeeded primarily by showing that certain questions or tasks we thought were difficult are in fact not. It has not addressed the truly difficult questions that continue to prevent us from achieving humanlike AI.
7%
Deep learning has instead given us machines with truly impressive abilities but no intelligence. The difference is profound and lies in the absence of a model of reality.
7%
This lack of flexibility and adaptability is inevitable in any system that works at the first level of the Ladder of Causation.
7%
This already calls for a new kind of knowledge, absent from the data, which
7%
Intervention ranks higher than association because it involves not just seeing but changing what is.
7%
A sufficiently strong and accurate causal model can allow us to use rung-one (observational) data to answer rung-two (interventional) queries. Without the causal model, we could not go from rung one to rung two.
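One standard illustration of climbing from rung one to rung two (a sketch under assumed conditions, not an example taken from this passage): if the causal diagram says a single covariate Z is the only common cause of treatment X and outcome Y, the back-door adjustment formula P(y | do(x)) = sum over z of P(y | x, z) x P(z) turns purely observational frequencies into an interventional answer. The counts below are invented.

# Hypothetical observational counts, stratified by a confounder Z.
# Assumed causal model: Z -> X, Z -> Y, X -> Y, with Z the only confounder.
counts = {
    # (z, x): (recovered, total observed)
    ("old", "drug"):      (18, 30),
    ("old", "no drug"):   (35, 70),
    ("young", "drug"):    (63, 70),
    ("young", "no drug"): (24, 30),
}

def p_recover_do(counts, x):
    # Back-door adjustment: sum over z of P(recover | x, z) * P(z).
    total = sum(n for _, n in counts.values())
    answer = 0.0
    for z in {"old", "young"}:
        n_z = sum(n for (zz, _), (_, n) in counts.items() if zz == z)
        recovered, n_xz = counts[(z, x)]
        answer += (recovered / n_xz) * (n_z / total)
    return answer

print(round(p_recover_do(counts, "drug"), 2))     # 0.75
print(round(p_recover_do(counts, "no drug"), 2))  # 0.65

Nothing in the table itself says whether to adjust for Z or to ignore it; that choice comes from the diagram, which is exactly the sense in which the causal model, not the data, carries us from rung one to rung two.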
8%
They cannot tell us what will happen in a counterfactual or imaginary world where some observed facts are bluntly negated.
8%
creature of pure imagination.
8%
flexibility, the ability to reflect and improve on past actions, and, perhaps even more significant, our willingness to take responsibility for past and current actions.
9%
think that Turing was on to something. We probably will not succeed in creating humanlike intelligence until we can create childlike intelligence, and a key component of this intelligence is the mastery of causation.
9%
we want our computer to understand causation, we have to teach it how to break the rules.
10%
(Research has shown that three-year-olds already understand the entire Ladder of Causation.)
10%
It is because of this robustness, I conjecture, that human intuition is organized around causal, not statistical, relations.
12%
The main point is this: while probabilities encode our beliefs about a static world, causality tells us whether and how probabilities change when the world changes, be it by intervention or by act of imagination.
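To make that contrast concrete with the barometer from the opening pages (my illustration, using the do-notation sketched above): observing a low barometer reading raises the probability of rain, so P(rain | barometer reads low) is high, but forcibly setting the needle low does nothing to the weather, so P(rain | do(barometer reads low)) is just P(rain). Both quantities live in the same static data; only a causal model in which the pressure drives both the barometer and the rain tells us that they must come apart under intervention.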