Kindle Notes & Highlights
Read between October 12 - November 10, 2019
Fortunately, there’s a better answer. It goes something like this: “Mathematics is not just a sequence of computations to be carried out by rote until your patience or stamina runs out—although it might seem that way from what you’ve been taught in courses called mathematics. Those integrals are to mathematics as weight training and calisthenics are to soccer. If you want to play soccer—I mean, really play, at a competitive level—you’ve got to do a lot of boring, repetitive, apparently pointless drills. Do professional players ever use those drills? Well, you won’t see anybody on the field
…
One thing the American defense establishment has traditionally understood very well is that countries don’t win wars just by being braver than the other side, or freer, or slightly preferred by God. The winners are usually the guys who get 5% fewer of their planes shot down, or use 5% less fuel, or get 5% more nutrition into their infantry at 95% of the cost. That’s not the stuff war movies are made of, but it’s the stuff wars are made of. And there’s math every step of the way.
A mathematician is always asking, “What assumptions are you making? And are they justified?”
Wald’s personality made it hard for him to focus his attention on applied problems, it’s true. The details of planes and guns were, to his eye, so much upholstery—he peered right through to the mathematical struts and nails holding the story together. Sometimes that approach can lead you to ignore features of the problem that really matter. But it also lets you see the common skeleton shared by problems that look very different on the surface. Thus you have meaningful experience even in areas where you appear to have none.
Math is like an atomic-powered prosthesis that you attach to your common sense, vastly multiplying its reach and strength. Despite the power of mathematics, and despite its sometimes forbidding notation and abstraction, the actual mental work involved is little different from the way we think about more down-to-earth problems.
there is a grave danger that the subject will develop along the line of least resistance, that the stream, so far from its source, will separate into a multitude of insignificant branches, and that the discipline will become a disorganized mass of details and complexities. In other words, at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration.*
Nonlinear thinking means which way you should go depends on where you already are. This insight isn’t new. Already in Roman times we find Horace’s famous remark “Est modus in rebus, sunt certi denique fines, quos ultra citraque nequit consistere rectum” (“There is a proper measure in things. There are, finally, certain boundaries short of and beyond which what is right cannot exist”).
There’s nothing wrong with the Laffer curve—only with the uses people put it to. Wanniski and the politicians who followed his panpipe fell prey to the oldest false syllogism in the book:
It could be the case that lowering taxes will increase government revenue;
I want it to be the case that lowering taxes will increase government revenue;
Therefore, it is the case that lowering taxes will increase government revenue.
“Engagement with Iran? You probably also think Adolf Hitler was just misunderstood.” Why is this kind of reasoning so popular, when a moment’s thought reveals its wrongness?
One of the great joys of mathematics is the incontrovertible feeling that you’ve understood something the right way, all the way down to the bottom; it’s a feeling I haven’t experienced in any other sphere of mental life. And when you know how to do something the right way, it’s hard—for some stubborn people, impossible—to make yourself explain it the wrong way.
Before I was born, boys grew long hair and thus we were bound to get whipped by the Communists. When I was a kid, we played arcade games too much, which left us doomed to be outcompeted by the industrious Japanese. Now we eat too much fast food, and we’re all going to die weak and immobile, surrounded by empty chicken buckets, puddled into the couches from which we long ago became unable to hoist ourselves. The paper certified this anxiety as a fact proved by science.
But every curve, as we just learned from Newton, is pretty close to a line. That’s the idea that drives linear regression, the statistical technique that is to social science as the screwdriver is to home repair. It’s the one tool you’re pretty much definitely going to use, whatever the task. Every time you read in the newspaper that people with more cousins are happier, or that countries that have more Burger Kings have looser morals, or that halving your intake of niacin doubles your risk of athlete’s foot, or that every extra $10,000 of income makes you 3% more likely to vote Republican,*
…
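A note on the highlight above, not from the book: a minimal sketch of the "screwdriver" itself, a line fitted by least squares. The variable names and numbers below are invented for illustration and are not taken from the studies the passage alludes to.

```python
# Note, not from the book: a least-squares line fit with invented data.
import numpy as np

rng = np.random.default_rng(0)
income_10k = rng.uniform(2, 12, size=500)                        # hypothetical income, in units of $10,000
gop_prob = 0.30 + 0.03 * income_10k + rng.normal(0, 0.10, 500)   # hypothetical response, plus noise

# Least squares finds the line y = a + b*x closest to the cloud of points.
slope, intercept = np.polyfit(income_10k, gop_prob, deg=1)
print(f"fitted slope per $10,000 of income: {slope:+.3f}")       # recovers roughly +0.03
```

The fit itself is trivial to compute; the book's warning is about treating the line as if it kept going forever.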
One gets such wholesale returns of conjecture out of such a trifling investment of fact.
can’t go along with those reformists who want to throw out memorization of the multiplication table. When doing any serious mathematical thinking, you’re going to have to multiply 6 by 8 sometimes, and if you have to reach for your calculator each time you do that, you’ll never achieve the kind of mental flow that actual thinking requires. You can’t write a sonnet if you have to look up
“I don’t get the concept.” The ideas of mathematics can sound abstract, but they make sense only in reference to concrete computations. William Carlos Williams put it crisply: no ideas but in things.
I’m not as radical as that. In fact, I’m not radical at all. Dissatisfying as it may be to partisans, I think we have to teach a mathematics that values precise answers but also intelligent approximation, that demands the ability to deploy existing algorithms fluently but also the horse sense to work things out on the fly, that mixes rigidity with a sense of play. If we don’t, we’re not really teaching mathematics at all. It’s a tall order—but it’s what the best math teachers are doing, anyway,
This is lineocentrism in its purest form.
That’s how the Law of Large Numbers works: not by balancing out what’s already happened, but by diluting what’s already happened with new data, until the past is so proportionally negligible that it can safely be forgotten.
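A note on the highlight above, not from the book: a quick simulation of the "dilution" picture. The surplus of heads over tails is free to drift, but dividing by an ever larger number of tosses drags the proportion toward 1/2.

```python
# Note, not from the book: the Law of Large Numbers as dilution, not balancing.
import random

random.seed(1)
heads = 0
for n in range(1, 1_000_001):
    heads += random.random() < 0.5
    if n in (100, 10_000, 1_000_000):
        surplus = heads - n / 2                 # excess heads: not pulled back toward zero
        print(f"n={n:>9,}  surplus={surplus:+9.1f}  proportion={heads / n:.4f}")
```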
Matthew White,
The universe is big, and if you’re sufficiently attuned to amazingly improbable occurrences, you’ll find them. Improbable things happen a lot. It’s massively improbable to get hit by a lightning bolt, or to win the lottery; but these things happen to people all the time, because there are a lot of people in the world, and a lot of them buy lottery tickets, or go golfing in a thunderstorm, or both. Most coincidences lose their snap when viewed from the appropriate distance.
Aristotle, as usual, was here first: despite lacking any formal notion of probability, he was able to understand that “it is probable that improbable things will happen. Granted this, one might argue that what is improbable is probable.” Once you’ve truly absorbed this fundamental truth, the Baltimore stockbroker has no power over you. That the stockbroker handed you ten straight good stock picks is very unlikely; that he handed somebody such a good run of picks, given ten thousand chances, is not even remotely surprising. In the British statistician R. A. Fisher’s famous formulation, “the
…
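A note on the highlights above, not from the book: the stockbroker's trick in two lines of arithmetic, assuming each "pick" is a 50/50 guess. One guesser calling ten market moves correctly is rare; somebody out of ten thousand guessers doing it is nearly certain.

```python
# Note, not from the book: the Baltimore stockbroker by the numbers (assumed 50/50 guesses).
p_one = 0.5 ** 10                        # one broker nails ten picks: about 0.001
p_somebody = 1 - (1 - p_one) ** 10_000   # at least one of 10,000 random guessers does
print(f"single broker : {p_one:.4%}")
print(f"somebody      : {p_somebody:.3%}")   # essentially certain
```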
The joke, like all jokes, is a veiled attack: in this case, an attack on sloppy methodology among those neuroimaging researchers
the proportion of heads will almost inevitably approach 1/2, as if constrained by a narrowing channel. This is what’s called the frequentist view of probability.
when we say an outcome is improbable, we are always saying, explicitly or not, that it is improbable under some set of hypotheses we’ve made about the underlying mechanisms of the world.
R. A. Fisher, the founder of the modern practice of statistics,* in the early twentieth century.
null hypothesis
null hypothesis, in executive bullet-point form:
Run an experiment.
Suppose the null hypothesis is true, and let p be the probability (under that hypothesis) of getting results as extreme as those observed.
The number p is called the p-value.
If it is very small, rejoice; you get to say your results are statistically significant.
If it is large, concede that the null hypothesis has not been ruled out.
Fisher himself and is now widely adhered to, of taking p = 0.05, or 1/20, to be the threshold.
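A note on the two highlights above, not from the book: the recipe run end to end on toy data. The null hypothesis is a fair coin; the observation, 60 heads in 100 tosses, is invented; and p is estimated by simulating the null many times.

```python
# Note, not from the book: the p-value recipe on toy data (fair-coin null, invented observation).
import random

random.seed(3)
tosses, observed_heads = 100, 60
sims, extreme = 100_000, 0
for _ in range(sims):
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    extreme += abs(heads - tosses / 2) >= abs(observed_heads - tosses / 2)
p_value = extreme / sims
verdict = "statistically significant" if p_value < 0.05 else "null not ruled out"
print(f"p ≈ {p_value:.3f} -> {verdict}")
```

With these invented numbers p comes out around 0.057, so by the convention just quoted the null is not ruled out, while 61 heads out of 100 would have cleared the 0.05 bar.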
The world is so richly structured and so perfectly ordered—how tremendously unlikely it would be for there to be a world like this one, under the null hypothesis that there’s no primal designer who put the thing together!
But significance testing is not restricted to theological apologetics.
The contribution of R. A. Fisher was to make significance testing into a formal endeavor, a system by which the significance, or not, of an experimental result was a matter of objective fact.
A statistical study that’s not refined enough to detect a phenomenon of the expected size is called underpowered—the equivalent of looking at the planets with binoculars. Moons or no moons, you get the same result, so you might as well not have bothered. You don’t send binoculars to do a telescope’s job. The problem of low power is the flip side to the problem of the British birth control scare. A high-powered study, like the birth control trial, may lead you to burst a vein about a small effect that isn’t actually important. An underpowered one may lead you to wrongly dismiss a small effect
…
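A note on the highlight above, not from the book: "power" estimated by brute force, with invented numbers. Take a true effect that is real but small, a coin that lands heads 52% of the time, and ask how often studies of different sizes detect it at the 5% level; the small study is the binoculars, the big one the telescope.

```python
# Note, not from the book: statistical power by simulation, with invented numbers.
import math
import random

def power(n_tosses, true_p=0.52, sims=2000):
    """Fraction of simulated studies that reject the fair-coin null at the 5% level."""
    random.seed(4)
    rejections = 0
    for _ in range(sims):
        heads = sum(random.random() < true_p for _ in range(n_tosses))
        z = (heads - n_tosses / 2) / math.sqrt(n_tosses * 0.25)   # normal approximation to the null
        rejections += abs(z) > 1.96
    return rejections / sims

for n in (100, 1_000, 10_000):
    print(f"n = {n:>6,}: power ≈ {power(n):.0%}")   # binoculars at small n, telescope at large n
```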
International Journal of Haruspicy, which demands without exception that all published results clear the bar of statistical significance.
A recent paper in Psychological Science, a premier psychological journal, found that married women were significantly more likely to support Mitt Romney, the Republican presidential candidate, when they were in the fertile portion of their ovulatory cycle: of those women queried during their peak fertility period, 40.4% expressed support for Romney, while only 23.4% of the married women polled at infertile times were pulling the lever for Mitt.* The sample is small, just 228 women, but the difference is big, big enough that the result passes the p-value test with a score of .03. Which is just
…
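A note on the highlight above, not from the book: the mechanics of a two-proportion z-test, the kind of comparison a claim like 40.4% versus 23.4% rests on. The excerpt does not give the fertile/infertile split, so an even 114/114 split is assumed below; the paper's actual group sizes, not reproduced here, are what yield its reported p of .03, so this sketch shows only the procedure, not the published number.

```python
# Note, not from the book: a two-proportion z-test with an ASSUMED 114/114 split.
import math

n1, p1 = 114, 0.404   # assumed size of the fertile group, reported support
n2, p2 = 114, 0.234   # assumed size of the infertile group, reported support

pooled = (n1 * p1 + n2 * p2) / (n1 + n2)                      # support rate under "no difference"
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))     # standard error of the difference
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided normal tail
print(f"z = {z:.2f}, p ≈ {p_value:.3f}")
```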
a scientific field has a drastically distorted view of the evidence for a hypothesis when public dissemination is cut off by a statistical significance threshold.
If the confidence interval is [−0.5%, 0.5%], then the reason you didn’t get statistical significance is because you have good evidence the intervention doesn’t do anything.
If the confidence interval is [−20%, 20%], the reason you didn’t get statistical significance is because you have no idea whether the intervention has an effect, or in which direction it goes. Those two outcomes look the same from the viewpoint of statistical significance, but have quite different implications for what you should do next.
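A note on the two highlights above, not from the book: the same "not statistically significant" verdict coming out of two very different studies. All numbers are invented; the interval from a huge trial hugs zero, while the interval from a tiny one is too wide to say anything.

```python
# Note, not from the book: how sample size alone separates the two intervals. Numbers invented.
import math

def ci_for_difference(n_per_arm, p_treat, p_control):
    """95% confidence interval, in percentage points, for a difference of two proportions."""
    diff = p_treat - p_control
    se = math.sqrt(p_treat * (1 - p_treat) / n_per_arm + p_control * (1 - p_control) / n_per_arm)
    return 100 * (diff - 1.96 * se), 100 * (diff + 1.96 * se)

lo, hi = ci_for_difference(200_000, 0.100, 0.100)   # huge trial, no observed difference
print(f"n = 200,000 per arm: [{lo:+.2f}%, {hi:+.2f}%]  (good evidence of no meaningful effect)")
lo, hi = ci_for_difference(20, 0.100, 0.100)         # tiny trial, no observed difference
print(f"n = 20 per arm:      [{lo:+.2f}%, {hi:+.2f}%]  (no idea either way)")
```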
the problem of inference.* How to determine the truth from the evidence?
A significance test is no more or less than a rule, which
people in charge
What’s the purpose of a criminal trial? We might naively say it’s to find out whether the defendant actually committed the crime he’s on trial for. But that’s obviously wrong. There are rules of evidence, which forbid the jury from hearing testimony obtained improperly, even if it might help them accurately determine the defendant’s innocence or guilt. The purpose of a court is not truth, but justice.
For Neyman and Pearson, science is like the court. When a drug fails a significance test, we don’t say, “We are quite certain the drug didn’t work,” but merely, “The drug wasn’t shown to work.” And then dismiss it, just as we would a defendant whose presence at the crime scene couldn’t be established within reasonable doubt, even if every man and woman in the courthouse thinks he’s guilty as sin.
Fisher wanted none of this—for him, Neyman and Pearson stunk of pure mathematics, insisting on an austere rationalism at the expense of anything resembling scientific practice.
The significance test is the detective, not the judge.
“The finding is quite interesting, and suggests that more research in this direction is needed”? And how you don’t really even read that part because you think of it as an obligatory warning without content? Here’s the thing—the reason scientists always say that is because it’s important and it’s true!
Fisherian standard of “rarely fails.”