Kindle Notes & Highlights
Read between August 20 and August 29, 2021
“What is best in mathematics deserves not merely to be learnt as a task, but to be assimilated as a part of daily thought, and brought again and again before the mind with ever-renewed encouragement.”
This answer is seldom satisfying to the student. That’s because it’s a lie. And the teacher and the student both know it’s a lie.
“Mathematics is not just a sequence of computations to be carried out by rote until your patience or stamina runs out—although it might seem that way from what you’ve been taught in courses called mathematics. Those integrals are to mathematics as weight training and calisthenics are to soccer. If you want to play soccer—I mean, really play, at a competitive level—you’ve got to do a lot of boring, repetitive, apparently pointless drills. Do professional players ever use those drills? Well, you won’t see anybody on the field curling a weight or zigzagging between traffic cones. But you do see …
The mathematical talent at hand was equal to the gravity of the task. In Wallis’s words, the SRG [the Statistical Research Group] was “the most extraordinary group of statisticians ever organized, taking into account both number and quality.”
A mathematician is always asking, “What assumptions are you making? And are they justified?” This can be annoying. But it can also be very productive. In this case, the officers were making an assumption unwittingly: that the planes that came back were a random sample of all the planes.
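A quick simulation makes the officers' hidden assumption visible (a hypothetical sketch, with invented loss probabilities, not the SRG's actual data): if engine hits are the deadliest, the planes that make it home will show the fewest engine hits, even when damage lands everywhere equally.

```python
import random

# Hypothetical illustration of survivorship bias; all numbers are invented.
SECTIONS = ["fuselage", "wings", "engine", "tail"]
LOSS_PROB = {"fuselage": 0.1, "wings": 0.1, "engine": 0.8, "tail": 0.2}

random.seed(0)
returned_hits = {s: 0 for s in SECTIONS}
for _ in range(10_000):
    hit = random.choice(SECTIONS)          # damage lands uniformly...
    if random.random() > LOSS_PROB[hit]:   # ...but only some planes come home
        returned_hits[hit] += 1

print(returned_hits)
# Engine hits are scarce among *returning* planes precisely because they are
# deadly: the "random sample" the officers saw was filtered by survival.
```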
But it also lets you see the common skeleton shared by problems that look very different on the surface. Thus you have meaningful experience even in areas where you appear to have none.
What makes that math? Isn’t it just common sense? Yes. Mathematics is common sense. On some basic level, this is clear.
To paraphrase Clausewitz: Mathematics is the extension of common sense by other means. Without the rigorous structure that math provides, common sense can lead you astray. That’s what happened to the officers who wanted to armor the parts of the planes that were already strong enough. But formal mathematics without common sense—without the constant interplay between abstract reasoning and our intuitions about quantity, time, space, motion, behavior, and uncertainty—would just be a sterile exercise in rule-following and bookkeeping. In other words, math would actually be what the peevish …
Some principle more complicated than “More government bad, less government good” is in effect.
Nonlinear thinking means which way you should go depends on where you already are.
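The chapter's running example here is the Laffer curve; a toy stand-in (the curve below is invented for illustration, not taken from the book) makes the slogan concrete: the same tax cut raises revenue on one side of the peak and lowers it on the other.

```python
def revenue(rate):
    # Invented toy Laffer-style curve: zero revenue at 0% and at 100%.
    return rate * (1 - rate)

for rate in (0.20, 0.80):
    cut = rate - 0.05
    verdict = "raises" if revenue(cut) > revenue(rate) else "lowers"
    print(f"at a {rate:.0%} rate, cutting to {cut:.0%} {verdict} revenue")
# Which way you should go depends on where you already are.
```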
(Aside: it’s important to point out here that people with out-of-the-mainstream ideas who compare themselves to Edison and Galileo are never actually right. I get letters with this kind of language at least once a month, usually from people who have “proofs” of mathematical statements that have been known for hundreds of years to be false. I can guarantee you Einstein did not go around telling people, “Look, I know this theory of general relativity sounds wacky, but that’s what they said about Galileo!”)
It’s hard to defend Cauchy’s stance on pedagogical grounds. But I’m sympathetic with him anyway. One of the great joys of mathematics is the incontrovertible feeling that you’ve understood something the right way, all the way down to the bottom; it’s a feeling I haven’t experienced in any other sphere of mental life. And when you know how to do something the right way, it’s hard—for some stubborn people, impossible—to make yourself explain it the wrong way.
That’s the idea that drives linear regression, the statistical technique that is to social science as the screwdriver is to home repair.
Working an integral or performing a linear regression is something a computer can do quite effectively. Understanding whether the result makes sense—or deciding whether the method is the right one to use in the first place—requires a guiding human hand. When we teach mathematics we are supposed to be explaining how to be that guide. A math course that fails to do so is essentially training the student to be a very slow, buggy version of Microsoft Excel.
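The regression itself really is a one-liner; everything around it is the human part. A minimal sketch (assuming numpy, with made-up data):

```python
import numpy as np

# Made-up data; the line-fitting below is the part a machine does well.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)  # least-squares straight line
print(f"y = {slope:.2f}x + {intercept:.2f}")

# The guiding-hand questions no routine answers: is a straight line even the
# right model here, and does extrapolating beyond the data make any sense?
```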
And let’s be frank: that really is what many of our math courses are doing. To make a long, contentious story short (but still contentious), the teaching of mathematics to children has for decades now been the arena of the so-called math wars.
If we settle on a vision of mathematics that consists of “getting the answer right” and no more, and test for that, we run the risk of creating students who test very well but know no mathematics at all.
To see what’s going on, let’s play an imaginary game. The game is called who’s the best at flipping coins. It’s pretty simple. You flip a bunch of coins and whoever gets the most heads wins. To make this a little more interesting, though, not everybody has the same number of coins. Some people—Team Small—have only ten coins, while the members of Team Big have a hundred each. If we score by absolute number of heads, one thing’s for almost sure—the winner of this game is going to come from Team Big. The typical Big player is going to get around 50 heads, a figure none of the Smalls can possibly …
That something is the cold, strong hand of the Law of Large Numbers.
But not every ranking system has the quantitative savvy to make allowances for the Law of Large Numbers.
It sounds like small schools, where teachers really know the students and their families and have time to deliver individualized instruction, are better at raising test scores.
The reason small schools dominate the top twenty-five isn’t because small schools are better, but because small schools have more variable test scores.
If you’re an executive managing a lot of teams, how can you accurately assess performance when the smaller teams are more likely to predominate at both the top and bottom tiers of your rankings?
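A simulation of the coin game above suggests the answer (a sketch with the team sizes from the text and arbitrary player counts): rank by proportion of heads and Team Small crowds both ends of the list, not because its players differ, but because ten flips swing more than a hundred.

```python
import random

random.seed(1)

def share_of_heads(n_coins):
    return sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins

players = ([("Small", share_of_heads(10)) for _ in range(1000)] +
           [("Big", share_of_heads(100)) for _ in range(1000)])
players.sort(key=lambda p: p[1], reverse=True)

print("top 25:   ", [t for t, _ in players[:25]].count("Small"), "from Team Small")
print("bottom 25:", [t for t, _ in players[-25:]].count("Small"), "from Team Small")
# Small samples produce the extreme proportions at both ends of the ranking.
```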
That’s how the Law of Large Numbers works: not by balancing out what’s already happened, but by diluting what’s already happened with new data, until the past is so proportionally negligible that it can safely be forgotten.
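Dilution rather than compensation can be seen directly (a sketch assuming a fair coin): start a coin off with ten straight heads and keep flipping. The excess of heads over half is never pulled back toward zero; the proportion still homes in on one half because the streak is swamped by new flips.

```python
import random

random.seed(2)
heads, flips = 10, 10  # pretend the first ten flips all came up heads

for total in (100, 1_000, 100_000):
    while flips < total:
        heads += random.random() < 0.5  # a fair coin from here on
        flips += 1
    print(f"{flips:>7} flips: {heads - flips / 2:+9.1f} excess heads, "
          f"proportion {heads / flips:.4f}")
# The absolute excess wanders rather than shrinking to zero, but the early
# streak becomes a negligible share of an ever-growing denominator.
```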
Don’t talk about percentages of numbers when the numbers might be negative.
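A worked illustration of why (all numbers invented): once some of the parts are negative, individual "shares" of the net can exceed 100% while still summing correctly.

```python
# Invented job numbers: percentages of a net change misbehave with negatives.
changes = {"sector A": +100_000, "sector B": -82_000, "sector C": +9_000}
net = sum(changes.values())  # a net gain of 27,000

for name, delta in changes.items():
    print(f"{name}: {delta:+,} jobs = {delta / net:.0%} of net growth")
# Sector A alone "accounts for" 370% of all growth and sector C for 33%,
# because sector B's losses shrink the denominator. True arithmetic, no meaning.
```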
But that makes the claim “true” only in a very weak sense. It’s as if the Obama campaign had released a statement saying, “Mitt Romney has never denied allegations that for years he’s operated a bicontinental cocaine-trafficking ring in Colombia and Salt Lake City.” That statement is also 100% true! But it’s designed to create a false impression. So “true but false” is a pretty fair assessment. It’s the right answer to the wrong question. Which makes it worse, in a way, than a plain miscalculation.
Dividing one number by another is mere computation; figuring out what you should divide by what is mathematics.
“We conclude that the proximity of ELSs with related meanings in the Book of Genesis is not due to chance.”
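For context, an ELS (equidistant letter sequence) is simply the string you get by reading every k-th letter of a text. A minimal extractor (my own sketch; the Witztum study worked on the Hebrew text of Genesis):

```python
def els(text, start, skip, length):
    """Read `length` letters from `text`, starting at `start`, every `skip`-th letter."""
    letters = [c for c in text.lower() if c.isalpha()]
    return "".join(letters[start::skip][:length])

print(els("generations of searching for hidden messages", 0, 7, 5))
# Almost always gibberish, which is the point: with enough choices of start,
# skip, and source text, *something* legible will eventually turn up.
```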
If the method could be used, even in principle, to induce doubt as to the basic laws of the faith, it was about as authentically Jewish as a bacon cheeseburger.
When you sink your savings into the incubated fund with the eye-popping returns, you’re like the newsletter getter who invests his life savings with the Baltimore stockbroker; you’ve been swayed by the impressive results, but you don’t know how many chances the broker had to get those results.
It’s a lot like playing Scrabble with my eight-year-old son. If he’s unsatisfied with the letters he pulls from the bag, he dumps them back in and draws again, repeating this process until he gets letters he likes. In his view this is perfectly fair; after all, he’s closing his eyes, so he has no way of knowing what letters he’s going to draw! But if you give yourself enough chances, you’ll eventually come across that Z you’re waiting for. And it’s not because you’re lucky; it’s because you’re cheating.
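The Scrabble bag and the Baltimore stockbroker are the same arithmetic; a sketch (parameters invented): mail ten up-or-down market calls, each effectively a coin flip, to enough people and someone must end up holding a perfect record.

```python
import random

random.seed(3)
recipients, predictions = 10_000, 10  # ten weekly up/down calls per recipient

perfect = sum(
    all(random.random() < 0.5 for _ in range(predictions))
    for _ in range(recipients)
)
print(f"{perfect} of {recipients:,} recipients saw ten straight correct calls")
# About recipients / 2**10, roughly ten people, got a flawless newsletter by
# chance alone. Each sees a miracle; the broker sees the denominator.
```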
improbable things happen a lot. Aristotle, as usual, was here first: despite lacking any formal notion of probability, he was able to understand that “it is probable that improbable things will happen. Granted this, one might argue that what is improbable is probable.”
Rather, McKay and Bar-Natan are making a potent point about the power of wiggle room. Wiggle room is what the Baltimore stockbroker has when he gives himself plenty of chances to win; wiggle room is what the mutual fund company has when it decides which of its secretly incubating funds are winners and which are trash. Wiggle room is what McKay and Bar-Natan used to work up a list of rabbinical names that jibed well with War and Peace. When you’re trying to draw reliable inferences from improbable events, wiggle room is the enemy.
The miracle, if there is one, is that Witztum and his colleagues were moved to choose precisely those versions of the names on which the Torah scores best.
Or, to put it another way: if we now feel comfortable rejecting the conclusions of the Witztum study, what does that say about the reliability of our standard statistical tests? It says you ought to be a little worried about them.
The really surprising result of Bennett’s paper isn’t that one or two voxels in a dead fish passed a statistical test; it’s that a substantial proportion of the neuroimaging articles he surveyed didn’t use statistical safeguards (known as “multiple comparisons correction”) that take into account the ubiquity of the improbable. Without those corrections, scientists are at serious risk of running the Baltimore stockbroker con, not only on their colleagues but on themselves.
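The arithmetic behind the correction, in a sketch (standard-library only; under a true null hypothesis a p-value is uniform on [0, 1] by construction, so simulated noise can stand in for data):

```python
import random

random.seed(4)
n_voxels = 8_000  # an arbitrary stand-in for the thousands of voxels in a scan
alpha = 0.05

p_values = [random.random() for _ in range(n_voxels)]  # pure-noise p-values

naive = sum(p < alpha for p in p_values)
bonferroni = sum(p < alpha / n_voxels for p in p_values)  # corrected threshold

print(f"uncorrected: {naive} 'active' voxels in pure noise")
print(f"Bonferroni:  {bonferroni} voxels survive the corrected threshold")
# Roughly 5% of noise passes the naive test; dividing the threshold by the
# number of comparisons is the simplest of the multiple-comparisons fixes.
```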
when we say an outcome is improbable, we are always saying, explicitly or not, that it is improbable under some set of hypotheses we’ve made about the underlying mechanisms of the world.
If you flip a coin 82 times and get 82 heads, you ought to be thinking, “Something is biased about this coin,” not “God loves heads.”*
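The numbers behind that instinct: under the fair-coin hypothesis, 82 straight heads has probability 2^-82, about 2 × 10^-25.

```python
p = 0.5 ** 82
print(f"P(82 straight heads | fair coin) = {p:.2e}")  # ≈ 2.07e-25
# The sensible move is not to credit a heads-loving deity but to drop the
# hypothesis under which the observation was absurd: the coin isn't fair.
```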
In more than three-quarters of those simulations, the significance test used by GVT reported that there was no reason to reject the null hypothesis—even though the null hypothesis was completely false. The GVT design was underpowered, destined to report the nonexistence of the hot hand even if the hot hand was real.
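What "underpowered" looks like, in a toy model (my own sketch, not GVT's design or data): give a simulated shooter a genuinely hot hand, ten percentage points of it, and a crude significance test on 100 shots still misses it most of the time.

```python
import random

random.seed(5)

def shoot(n, base=0.45, boost=0.10):
    """n shots with a real hot hand: P(make) is higher right after a make."""
    shots, prev = [], False
    for _ in range(n):
        make = random.random() < base + boost * prev
        shots.append(make)
        prev = make
    return shots

def detects_hot_hand(shots, z_crit=1.96):
    """Crude two-proportion z-test: make-after-make vs. make-after-miss."""
    after_make = [b for a, b in zip(shots, shots[1:]) if a]
    after_miss = [b for a, b in zip(shots, shots[1:]) if not a]
    if not after_make or not after_miss:
        return False
    p1 = sum(after_make) / len(after_make)
    p2 = sum(after_miss) / len(after_miss)
    pooled = (sum(after_make) + sum(after_miss)) / (len(after_make) + len(after_miss))
    se = (pooled * (1 - pooled) * (1 / len(after_make) + 1 / len(after_miss))) ** 0.5
    return se > 0 and (p1 - p2) / se > z_crit

trials = 2_000
power = sum(detects_hot_hand(shoot(100)) for _ in range(trials)) / trials
print(f"hot hand is real in every trial, detected in {power:.0%} of them")
# Failing to reject the null here says little: the test lacks the power to see
# an effect of this size in a sample this small.
```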
The short life of the hot hand, which makes it so hard to disprove, makes it just as hard to reliably detect.
Any regular hoops watcher will routinely see one player or another sink five shots in a row. Most of the time, surely, this is due to some combination of indifferent defense, wise shot selection, or, most likely of all, plain good luck, not a sudden burst of basketball transcendence. Which means there’s no reason to expect a guy who’s just hit five in a row to be particularly likely to make the next one.
the reductio ad unlikely, unlike its Aristotelian ancestor, is not logically sound in general.
But impossible and improbable are not the same—not even close. Impossible things never happen. But improbable things happen a lot. That means we’re on quivery logical footing when we try to make inferences from an improbable observation, as reductio ad unlikely asks us to.
But it’s all worth it for those moments of discovery, where everything works, and you find that the texture and protrusions of the liver really do predict the severity of the following year’s flu season, and, with a silent thank-you to the gods, you publish. You might find this happens about one time in twenty.
And yet there are hundreds of haruspices, and thousands of ripped-open sheep, and even one in twenty divinations provides plenty of material to fill each issue of the journal with novel results, demonstrating the efficacy of the methods and the wisdom of the gods.
there’s probably a lot more entrail reading in the sciences than we’d like to admit.
This is the so-called file drawer problem—a scientific field has a drastically distorted view of the evidence for a hypothesis when public dissemination is cut off by a statistical significance threshold. But we’ve already given the problem another name. It’s the Baltimore stockbroker.
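The file drawer in miniature (a sketch; every simulated experiment below tests an effect that does not exist, so its p-value is uniform on [0, 1]):

```python
import random

random.seed(6)
experiments = 1_000  # all of them test true nulls

p_values = [random.random() for _ in range(experiments)]
published = [p for p in p_values if p < 0.05]  # the rest go in the drawer

print(f"{len(published)} of {experiments} null experiments reach p < .05")
print(f"the literature shows {len(published)} 'findings' and zero failures")
# Readers see only the survivors; the field is both the Baltimore stockbroker
# and the newsletter recipient at once.
```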
When the scientific community file-drawers its failed experiments, it plays both parts at once. It’s running the con on itself.
The p-hackers truly believe in their hypotheses, just as the Bible coders do, and when you’re a believer, it’s easy to come up with reasons that the analysis that gives a publishable p-value is the one you should have done in the first place.
It’s clear that it’s wrong to use “p < .05” as a synonym for “true” and “p > .05” to mean “false.”
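A sketch of how that belief cashes out in practice (all data below is noise, and the eight "analysis variants" are an invented stand-in, treated as independent for clarity): keep the best p-value from several defensible analyses and the nominal 5% error rate climbs to about a third.

```python
import random

random.seed(7)

def best_p(analysis_variants=8):
    """One all-noise study: each defensible analysis choice yields its own
    null p-value (uniform on [0, 1]); the p-hacker reports the smallest."""
    return min(random.random() for _ in range(analysis_variants))

studies = 10_000
rate = sum(best_p() < 0.05 for _ in range(studies)) / studies
print(f"false-positive rate with 8 tries per study: {rate:.1%}")
# Approximately 1 - 0.95**8, about 34%. Real analysis variants are correlated,
# which softens the inflation but does not remove it.
```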