How Not To Be Wrong: The Hidden Maths of Everyday Life, by Jordan Ellenberg
1%
Math is a science of not being wrong about things, its techniques and habits hammered out by centuries of hard work and argument.
2%
Wald’s insight was simply to ask: where are the missing holes? The ones that would have been all over the engine casing, if the damage had been spread equally all over the plane? Wald was pretty sure he knew. The missing bullet holes were on the missing planes. The reason planes were coming back with fewer hits to the engine is that planes that got hit in the engine weren’t coming back. Whereas the large number of planes returning to base with a thoroughly Swiss-cheesed fuselage is pretty strong evidence that hits to the fuselage can (and therefore should) be tolerated.
2%
To a mathematician, the structure underlying the bullet hole problem is a phenomenon called survivorship bias.
3%
Mathematics is the study of things that come out a certain way because there is no other way they could possibly be.
3%
But calculus is still derived from our common sense—Newton took our physical intuition about objects moving in straight lines, formalized it, and then built on top of that formal structure a universal mathematical description of motion.
3%
To paraphrase Clausewitz: Mathematics is the extension of common sense by other means.
4%
Mitchell’s reasoning is an example of false linearity—he’s assuming, without coming right out and saying so, that the course of prosperity is described by the line segment in the first picture, in which case Sweden stripping down its social infrastructure means we should do the same.
5%
Nonlinear thinking means which way you should go depends on where you already are. This insight isn’t new. Already in Roman times we find Horace’s famous remark “Est modus in rebus, sunt certi denique fines, quos ultra citraque nequit consistere rectum” (“There is a proper measure in things. There are, finally, certain boundaries short of and beyond which what is right cannot exist”).
5%
in the real world—the government does take in some amount of revenue
5%
If you’re to the right of the Laffer peak, and you want to decrease the deficit without cutting spending, there’s a simple and politically peachy solution: lower the tax rate, and thereby increase the amount of taxes you take in. Which way you should go depends on where you are.
6%
A basic rule of mathematical life: if the universe hands you a hard problem, try to solve an easier one instead, and hope the simple version is close enough to the original problem that the universe doesn’t object.
7%
The great insight of Eudoxus and Archimedes was that it doesn’t matter whether it’s a circle or a polygon with very many very short sides. The two areas will be close enough for any purpose you might have in mind. The area of the little fringe between the circle and the polygon has been “exhausted” by our relentless iteration. The circle has a curve to it, that’s true. But every tiny little piece of it can be well approximated by a perfectly straight line, just as the tiny little patch of the earth’s surface we stand on is well approximated by a perfectly flat plane.* The slogan to keep in …
9%
Mathematicians before Cauchy asked not, “How shall we define 1 − 1 + 1 − 1 + … ?” but “What is 1 − 1 + 1 − 1 + … ?” and this habit of mind led them into unnecessary perplexities and controversies which were often really verbal.
10%
The danger of overemphasizing algorithms and precise computations is that algorithms and precise computations are easy to assess. If we settle on a vision of mathematics that consists of “getting the answer right” and no more, and test for that, we run the risk of creating students who test very well but know no mathematics at all. This might be satisfying to those whose incentives are driven by test scores foremost and only, but it is not satisfying to me.
11%
An important rule of mathematical hygiene: when you’re field-testing a mathematical method, try computing the same thing several different ways. If you get several different answers, something’s wrong with your method.
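This rule of hygiene is easy to act on in code. A minimal sketch (the variance example and all names are mine, not the book's): compute a data set's variance by two algebraically equivalent routes and check that they agree.

```python
import math
import random

def variance_two_pass(xs):
    # First pass for the mean, second pass for squared deviations.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_shortcut(xs):
    # The equivalent E[x^2] - (E[x])^2 form.
    m = sum(xs) / len(xs)
    return sum(x * x for x in xs) / len(xs) - m * m

random.seed(0)
data = [random.gauss(10.0, 2.0) for _ in range(1_000)]
a, b = variance_two_pass(data), variance_shortcut(data)
assert math.isclose(a, b, rel_tol=1e-9)  # the field test passes
```

Had the two answers disagreed badly, that would flag either a bug or a numerically unstable method.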
12%
Law of Large Numbers. I won’t state that theorem precisely (though it is stunningly handsome!), but you can think of it as saying the following: the more coins you flip, the more and more extravagantly unlikely it is that you’ll get 80% heads.
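The claim in this highlight can be checked exactly with the binomial distribution; a small sketch (the sample sizes 10 and 100 are my choice):

```python
import math

def prob_at_least_heads(n, frac=0.8):
    # Exact probability that n fair-coin flips come up at least frac*n heads.
    k_min = math.ceil(frac * n)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

p10 = prob_at_least_heads(10)    # about 0.055: quite possible
p100 = prob_at_least_heads(100)  # under one in a billion
assert p100 < p10 / 1_000_000
```

Ten flips land on 80% heads about one time in eighteen; a hundred flips essentially never do.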
12%
The understanding that the results of an experiment tend to settle down to a fixed average when the experiment is repeated again and again is not new.
12%
Measuring the absolute number of brain cancer deaths is biased toward the big states; but measuring the highest rates—or the lowest ones!—puts the smallest states in the lead.
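The mechanism behind this highlight is sampling noise: the standard error of an observed rate shrinks like one over the square root of the population, so small states swing to both extremes. A sketch with invented numbers:

```python
import math

p = 0.01  # an assumed true rate, identical in every state

def standard_error(pop):
    # Standard deviation of the observed rate in a state of this size.
    return math.sqrt(p * (1 - p) / pop)

se_small = standard_error(1_000)    # ~0.0031: observed rate swings widely
se_big = standard_error(1_000_000)  # ~0.0001: observed rate hugs the truth
assert se_small > 10 * se_big
```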
Note (Sanchit): Law of Large Numbers simplified
13%
That’s how the Law of Large Numbers works: not by balancing out what’s already happened, but by diluting what’s already happened with new data, until the past is so proportionally negligible that it can safely be forgotten.
14%
a partially ordered set. That’s a fancy way of saying that some pairs of disasters can be meaningfully compared, and others cannot.
14%
Don’t talk about percentages of numbers when the numbers might be negative.
16%
Dividing one number by another is mere computation; figuring out what you should divide by what is mathematics.
16%
“equidistant letter sequence,” henceforth ELS,
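An equidistant letter sequence is trivial to extract mechanically, which is part of why so many can be hunted for; a toy sketch (the example string is mine, not a Torah-codes case):

```python
def els(text, start, skip, length):
    # Keep only the letters, then read every skip-th one starting from `start`.
    letters = [c for c in text.lower() if c.isalpha()]
    return "".join(letters[start::skip][:length])

assert els("abcdefghij", 1, 2, 4) == "bdfh"
```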
16%
In Hebrew, numbers can be recorded in alphabetic characters, so the birth and death dates of the rabbis provided more letter sequences to play with.
17%
What does this mean for you, if you’re fortunate enough to have some money to invest? It means you’re best off resisting the lure of the hot new fund that made 10% over the last twelve months. Better to follow the deeply unsexy advice you’re probably sick of hearing, the “eat your vegetables and take the stairs” of financial planning: instead of hunting for a magic system or an advisor with a golden touch, put your money in a big dull low-fee index fund and forget about it. When you sink your savings into the incubated fund with the eye-popping returns, you’re like the newsletter getter who …
18%
The universe is big, and if you’re sufficiently attuned to amazingly improbable occurrences, you’ll find them. Improbable things happen a lot.
18%
In the British statistician R. A. Fisher’s famous formulation, “the ‘one chance in a million’ will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us.”
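Fisher's point is a one-line calculation: give a one-in-a-million event enough independent chances and its appearance becomes near-certain. A quick sketch (the ten-million figure is my illustration):

```python
p = 1e-6  # a "one chance in a million" event

def prob_at_least_once(n_chances):
    # Complement of the event never happening in n independent chances.
    return 1 - (1 - p) ** n_chances

assert prob_at_least_once(1) < 1e-5           # to you, today: negligible
assert prob_at_least_once(10_000_000) > 0.99  # across ten million tries: near-certain
```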
18%
Wiggle room is what the Baltimore stockbroker has when he gives himself plenty of chances to win; wiggle room is what the mutual fund company has when it decides which of its secretly incubating funds are winners and which are trash. Wiggle room is what McKay and Bar-Natan used to work up a list of rabbinical names that jibed well with War and Peace. When you’re trying to draw reliable inferences from improbable events, wiggle room is the enemy.
19%
The point of Bennett’s paper is to warn that the standard methods of assessing results, the way we draw our thresholds between a real phenomenon and random static, come under dangerous pressure in this era of massive data sets, effortlessly obtained. We need to think very carefully about whether our standards for evidence are strict enough, if the empathetic salmon makes the cut.
19%
The more chances you give yourself to be surprised, the higher your threshold for surprise had better be.
21%
The “does nothing” scenario is called the null hypothesis. That is, the null hypothesis is the hypothesis that the intervention you’re studying has no effect. If you’re the researcher who developed the new drug, the null hypothesis is the thing that keeps you up at night. Unless you can rule it out, you don’t know whether you’re on the trail of a medical breakthrough or just barking up the wrong metabolic pathway.
21%
So here’s the procedure for ruling out the null hypothesis, in executive bullet-point form: Run an experiment. Suppose the null hypothesis is true, and let p be the probability (under that hypothesis) of getting results as extreme as those observed. The number p is called the p-value. If it is very small, rejoice; you get to say your results are statistically significant. If it is large, concede that the null hypothesis has not been ruled out.
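The bullet-point procedure can be made concrete with a coin experiment of my own devising: observe 60 heads in 100 flips and compute, under the fair-coin null hypothesis, the probability of a result at least that extreme.

```python
import math

def p_value_two_sided(n, heads):
    # Under the null (fair coin), sum the probabilities of all outcomes
    # at least as far from n/2 as the observed count.
    observed_dev = abs(heads - n / 2)
    extreme = sum(math.comb(n, k) for k in range(n + 1)
                  if abs(k - n / 2) >= observed_dev)
    return extreme / 2 ** n

p = p_value_two_sided(100, 60)  # about 0.057
assert p > 0.05                 # the null survives, barely: no rejoicing
assert p_value_two_sided(100, 70) < 0.001  # 70 heads would reject it decisively
```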
22%
When we’re testing the effect of a new drug, the null hypothesis is that there is no effect at all; so to reject the null hypothesis is merely to make a judgment that the effect of the drug is not zero. But the effect could still be very small—so small that the drug isn’t effective in any sense that an ordinary non-mathematical Anglophone would call significant.
23%
For Skinner, a theory of mind just was a theory of behavior, and the interesting projects for psychologists thus did not concern thoughts or feelings at all, but rather the manipulation of behavior by means of reinforcement.
23%
Watson held that scientists were in the business of observing the results of experiments, and only that; there was no room for hypotheses about consciousness or souls. “No one has ever touched a soul or seen one in a test-tube,”
23%
Under the null hypothesis, the frequency with which initial sounds appeared multiple times in the same line would be unchanged if the words were put in a sack, shaken up, and laid out again in random order. And this is just what Skinner found in his sample of a hundred sonnets. Shakespeare failed the significance test. Skinner writes: “In spite of the seeming richness of alliteration in the sonnets, there is no significant evidence of a process of alliteration in the behavior of the poet to which any serious attention should be given. So far as this aspect of poetry is concerned, Shakespeare …
25%
Assuming the truth of something we quietly believe to be false is a time-honored method of argument that goes all the way back to Aristotle; it is the proof by contradiction, or reductio ad absurdum. The reductio is a kind of mathematical judo, in which we first affirm what we wish eventually to deny, with the plan of throwing it over our shoulder and defeating it by means of its own force.
26%
If you’re committed to the view that a highly improbable outcome should lead you to question the fairness of the game, you’re going to be the person shooting off an angry e-mail to the lottery commissioner every Thursday of your life, no matter which numbered balls drop out of the cage.
26%
Among the first N numbers, about N/log N are prime; this is the Prime Number Theorem, proven at the end of the nineteenth century by the number theorists Jacques Hadamard and Charles-Jean de la Vallée Poussin.
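The N/log N estimate is easy to test numerically; a sketch with a simple sieve (the cutoff of one million is my choice):

```python
import math

def count_primes(n):
    # Sieve of Eratosthenes.
    if n < 2:
        return 0
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return sum(sieve)

N = 1_000_000
actual = count_primes(N)     # 78,498 primes below a million
estimate = N / math.log(N)   # about 72,382
assert abs(actual / estimate - 1) < 0.1  # within 10%; the ratio tends to 1
```

Note that log here is the natural logarithm, not the base-10 "number of digits" version introduced later.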
26%
The logarithm of a positive number N, called log N, is the number of digits it has.
26%
The flogarithm (whence also the logarithm) is a very slowly growing function indeed: the flogarithm of a thousand is 4, the flogarithm of a million, a thousand times greater, is 7, and the flogarithm of a billion is still only 10.
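The "flogarithm" (Ellenberg's fake logarithm, the number of digits) is one line of code, and it tracks the true base-10 logarithm closely:

```python
import math

def flogarithm(n):
    # Ellenberg's "fake logarithm": just count the digits.
    return len(str(n))

assert flogarithm(1_000) == 4
assert flogarithm(1_000_000) == 7
assert flogarithm(1_000_000_000) == 10
# The true base-10 logarithm is never more than 1 below the flogarithm:
assert 0 < flogarithm(12345) - math.log10(12345) <= 1
```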
27%
But what Zhang proved is that there are infinitely many pairs of primes that differ by at most 70 million. In other words, that the gap between one prime and the next is bounded by 70 million infinitely often—thus, the “bounded gaps” conjecture.
27%
The Goldbach conjecture, that every even number greater than 2 is the sum of two primes, is another one that would have to be true if primes behaved like random numbers.
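Spot-checking Goldbach is a classic exercise; a sketch that verifies it for every even number up to 1,000 (the conjecture itself, of course, remains open):

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    # Return some pair of primes summing to the even number n, or None.
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

assert all(goldbach_pair(n) for n in range(4, 1001, 2))
assert goldbach_pair(28) == (5, 23)
```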
27%
The most famous of all is the conjecture made by Pierre de Fermat in 1637, which asserted that the equation An + Bn = Cn has no solutions with A, B, C, and n positive whole numbers with n greater than 2.
28%
In other words, a study that accurately measures the effect of a gene is likely to be rejected as statistically insignificant, while any result that passes the p < .05 test is either a false positive or a true positive that massively overstates the gene’s effect.
28%
But noise is just as likely to push you in the opposite direction from the real effect as it is to tell the truth. So we’re left in the dark by a result that offers plenty of statistical significance but very little confidence. Scientists call this problem “the winner’s curse,” and it’s one reason that impressive and loudly touted experimental results often melt into disappointing sludge when the experiments are repeated.
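The winner's curse is easy to reproduce in simulation (all numbers here are invented for illustration): a small true effect, noisy studies, and publication only of results clearing a significance-style threshold.

```python
import random
import statistics

random.seed(3)
TRUE_EFFECT = 0.1   # the real, modest effect
NOISE = 1.0         # standard error of a single study
N_STUDIES = 10_000

estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]
# Suppose only "significant" studies (estimate above 1.96 standard errors)
# ever see print.
published = [e for e in estimates if e > 1.96 * NOISE]

# Every published estimate exceeds 1.96, so the published average
# overstates the true effect many times over.
assert statistics.mean(published) > 10 * TRUE_EFFECT
```

Repeating any one of the "published" experiments would, on average, return an estimate near the humble true effect: the disappointing sludge.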
29%
file drawer problem—a scientific field has a drastically distorted view of the evidence for a hypothesis when public dissemination is cut off by a statistical significance threshold. But we’ve already given the problem another name. It’s the Baltimore stockbroker.
30%
The confidence interval is the range of hypotheses that the reductio doesn’t demand that you trash, the ones that are reasonably consistent with the outcome you actually observed.
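That definition can be implemented literally: keep every hypothesis the data do not reject. A sketch for a coin observed to land heads 60 times in 100 flips (the numbers are mine):

```python
import math

def binom_cdf(n, k, p):
    # P(X <= k) for X ~ Binomial(n, p).
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def survives_reductio(n, heads, p, alpha=0.05):
    # Keep hypothesis p unless the observed count sits in one of its alpha/2 tails.
    lower = binom_cdf(n, heads, p)          # P(X <= heads)
    upper = 1 - binom_cdf(n, heads - 1, p)  # P(X >= heads)
    return min(lower, upper) > alpha / 2

# After 60 heads in 100 flips, a fair coin is still (just) consistent...
assert survives_reductio(100, 60, 0.5)
# ...but a coin with heads-probability 0.4 is not.
assert not survives_reductio(100, 60, 0.4)
```

The 95% confidence interval is then exactly the set of values of p that survive.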
30%
For Neyman and Pearson, the purpose of statistics isn’t to tell us what to believe, but to tell us what to do. Statistics is about making decisions, not answering questions. A significance test is no more or less than a rule, which tells the people in charge whether to approve a drug, undertake a proposed economic reform, or tart up a website.
30%
The purpose of a court is not truth, but justice. We have rules, the rules must be obeyed, and when we say that a defendant is “guilty” we mean, if we are careful about our words, not that he committed the crime he’s accused of, but that he was convicted fair and square according to those rules.