Kindle Notes & Highlights
Read between November 9 and November 29, 2018
Black Swan effects are necessarily increasing, as a result of complexity, interdependence between parts, globalization, and the beastly thing called “efficiency” that makes people now sail too close to the wind.
Even at an individual level, wealth means more headaches; we may need to work harder at mitigating the complications arising from wealth than we do at acquiring it.
Megafragility. Health as a function of temperature curves inward.
the more nonlinear the response, the less relevant the average, and the more relevant the stability around such average.
Recall from our traffic example in Chapter 18: 90,000 cars for one hour, then 110,000 cars for the next, for an average of 100,000, and traffic will be horrendous. On the other hand, with 100,000 cars in each of the two hours, traffic will be smooth and time in traffic short.
If the function is convex (antifragile), then the average of the function of something is going to be higher than the function of the average of something. And the reverse when the function is concave (fragile).
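A minimal sketch of this effect in Python, applied to the traffic example above. The quadratic congestion curve is an invented stand-in chosen only to be convex, not a measured model:

```python
# Convexity and averages: same mean traffic volume, with and without
# fluctuation. The congestion function below is an assumption for
# illustration, not an empirical curve.

def travel_time(cars: int) -> float:
    """Hypothetical convex congestion curve: minutes per trip."""
    return 30 + 60 * (cars / 100_000) ** 2

steady = travel_time(100_000)                                   # f(mean)
fluctuating = (travel_time(90_000) + travel_time(110_000)) / 2  # mean of f

print(f"100,000 cars both hours:         {steady:.1f} minutes")
print(f"90,000 then 110,000 (same mean): {fluctuating:.1f} minutes")
# fluctuating > steady: for a convex f, the average of f(x) exceeds
# f(the average of x) -- Jensen's inequality, the statement above.
# The sharper the convexity, the wider the gap.
```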
Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less.
Let me summarize the argument: if you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform. This property is very central to life.
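A back-of-the-envelope check on "much less," with invented payoff numbers: the break-even hit rate solves p·win + (1 − p)·loss = 0.

```python
# Break-even hit rate for a bet paying `win` when right and `loss`
# when wrong. The payoff figures below are invented for illustration.

def breakeven_probability(win: float, loss: float) -> float:
    """Probability of being right at which expected profit is zero."""
    return -loss / (win - loss)

print(f"linear payoff (+1 / -1):  {breakeven_probability(1, -1):.0%}")
print(f"convex payoff (+10 / -1): {breakeven_probability(10, -1):.1%}")
# The option-like payoff is profitable even when right only ~9% of
# the time; the linear payoff needs better than 50%.
```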
we know a lot more about what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily.
since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.
keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man.
Few realize that we are moving into the far more uneven distribution of 99/1 across many things that used to be 80/20: 99 percent of Internet traffic is attributable to less than 1 percent of sites, 99 percent of book sales come from less than 1 percent of authors
When it comes to health care, Ezekiel Emanuel showed that half the population accounts for less than 3 percent of the costs, with the sickest 10 percent consuming 64 percent of the total
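One way to see how a fat-tailed distribution produces this kind of concentration, sketched in Python. The tail index is an arbitrary choice to make the tail very fat, not an estimate of any real dataset:

```python
import random

# Sample from a Pareto distribution and measure the share of the
# total held by the top 1%. alpha = 1.1 is an assumed, arbitrary
# tail index chosen to illustrate extreme concentration.
random.seed(42)
alpha = 1.1
samples = sorted((random.paretovariate(alpha) for _ in range(100_000)),
                 reverse=True)

top_1pct = sum(samples[: len(samples) // 100])
print(f"share of total held by the top 1%: {top_1pct / sum(samples):.0%}")
# As alpha drifts toward 1 the tail fattens, and the split moves
# from the familiar 80/20 toward 99/1.
```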
More data—such as paying attention to the eye colors of the people around when crossing the street—can make you miss the big truck.
if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don't do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason.
before you are proven right, you will be reviled; after you are proven right, you will be hated for a while, or, what’s worse, your ideas will appear to be “trivial” thanks to retrospective distortion.
David Edgerton showed that in the early 2000s we produced two and a half times as many bicycles as cars and invested most of our technological resources in maintaining existing equipment or refining old technologies.
Technothinkers tend to have an “engineering mind”—to put it less politely, they have autistic tendencies. While they don’t usually wear ties, these types tend, of course, to exhibit all the textbook characteristics of nerdiness—mostly lack of charm, interest in objects instead of persons, causing them to neglect their looks. They love precision at the expense of applicability. And they typically share an absence of literary culture.
This absence of literary culture is actually a marker of future blindness because it is usually accompanied by a denigration of history, a byproduct of unconditional neomania. Outside of the niche and isolated genre of science fiction, literature is about the past. We do not learn physics or biology from medieval textbooks, but we still read Homer, Plato, or the very modern Shakespeare.
For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
Every year that passes without extinction doubles the additional life expectancy. This is an indicator of some robustness. The robustness of an item is proportional to its life!
The proportionality of life expectancy does not need to be tested explicitly—it is the direct result of “winner-take-all” effects in longevity.
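This can be made precise in a minimal model (an assumption for illustration, not the book's own derivation): if lifetimes follow a Pareto survival curve with tail index α > 1, the expected remaining life grows linearly in the age already survived.

```latex
% Assumed power-law survival: P(T > t) = (t_0/t)^{\alpha}, \alpha > 1.
% Conditional on surviving to age t, T is again Pareto with scale t, so
\mathbb{E}[T \mid T > t] = \frac{\alpha}{\alpha - 1}\, t
\quad\Longrightarrow\quad
\mathbb{E}[T - t \mid T > t] = \frac{t}{\alpha - 1}.
% For \alpha = 2, an item that has survived t years is expected to
% survive t more, matching the book-selection rule quoted below.
```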
Actually there is an Arabic proverb to that effect: he who does not have a past has no future.
So we confuse the necessary and the causal: because all surviving technologies have some obvious benefits, we are led to believe that all technologies offering obvious benefits will survive.
If you announce to someone “you lost $10,000,” he will be much more upset than if you tell him “your portfolio value, which was $785,000, is now $775,000.” Our brains have a predilection for shortcuts, and the variation is easier to notice (and store) than the entire record.
These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects.
What is artisanal has the love of the maker infused in it, and tends to satisfy—we don’t have this nagging impression of incompleteness we encounter with electronics.
So I follow the Lindy effect as a guide in selecting what to read: books that have been around for ten years will be around for ten more; books that have been around for two millennia should be around for quite a bit of time, and so forth.
So attending breakthrough conferences might be, statistically speaking, as much a waste of time as buying a mediocre lottery ticket, one with a small payoff. The odds of the papers being relevant—and interesting—in five years are no better than one in ten thousand. The fragility of science!
One of my students (who was majoring in, of all subjects, economics) asked me for a rule on what to read. "As little as feasible from the last twenty years, except history books that are not about the last fifty years," I blurted out.
This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge).
an extension of via negativa and Fat Tony’s don’t-be-a-sucker rule: the non-natural needs to prove its benefits, not the natural—according to the statistical principle outlined earlier that nature is to be considered much less of a sucker than humans. In a complex domain, only time—a long time—is evidence.
iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible—and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.
Remarkably, convexity effects work in an identical way.
Not using models of nonlinear effects such as convexity biases while “doing empirical work” is like having to catalog every apple falling from a tree and call the operation “empiricism” instead of just using Newton’s equation.
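Restated with a sketch: the equation answers for cases the catalog has never recorded. The cataloged values below are invented stand-ins for raw observations:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

# "Empiricism" without a model: a catalog of observed apple falls,
# one entry per height ever measured (invented stand-in data).
catalog = {2.0: 0.64, 5.0: 1.01, 10.0: 1.43}  # height (m) -> time (s)

# Newton's equation: d = (1/2) g t^2, hence t = sqrt(2 d / g).
def fall_time(height_m: float) -> float:
    return math.sqrt(2 * height_m / G)

# The model covers heights the catalog has never seen.
print(f"{fall_time(7.3):.2f} s")  # no table lookup required
```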
Antibiotics. Every time you take an antibiotic, you help, to some degree, the mutation of germs into antibiotic-resistant strains.
Traditionally, medicine used to be split into three traditions: rationalists (based on preset theories, the need of global understanding of what things were made for), skeptical empiricists (who refused theories and were skeptical of ideas making claims about the unseen), and methodists (who taught each other some simple medical heuristics stripped of theories and found an even more practical way to be empiricists).
We estimated that cutting medical expenditures by a certain amount (while limiting the cuts to elective surgeries and treatments) would extend people’s lives in most rich countries, especially the United States. Why? Simple basic convexity analysis; a simple examination of conditional iatrogenics: the error of treating the mildly ill puts them in a concave position.
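A toy expected-value sketch of that concave position (all numbers invented): a small, visible benefit proportional to severity set against a rare, large iatrogenic harm.

```python
# Toy model of conditional iatrogenics; every number is assumed.
P_HARM = 0.02   # probability of a severe side effect (assumed)
HARM = -50.0    # cost of that side effect, arbitrary units (assumed)

def expected_net_benefit(severity: float) -> float:
    """Severity in [0, 1]; treatment benefit assumed proportional to it."""
    benefit = 10.0 * severity
    return benefit + P_HARM * HARM

for severity in (0.05, 0.5, 1.0):
    print(f"severity {severity:.2f}: expected net benefit "
          f"{expected_net_benefit(severity):+.2f}")
# Mildly ill (0.05):  0.5 - 1.0 = -0.5 -> treatment is net negative.
# Severely ill (1.0): 10.0 - 1.0 = +9.0 -> treatment pays off.
```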
“if all the medications were dumped in the sea, it would be better for mankind but worse for the fishes.”
From such examples, I derived the rule that what is called “healthy” is generally unhealthy, just as “social” networks are antisocial, and the “knowledge”-based economy is typically ignorant.
Few have considered that money has its own iatrogenics, and that separating some people from their fortune would simplify their lives and bring great benefits in the form of healthy stressors.
But just imagine how by the subtractive perspective, via negativa, we can be better off by getting tougher: no sunscreen, no sunglasses if you have brown eyes, no air-conditioning, no orange juice (just water), no smooth surfaces, no soft drinks, no complicated pills, no loud music, no elevator, no juicer, no … I stop.
If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).
Cowardice enhanced by technology is all connected: society is fragilized by spineless politicians (draft dodgers afraid of polls) and journalists building narratives, who create explosive deficits and compound agency problems because they want to look good in the short term.
This set the bar very high for me: dignity is worth nothing unless you earn it, unless you are willing to pay a price for it.
If you take risks and face your fate with dignity, there is nothing you can do that makes you small; if you don’t take risks, there is nothing you can do that makes you grand, nothing. And when you take risks, insults by half-men (small men, those who don’t risk anything) are similar to barks by nonhuman animals: you can’t feel insulted by a dog.
Predicting—any prediction—without skin in the game can be as dangerous for others as unmanned nuclear plants without the engineer sleeping on the premises. Pilots should be on the plane.
I am stating here that I find it profoundly unethical to talk without doing, without exposure to harm, without having one’s skin in the game, without having something at risk. You express your opinion; it can hurt others (who rely on it), yet you incur no liability. Is this fair?
I want predictors to have visible scars on their body from prediction errors, not distribute these errors to society.