Kindle Notes & Highlights
Read between November 3 and November 28, 2023
The principle of charity stipulates that you try to understand a message as if you were yourself its author. It, and revulsion at its violations, are Lindy compatible.
It is immoral to be in opposition to the market system and not live (somewhere in Vermont or Northwestern Afghanistan) in a hut or cave isolated from it. But there is worse: It is much more immoral to claim virtue without fully living with its direct consequences.
As we saw with the interventionistas, a certain class of theoretical people can despise the details of reality. If you manage to convince yourself that you are right in theory, you don’t really care how your ideas affect others. Your ideas give you a virtuous status that makes you impervious to how they affect others.
If your private life conflicts with your intellectual opinion, it cancels your intellectual ideas, not your private life. And a solution to the vapid universalism we discussed in the Prologue: If your private actions do not generalize, then you cannot have general ideas.
true virtue lies mostly in also being nice to those who are neglected by others, the less obvious cases, those people the grand charity business tends to miss.
Further, the highest form of virtue is unpopular. This does not mean that virtue is inherently unpopular, or correlates with unpopularity, only that unpopular acts signal some risk taking and genuine behavior. Courage is the only virtue you cannot fake.
Sticking up for truth when it is unpopular is far more of a virtue, because it costs you something—your reputation.
I conjecture that when you leave people alone, they tend to settle for practical reasons. People on the ground, those with skin in the game, are not too interested in geopolitics or grand abstract principles.
Just as paganism cannot be pigeonholed, the same applies to libertarianism. It does not fit the structure of a political “party”—only that of a decentralized political movement. The very concept doesn’t allow for the straitjacket of a strong party line and unified policy.
Nevertheless, we libertarians share a minimal set of beliefs, the central one being to substitute the rule of law for the rule of authority. Without necessarily realizing it, libertarians believe in complex systems. And, since libertarianism is a movement, it can still exist as splintered factions within other political parties.
Most Christians (like myself, an Orthodox Christian), when it comes to central medical, ethical, and decision-making situations, do not act any differently than atheists. Those who do (such as Christian Scientists) are few. Most Christians have accepted the modern trappings of democracy, oligarchy, or military dictatorship, all these heathen political regimes, rather than seeking theocracies. Their decisions on central matters are indistinguishable from those of an atheist.
So when we look at religion, and, to some extent, ancestral superstitions, we should consider what purpose they serve, rather than focusing on the notion of “belief,” epistemic belief in its strict scientific definition. In science, belief is literal belief; it is right or wrong, never metaphorical. In real life, belief is an instrument to do things, not the end product.
In that sense harboring superstitions is not irrational by any metric: nobody has managed to build a criterion for rationality based on actions that bear no cost. But actions that harm you are detectable, if not observable.
Survival comes first, truth, understanding, and science later.
for the world to be “ergodic,” there needs to be no absorbing barrier, no substantial irreversibilities.
Judging people by their beliefs is not scientific. There is no such thing as the “rationality” of a belief, there is rationality of action. The rationality of an action can be judged only in terms of evolutionary considerations.
The axiom of revelation of preferences (originating with Paul Samuelson, or possibly the Semitic gods), as you recall, states the following: you will not have an idea about what people really think, what predicts people’s actions, merely by asking them—they themselves don’t necessarily know. What matters, in the end, is what they pay for goods, not what they say they “think” about them, or the various possible reasons they give you or themselves for that.
It is therefore my opinion that religion exists to enforce tail risk management across generations, as its binary and unconditional rules are easy to teach and enforce. We have survived in spite of tail risks; our survival cannot be that random.
Superstitions can be vectors for risk management rules. We have as potent information that people who have them have survived; to repeat, never discount anything that allows you to survive.
we do not have enough grounds to discuss “irrational beliefs.” We do, however, have grounds to discuss irrational actions. Extending such logic, we can show that much of what we call “belief” is some kind of background furniture for the human mind, more metaphorical than real. It may work as therapy.
There is a difference between beliefs that are decorative and different sorts of beliefs, those that map to action.
How much you truly “believe” in something can be manifested only through what you are willing to risk for it.
what is rational is that which allows for survival.
Anything that hinders one’s survival at an individual, collective, tribal, or general level is, to me, irrational.
When you consider beliefs in evolutionary terms, do not look at how they compete with each other, but consider the survival of the populations that have them.
To take stock: a situation is deemed non-ergodic when observed past probabilities do not apply to future processes. There is a “stop” somewhere, an absorbing barrier that prevents people with skin in the game from emerging from it—and to which the system will invariably tend. Let us call these situations “ruin,” as there is no reversibility away from the condition. The central problem is that if there is a possibility of ruin, cost-benefit analyses are no longer possible.
If you incur a tiny probability of ruin as a “one-off” risk, survive it, then do it again (another “one-off” deal), you will eventually go bust with a probability of one hundred percent. Confusion arises because it may seem that if the “one-off” risk is reasonable, then an additional one is also reasonable. This can be quantified by recognizing that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases.
Another common error in the psychology literature concerns what is called “mental accounting.” The Thorp, Kelly, and Shannon school of information theory requires that, for an investment strategy to be ergodic and eventually capture the return of the market, agents increase their risks as they are winning, but contract after losses, a technique called “playing with the house money.” In practice, for ease of execution, it is done by threshold rather than by complicated rules: you start betting aggressively whenever you have a profit, never when you have a deficit, as if a switch were turned on or off.
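The threshold rule described above can be sketched as a toy simulation. All the specific numbers (the edge, the two betting fractions, the number of rounds) are illustrative assumptions of mine, not figures from the text:

```python
import random

def simulate(initial=100.0, rounds=1_000, edge=0.02, seed=42):
    """Threshold ("house money") betting on a slightly favorable coin."""
    random.seed(seed)
    capital = initial
    for _ in range(rounds):
        # Switch-like rule: bet an aggressive fraction only when in profit,
        # a timid fraction otherwise. Never risk the whole stake.
        fraction = 0.05 if capital > initial else 0.01
        stake = capital * fraction
        if random.random() < 0.5 + edge:   # win probability 52%
            capital += stake
        else:
            capital -= stake
    return capital
```

Because the fraction wagered is always strictly below one, capital can shrink but never hit zero: the strategy caps drawdowns while letting winning streaks compound, which is the point of the threshold.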
Unless you are perfectly narcissistic and psychopathic—even then—your worst-case scenario is never limited to the loss of only your life. Thus, we see the point that individual ruin is not as big a deal as collective ruin. And of course ecocide, the irreversible destruction of our environment, is the big one to worry about.
Courage is when you sacrifice your own well-being for the sake of the survival of a layer higher than yours.
It doesn’t cost me much to go with my “refined paranoia,” even if wrong. For all it takes is for my paranoia to be right once, and it saves my life.
I make the case for risk loving, for systematic “convex” tinkering, and for taking a lot of risks that don’t have tail risks but offer tail profits.
Never compare a multiplicative, systemic, and fat-tailed risk to a non-multiplicative, idiosyncratic, and thin-tailed one.
There are two categories in which random events fall: Mediocristan and Extremistan. Mediocristan is thin-tailed and affects the individual without correlation to the collective. Extremistan, by definition, affects many people. Hence Extremistan has a systemic effect that Mediocristan doesn’t.
Mediocristan risks are subject to the Chernoff bound. The Chernoff bound can be explained as follows. The probability that the number of people who drown in their bathtubs in the United States doubles next year—assuming no changes in population or bathtubs—is one in several trillion lifetimes of the universe. This cannot be said about the doubling of the number of people killed by terrorism over the same period.
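The bathtub claim can be checked against the standard Chernoff bound for a sum of independent Bernoulli events with mean mu: P(X ≥ (1+d)·mu) ≤ (e^d / (1+d)^(1+d))^mu, which with d = 1 bounds the chance the count doubles. The figure of roughly 400 annual bathtub drownings is an illustrative assumption of mine:

```python
import math

def chernoff_doubling_log10(mu: float) -> float:
    """log10 of the Chernoff upper bound on P(X >= 2*mu).

    With d = 1 the bound is (e/4)**mu; we return its log10
    to avoid floating-point underflow for large mu.
    """
    return mu * (1.0 - math.log(4.0)) / math.log(10.0)

# Assumed (illustrative) figure: ~400 bathtub drownings per year in the US.
print(chernoff_doubling_log10(400))  # about -67, i.e. a bound near 10**-67
```

A bound of order 10^-67 is the thin-tailed signature: independent, uncorrelated events cannot collectively double. Terrorism deaths, being correlated (one event kills many), admit no such bound.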
We close this chapter with a few summarizing lines. One may be risk loving yet completely averse to ruin. The central asymmetry of life is: In a strategy that entails ruin, benefits never offset risks of ruin. Further: Ruin and other changes in condition are different animals. Every single risk you take adds up to reduce your life expectancy. Finally: Rationality is avoidance of systemic ruin.
When the beard (or hair) is black, heed the reasoning, but ignore the conclusion. When the beard is gray, consider both reasoning and conclusion. When the beard is white, skip the reasoning, but mind the conclusion.

