For example, if we are being told the outcome of a coin flip, there are only two possible messages: “heads” or “tails.” Before we get the message, either alternative is equally likely; after we get the message, we have learned precisely one bit of information.
Roughly speaking, then, the information content of a message goes up as the probability of a given message taking that form goes down.
But Shannon wanted to be a little bit more precise than that. In particular, he wanted it to be the case that if we receive two messages that are completely independent of each other, the total information we get is equal to the sum of the information contained in each individual message.
His final result is this: The “self-information” contained in a message is equal to minus the logarithm of the probability that the message would take that particular form.
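This definition can be checked in a few lines. A minimal sketch (using base-2 logarithms, so information is measured in bits): a fair coin flip carries one bit, and because the logarithm turns products into sums, two independent messages carry the sum of their individual informations, exactly as Shannon required.

```python
import math

def self_information(p: float) -> float:
    """Self-information of a message received with probability p, in bits."""
    return -math.log2(p)

# A fair coin flip: each outcome has probability 1/2, so one bit.
print(self_information(0.5))   # 1.0

# Two independent fair flips: probability 1/4, and the information adds.
print(self_information(0.25))  # 2.0 = 1.0 + 1.0
```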
The information is the difference between the maximum possible entropy and the actual entropy of the macrostate.
DOES LIFE MAKE SENSE?
For every 1 high-energy photon we receive, the Earth radiates about 20 low-energy photons.
The Earth emits the same amount of energy as it receives, but with 20 times higher entropy.
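The factor of 20 can be recovered from a back-of-envelope scaling argument (a sketch, not the book's own calculation): a typical thermal photon's energy is proportional to the temperature of its emitter, so at fixed total energy the number of photons scales inversely with temperature. The assumed temperatures below are the usual round numbers for the Sun's surface and the Earth's radiating temperature.

```python
# Rough estimate: typical photon energy scales with the emitter's
# temperature, so at equal total energy, photon count scales as 1/T.
T_sun = 6000.0    # K, approximate solar surface temperature (assumption)
T_earth = 300.0   # K, approximate Earth radiating temperature (assumption)

# Low-energy photons emitted per high-energy photon absorbed:
ratio = T_sun / T_earth
print(ratio)  # 20.0
```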
LIFE IN MOTION
Scientists haven’t yet agreed on a single definition, but there are a number of features that are often associated with living organisms: complexity, organization, metabolism, information processing, reproduction, response to stimuli, aging.
In Ireland Schrödinger gave a series of public lectures, which were later published as What Is Life? He was interested in examining the phenomenon of life from the perspective of a physicist, and in particular an expert on quantum mechanics and statistical mechanics. Perhaps the most remarkable thing about the book is Schrödinger’s deduction that the stability of genetic information over time is best explained by positing the existence of some sort of “aperiodic crystal” that stored the information in its chemical structure. This insight helped inspire Francis Crick to leave physics in favor
Entropy certainly plays a big role in the nature of life, but there are important aspects that it doesn’t capture. Entropy characterizes individual states at a single moment in time, but the salient features of life involve processes that evolve through time. By itself, the concept of entropy has only very crude implications for evolution through time: It tends to go up or stay the same, not go down. The Second Law says nothing about how fast entropy will increase, or the particular methods by which entropy will grow—it’s all about Being, not Becoming.
The idea of free energy helps to tie together entropy, the Second Law, Maxwell’s Demon, and the ability of living organisms to keep going longer than nonliving objects.
FREE ENERGY, NOT FREE BEER
Useful energy can be converted into some kind of work, while useless energy is useless. One of the contributions of Josiah Willard Gibbs was to formalize these concepts, by introducing the concept of “free energy.” Schrödinger didn’t use that term in his lectures because he worried that the connotations were confusing: The energy isn’t really “free” in the sense that you can get it for nothing; it’s “free” in the sense that it’s available to be used for some purpose.
Gibbs realized that he could use the concept of entropy to cleanly divide the total amount of energy into the useful part, which he called “free,” and the useless part:
When a physical process creates entropy in a system with a fixed total amount of energy, it uses up free energy; once all the free energy is gone, we’ve reached equilibrium.
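Gibbs's split can be written compactly as F = E − TS (the Helmholtz form; the text uses "free energy" loosely). A minimal numerical sketch with made-up units: as entropy S climbs toward its equilibrium maximum at fixed energy E and temperature T, the free energy drains away to zero.

```python
def free_energy(E: float, T: float, S: float) -> float:
    """Helmholtz free energy F = E - T*S: the 'useful' part of the energy."""
    return E - T * S

E, T = 100.0, 2.0          # total energy and temperature (arbitrary units)
S_now, S_max = 10.0, 50.0  # current entropy and its equilibrium maximum

print(free_energy(E, T, S_now))  # 80.0 units still available for work
print(free_energy(E, T, S_max))  # 0.0 at equilibrium: no free energy left
```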
Maxwell’s Demon as a paradigm for life. The Demon maintains order—a separation of temperatures—in the box, against the influence of the environment, by processing information through the transformation of free energy into high-entropy heat.
From the point of view of natural selection, there are many reasons why a complex, persistent structure might be adaptively favored: An eye, for example, is a complex structure that clearly contributes to the fitness of an organism. But increasingly complex structures require that we turn increasing amounts of free energy into heat, just to keep them intact and functioning. This picture of the interplay of energy and information therefore makes a prediction: The more complex an organism becomes, the more inefficient it will be at using energy for “work” purposes—simple mechanical operations
COMPLEXITY AND TIME
The Kolmogorov complexity is just the length of that shortest possible computer program.
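Kolmogorov complexity is uncomputable in general, but an off-the-shelf compressor gives a crude upper-bound proxy: a highly regular string has a short description ("repeat 'ab' 5,000 times"), while a random string essentially cannot be described more briefly than by quoting it. A sketch using zlib:

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Length of a zlib-compressed string: a crude upper-bound proxy
    for Kolmogorov complexity (which is itself uncomputable)."""
    return len(zlib.compress(s, 9))

ordered = b"ab" * 5000  # highly regular: admits a very short description

random.seed(0)
disordered = bytes(random.randrange(256) for _ in range(10000))  # incompressible

print(compressed_size(ordered) < compressed_size(disordered))  # True
```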
So while the entropy of the universe increases straightforwardly from low to high as time goes by, the complexity is more interesting: It goes from low, to relatively high, and then back down to low again. And the question is: Why? Or perhaps: What are the ramifications of this form of evolution?
10
RECURRENT NIGHTMARES
In 1890 Henri Poincaré proved an intriguing mathematical theorem, showing that certain physical systems would necessarily return arbitrarily close to their initial configuration infinitely often, if you just waited long enough. This result was seized upon by a young mathematician named Ernst Zermelo, who claimed that it was incompatible with Boltzmann's purported derivation of the Second Law of Thermodynamics from underlying reversible rules of atomic motion.
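A toy illustration of why recurrence is unavoidable (my example, not the book's): any invertible map on a finite set of states must eventually revisit its starting point, since it cannot funnel two states into one. The discretized Arnold cat map below is such a reversible dynamics on an n-by-n grid.

```python
def cat_map(state, n):
    """One step of the discrete Arnold cat map on an n x n torus.
    The map has determinant 1, hence is invertible: every orbit is periodic."""
    x, y = state
    return ((2 * x + y) % n, (x + y) % n)

n = 101
start = (1, 0)
state = cat_map(start, n)
steps = 1
while state != start:   # guaranteed to terminate: finite state space, invertible map
    state = cat_map(state, n)
    steps += 1

print(f"recurred after {steps} steps")
```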
POINCARÉ’S CHAOS
ZERMELO VERSUS BOLTZMANN
TROUBLES OF AN ETERNAL UNIVERSE
The reversibility objection merely notes that there are an equal number of entropy-decreasing evolutions as entropy-increasing ones; the recurrence objection points out that the entropy-decreasing processes will eventually happen some time in the future. It’s not just that a system could decrease in entropy—if we wait long enough, it is eventually guaranteed to do so.
If the universe truly lasts forever, having neither a beginning nor an end, what is the Past Hypothesis supposed to mean? There was some moment, earlier than the present, when the entropy was small. But what about before that? Was it always small—for an infinitely long time—until some transition occurred that allowed the entropy to grow? Or was the entropy also higher before that moment, and if so, why is there a special low-entropy moment in the middle of the history of the universe?
There are at least three ways out of this dilemma, and Boltzmann alluded to all three of them.
First, the universe might really have a beginning, and that beginning would involve a low-entropy boundary condition.
Second, the assumptions behind the Poincaré recurrence theorem might simply not hold in the real world.
The third way out of the recurrence objection is not a way out at all—it’s a complete capitulation. Admit that the universe is eternal, and that recurrences happen, so that the universe witnesses moments when entropy is increasing and moments when it is decreasing. And then just say: That’s the universe in which we live.
FLUCTUATING AROUND EQUILIBRIUM
THE ANTHROPIC APPEAL
SWERVING THROUGH ANTIQUITY
Can the real world possibly be like that? Can we live in an eternal universe that spends most of its time in equilibrium, with occasional departures that look like what we see around us?
UN-BREAKING AN EGG
BOLTZMANN BRAINS
It’s not right to say, “I know I am not a Boltzmann brain, so clearly the universe is not a random fluctuation.” The right thing to say is, “If I were a Boltzmann brain, there would be a strong prediction: Everything else about the universe should be in equilibrium. But it’s not. Therefore the universe is not a random fluctuation.”
We therefore conclude that the universe is not a fluctuation, and that the order is a memory of conditions when things started. This is not to say that we understand the logic of it. For some reason, the universe at one time had a very low entropy for its energy content, and since then the entropy has increased. So that is the way toward the future. That is the origin of all irreversibility, that is what makes the processes of growth and decay, that makes us remember the past and not the future, remember the things which are closer to that moment in history of the universe when the order was
WHO ARE WE IN THE MULTIVERSE?
It’s too much to ask that we are typical among all observers in the universe, because that’s making a strong claim about parts of the universe we’ve never observed. But we can at least say that we are typical among observers exactly like us—that is, observers with the basic physiology and the same set of memories that we have, the same coarse-grained experience of the universe.
If the universe fluctuates around thermal equilibrium for all eternity, not only will most observers appear all by themselves from the surrounding chaos, but the same is true for the subset of observers with precisely the features that you or I have—complete with our purported memories of the past. Those memories will generally be false, and fluctuating into them is very unlikely, but it's still far more likely than fluctuating the entire universe into existence. Even this minimal necessary condition for carrying out statistical reasoning—we should take ourselves to be chosen randomly from the set of
The universe we observe is not a fluctuation—or, to be more careful, it is not a statistical fluctuation in an eternal universe that spends most of its time in equilibrium. So that's what the universe is not; what it is, we still have to work out.
11
QUANTUM TIME
Classical mechanics isn’t a particular theory; it’s a paradigm, a way of conceptualizing what a physical theory is, and one that has demonstrated an astonishing range of empirical success.

