From Eternity to Here
Read between November 20, 2020 and March 27, 2022
29%
The Second Law is robust; it depends on the definition of entropy as the logarithm of a volume within the space of states, but not on the precise way in which we choose that volume. Nevertheless, in practice we do make certain choices and not others, so this transparent attempt to avoid the issue is not completely satisfying.
29%
Averaging within small regions of space isn’t a procedure that we hit upon randomly, nor is it a peculiarity of our human senses as opposed to the senses of a hypothetical alien; it’s a very natural thing, given how the laws of physics work.
29%
So even though in principle our choice of how to coarse-grain microstates into macrostates seems absolutely arbitrary, in practice Nature hands us a very sensible way to do it.
29%
RUNNING ENTROPY BACKWARD
30%
The arrow of time isn’t a consequence of the fact that “entropy increases to the future”; it’s a consequence of the fact that “entropy is very different in one direction of time than the other.”
30%
The important thing is that entropy increases in the same temporal direction for everyone within the observable universe, so that they can agree on the direction of the arrow of time.
30%
THE DECONSTRUCTION OF BENJAMIN BUTTON
30%
Why is it difficult/impossible to choose a state of the universe with the property that, as we evolve it forward in time, some parts of it have increasing entropy and some parts have decreasing entropy?
30%
ENTROPY AS DISORDER
30%
Order appears spontaneously at the macroscopic level, but it’s ultimately a matter of disorder at the microscopic level.
30%
We’re going to have a lot to say about how gravity wreaks havoc with our everyday notions of entropy, but for now suffice it to say that the interaction of gravity with other forces seems to be able to create order while still making the entropy go up—temporarily, anyway. That is a deep clue to something important about how the universe works; sadly, we aren’t yet sure what that clue is telling us.
30%
THE PRINCIPLE OF INDIFFERENCE
30%
We have this large set of microstates, which we divide up into macrostates, and declare that the entropy is the logarithm of the number of microstates per macrostate. Then we are asked to swallow another considerable bite: The proposition that each microstate within a macrostate is “equally likely.”
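For reference, the definition being paraphrased here is Boltzmann's formula, with k_B denoting Boltzmann's constant and W the number of microstates making up the macrostate:

    S = k_B log W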
31%
Statistical mechanics, the version of thermodynamics based on atoms, is essentially probabilistic—we don’t know for sure what is going to happen; we can only argue that certain outcomes are overwhelmingly likely.
31%
Conventional statistical mechanics, in other words, makes a crucial assumption: Given that we know we are in a certain macrostate, and that we understand the complete set of microstates corresponding to that macrostate, we can assume that all such microstates are equally likely. We can’t avoid invoking some assumption along these lines; otherwise there’s no way of making the leap from counting states to assigning probabilities. The equal-likelihood assumption has a name that makes it sound like a dating strategy for people who prefer to play hard to get: the “Principle of Indifference.”
31%
And the Principle of Indifference is basically the best we can do. When all we know is that a system is in a certain macrostate, we assume that every microstate within that macrostate is equally likely. (With one profound exception—the Past Hypothesis—to be discussed at the end of this chapter.) It would be nice if we could prove that this assumption should be true, and people have tried to do that.
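Stated quantitatively, the Principle of Indifference assigns the same probability to each of the W microstates in the given macrostate:

    p_i = 1/W for every microstate i in the macrostate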
31%
The real reason we use the Principle of Indifference is that we don’t know any better. And, of course, because it seems to work.
31%
OTHER ENTROPIES, OTHER ARROWS
31%
As a state evolving through time moves from a low-entropy condition to a high-entropy condition, if we choose to forget everything other than the macrostate to which it belongs, we end up knowing less and less about which state we actually have in mind. In other words, if we are told that a system belongs to a certain macrostate, the probability that it is any particular microstate within that macrostate decreases as the entropy increases, just because there are more possible microstates it could be. Our information about the state—how accurately we have pinpointed which microstate it is—goes ...more
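To put a number on this: under the equal-likelihood assumption above, the probability of any particular microstate is p = 1/W, and since S = k_B log W, that probability shrinks exponentially as the entropy grows, p = e^(-S/k_B).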
31%
Instead of thinking of entropy as something that characterizes individual states—namely, the number of other states that look macroscopically similar—we could choose to think of entropy as characterizing what we know about the state.
31%
In the Boltzmann way of thinking about entropy, the knowledge of which macrostate we are in tells us less and less about the microstate as entropy increases; the Gibbs approach inverts this perspective and defines entropy in terms of how much we know. Instead of starting with a coarse-graining on the space of states, we start with a probability distribution: the percentage chance, for each possible microstate, that the system is actually in that microstate right now. Then Gibbs gives us a formula, analogous to Boltzmann’s, for calculating the entropy associated with that probability ...more
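For reference, the Gibbs formula is

    S = -k_B Σ_i p_i log p_i

where p_i is the probability assigned to microstate i. If every one of the W microstates in a macrostate is taken to be equally likely (p_i = 1/W), this reduces to Boltzmann's S = k_B log W.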
31%
Neither the Boltzmann formula nor the Gibbs formula for entropy...
31%
To derive something like the Second Law from the Gibbs approach, you have to “forget” something about the evolution. When you get right down to it, that’s philosophically equivalent to the coarse-graining we had to do in the Boltzmann approach; we’ve just moved the “forgetting” step to the equations of motion, rather than the space of states.
31%
And that’s not the end of it; there are several other ways of thinking about entropy, and new ones are frequently being proposed in the literature.
31%
After quantum mechanics came on the scene, John von Neumann proposed a formula for entropy that is specifically adapted to the quantum context.
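For reference, von Neumann's formula trades the probability distribution for the quantum density matrix ρ:

    S = -Tr(ρ log ρ)

(often multiplied by k_B when used in thermodynamic contexts).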
31%
As we’ll discuss in the next chapter, Claude Shannon suggested a definition of entropy that was very similar in spirit to Gibbs’s, but in the framework of information theory rather than physics.
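Shannon's formula, for a source that produces message i with probability p_i, uses base-2 logarithms so the answer comes out in bits:

    H = -Σ_i p_i log_2 p_i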
31%
The point is not to find the one true definition of entropy; it’s to come up with concepts that serve useful functions in the appropriate contexts. Just don’t let anyone bamboozle you by pretending that one definitio...
31%
We’ve been dealing with the thermodynamic arrow of time, the one defined by entropy and the Second Law. There is also the cosmological arrow of time (the universe is expanding), the psychological arrow of time (we remember the past and not the future), the radiation arrow of time (electromagnetic waves flow away from moving charges, not toward them), and so on.
32%
PROVING THE SECOND LAW
32%
WHEN THE LAWS OF PHYSICS AREN’T ENOUGH
32%
THE PAST HYPOTHESIS
33%
Why did the universe have a low entropy near the Big Bang?
33%
9. INFORMATION AND LIFE
33%
Of all the ways in which the arrow of time manifests itself, memory—and in particular, the fact that it applies to the past but not the future—is the most obvious, and the most central to our lives.
33%
This implies that our ability to remember the past but not the future must ultimately be explained in terms of entropy, and in particular by recourse to the Past Hypothesis that the early universe was in a very low-entropy state.
33%
PICTURES AND MEMORIES
33%
We can reconstruct the past accurately only by assuming a Past Hypothesis, in addition to knowledge of our current macrostate.
33%
by demanding that our history stretch from a low-entropy beginning to here, we dramatically restrict the space of allowed trajectories, leaving us with those for which our records are (for the most part) reliable reflections of the past.
33%
COGNITIVE INSTABILITY
34%
The truth is, we don’t have any more direct empirical access to the past than we have to the future, unless we allow ourselves to assume a Past Hypothesis.
34%
Indeed, the Past Hypothesis is more than just “allowed”; it’s completely necessary, if we hope to tell a sensible story about the universe.
34%
Without the Past Hypothesis, we simply can’t tell any intelligible story about the world; so we seem to be stuck with it, or stuck with trying to find a theory that actually explains it.
34%
CAUSE AND EFFECT
34%
MAXWELL’S DEMON
34%
RECORDING AND ERASING
35%
The act of erasing information necessarily transfers entropy to the outside world.
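The quantitative version of this statement is Landauer's principle: erasing one bit of information increases the entropy of the environment by at least k_B ln 2, which at temperature T corresponds to dissipating at least k_B T ln 2 of heat.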
35%
INFORMATION IS PHYSICAL
35%
Information is physical. More concretely, possessing information allows us to extract useful work from a system in ways that would have otherwise been impossible.
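The textbook illustration is Szilard's one-molecule engine: knowing which half of a box a single gas molecule occupies (one bit of information) lets you extract up to k_B T ln 2 of work by letting the molecule push against a partition, exactly the minimum Landauer cost of later erasing that bit.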
35%
Shannon was interested in finding efficient and reliable ways of sending signals across noisy channels. He had the idea that some messages carry more effective information than others, simply because the message is more “surprising” or unexpected. If I tell you that the Sun is going to rise in the East tomorrow morning, I’m not actually conveying much information, because you already expected that was going to happen. But if I tell you the peak temperature tomorrow is going to be exactly 25 degrees Celsius, my message contains more information, because without the message you wouldn’t have ...more
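A minimal sketch of the "surprise" idea in code; the probabilities below are made-up illustrative numbers, not figures from the book:

```python
import math

def surprisal_bits(p):
    """Shannon's information content of an event with probability p, in bits."""
    return -math.log2(p)

# A message you fully expected carries almost no information...
print(surprisal_bits(0.999999))  # ~0.0000014 bits ("the Sun will rise in the East")

# ...while a specific, unlikely message carries much more.
print(surprisal_bits(1 / 50))    # ~5.6 bits ("the peak will be exactly 25 degrees Celsius")
```

Less probable means more surprising, and more surprising means more bits.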