Kindle Notes & Highlights
A “state” of a physical system is “all of the information about the system, at some fixed moment in time, that you need to specify its future evolution, given the laws of physics.”
It will often be convenient to think about “every possible state the system could conceivably be in.” That is known as the space of states of the system.
In Newtonian mechanics, the space of states is called “phase space,” for reasons that are pretty mysterious. It’s just the collection of all possible positions and momenta of every object in the system.
Once we get to quantum mechanics, the space of states will consist of all possible wave functions describing the quantum system; the technical term is Hilbert space.
Any good theory of physics has a space of states, and then some rule describing how a particular state evolves in time.
In this abstract context, a “dimension” is just “a number you need to specify a point in the space.”
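To make that counting concrete, here is a minimal Python sketch (my own illustration, not from the book): the phase-space point of N particles in three dimensions is just the list of all their position and momentum components, 6N numbers in all, so the phase space has 6N dimensions.

```python
import numpy as np

# A minimal sketch (not from the book): a point in Newtonian phase space
# for N particles in 3 spatial dimensions is just the list of all positions
# and all momenta -- 6N numbers in total, so phase space has 6N dimensions.

N = 2  # two particles, purely illustrative

positions = np.array([[0.0, 0.0, 0.0],    # particle 1: x, y, z
                      [1.0, 0.0, 0.0]])   # particle 2: x, y, z
momenta   = np.array([[0.5, 0.0, 0.0],    # particle 1: px, py, pz
                      [-0.5, 0.0, 0.0]])  # particle 2: px, py, pz

# One point in phase space: every number needed to evolve the system forward.
state = np.concatenate([positions.ravel(), momenta.ravel()])
print(state.shape)  # (12,) -> 6N = 12 dimensions for N = 2
```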
NEWTON IN REVERSE
Newtonian mechanics is invariant under time reversal.
In classical mechanics, we define the operation of time reversal not simply to play the original sequence of states backward, but also to reverse the momenta. With that definition, classical mechanics is perfectly invariant under time reversal.
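A rough sketch of that claim in Python, assuming a one-dimensional particle on a spring and a standard leapfrog (velocity Verlet) integrator (all names here are illustrative): evolve forward, flip the momentum, evolve forward again by the same amount, and you retrace your steps back to the starting state.

```python
import numpy as np

def evolve(q, p, steps, dt=1e-3, k=1.0, m=1.0):
    """Evolve a 1-D harmonic oscillator with the leapfrog (velocity Verlet) scheme."""
    for _ in range(steps):
        p -= 0.5 * dt * k * q        # half kick
        q += dt * p / m              # drift
        p -= 0.5 * dt * k * q        # half kick
    return q, p

# Start somewhere in phase space and run forward.
q0, p0 = 1.0, 0.0
q1, p1 = evolve(q0, p0, steps=5000)

# Time reversal: keep the position, flip the momentum, run forward again.
q2, p2 = evolve(q1, -p1, steps=5000)

# We recover the original state (with the momentum flipped back).
print(np.allclose([q2, -p2], [q0, p0], atol=1e-6))  # True
```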
RUNNING PARTICLES BACKWARD
Even though the theoretical predictions had been established for a while, this experiment wasn’t actually carried out until 1998, by the CPLEAR experiment at the CERN laboratory in Geneva, Switzerland. They found that their beam of particles, after oscillating back and forth between kaons and antikaons, decayed slightly more frequently (by about 2/3 of 1 percent) like a kaon than like an antikaon; the oscillating beam was spending slightly more time as kaons than as antikaons. In other words, the process of going from a kaon to an antikaon took slightly longer than the time-reversed process of going from an antikaon to a kaon.
THREE REFLECTIONS OF NATURE
We have time reversal T, which exchanges past and future. We also have parity P, which exchanges right and left. We discussed parity in the context of our checkerboard worlds, but it’s just as relevant to three-dimensional space in the real world. Finally, we have “charge conjugation” C, which is a fancy name for the process of exchanging particles with their antiparticles. The transformations C, P, and T all have the property that when you repeat them twice in a row you simply return to the state you started with.
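A toy illustration (not from the book) of that last property, with C, P, and T modeled as simple operations on a made-up single-particle state; each one, applied twice, is the identity.

```python
# A toy sketch (not from the book): represent a single-particle state by its
# species, position, and momentum, and model C, P, and T as simple operations.
# Each one, applied twice, returns the state you started with.

from dataclasses import dataclass, replace

ANTIPARTICLE = {"electron": "positron", "positron": "electron",
                "kaon": "antikaon", "antikaon": "kaon"}

@dataclass(frozen=True)
class State:
    species: str
    position: tuple    # (x, y, z)
    momentum: tuple    # (px, py, pz)

def C(s):  # charge conjugation: swap particle and antiparticle
    return replace(s, species=ANTIPARTICLE[s.species])

def P(s):  # parity: reflect positions (and momenta, which are ordinary vectors)
    return replace(s, position=tuple(-x for x in s.position),
                      momentum=tuple(-p for p in s.momentum))

def T(s):  # time reversal: positions stay, momenta flip
    return replace(s, momentum=tuple(-p for p in s.momentum))

s = State("kaon", (1.0, 0.0, 0.0), (0.0, 2.0, 0.0))
for op in (C, P, T):
    assert op(op(s)) == s   # each transformation squares to the identity
print("C, P, and T each square to the identity")
```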
In the case of parity violation, it was Lee and Yang who sat down and performed a careful analysis of the problem. They discovered that there was ample experimental evidence that electromagnetism and the strong nuclear force both were invariant under P, but that the question was open as far as the weak nuclear force was concerned.
Lee and Yang were awarded the Nobel Prize in Physics in 1957; Wu should have been included among the winners, but she wasn’t.
At the end of the day, all of the would-be symmetries C, P, and T are violated in Nature, as well as any combination of two of them together.
The obvious next step is to inquire about the combination of all three: CPT. In other words, if we take some process observed in nature, switch all the particles with their antiparticles, flip right with left, and run it backward in time, do we get a process that obeys the laws of physics?
As far as any experiment yet performed can tell, CPT is a perfectly good symmetry of Nature. And it’s more than that; under certain fairly reasonable assumptions about the laws of physics, you can prove that CPT must be a good symmetry—this result is known imaginatively as the “CPT Theorem.” Of course, even reasonable assumptions might be wrong, and neither experimentalists nor theorists have shied away from exploring the possibility of CPT violation. But as far as we can tell, this particular symmetry is holding up.
CONSERVATION OF INFORMATION
Our ability to successfully define “time reversal” so that some laws of physics are invariant under it depends on one other crucial assumption: conservation of information. This is simply the idea that two different states in the past always evolve into two distinct states in the future—they never evolve into the same state. If that’s true, we say that “information is conserved,” because knowledge of the future state is sufficient to figure out what the appropriate state in the past must have been. If that feature is respected by some laws of physics, the laws are reversible, and there will be some well-defined time-reversal operation under which they can be invariant.
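Here is a minimal sketch of what conservation of information means, using two made-up update rules on a toy state space of sixteen states: a rule conserves information exactly when distinct states never collide into the same output.

```python
# A minimal sketch (not from the book): an evolution rule conserves information
# when distinct states always map to distinct states, i.e. the rule is invertible.

def reversible_step(state):
    # A toy invertible rule on integers mod 16: no two inputs share an output.
    return (3 * state + 5) % 16          # gcd(3, 16) = 1, so this is a bijection

def irreversible_step(state):
    # A toy rule that discards information: many inputs map to the same output.
    return state // 2

def conserves_information(step, states):
    outputs = [step(s) for s in states]
    return len(set(outputs)) == len(set(states))   # injective on this state space?

states = range(16)
print(conserves_information(reversible_step, states))    # True  -> reversible
print(conserves_information(irreversible_step, states))  # False -> information lost
```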
In the real world, apparent loss of information happens all the time.
To understand how reversible underlying laws give rise to macroscopic irreversibility, we must return to Boltzmann and his ideas about entropy.
8. ENTROPY AND DISORDER
In the last chapter we discussed how the underlying laws of physics work equally well forward or backward in time (suitably defined). That’s a microscopic description, in which we keep careful track of each and every constituent of a system. But very often in the real world, where large numbers of atoms are involved, we don’t keep track of nearly that much information. Instead, we make simplifications—thinking about the average color or temperature or pressure, rather than the specific position and momentum of each atom. When we think macroscopically, we forget (or ignore) detailed information about the individual atoms.
SMOOTHING OUT
The basic idea we want to understand is “how do macroscopic features of a system made of many atoms evolve as a consequence of the motion of those atoms?”
ENTROPY À LA BOLTZMANN
In the immortal words of Peter Venkman: “Back off, man, I’m a scientist.”
Boltzmann’s goal in thinking this way was to provide a basis in atomic theory for the Second Law of Thermodynamics, the statement that the entropy will always increase (or stay constant) in a closed system. The Second Law had already been formulated by Clausius and others, but Boltzmann wanted to derive it from some simple set of underlying principles. You can see how this statistical thinking leads us in the right direction—“systems tend to evolve from uncommon arrangements into common ones” bears a family resemblance to “systems tend to evolve from low-entropy configurations into high-entropy ones.”
Boltzmann was able to crack the puzzle of how to define entropy in terms of microscopic rearrangements. We use the letter W—from the German Wahrscheinlichkeit, meaning “probability” or “likelihood”—to represent the number of ways we can rearrange the microscopic constituents of a system without changing its macroscopic appearance. Boltzmann’s final step was to take the logarithm of W and proclaim that the result is proportional to the entropy.
Boltzmann’s formula for the entropy, which is traditionally denoted by S (you wouldn’t have wanted to call it E, which usually stands for energy), states that it is equal to some constant k, cleverly called “Boltzmann’s constant,” times the logarithm of W, the number of microscopic arrangements of a system that are macroscopically indistinguishable.
S = k log W
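As a concrete (and entirely illustrative) example of the formula, suppose the only macroscopic fact we track about N gas particles is how many sit in the left half of the box; then W is a binomial coefficient and S = k log W can be computed directly, with units chosen so that k = 1.

```python
from math import comb, log

# A minimal sketch (not from the book): N gas particles in a box, and the only
# macroscopic fact we track is how many are in the left half. W counts the
# microstates (which particular particles are on the left) for each macrostate,
# and Boltzmann's entropy is S = k log W. Units of k don't matter here, so k = 1.

N = 100          # illustrative number of particles
k = 1.0

for n_left in (0, 10, 25, 50):
    W = comb(N, n_left)          # ways to choose which particles sit on the left
    S = k * log(W)               # Boltzmann entropy of this macrostate
    print(f"{n_left:3d} particles on the left: W = {W:.3e}, S = {S:.1f}")

# The 50/50 macrostate has by far the most microstates, hence the highest
# entropy -- which is why a gas spontaneously spreads out rather than bunching up.
```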
BOX OF GAS REDUX
So this is the origin of the arrow of time, according to Boltzmann and his friends. We start with a set of microscopic laws of physics that are time-reversal invariant: They don’t distinguish between past and future. But we deal with systems featuring large numbers of particles, where we don’t keep track of every detail necessary to fully specify the state of the system; instead, we keep track of some observable macroscopic features. The entropy characterizes (by which we mean, “is proportional to the logarithm of”) the number of microscopic states that are macroscopically indistinguishable.
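A toy simulation along these lines (my own sketch, and stochastic rather than genuinely Newtonian): let particles hop at random between the two halves of a box and track the coarse-grained entropy of the macrostate “how many are on the left.” Starting from a low-entropy arrangement, the entropy climbs toward its maximum and then just fluctuates there.

```python
import random
from math import comb, log

# A toy model (not from the book, and stochastic rather than Newtonian): start
# with all N particles in the left half of a box, repeatedly pick a random
# particle and let it hop to the other side, and track the coarse-grained
# entropy S = log C(N, n_left) of the macrostate "n_left particles on the left".

random.seed(0)
N = 1000
on_left = [True] * N            # low-entropy starting macrostate: everything left
n_left = N

for step in range(20001):
    if step % 5000 == 0:
        S = log(comb(N, n_left))
        print(f"step {step:5d}: n_left = {n_left:4d}, S = {S:.1f}")
    i = random.randrange(N)     # pick a particle at random...
    n_left += -1 if on_left[i] else 1
    on_left[i] = not on_left[i] # ...and move it to the other half

# The entropy rises quickly from 0 toward its maximum (about N log 2, roughly 693)
# and then just jitters around it: the one-way climb is the arrow of time emerging
# from statistics, not from any asymmetry in the hopping rule itself.
```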
For the rest of this chapter we will bring to light the various assumptions that go into Boltzmann’s way of thinking about entropy, and try to decide just how plausible they are.
USEFUL AND USELESS ENERGY
A system that has the maximum entropy it can have is in equilibrium.
Once there, the system basically has nowhere else to go; it’s in the kind of configuration that is most natural for it to be in. Such a system has no arrow of time, as the entropy is not increasing (or decreasing). To a macroscopic observer, a system in equilibrium appears static, not changing at all.
Entropy measures the uselessness of a configuration of energy.
High entropy implies equilibrium, which implies that the energy is useless, and indeed we see that our piston isn’t going anywhere.
DON’T SWEAT THE DETAILS
Who decides when two specific microscopic states of a system look the same from our macroscopic point of view?
Boltzmann’s formula for entropy hinges on the idea of the quantity W, which we defined as “the number of ways we can rearrange the microscopic constituents of a system without changing its macroscopic appearance.”
In the last chapter we defined the “state” of a physical system to be a complete specification of all the information required to uniquely evolve it in time; in classical mechanics, it would be the position and momentum of every particle in the system.
Now that we are considering statistical mechanics, it’s useful to use the term microstate to refer to the precise state of a system, in contrast with the macrostate, which specifies only the system’s macroscopically observable features.
Then the shorthand definition of W is “the number of microstates corresponding to a particular macrostate.”
The process of dividing up the space of microstates of some particular physical system (gas in a box, a glass of water, the universe) into sets that we label “macroscopically indistinguishable” is known as coarse-graining.
It’s a little bit of black magic that plays a crucial role in the way we think about entropy. In Figure 45 we’ve portrayed how coarse-graining works; it simply divides up the space of all states of a system into regions (macrostates) that are indistinguishable by macroscopic observations. Every point within one of those regions corresponds to a different microstate, and the entropy associated with a given microstate is proportional to the logarithm of the area (or really volume, as it’s a very high-dimensional space) of the macrostate to which it belongs.
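A small illustration of coarse-graining (not from the book): enumerate every microstate of six two-sided particles, partition them into regions according to the one thing our macroscopic observer can measure, and assign each microstate the logarithm of its region’s size.

```python
from itertools import product
from math import log

# A minimal sketch (not from the book): coarse-grain the 2**N microstates of
# N two-sided particles (left/right half of a box) by the only thing a
# macroscopic observer measures here -- how many are on the left. Each region
# of the partition is a macrostate, and every microstate inherits the entropy
# log(size of its region).

N = 6
microstates = list(product("LR", repeat=N))     # all 2**N = 64 microstates

macrostates = {}                                # coarse-graining: region label -> microstates
for m in microstates:
    macrostates.setdefault(m.count("L"), []).append(m)

for n_left, region in sorted(macrostates.items()):
    print(f"{n_left} on the left: {len(region):2d} microstates, S = {log(len(region)):.2f}")

# The microstates ('L','R','L','R','L','R') and ('L','L','L','R','R','R') look
# identical to the coarse-grained observer, so they sit in the same region and
# are assigned the same entropy.
```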
This whole business should strike you as just a little bit funny. Two microstates belong to the same macrostate when they are macroscopically indistinguishable. But that’s just a fancy way of saying, “when we can’t tell the difference between them on the basis of macroscopic observations.” It’s the appearance of “we” in that statement that should make you nervous. Why should our powers of observation be involved in any way at all? We like to think of entropy as a feature of the world, not as a feature of our ability to perceive the world.