Kindle Notes & Highlights
Read between December 12, 2022 and September 6, 2023
DAVID: What justifies the principles of rationality? Argument, as usual. What, for instance, justifies our relying on the laws of deduction, despite the fact that any attempt to justify them logically must lead either to circularity or to an infinite regress? They are justified because no explanation is improved by replacing a law of deduction.
However, it is an interesting fact that the physical universe admits processes that create knowledge about itself, and about other things too. We may reasonably try to explain this fact in the same way as we explain other physical facts, namely through explanatory theories. You will see in Chapter 6 of The Fabric of Reality that I think that the Turing principle is the appropriate theory in this case. It says that it is possible to build a virtual-reality generator whose repertoire includes every physically possible environment. If the Turing principle is a law of physics, as I have argued …
A phenomenon is ‘fundamental’ if a sufficiently deep understanding of the world depends on understanding that phenomenon.
Modern biology does not try to define life by some characteristic physical attribute or substance – some living ‘essence’ – with which only animate matter is endowed. We no longer expect there to be any such essence, because we now know that ‘animate matter’, matter in the form of living organisms, is not the basis of life. It is merely one of the effects of life, and the basis of life is molecular. It is the fact that there exist molecules which cause certain environments to make copies of those molecules.
But all life on Earth is based on replicators that are molecules. These are called genes, and biology is the study of the origin, structure and operation of genes, and of their effects on other matter. In most organisms a gene consists of a sequence of smaller molecules, of which there are four different kinds, joined together in a chain.
Genes are in effect computer programs, expressed as sequences of A, C, G and T symbols in a standard language called the genetic code which, with very slight variations, is common to all life on Earth. (Some viruses are based on a related type of molecule, RNA, while prions are, in a sense, self-replicating protein molecules.) Special structures within each organism’s cells act as computers to execute these gene programs. The execution consists of manufacturing certain molecules (proteins) from simpler molecules (amino acids) under certain external conditions.
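A minimal sketch of this "gene as program" picture, for my own reference: a DNA string in the A, C, G, T alphabet is "executed" by translating successive three-letter codons into amino acids. The table below contains only a handful of entries from the real standard genetic code (the full table has 64 codons); the example gene is an invented toy.

```python
# Toy illustration of "genes are in effect computer programs":
# a sequence of A, C, G, T symbols, read three at a time, is a
# program that the cell executes by building a protein.

CODON_TABLE = {
    "ATG": "Met",  # methionine; also the usual start signal
    "GCT": "Ala",
    "AAA": "Lys",
    "TGG": "Trp",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(gene: str) -> list[str]:
    """Read a gene three letters at a time and build the protein."""
    protein = []
    for i in range(0, len(gene) - 2, 3):
        amino_acid = CODON_TABLE.get(gene[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGGCTAAATGGTAA"))  # ['Met', 'Ala', 'Lys', 'Trp']
```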
Typically, a gene is chemically ‘switched on’ in certain cells of the body, and then instructs those cells to manufacture the corresponding protein. For example, the hormone insulin, which controls blood sugar levels in vertebrates, is such a protein. The gene for manufacturing it is present in almost every cell of the body, but it is switched on only in certain specialized cells in the pancreas, and then only when it is needed.
At the molecular level, this is all that any gene can program its cellular computer to do: manufacture a certain chemical. But genes succeed in being replicators because these low-level chemical programs add up, through layer upon layer of complex control and feedback, to sophisticated high-level instructions. Jointly, the insulin gene and the genes involved in switching it on and off amount to a complete program for the regulation of sugar in the bloodstream.
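A deliberately crude sketch of how one low-level instruction ("manufacture this protein") plus switching logic adds up to a high-level program ("regulate blood sugar"). Every number here is invented for illustration; real regulation involves many interacting genes and feedback layers.

```python
# Toy feedback loop: the insulin gene's only trick is to make (or not
# make) one protein, yet with a switch driven by the glucose level the
# combination behaves like a program for regulating blood sugar.
# All quantities are hypothetical.

glucose = 9.0    # invented blood sugar level
TARGET = 5.0

for step in range(6):
    gene_switched_on = glucose > TARGET       # regulatory genes "decide"
    insulin = 1.0 if gene_switched_on else 0.0
    glucose -= 0.8 * insulin                  # insulin promotes glucose uptake
    print(f"step {step}: glucose={glucose:.1f}, gene on={gene_switched_on}")
```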
A gene can function as a replicator only in certain environments. By analogy with an ecological ‘niche’ (the set of environments in which an organism can survive and reproduce), I shall also use the term niche for the set of all possible environments which a given replicator would cause to make copies of it.
Not everything that can be copied is a replicator. A replicator causes its environment to copy it: that is, it contributes causally to its own copying.
For example, the insulin gene causes only one small step in the enormously complicated process of its own replication (that process being the whole life cycle of the organism). But the overwhelming majority of variants of that gene would not instruct cells to manufacture a chemical that could do the job of insulin. If the insulin genes in an individual organism’s cells were replaced by slightly different molecules, that organism would die (unless it were kept alive by other means), and would therefore fail to have offspring, and those molecules would not be copied. So whether copying takes …
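One way to see the point of the truncated passage above: in a toy model where the "environment" copies only genes whose product does its job, the functional sequence persists and broken variants do not. The WORKING sequence, mutation rate, and copy rule below are all invented stand-ins for the real biology.

```python
import random

# Toy model of "a replicator causes its environment to copy it":
# the environment copies a gene only if it encodes a working
# "protein" (crudely modelled as matching a fixed sequence).

WORKING = "ACGTACGT"   # hypothetical functional gene

def makes_working_protein(gene: str) -> bool:
    return gene == WORKING   # stand-in for "does the job of insulin"

def mutate(gene: str) -> str:
    i = random.randrange(len(gene))
    return gene[:i] + random.choice("ACGT") + gene[i + 1:]

population = [WORKING] * 10
for generation in range(5):
    offspring = []
    for gene in population:
        child = mutate(gene) if random.random() < 0.3 else gene
        if makes_working_protein(child):   # only workers get copied
            offspring.append(child)
    population = offspring or population
    print(f"gen {generation}: {len(offspring)} copies made")
```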
An organism is not a replicator: it is part of the environment of replicators – usually the most important part after the other genes. The remainder of the environment is the type of habitat that can be occupied by the organism (such as mountain tops or ocean bottoms) and the particular lifestyle within that habitat (such as hunter or filter-feeder) which enables the organism to survive for long enough for its genes to be replicated.
Organisms are not copied during reproduction; far less do they cause their own copying. They are constructed afresh according to blueprints embodied in the parent organisms’ DNA.
Genes embody knowledge about their niches. Everything of fundamental significance about the phenomenon of life depends on this property, and not on replication per se.
It is the survival of knowledge, and not necessarily of the gene or any other physical object, that is the common factor between replicating and non-replicating genes. So, strictly speaking, it is a piece of knowledge rather than a physical object that is or is not adapted to a certain niche. If it is adapted, then it has the property that once it is embodied in that niche, it will tend to remain so.
The point is that although all known life is based on replicators, what the phenomenon of life is really about is knowledge. We can give a definition of adaptation directly in terms of knowledge: an entity is adapted to its niche if it embodies knowledge that causes the niche to keep that knowledge in existence.
the point I am making here does not depend on our being able to predict what will happen, but only on the proposition that what will happen will depend on what knowledge our descendants have, and on how they choose to apply it. Thus one cannot predict the future of the Sun without taking a position on the future of life on Earth, and in particular on the future of knowledge.
The colour of the Sun ten billion years hence depends on gravity and radiation pressure, on convection and nucleosynthesis. It does not depend at all on the geology of Venus, the chemistry of Jupiter, or the pattern of craters on the Moon. But it does depend on what happens to intelligent life on the planet Earth. It depends on politics and economics and the outcomes of wars. It depends on what people do: what decisions they make, what problems they solve, what values they adopt, and on how they behave towards their children.
any theory of the structure of the universe in all but its earliest stages must take a position on what life will or will not be doing by then. There is no getting away from it: the future history of the universe depends on the future history of knowledge. Astrologers used to believe that cosmic events influence human affairs; science believed for centuries that neither influences the other. Now we see that human affairs influence cosmic events.
Life achieves its effects not by being larger, more massive or more energetic than other physical processes, but by being more knowledgeable. In terms of its gross effect on the outcomes of physical processes, knowledge is at least as significant as any other physical quantity.
Again we were too parochial, and were led to the false conclusion that knowledge-bearing entities can be physically identical to non-knowledge-bearing ones; and this in turn cast doubt on the fundamental status of knowledge. But now we have come almost full circle. We can see that the ancient idea that living matter has special physical properties was almost true: it is not living matter but knowledge-bearing matter that is physically special. Within one universe it looks irregular; across universes it has a regular structure, like a crystal in the multiverse. So knowledge is a fundamental …
if only you could observe through a multiverse telescope, life and its consequences would be obvious at a glance. You need only look for complex structures that seem irregular in any one universe, but are identical across many nearby universes. If you see any, you will have found some physically embodied knowledge.
replicator An entity that causes certain environments to make copies of it.
niche The niche of a replicator is the set of all possible environments in which the replicator would cause its own replication. The niche of an organism is the set of all possible environments and life-styles in which it could live and reproduce.
adaptation The degree to which a replicator is adapted to a niche is the degree to which it causes its own replication in that niche. More generally, an entity is adapted to its niche to the extent that it embodies knowledge that causes the niche to keep that knowledge in existence.
All present-day computers, whatever quantum-mechanical processes they may exploit, are merely different technological implementations of the same classical idea, that of the universal Turing machine. That is why the repertoire of computations available to all existing computers is essentially the same: they differ only in their speed, memory capacity and input–output devices. That is to say, even the lowliest of today’s home computers can be programmed to solve any problem, or render any environment, that our most powerful computers can, provided only that it is given additional memory, …
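The "single machine, specified once and for all" picture can be made concrete with a minimal Turing machine simulator. The rule-table format below is my own invention for this sketch, and the example program just flips the bits of its input.

```python
# A minimal Turing machine: a finite control, a tape, and a table of
# transition rules. Every classical computer, however fast, shares
# the computational repertoire of this kind of machine.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape, head = list(tape), 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).rstrip(blank)
        if head == len(tape):          # extend the tape on demand
            tape.append(blank)
        state, write, move = rules[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("step limit exceeded")

# Example rule table: flip every bit of a binary string, then halt.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run(flip, "10110"))  # -> 01001
```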
Quantum computation is more than just a faster, more miniaturized technology for implementing Turing machines. A quantum computer is a machine that uses uniquely quantum-mechanical effects, especially interference, to perform wholly new types of computation that would be impossible, even in principle, on any Turing machine and hence on any classical computer. Quantum computation is therefore nothing less than a distinctively new way of harnessing nature.
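Interference, the effect singled out here, can be simulated for a single qubit with a two-entry state vector. One Hadamard gate splits |0⟩ into two equal branches; a second Hadamard makes the branches interfere, so the amplitudes for |1⟩ cancel exactly and the machine returns to |0⟩ with certainty.

```python
import numpy as np

# State-vector sketch of quantum interference.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

ket0 = np.array([1.0, 0.0])            # the state |0>

branched = H @ ket0        # two equal branches: amplitudes (0.707, 0.707)
recombined = H @ branched  # the branches interfere back to |0>

print(np.round(branched, 3))    # [0.707 0.707]
print(np.round(recombined, 3))  # [1. 0.] -- the |1> paths cancelled
```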
The earliest inventions for harnessing nature were tools powered by human muscles. They revolutionized our ancestors’ situation, but they suffered from the limitation that they required continuous human attention and effort during every moment of their use. Subsequent technology overcame that limitation: human beings managed to domesticate certain animals and plants, turning the biological adaptations in those organisms to human ends. Thus the crops could grow, and the guard dogs could watch, even while their owners slept. Another new type of technology began when human beings went beyond …
Quantum computation, which is now in its early infancy, is a distinct further step in this progression. It will be the first technology that allows useful tasks to be performed in collaboration between parallel universes. A quantum computer would be capable of distributing components of a complex task among vast numbers of parallel universes, and then sharing the results.
Just how efficiently can given aspects of reality be rendered? What computations, in other words, are practicable in a given time and under a given budget? This is the basic question of computational complexity theory which, as I have said, is the study of the resources that are required to perform given computational tasks.
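A back-of-the-envelope illustration of where complexity theory draws the tractability line, assuming a hypothetical machine executing a billion steps per second:

```python
# Why the line is drawn at exponential growth: a polynomial task
# stays practicable as inputs grow; an exponential one does not.

RATE = 1e9  # steps per second (assumed for illustration)

for n in (20, 40, 60, 80):
    poly = n ** 2 / RATE    # seconds for an n^2 algorithm
    expo = 2 ** n / RATE    # seconds for a 2^n algorithm
    print(f"n={n}: n^2 takes {poly:.1e}s, 2^n takes {expo:.1e}s")

# By n=60 the exponential task already needs over 36 years;
# by n=80, tens of millions of years.
```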
In order to predict what a typical classical system will do after only a moderate period, one would have to determine its initial state to an impossibly high precision. Thus it is said that in principle, the flap of a butterfly’s wing in one hemisphere of the planet could cause a hurricane in the other hemisphere. The infeasibility of weather forecasting and the like is then attributed to the impossibility of accounting for every butterfly on the planet. However, real hurricanes and real butterflies obey quantum theory, not classical mechanics. The instability that would rapidly amplify slight …
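The classical instability described in this (truncated) passage can be reproduced with the textbook logistic map x → rx(1 − x) at r = 4: two starting points differing by one part in a billion disagree completely within a few dozen steps, which is the classical basis of the butterfly-effect claim.

```python
# Classical chaos in one line of dynamics: the logistic map at r = 4.
r = 4.0
x, y = 0.400000000, 0.400000001   # initial states differing by 1e-9

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        # the tiny initial difference grows roughly exponentially
        print(f"step {step}: |x - y| = {abs(x - y):.3e}")
```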
The laws of quantum mechanics require an object that is initially at a given position (in all universes) to ‘spread out’ in the multiverse sense. For instance, a photon and its other-universe counterparts all start from the same point on a glowing filament, but then move in trillions of different directions. When we later make a measurement of what has happened, we too become differentiated as each copy of us sees what has happened in our particular universe. If the object in question is the Earth’s atmosphere, then a hurricane may have occurred in 30 per cent of universes, say, and not in the remaining 70 per cent …
This parallel-universe multiplicity is the real reason for the unpredictability of the weather. Our inability to measure the initial conditions accurately is completely irrelevant. Even if we knew the initial conditions perfectly, the multiplicity …
Evidently there are computational tasks that are ‘intractable’ if we attempt to perform them using any existing computer, but which would be tractable if we were to use quantum-mechanical objects as special-purpose computers.
But the point of universality is that it should be possible to program a single machine, specified once and for all, to perform any possible computation, or render any physically possible environment. In 1985 I proved that under quantum physics there is a universal quantum computer.
A universal quantum computer could perform any computation that any other quantum computer (or any Turing-type computer) could perform, and it could render any finite physically possible environment in virtual reality. Moreover, it has since been shown that the time and other resources that it would need to do these things would not increase exponentially with the size or detail of the environment being rendered, so the relevant computations would be tractable by the standards of complexity theory.
The classical theory of computation, which was the unchallenged foundation of computing for half a century, is now obsolete except, like the rest of classical physics, as an approximation scheme. The theory of computation is now the quantum theory of computation.
From a fundamental standpoint it does not matter how useful quantum computation turns out to be, nor does it matter whether we build the first universal quantum computer next week, or centuries from now, or never. The quantum theory of computation must in any case be an integral part of the world-view of anyone who seeks a fundamental understanding of reality. What quantum computers tell us about connections between the laws of physics, universality, and apparently unrelated strands of explanation of the fabric of reality, we can discover – and are already discovering – by studying them …
quantum computation Computation that requires quantum-mechanical processes, especially interference. In other words, computation that is performed in collaboration between parallel universes.
exponential computation A computation whose resource requirements (such as the time required) increase by a roughly constant factor for each additional digit in the input.
tractable/intractable (Rough-and-ready rule:) A computational task is deemed tractable if the resources required to perform it do not increase exponentially with the number of digits in the input.
chaos The instability in the motion of most classical systems. A small difference between two initial states gives rise to exponentially growing deviations between the two resulting trajectories. But reality obeys quantum and not classical physics. Unpredictability caused by chaos is in general swamped …
universal quantum computer A computer that could perform any computation that any other quantum computer could perform, and render any finite, physically possible environment in virtual reality.
decoherence If different branches of a quantum computation, in different universes, affect the environment differently, then interference is reduced and the computation may fail. Decoherence is the principal obstacle to the practical realization of more powerful quantum computers.
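A density-matrix sketch of this glossary entry, using a simple dephasing model (zeroing the off-diagonal terms stands in for the environment having "looked"): with coherence intact, interference returns the superposition to |0⟩ deterministically; decohered, the outcome is a coin toss and the computation is spoiled.

```python
import numpy as np

# Decoherence kills interference: compare a coherent and a dephased
# superposition (|0>+|1>)/sqrt(2) after a Hadamard gate.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)

rho = np.outer(plus, plus)              # coherent superposition
coherent = H @ rho @ H.T
print(np.round(np.diag(coherent), 3))   # [1. 0.]: interference intact

rho_dec = np.diag(np.diag(rho))         # environment erased off-diagonals
decohered = H @ rho_dec @ H.T
print(np.round(np.diag(decohered), 3))  # [0.5 0.5]: a 50/50 coin toss
```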
The laws of physics permit computers that can render every physically possible environment without using impractically large resources. So universal computation is not merely possible, as required by the Turing principle, it is also tractable.
Quantum phenomena may involve vast numbers of parallel universes and therefore may not be capable of being efficiently simulated within one universe. However, this strong form of universality still holds because quantum computers can efficiently render every physically possible quantum environment, even when vast numbers of universes are interacting. Quantum computers can also efficiently solve certain mathematical problems, such as factorization, which are classically intractable, and can implement types of cryptography which are classically impossible. Quantum computation is a qualitatively new way of harnessing nature.
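The factorization claim can be unpacked with the classical skeleton of Shor's method: factoring N reduces to finding the period r of aˣ mod N, and it is only the period-finding step, done below by brute force, that a quantum computer speeds up via interference. The choice a = 2 and the helper names are mine; the brute-force search grows exponentially with the number of digits in N.

```python
from math import gcd

def find_period(a: int, N: int) -> int:
    """Smallest r > 0 with a**r % N == 1 (requires gcd(a, N) == 1)."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor(N: int, a: int = 2) -> tuple[int, int]:
    r = find_period(a, N)        # the step a quantum computer does fast
    assert r % 2 == 0, "unlucky choice of a: the period must be even"
    p = gcd(pow(a, r // 2, N) - 1, N)
    return p, N // p

print(factor(15))   # period of 2 mod 15 is 4, yielding the factors (3, 5)
```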
Thus, abstract mathematical entities we think we are familiar with can nevertheless surprise or disappoint us. They can pop up unexpectedly in new guises, or disguises. They can be inexplicable, and then later conform to a new explanation. So they are complex and autonomous, and therefore by Dr Johnson’s criterion we must conclude that they are real. Since we cannot understand them either as being part of ourselves or as being part of something else that we already understand, but we can understand them as independent entities, we must conclude that they are real, independent entities.
Virtual reality – which can make one environment seem to be another – underlines the fact that when observation is the ultimate arbiter between theories, there can never be any certainty that an existing explanation, however obvious, is even remotely true. But when proof is the arbiter, it is supposed, there can be certainty.
It is said that the rules of logic were first formulated in the hope that they would provide an impartial and infallible method of resolving all disputes. This hope can never be fulfilled. The study of logic itself revealed that the scope of logical deduction as a means of discovering the truth is severely limited. Given substantive assumptions about the world, one can deduce conclusions; but the conclusions are no more secure than the assumptions.
In particular, the core idea that mathematical knowledge and scientific knowledge come from different sources, and that the ‘special’ source of mathematics confers absolute certainty upon it, is to this day accepted uncritically by virtually all mathematicians. Nowadays they call this source mathematical intuition, but it plays exactly the same role as Plato’s ‘memories’ of the realm of Forms.

