Kindle Notes & Highlights
Read between December 12, 2022 and September 6, 2023
solipsism: The theory that only one mind exists and that what appears to be external reality is only a dream taking place in that mind.
criticism: Rational criticism compares rival theories with the aim of finding which of them offers the best explanations according to the criteria inherent in the problem.
science: The purpose of science is to understand reality through explanations. The characteristic (though not the only) method of criticism used in science is experimental testing.
But the refutation of inductivism, and also the real solution of the problem of induction, depends on recognizing that science is a process not of deriving predictions from observations, but of finding explanations. We seek explanations when we encounter a problem with existing ones. We then embark on a problem-solving process. New explanatory theories begin as unjustified conjectures, which are criticized and compared according to the criteria inherent in the problem. Those that fail to survive this criticism are abandoned. The survivors become the new prevailing theories, some of which are
…
Where Galileo differed was in his conception of the relationship between physical reality on the one hand, and human ideas, observations and reason on the other. He believed that the universe could be understood in terms of universal, mathematically formulated laws, and that reliable knowledge of these laws was accessible to human beings if they applied his method of mathematical formulation and systematic experimental testing. As he put it, ‘the Book of Nature is written in mathematical symbols’. This was in conscious comparison with that other Book on which it was more conventional to rely.
Problem-solving, after all, is a process that takes place entirely within human minds. Galileo may have seen the world as a book in which the laws of nature are written in mathematical symbols. But that is strictly a metaphor; there are no explanations in orbit out there with the planets.
Behaviourism is the doctrine that it is not meaningful to explain human behaviour in terms of inner mental processes. To behaviourists, the only legitimate psychology is the study of people’s observable responses to external stimuli. Thus they draw exactly the same boundary as solipsists, separating the human mind from external reality; but while solipsists deny that it is meaningful to reason about anything outside that boundary, behaviourists deny that it is meaningful to reason about anything inside.
There is a large class of related theories here, but we can usefully regard them all as variants of solipsism. They differ in where they draw the boundary of reality (or the boundary of that part of reality which is comprehensible through problem-solving), and they differ in whether, and how, they seek knowledge outside that boundary. But they all consider scientific rationality and other problem-solving to be inapplicable outside the boundary – a mere game.
There is an assumption built into this question. It is that theories can be classified in a hierarchy, ‘mathematical’, ‘scientific’, ‘philosophical’, of decreasing intrinsic reliability. Many people take the existence of this hierarchy for granted, despite the fact that these judgements of comparative reliability depend entirely on philosophical arguments, arguments that classify themselves as quite unreliable!
Explanations are not justified by the means by which they were derived; they are justified by their superior ability, relative to rival explanations, to solve the problems they address.
The rejection of ‘mere’ explanations on the grounds that they are not justified by any ultimate explanation inevitably propels one into futile searches for an ultimate source of justification. There is no such source.
If, according to the simplest explanation, an entity is complex and autonomous, then that entity is real.
The complexity of a piece of information is defined in terms of the computational resources (such as the length of the program, the number of computational steps or the amount of memory) that a computer would need if it was to reproduce that piece of information.
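One way to make that definition concrete is the program-length measure from algorithmic information theory; the notation below (K, U, p) is my own illustration rather than the book's, which also counts running time and memory among the relevant resources.

```latex
% Program-length complexity of a piece of information x, relative to a
% fixed universal computer U: the length of the shortest program p that
% makes U output x. Time and space versions count computational steps
% and memory instead of program length.
K_U(x) = \min \{\, |p| : U(p) = x \,\}
```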
Observational evidence is indeed evidence, not in the sense that any theory can be deduced, induced or in any other way inferred from it, but in the sense that it can constitute a genuine reason for preferring one theory to another.
Given a shred of a theory, or rather, shreds of several rival theories, the evidence is available out there to enable us to distinguish between them. Anyone can search for it, find it and improve upon it if they take the trouble. They do not need authorization, or initiation, or holy texts. They need only be looking in the right way – with fertile problems and promising theories in mind. This open accessibility, not only of evidence but of the whole mechanism of knowledge acquisition, is a key attribute of Galileo’s conception of reality.
Thus physical reality is self-similar on several levels: among the stupendous complexities of the universe and multiverse, some patterns are nevertheless endlessly repeated. Earth and Jupiter are in many ways dramatically dissimilar planets, but they both move in ellipses, and they are made of the same set of a hundred or so chemical elements (albeit in different proportions), and so are their parallel-universe counterparts. The evidence that so impressed Galileo and his contemporaries also exists on other planets and in distant galaxies. The evidence being considered at this moment by
…
The very existence of general, explanatory theories implies that disparate objects and events are physically alike in some ways.
There are laws and explanations, reductive and emergent. There are descriptions and explanations of the Big Bang and of subnuclear particles and processes; there are mathematical abstractions; fiction; art; morality; shadow photons; parallel universes. To the extent that these symbols, images and theories are true – that is, they resemble in appropriate respects the concrete or abstract things they refer to – their existence gives reality a new sort of self-similarity, the self-similarity we call knowledge.
Dr Johnson’s criterion (my formulation): If it can kick back, it exists. A more elaborate version is: If, according to the simplest explanation, an entity is complex and autonomous, then that entity is real.
self-similarity: Some parts of physical reality (such as symbols, pictures or human thoughts) resemble other parts. The resemblance may be concrete, as when the images in a planetarium resemble the night sky; more importantly, it may be abstract, as when a statement in quantum theory printed in a book correctly explains an aspect of the structure of the multiverse.
What computers can or cannot compute is determined by the laws of physics alone, and not by pure mathematics.
A universal computer is usually defined as an abstract machine that can mimic the computations of any other abstract machine in a certain well-defined class. However, the significance of universality lies in the fact that universal computers, or at least good approximations to them, can actually be built, and can be used to compute not just each other’s behaviour but the behaviour of interesting physical and abstract entities.
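As a rough illustration of what ‘mimicking the computations of any other machine in a well-defined class’ means, here is a small Python sketch of a fixed simulator that runs any machine handed to it as a transition table; the example machine and names are mine, not the book’s.

```python
# A single fixed program that mimics any machine described to it as data.
# Here the described machines are simple Turing machines given as
# transition tables: (state, symbol) -> (symbol to write, move, next state).

def simulate(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run the described machine until it halts (or we give up)."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit of its input, then halt at the blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(simulate(flipper, "10110"))          # prints 01001_
```

The point of universality is that the same `simulate` function never changes: only the table describing the mimicked machine does.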
I define the repertoire of a virtual-reality generator as the set of real or imaginary environments that the generator can be programmed to give the user the experience of. My question about the ultimate limits of virtual reality can be stated like this: what constraints, if any, do the laws of physics impose on the repertoires of virtual-reality generators?
it seems that any virtual-reality generator must have at least three principal components: a set of sensors (which may be nerve-impulse detectors) to detect what the user is doing, a set of image generators (which may be nerve-stimulation devices), and a computer in control.
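Read as an architecture, that description amounts to a simple loop: sense what the user does, compute the environment’s response, render it back. The following Python sketch is my own paraphrase of that loop, not anything from the book.

```python
# Three components: sensors, image generators, and a controlling computer
# whose program (environment_rules) embodies the rendered environment's laws.

class VirtualRealityGenerator:
    def __init__(self, sensors, image_generators, environment_rules):
        self.sensors = sensors                    # detect what the user is doing
        self.image_generators = image_generators  # stimulate the user's senses
        self.rules = environment_rules            # predictive theory of the environment

    def run(self, state, steps):
        for _ in range(steps):
            action = self.sensors.read()          # what did the user just do?
            state = self.rules(state, action)     # predict how the environment responds
            self.image_generators.render(state)   # convey that response to the user
        return state

# Minimal stubs so the sketch runs end to end.
class KeyboardSensors:
    def read(self):
        return "look_left"

class ConsoleImages:
    def render(self, state):
        print("rendering:", state)

vr = VirtualRealityGenerator(KeyboardSensors(), ConsoleImages(),
                             environment_rules=lambda s, a: s + [a])
vr.run(state=[], steps=3)
```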
The program in a virtual-reality generator embodies a general, predictive theory of the behaviour of the rendered environment. The other components deal with keeping track of what the user is doing and with the encoding and decoding of sensory data; these, as I have said, are relatively trivial functions. Thus if the environment is physically possible, rendering it is essentially equivalent to finding rules for predicting the outcome of every experiment that could be performed in that environment. Because of the way in which scientific knowledge is created, ever more accurate predictive rules
…
Imagination is a straightforward form of virtual reality. What may not be so obvious is that our ‘direct’ experience of the world through our senses is virtual reality too. For our external experience is never direct; nor do we even experience the signals in our nerves directly – we would not know what to make of the streams of electrical crackles that they carry. What we experience directly is a virtual-reality rendering, conveniently generated for us by our unconscious minds from sensory data plus complex inborn and acquired theories (i.e. programs) about how to interpret them.
Every last scrap of our external experience is of virtual reality. And every last scrap of our knowledge – including our knowledge of the non-physical worlds of logic, mathematics and philosophy, and of imagination, fiction, art and fantasy – is encoded in the form of programs for the rendering of those worlds on our brain’s own virtual-reality generator.
The heart of a virtual-reality generator is its computer, and the question of what environments can be rendered in virtual reality must eventually come down to the question of what computations can be performed.
We are not investigating what sorts of virtual-reality generator can be built, or even, necessarily, what sorts of virtual-reality generator will ever be built, by human engineers. We are investigating what the laws of physics do and do not allow in the way of virtual reality. The reason why this is important has nothing to do with the prospects for making better virtual-reality generators. It is that the relationship between virtual reality and ‘ordinary’ reality is part of the deep, unexpected structure of the …
It can be proved that, for every environment in the repertoire of a given virtual-reality generator, there are infinitely many Cantgotu environments that it cannot render.
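The proof is a diagonal argument: list the renderable environments, then construct one that differs from the nth entry at its nth step. Below is a toy Python sketch of that construction; the enumeration and the behaviour names are my own invention.

```python
# Diagonal construction of a Cantgotu environment. Assume the generator's
# repertoire can be enumerated: renderable(n) is the n-th environment it can
# render, modelled here as a function from a time step to behaviour.

def cantgotu(renderable):
    """An environment that differs from the n-th renderable one at step n."""
    def environment(step):
        # Do the opposite of what renderable(step) does at time `step`,
        # so this environment cannot equal any entry in the enumeration.
        return "dance" if renderable(step)(step) == "sit" else "sit"
    return environment

# Toy enumeration: environment n sits for its first n steps, then dances.
def renderable(n):
    return lambda step: "sit" if step < n else "dance"

diagonal = cantgotu(renderable)
print([diagonal(step) for step in range(5)])   # ['sit', 'sit', 'sit', 'sit', 'sit']
# At step n the diagonal environment sits while environment n dances,
# so it lies outside the enumerated repertoire.
```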
Thus the feasibility of a universal virtual-reality generator depends on the existence of a universal computer – a single machine that can calculate anything that can be calculated. As I have said, this sort of universality was first studied not by physicists but by mathematicians. They were trying to make precise the intuitive notion of ‘computing’ (or ‘calculating’ or ‘proving’) something in mathematics. They did not take on board the fact that mathematical calculation is a physical process (in particular, as I have explained, it is a virtual-reality rendering process), so it is impossible
…
If a question is non-computable that does not mean that it has no answer, or that its answer is in any sense ill-defined or ambiguous. On the contrary, it means that it definitely has an answer. It is just that physically there is no way, even in principle, of obtaining that answer (or more precisely, since one could always make a lucky, unverifiable guess, of proving that it is the answer).
In virtual-reality terms: no physically possible virtual-reality generator can render an environment in which answers to non-computable questions are provided to the user on demand. Such environments are of the Cantgotu type. And conversely, every Cantgotu environment corresponds to a class of mathematical questions (‘what would happen next in an environment defined in such-and-such a way?’) which it is physically impossible to answer.
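The classic instance of such a question is the halting problem. The sketch below of the standard contradiction is illustrative rather than taken from the book, and the function names are hypothetical.

```python
# Suppose a program `halts(program, argument)` correctly decided whether
# running `program` on `argument` would ever halt. Then the program below
# would have to both halt and not halt when applied to itself.

def halts(program, argument):
    """Hypothetical perfect halting decider -- no such program can exist."""
    raise NotImplementedError

def contrary(program):
    # If `program` would halt on itself, loop forever; otherwise halt at once.
    if halts(program, program):
        while True:
            pass
    return "halted"

# contrary(contrary) halts exactly when halts(contrary, contrary) reports
# that it does not: a contradiction. So the question "does this program
# halt?" has a definite answer for every program, yet no physical process
# can be guaranteed to deliver that answer on demand.
```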
The Turing principle: It is possible to build a virtual-reality generator whose repertoire includes every physically possible environment.
This is the strongest form of the Turing principle. It not only tells us that various parts of reality can resemble one another. It tells us that a single physical object, buildable once and for all (apart from maintenance and a supply of additional memory when needed), can perform with unlimited accuracy the task of describing or mimicking any other part of the multiverse.
This is just the sort of self-similarity that is necessary if, according to the hope I expressed in Chapter 1, the fabric of reality is to be truly unified and comprehensible. If the laws of physics as they apply to any physical object or process are to be comprehensible, they must be capable of being embodied in another physical object – the knower. It is also necessary that processes capable of creating such knowledge be physically possible. Such processes are called science. Science depends on experimental testing, which means physically rendering a law’s predictions and comparing it with
…
The laws of physics, by conforming to the Turing principle, make it physically possible for those same laws to become known to physical objects. Thus, the laws of physics may be said to mandate their own comprehensibility.
Now I return to the question I posed in the previous chapter, namely whether, if we had only a virtual-reality rendering based on the wrong laws of physics to learn from, we should expect to learn the wrong laws. The first thing to stress is that we do have only virtual reality based on the wrong laws to learn from! As I have said, all our external experiences are of virtual reality, generated by our own brains. And since our concepts and theories (whether inborn or learned) are never perfect, all our renderings are indeed inaccurate. That is to say, they give us the experience of an
…
In the Popperian scheme of things, explanations always lead to new problems which in turn require further explanations.
There is nevertheless a comprehensive self-similarity in physical reality that is expressed in the Turing principle: it is possible to build a virtual-reality generator whose repertoire includes every physically possible environment. So a single, buildable physical object can mimic all the behaviours and responses of any other physically possible object or process. This is what makes reality comprehensible.
what justifies our relying on our best explanations as guides to practical decision-making? More generally, whatever criteria we used to judge scientific theories, how could the fact that a theory satisfied those criteria today possibly imply anything about what will happen if we rely on the theory tomorrow? This is the modern form of the ‘problem of induction’.
CRYPTO-INDUCTIVIST: What justifies the prediction, if it isn’t the evidence?
DAVID: Argument.
CRYPTO-INDUCTIVIST: Argument?
DAVID: Only argument ever justifies anything – tentatively, of course. All theorizing is subject to error, and all that. But still, argument can sometimes justify theories. That is what argument is for.
according to Popperian scientific methodology, crucial experiments play a pivotal role in deciding between it and its rivals. The rivals were refuted; it survived.
CRYPTO-INDUCTIVIST: So what exactly was it about those actual past outcomes that justified the prediction, as opposed to other possible past outcomes which might well have justified the contrary prediction?
DAVID: It was that the actual outcomes refuted all the rival theories, and corroborated the theory that now prevails.
When Popper speaks of ‘rival theories’ to a given theory, he does not mean the set of all logically possible rivals: he means only the actual rivals, those proposed in the course of a rational controversy.
Anyway, I was explaining why it’s not so strange that the reliability of a theory should depend on what false theories people have proposed in the past. Even inductivists speak of a theory being reliable or not, given certain ‘evidence’. Well, Popperians might speak of a theory being the best available for use in practice, given a certain problem-situation. And the most important features of a problem-situation are: what theories and explanations are in contention, what arguments have been advanced, and what theories have been refuted. ‘Corroboration’ is not just the confirmation of the
…
In the Popperian picture of scientific progress, it is not observations but problems, controversies, theories and criticism that are primary. Experiments are designed and performed only to resolve controversies. Therefore only experimental results that actually do refute a theory – and not just any theory, it must have been a genuine contender in a rational controversy – constitute ‘corroboration’. And so it is only those experiments that provide evidence for the reliability of the winning theory.
More generally, it is a principle of rationality that theories are postulated in order to solve problems. Therefore any postulate which solves no problem is to be rejected. That is because a good explanation qualified by such a postulate becomes a bad explanation.
Nothing in the concepts of ‘rational argument’ or ‘explanation’ relates the future to the past in any special way. Nothing is postulated about anything ‘resembling’ anything.
In general, perverse but unrefuted theories which one can propose off the cuff fall roughly into two categories. There are theories that postulate unobservable entities, such as particles that do not interact with any other matter. They can be rejected for solving nothing (‘Occam’s razor’, if you like). And there are theories, like yours, that predict unexplained observable anomalies. They can be rejected for solving nothing and spoiling existing solutions. It is not, I hasten to add, that they conflict with existing observations. It is that they remove the explanatory power from existing
…

