Kindle Notes & Highlights
‘Corroboration’ is not just the confirmation of the winning theory. It requires the experimental refutation of rival theories.
Only experimental results that actually do refute a theory – and not just any theory: it must have been a genuine contender in a rational controversy – constitute 'corroboration'.
That qualification is, in effect, a new theory, but you have given no argument either against the prevailing theory of my gravitational properties, or in favour of the new one.
It does not solve – or even purport to solve – any current problem, nor have you suggested a new, interesting problem that it could solve.
Knowing is not the same as understanding.
Theories postulating anomalies without explaining them are less likely than their rivals to make true predictions.
More generally, it is a principle of rationality that theories are postulated in order to solve problems. Therefore any postulate which solves no problem is to be rejected.
But in any case, surely it is more interesting to argue about what the truth is, than about what some particular thinker, however great, did or did not think.
Well, vacuously, yes, inasmuch as any theory about the future would assert that it resembled the past in some sense.
In short, if there is no argument in favour of a postulate, then it is not reliable. Past, present or future. Anomaly or no anomaly. Period.
the more we consider the implications of your proposed anomaly, the more unanswered questions we find. This is not just a matter of your theory being incomplete. These questions are dilemmas.
In general, perverse but unrefuted theories which one can propose off the cuff fall roughly into two categories. There are theories that postulate unobservable entities, such as particles that do not interact with any other matter. They can be rejected for solving nothing (‘Occam’s razor’, if you like). And there are theories, like yours, that predict unexplained observable anomalies. They can be rejected for solving nothing and spoiling existing solutions.
they remove the explanatory power from existing theories by asserting that the predictions of those theories have exceptions, but not explaining how.
What justifies the principles of rationality? Argument, as usual.
They are justified because no explanation is improved by replacing a law of deduction.
However, it is an interesting fact that the physical universe admits processes that create knowledge about itself, and about other things too.
The misconception is about the very nature of argument and explanation.
It is not based on anything or justified by anything. And it doesn’t have to be, because its purpose is to solve problems – to show that a given problem is solved by a given explanation.
In trying to associate life with a basic physical concept (albeit the wrong one, motion), he recognized that life is a fundamental phenomenon of nature.
More generally, a replicator is any entity that causes certain environments to copy it. Not all replicators are biological, and not all replicators are molecules.
But all life on Earth is based on replicators that are molecules. These are called genes,
Genes are in effect computer programs, expressed as sequences of A, C, G and T symbols in a standard language called the genetic code which, with very slight variations, is common to all life on Earth.
At the molecular level, this is all that any gene can program its cellular computer to do: manufacture a certain chemical.
The degree of adaptation of a replicator depends not only on what that replicator does in its actual environment, but also on what a vast number of other objects, most of which do not exist, would do, in a vast number of environments other than the actual one.
Organisms are not copied during reproduction; far less do they cause their own copying. They are constructed afresh according to blueprints embodied in the parent organisms’ DNA.
shows that the shape of each nose is caused by that gene, and not by the shape of any previous nose.
So an organism is the immediate environment which copies the real replicators: the organism’s genes.
First, it is to kick the rendered environment and to be kicked back in return.
Second, it is to provide the intention behind the rendering.
Genes embody knowledge about their niches.
It is the survival of knowledge, and not necessarily of the gene or any other physical object, that is the common factor between replicating and non-replicating genes. So, strictly speaking, it is a piece of knowledge rather than a physical object that is or is not adapted to a certain niche.
what the phenomenon of life is really about is knowledge. We can give a definition of adaptation directly in terms of knowledge: an entity is adapted to its niche if it embodies knowledge that causes the niche to keep that knowledge in existence.
The conventional argument for the insignificance of life gives too much weight to bulk quantities like size, mass and energy.
The human race as a whole (or, if you like, its stock of memes) probably already has enough knowledge to destroy whole planets, if its survival depended on doing so.
Life achieves its effects not by being larger, more massive or more energetic than other physical processes, but by being more knowledgeable.
It is not living matter but knowledge-bearing matter that is physically special.
A quantum computer is a machine that uses uniquely quantum-mechanical effects, especially interference, to perform wholly new types of computation that would be impossible, even in principle, on any Turing machine and hence on any classical computer.
What computations, in other words, are practicable in a given time and under a given budget? This is the basic question of computational complexity theory, which, as I have said, is the study of the resources that are required to perform given computational tasks.
What counts for ‘tractability’, according to the standard definitions, is not the actual time taken to multiply a particular pair of numbers, but the fact that the time does not increase too sharply when we apply the same method to ever larger numbers.
The implication for computer prediction is that planetary motions, the epitome of classical predictability, are untypical classical systems.
The flapping of butterflies’ wings does not, in reality, cause hurricanes because the classical phenomenon of chaos depends on perfect determinism, which does not hold in any single universe.
Unpredictability has nothing to do with the available computational resources. Classical systems are unpredictable (or would be, if they existed) because of their sensitivity to initial conditions.
Quantum systems are not chaotic, but are unpredictable because they behave differently in different universes, and so appear random in most universes.
Intractability, by contrast, is a computational...
One particle, a thousand; two, a million; three, a billion; four, a trillion; and so on. Thus the number of different histories that we have to calculate if we want to predict what will happen in such cases increases exponentially with the number of interacting particles. That is why the task of computing how a typical quantum system will behave is well and truly intractable.
Intractability is in principle a greater impediment to universality than unpredictability could ever be.
If it requires so much computation to work out what will happen in an interference experiment, then the very act of setting up such an experiment and measuring its outcome is tantamount to performing a complex computation.
Yet since we could readily obtain its result just by performing this experiment, it is not really intractable after all.

