Kindle Notes & Highlights
For example, the philosopher John Locke wrote in 1689 that the mind is like ‘white paper’ on to which sensory experience writes, and that that is where all our knowledge of the physical world comes from.
On each occasion when that prediction comes true, and provided that it never fails, the probability that it will always come true is supposed to increase.
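The 'increase' this passage describes can be made concrete with a small sketch. It assumes Laplace's rule of succession as the inductivist's bookkeeping, which is my choice of formula and not one used in the text: after n confirmations and no failures, the probability assigned to the prediction holding on the next k occasions is (n + 1) / (n + k + 1), which rises with n.

```python
# A toy version of the inductivist bookkeeping described above.
# Assumption (mine, not the book's): a uniform prior over the unknown success
# rate, i.e. Laplace's rule of succession.

def prob_next_k_hold(n: int, k: int) -> float:
    """Probability that the prediction holds on the next k occasions,
    given n confirmations and no failures so far: (n + 1) / (n + k + 1)."""
    return (n + 1) / (n + k + 1)

for n in (1, 10, 100, 1000):
    print(n, round(prob_next_k_hold(n, k=100), 3))
# 1 0.02, 10 0.099, 100 0.502, 1000 0.909 -- the assigned probability creeps
# upwards with each run of confirmations, which is the 'increase' at issue.
```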
As the ancient philosopher Heraclitus remarked, ‘No man ever steps in the same river twice, for it is not the same river and he is not the same man.’
But one thing that all conceptions of the Enlightenment agree on is that it was a rebellion, and specifically a rebellion against authority in regard to knowledge.
What was needed for the sustained, rapid growth of knowledge was a tradition of criticism. Before the Enlightenment, that was a very rare sort of tradition: usually the whole point of a tradition was to keep things the same.
If an explanation could easily explain anything in the given field, then it actually explains nothing.
That is what a good explanation will do for you: it makes it harder for you to fool yourself.
Feeling insignificant because the universe is large has exactly the same logic as feeling inadequate for not being a cow. Or a herd of cows. The universe is not there to overwhelm us; it is our home, and our resource. The bigger the better.
People are significant in the cosmic scheme of things; and the Earth’s biosphere is incapable of supporting human life.
That implies, in particular, that progress in science cannot exceed a certain limit defined by the biology of the human brain. And we must expect to reach that limit sooner rather than later. Beyond it, the world stops making sense (or seems to).
And so, again, everything that is not forbidden by laws of nature is achievable, given the right knowledge.
The ability to create and use explanatory knowledge gives people a power to transform nature which is ultimately not limited by parochial factors, as all other adaptations are, but only by universal laws.
This is another reason that ‘one per cent inspiration and ninety-nine per cent perspiration’ is a misleading description of how progress happens: the ‘perspiration’ phase can be automated – just as the task of recognizing galaxies on astronomical photographs was. And the more advanced technology becomes, the shorter is the gap between inspiration and automation.
You do not become less of a person if you lose a limb in an accident; it is only if you lose your brain that you do.
That means that a typical child born in the United States today is more likely to die as a result of an astronomical event than a plane crash.
Actually, Paley did not know the overall purpose of the mouse (though we do now – see ‘Neo-Darwinism’ below). But even a single eye would suffice to make Paley’s triumphant point – namely that the evidence of apparent design for a purpose is not only that the parts all serve that purpose, but that if they were slightly altered they would serve it less well, or not at all. A good design is hard to vary:
Ideas can be replicators too. For example, a good joke is a replicator: when lodged in a person’s mind, it has a tendency to cause that person to tell it to other people, thus copying it into their minds.
Most ideas are not replicators: they do not cause us to convey them to other people. Nearly all long-lasting ideas, however, such as languages, scientific theories and religious beliefs, and the ineffable states of mind that constitute cultures such as being British, or the skill of performing classical music, are memes (or ‘memeplexes’ – collections of interacting memes).
So, both human knowledge and biological adaptations are abstract replicators: forms of information which, once they are embodied in a suitable physical system, tend to remain so while most variants of them do not.
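As a small illustration of 'tend to remain so while most variants of them do not', here is a toy variation-and-selection loop. The joke, the punchline test, the retelling fan-out and the mutation rate are all invented for the sketch; nothing in it comes from the book.

```python
# Toy replicator dynamics: a 'joke' is copied from mind to mind. Copying
# occasionally garbles it, and only versions that still cause retelling
# (here, versions that keep the punchline intact) get passed on.

import random

random.seed(0)
PUNCHLINE = "punchline"

def retell(idea: str) -> list[str]:
    """An idea causes itself to be retold only if its punchline survived."""
    if PUNCHLINE not in idea:
        return []                      # garbled variants are not passed on
    copies = []
    for _ in range(2):                 # told to two new people
        if random.random() < 0.1:      # occasional imperfect copying
            copies.append(idea.replace(PUNCHLINE, "garbled"))
        else:
            copies.append(idea)
    return copies

minds = ["setup ... punchline"]        # one person knows the joke
for generation in range(10):
    minds = [copy for idea in minds for copy in retell(idea)][:1000]

intact = sum(PUNCHLINE in idea for idea in minds)
print(f"{intact} of {len(minds)} current copies still carry the replicator")
```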
In other words, the problem has been not that the world is so complex that we cannot understand why it looks as it does, but that it is so simple that we cannot yet understand it.
His book is primarily about one particular emergent phenomenon, the mind – or, as he puts it, the ‘I’. He asks whether the mind can consistently be thought of as affecting the body – causing it to do one thing rather than another, given the all-embracing nature of the laws of physics. This is known as the mind–body problem. For instance, we often explain our actions in terms of choosing one action rather than another, but our bodies, including our brains, are completely controlled by the laws of physics, leaving no physical variable free for an ‘I’ to affect in order to make such a choice.
Our own brains are, likewise, computers which we can use to learn about things beyond the physical world, including pure mathematical abstractions. This ability to understand abstractions is an emergent property of people which greatly puzzled the ancient Athenian philosopher Plato.
He noticed that the theorems of geometry – such as Pythagoras’ theorem – are about entities that are never experienced: perfectly straight lines with no thickness, intersecting each other on a perfect plane to make a perfect triangle. These are not possible objects of any observation. And yet people knew about them – and not just superficially: at the time, such knowledge was the deepest knowledge, of anything, that human beings had ever had. Where did it come from?
But causation and the laws of physics are not themselves physical objects. They are abstractions, and our knowledge of them comes – just as for all other abstractions – from the fact that our best explanations invoke them.
Since human brains are physical objects obeying the laws of physics, and since the Analytical Engine is a universal simulator, it could be programmed to think, in every sense that humans can (albeit very slowly and requiring an impractically vast number of punched cards).
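The 'universal simulator' claim can be illustrated with a sketch of a different universal formalism, a Turing-machine runner; the formalism and the example transition table below are my choices, not the book's. The point is that one fixed program can run any transition table it is handed, just as the Analytical Engine could in principle be programmed, however slowly, to perform any computation.

```python
# A minimal universal runner: the same fixed function executes ANY
# Turing-machine transition table supplied to it.

def run_turing_machine(rules, tape, state='start', blank='_', max_steps=10_000):
    """rules maps (state, symbol) -> (new_symbol, move, new_state);
    move is -1 (left) or +1 (right); execution stops in state 'halt'."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == 'halt':
            break
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return ''.join(tape[i] for i in sorted(tape))

# Example transition table: unary increment (append a '1' to a run of '1's).
increment = {
    ('start', '1'): ('1', +1, 'start'),
    ('start', '_'): ('1', +1, 'halt'),
}
print(run_turing_machine(increment, '111'))  # -> '1111'
```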
But his test is rooted in the empiricist mistake of seeking a purely behavioural criterion: it requires the judge to come to a conclusion without any explanation of how the candidate AI is supposed to work. But, in reality, judging whether something is a genuine AI will always depend on explanations of how it works.
it is to explain how the observable features of the object came about. In the case of the Turing test, we deliberately ignore the issue of how the knowledge to design the object was created. The test is only about who designed the AI’s utterances: who adapted its utterances to be meaningful – who created the knowledge in them? If it was the designer, then the program is not an AI. If it was the program itself, then it is an AI.
The field of artificial (general) intelligence has made no progress because there is an unsolved philosophical problem at its heart: we do not understand how creativity works. Once that has been solved, programming it will not be difficult. Even artificial evolution may not have been achieved yet, despite appearances. There the problem is that we do not understand the nature of the universality of the DNA replication system.
If no members of the group have changed their preferences about the relative merits of pizza and hamburger, then the group’s preference between pizza and hamburger must not be deemed to have changed either. This constraint can again be regarded as a matter of rationality: if no members of the group change any of their opinions about a particular comparison, then neither can the group.
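To see why such a constraint bites, consider a concrete decision rule. The sketch below uses a Borda count (an option scores 2 for a top placement, 1 for the middle, 0 for the bottom, summed over members) and adds a hypothetical third option, 'pasta'; both are my assumptions, not the book's example. No member's pizza-versus-hamburger opinion differs between the two profiles, yet the group ranking flips, so this rule violates the constraint just described.

```python
# Borda-count aggregation, used only to illustrate the 'independence'
# constraint above; the third option 'pasta' is hypothetical.

from collections import defaultdict

def borda(profile, options):
    """Group ranking by Borda count: an option scores (n-1) points for a top
    placement, (n-2) for second, ..., 0 for last, summed over all members."""
    scores = defaultdict(int)
    for ranking in profile:
        for position, option in enumerate(ranking):
            scores[option] += len(options) - 1 - position
    return sorted(options, key=lambda o: -scores[o]), dict(scores)

options = ['pizza', 'hamburger', 'pasta']

# Profile 1: members 1-2 put pizza above hamburger, members 3-4 the reverse.
profile_1 = [
    ['pizza', 'pasta', 'hamburger'],
    ['pizza', 'pasta', 'hamburger'],
    ['hamburger', 'pizza', 'pasta'],
    ['hamburger', 'pizza', 'pasta'],
]

# Profile 2: every member keeps the SAME pizza-vs-hamburger opinion;
# only the placement of pasta has moved.
profile_2 = [
    ['pizza', 'hamburger', 'pasta'],
    ['pizza', 'hamburger', 'pasta'],
    ['hamburger', 'pasta', 'pizza'],
    ['hamburger', 'pasta', 'pizza'],
]

print(borda(profile_1, options))  # group puts pizza above hamburger
print(borda(profile_2, options))  # group now puts hamburger above pizza
```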