Kindle Notes & Highlights
Read between March 7 - August 9, 2018
The development of integrated circuits which resemble the structure of the brain is called neuromorphic computing.
The information inside most integrated circuits is purely digital. That means all information is represented in strings of binary "bits", each bit being either 0 or 1.
Once converted into a digital format, the input data can be stored in our brains as digital information. Digital information does not decay over time: a digital photograph stored as a file on a computer does not fade like a physical photograph. Digital information can be transmitted without loss and degradation. In contrast, analogue data will inevitably degrade each time it is copied (try making successive photocopies of photocopies).
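To illustrate this point to myself, here is a small sketch (my own toy model, not from the book): each analogue copy picks up a little random noise, while each digital copy reproduces the bits exactly, so only the analogue version degrades across generations.

```python
import random

# Toy model: the same content stored two ways.
analogue = [0.20, 0.80, 0.50, 0.90, 0.10]   # analogue signal levels
digital = [0, 1, 0, 1, 0]                   # the same content as bits

def copy_analogue(signal, noise=0.02):
    """Every analogue copy picks up a little random error."""
    return [level + random.uniform(-noise, noise) for level in signal]

def copy_digital(bits):
    """A digital copy reproduces the bits exactly."""
    return list(bits)

for _ in range(100):                        # a photocopy of a photocopy of a ...
    analogue = copy_analogue(analogue)
    digital = copy_digital(digital)

print("analogue after 100 copies:", [round(x, 2) for x in analogue])
print("digital after 100 copies: ", digital)   # identical to the original
```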
Even though your brain is composed of a material with the consistency of rice pudding, you would not feel as though you were made of rice pudding — your consciousness would feel substrate-independent, independent from the physical world, and insulated from physical damage and the ravages of time.
The thresholding mechanism prevents the accumulation of electrical noise, both in microprocessors and in our brains. It is as if our thoughts exist in a realm above the physical world, and insulated from physical damage.
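A minimal sketch of the thresholding idea (again my own toy example, not the author's): a signal is passed through many noisy stages; if each stage snaps the value back to a clean 0 or 1, the noise never accumulates, whereas without thresholding the value drifts away.

```python
import random

def noisy_stage(value, noise=0.05):
    """One stage of transmission or storage that adds a small random error."""
    return value + random.uniform(-noise, noise)

def threshold(value):
    """Snap the signal back to a clean logic level, as a gate or neuron does."""
    return 1.0 if value >= 0.5 else 0.0

drifting, restored = 1.0, 1.0
for _ in range(1000):
    drifting = noisy_stage(drifting)               # errors accumulate freely
    restored = threshold(noisy_stage(restored))    # errors are wiped out each stage

print("without thresholding:", round(drifting, 3))  # has wandered away from 1.0
print("with thresholding:   ", restored)            # still exactly 1.0
```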
This means that the behaviour of the multi-component system could never be predicted from just studying the single component. In practice, it means that the behaviour of the multi-component system can be wildly different — completely unrecognisable, in fact — from the behaviour of the single component.
Multi-component nonlinear systems which behave in these surprising ways are said to exhibit emergent behaviour. Human intelligence and consciousness are typical of this emergent behaviour, emerging from the interactions of billions of neurons.
The science which considers these nonlinear systems — and their emergent behaviour — is called complexity. We shall consider complexity in this chapter.
Complexity — and its implications for science — was largely ignored by researchers until the second half of the twentieth century. The catalyst for complexity research was the invention of the digital computer. Suddenly, it became possible to simulate complex systems in the laboratory.
The first example of this surprising emergent behaviour in computer simulations came with the discovery of chaos. In the 1960s, the meteorologist Edward Lorenz, working at MIT, performed a very simplified simulation of weather systems on a computer and found...
2x² − 1
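The short highlight above looks like the map x → 2x² − 1, a textbook example of a chaotic system. A minimal sketch (my own, assuming that reading of the highlight) iterates it from two almost identical starting points and shows the same sensitive dependence on initial conditions that Lorenz found in his simplified weather simulations:

```python
def chaotic_map(x):
    """One step of the map x -> 2*x**2 - 1, which is chaotic on [-1, 1]."""
    return 2 * x * x - 1

a, b = 0.3, 0.3 + 1e-9          # two starting points differing by one billionth
for step in range(1, 41):
    a, b = chaotic_map(a), chaotic_map(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a = {a:+.6f}   b = {b:+.6f}   gap = {abs(a - b):.1e}")

# Within a few dozen steps the microscopic difference in the starting
# conditions has grown until the two trajectories are completely unrelated.
```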
"In some cases the thoughts may be decisions, or what are perceived to be the exercise of will. In this light, chaos provides a mechanism that allows for free will within a world governed by deterministic laws."
In this book, we are particularly interested in emergence as it is clear that consciousness is a form of emergent behaviour which results from the interaction of billions of neurons.
Many scientists would feel uneasy about — or even reject outright — this notion of emergence, the notion that the behaviour of a system cannot be reduced to considering the behaviour of one of its component parts. That is because the philosophy seems to strike against the way we have done science for 400 years.
René Descartes described his own scientific method in 1638: "To divide all the difficulties under examination into as many parts as possible, and as many as were required to solve them in the best way, and to conduct my thoughts in a given order, beginning with the simplest and most easily understood objects, and gradually ascending, as it were step by step, to the knowledge of the most complex."
This philosophy that a full understanding of behaviour can be achieved by considering the smallest elements of a system is called reductionism.
However, the scientist who believes in the importance of emergent behaviour would say the philosophy of reductionism is like saying: "I understand how a brain works because I understand how a neuron works", or "I understand how a computer works because I understand how a transistor works".
The difference of opinion between the reductionists and those who believe in the importance of emergence is often portrayed as an ideological conflict. So...
Well, as is usually the case when you have two sides with strong arguments, both sides are right — and both sides are wrong. It all depends on the behaviour of the co...
Specifically, it depends on whether the components which make up the system ar...
Reductionism is a convincing argument — with just one problem: it is only true for linear components.
As soon as you shift to consider nonlinear components (such as neurons and transistors), the argument breaks down.
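A small sketch of my own to make the distinction concrete: for a linear component, the response to a combined input equals the sum of the responses to the separate inputs, so the whole really is the sum of its parts; for a nonlinear component, such as a neuron with a firing threshold, that superposition fails.

```python
def linear_component(x):
    """Output proportional to input, e.g. an ideal resistor."""
    return 3 * x

def nonlinear_component(x):
    """Fires only above a threshold, e.g. a simplified neuron or transistor."""
    return 1 if x >= 1.0 else 0

a, b = 0.5, 0.75

# Linear: the response to the combined input equals the sum of separate responses.
print(linear_component(a + b), "vs", linear_component(a) + linear_component(b))   # 3.75 vs 3.75

# Nonlinear: the combined input fires the component, the separate inputs do not.
print(nonlinear_component(a + b), "vs", nonlinear_component(a) + nonlinear_component(b))  # 1 vs 0
```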
"The interaction of components on one scale can lead to complex global behavior on a larger scale that in general cannot be deduced from knowledge of the individual components."
By simplifying the system, you lose some aspect of the overall behaviour. In that case, your only option is to consider the large-scale emergent behaviour. On the plus side, that emergent behaviour due to nonlinearity can be very interesting indeed (consciousness, for example).
It is believed that there is a 1,500-mile-long crystal of iron at the centre of the Earth, and geologists know how it behaves because they know how single atoms of iron behave: the behaviour scales linearly.
However, sometimes large numbers of atoms can combine to produce surprising behaviour which is highly nonlinear. The study of these materials is called condensed matter physics.
Another example of emergent behaviour when large numbers of particles interact is superconductivity.
The phenomenon of superconductivity occurs when electrons act together and can freely move through a conductor as though there is no resistance...
As stated earlier, you cannot simplify large complex systems which result from the interaction of nonlinear components — the whole thing becomes a mathematical nightmare.
This is the reason why — as stated earlier — scientists only became able to tackle the problems of complexity when digital computers were invented. Maybe at this point you are starting to realise why complexity presents such a problem for conventional science.
Melanie Mitchell is a professor of computer science who has worked at the Santa Fe Institute, the world's leading centre for complexity research. In 2009, Mitchell wrote a book entitled Complexity: A Guided Tour, which is an introduction to and overview of complexity. Chapter Two of that book provides a clear explanation of nonlinearity. The mathematical "nightmare" that nonlinearity poses for reductionism is described by Mitchell:
"Linearity is a reductionist's dream, and nonlinearity can sometimes be a reductionist's nightmare."
According to Melanie Mitchell, reductionism can be described as "The whole is equal to the sum of its parts".
Giulio Tononi has described a numeric value which can be calculated from the connectivity of information in a network. According to Tononi, this numeric value — which is called Φ (the Greek letter "phi") — represents the consciousness of the network. This calculation could be applied to anything, from an iPhone to the Milky Way, and the calculated value of Φ would reveal whether or not the object was conscious.
After reading more about Tononi's work, it seems rather clear to my mind that Φ is a measure of the emergent behaviour of a system: the more emergent the behaviour, the more likely the system is to be conscious.
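Computing Tononi's Φ properly is far more involved, but the flavour of "integration" can be seen with a much simpler, related quantity (my own toy illustration, not Tononi's actual calculation): the total correlation H(X1) + H(X2) − H(X1, X2), which is zero for parts that behave independently and positive when the parts carry information jointly.

```python
import math
import random
from collections import Counter

def entropy(samples):
    """Shannon entropy (in bits) of a list of observed states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def integration(pairs):
    """Total correlation of two units: H(X1) + H(X2) - H(X1, X2).
    Zero if the units are independent; positive if they act as a whole."""
    first = [a for a, _ in pairs]
    second = [b for _, b in pairs]
    return entropy(first) + entropy(second) - entropy(pairs)

random.seed(0)
trials = 10000

# A tightly coupled two-unit system: the second unit always copies the first.
coupled = [(bit, bit) for bit in (random.randint(0, 1) for _ in range(trials))]

# A non-integrated system: two units flipping independent coins.
independent = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(trials)]

print("coupled system:    ", round(integration(coupled), 3))      # close to 1 bit
print("independent system:", round(integration(independent), 3))  # close to 0 bits
```

On this toy measure the coupled pair scores high and the independent pair scores roughly zero, which echoes the idea that the parts of a conscious system must act as an integrated whole rather than independently.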

