Kindle Notes & Highlights
If we manage to combine a universal economic safety net with strong communities and meaningful pursuits, losing our jobs to the algorithms might ...
Notwithstanding the danger of mass unemployment, what we should worry about even more is the shift in authority from humans to algorithms, which might destroy any remaining faith in the liberal story ...
The liberal story cherishes human liberty as its number one value. It argues that all authority ultimately stems from the free will of individual humans, as it is expressed in their feelings, desires and choices.
In politics, liberalism believes that the voter knows best. It therefore upholds ...
In economics, liberalism maintains that the customer is always right. It therefore hails ...
In personal matters, liberalism encourages people to listen to themselves, be true to themselves, and follow their hearts – as long as they do not infringe on the liberties of others. This ...
In Western political discourse the term ‘liberal’ is sometimes used today in a much narrower partisan sense, to denote those who support specific causes like gay marriage, gun control and abortion. Yet most so-called conservatives also embrace the broad liberal world view. Especially in the United States, both Republicans and Democrats should occasionally take a break from their heated quarrels to remind themselves ...
‘There is no such thing as society. There is [a] living tapestry of men and women … and the quality of our lives will depend upon how much each of us is prepared to take responsibility for ourselves.’
Referendums and elections are always about human feelings, not about human rationality.
If democracy were a matter of rational decision-making, there would be absolutely no reason to give all people equal voting rights – or perhaps any voting rights.
In the wake of the Brexit vote, eminent biologist Richard Dawkins protested that the vast majority of the British public – including himself – should never have been asked to vote in the referendum, because they lacked the necessary background in economics and political science.
Democracy assumes that human feelings reflect a mysterious and profound ‘free will’, that this ‘free will’ is the ultimate source of authority, and that while some people are more intelligent than others, all humans are equally free.
The liberal belief in the feelings and free choices of individuals is neither natural nor very ancient. For thousands of years people believed that authority came from divine laws rather than from the human heart, and that we should therefore sanctify the word of God rather than human liberty. Only in the last few centuries did the source of authority shift from celestial deities to flesh-and-blood humans.
Soon authority might shift again – from humans to algorithms. Just as divine authority was legitimised by religious mythologies, and human authority was justified by the liberal story, so the coming technological revolution might establish the authority of Big Data algorithms, while undermining the very idea of individual freedom.
Feelings are biochemical mechanisms that all mammals and birds use in order to quickly calculate probabilities of survival and reproduction. Feelings aren’t based on intuition, inspiration or freedom – they are based on calculation.
Feelings of sexual attraction arise when other biochemical algorithms calculate that a nearby individual offers a high probability of successful mating, social bonding, or some other coveted goal. Moral feelings such as outrage, guilt or forgiveness derive from neural mechanisms that evolved to enable group cooperation.
Feelings are thus not the opposite of rationality – they embody evolutionary rationality.
We usually fail to realise that feelings are in fact calculations, because the rapid process of calculation occurs far below our threshold of awareness.
We don’t feel the millions of neurons in the brain computing probabilities of survival and reproduction, so we erroneously believe that our fear of snakes, our choice of sexual mates, or our opinions about the European Union ...
When the biotech revolution merges with the infotech revolution, it will produce Big Data algorithms that can monitor and understand my feelings much better than I can, and then authority will probably shift from humans to computers.
As scientists gain a deeper understanding of the way humans make decisions, the temptation to rely on algorithms is likely to increase. Hacking human decision-making will not only make Big Data algorithms more reliable, it will simultaneously make human feelings less reliable. As governments and corporations succeed in hacking the human operating system, we will be exposed to a barrage of precision-guided manipulation, advertisement and propaganda. It might become so easy to manipulate our opinions and emotions that we will be forced to rely on algorithms in the same way that a pilot suffering ...
Christian and Muslim theology similarly focus on the drama of decision-making, arguing that everlasting salvation or damnation depends on making the right choice.
As authority shifts from humans to algorithms, we may no longer see the world as the playground of autonomous individuals struggling to make the right choices. Instead, we might perceive the entire universe as a flow of data, see organisms as little more than biochemical algorithms, and believe that humanity’s cosmic vocation is to create an all-encompassing data-processing system – and then merge into it.
People might object that algorithms could never make important decisions for us, because important decisions usually involve an ethical dimension, and algorithms don’t understand ethics. Yet there is no reason to assume that algorithms won’t be able to outperform the average human even in ethics. Already today, as devices like smartphones and autonomous vehicles undertake decisions that used to be a human monopoly, they start to grapple with the same kind of ethical problems that have bedevilled humans for millennia.
Up till now, these arguments have had embarrassingly little impact on actual behaviour, because in times of crisis humans all too often forget about their philosophical views and follow their emotions and gut instincts instead.
Levite
The moral of the parable is that people’s merit should be judged by their actual behaviour, rather than by their religious affiliation.
Human emotions trump philosophical theories in countless other situations. This makes the ethical and philosophical history of the world a rather depressing tale of wonderful ideals and less than ideal behaviour. How many Christians actually turn the other cheek, how many Buddhists actually rise above egoistic obsessions, and how many Jews actually love their neighbours as themselves? That’s just the way natural selection has shaped Homo sapiens.
Computer algorithms, however, have not been shaped by natural selection, and they have neither emotions nor gut instincts. Hence in moments of crisis they could follow ethical guidelines much better than humans – provided we find a way to code ethics in precise numbers and statistics. If ...
infernal
Similarly, if your self-driving car is programmed to swerve to the opposite lane in order to save the two kids in its path, you can bet your life this is exactly what it will do. Which means that when designing their self-driving car, Toyota or Tesla will be transforming a theoretical problem in the philosophy of ethics into a practical problem of engineering.
Given that human drivers kill more than a million people each year, that isn’t such a tall order.
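To make that "practical problem of engineering" concrete, here is a minimal, purely illustrative sketch in Python of what coding an ethical rule "in precise numbers and statistics" could look like. The Outcome class, the harm weights and the swerve-or-stay scenario below are hypothetical assumptions for illustration, not any real Toyota or Tesla system.

```python
# Illustrative sketch only: a toy version of "ethics coded in precise numbers"
# for the swerve-or-stay dilemma described above. All names, weights and
# scenarios are hypothetical; no real manufacturer's system is implied.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str               # e.g. "stay_in_lane" or "swerve"
    pedestrians_at_risk: int  # people in the car's current path
    passengers_at_risk: int   # people inside the car


def expected_harm(outcome: Outcome, passenger_weight: float = 1.0) -> float:
    """Reduce an ethical trade-off to a single number.

    passenger_weight = 1.0 values every life equally; a value above 1.0
    would privilege the car's own passengers. The ethical stance lives
    entirely in this one parameter.
    """
    return outcome.pedestrians_at_risk + passenger_weight * outcome.passengers_at_risk


def choose_action(outcomes: list[Outcome]) -> str:
    # The car follows whatever rule was coded in, every single time:
    # it simply picks the action with the lowest expected harm.
    return min(outcomes, key=expected_harm).action


if __name__ == "__main__":
    dilemma = [
        Outcome("stay_in_lane", pedestrians_at_risk=2, passengers_at_risk=0),
        Outcome("swerve", pedestrians_at_risk=0, passengers_at_risk=1),
    ]
    print(choose_action(dilemma))  # prints "swerve"
```

Changing a single number such as passenger_weight changes the car's "morality", which is the sense in which a theoretical question in the philosophy of ethics becomes a practical question of engineering, and also why the machine, unlike a human driver, will apply its coded answer without hesitation in the moment of crisis.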
fallible
Yet the real problem with robots is exactly the opposite. We should fear them because they will probably always obey their masters and never rebel.
There is nothing wrong with blind obedience, of course, as long as the robots happen to serve benign masters.
Nevertheless, before we rush to develop and deploy killer robots, we need to remind ourselves that the robots always reflect and amplify the qualities of their code. If the code is restrained and benign – the robots will probably be a huge improvement over the average human soldier.
Big Data algorithms might also empower a future Big Brother, so that we might end up with an Orwellian surveillance regime in which all individuals are monitored all the time.
In the late twentieth century democracies usually outperformed dictatorships because democracies were better at data-processing. Democracy diffuses the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given twentieth-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all the information fast enough and make the right decisions. This is part of the reason why the Soviet Union made far worse decisions than the United States ...
However, soon AI might swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. Indeed, AI might make centralised systems far more efficient than diffused systems, because machine learning works better the more information it can analyse.
Democracy in its present form cannot survive the merger of biotech and infotech. Either democracy will successfully reinvent itself in a radically new form, or humans will come to live in ‘digital dictatorships’.
Instead of just collective discrimination, in the twenty-first century we might face a growing problem of individual discrimination.
We will increasingly rely on algorithms to make decisions for us, but it is unlikely that the algorithms will start to consciously manipulate us. They won’t have any consciousness. Science fiction tends to confuse intelligence with consciousness, and assume that in order to match or surpass human intelligence, computers will have to develop consciousness.
The basic plot of almost all movies and novels about AI revolves around the magical moment when a computer or a robot gains consciousness.
Intelligence is the ability to solve problems. Consciousness is the ability to feel things such as pain, joy, love and anger. We tend to confuse the two because in humans and other mammals intelligence goes hand in hand with consciousness.
Mammals solve most problems by feeling things. Computers, however, solve problems in a very different way.
Just as airplanes fly faster than birds without ever developing feathers, so computers may come to solve problems much better than mammals without ever developing feelings.
True, AI will have to analyse human feelings accurately in order to treat human illnesses, identify human terrorists, recommend human mates and navigate a street full of human pedestrians. But it could do so without having any feelings of its own. An algorithm does not need to feel joy, anger or fear in order to recognise the different biochemical patterns of joyful, angry or frightened apes.

