Calum Chace's Blog, page 14
January 30, 2017
Future Bites 3 – Abundance accelerated
The third in a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.
As promised, this one is more optimistic.
Most professional drivers have lost their jobs, and although many have found new ones, they rarely pay anything like as much as the drivers used to earn. A host of other job categories are becoming the preserve of machines, including call centre operatives and radiographers. A few people still cling onto the notion that new types of jobs will be created to replace the old ones taken by machines, but most accept that the game is up. The phrase “Economic Singularity” is in widespread use.
Pollsters report what everyone already knows: there is a rising tide of anger. Crime is soaring, and street protests have turned violent. Populist politicians are blaming all sorts of minorities, and while nobody really believes them, many suspend their disbelief in order to give themselves some kind of hope.
The government knows that it must act quickly. In desperation it enacts legislation which was ridiculed just a few months previously.
It offers a separate, higher level of unemployment benefit to people who willingly give up their jobs to others. In addition to the elevated payments, these so-called “job sacrificers” are allowed to live in their existing homes, with bills and maintenance paid for by the government.
In addition, they receive free access to a new entertainment service which allows them to stream a wide range of music, films, and video games. This new service is funded by a consortium of American and Chinese tech giants who now occupy all of the top ten positions in global rankings of companies by enterprise value thanks to their enormously popular AI-powered services. (Netflix was acquired by one of them for a gigantic premium to stop it protesting.)
Governments around the world are in negotiations with the tech giants and other business leaders about making some of the basic needs of life free to jobless people, including food, clothing, housing and transport. They argue that innovation will continue to improve the quality and performance of each product and service thanks to the remaining demand for luxury versions from those who are still employed, many of whom are earning enormous sums of money.
It has not escaped the attention of policy makers that a gulf is opening up between the jobless and those in work. Nobody has yet suggested a generally acceptable solution.
* This un-forecast is not a prediction. Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this. It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning. Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t. In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.


January 14, 2017
Future Bites 2 – Populism paves the way for something worse
The second in what looks like becoming a series of un-forecasts* – little glimpses of what may lie ahead in the century of two singularities.
The third one will be more optimistic. Honest.
During President Trump’s five years in office, corporate taxes were slashed and federal spending on infrastructure projects was boosted. Companies and individuals were exhorted (and sometimes extorted) to buy American, and imports were cut by tariff and non-tariff barriers. The impact was profound. Initially, US GDP rose sharply as firms repatriated hundreds of billions of dollars of profits from their foreign subsidiaries, and jobs were created to carry out the infrastructure projects.
But the government spending was inefficient, and there were persistent reports of large-scale corruption, some of it involving members of the Trump family. Cross-border trade and investment slumped as more and more countries retaliated against US protectionism.
More importantly, job growth was constrained and then outweighed by the beginnings of cognitive automation, and the unmistakeable signs of widespread and lasting technological unemployment.
By the end of Trump’s term, inflation was rising fast, along with the national debt. Unemployment was at 15%, and regional military conflicts were becoming both chronic and acute as America had withdrawn from its role as the global peacekeeper. Americans were increasingly scared, and they looked for a scapegoat. President Trump declined the Republican Party’s fretful offer to be its candidate again in 2020, and railed against (and frequently sued) anyone who criticised his track record, blaming Muslims, Mexicans, and the covert activities of “internal traitors”, whom he declined to identify.
Polls showed the Republicans heading for electoral disaster, and a tight contest between a reluctant Michelle Obama and a rising new party which called for law and order, a clamp-down on dissent and protest, internment for certain racial minorities, and a major increase in military expenditure. Hundreds of thousands of newly unemployed people participated in mass rallies, wearing armbands and giving identical salutes to the party’s garish flag.
* This un-forecast is not a prediction. Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this. It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning. Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t. In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.
January 3, 2017
Betting on technological unemployment
Daniel Lemire is a Canadian professor of computer science. He believes that cognitive automation will not cause lasting unemployment. I believe the opposite, as I have written in various places, including this blog post and my book, The Economic Singularity.
Neither Daniel nor I has a crystal ball, and we both recognise that we could be wrong. But we have both thought long and hard about the prospect, and we are both fairly confident in our predictions. So after chatting about the issue online for a while, we have agreed a bet.
There are currently around 1.7m long-haul truck drivers in the US. If that number falls to 250,000 or fewer by the end of 2030, then Daniel will pay $100 to a charity of my choice. If not, then I will make the charitable donation.
This is my second long bet (see here for the first). I did not expect that becoming a futurist would also make me a gambler!


December 31, 2016
A dozen AI-related forecasts for 2017
Machines will equal or surpass human performance in more cognitive and motor skills. For instance, speech recognition in noisy environments, and aspects of NLP – Natural Language Processing. Google subsidiary DeepMind will be involved in several of the breakthroughs.
Unsupervised learning in neural networks will be the source of some of the most impressive results.
In silico models of the brains of some very small animals will be demonstrated. Some prominent AI researchers will predict the arrival of strong AI – Artificial General Intelligence, or AGI – in just a few decades.
Speech will become an increasingly common way for humans to interact with computers. Amazon’s early lead with Alexa will be fiercely challenged by Google, Microsoft, Facebook and Apple.
Some impressive case studies of AI systems saving significant costs and raising revenues will cause CEOs to “get” AI, and start demanding that their businesses use it. Companies will start to appoint CAIOs – Chief AI Officers.
Self-driving vehicles (Autos) will continue to demonstrate that they are ready for prime time. They will operate successfully in a wide range of weather conditions. Countries will start to jockey for the privilege of being the first jurisdiction to permit fully autonomous vehicles throughout their territory. There will be some accidents, and controversy over their causes.
Some multi-national organisations will replace their translators with AIs.
Some economists will cling to the Reverse Luddite Fallacy, continuing to deny that cognitive automation could cause lasting unemployment because that is not what has happened in the past. Others will demand that governments implement drastic changes in the education system so that people can be re-trained when they lose their jobs. But more and more people will come to accept that many if not most people are going to be unemployed and unemployable within a generation or so, and that we may have to de-couple incomes from jobs.
As a result, the debate about Universal Basic Income – UBI – will become more realistic, as people realise that subsistence incomes will not suffice. Think tanks will be established to study the problem and suggest solutions.
AI systems will greatly reduce the incidence of fake news.
There will be further security scares about the Internet of Things, and some proposed consumer applications will be scaled back. But careful attention to security issues will enable successful IoT implementations in high-value infrastructural contexts like railways and large chemical processing plants. The term “fourth industrial revolution” will continue to be applied – unhelpfully – to the IoT.
2016 was supposed to be the year when VR finally came of age. It wasn’t, partly because the killer app is games, and hardcore gamers like to spend hours on a session, and the best VR gear is too heavy for that. Going out on a limb: that problem won’t be solved in 2017 either.


December 30, 2016
AI in 2016: a dozen highlights
March: AlphaGo combines deep reinforcement learning with deep neural networks to beat the best human player of the board game Go. [Article]
April: Nvidia unveils a “supercomputer for AI and deep learning”. With a price tag of $129k, it delivers 170 teraflops, and is 12 times more powerful than the company’s 2015 offering. Nvidia’s share price continues its skyward trajectory. [Article]
April: Researchers from Microsoft and several Dutch institutions create a new Rembrandt. Not a copy of an existing picture, but a new image in the exact style of the master, 3-D printed to replicate his brush-strokes. [Article]
September: DeepMind unveils WaveNet, a convolutional neural net which produces the most realistic computer-generated speech achieved to date. [Article]
September: Google unveils an image captioning system that achieves 93.9% accuracy on the ImageNet classification task, and makes it available as an open-source model in its TensorFlow software library. [Article]
September: Google, Facebook, Amazon, IBM and Microsoft join forces to create the Partnership on Artificial Intelligence to Benefit People and Society, an organisation intended to facilitate collaboration and ensure transparency and safety. [Article]
September: Uber launches trials of self-driving taxis in Pittsburgh, open to the public. [Article] It was beaten to the punch by NuTonomy, a much smaller company in Singapore. [Article] A month later, a self-driving truck operated by Otto, a group of ex-Googlers acquired by Uber, delivers 50,000 beers from a brewery to a customer 120 miles away. [Article]
September: The Economic Singularity is published, with encouraging reviews.
December 29, 2016
Reviewing last year’s AI-related forecasts
This time last year I made some forecasts about how AI would change, and how it would change us. It’s time to look back and see how those forecasts for 2016 panned out.
Not a bad result: seven unambiguous yeses, four mixed, and one outright no. Here are the forecasts (you can see the original article here).
AlphaGo is the big one: it caught most people by surprise, and is still seen as one of the major landmarks in AI development, along with Deep Blue beating Kasparov in 1997 and Watson beating Jennings in 2011. Admittedly AlphaGo had already beaten excellent human Go players in 2015, but most observers agreed with Lee Sedol’s confident prediction that he would win in March.
My least successful forecast was that Google would re-launch Glass. It didn’t. Instead, 2016 was the year when smart watches reached peak hype and then faded again. I remain confident that AI-powered head-up displays for consumers will be back, whether or not they are called Glass.
There was a significant development regarding Google’s robot companies, but it was a negative one: Boston Dynamics was quietly put up for sale.
Intel admitted that it was moving from a tick-tock rhythm of chip development to a slower tic-tac-toe one, but Nvidia stormed into the breach, positioning itself as the Intel for AI, declaring rapid advances and scoring a vertigo-inducing stock market performance.
The Internet of Things did hit the headlines in October, but for the wrong reasons, when a multitude of connected devices were commandeered for a botnet attack. The IoT is increasingly being mis-labelled as the Fourth Industrial Revolution – see here. Grrr.
Next up (tomorrow), a review of 2016’s AI highlights.


December 18, 2016
Future Bites 1
The first in what may or may not become a series of un-forecasts*, little glimpses of what may lie ahead in the century of two singularities.
It’s 2025 and self-driving trucks, buses, taxis and delivery vans are the norm. Almost all of America’s five million professional drivers are out of work. They used to earn white-collar salaries for their blue-collar work, which means it is now virtually impossible for them to earn similar incomes. A small minority have re-trained and become coders, or virtual reality architects or something, but most are on welfare, and/or earning much smaller incomes in the gig economy. And they are angry.
The federal government, fearful of social unrest (or at least disastrous electoral results), steps in to replace 80% of their income, guaranteed for two years. This calms the drivers’ anger, but other people on welfare are protesting, demanding to know why their benefit levels are so much lower.
Meanwhile, many thousands of the country’s 1.3m lawyers are being laid off. And their salaries were much higher. The government knows it cannot fund 80% replacement of those incomes, but the lawyers are a vociferous bunch.
And there are doctors, journalists, warehouse managers, grocery store workers…
* This un-forecast is not a prediction. Predictions are almost always wrong, so we can be pretty confident that the future will not turn out exactly like this. It is intended to make the abstract notion of technological unemployment more real, and to contribute to scenario planning. Failing to plan is planning to fail: if you have a plan, you may not achieve it, but if you have no plan, you most certainly won’t. In a complex environment, scenario development is a valuable part of the planning process. Thinking through how we would respond to a sufficient number of carefully thought-out scenarios could well help us to react more quickly when we see the beginnings of what we believe to be a dangerous trend.


November 20, 2016
Discussing AI with George Osborne
One of the many worrying aspects of the Brexit referendum in the UK and the Trumpularity in the US is that most politicians are not yet talking about the challenges posed by the coming impact of powerful artificial intelligence. This needs to change.
A conversation I had recently with George Osborne (until recently the UK’s Chancellor of the Exchequer) gives grounds for hope.
The video below (16 minutes) contains excerpts from a recent panel discussion called “Ask Me Anything About the Future”. Hosted by Bloomberg, it was organised by Force Over Mass, an early-stage investment fund manager. It was very ably chaired by David Wood, who runs the London Futurists meetup group.





The video of the whole event (1hr 39 mins) is here.


November 13, 2016
It’s not the Fourth Industrial Revolution!
Industrie 4.0
Klaus Schwab is a clever man. After a rapid ascent through the ranks of German commercial life, he founded the World Economic Forum (WEF) in 1971. The WEF is best known for organising a five-day annual meeting of the global business and political elite at the ski resort of Davos in Switzerland. He has a list of awards and honorary doctorates as long as your arm.
Schwab has done much to popularise the notion that we are entering a fourth industrial revolution – not least by writing a book of that name. He didn’t invent the phrase: rather he has broadened the term Industrie 4.0, which was adopted by a group of leading German industrialists in 2012 to persuade their government to help the country move towards “smart manufacturing”, in which artificial intelligence and Big Data are deployed to make production processes more efficient and more flexible.
In that limited context the name makes some sense, but the Fourth Industrial Revolution label is expansionist, and has claimed the Internet of Things, among other aspects of our increasingly AI-affected world. It is a misleading and unhelpful label.
The sixth fourth industrial revolution
Smart manufacturing is not the first development to be called the Fourth Industrial Revolution. As this Slate article from January 2016 points out, it is at least the sixth “fourth industrial revolution”. (The others, since you ask, were atomic energy in 1948, ubiquitous electronics in 1955, computers in 1970, the information age in 1984, and finally, nanotechnology.)
Furthermore, if we are in the business of chopping the industrial revolution into pieces, it is by no means clear that there were only three of them before Industrie 4.0 came along. One of the most helpful ways to understand the industrial revolution is to view it as the arrival of four transformative technologies on the following approximate timeline:
1712: primitive steam engines, textile manufacturing machines, and canals
1830: mobile steam engines and railways
1875: steel and heavy engineering, the chemicals industry
1910: oil, electricity, mass production, cars, airplanes and mass travel
Labelling the Internet of Things, or even smart manufacturing, as the Fourth Industrial Revolution is both confusing and plain wrong. In fact they are both part of something much bigger than another lap of the industrial revolution, momentous as that process was. They are part of the information revolution.
The information revolution
The information revolution begins as information and knowledge become increasingly important factors of production, alongside capital, labour, and raw materials. Information acquires economic value in its own right. Services become the mainstay of the overall economy, pushing manufacturing into second place, and agriculture into third. (Some politicians don’t like this idea, but there is not much they can do about it. You can’t wish manufacturing back into pole position, and trying to legislate it would bring ruin.)
An Austrian economist named Fritz Machlup calculated that knowledge industries accounted for a third of US GDP in 1959, and argued that this qualified the country as an information society. That seems as good a date as any to pick as the start of the information revolution.
Not just semantics
Why is this important? Is it just semantics? No. First, labels are an important part of language, and language is what allows us to communicate effectively, to kill mammoths, and to build walls and pyramids. When labels point to the wrong things, or point to different things for different people, you get confusion instead of communication.
Secondly, the information revolution is the most important event in our species’ short but dramatic history. It is our third great transformative wave. The first was the agricultural revolution which turned foragers into farmers. That gave us mastery over animals, and generated food surpluses which allowed our population to grow enormously. It made the lives of individual humans considerably less pleasant on average, but it greatly advanced the species.
The second, of course, was the industrial revolution, which in many ways gave us mastery of the planet. Coupled with the enlightenment and the discovery of the scientific method, it ended the perpetual tyranny of famine and starvation, and brought the majority of the species out of the abject poverty which had been the fate of almost every human before. For most people in the developed world it created lifestyles which would have been the envy of kings and queens in previous generations.
The information revolution will do even more. If we survive the two singularities – the economic and the technological ones – it will make us godlike. If we flunk those transitions, we may go extinct, or perhaps just be thrown back to something like the middle ages. Since you are reading this it is likely that you know what I am talking about, and understand the reasoning behind these apparently melodramatic claims. Too few of our fellow humans do, and we need to change that. Muddying the waters of our understanding of the information revolution by calling parts of it the Fourth Industrial Revolution does not help.


November 6, 2016
The Simulation Hypothesis: an economical twist (part 2 of 2)
Offending Copernicus
Of course this is all wild and ultimately pointless speculation, so I won’t be at all upset if you decide it is more worthwhile to go watch a game of baseball or cricket instead of reading the rest of this post. But if you’re still with me, then isn’t it a curious coincidence that you happen to be alive right at the time when humanity is rushing headlong towards the creation of AGI and superintelligence? And that you might very possibly be alive to see it happen? Doesn’t that situation offend against the Copernican principle, also known as the mediocrity principle, which urges scepticism about any explanation that places us at a privileged vantage point in time or space within the universe?
Another curious thing about the situation is the extraordinary amount of computronium that our simulators seem to have dedicated to the task of creating our context. That “context” is a universe of 100 billion galaxies each containing 100 billion stars. (Although no aliens, as far as we can tell.) And this universe has been running for 13.7 billion years.
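A quick back-of-envelope multiplication (using only the round numbers above) shows just how big that context is:

```python
# Back-of-envelope check: 100 billion galaxies, each with 100 billion stars
galaxies = 100 * 10**9           # ~100 billion galaxies
stars_per_galaxy = 100 * 10**9   # ~100 billion stars in each
total_stars = galaxies * stars_per_galaxy

print(f"{total_stars:.0e}")      # 1e+22
# 10**22 = 10,000 x 10**9 x 10**9 -- i.e. ten thousand billion billion star systems
```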
Maybe our simulators have a great deal of computronium lying around, and maybe they don’t perceive time the way you and I do. Maybe they didn’t have to sit around twiddling their (indeterminate number of) thumbs while the earth began to cool, the autotrophs began to drool, Neanderthals developed tools, we built a wall, we built the pyramids, and so on. (3)
The economical twist
An alternative possibility to all this expense is that our simulation only appears to have 10,000 billion billion star systems and a 13.7 billion-year history. If you have the capability to create beings like us then you presumably have the capability to convince us of the reality of whatever context you would like us to believe in. And you would save a fortune in time and computronium by faking the context. So if we do live in a simulation that was created for a specific purpose, surely it is more likely that we were created quite recently – maybe only a few minutes ago – kitted out with fake memories and the appearance of an enormous backstory and a vast spatial hinterland?
If that is true, it raises the question of what happens to us after the simulation completes its purpose. Perhaps it gets re-run in order to optimise the outcome. Perhaps it is just switched off and we all go to sleep, dreamless. Or perhaps we have the opportunity to combine our minds with that of the superintelligence that is created, and ascend to the level of the simulators.
Turtles all the way up
Of course the simulators themselves may be going through a similar chain of reasoning about their own existence. Perhaps we are somewhere in the midst of a deep pile of nested simulations, each created by and ultimately dependent on the one above. In a description of our world sometimes attributed to Hindu legend, the earth is a flat plate supported by elephants standing on the back of a tortoise. When asked what the tortoise is standing on, one adherent replied, “it’s turtles all the way down”. (4) Perhaps, in our case, it’s turtles all the way up.
Notes
(3) Big Bang Theory, theme tune
(4) A conversation between Bertrand Russell and an elderly lady mentioned in Stephen Hawking’s “A Brief History of Time”

