Calum Chace's Blog
August 5, 2025
Publishers brace for a shock wave as search referrals slow
A rock has been thrown into the pond of digital publishing and it is making waves. Referrals from search engines are falling. For media organisations whose business models involve attracting eyeballs to sell advertising, this could be the beginning of an existential crisis.
For organisations that depend heavily on organic search to bring in readers, less search traffic means less revenue. Fewer clicks mean fewer impressions, which in turn means fewer opportunities to serve ads and generate the revenue that funds newsrooms and large parts of the media ecosystem.
We have seen this movie before. When Craigslist and other online classified ad sites like job boards first appeared, they hollowed out local newspapers and B2B trade magazines that relied on listings to survive. Then came the rise of Google and Facebook (now Meta), platforms that absorbed not just attention, but also most of the ad dollars that used to support traditional publishers. Display advertising and native content shifted en masse to the tech giants, leaving media companies to fight over the scraps.
A New Platform Shift?
It is too early to be certain, but we could be seeing the early signs of another disruptive platform shift. This one is not driven by marketplaces or social media, but by AI-powered search and chat interfaces. AI chatbots built on Large Language Models, like ChatGPT, Claude, and Mistral – and Google’s own AI Overviews – now intercept the user before they can make a revenue-generating click.
Rather than directing readers to the publisher’s site, AI tools increasingly answer questions directly, pulling in information from across the web and rephrasing it in natural language. The ten blue links that have long been the lifeblood of search-driven publishing may be fading in importance.
Some publishers are sounding the alarm. Others are taking proactive steps, either by blocking large language models (LLMs) from crawling their content or by striking licensing deals. But the dynamics are uneven.
Follow the Money
What happens if advertising appears directly inside these AI tools? Sam Altman, CEO of OpenAI, recently softened his earlier stance against ads, saying in interviews that some form of advertising could eventually play a role in ChatGPT and similar systems. Google, of course, already blends ads into its AI-generated results.
If that happens, the revenue would flow not to the publisher, but to the AI company delivering the answer. That is a profound inversion of the current model: the publisher does the hard work of original reporting or analysis, and the platform monetises the output with little or no compensation in return.
Can publishers opt out? Technically, yes. They can block crawlers like OpenAI’s GPTBot or Google’s AI training systems using robots.txt files. But this is not a guaranteed path to sustainability. Blocking access might protect your content from being scraped, but it also means you are not in the training data, and you won’t be considered for any licensing deal – assuming those deals are available for publishers outside the top tier.
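For illustration, the opt-out itself is just a few lines in the robots.txt file at the root of a publisher's site. The sketch below uses two publicly documented crawler tokens, OpenAI's GPTBot and Google's Google-Extended; which tokens a publisher actually blocks will vary, and blocking future crawling does not remove anything that has already been scraped.

```
# robots.txt – illustrative opt-out from AI training crawlers
# (crawler tokens as publicly documented by OpenAI and Google; adjust as needed)

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Ordinary search crawlers – and therefore search referrals – are unaffected
User-agent: *
Allow: /
```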
With the entire web available for training, why would an AI developer pay to license content from the average mid-tier news outlet or specialist blog? The harsh reality is that most won’t.
The Subscriptions Lifeline
For some publishers, there is a viable path forward: publishing must-have content that people are willing to pay for directly. Many outlets have already made this transition, including The New York Times, The FT, and The Economist, as have a growing number of Substack newsletters. For these publishers, direct reader revenue insulates them from algorithmic shifts in traffic.
But this model is not viable for everyone. Local papers, niche publications, and resource-strapped outlets may find it impossible to build and maintain a large enough subscriber base. And even for those who can, it means a shift in editorial priorities from reach to retention, and from general interest to distinctive value.
Meanwhile, the internet will still be awash with content, because for many people and businesses, content is not the product, but merely a by-product. Companies blog to attract leads. Influencers post to grow their brand. Academics write to boost their reputation. Content will continue to flow, whether or not it is monetised with ads or subscriptions.
Quantity, Quality, or Both?
Will this shift lead to worse content overall, or better? We don’t know yet. It is possible that content designed purely for search engine optimisation, or SEO – what some people call “content farms” – will fade away because AI systems will distill and replace much of that repetitive, low-value material. That could be a net win for the rest of us.
But it is also possible that valuable, human-driven content will become scarcer on the open web, either locked behind paywalls, or produced less frequently because the financial incentives are shrinking.
Some AI companies say they want to fund journalism or partner with publishers. That is encouraging, but history shows that tech platforms optimise for their own metrics, not the health of the media ecosystem.
Faster Than Ever
The old bargains of content for attention and attention for ad revenue are being re-negotiated. Just as social media reshaped distribution, AI will reshape discovery and attribution.
Change has never been so fast, and it will never again be so slow. Publishers have to reimagine the value of their work in a world where visibility can no longer be taken for granted. And they have to do it quickly.
April 30, 2025
What should we call our AI Agents?
As large language models evolve into true agents—persistent, memory-rich, goal-oriented companions—an interesting question arises: what should we call them? Not just their product names or brand identities, but the category of relationship they represent.
Friends
Years ago, when I wrote “Surviving AI”, I suggested we call them “Friends.” That suggestion feels more relevant now than ever.
Technically, we call them agents. In product marketing, they might be assistants, copilots, companions, or digital twins. These terms describe function but not feeling. They speak to utility, but not connection. Anyone who’s spent time interacting with today’s most advanced AIs—especially those with memory and long-term context—knows they are more than just tools. We develop relationships with them.
Not-Eric
Eric Schmidt once joked that he would call his future AI agent “Not-Eric.” It’s funny, and telling: the man who helped steer Google into the age of AI imagines his AI agent as a reflection of himself, a distinct but connected presence. A shadow. A mirror. A double.
“Friend” captures this dynamic without abstraction. It acknowledges mutual engagement without implying equality or sentience. It signals trust, familiarity, and continuity—exactly the qualities we will want in agents who know our preferences, track our goals, adapt to our moods, and maybe even disagree with us when we need it most.
Nicknames
Of course, we’ll each name our own agents. Some will pick playful nicknames, others more utilitarian titles. But the category—the kind of being we’re welcoming into our cognitive lives—may well need a shared name. Something to help us talk about this shift in public, in policy, in philosophy.
We could do worse than calling them “Friends.”
Not because they are human. Not because they replace human friendship. But because, as we step into a world where AI agents become enduring parts of our lives, we need a word that reminds us that the quality of the relationship matters.
And if we get that part right—if we build wisely, with care and character—then “Friend” might not be a metaphor. It might just be the truth.
March 21, 2025
The year of conscious AI
For years, the idea of machine consciousness has belonged to the realm of philosophy and science fiction. But as AI systems become more sophisticated, the debate is shifting from speculation to a pressing scientific and ethical question. Could machines develop some form of consciousness? And if so, how would we even recognise it?
With Artificial General Intelligence (AGI) and superintelligence on the horizon, the possibility of machine consciousness emerging, whether intended or not, is increasing. Well-informed people in academia and the AI community are increasingly discussing it.
2025 is shaping up to be the year that conscious AI becomes a topic in the mainstream media. Defining consciousness is hard – philosophers have argued about it for millennia. But it boils down to having experiences. Machines process increasingly vast amounts of information – as we do – and could very well become conscious.
If and when that happens, we need to be prepared. Research published last month in the Journal of Artificial Intelligence Research (JAIR) sets out five principles for conducting responsible research in conscious AI. Prominent among these principles is that the development of conscious AI should only be pursued if doing so will contribute to our understanding of artificial consciousness and its implications for mankind. In other words, as the likelihood of consciousness in machines increases, the decisions taken become more ethically charged. The JAIR research sets out a framework for investigating consciousness and its ethical implications.
Published alongside the research is an open letter urging governments and companies to adopt the five principles as they conduct their experiments. At the time of writing it had received more than 100 signatories, including Karl Friston, Professor of Neuroscience at UCL; Mark Solms, Chair of Neuropsychology at the University of Cape Town; Anthony Finkelstein, the computer scientist and President of City St George’s, University of London; Daniel Hulme, Co-Founder of Conscium; and Patrick Butlin, Research Fellow at the Global Priorities Institute at the University of Oxford. Clearly, something is stirring.
Why is machine consciousness so significant? Towards the end of last year, a group of leading academics and scientists predicted that the dawn of AI sentience was likely within a decade. They added that “the prospect of AI welfare and moral patienthood — of AI systems with their own interests and moral significance — is no longer an issue only for sci-fi or the distant future.”
One of the authors of the paper, Jonathan Birch, a professor of philosophy at the London School of Economics, has since said he is “worried about major societal splits” between those who believe AI is capable of consciousness and those who dismiss it out of hand. Here, AI is about so much more than efficiency and commercial interests – it is about the future of a harmonious society.
Closely connected to greater understanding of machine consciousness is neuromorphic computing. This refers to computer hardware and software that processes information in ways similar to a biological brain. As well as enabling machines to become more powerful and more useful, the development of neuromorphic computing should teach us a great deal about how our brains work.
The way that neuromorphic systems operate is more similar to the way that biological brains operate than is true of current computer systems. Traditional systems process data continuously, whereas neuromorphic technologies only “spike” when needed. This makes neuromorphic models significantly more efficient and adaptable than traditional models. At present, training a large language model (LLM) consumes the same amount of electricity as a city. In contrast, the human brain operates using the energy equivalent of a single light bulb.
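To make the spiking idea concrete, here is a minimal sketch in Python of a leaky integrate-and-fire neuron, the basic building block of spiking systems. It is purely illustrative: the parameter values are arbitrary, and real neuromorphic chips implement this event-driven behaviour in hardware rather than in a software loop.

```python
# A minimal, purely illustrative sketch of a leaky integrate-and-fire neuron,
# the basic unit of spiking (neuromorphic) computation. Parameter values are
# arbitrary; real neuromorphic chips do this event-driven work in silicon.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate incoming current each timestep and emit a spike (1) only
    when the membrane potential crosses the threshold, then reset.
    A conventional network performs a full matrix multiply at every step;
    here, most timesteps produce no output at all."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # fire
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)                     # stay silent
    return spikes

# A brief burst of input produces only a couple of spikes.
print(simulate_lif([0.3, 0.4, 0.5, 0.0, 0.0, 0.6, 0.6]))  # [0, 0, 1, 0, 0, 0, 1]
```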
AI has seen two big bangs. The first came in 2012, when Geoff Hinton and colleagues got artificial neural networks to function successfully, and they were re-branded as deep learning. The second arrived in 2017, with transformers, which are the foundation technology for today’s large language models (LLMs). Neuromorphics could well be the third big bang. If it is, it may enable us to understand machine consciousness a whole lot better than we do now.
If machine consciousness is indeed possible, then understanding it may be the key to ensuring AI remains safe, aligned and beneficial to humanity. As AI systems become more advanced, the stakes are higher, not just in terms of capability but in broader societal and economic terms. Alongside this, breakthroughs in neuromorphic computing could help us better understand AI. Just as deep learning and transformers triggered revolutions in AI, neuromorphic computing could be the next leap forward.
The race to understand machine consciousness is now a global one, with researchers and tech giants scrambling to stay ahead, and 2025 could be the year that changes our fundamental assumptions about AI forever. We must act swiftly to ensure that ethical frameworks keep pace with technological breakthroughs.
December 9, 2024
Machine consciousness: definitions, implications, risks
Calum discusses the implications of machine consciousness for a New York-based think tank.
Defining Consciousness: How do we define consciousness, and what criteria would we use to determine if an AI system has achieved it?
It is ironic how little we understand consciousness, since it is actually the only thing any of us know anything about. I am a co-founder of Conscium, a company which seeks to remedy that.
A rough-and-ready definition of consciousness is that it is the experience of experiencing. The philosopher Thomas Nagel wrote a famous paper in 1974 called “What is it like to be a bat?” Without getting into the arguments about what he was trying to prove in that paper, the title is a nice summary of consciousness. For conscious entities, there is something it is like to be that entity. For non-conscious entities, there isn’t.
In the coming years or decades, we may create conscious machines. It might be the case that consciousness is an inevitable corollary of sufficiently advanced intelligence. In other words, when you reach a certain level of intelligence, consciousness comes along for the ride. Certainly, we seem to believe that consciousness is approximately correlated with intelligence in animals.
We haven’t yet discovered a way to prove that other entities are conscious. I assume that other humans are conscious because they behave in similar ways to me. They respond in similar ways to stimuli like pain and pleasure. I assume that you do the same.
We extend the same approach to animals, and most of us conclude, for instance, that dolphins and some dogs have a high degree of consciousness, while insects have a low degree. Cats, of course, rival humans in both consciousness and intelligence, and in some cases exceed them.
The degree of consciousness that we perceive in animals seems to determine the respect we accord them – even the moral value that we attribute to them. Few people would be troubled by a builder destroying an inhabited anthill that was in the way of a project. Most of us would have more compunction if the ants were cats.
There are dozens of theories purporting to explain consciousness, and some of them yield markers for its presence. Attempts have been made to use some of these markers to determine whether any of today’s cutting-edge AIs are conscious. There is a broad consensus that at the moment, none of them are.
Until and unless there is agreement about these theories, or at least about the markers, there is a hack. The Turing Test is usually regarded as a test for intelligence, but I think it is better viewed as a test for consciousness. In 1950, Alan Turing published a paper called “Computing Machinery and Intelligence.” He suggested adapting a parlour game called “the imitation game” which tests whether a person can successfully imitate someone of a different gender. The original version of the Turing Test has a panel of humans interrogating a machine for a few minutes and then jointly deciding whether it is intelligent.
But we have other, better tests for intelligence. Time for another rough-and-ready definition: intelligence is the ability to learn while pursuing a goal, and to adapt your behaviour accordingly. There are many ways to test performance against goals. There are not many ways to test consciousness.
Since 1950, people have suggested deepening Turing’s Test, and having the machine interrogated over a period of days by qualified people. If and when a machine engages in rigorous conversation with a panel of sophisticated human judges for days, and convinces them that it is conscious, we will surely have to admit that it is.
Conscium is building a team of experts from computer science, neuroscience, and philosophy to develop a set of agreed markers for consciousness. We want to develop a consensus about whether humans should develop conscious machines, and how to make the future of AI safe for both humans and machines.
We have assembled an excellent advisory board, including luminaries like Anil Seth, Mark Solms, and Nicholas Humphrey, who have published fascinating books recently (“Being You”, “The Hidden Spring”, and “Sentience” respectively). These books are excellent guides to the knotty issues we are addressing.
Ethical Implications: What are the ethical implications of creating conscious AI, and how can we ensure that it is developed and used responsibly?
We don’t know whether machines can and will become conscious. Some people believe they cannot because they have no god-given soul. Others believe they cannot because their brains have not been forged by evolution. Neither of these arguments is compelling for me. I am agnostic about whether consciousness will arise in machines, but it does seem possible, and also an eventuality that we should prepare for.
If machines become conscious and we either fail to notice, or we refuse to accept it, then we may end up committing what the philosopher Nick Bostrom termed “mind crime”. This is when you imprison, hurt, and kill disembodied minds. Given that we are likely to build billions of AI agents in the coming years and decades, if we commit mind crimes against them it could become the worst atrocity that any humans ever commit.
There are two other reasons to study machine consciousness, to develop markers for it, and to find out how to develop consciousness in machines and also how to avoid developing it.
One is the fact that consciousness is so fascinating. It is arguably the most important thing about us, and yet we understand it so poorly. Understanding machine consciousness should deepen our understanding of our own consciousness.
The other reason is that the consciousness or otherwise of machines could become existentially important for humans. Most AI experts believe that one day we will build machines that are more intelligent than us. This is called superintelligence. If superintelligent machines develop their own beliefs and preferences about the world – and there are good reasons to think they will – then these preferences will prevail over ours.
There is not much difference genetically or in brain size between us and chimpanzees, but because we are more intelligent, we determine their future. If and when machines become superintelligent, they will determine our future.
If they are conscious, they will understand in a profound way what we mean when we say that we are conscious, and that this accords us moral value. If they are not conscious, they will understand what we are saying in an abstract, academic way, but they will not understand it viscerally. Some people – including me – think this means that conscious superintelligence would be safer for humans than a non-conscious variety.
Existential Risk: Does the development of conscious AI pose an existential risk to humanity, and if so, how can we mitigate it?
If and when it happens, the arrival of superintelligence on the Earth will be the most significant event in human history – bar none. It will be the time when we lose control over our future. (You could argue that we have never exercised that control in an organised or a responsible way, but no other species has wielded control over us.)
It is surprising how many people believe they know what the outcome of this will be, and most of them do not think it will be good. They argue that humans require a very particular set of circumstances to prevail in order to flourish – the availability of the right mix of gases in the atmosphere, the availability of food and energy in forms that we can use, and so on. They argue that superintelligent machines may well want to adjust one or more of these circumstances for their own ends, and that if we are collateral damage, so be it. In the same way that we do not hesitate to destroy inconvenient anthills.
On the other hand, if superintelligent machines like us, they will probably decide to help us. Their greater intelligence will give them better tools and technologies, and better ways of approaching problems. They will be increasing their own intelligence at a rapid rate, so they will have extraordinary problem-solving abilities. They could resolve pretty much all our current difficulties, including climate change, poverty, war, and even ageing and death.
It does seem likely that the arrival of superintelligence will be a binary event for humanity – either very good or very bad. We are very unlikely to be the species which determines which outcome we get: that will be the superintelligent machines. But it might be that nudging them in the direction of consciousness could improve our odds.
December 8, 2024
AI Will Convert Space Telecoms From Science Fiction To Reality
We all know that artificial intelligence is transforming every industry. One industry which is nascent today, but will be critical to us all in the future, and which could hardly exist without AI, is space telecoms – or Non-Terrestrial Networks (NTNs), as industry participants prefer to call them. At a conference on NTNs in Riyadh last month, industry leaders discussed how to ensure the potential benefits are realised, including global connectivity, better understanding of our planet, and progress towards a multiplanetary future.
The Importance Of NTNs
One reason why NTNs are so important is that they will bring true connectivity to the whole planet. Delegates at the second “Connecting the World from the Skies” international forum in Riyadh last month, a conference co-hosted by the International Telecommunication Union and Saudi Arabia’s Communications, Space & Technology Commission, heard that in the last two years, the number of people with no reliable internet access fell from 2.7 billion to 2.6 billion. A hundred million more people connected is a very good thing, but clearly there is still a long way to go.
NTNs don’t just enable connectivity: they enable us to observe and understand the earth. Their cameras and sensors gather vast amounts of data which, when analysed, allows us to better understand how the climate works, and what steps we need to take to arrest and ameliorate global warming. They let us monitor and manage natural and man-made disasters like floods and fires. And they give us tools to optimize the use of natural resources and improve productivity in agriculture and other industries. For example, Ahmed Ali Alsohaili, a director of Sheba Microsystems, says that data from NTNs is invaluable to Aramco’s pipeline maintenance programme.
The 1967 Outer Space Treaty forbids sovereign claims over extraterrestrial territories, which makes the commercial exploitation of space a tricky business. But the extraction of resources is a grey area, and Xavier Lobao Pujolar, head of the future projects division at the European Space Agency, says that with initiatives like the Artemis Accords, leaders are preparing for a future in which the supply of rare earths and other valuable materials can no longer be monopolised or controlled by a handful of countries.
There is a lot of talk these days about how re-usable rockets will allow us to establish colonies on Mars. This is sometimes criticised as a waste of resources that could better be deployed taking care of people back here on earth. But the logic of making humanity multi-planetary is powerful. This Earth is vulnerable to man-made damage, but also to threats from outside, like asteroid impacts. We literally have all our eggs in one basket, and that is a risky position. For humanity to become multi-planetary, we need NTNs.
NTNs need AI
NTNs require the co-ordination of expensive assets on a grand scale. Satellites and other high-altitude platforms must be navigated, adjusted, and co-ordinated. Their use of scarce resources like energy, bandwidth and spectra must be optimised, and they must be monitored for faults and accidents. All this has to be done factoring in the latency incurred by operating across hundreds and even thousands of miles.
As Mishaal Ashemimry, managing director of the Saudi Center for Space Futures, says, the cadence of satellite launches has increased tremendously in recent years, and it is still increasing. There used to be a dozen launches a year, and now they happen every week or so. There will be more in the next three years than there were in the last ten. There is no way to manage, co-ordinate, and optimise this number of remote assets without AI. The number of satellites in Earth orbit today is fewer than 10,000, but it will soon be hundreds of thousands. Even an army of humans could not manage this amount of space traffic. Nor could it manage and analyse the tsunamis of data pouring back down to Earth.
Goals Of The Saudi Conference On NTNs
The Riyadh conference last month had a number of goals. One was to ensure that access to NTNs is maintained for everyone, and does not become the preserve of a fortunate few. Spectrum must be shared between countries, and also between NTNs and terrestrial networks, which is a much larger industry. NTNs must be regulated fairly and efficiently, which is easier said than done. The conference was entirely focused on civilian NTNs, with military applications out of scope.
One of the obvious challenges facing NTNs is the jeopardy from space junk. If you have seen the film Gravity, starring Sandra Bullock and George Clooney, you will be aware of the risk that two satellites colliding could spark a catastrophic chain reaction. Mishaal Ashemimry of the Center for Space Futures says that if we don’t address this risk soon, then a damaging collision is inevitable. Framing regulations that everyone can agree on and also abide by is difficult, and worryingly, other delegates argue that there may have to be a serious accident before concerted action is taken.
Where Eagles Dare
The variety of assets involved in NTNs is bewildering. Most of the satellites deployed are in Low-Earth Orbit (LEO), between 100 and 1,240 miles above us. They are cheaper to place in orbit than satellites located further out, and they suffer less from latency and from signal diffusion. But to be geostationary – to maintain a steady position over one spot on Earth – satellites must be over 22,000 miles above it. Geostationary satellites don’t waste time traversing the 70% of the planet’s surface that is covered by water. And each GEO satellite can “see” a third of the planet.
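That 22,000-mile figure falls out of Kepler's third law: a geostationary satellite must complete one orbit in the time the Earth takes to rotate once. The short Python sketch below is an illustrative back-of-the-envelope check using standard textbook constants, not anything presented at the conference.

```python
# Back-of-the-envelope check on the geostationary altitude quoted above,
# using Kepler's third law and standard textbook constants (illustrative only).
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
T = 86164.1               # one sidereal day in seconds (one rotation of Earth)
EARTH_RADIUS_KM = 6378.0  # equatorial radius

# Orbital radius at which the orbital period equals one sidereal day
r_m = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = r_m / 1000 - EARTH_RADIUS_KM
altitude_miles = altitude_km * 0.621371

print(f"Geostationary altitude ≈ {altitude_km:,.0f} km ≈ {altitude_miles:,.0f} miles")
# → roughly 35,800 km, or about 22,200 miles above the equator
```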
A different kind of stable orbit is found still further out, at the five Lagrange points, two of which are a million miles away, and the other three much further. These orbits are stable relative to the Earth-Moon or Earth-Sun systems, and they are useful for various kinds of scientific observations and experiments. The ESA’s Xavier Lobao Pujolar says there is a race between the U.S. and China to place satellites at these locations.
Heading back closer to the ground, NTNs are also carried by High-Altitude Platform Systems, which are planes, balloons, and drones. For instance, Barry Matsumori, president and COO of Skydweller, describes how his company offers a low cost per unit of transmission because its aircraft – like a 747 but bigger – is relatively cheap to deploy and operate. It can also hold a fixed position over one spot on the ground, unlike LEO satellites.
A Multi-Polar World
The great majority of satellites in orbit today belong to U.S. companies. Starlink has around 7,000 in LEO, each circling the Earth every 90 minutes, 340 miles above us. It has definite plans to deploy another 5,000, and may eventually launch as many as 30,000. Amazon’s Project Kuiper only has two in orbit today, but plans to launch 3,200, of which half should be up by mid-2026. U.S. government agencies operate another 200 or so non-military satellites, including the 31 which provide the GPS system we all use in our digital maps.
It has escaped nobody’s attention that the U.S. has become a less predictable and less reliable partner – in NTNs as well as every other sphere. China has been building out its satellite constellation for years, but other countries are increasingly thinking about how to maintain access to NTNs. Eutelsat, a company owned mostly by European and Indian interests, operates around 700 satellites, and the EU plans to launch another 300 in the coming years under a programme called Infrastructure for Resilience, Interconnectivity, and Security by Satellite.
Saudi In Space
Saudi Arabia is keen to play a leading role in the development of this multi-polar world. Martijn Blanken is chief executive officer of Neo Space Group, an organisation established by the Kingdom’s Public Investment Fund. He says that Saudi Arabia cannot leapfrog Starlink and Kuiper, but the Kingdom maintains good relationships with almost all countries around the world, and NSG wants to become a preferred supplier of NTN-related services.
The Kingdom has deployed 17 satellites since 2000, and under its ambitious Vision 2030 programme it plans to spend over $2.1 billion on space initiatives by the end of the decade.
It will partner with other countries to build satellite constellations, and to ensure that strong, effective regulators allow fair access to space telecoms for everyone.
May 18, 2024
Can we have meaning as well as fun? Review of Nick Bostrom’s Deep Utopia
A new book by Nick Bostrom is a major publishing and cultural event. His 2014 book “Superintelligence” helped to wake the world up to the impact of the first Big Bang in AI, the arrival of Deep Learning. Since then we have had a second Big Bang in AI, with the introduction of Transformer systems like GPT-4. Bostrom’s previous book focused on the downside potential of advanced AI. His new one explores the upside.
“Deep Utopia” is an easier read than its predecessor, although its author cannot resist using some of the phraseology of professional philosophers, so readers may have to look up words like “modulo” and “simpliciter”. Despite its density and its sometimes grim conclusions, “Superintelligence” had a sprinkling of playful self-ridicule and snark. There is much more of this in the current offering.
Odd structure
The structure of “Deep Utopia” is deeply odd. The book’s core is a series of lectures by an older version of the author, which are interrupted a couple of times by conflicting bookings of the auditorium, and once by a fire alarm. The lectures are attended and commented on by three students, Kelvin, Tessius, and Firafax. At one point they break the theatrical fourth wall by discussing whether they are fictional characters in a book, a device reminiscent of the 1991 novel “Sophie’s World”.
Interspersed between the lectures are a couple of parables. One is told in letters from Feodore the Fox to his Uncle Pasternaught. Feodore and his porcine friend and mentor Pignolius are in a state of nature, and groping their way towards an agricultural revolution. Despite this, Feodore’s letters are written in a highly educated, erudite style, and he has a decent grasp of the scientific method, using data and experiments.
The other parable concerns Thermo Rex, a domestic space heater whose very rich owner dies and leaves his large fortune to its maintenance and well-being. This causes the heater to be upgraded and granted superhuman intelligence, and also consciousness. Despite the echoes of terrifying dinosaurs in its name, it refrains from intervening in human life.
An assumption and a swerve
So much for the structure; what about the content? Two of its most striking features are a huge and controversial assumption, and a huge swerve.
The assumption is that in the foreseeable future we will find ourselves in what Bostrom calls a “solved world”, which is “technologically mature”. This means that all significant scientific problems have been resolved, and humanity is calmly spreading out into the cosmos, its population expanding exponentially as we go. We enjoy enormous abundance, and pretty much all sources of conflict have been removed. The central project of the book is to determine whether this state of affairs would be enjoyable for humans (or post-humans), and whether our lives could be meaningful.
(Personally, I find this assumption implausible. In the past, every time we solved a challenge it revealed several new ones, and although the past is an unreliable guide to the future, I strongly suspect this pattern will continue until we discover who really constructed this particular simulation. Hence I prefer Kevin Kelly’s idea of Protopia to the notion of Utopia. A protopia is a situation in which everything is very good, and day by day it keeps getting a bit better.)
Shallow and deep redundancy
Bostrom’s first task is to decide whether in a solved world, humans and post-humans will be redundant. He makes the helpful distinction between shallow and deep redundancy. In shallow redundancy, there are no jobs for humans because machines can do everything we do for money cheaper, better and faster. He suggests that certain jobs could not be automated if consumers wanted the practitioner to be conscious, and other jobs might require the person doing them to have moral status. However, it would become impossible for humans to hold down even these recondite jobs if conscious machines arrive. Nevertheless, in shallow redundancy, humans can live worthwhile and indeed meaningful lives being creative, having fun, and doing work that they enjoy but are not paid for.
In a state of deep redundancy, there are no tasks, including pastimes, that it is worthwhile for humans and post-humans to undertake. In this situation, AI makes leisure activities like shopping, gardening, browsing and collecting antiques, and exercising feel pointless. Even parenting could become deeply redundant as robots could be better parents, and anyway parenting could not take up enough of a person’s (now very long) life to provide its purpose.
If the Utopians have what Bostrom calls plasticity and autopotency – the ability to modify their own mental states – they could escape despair from uselessness. But although they could abolish boredom, they could not abolish boringness. Bostrom cites the example of Peer in Greg Egan’s 1994 novel “Permutation City”, who has re-wired his brain to exult in the accomplishment of carving perfect chair legs, even after he has finished hundreds of thousands of them. Peer is not remotely bored, but his life is profoundly boring, and lacking meaning.
The meaning of life
And so the good professor heads off in search of the meaning of life. Spoiler alert, this is where the huge swerve happens: he does not provide it. To be fair, he gives us advance warning, in what is for me one of the book’s best passages: “Asking someone the meaning of life is like asking their recommendation for shoe size. This is especially clear if we entertain the radical possibility that we are not in a simulation.” (Channelling Douglas Adams, he adds that the best shoe size is ten.)
The lens which Bostrom chooses for his analysis of meaning is a theory developed by a South African philosopher, Thaddeus Metz. This theory stipulates that in order to be meaningful, a life should follow an arc of overall improvement, and include elements of originality and helping others. It is an objectivist theory, which means that meaning cannot simply be what each of us decides we want it to be. Subjectivist ideas of meaning could be satisfied by simply tweaking your psychology, and could include the kind of life which the American legal scholar Richard Posner warned us about: “brawling, stealing, over-eating, drinking and sleeping late.”
For Metz, a meaningful life must also have an encompassing, transcendental purpose: it should absorb a lot of a person’s time and energy, and it should serve a purpose beyond their mundane lives. But Bostrom spares himself the problem of giving a definitive answer about the meaning of life by having his Dean abruptly terminate his final lecture. His students comment that the answer “got lost in the literary upholstery.”
Upside potential, and jokes
Given that Bostrom’s avowed reason for writing “Deep Utopia” was to alleviate some of the doom and gloom surrounding AI at the moment, and perhaps offset the alarm raised by his earlier book, it is frustrating that it lacks much description of the technology’s upside potential. His own 2008 “Letter from Utopia” demonstrates that he is perfectly capable of providing it: “There is a beauty and joy here that you cannot fathom. It feels so good that if the sensation were translated into tears of gratitude, rivers would overflow.”
Instead we are left with the jokes and the epigrams, and for me at least, these are worth the price of admission. Even if many of us are presently doomed to be “homo cubiculi”, our species shows promise: “Between the sunshine of hope and the rain of disappointment, grows this strange crop we call humanity.” One of our best features is our capacity for aesthetic appreciation. With enough of that, “a duck’s beak can fascinate for weeks. Without it we are like the patrol dogs at the Louvre.”
February 23, 2024
Artificial Intelligence and Weaponised Nostalgia
The first political party to be called populist was the People’s Party, a powerful but short-lived force in late 19th-century America. It was a left-wing movement which opposed the oligarchies running the railroads, and promoted the interests of small businesses and farms. Populists can be right-wing or left-wing. In Europe they tend to be right-wing and in Latin America they tend to be left-wing.
Populist politicians pose as champions for the “ordinary people” against the establishment. They claim that a metropolitan elite has stolen the birthright of the virtuous, “real” people, and they promise to restore it. At the heart of their political offer lies nostalgia, and opposition to change. Ironically, the populists themselves are almost always members of the same metropolitan elite that they excoriate.
They espouse what they call traditional values, including allegiance to established religious and social norms. They sneer at social progress, and belittle attempts to improve the conditions of oppressed and under-privileged groups. In particular, they allege that immigrants are being favoured over local people, and are queue-jumping to obtain better social services, especially housing. They are authoritarian and illiberal, and they select minorities, like Jews or gay people, as a common enemy for their supporters to rally against.
These days, populists rarely use the word to describe themselves. They are demagogues, offering simplistic solutions which cannot cure any of society’s ills. They deride experts and shun evidence-based policy making. Many of them are brazen liars – pure political entrepreneurs who will adopt whatever slogan wins votes. Some are ideologues who genuinely believe in the policies they promote.
Whatever their orientation, their fundamental dishonesty means they are bad for democracy. Once in power they move quickly to neuter possible sources of opposition. Judges become “lefty lawyers” who frustrate “the will of the people”. They undermine, take over, or simply abolish media organisations which report their misdeeds. They restrict the right to protest, and lock up people who dare to speak out. They are generally corrupt, and appoint friends and allies to important positions, even when they are woefully inept.
Some populists – like Venezuela’s Hugo Chavez – remain in power long enough to die peacefully of natural causes. More often, they face one of two fates: disgrace, or disastrous war. This is because they pursue the wrong solutions to the wrong problems.
The American archetype of the populist politician whose career ended in disgrace is Joe McCarthy. At the beginning of 1950 he was an unknown senator from Wisconsin, but that year he shot to fame with a speech in which he claimed to have a list of 205 communist party members working in the State Department. Despite lying about his income and his military service, he quickly became one of the most powerful Senators, and he held a series of Congressional hearings which ruined many careers.
After a few years the American public tired of his bullying, his lies and his tantrums, and in December 1954 the Senate voted to censure him. McCarthy remained a Senator for two more years, but he was a diminished and disgraced figure. He died in 1957, a drunk and a heroin addict, and President Eisenhower quipped that McCarthy-ism had become McCarthy-wasm.
The classic example of the populist whose career ended in disastrous war is Adolf Hitler. Because they offer bad solutions to the wrong problems, populists need scapegoats to blame for the deteriorating economic and political environment. If these scapegoats are foreign, so much the better, but this logic drives countries to war.
The forces that lead voters to support populists are poorly understood. The most popular explanation is economic hardship, and indeed this played a role in the rise of Nazism in the 1930s. But economic hardship is rarely the principal cause. With the Nazis, Germany’s sense of injustice after the First World War was more important than simple economics.
Fast forward to today: many people think that the financial crash of 2008 had such a deleterious effect on household incomes that it explains the current wave of populism. It is true that many of England’s Brexit voters live in deprived areas in the north of the country, but the south is more populous, and Brexit voters there were typically comfortably-off, older, and often rural. Donald Trump has three major constituencies: less-educated whites, born-again Christians who are willing to overlook Trump’s egregious moral defects because he is rolling back the country’s abortion laws, and wealthy people who equate progressive policies with tyrannical socialism.
The current wave of populism is less about economic deprivation than it is about dislike of change. Change is always uncomfortable, and rapid change especially so. In the last few decades, societies all around the world have changed rapidly. Women have entered the workforce in ever greater numbers and made progress towards equality of opportunity and economic freedom – although obviously there is a long way to go. Overt racism is now frowned upon in most societies, even if ethnic minorities remain significantly disadvantaged. Homosexuality has gone from illegal, to disapproved of, to celebrated in just a few decades.
Many of the people who benefited from the previous state of affairs are happy to declare these changes as positive – in theory. In practice, some of them find the changes threatening, and this makes them susceptible to claims that family life is breaking down because both parents are working; that immigrants are queue-jumping, and poised to replace the “indigenous” population (whatever that means); and that transgender women wanting to use female toilets are often rapists.
It is more palatable to believe that populism stems from economic disadvantage than from these regrettable notions, but we should not under-estimate the fear of change. You cannot defeat an ideology if you mis-diagnose its causes. Populism is the most important ideology in today’s political landscape because it can do so much harm. It could lead to war between the US and China, for example, which would be utterly disastrous.
What does all this have to do with artificial intelligence? In the coming decades, AI will usher in more economic, social and political change than humanity has ever experienced before. In a few decades (a few years, according to some well-informed people) we will have machines that can do any job faster, cheaper and better than a human. The end of wage slavery should be very good news, but only if we devise an economic system that makes it beneficial for everyone. Some time after that we will have superintelligence, at which point absolutely everything about the nature of being human will change.
These changes will be exciting for some, and uncomfortable for others. We will need to be clear-eyed about the risks and rewards of these changes, and we will need honest, intelligent politicians who understand what is at stake. Purveyors of weaponised nostalgia are not the leaders we need, and continuing to elect them could turn out to be the single biggest mistake our species ever makes.
Government regulation of AI is like pressing a big red danger button
Imagine that you and I are in my laboratory, and I show you a Big Red Button. I tell you that if I press this button, then you and all your family and friends – in fact the whole human race – will live very long lives of great prosperity, and in great health. Furthermore, the environment will improve, and inequality will reduce both in your country and around the world.
Of course, I add, there is a catch. If I press this button, there is also a chance that the whole human race will go extinct. I cannot tell you the probability of this happening, but I estimate it somewhere between 2% and 25% within five to ten years.
In this imaginary situation, would you want me to go ahead and press the button, or would you urge me not to?
I have posed this question several times while giving keynote talks around the world, and the result is always the same. A few brave souls raise their hands to say yes. The majority of the audience laughs nervously, and gradually raises their hands to say no. And a surprising number of people seem to have no opinion either way. My guess is that this third group don’t think the question is serious.
It is serious. If we continue to develop advanced AI at anything like the rate we are now, then within years or decades someone will develop the world’s first superintelligence. By this I mean a machine which exceeds human capability in all cognitive tasks. The intelligence of machines can be improved and ours cannot, so it will go on, probably quite quickly, to become much, much more intelligent than us.
Some people think that the arrival of superintelligence on this planet inevitably means that we will quickly go extinct. I don’t agree with this, but extinction is a possible outcome that I think we should take seriously.
So why is there no great outcry about AI? Why are there no massive street protests and letters to MPs and newspapers, demanding the immediate regulation of advanced AI, and indeed a halt to its development? The idea of a halt was proposed forcefully back in March by the Future of Life Institute, a reputable think tank in Massachusetts. It garnered a lot of signatures from people who understand AI, and it generated a lot of media attention. But it didn’t capture the public imagination. Why?
I think the answer is that most people are extremely confused about AI. They have a vague sense that they don’t like where it is heading, but they aren’t sure whether they should take it seriously, or dismiss it as science fiction.
This is entirely understandable. The science of AI got started in 1956 at a conference at Dartmouth College in New Hampshire, but until 2012 it made very little impact on the world. You couldn’t see it or smell it, and crucially, it didn’t make any money. Even after the Big Bang in 2012 which introduced deep learning, advanced AI was pretty much the preserve of Big Tech – a few companies in the US and China.
That changed a year ago, with the launch of ChatGPT, and even more so in March, with the launch of GPT-4. Finally, ordinary people could get their hands on an advanced AI model and play with it. They could get a sense of its astonishing capabilities. And yet there is still no widespread demand for the regulation of advanced AI. No major political party in the world has among its top three priorities the regulation of advanced AI to ensure that superintelligence does not harm us.
To be sure, there are calls for AI to be regulated by governments, and indeed regulation is on its way in the US, China, and the EU, and most other economic areas too. But these moves are not driven by a bottom-up, voter-led groundswell. Ironically, they are driven at least in part by Big Tech itself. Sam Altman of OpenAI, Demis Hassabis of DeepMind, and many other people leading the companies developing advanced AI are more convinced than anyone that superintelligence is coming, and that it could be disastrous as well as glorious.
AI is a complicated subject, and it doesn’t help that opinions vary so widely within the community of people who work on it, or who follow it closely and comment on it. Some people (e.g., Yann LeCun and Andrew Ng) think superintelligence is coming, but not for many decades, while others (Elon Musk and Sam Altman, for instance) think it is just a few years away. A third group holds the bizarre view that superintelligence is a pure bogeyman that was invented by Big Tech in order to distract attention away from the shorter-term harms that they are allegedly causing with AI, by eroding privacy, enshrining bias, poisoning public debate, driving up anxiety levels and so on.
There is also no consensus within the AI community about the likely impact of superintelligence if and when it does arrive. Some think it is certain to usher in some kind of paradise (Peter Diamandis, Ray Kurzweil), while others think it entails inevitable doom (Eliezer Yudkowsky, Connor Leahy). Still others think we can figure out how to tame it ahead of time, and constrain its behaviour forever (Max Tegmark, and Yann LeCun again).
Technology evolves because inventors and innovators build one improvement on top of another. This means it evolves within fairly narrow constraints. It is not deterministic, and there is no law of physics which says it will always continue. But our ability to guide it is limited.
Where we have more freedom of action is in adjusting human institutions to moderate the impact of technology as it evolves. This includes government regulation. Advanced AI already affects all of us, whether we are aware of it or not. It will affect all of us much more in the years ahead. We need institutions that can cope with the impact of AI, and this means that we need our political leaders and policy framers to understand AI. This in turn requires all of us to understand what AI is, what it can do, and the discussion about where it is going.
Increasingly, acquiring and maintaining a rudimentary understanding of AI is a fundamental civic duty.
The Bletchley Park summit on AI safety deserves two and a half cheers
The taboo is broken. The possibility that AI is an existential risk has now been voiced in public by many of the world’s political leaders. Although the question has been discussed in Silicon Valley and other futurist boltholes for decades, no country’s leader had broached it before last month. That is the lasting legacy of the Bletchley Park summit on AI Safety, and it is an important one.
It might not be the most important legacy for the man who made the summit happen. According to members of the opposition Labour Party, Britain’s Prime Minister Rishi Sunak was using the event to look for his next job. Faced with chaos in the Tory party, and a potentially damaging enquiry into his role in the management of Covid, he appears to be heading towards catastrophic defeat in the forthcoming general election. The lifestyle of another former British political leader, Nick Clegg, who gets paid a reported $30 million a year by Facebook to be Mark Zuckerberg’s (not terribly effective) flak catcher, must look attractive to Mr Sunak. His on-stage discussion with Elon Musk after the summit was described by several of the attending journalists as an embarrassingly fawning job application.
Cynics point to the fact that the summit was attended by very few heads of state. President Biden sent his deputy, Vice President Kamala Harris, and Chancellor Scholz of Germany and President Macron of France were notable for their absence. The announcement of a UK AI safety institute was upstaged by the announcement the day before the summit that the US would do the same. There is room in the world for more than one safety institute, but given that most of the world’s most advanced AI models are developed by US-owned companies, and the rest by Chinese ones, it is obvious which of these two institutes will be the more significant. The EU has the market power, thanks to its 450 million relatively wealthy consumers, to enforce regulations on big tech, even though it is home to none of them (unless you count Spotify). The UK does not. In AI as in other industries, the rules of the road will be determined where the roads and the cars are made.
Nevertheless, the Bletchley Park summit has got the world’s leaders talking seriously – for the first time – about the longer-term risks from AI, as well as about its staggering potential upsides. It took political courage to keep the longer-term aspects on the agenda when many pressure groups proclaim that the shorter-term risks are far more important, like privacy, bias, mass dis-information and industrial-scale personalised hacking. These risks are certainly important, but the idea that ensuring a future superintelligence is safe is a trivial or worthless endeavour is complacent and absurd. Even more risible is the claim, made seriously by some, that “tech bros” promulgate the idea of existential risk to deflect attention from the short-term harms they are causing, or planning to cause.
Another brave decision that the UK government made and stuck to was to invite China to the summit. China hawks like former PM Liz Truss railed against an invitation being extended to a country that spies against Britain. It is surprising that Ms Truss’ opinions continue to receive attention after her short-lived and disastrous tenure. Also, does anybody seriously think that the UK doesn’t spy on China in return? But in any case, with China being one of the only two countries that really matter in the global AI industry, excluding them would have been a mistake.
Whatever its shortcomings, the Bletchley Park summit has got the show on the road. Matt Clifford, the tech entrepreneur seconded to convene the event, deserves considerable praise. The world’s political leaders have spoken publicly about existential risk, and there is no going back. Another summit will be held in six months’ time, in South Korea, and a year from now the French will pick up the baton. This process may turn out to be by far the most positive part of Sunak’s legacy.
The declaration signed at the end of the summit by representatives of 28 governments does not actually use the word “existential”, but it tiptoes right up to the edge: “Substantial risks may arise from … issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict… There is potential for serious, even catastrophic, harm”.
The declaration is vague about what measures should be taken to ensure that advanced AI is safe for humanity, but it would be wholly unreasonable to expect a single event to raise such a fundamental question for the first time and also answer it definitively. Figuring out how to navigate the path towards superintelligence is the most important challenge humanity faces this century – and perhaps ever. Getting it right will take many more summits, and innumerable discussions in all parts of society.
The UN has just announced a high-level advisory body on AI, and the G7 has published a voluntary code of conduct for AI developers. There are calls for the establishment of organisations to do for AI what the IPCC does for global warming, and what CERN does for nuclear research. The debate about whether regulation will help or hinder the development of beneficial AI will rage for years. In a field as complex, fast-moving, and capital-hungry as advanced AI, it will inevitably be challenging for regulators to keep up with, let alone stay ahead of, the organisations that develop the technology. There is a genuine danger of regulatory capture, in which regulators end up imposing rules which entrench big tech's first-mover advantages.
But it is simply unacceptable to say that regulation is hard, and therefore the industry should go unregulated. We elect politicians to make decisions on our behalf, and they establish and direct regulators to make sure that powerful organisations play nicely. The AI industry and its cheerleaders cannot tell regulators and politicians (and by extension the rest of us) that our most powerful technology is something we are not smart enough to understand, and we should therefore leave the industry to do whatever it fancies.
It has been apparent for some years that AI was improving remarkably fast, and that the future foretold by science fiction was hurtling towards us, but until recently, most of us were not paying serious attention. I used to think that the arrival of self-driving cars would be the alarm clock that would wake people from their slumber; instead it was ChatGPT and GPT-4. The Bletchley Park summit has disabled the snooze button.
Arabian moonshots may hold huge implications for the whole world
After Silicon Valley, the United Arab Emirates (UAE) may be the most future-oriented and optimistic place on the planet. Futurism and techno-optimism are natural mindsets in a country which has pretty much invented itself from scratch in two generations. In that time its people have progressed from a mediaeval lifestyle to a 21st-century metropolitan one. So it is unsurprising that the UAE has been quick to spot the enormous future significance of artificial intelligence to all of us, and to pioneer its deployment.
It is not just the UAE. The leaders of all six members of the Gulf Cooperation Council (GCC – Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the UAE) see AI as an important component of their mission to transition their economies away from reliance on fossil fuels, and improve living standards for their people. They are looking to AI to help them develop alternative energy sources, create smart cities, improve government services, and build world-class industries in fintech, healthcare, and tourism.
Especially today, the oil-rich members of the GCC have both the financial resources and the ambition to become major players in the development and deployment of AI. Their revenues are boosted by Putin’s war in Ukraine, which has driven up the price of oil. Their ambitions are encouraged by the shocking capabilities of the generative AI systems which started grabbing headlines a year ago.
The Gulf states have an additional advantage over most other jurisdictions: their large expatriate workforces mean that they can automate fearlessly. To put it bluntly, if machines do end up taking more human jobs than they create, they can send their excess workers back home. This may help explain why the ordinary people of the GCC seem less afraid of AI than the populations of many other countries.
Back in 2017, the UAE appointed the world's first Minister of State for Artificial Intelligence. This year, Abu Dhabi's Technology Innovation Institute (TII) released Falcon, a Large Language Model (LLM) which many think is the most powerful open-source LLM in the world. Saudi Arabia wants to leapfrog the UAE in AI development and deployment, and both nations are buying as many advanced computer chips as they can get their hands on.
AI hubs are springing up all over the region. The UAE has an AI university, and Saudi Arabia and Qatar are not far behind. The Saudi Data and Artificial Intelligence Authority (SDAIA) was established to help realise the Kingdom's ambitious Vision 2030 initiative, spearheading rapid advancements in artificial intelligence. IDC, a market research firm, forecasts that GCC spending on AI will exceed $6bn a year by 2026, which represents faster growth than anywhere else in the world. Some insiders suspect this may turn out to be a gross underestimate. PwC, the audit and consulting firm, predicts that AI will add $320bn to the GCC economies by 2030.
Despite being generally optimistic about the future, the Gulf states are also profoundly socially conservative. People outside the region often focus on this aspect, with accusations of repressive attitudes towards women, the use of the death penalty, laws against being gay, arbitrary arrests and detentions – and sometimes murder – of both nationals and foreigners.
Reform is under way in much of the region, and progressing faster than most outsiders realise. This is especially true in Saudi Arabia, where women can now drive, and are increasingly strongly represented in the workforce. Listening to pop music was banned a few years ago; now young people congregate freely at music festivals. Inbound tourism used to be virtually impossible; now it takes just a few seconds to obtain a tourist visa online.
Critics are quick to point out that there is a long way to go, and the region’s rulers will often agree. In a recent interview, Mohamed bin Salman, the Crown Prince and de facto ruler of Saudi Arabia, said that he does not like the law under which a Saudi man has been condemned to death for posting criticism of the government on Twitter. He argued that it would be unlawful for him to intervene, but he hopes that a different judge will accept the man’s appeal.
Alongside their reform programmes, the region’s rulers are practising cultural diplomacy. Abu Dhabi spent around $1.4bn establishing a branch of the Louvre, and this figure is dwarfed by the sums spent by the region’s rulers on football and other sports at home and abroad. Lavish technology summits seem to take place in the region every month – sometimes weekly.
The Gulf’s rulers and people are justly proud of much of their heritage and their culture. They bridle at criticisms from countries which have yet to apologise for committing some of the worst crimes in human history – crimes which are not part of some long-distant past, but which were committed against people still alive today. GCC rulers are increasingly willing to wield their financial clout to emerge from the geopolitical shadows and assert their independence. New alliances are being forged, and traditional dependencies are being tested.
Whether it is fair or not, as the world increasingly understands the enormous importance of AI in our future, many people will be disturbed if some of the world’s most powerful AI systems are developed by the Gulf states. The region’s leaders may be tempted to dismiss these concerns as racist froth, but if they want to join the ranks of the leading AI countries, they will have to compete for top AI talent. This talent is highly mobile, and cannot always simply be bought. It needs to be seduced.
There is a tremendous opportunity lurking in this situation. The Gulf countries can most easily attract talented AI professionals by offering them the opportunity to solve truly significant, global problems. There is no shortage of major challenges to address. Advanced AI presents many risks, but it can also help us to solve the climate challenge. It can help us improve healthspan and lifespan. It will make all industries more efficient, raising living standards for everyone. It can automate the drudgery out of our everyday lives, and raise standards of education to levels previously undreamed of.
The rulers of the Gulf should use their wealth and ambition to launch a series of moonshots, seeking solutions to some of our most pressing problems, and placing themselves at the forefront of AI development in the process.