Calum Chace's Blog, page 3

June 22, 2023

The Four Cs: when AIs outsmart humans

Startling progress

On 14 March, OpenAI launched GPT-4. People who follow AI closely were stunned by its capabilities. A week later, the US-based Future of Life Institute published an open letter urging the people who run the labs creating Large Language Models (LLMs) to declare a six-month moratorium, so that the world could make sure this increasingly powerful technology is safe. The people running those labs – notably Sam Altman of OpenAI and Demis Hassabis of Google DeepMind – have called for government regulation of their industry, but they are not declaring a moratorium.

What’s all the fuss about? Is advanced AI really so dangerous? In a word, yes. We can’t predict the future, so we can’t be sure what future AIs will and will not be able to do. But we do know that their capability depends to a large degree on the amount of computational power available to them, and the quantity (and to a lesser extent the quality) of the data they are trained on. The amount of computational horsepower that $1,000 buys has been growing exponentially for decades, and despite what some people say, it is likely to continue doing so for years to come. We might be reaching the limits of data available, since the latest LLMs have been trained on most of the data on the internet, but that has also doubled every couple of years recently, and will probably continue to do so.

Exponential

So AIs are going to carry on getting more powerful at an exponential rate. It is important to understand what that word “exponential” means. Imagine that you are in a football stadium (either soccer or American football will do) which has been sealed to make it waterproof. The referee places a single drop of water in the middle of the pitch. One minute later he places two drops there. Another minute later, four drops, and so on. How long do you think it would take to fill the stadium with water? The answer is 49 minutes. But what is really surprising – and disturbing – is that after 45 minutes, the stadium is only about 6% full. The people in the back seats are looking down and pointing out to each other that something curious is happening. Four minutes later they have drowned. That is the power of exponential growth.
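The arithmetic behind the stadium example is easy to check. If the referee adds twice as many drops each minute, the cumulative total four minutes before the end is roughly a sixteenth of the final volume – about 6%. A minimal sketch:

```python
# Minute 0: 1 drop; minute k: 2**k drops. The cumulative total after
# minute k is a geometric series summing to 2**(k + 1) - 1 drops.
def cumulative_drops(minute):
    return 2 ** (minute + 1) - 1

capacity = cumulative_drops(49)   # the stadium is full at minute 49
at_45 = cumulative_drops(45)      # four minutes before the end

fraction_full = at_45 / capacity
print(f"{fraction_full:.1%}")     # about 6% -- barely noticeable from the back seats
```

Doubling processes always look like this: almost all of the growth happens in the last few doublings.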

Powerful AIs generate two kinds of risk, and we should be thinking carefully about both of them. The first type is sometimes called AI Ethics, but a better term is Responsible AI. This covers concerns such as privacy, bias, mass personalised hacking attacks, deep fakes, and lack of explainability. These are problems today, and will become more so as the machines get smarter. But they are not the things we worry about when we talk about machines getting smarter than us.

AI Safety

That is the realm of the other type of risk, which is known as AI Safety, or AI Alignment. The really big risk, the one that keeps Elon Musk and many other people up at night, is that AIs will become superintelligent – so smart that we become the second smartest species on the planet. That is a position currently held by chimpanzees. There are fewer than half a million of them, and there are eight billion of us. Their survival depends entirely on decisions that we humans make: they have no say in the matter. Fortunately for them, they don’t know that.

We don’t know whether a superintelligence will automatically be conscious, but it doesn’t have to be conscious in order to jeopardise our very existence. It simply needs to have goals which are inconvenient to us. It could generate these goals itself, or we could give it an unfortunate goal by mistake – as King Midas did when he asked his local deity to make everything he touched turn to gold, and discovered that this killed his family and rendered his food inedible.

We’ll come back to superintelligence, because first we should consider a development which is probably not an existential risk to our species, but which could cause appalling disruption if we are unlucky. That is technological unemployment.

Automation

People often say that automation has been happening for centuries and it has not caused lasting unemployment. This is correct – otherwise we would all be unemployed today. When a process or a job is automated it becomes more efficient, and this creates wealth, which creates demand, which creates jobs.

Actually it is only correct for humans. In 1915 there were 21.5 million horses working in America, mostly pulling vehicles around. Today the horse population of America is 2 million. If you’ll pardon the pun, that is unbridled technological unemployment.

To say that automation cannot cause lasting widespread unemployment is to say that the past is an entirely reliable guide to the future. Which is silly. If that were true we would not be able to fly. The question is, will machines ever be able to do pretty much everything that humans can do for money? The answer to that question is yes, unless AI stops developing for some reason. Automation will presumably continue to create new demand and new jobs, but it will be machines that carry out those new jobs, not humans. What we don’t know is when this will happen, and how suddenly.

If we are smart, and perhaps a bit lucky, we can turn all this to our advantage, and I explore how in my book The Economic Singularity. We can have a world in which humans do whatever we want to do, and we can have a second renaissance, with all humans living like wealthy retirees, or aristocrats.

What about superintelligence? Can we find a solution to that risk? I see four scenarios – the four Cs.

Cease

We stop developing AIs. Everyone in the world complies with a permanent moratorium. It might be possible to impose such a moratorium in the West and in China (which has already told its tech giants to slow down), and currently, almost all advanced AI is developed in these places. But as computational power and data continue to grow, it will become possible for rogue states like Russia and North Korea, for Mafia organisations, and for errant billionaires to create advanced AIs. The idea that all humans could refrain forever from developing advanced AI is implausible.

Control

We figure out how to control entities that are much smarter than us, and getting smarter all the time. We constrain their behaviour forever. There are very smart people working on this project, and I wish them well, but it seems implausible to me. How could an ant control a human?

Catastrophe

A superintelligence either dislikes us, fears us, or is indifferent to us, and decides to re-arrange some basic facts about the world – for instance the presence of oxygen in the atmosphere – in a way that is disastrous for us. Some people think this is almost inevitable, but I disagree. We simply don’t know what a superintelligence will think and want. But it is a genuine risk, and it is irresponsible to deny it.

Consent

The fourth scenario is that a superintelligence (or superintelligences) emerges and decides that we are harmless, and also of some value. If we are fortunate, it decides to help us to flourish even more than we were already doing. Personally, I think our most hopeful future is to merge with the superintelligence – probably by uploading our minds into its substrate – and take a giant leap in our own evolution. We would no longer be human, but we would live as long as we want to, with unimaginable powers of understanding and enjoyment. Which doesn’t sound too bad to me.


Published on June 22, 2023 09:44

GPT-4 and education. With Donald Clark

Aristotle for everyone

The launch of GPT-4 in March has provoked concerns and searching questions, and nowhere more so than in the education sector. Last month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model.

Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could have a personal tutor as effective as Aristotle, who was Alexander the Great’s personal tutor. When that happens, students should leave school and university far, far better educated than we were.

Donald Clark joined the London Futurists Podcast to discuss these developments. Clark founded Epic Group in 1983, and made it the UK’s largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and Board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.

Education is a slow learner

In that book, Clark joked that “education is a bit of a slow learner.” Education has long seemed relatively immune to the charms of technology, but GPT-4 and similar models look set to change that. Previously, most education technology projects were a bit like mosquitoes: lots of buzz, tricky to spot, and short-lived. Advanced AI will make past improvements in pedagogy look like rounding errors. GPT-4 has been trained on a corpus of text not far short of the sum total of all human learning. No human teacher gets anywhere near its breadth of knowledge.

A universal tutor based on GPT technology would not only deploy all this knowledge. It would also lack the fads and misconceptions which bedevil teaching at all levels, such as the long-debunked but still-prevalent theories of “learning styles”, including the idea that different people learn better aurally, visually, or kinaesthetically.

Personalised

AI tutors will be personalised – tailored to the needs of individual learners. We all take a different amount of time to master a subject, and grouping 30 students in a class makes it hard to cater for this. Giving everyone a personal AI tutor means that we can all master each part of a subject before we move on to the next part that builds on it. Many wealthy parents know this very well, and hire teachers to tutor their children in one-to-one sessions after school.

Khan Academy, an online teaching service set up in 2006, has been pioneering new pedagogical techniques since the outset, and it is now using GPT-4 to offer something approaching this personalised AI tutor. It acts as a Socratic teacher, asking questions rather than giving answers, and it practises mastery learning, or competency-based learning, urging you to stick with a unit until you have internalised it properly. Maths is perhaps the most important subject to do this in.

Duolingo is another company rushing to deploy services based on GPT-4 for its customer base of 100 million-plus. Having used both services myself, I have higher expectations for Khan Academy, although Duolingo does produce a pretty good podcast for Spanish language learners.

Raising standards in healthcare and education

Bill Gates wrote recently that the biggest impacts of advanced AI would be felt in education and healthcare. Clark reports that globally, the average general medical practitioner mis-diagnoses just under 5% of cases presented. AI will enable doctors to reduce this number considerably, to the point that it would be considered medical malpractice for a doctor to fail to at least consult an AI when making a diagnosis. There is no corresponding metric in education, but it is certainly true that human teachers vary, they are fallible, and they often fall behind the current thinking in both their subject and in pedagogical techniques. AI will help to raise standards enormously.

A disproportionate amount of the money spent on education is directed to the children of the wealthy, although increasingly, in most countries, families and students are contributing to that spend. Much less is spent on the less privileged 50% of the population. It is ironic, then, that GPT technology seems to threaten the incomes of white-collar people much more than blue-collar people. But in any case, Clark hopes that AI could enable us to re-balance the emphasis, and cultivate more and better learning in less wealthy parts of our communities.

Does the personalised tutor require AGI?

To get today’s GPTs to be fully-fledged Aristotelian personal tutors, they need to be improved in three ways. First they need long-term memories as well as the modest short-term memories (buffers) which they have today. Second, they need a world view, which can also be called common sense. Third, they need the ability to test and assess the provenance of their knowledge, which will eliminate the hallucinations they are currently prone to.

These are not trivial additions, and indeed would take a GPT a long way towards AGI, artificial general intelligence, a machine with all the cognitive capabilities of an adult human. GPT-4 already has elements of a theory of physics, but few people would claim that it fully understands its place in the world, or that of the humans it interacts with.

Which raises the question of whether the fully-fledged Aristotelian personalised tutor is possible before we reach AGI. And when we do reach AGI, we will already be in the era of superintelligence, which will raise a whole slew of much bigger questions.


Published on June 22, 2023 09:24

June 7, 2023

GPT-4 and the EU’s AI Act. With John Higgins

The EU AI Act

The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It is a risk-based approach.

John Higgins joined the London Futurists Podcast to discuss the AI Act. He is the Chair of Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the UK’s IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Famously, the EU contains no tech giants, so cutting edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world’s most pro-active regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.

Two Big Bangs in AI

When a new technology or service is launched it is hard to know the eventual scale of its impact, but like most observers, Higgins thinks that large language models will turn out to have a deep and broad impact.

The AI systems generating so much noise at the moment use a type of AI called transformers. These were launched in 2017 by a Google paper called “Attention Is All You Need”, riffing on the 1967 Beatles song “All You Need Is Love”. The launch of transformer AIs was arguably the second Big Bang in AI.

The first Big Bang in AI came five years before, in 2012, when the first deep learning systems were introduced by Geoff Hinton and colleagues, who thereby revived one of the oldest forms of AI, known as neural networks. These systems discriminate, in the sense that they classify things into appropriate categories. They analyse.

The second Big Bang in AI, the arrival of transformers, is giving us generative AI, which produce text, images, and music. Their abilities are remarkable, and they will have enormous impacts.

Anglo-Saxon vs Franco-German regulation

Higgins describes the Franco-German approach to regulation as seeking to enshrine a framework of rules before a new technology, product, or service can be deployed. (He acknowledges this is an over-simplification.) The Anglo-American approach, by contrast, is to allow the service to be launched, and fix any market failures once they have become apparent.

Traditionally, the Franco-German approach has prevailed in the EU, and especially so since Brexit. In addition there is a precautionary principle, which says that if there is a possibility of harm being caused, then measures should be in place to mitigate that harm at the outset.

This is not the same as saying that a technology cannot be deployed until possible harms are identified and eliminated. The EU has not collectively signed up to the Future of Life Institute (FLI) letter calling for a six-month moratorium on the development of large language models, and there is no general appetite to follow the Italian government’s short-lived ban on ChatGPT. The idea is to get a regulatory framework in place at the outset rather than to delay deployment.

Risk-based regulation

Regulators often face a choice between making rules about a technology, and making rules about its applications. The preference in the EU is generally to address the applications, and in particular any applications which may impact safety, and human rights. Higgins thinks this generally produces regulations which are not onerous, because they mostly oblige developers to take precautions that responsible ones would take anyway. For instance, they should ask themselves whether there is a likelihood of bias lurking in the data, and whether it is possible to be transparent about the algorithms.

When a large language model has been trained on a large percentage of all the data on the internet, it is obviously hard to avoid bias. So the developers of these models should be as open as possible about how they were trained. Organisations deploying AI need to consider the practices of all the other organisations in their supply chain, and what steps each of them has taken to mitigate possible harms. The level of scrutiny and precaution should vary according to the degree of possible harm. A healthcare provider should generally exercise greater diligence than a games developer, for instance.

Another over-simplification of the AI Act is that it instructs developers to take appropriate steps (undefined) to ensure that no harms (undefined) are caused which rise above a particular threshold (also undefined). Put like this it may seem unfair, but what alternatives are there? It is not possible to define all the possible harms in advance, nor all the possible steps to mitigate those harms. It is unacceptable to most people to allow developers to let rip and deploy whatever system they like without accountability, on the grounds (as argued recently by Eric Schmidt, former chair of Google) that regulators are completely out of their depth and will generally cause more problems than they prevent, so the tech companies must be left to govern themselves. It is also unacceptable to ban all new systems from being launched.

The EU process

The first step in the creation of EU legislation is that the Council of Ministers, which comprises ministers from the governments of the member countries, asks the EU Commission to draft some legislation. The Commission is the EU’s civil service. When that draft is ready, it is reviewed by a committee of the Council of Ministers, which then passes its revised version to the EU Parliament, the collection of MEPs who are elected by EU citizens. MEPs review the draft in various committees, and the result of that process is brought to the full Parliament for a vote. In the case of the AI Act, that vote is expected during June.

Finally, the three institutions (Commission, Council and Parliament) engage in negotiations called “trilogue”. The current best guess is that an agreed text of the AI Act will be ready by the end of the year.

The quality of legislation and regulation produced by this process is controversial. Some people think the 2016 General Data Protection Regulation (GDPR) is eminently sensible, while others – including Higgins – think it is a sledgehammer to crack a nut.

The FLI open letter

Two weeks after OpenAI launched GPT-4, the Massachusetts-based Future of Life Institute (FLI) published an open letter calling for a six-month moratorium on the development of large language models. The British Computer Society (BCS) which Higgins chaired published a response arguing against it. He argues that it is a bad idea to call for actions which are impossible, and that although good actors might observe a moratorium, bad actors would not, so the effect would be to deny ourselves the enormous benefits these models can provide, without avoiding the risks that their further development will cause.

EU tech giants

Higgins argues that the prize for the EU in using large language models and other advanced AI systems is to create improved public services, and also to enhance the productivity of its private sector companies. Europe is home to many well-run companies producing extremely high-quality products and services – think of the German car industry and the legion of small manufacturing companies in northern Italy. He thinks this is more important than creating a European Google.

Europe’s single market is a work in progress. It works well for toilet rolls and tins of beans, and as prosaic as that sounds, it is an enormously beneficial system, created by the hard work of people from all over Europe – not least the UK, where Margaret Thatcher was one of its earliest and strongest proponents, before she turned Eurosceptic. The lies and exaggerations about the EU that were spread for many years by the Murdoch press, Boris Johnson and others have concealed from many Britons what an impressive and important achievement it is. As the Joni Mitchell song says, you don’t know what you’ve got ‘till it’s gone, and the British are quickly realising how much they have lost by leaving. Unfortunately the malign hands of Murdoch, the Mail and the Telegraph continue to impede the obviously sensible step of re-joining the single market and the customs union, if not the EU itself.

Impressive as it is, the EU single market is far from complete, and Higgins believes that indigenous tech companies are more hindered by this than many other types of company. It is much easier to start and grow a tech giant in the genuinely single markets of the US and China than it is in Europe. Higgins argues that we cannot fix this, at least in the short term, so Europe should focus on the areas where it has great strengths. But it remains something of a mystery that the EU contains global champions in pharmaceuticals, energy, luxury goods, and financial services, to name a few, but seems unable to build any in technology.



Published on June 07, 2023 04:06

May 31, 2023

Longevity, a $56 trillion opportunity. With Andrew Scott

In unguarded moments, politicians occasionally wish that retired people would “hurry up and die”, on account of the ballooning costs of pensions and healthcare. Andrew J Scott confronts this attitude in his book, “The 100-Year Life”, which has sold a million copies in 15 languages, and was runner-up in both the FT/McKinsey and Japanese Business Book of the Year Awards. Scott joined the London Futurists Podcast to discuss his arguments.

Scott is a professor of economics at the London Business School, a Research Fellow at the Centre for Economic Policy Research, and a consulting scholar at Stanford University’s Center on Longevity. He is an adviser to the UK government and a variety of international organisations.

The aging of society is good news, not bad

People are living longer. Fewer children are dying before the age of five, fewer parents are dying while their children are young, and more people are surviving long enough to meet their grandchildren. This is surely a good news story, but it is too often seen as a bad thing because society is aging, and there are fewer working people to fund the pensions of those who have retired.

The aging of society means we must change public policy towards aging, and also our own attitudes. Whatever age you are today, you have more life ahead of you than previous generations did, which means that more needs to be invested in your future – by your government, but more importantly by you. You need to make investments in your future health, skills, and relationships.

We need more life stages

We need a new way of looking at the three stages of life: education, work, and retirement. It is easy to forget that this perspective is quite new: both teenagers and retirees are recently invented concepts. Governments are already trying to raise the retirement age, and reduce the level of pension payments, and this is understandably unpopular with many voters. It is unacceptable to simply extend the kind of working life familiar to baby boomers to make it last 60 years. We need to move to a multi-stage working life, with career breaks, periods of part-time work, and transitions to new industries becoming the norm rather than the exception.

In the twentieth century, people took much of their lifetime’s leisure after they retired. In this century, people will probably take more of their leisure before they retire.

“The 100-year life” is a non-academic book, and it has clearly struck a chord with a large audience, but Scott wants to help mould public policy, so he has also been writing papers on longevity economics in order to influence central bankers and government officials, the people who make the rules. One of his priorities has been to put a dollar value on extended lifespans. Governments have long placed a value on human life in the context of healthcare, with concepts like quality-adjusted life years, or QALYs, used to inform decisions about whether to fund particular medicines, for instance. They have not typically applied the same logic to the value of extending lifespan.
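The QALY idea can be sketched in a couple of lines: a health gain is the number of extra years, weighted by quality of life, multiplied by a monetary threshold. The quality weight and the dollars-per-QALY figure below are illustrative assumptions, not numbers from Scott's work:

```python
def qaly_value(extra_years, quality_weight, value_per_qaly):
    """Monetary value of a health gain, QALY-style.

    quality_weight: 1.0 = full health, 0.0 = death.
    """
    return extra_years * quality_weight * value_per_qaly

# One extra year lived at 80% of full health, valued at a
# hypothetical $150,000 per QALY:
print(qaly_value(1, 0.8, 150_000))  # 120000.0 dollars
```

Healthcare regulators use thresholds of this kind to decide which treatments to fund; Scott's point is that the same logic can be turned on interventions that extend lifespan itself.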

Four archetypes

Scott and some colleagues employed four archetypal characters to portray the possible outcomes of aging. The first are the Struldbruggs, from Jonathan Swift’s 1726 satirical novel Gulliver’s Travels. Members of this unfortunate race are immortal, but they do age. Struldbruggs above 80 are regarded as useless, and their health keeps deteriorating. Scott thinks this is how many of us currently view the prospects for senior citizens, and this is a serious challenge for policy makers.

The second archetype is Dorian Gray, the Oscar Wilde character who stopped aging until the moment he died, because he magically arranged for a portrait to age and decay instead. This archetype represents the outcome where people get older but do not age biologically, and then die suddenly.

The third archetype is Peter Pan, the character invented by J.M. Barrie. Peter ages, but only very slowly, and he lives a very long time. The fourth and final archetype is Wolverine, the Marvel comics character, who can reverse physical damage, including the damage caused by aging.

$56 trillion

Applying economic values to the extra healthspan and lifespan implied in the Dorian Gray and Peter Pan archetypes yields very large numbers. For example, extending every life in the USA for one year with no loss of health would generate $56 trillion. For most countries, the result is around 4% of GDP.
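As a back-of-envelope check, dividing the headline total by the US population gives the implied value per person of one extra healthy year. The population figure is an assumption of mine; only the $56 trillion comes from Scott's estimate:

```python
TOTAL_VALUE = 56e12            # $56 trillion for one extra healthy year, USA
US_POPULATION = 333_000_000    # assumed approximate 2023 population

per_person = TOTAL_VALUE / US_POPULATION
print(f"${per_person:,.0f} per person per healthy life-year")  # roughly $168,000
```

That per-person figure sits in the same broad range as the per-QALY values used in healthcare economics, which makes the headline number less fantastical than it first appears.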

Scott’s next book, called “Evergreen”, due out in March 2024, poses a simple question. You are very likely to get old. You fear getting old. What are you going to do to ensure that you age well?

Beyond 100

Adding an extra year of healthy life is all well and good, but there is an increasing number of smart people (including people that Scott works with) who think we could live much longer than that. Indeed, many people think that longevity escape velocity is not far away – the time when medical science gives you an extra year of life every year that passes, so you never actually get older. Scott is nervous about this because he thinks our institutions and our attitudes are not yet ready for it.

However, the social, economic and legal mechanisms required to handle everyone surviving to 100 in good health are pretty much the same as those required for us all to live to 120, or 500, or beyond.

Revolutions in longevity

We’ve had the first longevity revolution, which was to make survival rates in the first five years of life much higher. Arguably there was a second longevity revolution in which we tackled smoking, and many of the communicable diseases which stopped people progressing through middle age into old age. Globally, life expectancy at birth is now 73, which is an extraordinary achievement. We’re about to have the next longevity revolution (either the second or the third), which is to change how we age.

Many people find the idea of greatly increased longevity fantastical. They don’t believe it, and they often don’t like it. But if you talk instead about the malleability of aging, it makes more sense. We all know that rich people live longer, healthier lives than poor people. We all know some old people who are in great shape and others who are frail, and we know that these outcomes can be produced by healthier lifestyles as well as by the genetic lottery. It’s a short step from improving healthspan to improving lifespan.

The three-dimensional longevity dividend

Governments around the world are finally facing up to the demographic challenge, which means we need to work for longer. What they are signally failing to do, says Scott, is to help us to age better, and help us to re-train successfully, so that we can all enjoy the extra years of work. As our longevity increases, we need our healthspan to keep up, and we also need our productive capacity to do the same. Not just our ability to work for more years, but our ability to remain fully engaged with life – to enjoy it. Scott calls this a three-dimensional longevity dividend, and he thinks it is almost trivially obvious that this would be good for the economy too.

As usual, Singapore is ahead of the curve on these matters. The UAE is also focused on geroscience and associated social planning. More broadly, governments everywhere are addressing the problem that people start withdrawing from the labour market from the age of fifty. It’s not usually because they decide to retire early and enjoy a life of leisure. It’s usually because of poor health, or the need to care for someone in poor health, or because their skills become outdated, or because they face ageism in the workplace. Figuring out how to keep these people working – and enjoying their work – should be a top priority for policy makers.

Another development that Scott finds interesting is the concern raised by the falling life expectancy figures in the US and the UK. These governments do finally seem to be considering life expectancy to be an important indicator of policy success, along with GDP. He would like to see governments set targets for healthy life expectancy.

Make friends with your future self

You have more life ahead of you, so you need to make a friend of your future self, and give your future self more options. Those options require good health, good relationships, and good finances. It’s not rocket science, but it’s important.


Published on May 31, 2023 04:21

May 17, 2023

How to use GPT-4 yourself. With Ted Lappas

The last few episodes of the London Futurists Podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In the latest episode, Ted Lappas, a data scientist and academic, helps us to understand what GPT technology can do for each of us individually.

Lappas is Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London’s largest independent AI consultancy before it was acquired last year by the media giant WPP.

Head start

Lappas uses GPTs for pretty much every task that involves generating or manipulating text. This includes drafting emails, writing introductions to book chapters, summarising other people’s written work, and re-writing text. He also uses it to write computer code, something that GPT-4 is much better at than its predecessors. The main value in all of these use cases is to provide a head start. It gets past any terror of the blank page with a rough first draft – and in many cases the first draft is not all that rough; it’s maybe 70% of the way to final.

What is slowing GPTs down?

Given this formidable level of usefulness, why has GPT-4 not turned the world of work upside-down, at least for knowledge workers? Lappas thinks there are two reasons. The first is the fear that any data people are working with may find its way into the hands of competitors. The second reason is that before a mature ecosystem of plug-ins develops, GPT is just a brain: it cannot interact with the world, or even with the internet, which reduces its usefulness.

Plug-ins from the likes of Expedia and Instacart are changing this, as will systems like AutoGPT and BabyAGI, which have the ability to connect with other systems and apps. (AutoGPT and its like are hard to use and frustrating at the moment, but that will improve.) This unfolding ecosystem of extensions has been compared with the development of the iPhone app store, which made Apple’s smartphone an invaluable tool. It is much easier to create a plug-in for GPT-4 than it was to create an app for the iPhone: all it takes is 30 minutes and a basic grasp of the Python programming language.
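To give a sense of how little is involved, here is a hedged sketch of the two pieces a plug-in boils down to: a short JSON manifest telling the model what the plug-in does, and an ordinary web endpoint behind it. All names and the URL below are hypothetical placeholders, and a real plug-in also needs an OpenAPI spec and hosting.

```python
import json

# A minimal sketch of a 2023-era ChatGPT plug-in manifest. The model reads
# this to learn what the plug-in does and where its API lives.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Todo Helper",        # hypothetical example name
    "name_for_model": "todo",
    "description_for_model": "Add items to and read the user's todo list.",
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
}

# The API itself can be a single handler behind any web framework;
# here it is just a plain function standing in for the endpoint.
todos = []

def add_todo(item):
    """Add an item and return the updated list (what the endpoint would return)."""
    todos.append(item)
    return todos

print(json.dumps(manifest, indent=2))
add_todo("buy milk")
```

That really is most of the work: describe the API, host it, and let the model decide when to call it.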

Specific use cases

Lappas gives a specific example of how he uses GPT-4 in his work. He is a reviewer for an academic journal, and for each issue he is allocated around 15 papers. This means he has to read and summarise the papers themselves, plus around five reviews of each paper, and the “rebuttal” (responses) by the paper’s authors. This adds up to around 60 documents. There is no way to avoid reading them all, but GPT-4 makes it dramatically easier and faster for him to produce the summaries. He says it makes him more like a conductor of an orchestra than a writer, and it saves him five or six hours on each issue of the journal.

GPT-4 and marketing copy

Another field where Lappas is aware of GPT-4 making waves is the writing of marketing copy. He lives and works in Athens, and several of his friends there write posts for Amazon and other websites. If they are productive, they can churn out fifteen a day. Some are now using GPT-4 to produce the initial drafts, and some are not. Those who are not are starting to fall behind. He thinks that if he is seeing this happening in Greece, which is not a particularly technophile country, then it must be happening elsewhere too.

Websites like Fiverr and Upwork are forums where companies that need copy advertise projects for freelance copywriters to bid on. These sites are seeing an increase in the number of pitches for each piece of work, and the suggestion is that freelancers are using GPT-4 to increase the number of projects they can bid for and fulfil. Unfortunately the quality of the resulting work, which has often not been edited thoroughly, is not always high, and clients often warn that copy produced by machines will not be accepted. After all, if a machine can produce the copy then the client company could instruct GPT-4 themselves, rather than paying a freelancer to do it.

Higher up the value chain are copywriters who generate their content through bespoke interviews with the personnel of the client, or people suggested by the client, and then transform those discussions into sparkling English. Reports suggest that GPT-4 is making fewer inroads into this level of the market, although that will probably change in time. Lappas reports that his colleagues at WPP are acutely interested in how quickly this march up the value chain will happen.

GPT and travel writing

One of the ways this will happen is that the AI models can be fine-tuned by ingesting samples of work by an experienced copywriter. I do this to help with my “Exploring” series of illustrated travel books. On the GPT-4 platform Chatbase, I trained a new bot by feeding it the contents of six of my previous books. In the settings, I specify the parameters for the book in general, for instance telling the bot to avoid flowery language with lots of adjectives, and to avoid impressionistic introductions and conclusions.

I then write a tailored prompt for each new chapter of the book I am currently working on. Churning out copy at several hundred words a minute, the bot does a reasonable job of imitating my writing style, although I still have to do a substantial amount of editing and fact checking, both to improve the readability, and also to weed out the factual errors and the so-called “hallucinations” – the apparently random inventions it includes.

Exploring Marseille

Advice for the curious and the nervous

Lappas strongly advises everyone who wants to understand what GPTs are capable of now and what they will be capable of in future to play with the models. He also urges us all to invest the time and effort to learn a bit of Python, which will make a wide range of tools available. He is seeing some evidence that people are taking this advice in the rapidly growing number of TikTok videos sharing GPT-based success stories. He notes that GPT-4 itself can actually help you learn how to use the technology, which means it is now easier to learn Python than it was last year, and it is also more worthwhile.

Even if the idea of doing any coding at all is abhorrent, it is still worth playing with GPT-4, and it is worth using it on a project that has legs. It costs £20 a month to get access to GPT-4, so you might as well get your money’s worth. Diving into the model with no clear goal in mind may leave you relatively unimpressed. It is when the model helps you achieve a task which would otherwise have taken a couple of hours or more that you realise its power.

Generating images

GPT-4 was preceded by DALL-E, Midjourney, and many other image-generating AIs, but they are still harder to use effectively than their text-oriented relatives. GPTs are now pretty good at analysing and explaining imagery: identifying celebrities in photos, for instance, or explaining that a map of the world made up of chicken nuggets is a joke because maps are not usually created that way.

Midjourney is often said to be the best system for generating images from scratch, although it isn’t the easiest one to use. Like other, similar systems, it still struggles with certain kinds of images, notably hands, which often end up with six fingers instead of five.

Another useful process is known as in-painting, where the system ingests an image and edits it. For instance it could replace a cocker spaniel with a Labrador, and adapt the background to make the new image seamless. This process is not yet good enough for use by WPP agencies, Lappas reports, but it is close.

Open source

On platforms like Hugging Face you can find open source models, which you can tailor to your own requirements. The open source models are not yet as powerful as the models from the tech giants and the AGI labs (OpenAI and DeepMind) but they are improving quickly, and according to a recently tweeted memo that was presented as a leak from Google, they will soon overtake the commercial models. Their interfaces are less user-friendly at the moment, but again, that will probably change quickly.

2024 is the year of video

Lappas thinks that image generation will be largely solved by the end of 2023, and that 2024 will be the year of video. He concludes by saying that now is the time to jump in and explore GPTs – for profit and for fun.


Published on May 17, 2023 09:10

May 9, 2023

GPT: to ban or not to ban? That is the question

OpenAI launched GPT-4 on 14th March, and its capabilities were shocking to people within the AI community and beyond. A week later, the Future of Life Institute (FLI) published an open letter calling on the world’s leading AI labs to pause the development of even larger GPT (generative pre-trained transformer) models until their safety can be ensured. Geoff Hinton went so far as to resign from Google in order to be free to talk about the risks.

Recent episodes of the London Futurists Podcast have presented the arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In the most recent episode, David Wood and I, the podcast co-hosts, summarise the key points and give our own opinions. The following nine propositions and questions are a framework for that summary.

1. AGI is possible, and soon

The arrival of GPT-4 does not prove anything about how near we are to developing artificial general intelligence (AGI), an AI with all the cognitive abilities of an adult human. But it does suggest to many experts and observers that the challenge may be less difficult than previously thought. GPT-4 was trained on a vast corpus of data – most of the internet, apparently – and then fine-tuned with guidance from humans checking its answers to questions. The training took the form of an extended game of “peekaboo” in which the system hid words from itself, and tried to guess them from their context.

The result is an enormously capable prediction machine, which repeatedly selects the most likely next word in a sentence. Many people have commented that to some degree, this appears to be what we do when speaking.
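A toy illustration of next-word prediction: counting which word most often follows each word in a tiny invented corpus, a bigram stand-in for what an LLM learns from the whole internet at vastly greater scale and sophistication.

```python
from collections import Counter, defaultdict

# A tiny invented training corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count the words that follow it.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real LLMs work with probability distributions over tens of thousands of tokens rather than raw counts, but the core task, predicting what comes next, is the same.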

Opinion among AI researchers is divided about what is required to get us from here to AGI. Some of them think that continuing to scale up deep learning systems (including transformers) will do the trick, while others think that whole new paradigms will be needed. But the improvement from GPT-2 to 3, and then to 4, suggests to many that we are nearer than we previously thought, and it is high time to start thinking about what happens if and when we get there. The latest median forecast on the Metaculus prediction market for the arrival of full AGI is 2032.

2. AGI is an X-risk

It is extremely unlikely that humans possess the greatest possible level of intelligence, so if and when we reach AGI, the machines will push past our level and become superintelligences. This could happen quickly, and we would soon become the second-smartest species on the planet by a significant margin. The current occupants of that position are chimpanzees, and their fate is entirely in our hands.

We don’t know whether consciousness is a by-product of sufficiently complex information processing, so we don’t know whether a superintelligence will be sentient or conscious. We also don’t know what would give rise to agency, or self-motivation. But an AI doesn’t need to be conscious or have agency in order to be an existential risk (an X-risk) for us. It just needs to be significantly smarter than us, and have goals which are problematic for us. This could happen deliberately or by accident.

People like Eliezer Yudkowsky, the founder of the original X-risk organisation, now called the Machine Intelligence Research Institute (MIRI), are convinced that sharing the planet with a superintelligence will turn out badly for us. I acknowledge that bad outcomes are entirely possible, but I’m not convinced they are inevitable. If we are neither a threat to a superintelligence, nor a competitor for any important resource, it might well decide that we are interesting, and worth keeping around and helping.

3. Four Cs

The following four scenarios capture the possible outcomes.

• Cease: we stop developing advanced AIs, so the threat from superintelligence never materialises. We also miss out on the enormous potential upsides.
• Control: we figure out a way to set up advanced AIs so that their goals are aligned with ours, and they never decide to alter them. Or we figure out how to control entities much smarter than ourselves. Forever.
• Consent: the superintelligence likes us, and understands us better than we understand ourselves. It allows us to continue living our lives, and even helps us to flourish more than ever.
• Catastrophe: either deliberately or inadvertently, the superintelligence wipes us out. I won’t get into torture porn, but extinction isn’t the worst possible outcome.

4. Pause is possible

I used to think that relinquishment – pausing or stopping the development of advanced AIs – was impossible, because possessing a more powerful AI will increasingly confer success in any competition, and no company or army will be content with continuous failure. But I get the sense that most people outside the AI bubble would impose a moratorium if it was their choice. It isn’t clear that FLI has got quite enough momentum this time round, but maybe the next big product launch will spark a surge of pressure. Given enough media attention, public opinion in the US and Europe could drive politicians to enforce a moratorium, and most of the action in advanced AI is taking place in the US.

5. China catching up is not a risk

One of the most common arguments against the FLI’s call for a moratorium is that it would simply enable China to close the gap between its AIs and those of the USA. In fact, the Chinese Communist Party has a horror of powerful minds appearing in its territory that are outside its control. It also dislikes its citizens having tools which could rapidly spread what it sees as unhelpful ideas. So it has already instructed its tech giants to slow down the development of large language models, especially consumer-oriented ones.

6. Pause or stop?

The FLI letter calls for a pause of at least six months, and when pressed, some advocates admit that six months will not be long enough to achieve provable, permanent AI alignment or control. Worthwhile things could be achieved, such as a large increase in the resources dedicated to AI alignment, and perhaps a consensus about how to regulate the development of advanced AI. But the most likely outcome of a six-month pause is an indefinite one: a pause long enough to make real progress towards permanent, provable alignment. It could take years, or decades, to determine whether this is even possible.

7. Is AI Safety achievable?

I’m reluctant to admit it, but I am sceptical about the feasibility of the AI alignment project. There is a fundamental difficulty with the attempt by one entity to control the behaviour of another entity which is much smarter. Even if a superintelligence is not conscious and has no agency, it will have goals, and it will require resources to fulfil those goals. This could bring it into conflict with us, and if it is, say, a thousand times smarter than us, then the chances of us prevailing are slim.

There are probably a few hundred people working on the problem now, and the call for a pause may help increase this number substantially. That is to be welcomed: human ingenuity can achieve surprising results.

8. Bad actors

In a world where the US and Chinese governments were obliging their companies and academics to adhere to a moratorium, it would still be possible for other actors to flout it. It is hard to imagine President Putin observing it, for instance, or Kim Jong Un. There are organised crime networks with enormous resources, and there are also billionaires. Probably, none of these people or organisations could close the gap between today’s AI and AGI at the moment, but as Moore’s Law (or something like it) continues, their job would become easier. AI safety researchers talk about the “overhang” problem, referring to a future time when the amount of compute power available in the world is sufficient to create AGI, and the techniques are available, but nobody realises it for a while. The idea of superintelligence making its appearance in the world controlled by bad actors is terrifying.

9. Tragic loss of upsides

DeepMind, one of the leading AI labs, has a two-step mission statement: step one is to solve intelligence – i.e., create a superintelligence. Step two is to use that to solve every other problem we have, including war, poverty, and even death. Intelligence is humanity’s superpower, even if the way we deploy it is often perverse. If we could greatly multiply the intelligence available to us, there is perhaps no limit to what we can achieve. To forgo this in order to mitigate a risk – however real and grave that risk – would be tragic if the mitigation turned out to be impossible anyway.

Optimism and pessimism

Nick Bostrom, another leader of the X-risk community, points out that both optimism and pessimism are forms of bias, and therefore, strictly speaking, to be avoided. But optimism is both more fun and more productive than pessimism, and both David and I are optimists. David thinks that AI safety may be achievable, at least to some degree. I fear that it is not, but I am hopeful that Consent is the most likely outcome.


Published on May 09, 2023 12:17

May 4, 2023

Evomics deploys AI and nuclear medicine in the fight against cancer

AI and nuclear medicine

Some of the most innovative uses of artificial intelligence in healthcare today are in the field of nuclear medicine, and thanks to AI, nuclear medicine is demonstrating great potential for cancer treatment. There are around 20 million new cancer cases a year, and around 10 million deaths, which is around one in six of all deaths. The problem, and therefore the opportunity, is vast.

One of the leading companies in the field is Evomics, based in Shanghai and Vienna. It is using the same technology to develop both diagnostics and therapies, and it has ambitious plans for its EV101 compound, which it thinks could become a $10bn blockbuster.

Nuclear medicine compared with radiation therapy

It is easy to confuse nuclear medicine with radiation therapy, or radiotherapy. Radiotherapists bombard tissue with radiation from an external source in order to remove or reduce cancerous cells. Nuclear medicine, by contrast, injects radioactive molecules into the bloodstream, where they act as a drug. The radioactive molecule is known as a “radioligand” (from the Latin “ligare”, to bind), and it can perform a diagnostic function or a therapeutic one.

Once inside the body, the molecule “recognises” proteins expressed by tumorous cells and binds to them. The radioligand then decays, emitting positrons, the antimatter counterparts of electrons. When a positron encounters an electron they annihilate each other, producing a pair of high-energy photons, which are detected by a device called a Positron Emission Tomography (PET) scanner. This is the diagnostic mode of nuclear medicine.
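The energy of those photons follows from one line of arithmetic: each annihilation converts the rest mass of the electron-positron pair into two photons of about 511 keV each, emitted back to back, which is what the scanner looks for in coincidence.

```python
# E = m_e * c^2 for one particle of the pair gives the energy of each
# annihilation photon (constants are CODATA values).
m_e = 9.1093837015e-31   # electron rest mass, kg
c = 299_792_458          # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

photon_energy_keV = m_e * c**2 / eV / 1e3
print(round(photon_energy_keV))  # ~511 keV per photon
```

Detecting two 511 keV photons arriving simultaneously on opposite sides of the patient lets the scanner draw a line through the annihilation point, and many such lines build up the image.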

In therapeutic mode, the radioligand delivers a dose of radiation to the tumorous cell that it binds to, and causes it to die. The radioactivity is limited, so several rounds of treatment are usually needed to tackle the cancer.

Theranostics

Evomics uses the same compound for both diagnostic and therapeutic applications, but in different dosages. EV101 is the therapeutic application and EV201 is the diagnostic version. In combination they are known as a “theranostic” (therapy + diagnostic).

The proteins targeted by the Evomics radioligand are known as fibroblast activation proteins (FAPs). Fibroblasts are long, thin cells which normally help create tissue structure, including collagen, but when they malfunction they can replicate uncontrollably and support tumour growth. The FAP targeted by EV101 and EV201 is associated with almost all forms of cancer.

“I don’t fear failure”

Evomics’ CEO, Dr Shiwei Wang, co-founded the company with his twin brother Shifeng, and two Vienna-based professors of nuclear medicine, Dr Li Xiang and Dr Marcus Hacker. Before starting Evomics, Dr Wang spent a decade investing in biotech startups, while working in corporate development in the pharmaceutical industry. His employer was not a financial investor, but was looking to secure access to and experience of important emerging technologies. This gave Dr Wang privileged insights into the most promising new technologies. He decided that the combination of nuclear medicine and AI would generate enormous benefits for patients, and would fuel the rise of major new businesses.

Dr Wang started his new company with backing from his former employer, but Evomics remains low-profile at present, and its other sources of funding are undisclosed.

During his stint in corporate development, Dr Wang collaborated with the venture capital community centred on Sand Hill Road in Silicon Valley, and one of the things he learned from them was that a great technology or a great product is not enough to guarantee success. In addition, a business founder needs exceptional motivation. Starting and growing a business is not for the faint-hearted: resilience and self-belief are essential for anyone who is trying to do things a new way. Dr Wang told me that he thinks his single greatest asset as CEO of Evomics is that “I don’t fear failure”. Interestingly, he thinks his self-confidence might stem in part from being a twin.

Applying AI

Deep learning AI systems are at the heart of Evomics’ philosophy, and essential to its work. They optimise the planning and preparation of medical interventions, analyse the images produced, and automate and accelerate the production of reports that clinicians can use. By making all stages of the process more efficient, the AI can reduce the number of scans required, and also the dosage of radiation the patient is subjected to. Speed is essential in cancer treatment: by accelerating the analysis and getting valuable insights to clinicians sooner, AI enables earlier and more accurate diagnoses, and saves lives.

AI algorithms can also provide a valuable second opinion when clinicians disagree about the interpretation of images or data.

Evomics is also working on deploying large language models, the transformer AI models which have captivated the world in the form of ChatGPT and GPT-4, to analyse and write up the findings of its other techniques.

Heart disease

The gamma rays given off by radioligands can be useful beyond cancer: Evomics is also developing software for SPECT imaging. SPECT stands for single-photon emission computed tomography, which can detect the presence of clogged arteries which could cause heart disease.

Hurdles and opportunities

Significant challenges remain to the successful application of AI in nuclear medicine. The models become more effective when trained on large data sets, but the volume of data available today is limited, and it is rarely organised and labelled consistently. Most deep learning models are trained on two-dimensional images, whereas the scanned images from nuclear medicine are three-dimensional. And most importantly, patients, clinicians, and regulators must trust AI before it can be universally deployed. Understandably, many still regard AI systems as unexplainable black boxes.

These hurdles were reduced to some degree by the Covid-19 pandemic, which provided a significant boost to the use of AI techniques in medical imaging, including in nuclear medicine.

In 2016, AI researcher Geoff Hinton, known as the father of deep learning AI, famously said that “if you work as a radiologist you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath.” He was wrong, in that today, seven years later, human radiologists are still in demand, and indeed many countries have a shortage of them. But as the cliché goes, even if AI won’t take your job any time soon, a human who knows how to work with AI probably will.

The bigger picture is that AI is making more and better technologies available, and we all benefit from that.


Published on May 04, 2023 12:27

The AI suicide race. With Jaan Tallinn

From Skype to Safe AI

In the 1990s and early noughties, Jaan Tallinn led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He was also one of the earliest investors in DeepMind, before they were acquired by Google. Since then, he has been a prominent advocate for study of existential risks, including the risks from artificial superintelligence. He joined the London Futurists Podcast to discuss the recent calls for a pause in the development of advanced AI systems.

Two Cambridge XRisk organisations

In the previous decade, Tallinn co-founded not one but two Cambridge organisations studying the existential risk from AI and other developments. He describes the Centre for the Study of Existential Risk (CSER, pronounced as Caesar), founded in Cambridge, England in 2012, as a “broad-spectrum antibiotic” because it looks at a wide range of risks, including climate change as well as AI. It is more research-based and more academic, and hence slower moving than the Future of Life Institute (FLI). Established in Cambridge, Massachusetts, in 2014, FLI focuses on the two most urgent risks – nuclear weapons and AI.

From Jaan to Yann

FLI’s leadership claims it is now apparent that AI risk is orders of magnitude more serious than all other risks combined, but this is a controversial claim, rejected by highly credentialed members of the AI community such as Yann LeCun, Chief AI Scientist at Meta. In fact the debate over AI risk has been tagged as a struggle between Jaan and Yann.

There is absolutely no doubt in Tallinn’s mind that the risk from advanced AI is important and urgent. He invites people to look at the progress from GPT-2 to GPT-4, and ask themselves what probability they would assign to control over AI development being yanked from human hands. How likely is it that GPT-6 will be developed by GPT-5 rather than by humans?

Since the launches of ChatGPT and GPT-4, there seem to be fewer people arguing that artificial general intelligence (AGI), an AI with all the cognitive abilities of an adult human, is impossible, or centuries away. There are some, including members of the editorial team at The Economist, as demonstrated in the previous episode of the London Futurists Podcast. But most XRisk sceptics argue that AGI is decades away rather than centuries away, or impossible. All over the world, Tallinn says, people are waking up to the progress of AI, and asking how a small number of people in Silicon Valley can be allowed to take such extraordinary risks with the future of our species.

Moratorium

As a result, FLI has changed its strategy. It used to call for debate, but now it is calling for an immediate pause in the development of advanced AI. On 22 March – a week after the launch of GPT-4 – it published an open letter, calling for a six-month pause in the development of large language models like GPT-4.

In a sense, this means that OpenAI’s strategy is working. Its CEO Sam Altman has long argued that making large language models available to the public was important to spark a general debate about the future of AI, and that is now happening. What is not happening, however, is collaboration between the leading labs on AI alignment. Instead they seem to be locked, perhaps reluctantly, in a race to develop and deploy the most advanced systems. Tallinn labels this a suicide race.

People who are uncertain about the likely timing of the arrival of superintelligence should be open to the suggestion of a delay. The harder job is to persuade people – like Yann LeCun – who are confident that superintelligence is nowhere near. Tallinn says he is unaware of any valid arguments for LeCun’s position.

Pause or stop?

FLI had a lot of internal discussion about whether to call for a six-month pause, or an indefinite one. One argument for the six-month version was that if the leading labs were unable to agree to it, that would demonstrate that they are caught up in an unhealthy and uncontrollable competition. Another argument for it was to forestall the objection that a moratorium would enable China to overtake the US in AI. It is implausible that China can erase the US’ lead in advanced AI systems in just six months. As it turns out, China has already instructed its tech giants to slow down the development of large language models: the Chinese Communist Party does not welcome consumer-facing products and services which could dilute its control.

Tallinn acknowledges that he is not really seeking a six-month pause, but a longer one, perhaps indefinite. He accepts that six months is not likely to be sufficient to ensure that large language models are provably safe in perpetuity. He argues that they are “summoned” into existence by being trained on vast amounts of data. They are then “tamed” by Reinforcement Learning from Human Feedback (RLHF). He fears that before long, a trained model may emerge which is impossible to tame.

So what can be achieved during a pause, if not AI alignment? FLI has published a document called “Policymaking in the Pause”, which calls for more funding for AI alignment research, and for the strict monitoring and regulation of leading AI labs.

Losing the upside potential of AI

A major problem with the pause, and one of the main reasons why people oppose it, is that an indefinite pause would deprive us of the upside potentials of advanced AI – which many people believe to be enormous. DeepMind has a two-step mission statement: solve intelligence, and use that to solve everything else. And they really mean everything: problems like war, poverty, even death may well be preventable if we can throw enough intelligence at them.

An end to the development of large language models does not necessarily mean an end to the development of all AI. But there is no doubt that these models are by far the most promising (and hence risky) types of AI that we have developed so far.

Bad actors

Even if the labs and the governments in the US and China agreed to a moratorium, it is hard to imagine North Korea or Putin’s Russia agreeing. There are also international crime syndicates and a large number of billionaires to consider. It may be possible to detect the creation of large language model labs today, as they require such large numbers of expensive GPUs, and consume so much energy. But these constraints will fall rapidly in the coming years as Moore’s law or something like it continues.

If push comes to shove, Tallinn accepts that it might one day be necessary to take military action against an organisation or a state which persisted in creating the wrong kind of AI lab.

But whatever bad actors might do, Tallinn argues that a race to create ever-more powerful AIs is a suicide race: even if you knew that a mass murderer was rampaging and that your family’s death was inevitable, you wouldn’t kill your family yourself. He also argues that a pause will buy us time to either make AI safe, or work out how to avoid it being developed when the necessary hardware and energy costs are much lower.

AI leaders and public opinion

Tallinn reports that the leaders of the main AI labs – with the exception of Yann LeCun at Meta – are somewhat sympathetic to FLI’s arguments. They don’t say so in public, but they are aware that advanced AI presents serious risks, and many of them have said that there could come a point when the development of advanced AI should slow down. Tallinn thinks that like everyone else, they have been taken by surprise by the dramatic improvement in large language model performance.

Anecdotal evidence suggests that when lay people think seriously about the possible risks from advanced AI, they support the idea of a pause. The question now is whether Tallinn and his colleagues can raise it far enough up the media agenda. He is getting numerous calls every day from journalists, and he thinks the public will become more and more agitated, and the question will become, how to “cash out” that concern to disrupt the suicide race.

Over the next few months, Tallinn hopes that the US government will require all AI labs to be registered, and then that no AI models requiring more than 10^25 FLOPs (floating point operations) of training compute will be allowed – long enough, he believes, for us to work out how to stop the suicide race.
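For a sense of scale, a rough calculation shows what a cap of 10^25 FLOPs of training compute would mean in practice. The per-accelerator throughput below is an illustrative assumption (roughly the order of magnitude of a modern datacentre GPU at low precision), not a measured benchmark:

```python
# Back-of-envelope: how long would a training run capped at 10^25 FLOPs take?
THRESHOLD_FLOPS = 1e25       # proposed cap on total training compute
GPU_FLOPS_PER_SEC = 1e15     # assumed ~1 petaFLOP/s sustained per accelerator
SECONDS_PER_DAY = 86_400

def days_to_reach_cap(num_gpus: int) -> float:
    """Days for a cluster of `num_gpus` accelerators to consume the cap."""
    flops_per_day = num_gpus * GPU_FLOPS_PER_SEC * SECONDS_PER_DAY
    return THRESHOLD_FLOPS / flops_per_day

for cluster in (1_000, 10_000):
    print(f"{cluster:>6} GPUs: {days_to_reach_cap(cluster):.0f} days")
```

On these assumptions, a 10,000-GPU cluster would hit the cap in under two weeks – which is why, as noted above, such hardware-based constraints are expected to weaken rapidly as Moore’s law or something like it continues.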

Published on May 04, 2023 12:22

April 30, 2023

Is AGI possible? With Kenn Cukier

Most media coverage of AI is weak

The launch of the large language model known as GPT-4 has re-ignited the debate about where AI is going, and how fast. A paper by researchers at Microsoft (the major investor in OpenAI, the creator of GPT-4) claimed to detect in GPT-4 some sparks of AGI – artificial general intelligence, a system with all the cognitive abilities of an adult human. The Future of Life Institute, a US-based organisation that studies existential risks, published an open letter calling for a six-month pause in the development of advanced AI.

But with a few honourable exceptions, and despite the best efforts of many individual journalists, the coverage of AI in most media outlets remains pretty poor. The usual narrative is “Look at this shiny, scary new thing. But don’t worry, it will all turn out to be hype in the end.”

The Economist, an honourable exception

One of those honourable exceptions is The Economist, and Kenneth Cukier, its Deputy Executive Editor, joined the London Futurists Podcast to discuss the prospects for advanced AI. Until recently, Cukier was the host of the paper’s weekly tech podcast Babbage, and he is a co-author of the 2013 book “Big Data”, a New York Times best-seller that has been translated into over 20 languages.

It has been said that The Economist is great at predicting the past but bad at predicting the future. Recently, it has improved its coverage of the future a great deal in one respect – namely its coverage of AI. For the first few years after the 2012 Big Bang in AI, The Economist used to delight in sneering about AI hype, but now it is hard to think of any media outlet that understands today’s AI better or explains it better. Not sparing his blushes, Cukier played a significant role in that change.

One thing that The Economist still doesn’t do with regard to AI is cast its eye more than five years or so into the future. It avoids discussing what AGI and superintelligence will mean, and it avoids any genuine exploration of the Economic Singularity – the point when machines can do pretty much everything that humans can do for money. Cukier suggests that these developments are probably fifty years away, and that although this is within the probable lifespan of some of the paper’s younger staff members, newspapers have not generally been in the business of looking that far ahead.

Increasingly sceptical

He acknowledges that informed speculation could be useful to readers, as perceptions of the future have important secondary effects, such as determining choices about what to study, or what careers to aim for. But he is increasingly sceptical that machines will ever fully replace humans in the workplace, or that AGI is possible. In this respect, he seems to be heading in the opposite direction to most observers, and he seems to be at odds with the people who are working towards the goal of AGI, including Sam Altman and Demis Hassabis, who run the world’s two leading AI labs, OpenAI and DeepMind respectively. Both are confident that AGI is possible, and Altman thinks it may be created within a decade. The central estimate on the prediction market Metaculus for the arrival of a basic form of AGI is currently 2026, just three years away. (It was 2028 when we recorded the episode, which was before the release of GPT-4.)

Cukier thinks that the debate over advanced AI is challenging because people have different definitions of things like AGI, and some of the underlying concepts have turned out to be unhelpful. For instance, he suggests that machines first passed the Turing Test some years ago, but that the test turned out to be about deception and human frailty rather than about machine capability. This is only true if you regard the test as something passed in a few minutes; a more interesting version would take at least 24 hours and involve a number of judges well-versed in AI. The futurist Ray Kurzweil has bet the entrepreneur Mitch Kapor $20,000 that a machine will pass this version of the test by 2029. (Personally, I think the Turing Test identifies consciousness, not intelligence.)

AGI is a tricky concept – even a crazy one

The concept of “general” intelligence is tricky. Pretty much all humans have it, but the level varies enormously between us. So what level does an AI have to reach to be considered an AGI?

Cukier goes further. He thinks the idea of an AI which has all the cognitive abilities of an adult human is “crazy” and unattainable. He also thinks it would be undesirable even if it was possible, because humans do so many unwise things – falling in love, smoking cigarettes, getting confused about maths.

He also thinks there is a magical, spiritual dimension to human intelligence, which can never be replicated within a machine. This leads him to conclude that machines can never become conscious, whatever AI engineers may claim about consciousness (as well as intelligence) being substrate independent.

The ship of Theseus

A useful thought experiment to test this claim is to consider a future person who has been diagnosed with a fatal brain disease. They can be rescued by replacing their fleshy neurons, one at a time, with silicon equivalents. Obviously we don’t have the technology to do this today, but there seems to be no reason why it couldn’t happen in the future. At what point in the changeover process would the person’s consciousness disappear? This is known as the Ship of Theseus question, after a famous ship in ancient Greece which underwent so many repairs down the years that not one original component remained. There was vigorous debate among Greek philosophers about whether it remained the same ship or not.

In the case of the ship, the question is academic: it doesn’t really matter whether the ship’s identity is preserved. In the case of the patient, it matters a great deal whether consciousness is preserved.

J.S. Bach and Carl Sagan

Until recently, it was thought that only humans could write music that inspires profound emotions. Today, machines can write music in the style of Bach which listeners have sometimes found as affecting as the original. The choice of Bach is not random: asked what message humanity should broadcast into deep space, the biologist Lewis Thomas suggested the complete works of Bach, conceding that this would be bragging. (Some of Bach’s music was in fact included on the golden records carried by the Voyager probes.)

Cukier responds that machines can only imitate existing creators; they cannot create a new type of art in the way that human innovators have done repeatedly throughout history.

Understanding flight

One argument advanced by Cukier and others for the claim that we will never build machines that equal the human mind is that we do not understand how the mind works. The problem with this argument is that we know it is not necessary to understand something in order to create or re-create it. The Wright brothers did not understand how birds fly, and they did not understand how their own flying machines worked. But the machines did work. Likewise, steam engines were developed before the laws of thermodynamics were formulated.

Cukier offers a final, dramatic thought on the subject of fully human-level AI: it would be the pinnacle of human hubris, and even idolatrous of us to seek to become godlike by creating AGI.

Published on April 30, 2023 12:59

Against pausing AI research. With Pedro Domingos

Should AI research be paused?

Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slow-down desirable, given that AI can also lead to very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slow-down is desirable, is it practical?

Professor Pedro Domingos of the University of Washington is best known for his book “The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World”. It describes five different “tribes” of AI researchers, each with their own paradigms, and it argues that progress towards human-level general intelligence requires a unification of these different approaches – not just a scaling up of deep learning models, or combining them with symbolic AI. (The other three tribes use evolutionary, Bayesian, and analogical algorithms.) Domingos joined the London Futurists Podcast to discuss these questions.

GPTs

Generative Pre-Trained Transformers, or GPTs, are currently demonstrating both the strengths and the weaknesses of deep learning AI systems. They are very good at learning, but they also just make things up. Symbolic AIs don’t do this, because they are reasoning systems. Some researchers still think that the remarkable abilities of GPTs indicate that there is a “straight shot” from today’s best deep learning systems to artificial general intelligence, or AGI – a system with all the cognitive abilities of an adult human. Domingos doubts this, although he can imagine a deep learning model being augmented with some other types of AI to produce a hybrid system that was widely perceived as an AGI.

In fact, Domingos thinks that even a hybrid system which employed techniques championed by all five of the tribes he describes would still fall short of AGI. Humans can recognise many breeds of dogs as dogs, after seeing a couple of pictures of just one breed. None of the AI tribes has a clear path to achieving that. He thinks that AI is doing what all new scientific disciplines do: it is borrowing techniques from other fields (neuroscience, statistics, evolution, etc.) while it figures out its own, unique techniques. He suspects that AI cannot be a mature field until it has developed its own unique techniques.

Timeline to AGI

Domingos has developed a neat answer to the impossible but unavoidable question of when AGI might arrive: “a hundred years – give or take an order of magnitude”. In other words, anywhere between ten years and a thousand. Progress in science is not linear: we are in a period of rapid progress right now, but such periods are usually separated by stretches in which relatively little happens. The length of these relatively fallow periods is determined by our own creativity, so Domingos likes the American computer scientist Alan Kay’s dictum that the best way to predict the future is to invent it.

The economic value of AGI would be enormous, and there are many people working on the problem. The chances of success are reduced, however, because almost all of those people are pursuing the same approach, working on large language models. Domingos sees one of his main roles as trying to widen the community’s focus.

Criticising the call for a moratorium

Domingos is vehemently opposed to the call by the Future of Life Institute (FLI) for a six-month moratorium on the development of advanced AI. He has tweeted that “The AI moratorium letter was an April Fools’ joke that came out a few days early due to a glitch.”

He thinks the letter’s writers made a series of mistakes. First, he believes the level of urgency and alarm about existential risk expressed in the letter is completely disproportionate to the capability of current AI systems, which he is adamant are nowhere near to AGI. He can understand lay people making this mistake, but he is shocked and disappointed that genuine AI experts – and the letter has been signed by many of those – would do so.

Secondly, he ridicules the letter’s claims that GPTs will cause civilisation to spin out of control by flooding the internet with misinformation, or by destroying all human jobs in the near term.

Third, he thinks it is a risible idea that a group of AI experts could work with regulators over a six-month period to mitigate threats like these, and ensure that AI is henceforth safe beyond reasonable doubt. We have had the internet for more than half a century, and the web for more than thirty years, and we are far from agreeing how to regulate them. Many people think they cause significant harms as well as great benefits, yet few would argue that they should be shut down, or development work on them paused.

Three camps in the AI pause debate

There are three schools of thought regarding a possible pause on AI development. Domingos is joined by Yann LeCun, Andrew Ng and others in thinking we should not pause, because the threat is not yet great, and the upsides of advanced AI outweigh the threat. The second school is represented by Stuart Russell, Elon Musk and others who are calling for a pause. The third school’s most prominent advocate is Eliezer Yudkowsky, who thinks that AGI may well be near, and that the risk from it is severe. He thinks all further research should be subject to a relentlessly enforced ban until safety can be assured – which he thinks could take a long time.

These camps consist largely of people who are smart and well-intentioned, but unfortunately the debate about FLI’s open letter has become ill-tempered, which probably makes it harder for the participants to understand each other’s point of view. Domingos acknowledges this, but argues that the signatories to the letter have raised the temperature of the debate by making outlandish claims.

In fact he notes that the debate about the open letter is not new. Rather, it is surfacing a long-standing debate between people in and around the AI community, which was already acrimonious.

Stupid AI and bad actors

Domingos thinks another of the mistakes in the letter is that it addresses the wrong problems. Even though he thinks AGI could conceivably arrive within ten years, he thinks it is about as likely that he will get struck by lightning, something he does not worry about at all. He does think it would be worthwhile for some people to be thinking about the existential risk from AGI, but not a majority. He thinks that by the time AGI does arrive, it is likely to be so different from the kinds of AI we have today that such preparatory thinking might turn out to be useless.

Domingos has spent decades trying to inform policy makers and the general public about the real pros and cons of AI, and one of the reasons the FLI letter irritates him is that he fears it is undoing any progress he and others have made.

GPT-4 has read the entire web, so we humans make the mistake of thinking that it is smart, like any human who had read the entire web would be. But in fact it is stupid. And the solution to that stupidity is to make it smarter, not to keep it as stupid as it is today. That way it could make good judgements rather than bad ones about who gets a loan, who goes to jail, and so on.

In addition to its stupidity, the other main short-term risk that Domingos sees from AI is bad actors. Cyber criminals will develop and deploy better AIs regardless of what the good actors do, and so will governments which act in bad faith. Arresting the development of AI by the better actors would be like decreeing that police cars can never improve, even as criminals drive faster and faster ones.

Control

Domingos thinks that humans will always be able to control the objective function (goal) of an advanced AI, because we write it. It is true that the AI may develop sub-objectives which we don’t control, but we can continuously check the AI’s outputs, and look for constraint violations. He says, “solving AI problems is exponentially hard, but checking the solutions is easy. Therefore powerful AI does not imply loss of control by us humans.” The challenge will be to ensure that control is exercised wisely, and for good purposes.
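The asymmetry Domingos appeals to – solving is hard, checking is easy – is the familiar one behind NP problems. A minimal sketch in Python, using subset-sum as a stand-in example of my own (not one Domingos gives): finding a subset that hits a target takes exponential search in the worst case, while verifying a proposed answer is a quick linear pass.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Find a subset summing to target by brute force – exponential in len(nums)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset works

def check_subset_sum(nums, target, candidate):
    """Verify a proposed solution – a cheap linear pass over the inputs."""
    available = {}
    for n in nums:
        available[n] = available.get(n, 0) + 1
    for n in candidate:          # every claimed element must really be available
        if available.get(n, 0) == 0:
            return False
        available[n] -= 1
    return sum(candidate) == target

nums = [3, 9, 8, 4, 5, 7]
solution = solve_subset_sum(nums, 15)            # expensive search
print(solution, check_subset_sum(nums, 15, solution))  # cheap check
```

The same shape applies to Domingos’s argument: even if we cannot follow how an AI found its answer, we can still test that answer against the constraints we wrote down.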

He speculates that maybe at some point in the future, the full-time job of most humans will be checking that AI systems are continuing to follow their prescribed objective functions.

Published on April 30, 2023 12:56