Calum Chace's Blog, page 2

October 12, 2023

The legal singularity. With Ben Alarie

The law is a promising area for AI

The legal profession is rarely accused of being at the cutting edge of technological development. Lawyers may not still use quill pens, but they’re not exactly famous for their IT skills. Nevertheless, the profession has a number of characteristics which make it eminently suited to the deployment of advanced AI systems: lawyers are deluged by data, and commercial law cases can be highly lucrative.

One man who knows more about this than most is Benjamin Alarie, a Professor at the University of Toronto Faculty of Law, and a successful entrepreneur. In 2015, he co-founded Blue J, a Toronto-based company which uses machine learning to analyse large amounts of data and predict a court’s likely verdict in legal cases. It is used by Canada’s Department of Justice and the Canada Revenue Agency.

Alarie has just published “The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.” He joined the London Futurists Podcast to discuss the future of AI in the legal profession.

Automation

One way in which AI is impacting the law is automation. Traditionally, a lot of legal work was repetitive and robotic. Junior lawyers working on transactions and on litigation spent days cooped up in “deal rooms”, working their weary way through piles of boxes, looking for the word or phrase in a document which could undermine a deal, or clinch a lawsuit. This is known as “discovery” in the US and “disclosure” in the UK. Machines excel at the close analysis of huge volumes of text, and much of this work has already been automated.

For instance, a human lawyer would take days to identify and summarise the change-of-control provisions in multiple commercial leases. A machine can do the same work in seconds. Law firms have not become less profitable since this work was automated by machines, so the firms are probably doing a great deal more of it, but at lower unit cost. They have learned how to sell the capabilities of their machines instead of simply selling the time of their junior employees.
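Work of this kind boils down to scanning large document sets for clauses that match known patterns. As a minimal sketch of the idea (hypothetical code, not any firm’s actual tooling), the snippet below flags paragraphs in lease documents that look like change-of-control provisions, so that a human only reads the matches:

```python
# Hypothetical sketch of machine-assisted document review: flag paragraphs in
# lease files that resemble change-of-control provisions. Folder name, file
# format, and keyword pattern are all illustrative assumptions.
import re
from pathlib import Path

PATTERN = re.compile(
    r"change\s+of\s+control|assignment\s+of\s+(?:this\s+)?lease|merger|acquisition",
    re.IGNORECASE,
)

def flag_clauses(folder: str) -> dict:
    """Return, for each lease file, the paragraphs matching the pattern."""
    hits = {}
    for doc in Path(folder).glob("*.txt"):
        paragraphs = doc.read_text(encoding="utf-8").split("\n\n")
        matches = [p.strip() for p in paragraphs if PATTERN.search(p)]
        if matches:
            hits[doc.name] = matches
    return hits

for name, clauses in flag_clauses("leases").items():
    print(f"{name}: {len(clauses)} candidate change-of-control clause(s)")
```

In practice the heavy lifting is now done by language models rather than regular expressions, but the workflow is the same: the machine filters, the human reviews.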

Predictions

Lawyers make a lot of predictions. They are forever second-guessing each other, and also the judges who will decide their cases. Their advice to clients is based on these predictions. Deep learning and generative AIs are prediction machines, so they should be extremely helpful to lawyers.

When judges make decisions, they take into account all the evidence that is formally presented. But they also take into account the way the parties present themselves – their dress, their accents, their posture, their gestures, and their facial expressions. They may try to downplay some of this information, in order to avoid being biased or prejudiced. But as humans, they cannot help but be influenced by it. Indeed it is often a critical part of their job to make judgments about the honesty of a defendant or a witness, and about their ability to observe and explain the circumstances of a case.

Machines do not have that human ability to assess the disposition or capability of a party in a case. What they do have is the ability to remember every detail of the hundreds of thousands of prior cases which could be relevant to a particular decision. This is what enables Alarie’s company Blue J to predict the outcome of a particular case with over 90% accuracy. This is impressive: large amounts of money depend on the decision whether or not to proceed with a case, which is why predicting the likely outcome, and advising the client whether or not to go to court, has traditionally been the job of the most senior lawyers.
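Blue J’s models and training data are proprietary, but the general shape of outcome prediction is familiar supervised learning: encode features of past cases, train a classifier, and read off a probability for a new dispute. A toy sketch, with entirely invented features and data:

```python
# Toy sketch of case-outcome prediction. The features, data, and model choice
# are illustrative assumptions, not Blue J's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical encoded features of past cases: claim size, jurisdiction,
# counts of favourable and unfavourable precedents, and so on.
X = rng.normal(size=(n, 6))
# Hypothetical outcomes (1 = taxpayer wins), loosely driven by the features.
y = (X @ np.array([1.5, -0.8, 0.6, 0.0, 2.0, -0.4]) + rng.normal(size=n) > 0).astype(int)

model = GradientBoostingClassifier()
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# For a new dispute, the model returns a probability rather than a verdict.
model.fit(X, y)
new_case = rng.normal(size=(1, 6))
print("predicted probability of winning:", model.predict_proba(new_case)[0, 1])
```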

Centaurs

As is so often the case with AI at this stage of its development, the ideal situation is to combine the capabilities of the machine and the human lawyer. The pairing of the two is often compared with the mythical centaur, which was half-man and half-horse. In this case the combination is the machine’s comprehensive knowledge of past cases, and the human’s ability to assess other people. The interesting thing, though, is that the machines are quickly getting much better, and the humans are not. And humans don’t scale, whereas machines do.

As AI is increasingly widely used, and the accuracy of predictions improves, a likely result is that a smaller proportion of cases will go to trial because the litigants will all have a better understanding of whether they would win or lose. There might, however, be a countervailing increase in the amount of strategic litigation, in which people launch carefully-chosen cases in order to nudge the law in a direction they favour.

The legal singularity

When AI systems are much improved, could the law become a solved problem, with machines able to predict the outcome of any potential lawsuit with such high confidence that no cases ever reach a court? Alarie calls this the legal singularity. There could also be a legislators’ singularity, in which AI helps politicians and administrators to frame new laws and adjust existing ones in real time, making them more precise, fairer, and more efficient.

Alarie’s intuition is that the law is not a determinate system – in other words, it will never be possible to forecast all cases accurately. However, he does think that the system can and will move a long way towards being determinate from where it stands today, and that everyone will benefit from that happening.

Of course, the beneficent outcome is not guaranteed. The dark version of the legal singularity would have one or more AIs imposing harsh control on all of us, perhaps on behalf of an autocratic ruler, or perhaps in service of its own totalitarian logic. But Alarie is an optimist, and expects that the sunnier version will prevail, especially if enough people of good will start thinking about these issues soon, and working out how to steer the evolution of the law in the right direction.

What’s new in Longevity? With Martin O’Dea

Martin O’Dea is the CEO of Longevity Events Limited, and the principal organiser of the annual Longevity Summit Dublin. In a past life, O’Dea lectured on business strategy at Dublin Business School. He has been keeping a close eye on the longevity space for more than ten years, and is well placed to speak about how the field is changing. O’Dea sits on a number of boards including the LEV Foundation, which was set up by Aubrey de Grey with a mission to prevent and reverse human age-related disease. O’Dea joined the London Futurists Podcast to discuss what we can expect from the forthcoming Longevity Summit in Dublin.

Long-lived animals

O’Dea is understandably reluctant to pick favourites among the speakers appearing in the four days of the summit, but when pushed, he nominates two speakers who will talk about animals with very long lifespans. Emma Teeling specialises in research on bats, which have much longer lifespans than you would expect given their size. Steve Austad has recently published a very well-received book, “Methuselah’s Zoo”, which points out that evolution has developed a wide range of strategies to avoid cancer in long-lived species. Scientific research has tended to focus on short-lived species because the impact of interventions can more easily be studied in them, so we still have a lot to learn from longer-lived animals.

Another highlight for O’Dea will be a talk by Michael Levin, who researches the electrophysiology of the cell, which involves stimulating cells with electrical impulses to alter the development of an organism.

A four-day conference sounds like a lot of stage time to fill, but O’Dea insists that the real problem was reducing the number of speakers to fit the time available. Longevity is one of the world’s fastest-growing and most exciting areas of scientific research, and this is increasingly understood by investors, the media, and members of the general public.

How mainstream is longevity science?

The focus of the Dublin summit is the harder problems of longevity – the problems that cannot easily be addressed by commercial organisations. Aubrey de Grey has pioneered this kind of research for decades, and there have been ups and downs in that time. 2013 was a particularly interesting year, with the publication of seminal research on the hallmarks of aging, and Google’s founding of Calico, a surprisingly secretive organisation using big data to try to understand the mechanisms of aging.

O’Dea has the sense that the idea of science giving us all much longer lifespans and much better healthspans is on the cusp of becoming mainstream. Every few years there is a new breakthrough which gets us closer to that tipping point, but it is impossible to know what will finally get us across the threshold.

A few years ago it was big news if a research team received a million-pound grant. Now that is commonplace. Last year one group raised £180m, and it was not a major news story within the longevity community.

The media is a little behind the investment community. The Dublin Summit will be covered by the New Scientist, and a couple of significant documentaries will be filmed there. Mainstream outlets like the BBC, CNN, and the world’s major newspapers are still not devoting much attention to the summit, but O’Dea feels sure it won’t be long before they do.

Lifespan and healthspan

As for the general public, O’Dea acknowledges that the idea of radically extended lifespans is still too much to swallow for most people, but the idea of defeating some of the major diseases that afflict us as we age is not. It is ironic that most people would be delighted to learn that heart disease, cancer, and dementia had all been overcome, even though they look askance at calls to stop aging itself, which is what causes those three major killers.

Tackling aging is not only important because it can stop us all dying from this trio of fatal diseases. It is also vital to making our later years endurable, indeed enjoyable. Sadly, most people don’t die quietly and suddenly in their sleep. Most of us will endure years of pain and worry as we fight one or more of the three killer diseases. These afflictions also impose huge financial burdens on the taxpayer. Most of the money that your country’s health service will ever spend on you is spent in your final years – indeed, often in your final year – and if we could improve healthspan as well as lifespan, we could lift this burden.

Aubrey de Grey’s current project is to achieve robust mouse rejuvenation, which means giving an extra year of life to middle-aged mice. The project is a large study costing a great deal of money, and O’Dea argues that it is the most important piece of scientific research in the longevity field – and perhaps any scientific field. The study’s 1,000 mice have not yet lived long enough to announce any major results at the summit, but there may be important findings to talk about next year.

$100 billion

Although there is an encouraging increase in the amount of money dedicated to longevity research, we still need multiples more, because the mechanisms of aging are fantastically complex. Instead of hundreds of millions of dollars, we need hundreds of billions. In a previous podcast, Andrew Steele (who will also be at the summit) argued that we should not speculate about how many years it will take to reach longevity escape velocity (the moment when science gives you an additional year of life for every year that passes). Instead we should talk about how much money it will take. His best guess is that the amount required is in the ballpark of $100 billion.
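Longevity escape velocity is easy to model on the back of an envelope. In the toy calculation below (my own illustration, not Steele’s), each calendar year costs you one year of remaining life expectancy, while research adds some back at a compounding rate:

```python
# Toy model of longevity escape velocity. Starting values and growth rate are
# invented for illustration only.
def escape_year(start_year=2024, remaining=35.0, gain=0.3, growth=1.15):
    """Return the year research gains outpace ageing, or None if too late."""
    year = start_year
    while gain < 1.0:            # before LEV, every year is a net loss
        remaining += gain - 1.0  # ageing costs a year; research gives some back
        if remaining <= 0:
            return None          # died before research caught up
        gain *= growth           # assume research gains compound
        year += 1
    return year                  # from here on, gains outpace ageing

print(escape_year())  # roughly the early 2030s, under these made-up numbers
```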

The best part of any conference is always said to be the networking, and O’Dea says this is particularly true of the Longevity Summits: no corporate non-disclosure agreements oblige anyone to keep information from anyone else, and the community’s underlying purpose is exciting and energising.

August 11, 2023

Investing in AI, With John Cassidy

Kindred Capital

Venture capital is the lifeblood of technology startups, including young companies deploying advanced AI. John Cassidy is a Partner at Kindred Capital, a UK-based venture capital firm. Before he became an investment professional, he co-founded CCG.ai, a precision oncology company which he sold to Dante Labs in 2019. He joined the London Futurists Podcast to discuss how venture capital firms are approaching AI today.

Kindred Capital was founded in 2015 by Mark Evans, Russell Buckley, and Leila Zegna. It has raised three funds, each of around $100 million, and is focused on early-stage investments, known in the industry as pre-seed and seed rounds. It likes to invest in platforms, and in picks and shovels, meaning businesses which can become part of the essential infrastructure for many larger companies. Its preferred sectors are ‘techbio’ (by which Cassidy means tech-focused biotech businesses), software (especially software as a service, or SaaS), energy, and fintech. Its main geographies are Europe, the UK, and Israel.

Among its recent AI investments is Scarlet, which is building a continuous compliance infrastructure for companies operating in the highly regulated medical software industry. Another is Cradle Bio, a generative AI tool which allows protein engineers to use deep learning AI systems and models like AlphaFold to identify new and better proteins for medicines and industrial enzymes.

Bubbles and reality

The venture capital industry is highly cyclical, and notoriously prone to excess. In recent years it has applied over-exuberant valuations to blockchain companies and to companies offering ten-minute delivery services, but the dotcom bubble at the turn of the century was perhaps the most infamous example. Cassidy hopes the current wave of excitement about AI is different from those situations. There is some exaggeration of the capabilities of transformer AIs, and some people argue breathlessly that they are virtually artificial general intelligence (AGI) systems, which is not true. But underlying that hubris, large language models and generative AI are starting to demonstrate the transformational capabilities that will ensure this is no bubble, because they can create real efficiencies, and generate real money.

It is often said that an economic boom is like a gold rush, and in a gold rush you are better off selling picks and shovels to the miners than digging or panning for gold yourself. Nvidia is a great example of a company doing the equivalent of selling picks and shovels to miners, and its valuation is exuberant. Cradle Bio, Kindred’s portfolio company that helps protein engineers use generative AI to design molecules for medicines and industrial enzymes, is also in the picks and shovels business. Cassidy says the number of proteins which scientists have studied so far is vanishingly small compared to the number of all possible proteins: it’s like the ratio between a single grain of sand and all the sand in the world. So there is a lot to go for.
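The grain-of-sand comparison is, if anything, an understatement, as some quick arithmetic shows (my numbers, for scale only): a protein just 100 amino acids long already has far more possible sequences than there are atoms in the observable universe.

```python
# Back-of-envelope scale check: possible 100-residue proteins (20 amino acids
# per position) versus the roughly 0.25 billion sequences catalogued to date.
possible = 20 ** 100           # possible 100-residue sequences, ~10^130
catalogued = 250_000_000       # rough order of magnitude of known sequences
print(f"possible sequences:  ~10^{len(str(possible)) - 1}")
print(f"fraction catalogued: ~10^-{len(str(possible // catalogued)) - 1}")
```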

The trick for investors, of course, is to identify which of the companies operating in the new value chains will be successful, and which are castles built on sand. Some of the biggest companies in the world today, like Amazon and Google, were formed during the dotcom bubble, but a great many more disappeared without trace, taking large pools of capital with them.

Founders

At the pre-seed stage, the factor which matters most is the capability of the founder or founders. During the journey from startup to successful exit (stock market flotation, or sale to a bigger company), everything about the company will change, including its technology, its product, and its business model. Pretty much the only thing that can remain constant is the founder. Cassidy spends his time trying to identify and develop relationships with founders and potential founders who have the spark (“the creative destruction in their being”) which means they have an outside chance of starting a company and guiding it through all the enormous changes and challenges that lie between the start point and the finish point.

These founders are extraordinarily talented and driven, but that is not enough. They have to be irrational enough to believe that they can change the world – that they can lift themselves by tugging on their own shoelaces – while also having great judgement, which tells them which strategies and tactics will work in a given situation, and which ones won’t.

Cassidy suggests it is useful here to apply the model of fluid and crystallised intelligence, which was first suggested in 1963 by the psychologist Raymond Cattell. Crystallised intelligence is the trump card of older people, who have seen many of the possible strategies deployed, and learned from experience what works and what doesn’t. They also know the written and unwritten rules which guide organisations. Fluid intelligence is the ability – more evident in younger people – to solve problems from first principles, and to ask “why do we do it this way?” when everyone else takes a sub-optimal approach for granted. The best founders possess both these types of intelligence.

Lessons from Silicon Valley

Cambridge is where Cassidy went to do his PhD, and he was enchanted by the geeky conversations he overheard in pubs, where people talked about how to engineer new proteins. As he was growing his precision oncology business, CCG.ai, he also spent a lot of time in Silicon Valley, where the conversation in bars was all about how to create new types of company, and how to be successful in new and creative ways. He thinks Cambridge (and indeed Europe as a whole) has a lot to learn from Silicon Valley, and there is much to do to improve the availability of growth capital and to build a helpful institutional environment. But he is confident it can be done, because of the exceptional talent emerging all the time from universities there.

There is still a fear of failure in Europe, whereas in Silicon Valley if you start a company, raise some money, but fold the company again six months or a year later, nobody holds that against you. This should be second nature to scientists, who make progress by disproving one hypothesis in order to develop a better one.

Europe is also in danger of hobbling its tech industry by regulating both the products and services it develops, and also the mergers and acquisitions that enable it to reward success. If the only way to exit a successful high-growth business is to float it on the NASDAQ, then Europe cannot expect to build a cluster of home-grown tech giants.

Another factor often cited to explain why Europe has no tech giants is that its single market remains a work in progress, with Brexit being a big step backwards. Cassidy argues that the US’s single market is also imperfect, at least in his area of healthcare, as individual states have different regulatory frameworks. He also argues that any company that wants to scale must learn how to work in different environments, and starting a company in Europe can mean you simply acquire the skills to do that sooner.

Focusing AI on clinical trials

Cassidy is excited about the future of AI in biotechnology. Much of the current action in healthcare AI is devoted to designing new molecules, but the biggest hurdles to getting new drugs to market lie in the clinical trial process that lies downstream of protein engineering. This is where pharmaceutical companies spend the vast majority of their budgets – and their time. AI could enable efficiency improvements – large and small – which would collectively get drugs to patients much faster and much more cheaply.

The Death of Death. With Jose Cordeiro

An enthusiastic transhumanist

One of the most intriguing possibilities raised by the exponential growth in the power of our technology is that within the lifetimes of people already born, death may become optional. This idea was championed with exuberant enthusiasm by Jose Cordeiro on the London Futurists Podcast.

Jose Cordeiro was born in Venezuela, to parents who fled Franco’s dictatorship in Spain. He has closed the circle by returning to Spain (via the USA) while another dictatorship grips Venezuela. His education and early career as an engineer were thoroughly blue chip – MIT, Georgetown University, INSEAD, then Schlumberger and Booz Allen.

Today, Cordeiro is the most prominent transhumanist in Spain and Latin America, and indeed a leading light in transhumanist circles worldwide. He is a loyal follower of the ideas of Ray Kurzweil, and in 2018 he co-wrote “The Death of Death” with David Wood.

Immortal cells and organisms

Cordeiro has been described as “a hopeless optimist always bursting with energy”. He proclaims that life is beautiful, and we should all enjoy more of it than nature has endowed us with. Some of his optimism about the prospects for longevity stems from the existence of immortal cells in our bodies, and the existence of immortal organisms, like bacteria, some hydras, and some kinds of jellyfish. They don’t age, so if they are not killed by predators or accidents, they can live indefinitely. Bacteria are the oldest life form on the planet, so life on Earth actually started without aging built in.

Ray Kurzweil is a polarising figure, but he deserves much credit for alerting many people to the astonishing impact of Moore’s Law, which is the observation that $1,000-worth of compute gets twice as powerful every 18 months. Moore’s Law means that compute power is growing exponentially, and Kurzweil realised decades ago that this could give us machines with all the cognitive capabilities of adult humans within his lifetime. In the 1980s, Kurzweil was working at MIT with Marvin Minsky, one of the founding fathers of the science of artificial intelligence. Cordeiro studied there, and when he took some courses with Minsky, he came across Kurzweil, and read his book, “The Age of Intelligent Machines”.

Living with death

It’s an odd fact that many people are blasé about the idea of radically extended longevity. There is a very common tendency to say that 80 years is a good and proper length of time to live, and wanting more is greedy and inappropriate. Cordeiro thinks this attitude arises from our need to make death less horrifying. We convince ourselves that death gives meaning to life, and so, to coin a phrase, we are able to live with death.

But is there any reason to believe that humans could be given radically longer lifespans in the near term? The oldest person who ever lived died at the age of 122 back in 1997, and average life expectancy in the US and the UK has actually declined in recent years.

Methuselah worms

Cordeiro argues that in the last decade or so, exciting progress has been made on extending the lifespans of various animal models: the lifespans of some mice have been doubled. Some fruit flies have had their lifespans multiplied by four, and some worms by ten, so there are now so-called “Methuselah worms” that have lived the human equivalent of 1,000 years.

No human has had their lifespan extended like this, but some human cells have been rejuvenated. The 2012 Nobel Prize for Medicine was given to a Japanese scientist called Shinya Yamanaka. His team have proved that skin cells can be rejuvenated, and now he is working on eyes, which are relatively small organs without many connections to the rest of the body. They have succeeded with mice and with monkeys, and human tests are starting.

The most recent advances have taken scientists by surprise, because they are enabled by the exponential growth in the power of computer technology, and of new techniques like CRISPR-Cas9. This exponential growth also means that future advances will come much faster than most of us expect.

If cancer can stop aging, so can we

Most of the cells in our bodies age, but cells known as germ cells, which are responsible for reproduction, do not. They make eggs in women and sperm in men, and they exist in all multi-cellular organisms. The other type of cells, which do age, are called somatic cells, or body cells. If somatic cells mutate and become cancerous, then they do not age either. Cordeiro jokes that if cancer can learn how to stop aging, then so can we.

There is no single theory about how and why aging happens that is universally accepted. Instead there is vigorous debate between the proponents of a variety of theories. For instance, some people think that aging is like the wear and tear of a car. Parts of a car get rusty, or fall off because a screw works loose, and similar processes occur at the cellular level in biological organisms. Other people think that aging is built-in obsolescence. Over millions of years, evolution has repeatedly “discovered” that a species thrives when its older members die, not least because this allows younger, improved members of the species to take over.

Cordeiro takes a radical approach to this debate: he dismisses it as unimportant. He argues that all we need to do is to work out how the cells and organisms that do not age manage to avoid it, and then copy those techniques.

Evolution was wrong

Cordeiro also has no time for the argument that evolution arranged for us to age, so there must be a good reason for it. He points out that evolution has endowed us with many defects that science has enabled us to overcome, such as disease, and deteriorating eyesight. He adds that aging takes such varied forms that it cannot have a single purpose. Even within the class of vertebrates called mammals, there are mice which live two years, and whales that live more than two hundred years. Aging must be doing very different things in these animals to have such different manifestations.

The optimism that longevity research will make great advances in the coming years stems partly from the exponential rate of improvement of technologies that it is using, and also partly from the fact that so much more resource is being applied to it now. A few years ago, the amount of money invested in the research was in the $millions. Today it is in the $billions, and soon it will be $trillions. Cordeiro believes that within a few years, longevity medicine will be the largest industry in the history of humanity. He is convinced that Ray Kurzweil is right to believe that by 2029 we will achieve longevity escape velocity (LEV), which means that every year that passes, science gives you an extra year of life to offset the year you just spent. The implication of this is that if you manage to live to 2030, death should become optional for you.

Death and politicians

Politicians really should pay attention to these developments, not just because the end of aging would be the most significant development in human history, but also because there is a huge longevity dividend. Age and the diseases it causes – heart disease, dementia, and cancer – consume most of the health budget of every country on the planet, and health systems are barely managing to cope. If we can cure aging we can slash this cost.

The most useful contribution that a region or a country could make, Cordeiro argues, would be to declare aging a curable disease. This would attract massive funding, and an influx of scientific talent. 90% of human deaths are caused by aging and age-related diseases. All the other causes – malaria, suicide, drugs, war, famine, and so on – account for only 10%.

AI and professional services. With Shamus Rae

Collar colour

Not long ago, people assumed that repetitive, blue-collar jobs would be the first to be disrupted by advancing artificial intelligence. Since the arrival of generative AI, it looks like white-collar jobs – accounting, management consulting, the law – will be impacted first. Who would have guessed that lawyers would find themselves at the cutting edge of technology?

Shamus Rae is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit firms. Shamus joined the London Futurists Podcast to discuss how AI will impact professional services in the next few years.

Shamus was ideally placed to launch Engine B, having spent 13 years as a partner at the audit firm KPMG, where he was Head of Innovation and Digital Disruption. But his background is in technology, not accounting. Back in the 1990s he founded and sold a technology-oriented outsourcing business, and then built a 17,000-strong outsourcing business for IBM in India from scratch.

Data

The top priority for Engine B is data. Shamus argues that unless an organisation’s data is up-to-date, accurate, and held in standardised formats, advanced AI cannot do anything useful with it. So Engine B spends a lot of its time obtaining the right data from clients, and making it comply with those standards. It is about getting the plumbing right, as Shamus puts it.

Most of this data is used by the kind of pattern-recognition deep learning models that were introduced by the 2012 Big Bang in AI. But Engine B is also starting to use the generative AI that was introduced by the 2017 Big Bang, and is building co-pilots for its smaller clients – the larger firms are building their own.

The audit firms used to think there was competitive advantage in their data models, and their individual approaches to handling client data, but when the ICAEW reviewed the approaches, they found they were all pretty much the same. This shouldn’t be surprising: data science is not a core skill for accountants, and nor should it be.

None of this means that data is unimportant, or that confidentiality does not matter. Engine B is religious about never looking at the content of client data, and never copying it, anonymising it, or storing it.

Data swamps and data lakes

Most of the data held by most companies is in a bad state, and has to be cleaned up and regularised before it can be used. To coin a phrase, data swamps have to be transformed into data lakes. For example, a large company will lease many buildings; each of these leases is likely to have evolved over time, and it may not be immediately obvious which lease is the current and applicable one. You can find this out by correlating information from payment records, and this can be done automatically, without a human nosing around in the data.
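As a hypothetical sketch of that lease example (illustrative code, not Engine B’s product), the applicable lease can be inferred by matching recent payment amounts against the rent stated in each lease version:

```python
# Hypothetical sketch: infer which lease version is live by correlating lease
# terms with actual payment records. All data and column names are invented.
import pandas as pd

leases = pd.DataFrame({
    "building": ["HQ", "HQ", "Depot"],
    "lease_id": ["HQ-2015", "HQ-2021", "DEP-2019"],
    "monthly_rent": [10_000, 12_500, 4_000],
})
payments = pd.DataFrame({
    "building": ["HQ", "HQ", "Depot"],
    "month": ["2023-04", "2023-05", "2023-05"],
    "amount": [12_500, 12_500, 4_000],
})

# The applicable lease is the version whose stated rent matches what is
# actually being paid -- found automatically, with no human in the data.
latest = payments.sort_values("month").groupby("building").last().reset_index()
current = latest.merge(leases, on="building").query("amount == monthly_rent")
print(current[["building", "lease_id"]])
```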

There are about 300 accounting systems used by large companies around the world, and most of them can be tailored to particular client requirements. In addition, some clients – like Tesla – actually write their own accounting systems rather than using the industry standards like SAP and Oracle. So the variety of accounting systems that Engine B has to tap into is enormous. Nevertheless, it claims to be able to start extracting useful data from almost any accounting system within an hour.

Engine B currently has paying clients in the US and the UK, and works on the audits for 50,000 of the companies that its clients work for. Its clients are global firms, and they will shortly be rolling out the service elsewhere in Europe and the rest of the world. It expects to be working on around 200,000 audits in a year’s time, so growth is fast. Shamus says the company is more advanced in the accounting sector than the legal sector, but that the arrival of generative AI is changing the balance.

Training future partners

When the prospect is raised of AI automating the simpler, more repetitive tasks in auditing, the same question is always asked: how will young accountants get trained? Shamus replies that the skills acquired during years of “ticking and bashing” can be acquired less painfully and more quickly. In future, accountants might conclude that their predecessors were put through that process as a sort of therapy for the generation that had endured it before them.

A comparable process for lawyers was to wade through thousands of legal documents in a “deal room” during the review of a transaction like an investment or an acquisition. Much of this “disclosure” work has now been automated, with no apparent loss of expertise within the legal profession.

New business models

But the automation of ticking and bashing does give the audit firms a problem, as it undermines their funding model, in which clients are charged a significant sum for a mass of juniors to carry out grunt work, and partners earn a share of this income to add to the larger fees they charge for their own more limited time.

Shamus thinks the professional services firms will have to abandon their current triangle-shaped organigrams, with a lot of junior people at the bottom, a smaller number of managers in the middle, and a very small number of partners at the top. They will have to adopt a diamond-shaped organigram, because most of the junior jobs will have been automated.

Lawyers and accountants will also have to learn how to sell more than just billable hours. They will have to sell the value of the AI systems which are replacing much of the work previously done by junior humans.

In Shamus’ experience, senior people in professional services do appreciate that GPT technology means their industries are about to experience dramatic change. But there is still a level of denial: people often think that everyone else’s job will change, but not theirs.

July 13, 2023

AI and new styles of learning. With David Giron

The education sector may well be impacted by advanced AI more profoundly than any other. This is partly because of the obvious potential benefit of applying more intelligence to education, and partly because education has resisted so much change in the past.

42 as the meaning of … learning

David Giron is the Director of one of the world’s most innovative educational institutions, 42 Codam College in Amsterdam. He was previously the head of studies at Codam’s parent school 42 in Paris, which was founded in 2013, so he has now spent 10 years putting the school’s radical ideas into practice. He joined the London Futurists Podcast to explain how 42 works, and how the world of education will be impacted by technology in general, and by generative AI in particular.

42 is a software engineering school, in which all learning is completely peer-to-peer. There are no teachers or lecturers. The learning process is hands-on: students don’t talk about programming; they learn by doing it. The recipe has proved successful: the school now has 50 campuses around the world, in 30 countries, with 18,000 students currently enrolled. As you may have already guessed, it is named after the famous joke in Douglas Adams’ “Hitch-hiker’s Guide to the Galaxy” that 42 is the meaning of life.

Placing students at the centre

Giron says the philosophy of 42 is not antagonistic towards more traditional approaches to education, but it sees the student as passive and peripheral in them, whereas it seeks to place the student at the centre of the learning process. Rather than receiving learning, they have to seek it.

Examination and evaluation are modelled on academic peer review: students are selected randomly (within constraints) to review each other’s work.

Mastery learning

The 42 school practises “competency-based learning” or “mastery learning”, which was advocated by the educationalist Sir Ken Robinson in one of the most-watched TED talks ever. This means that students do not proceed from one module to the next until they have demonstrated mastery of the first one.

This is particularly important in maths, and maths-related subjects like software engineering, because failure to understand one module means that your understanding of everything that follows will be shaky at best. Therefore some students at 42 finish the course in six months while others take two years. There is no stigma attached to this: it is not a race.
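The gating logic of mastery learning is simple enough to express in a few lines. A toy sketch (my own illustration, not 42’s software):

```python
# Toy mastery-learning gate: a student only unlocks the next module after
# scoring above the mastery bar, however many attempts that takes.
MASTERY_BAR = 0.9

def run_curriculum(modules, attempt):
    """`attempt(module, tries)` returns a score in [0, 1] for one attempt."""
    for module in modules:
        tries = 0
        while True:
            tries += 1
            if attempt(module, tries) >= MASTERY_BAR:
                break  # mastery demonstrated; unlock the next module
        print(f"{module}: mastered after {tries} attempt(s)")

# A simulated learner who improves with practice:
run_curriculum(["pointers", "recursion", "sockets"],
               attempt=lambda module, tries: min(1.0, 0.5 + 0.15 * tries))
```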

Broader applicability

Giron believes that 42’s approach is applicable to many other subjects – perhaps all subjects, but most of the subjects where it has been tried are technical ones. The same hurdles keep cropping up: equipment and consumables. Software engineering requires no capital investment and no material inputs. This is obviously not true of other branches of engineering, like chemical engineering, or woodworking.

Although Giron notes that 42’s approach has been very successful, and could be applied more widely, he does not claim that it should be adopted universally. Every student is different, and what works for one will not necessarily work for the next. 42 is simply offering one new approach to the educational mix. 42 receives frequent visits from other educationalists who are curious to learn about its approach, but as a previous guest on the London Futurists Podcast commented, education is a bit of a slow learner.

Metrics and failure

The most important measurement of success for Giron is the enthusiasm of employers to hire 42’s graduates, including employers who have already hired some in the past. The second measurement is the satisfaction of the students themselves, and this is tested regularly.

Some elite schools claim that if no students are failed, then the bar is being set too low. Others argue there is no reason why every student should not succeed, at least if they were able to gain admittance in the first place. Giron says he adopts a third approach, which is that students should experience failure, but that this should happen within the school, and it should not mean they have to leave. The experience of failure can inculcate resilience, but it should not be allowed to undermine the fundamental confidence of a student.

Covid

Face-to-face contact between students was seen as an important element of 42’s approach, so Covid was especially challenging. When the lockdowns hit, the school took a month to re-design the learning process to be online-only, but the level of drop-outs soared, and the students who persevered took longer to complete the course. It also turned out that for many students, 42 provided the whole of their social life, and when they were no longer able to see each other at school, some of them had no social contact whatsoever.

Re-adjusting to normal life after Covid was also bumpy, but Giron reports that everything is pretty much back to normal now.

Generative AI

Initial responses to ChatGPT and similar models were often polarised. Some people immediately said that we have entered a new world and everything will change. Others demurred, dismissing the excited talk as hype. Enough time has now passed since the launches of ChatGPT and GPT-4 to make a more balanced judgement. Giron does believe that these models will have enormous impact. For instance, some simple software engineering tasks, such as building static websites, will probably be completely automated. But he says that the adoption of GPT technology will be slower than many people expect, and it will not replace humans in most software engineering roles.

Surprisingly, only 3 or 4 of every 10 students at Codam are using GPTs regularly. Adoption is picking up, but it remains gradual.

An interesting consequence of GPTs is that the sequence of coding is sometimes reversed. Previously, you would write some code, and if you followed best practice you would then write some commentary on it to help future engineers use or debug the code. Now you can write the commentary, and have GPTs write the code based on that. Effectively, this is programming in natural language.
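A small illustration of that reversal: the commentary below is written first and acts as the specification, and the function body is the kind of code a GPT might generate from it (written by hand here, purely for illustration):

```python
# Comment-first development: the specification comes first, the code follows.

# Specification:
# Given a list of order dicts with "customer" and "total" keys, return the
# customers whose combined spend exceeds `threshold`, sorted by spend,
# highest first.
def big_spenders(orders, threshold):
    totals = {}
    for order in orders:
        totals[order["customer"]] = totals.get(order["customer"], 0.0) + order["total"]
    qualifying = [(c, t) for c, t in totals.items() if t > threshold]
    return [c for c, _ in sorted(qualifying, key=lambda pair: pair[1], reverse=True)]

print(big_spenders([{"customer": "A", "total": 60}, {"customer": "A", "total": 50},
                    {"customer": "B", "total": 40}], threshold=100))  # -> ['A']
```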

Agility

It is too soon to know exactly what impacts GPTs and other advanced AIs will have on education, and the impact will be very different depending on the timescale. The change in the next year will be eclipsed by the change in the next five years, and again in the next decade. In a period of exponential technological progress, the most important characteristic to cultivate is agility.

July 7, 2023

AI-developed drug breakthrough. With Alex Zhavoronkov

Healthcare is one of the sectors likely to see the greatest benefits from the application of advanced AI. A number of companies are now using AI to develop drugs faster, cheaper, and with fewer failures along the way. One of the leading members of this group is Insilico Medicine, which has just announced the first AI-developed drug to enter phase 2 clinical trials. Alex Zhavoronkov, co-founder of Insilico Medicine, joined the London Futurists Podcast to explain the significance of this achievement.

Idiopathic Pulmonary Fibrosis

The drug in question is designed to tackle Idiopathic Pulmonary Fibrosis, or IPF. “Fibrosis” means thickening or scarring of tissue, and “pulmonary” refers to the lungs. The walls of the lungs are normally thin and lacy, but IPF makes them stiff and scarred. It is a common disease among the over-60s, and is often fatal.

Insilico is unusual among the community of AI drug development companies in that most of them go after well-known proteins, whereas Insilico has identified a new one. In 2019, Insilico’s AIs identified a number of target proteins which could be causing IPF, by scouring large volumes of data. They whittled the number down to 20, and tested five of them, which resulted in one favoured candidate. They proceeded to use another set of AI models to identify molecules which could disrupt the activity of the target protein. This second step involved the relatively new type of AI that is called generative AI.

GANs and GPTs

The first generative AIs were introduced in 2014 (the same year that Zhavoronkov founded Insilico Medicine), and are known as Generative Adversarial Networks, or GANs. A GAN involves two AI models competing with each other – one creating an image, and the other criticising it – until the output is essentially perfect. The second and better-known class of generative AIs is the transformer, introduced in a 2017 paper by Google researchers called “Attention is all you need.” These are familiar to us all from ChatGPT and GPT-4: GPT stands for Generative Pre-trained Transformer.
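The adversarial contest is easy to see in code. Below is a minimal GAN sketch in PyTorch (a framework choice of mine; the podcast names none), in which a generator learns to mimic a simple one-dimensional distribution while a discriminator learns to catch it out:

```python
# Minimal GAN sketch: generator vs discriminator on 1-D data. Architecture
# and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    fake = G(torch.randn(64, 8))            # the generator's attempt

    # Train the discriminator to label real as 1 and fake as 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the updated discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts towards 3.0 as G learns
```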

To identify a molecule which can disrupt the target protein, Insilico gives the crystal structure of the protein to as many as 500 different generative AI models, and instructs them to design molecules which will bind with the protein productively. Over a few days, these models compete to find the best molecule for the job. Human chemists in around 40 Contract Research Organisations (CROs), mostly in China and India, review the most promising 100 or so of the resulting molecules, and around 15-20 of them are synthesised and tested. The characteristics of the best-performing molecules are fed back into the array of generative AI systems for further review. This was all done in 2019.
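The structure of that loop – generate candidates, test the most promising, feed the results back – can be sketched in a few lines. A toy version (not Insilico’s system; the “binding score” stands in for lab testing):

```python
# Toy generate-test-feedback loop for molecule design. Candidates are plain
# numbers and the score function is invented; only the loop structure matters.
import random

def generate(seed_pool, n=100):
    """Propose candidates by perturbing traits of previous winners."""
    return [s + random.gauss(0, 0.3) for s in random.choices(seed_pool, k=n)]

def binding_score(x):
    """Stand-in for synthesis and testing: higher is better, peak at x = 2."""
    return -(x - 2.0) ** 2

pool = [random.uniform(-5, 5) for _ in range(100)]  # initial diverse candidates
for round_number in range(5):
    candidates = generate(pool)
    # "Synthesise and test" only the most promising 20, as the CROs do.
    pool = sorted(candidates, key=binding_score, reverse=True)[:20]
    print(f"round {round_number}: best score {binding_score(pool[0]):.3f}")
```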

Clinical trials

The resulting molecules were tested for both efficacy and safety in mice and other animals, including dogs. By 2021 the company was ready for phase zero of the clinical trial process, a preliminary test for safety in humans, conducted on eight healthy volunteers in Australia. This was followed by a phase one clinical trial, a larger test for safety in humans. This was carried out on healthy volunteers in New Zealand and China, and had to be particularly thorough because IPF is a chronic condition rather than an acute one, so people will be taking a drug for it for years rather than weeks or months.

Now, Insilico is able to proceed to the phase two study, dosing patients with IPF in China and the USA. Part of the challenge at this point is to find a large number of patients with good life expectancy, and the company is still recruiting.

Savings and consolidation

Overall, Zhavoronkov thinks that Insilico has shaved a couple of years off the six-year discovery and development process. More importantly, 99% of candidate molecules fail, so the biggest improvement offered by AI drug discovery and development lies in reducing this failure rate.

A couple of years ago, the community of companies applying AI to drug development consisted of 200 or so organisations. Biotech was a hot sector during Covid, with lots of money chasing a relatively small number of genuine opportunities. Some of that heat has dissipated, and investors have got better at understanding where the real opportunities lie, so a process of consolidation is under way in the industry. Zhavoronkov thinks that perhaps only a handful will survive, including companies like Schrödinger Inc., which has been selling software since the 1990s, and has moved into drug discovery.

New technologies, new opportunities

For the companies that survive this consolidation process, the opportunities are legion. For instance, Zhavoronkov is bullish about the prospects for quantum computing, and thinks it will make significant impacts within five years, and possibly within two. Insilico is using 50-qubit machines from IBM, a company he commends for having learned a lesson about not over-hyping technology from its unfortunate experience with Watson, the AI suite of products which fell far short of expectations. Microsoft and Google also have ambitious plans for the technology. Generative AI for drug development might turn out to be one of the first really valuable use cases for quantum computing.

The arrival of GPTs has made Zhavoronkov a little more optimistic that his underlying goal of curing aging could be achieved in his lifetime. Not through AI-led drug discovery, which is still slow and expensive, even if faster and cheaper than the traditional approach. Instead, GPTs and other advanced AIs hold out the promise of understanding human biology far better than we do today. Pharmaceuticals alone probably won’t cure aging any time soon, but if people in their middle years today stay healthy, they may enjoy very long lives, thanks to the technologies being developed today.
June 22, 2023

The Four Cs: when AIs outsmart humans

Startling progress

On 14 March, OpenAI launched GPT-4. People who follow AI closely were stunned by its capabilities. A week later, the US-based Future of Life Institute published an open letter urging the people who run the labs creating Large Language Models (LLMs) to declare a six-month moratorium, so that the world could make sure this increasingly powerful technology is safe. The people running those labs – notably Sam Altman of OpenAI and Demis Hassabis of Google DeepMind – have called for government regulation of their industry, but they are not declaring a moratorium.

What’s all the fuss about? Is advanced AI really so dangerous? In a word, yes. We can’t predict the future, so we can’t be sure what future AIs will and will not be able to do. But we do know that their capability depends to a large degree on the amount of computational power available to them, and the quantity (and to a lesser extent the quality) of the data they are trained on. The amount of computational horsepower that $1,000 buys has been growing exponentially for decades, and despite what some people say, it is likely to continue doing so for years to come. We might be reaching the limits of data available, since the latest LLMs have been trained on most of the data on the internet, but that has also doubled every couple of years recently, and will probably continue to do so.

Exponential

So AIs are going to carry on getting more powerful at an exponential rate. It is important to understand what that word “exponential” means. Imagine that you are in a football stadium (either soccer or American football will do) which has been sealed to make it water-proof. The referee places a single drop of water in the middle of the pitch. One minute later he places two drops there. Another minute later, four drops, and so on. How long do you think it would take to fill the stadium with water? The answer is 49 minutes. But what is really surprising – and disturbing – is that after 45 minutes, the stadium is just 7% full. The people in the back seats are looking down and pointing out to each other that something curious is happening. Four minutes later they have drowned. That is the power of exponential growth.
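The arithmetic is easy to check:

```python
# Stadium arithmetic: drops double every minute, starting with one drop at
# minute zero, and the stadium is full at minute 49.
total_when_full = 2 ** 50 - 1   # cumulative drops after minute 49
at_minute_45 = 2 ** 46 - 1      # cumulative drops after minute 45
print(f"{at_minute_45 / total_when_full:.1%}")  # about 6%, the "just 7%" above
```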

Powerful AIs generate two kinds of risk, and we should be thinking carefully about both of them. The first type is sometimes called AI Ethics, but a better term is Responsible AI. This covers concerns such as privacy, bias, mass personalised hacking attacks, deep fakes, and lack of explainability. These are problems today, and will become more so as the machines get smarter. But they are not the things we worry about when we talk about machines getting smarter than us.

AI Safety

That is the realm of the other type of risk, which is known as AI Safety, or AI Alignment. The really big risk, the one that keeps Elon Musk and many other people up at night, is that AIs will become superintelligent – so smart that we become the second smartest species on the planet. That is a position currently held by chimpanzees. There are fewer than half a million of them, and there are eight billion of us. Their survival depends entirely on decisions that we humans make: they have no say in the matter. Fortunately for them, they don’t know that.

We don’t know whether a superintelligence will automatically be conscious, but it doesn’t have to be conscious in order to jeopardise our very existence. It simply needs to have goals which are inconvenient to us. It could generate these goals itself, or we could give it an unfortunate goal by mistake – as King Midas did when he asked his local deity to make everything he touched turn to gold, and discovered that this killed his family and rendered his food inedible.

We’ll come back to superintelligence, because first we should consider a development which is probably not an existential risk to our species, but which could cause appalling disruption if we are unlucky. That is technological unemployment.

Automation

People often say that automation has been happening for centuries and it has not caused lasting unemployment. This is correct – otherwise we would all be unemployed today. When a process or a job is automated it becomes more efficient, and this creates wealth, which creates demand, which creates jobs.

Actually it is only correct for humans. In 1915 there were 21.5 million horses working in America, mostly pulling vehicles around. Today the horse population of America is 2 million. If you’ll pardon the pun, that is unbridled technological unemployment.

To say that automation cannot cause lasting widespread unemployment is to say that the past is an entirely reliable guide to the future. Which is silly. If that were true we would not be able to fly. The question is: will machines ever be able to do pretty much everything that humans can do for money? The answer is yes, unless AI stops developing for some reason. Automation will presumably continue to create new demand and new jobs, but it will be machines that carry out those new jobs, not humans. What we don’t know is when this will happen, and how suddenly.

If we are smart, and perhaps a bit lucky, we can turn all this to our advantage, and I explore how in my book The Economic Singularity. We can have a world in which humans do whatever we want to do, and we can have a second renaissance, with all humans living like wealthy retirees, or aristocrats.

What about superintelligence? Can we find a solution to that risk? I see four scenarios – the four Cs.

Cease

We stop developing AIs. Everyone in the world complies with a permanent moratorium. It might be possible to impose such a moratorium in the West and in China (which has already told its tech giants to slow down), and currently, almost all advanced AI is developed in these places. But as computational power and data continue to grow, it will become possible for rogue states like Russia and North Korea, for Mafia organisations, and for errant billionaires to create advanced AIs. The idea that all humans could refrain forever from developing advanced AI is implausible.

Control

We figure out how to control entities that are much smarter than us, and getting smarter all the time. We constrain their behaviour forever. There are very smart people working on this project, and I wish them well, but it seems implausible to me. How could an ant control a human?

Catastrophe

A superintelligence either dislikes us, fears us, or is indifferent to us, and decides to re-arrange some basic facts about the world – for instance the presence of oxygen in the atmosphere – in a way that is disastrous for us. Some people think this is almost inevitable, but I disagree. We simply don’t know what a superintelligence will think and want. But it is a genuine risk, and it is irresponsible to deny it.

Consent

The fourth scenario is that a superintelligence (or superintelligences) emerges and decides that we are harmless, and also of some value. If we are fortunate, it decides to help us to flourish even more than we were already doing. Personally, I think our most hopeful future is to merge with the superintelligence – probably by uploading our minds into its substrate – and take a giant leap in our own evolution. We would no longer be human, but we would live as long as we want to, with unimaginable powers of understanding and enjoyment. Which doesn’t sound too bad to me.

GPT-4 and education. With Donald Clark

Aristotle for everyone

The launch of GPT-4 in March has provoked concerns and searching questions, and nowhere more so than in the education sector. Last month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model.

Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could have a personal tutor as effective as Aristotle, who was Alexander the Great’s personal tutor. When that happens, students should leave school and university far, far better educated than we were.

Donald Clark joined the London Futurists Podcast to discuss these developments. Clark founded Epic Group in 1983, and made it the UK’s largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in and Board member of several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.

Education is a slow learner

In that book, Clark joked that “education is a bit of a slow learner.” Education has long seemed relatively immune to the charms of technology, but GPT-4 and similar models look set to change that. Previously, most education technology projects were a bit like mosquitoes: lots of buzz, tricky to spot, and short-lived. Advanced AI will make past improvements in pedagogy look like rounding errors. GPT-4 has been trained on a corpus of text not far short of the sum total of all human learning. No human teacher gets anywhere near its breadth of knowledge.

A universal tutor based on GPT technology would not only deploy all this knowledge. It would also lack the fads and misconceptions which bedevil teaching at all levels, such as the long-debunked but still-prevalent theories of “learning styles”, including the idea that different people learn better aurally, visually, or kinaesthetically.

Personalised

AI tutors will be personalised – tailored to the needs of individual learners. We all take a different amount of time to master a subject, and grouping 30 students in a class makes it hard to cater for this. Giving everyone a personal AI tutor means that we can all master each part of a subject before we move on to the next part that builds on it. Many wealthy parents know this very well, and hire teachers to tutor their children in one-to-one sessions after school.

Khan Academy, an online teaching service set up in 2006, has been pioneering new pedagogical techniques since the outset, and it is now using GPT-4 to offer something approaching this personalised AI tutor. It acts as a Socratic teacher, asking questions rather than giving answers, and it practises mastery learning, or competency-based learning, urging you to stick with a unit until you have internalised it properly. Maths is perhaps the most important subject to do this in.
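Khan Academy’s own implementation is proprietary, but the Socratic behaviour described above can be approximated with little more than a system prompt. The sketch below assumes the publicly documented OpenAI Python client (version 1.0 or later) and an API key in the OPENAI_API_KEY environment variable; the prompt wording is my own illustration, not Khan Academy’s.

```python
# A minimal sketch of a Socratic tutor on top of a GPT model: the system
# prompt forbids direct answers and enforces one guiding question at a time.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a Socratic maths tutor. Never state the answer directly. "
    "Ask one short guiding question at a time, building on the student's "
    "last reply, until the student reaches the answer themselves."
)

def tutor_reply(conversation: list[dict]) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SOCRATIC_PROMPT}] + conversation,
    )
    return response.choices[0].message.content

history = [{"role": "user", "content": "Why does 3/4 equal 0.75?"}]
print(tutor_reply(history))  # e.g. "What do you get if you divide 3 by 4?"
```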

Duolingo is another company rushing to deploy services based on GPT-4 for its customer base of 100 million-plus users. Having used both services myself, I have higher expectations for Khan Academy, although Duolingo does produce a pretty good podcast for Spanish language learners.

Raising standards in healthcare and education

Bill Gates wrote recently that the biggest impacts of advanced AI would be felt in education and healthcare. Clark reports that, globally, the average general medical practitioner misdiagnoses just under 5% of the cases presented to them. AI will enable doctors to reduce this number considerably, to the point where failing at least to consult an AI when making a diagnosis would come to be seen as medical malpractice. There is no corresponding metric in education, but it is certainly true that human teachers vary, they are fallible, and they often fall behind the current thinking both in their subject and in pedagogical techniques. AI will help to raise standards enormously.

A disproportionate amount of the money spent on education is directed to the children of the wealthy, although increasingly, in most countries, families and students are contributing to that spending themselves. Much less is spent on the less privileged half of the population. It is ironic, then, that GPT technology seems to threaten the incomes of white-collar workers much more than blue-collar workers. But in any case, Clark hopes that AI could enable us to re-balance the emphasis, and cultivate more and better learning in the less wealthy parts of our communities.

Does the personalised tutor require AGI?

To turn today’s GPTs into fully-fledged Aristotelian personal tutors, we need to improve them in three ways. First, they need long-term memories, in addition to the modest short-term memories (context buffers) which they have today. Second, they need a world view, which can also be called common sense. Third, they need the ability to test and assess the provenance of their knowledge, which would eliminate the hallucinations they are currently prone to.
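Of the three, long-term memory has the clearest engineering workaround today: store past interactions, and retrieve the most relevant ones into the model’s limited context window before each new exchange. Below is a minimal, self-contained sketch of the idea in Python. It is illustrative only: the `MemoryStore` class and its word-overlap similarity are my own stand-ins, where production systems would use learned embeddings and a vector database.

```python
# A toy sketch of retrieval-based "long-term memory" for an AI tutor.
# Past observations about a student are stored; before each new question,
# the most relevant ones are retrieved and prepended to the prompt, working
# around the model's limited short-term context buffer.

def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two strings (a crude
    stand-in for the embedding similarity real systems would use)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class MemoryStore:
    def __init__(self) -> None:
        self.memories: list[str] = []

    def add(self, text: str) -> None:
        self.memories.append(text)

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Return the k stored memories most similar to the query."""
        ranked = sorted(self.memories, key=lambda m: similarity(query, m),
                        reverse=True)
        return ranked[:k]

store = MemoryStore()
store.add("Struggled with adding fractions with unlike denominators.")
store.add("Responds well to football examples.")
store.add("Has mastered multiplication tables.")

query = "How should I explain adding fractions today?"
print(store.recall(query))  # memories to prepend to the tutor's prompt
```

The other two improvements, a genuine world view and the ability to verify provenance, have no comparably simple workaround.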

These are not trivial additions, and indeed would take a GPT a long way towards AGI, artificial general intelligence, a machine with all the cognitive capabilities of an adult human. GPT-4 already has elements of a theory of physics, but few people would claim that it fully understands its place in the world, or that of the humans it interacts with.

This raises the question of whether the fully-fledged Aristotelian personalised tutor is possible before we reach AGI. And when we do reach AGI, we will already be entering the era of superintelligence, which will raise a whole slew of much bigger questions.


June 7, 2023

GPT-4 and the EU’s AI Act. With John Higgins

The EU AI Act

The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems, taking a risk-based approach.

John Higgins joined the London Futurists Podcast to discuss the AI Act. He is the Chair of Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the UK’s IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Famously, the EU contains no tech giants, so cutting edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world’s most pro-active regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.

Two Big Bangs in AI

When a new technology or service is launched it is hard to know the eventual scale of its impact, but like most observers, Higgins thinks that large language models will turn out to have a deep and broad impact.

The AI systems generating so much noise at the moment use a type of AI called transformers. These were introduced in 2017 by a Google paper called “Attention is all you need”, its title riffing on the 1967 Beatles song “All you need is love”. The launch of transformer AIs was arguably the second Big Bang in AI.

The first Big Bang in AI came five years earlier, in 2012, when Geoff Hinton and colleagues demonstrated the power of deep learning, thereby reviving one of the oldest forms of AI, known as neural networks. These systems discriminate, in the sense that they classify things into appropriate categories. They analyse.

The second Big Bang in AI, the arrival of transformers, is giving us generative AI: systems which produce text, images, and music. Their abilities are remarkable, and they will have enormous impacts.
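For readers who want to see what a transformer actually computes, the core of the 2017 paper is a single operation, scaled dot-product attention: every position in a sequence builds its output as a weighted mixture of all positions’ values, with the weights derived from query-key similarity. Here is a minimal numpy sketch, with toy matrix sizes chosen purely for illustration.

```python
# Scaled dot-product attention, the operation at the heart of transformers:
# attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over positions
    return weights @ V                             # weighted mix of values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                            # toy sizes
Q = rng.standard_normal((seq_len, d_model))
K = rng.standard_normal((seq_len, d_model))
V = rng.standard_normal((seq_len, d_model))
print(attention(Q, K, V).shape)                    # (4, 8)
```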

Anglo-Saxon vs Franco-German regulation

Higgins describes the Franco-German approach to regulation as seeking to enshrine a framework of rules before a new technology, product, or service can be deployed. (He acknowledges this is an over-simplification.) The Anglo-American approach, by contrast, is to allow the service to be launched, and fix any market failures once they have become apparent.

Traditionally, the Franco-German approach has prevailed in the EU, and especially so since Brexit. In addition, there is the precautionary principle, which says that if there is a possibility of harm being caused, then measures should be in place to mitigate that harm at the outset.

This is not the same as saying that a technology cannot be deployed until possible harms have been identified and eliminated. The EU has not collectively signed up to the Future of Life Institute (FLI) letter calling for a six-month moratorium on the development of large language models, and there is no general appetite to follow the Italian government’s short-lived ban on ChatGPT. The idea is to get a regulatory framework in place at the outset rather than to delay deployment.

Risk-based regulation

Regulators often face a choice between making rules about a technology and making rules about its applications. The preference in the EU is generally to address the applications, and in particular any applications which may affect safety and human rights. Higgins thinks this generally produces regulations which are not onerous, because they mostly oblige developers to take precautions that responsible ones would take anyway. For instance, they should ask themselves whether there is a likelihood of bias lurking in the data, and whether it is possible to be transparent about the algorithms.

When a large language model has been trained on a large percentage of all the data on the internet, it is obviously hard to avoid bias. So the developers of these models should be as open as possible about how they were trained. Organisations deploying AI need to consider the practices of all the other organisations in their supply chain, and what steps each of them has taken to mitigate possible harms. The level of scrutiny and precaution should vary according to the degree of possible harm: a healthcare provider should generally exercise greater diligence than a games developer, for instance.

Another over-simplification of the AI Act is that it instructs developers to take appropriate steps (undefined) to ensure that no harms (undefined) are caused which rise above a particular threshold (also undefined). Put like this, it may seem unfair, but what alternatives are there? It is not possible to define all the possible harms in advance, nor all the possible steps to mitigate them. It is unacceptable to most people to allow developers to let rip and deploy whatever systems they like without accountability, on the grounds (as argued recently by Eric Schmidt, former chair of Google) that regulators are completely out of their depth and will generally cause more problems than they prevent, so the tech companies must be left to govern themselves. It is also unacceptable to ban all new systems from being launched.

The EU process

The first step in the creation of EU legislation is that the Council of Ministers, which comprises ministers from the governments of the member states, asks the EU Commission to draft some legislation. The Commission is the EU’s civil service. When that draft is ready, it is reviewed by a committee of the Council of Ministers, which then passes its revised version to the EU Parliament, the assembly of MEPs elected by EU citizens. MEPs review the draft in various committees, and the result of that process is brought to the full Parliament for a vote. In the case of the AI Act, that vote is expected during June.

Finally, the three institutions (Commission, Council and Parliament) engage in negotiations called “trilogue”. The current best guess is that an agreed text of the AI Act will be ready by the end of the year.

The quality of legislation and regulation produced by this process is controversial. Some people think the 2016 General Data Protection Regulation (GDPR) is eminently sensible, while others – including Higgins – think it is a sledgehammer to crack a nut.

The FLI open letter

Two weeks after OpenAI launched GPT-4, the Massachusetts-based FLI published its open letter calling for a six-month moratorium on the development of large language models. The BCS, of which Higgins was president, published a response arguing against it. He argues that it is a bad idea to call for actions which are impossible, and that although good actors might observe a moratorium, bad actors would not. The effect would be to deny ourselves the enormous benefits these models can provide, without avoiding the risks that their further development may cause.

EU tech giants

Higgins argues that the prize for the EU in using large language models and other advanced AI systems is to create improved public services, and also to enhance the productivity of its private sector companies. Europe is home to many well-run companies producing extremely high-quality products and services – think of the German car industry and the legion of small manufacturing companies in northern Italy. He thinks this is more important than creating a European Google.

Europe’s single market is a work in progress. It works well for toilet rolls and tins of beans, and as prosaic as that sounds, it is an enormously beneficial system, created by the hard work of people from all over Europe – not least the UK, where Margaret Thatcher was one of its earliest and strongest proponents, before she turned Eurosceptic. The lies and exaggerations about the EU that were spread for many years by the Murdoch press, Boris Johnson and others have concealed from many Britons what an impressive and important achievement it is. As the Joni Mitchell song says, you don’t know what you’ve got till it’s gone, and the British are quickly realising how much they have lost by leaving. Unfortunately the malign hands of Murdoch, the Mail and the Telegraph continue to impede the obviously sensible step of re-joining the single market and the customs union, if not the EU itself.

Impressive as it is, the EU single market is far from complete, and Higgins believes that indigenous tech companies are more hindered by this than many other types of company. It is much easier to start and grow a tech giant in the genuinely single markets of the US and China than it is in Europe. Higgins argues that we cannot fix this, at least in the short term, so Europe should focus on the areas where it has great strengths. But it remains something of a mystery that the EU contains global champions in pharmaceuticals, energy, luxury goods, and financial services, to name a few, but seems unable to build any in technology.

 
