Calum Chace's Blog, page 2

February 23, 2024

Government regulation of AI is like pressing a big red danger button

Imagine that you and I are in my laboratory, and I show you a Big Red Button. I tell you that if I press this button, then you and all your family and friends – in fact the whole human race – will live very long lives of great prosperity, and in great health. Furthermore, the environment will improve, and inequality will reduce both in your country and around the world.

Of course, I add, there is a catch. If I press this button, there is also a chance that the whole human race will go extinct. I cannot tell you the probability of this happening, but I estimate it somewhere between 2% and 25% within five to ten years.

In this imaginary situation, would you want me to go ahead and press the button, or would you urge me not to?

I have posed this question several times while giving keynote talks around the world, and the result is always the same. A few brave souls raise their hands to say yes. The majority of the audience laughs nervously, and gradually raises their hands to say no. And a surprising number of people seem to have no opinion either way. My guess is that this third group don’t think the question is serious.

It is serious. If we continue to develop advanced AI at anything like the current rate, then within years or decades, someone will develop the world’s first superintelligence – by which I mean a machine which exceeds human capability in all cognitive tasks. The intelligence of machines can be improved and ours cannot, so it will go on, probably quite quickly, to become much, much more intelligent than us.

Some people think that the arrival of superintelligence on this planet inevitably means that we will quickly go extinct. I don’t agree with this, but extinction is a possible outcome that I think we should take seriously.

So why is there no great outcry about AI? Why are there no massive street protests and letters to MPs and newspapers, demanding the immediate regulation of advanced AI, and indeed a halt to its development? The idea of a halt was proposed forcefully back in March by the Future of Life Institute, a reputable think tank in Massachusetts. It garnered a lot of signatures from people who understand AI, and it generated a lot of media attention. But it didn’t capture the public imagination. Why?

I think the answer is that most people are extremely confused about AI. They have a vague sense that they don’t like where it is heading, but they aren’t sure if they should take it seriously, or dismiss it as science fiction.

This is entirely understandable. The science of AI got started in 1956 at a conference at Dartmouth College in New Hampshire, but until 2012 it made very little impact on the world. You couldn’t see it or smell it, and crucially, it didn’t make any money. Even after the Big Bang of 2012, when deep learning came to prominence, advanced AI was pretty much the preserve of Big Tech – a few companies in the US and China.

That changed a year ago, with the launch of ChatGPT, and even more so in March, with the launch of GPT-4. Finally, ordinary people could get their hands on an advanced AI model and play with it. They could get a sense of its astonishing capabilities. And yet there is still no widespread demand for the regulation of advanced AI. No major political party in the world has among its top three priorities the regulation of advanced AI to ensure that superintelligence does not harm us.

To be sure, there are calls for AI to be regulated by governments, and indeed regulation is on its way in the US, China, and the EU, and most other economic areas too. But these moves are not driven by a bottom-up, voter-led groundswell. Ironically, they are driven at least in part by Big Tech itself. Sam Altman of OpenAI, Demis Hassabis of DeepMind, and many other people leading the companies developing advanced AI are more convinced than anyone that superintelligence is coming, and that it could be disastrous as well as glorious.

AI is a complicated subject, and it doesn’t help that opinions vary so widely within the community of people who work on it, or who follow it closely and comment on it. Some people (e.g., Yann LeCun and Andrew Ng) think superintelligence is coming, but not for many decades, while others (Elon Musk and Sam Altman, for instance) think it is just a few years away. A third group holds the bizarre view that superintelligence is a pure bogeyman that was invented by Big Tech in order to distract attention away from the shorter-term harms that they are allegedly causing with AI, by eroding privacy, enshrining bias, poisoning public debate, driving up anxiety levels and so on.

There is also no consensus within the AI community about the likely impact of superintelligence if and when it does arrive. Some think it is certain to usher in some kind of paradise (Peter Diamandis, Ray Kurzweil), while others think it entails inevitable doom (Eliezer Yudkowsky, Connor Leahy). Still others think we can figure out how to tame it ahead of time, and constrain its behaviour forever (Max Tegmark, and Yann LeCun again).

Technology evolves because inventors and innovators build one improvement on top of another. This means it evolves within fairly narrow constraints. It is not deterministic, and there is no law of physics which says it will always continue. But our ability to guide it is limited.

Where we have more freedom of action is in adjusting human institutions to moderate the impact of technology as it evolves. This includes government regulation. Advanced AI already affects all of us, whether we are aware of it or not. It will affect all of us much more in the years ahead. We need institutions that can cope with the impact of AI, and this means that we need our political leaders and policy framers to understand AI. This in turn requires all of us to understand what AI is, what it can do, and the discussion about where it is going.

Increasingly, acquiring and maintaining a rudimentary understanding of AI is a fundamental civic duty.

Published on February 23, 2024 08:47

The Bletchley Park summit on AI safety deserves two and a half cheers

The taboo is broken. The possibility that AI is an existential risk has now been voiced in public by many of the world’s political leaders. Although the question has been discussed in Silicon Valley and other futurist boltholes for decades, no country’s leader had broached it before last month. That is the lasting legacy of the Bletchley Park summit on AI Safety, and it is an important one.

It might not be the most important legacy for the man who made the summit happen. According to members of the opposition Labour Party, Britain’s Prime Minister Rishi Sunak was using the event to look for his next job. Faced with chaos in the Tory party, and a potentially damaging enquiry into his role in the management of Covid, he appears to be heading towards catastrophic defeat in the forthcoming general election. The lifestyle of another former British political leader, Nick Clegg, who gets paid a reported $30 million a year by Facebook to be Mark Zuckerberg’s (not terribly effective) flak catcher, must look attractive to Mr Sunak. His on-stage discussion with Elon Musk after the summit was described by several of the attending journalists as an embarrassingly fawning job application.

Cynics point to the fact that the summit was attended by very few heads of state. President Biden sent his deputy, Vice President Kamala Harris, and Chancellor Scholz of Germany and President Macron of France were notable for their absence. The announcement of a UK AI safety institute was upstaged by the announcement the day before the summit that the US would do the same. There is room in the world for more than one safety institute, but given that most of the world’s most advanced AI models are developed by US-owned companies, and the rest by Chinese ones, it is obvious which of these two institutes will be the more significant. The EU has the market power, thanks to its 450 million relatively wealthy consumers, to enforce regulations on big tech, even though it is home to none of them (unless you count Spotify). The UK does not. In AI as in other industries, the rules of the road will be determined where the roads and the cars are made.

Nevertheless, the Bletchley Park summit has got the world’s leaders talking seriously – for the first time – about the longer-term risks from AI, as well as about its staggering potential upsides. It took political courage to keep the longer-term aspects on the agenda when many pressure groups proclaim that the shorter-term risks, such as privacy, bias, mass disinformation and industrial-scale personalised hacking, are far more important. These risks are certainly important, but the idea that ensuring a future superintelligence is safe is a trivial or worthless endeavour is complacent and absurd. Even more risible is the claim, made seriously by some, that “tech bros” promulgate the idea of existential risk to deflect attention from the short-term harms they are causing, or planning to cause.

Another brave decision that the UK government made and stuck to was to invite China to the summit. China hawks like former PM Liz Truss railed against an invitation being extended to a country that spies against Britain. It is surprising that Ms Truss’ opinions continue to receive attention after her short-lived and disastrous tenure. Also, does anybody seriously think that the UK doesn’t spy on China in return? But in any case, with China being one of the only two countries that really matter in the global AI industry, excluding them would have been a mistake.

Whatever its shortcomings, the Bletchley Park summit has got the show on the road. Matt Clifford, the tech entrepreneur seconded to convene the event, deserves considerable praise. The world’s political leaders have spoken publicly about existential risk, and there is no going back. Another summit will be held in six months’ time, in South Korea, and a year from now the French will pick up the baton. This process may turn out to be by far the most positive part of Sunak’s legacy.

The declaration signed at the end of the summit by representatives of 28 governments does not actually use the word “existential”, but it tiptoes right up to the edge: “Substantial risks may arise from … issues of control relating to alignment with human intent. These issues are in part because those capabilities are not fully understood and are therefore hard to predict… There is potential for serious, even catastrophic, harm”.

The declaration is vague about what measures should be taken to ensure that advanced AI is safe for humanity, but it would be wholly unreasonable to expect a single event to raise such a fundamental question for the first time and also answer it definitively. Figuring out how to navigate the path towards superintelligence is the most important challenge humanity faces this century – and perhaps ever. Getting it right will take many more summits, and innumerable discussions in all parts of society.

The UN has just announced a high-level advisory body on AI, and the G7 has published a voluntary code of conduct for AI developers. There are calls for the establishment of organisations to do for AI what the IPCC does for global warming, and what CERN does for nuclear research. The debate about whether regulation will help or hinder the development of beneficial AI will rage for years. In a field as complex, fast-moving, and capital-hungry as advanced AI, it will inevitably be challenging for regulators to keep up with, let alone stay ahead of, the organisations that develop the technology. There is a genuine danger of regulatory capture, in which regulators end up imposing rules which entrench big tech’s first-mover advantages.

But it is simply unacceptable to say that regulation is hard, and therefore the industry should go unregulated. We elect politicians to make decisions on our behalf, and they establish and direct regulators to make sure that powerful organisations play nicely. The AI industry and its cheerleaders cannot tell regulators and politicians (and by extension the rest of us) that our most powerful technology is something we are not smart enough to understand, and we should therefore leave the industry to do whatever it fancies.

It has been apparent for some years that AI was improving remarkably fast, and that the future foretold by science fiction was hurtling towards us, but until recently, most of us were not paying serious attention. I used to think that the arrival of self-driving cars would be the alarm clock that would wake people from their slumber; instead it was ChatGPT and GPT-4. The Bletchley Park summit has disabled the snooze button.

Published on February 23, 2024 08:44

Arabian moonshots may hold huge implications for the whole world

After Silicon Valley, the United Arab Emirates (UAE) may be the most future-oriented and optimistic place on the planet. Futurism and techno-optimism are natural mindsets in a country which has pretty much invented itself from scratch in two generations. During this period its people have progressed from a mediaeval lifestyle to being 21st century metropolitans. So it is unsurprising that the UAE has been quick to spot the enormous future significance of artificial intelligence to all of us, and to pioneer its deployment.

It is not just the UAE. The leaders of all six members of the Gulf Cooperation Council (GCC – Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, as well as the UAE) see AI as an important component of their mission to transition their economies away from reliance on fossil fuels, and improve living standards for their people. They are looking to AI to help them develop alternative energy sources, create smart cities, improve government services, and build world-class industries in fintech, healthcare, and tourism.

Especially today, the oil-rich members of the GCC have both the financial resources and the ambition to become major players in the development and deployment of AI. Their revenues are boosted by Putin’s war in Ukraine, which has driven up the price of oil. Their ambitions are encouraged by the shocking capabilities of the generative AI systems which started grabbing headlines a year ago.

The Gulf states have an additional advantage over most other jurisdictions: their large expatriate workforces mean that they can automate fearlessly. To put it bluntly, if machines do end up taking more human jobs than they create, they can send their excess workers back home. This may help explain why the ordinary people of the GCC seem less afraid of AI than the populations of many other countries.

Back in 2017, the UAE appointed the world’s first Minister of State for Artificial Intelligence. This year, Abu Dhabi’s Technology Innovation Institute (TII) released Falcon, a Large Language Model (LLM), or generative AI model, which many think is the most powerful open-source LLM in the world. Saudi Arabia wants to leapfrog the UAE in AI development and deployment, and both nations are buying as many advanced computer chips as they can get their hands on.

AI hubs are springing up all over the region. The UAE has an AI university, and Saudi and Qatar are not far behind. The Saudi Data and Artificial Intelligence Authority (SDAIA) was established to help realise the Kingdom’s ambitious Vision 2030 initiative, spearheading rapid advancements in artificial intelligence. IDC, a market research firm, forecasts that GCC spend on AI will exceed $6bn a year by 2026, which represents faster growth than anywhere else in the world. Some insiders suspect this may turn out to be a gross under-estimate. PwC, the audit and consulting firm, predicts that AI will add $320bn to the GCC economies by 2030.

Despite being generally optimistic about the future, the Gulf states are also profoundly socially conservative. People outside the region often focus on this aspect, with accusations of repressive attitudes towards women, the use of the death penalty, laws against being gay, arbitrary arrests and detentions – and sometimes murder – of both nationals and foreigners.

Reform is under way in much of the region, and progressing faster than most outsiders realise. This is especially true in Saudi Arabia, where women can now drive, and are increasingly strongly represented in the workforce. Listening to pop music was banned a few years ago; now young people congregate freely at music festivals. Inbound tourism used to be virtually impossible; now it takes just a few seconds to obtain a tourist visa online.

Critics are quick to point out that there is a long way to go, and the region’s rulers will often agree. In a recent interview, Mohamed bin Salman, the Crown Prince and de facto ruler of Saudi Arabia, said that he does not like the law under which a Saudi man has been condemned to death for posting criticism of the government on Twitter. He argued that it would be unlawful for him to intervene, but he hopes that a different judge will accept the man’s appeal.

Alongside their reform programmes, the region’s rulers are practising cultural diplomacy. Abu Dhabi spent around $1.4bn establishing a branch of the Louvre, and this figure is dwarfed by the sums spent by the region’s rulers on football and other sports at home and abroad. Lavish technology summits seem to take place in the region every month – sometimes weekly.

The Gulf’s rulers and people are justly proud of much of their heritage and their culture. They bridle at criticisms from countries which have yet to apologise for committing some of the worst crimes in human history – crimes which are not part of some long-distant past, but which were committed against people still alive today. GCC rulers are increasingly willing to wield their financial clout to emerge from the geopolitical shadows and assert their independence. New alliances are being forged, and traditional dependencies are being tested.

Whether it is fair or not, as the world increasingly understands the enormous importance of AI in our future, many people will be disturbed if some of the world’s most powerful AI systems are developed by the Gulf states. The region’s leaders may be tempted to dismiss these concerns as racist froth, but if they want to join the ranks of the leading AI countries, they will have to compete for top AI talent. This talent is highly mobile, and cannot always simply be bought. It needs to be seduced.

There is a tremendous opportunity lurking in this situation. The Gulf countries can most easily attract talented AI professionals by offering them the opportunity to solve truly significant, global problems. There is no shortage of major challenges to address. Advanced AI presents many risks, but it can also help us to solve the climate challenge. It can help us improve healthspan and lifespan. It will make all industries more efficient, raising living standards for everyone. It can automate the drudgery out of our everyday lives, and raise standards of education to levels previously undreamed of.

The rulers of the Gulf should use their wealth and ambition to launch a series of moonshots, seeking solutions to some of our most pressing problems, and placing themselves at the forefront of AI development in the process.

Published on February 23, 2024 08:38

October 12, 2023

The legal singularity. With Ben Alarie

The law is a promising area for AI

The legal profession is rarely accused of being at the cutting edge of technological development. Lawyers may not still use quill pens, but they’re not exactly famous for their IT skills. Nevertheless, it has a number of characteristics which make it eminently suited to the deployment of advanced AI systems. Lawyers are deluged by data, and commercial law cases can be highly lucrative.

One man who knows more about this than most is Benjamin Alarie, a Professor at the University of Toronto Faculty of Law, and a successful entrepreneur. In 2015, he co-founded Blue J, a Toronto-based company which uses machine learning to analyse large amounts of data to predict a court’s likely verdict in legal cases. It is used by Canada’s Department of Justice and the Canada Revenue Agency.

Alarie has just published “The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better.” He joined the London Futurists Podcast to discuss the future of AI in the legal profession.

Automation

One way in which AI is impacting the law is automation. Traditionally, a lot of legal work was repetitive and robotic. Junior lawyers working on transactions and on litigation spent days cooped up in “deal rooms”, working their weary way through piles of boxes, looking for the word or phrase in a document which could undermine a deal, or clinch a lawsuit. This is known as “discovery” in the US and “disclosure” in the UK. Machines excel at the close analysis of huge volumes of text, and much of this work has already been automated.

For instance, a human lawyer would take days to identify and summarise the change-of-control provisions in multiple commercial leases. A machine can do the same work in seconds. Law firms have not become less profitable since this work was automated by machines, so the firms are probably doing a great deal more of it, but at lower unit cost. They have learned how to sell the capabilities of their machines instead of simply selling the time of their junior employees.

Predictions

Lawyers make a lot of predictions. They are forever second-guessing each other, and also the judges who will decide their cases. Their advice to clients is based on these predictions. Deep learning and generative AIs are prediction machines, so they should be extremely helpful to lawyers.

When judges make decisions, they take into account all the evidence that is formally presented. But they also take into account the way the parties present themselves – their dress, their accents, their posture, their gestures, and their facial expressions. They may try to downplay some of this information, in order to avoid being biased or prejudiced. But as humans, they cannot help but be influenced by it. Indeed it is often a critical part of their job to make judgments about the honesty of a defendant or a witness, and about their ability to observe and explain the circumstances of a case.

Machines do not have that human ability to assess the disposition or capability of a party in a case. What they do have is the ability to remember every detail of the hundreds of thousands of prior cases which could be relevant to a particular decision. This is what enables Alarie’s company Blue J to predict the outcome of a particular case with over 90% accuracy. This is impressive: large amounts of money ride on the decision whether to proceed with a case, so traditionally it is the job of the most senior lawyers to predict the likely outcome and to advise the client whether to go to court.
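To make the idea concrete, here is a deliberately simplified sketch of outcome prediction as a classification problem. Everything in it is invented for illustration: Blue J’s actual features, data, and models are proprietary and far richer than this. The sketch fits a tiny logistic-regression model to numeric summaries of past cases, then estimates a win probability for a new case:

```python
import math

# Toy training data: each past case reduced to three numeric features
# (all invented for this example), with outcome 1 meaning the claimant won.
past_cases = [
    ([1.0, 0.9, 0.7], 1),
    ([0.5, 0.2, 0.4], 0),
    ([0.8, 0.8, 0.6], 1),
    ([0.3, 0.1, 0.5], 0),
    ([0.9, 0.7, 0.8], 1),
    ([0.4, 0.3, 0.3], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, features):
    """Estimated probability that a case with these features is won."""
    return sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)

# Fit the model by plain stochastic gradient descent.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for features, outcome in past_cases:
        error = predict(weights, bias, features) - outcome
        weights = [w - lr * error * x for w, x in zip(weights, features)]
        bias -= lr * error

# Estimate the win probability of a new case, to inform settle-or-litigate advice.
win_probability = predict(weights, bias, [0.7, 0.85, 0.65])
```

The point of the design is that the output is a probability, not a verdict: it informs the senior lawyer’s judgment about whether to go to court rather than replacing it.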

Centaurs

As is so often the case with AI at this stage of its development, the ideal situation is to combine the capabilities of the machine and the human lawyer. The pairing of the two is often compared with the mythical centaur, which was half-man and half-horse. In this case the combination is the machine’s comprehensive knowledge of past cases, and the human’s ability to assess other people. The interesting thing, though, is that the machines are quickly getting much better, and the humans are not. And humans don’t scale, whereas machines do.

As AI is increasingly widely used, and the accuracy of predictions improves, a likely result is that a smaller proportion of cases will go to trial because the litigants will all have a better understanding of whether they would win or lose. There might, however, be a countervailing increase in the amount of strategic litigation, in which people launch carefully-chosen cases in order to nudge the law in a direction they favour.

The legal singularity

When AI systems are much improved, could the law become a solved problem, with machines able to predict the outcome of any potential lawsuit with such high confidence that no case ever reaches a court? Alarie calls this the legal singularity. There could also be a legislators’ singularity, in which AI helps politicians and administrators to frame new laws and adjust existing ones in real time to make them more precise, fairer, and more efficient.

Alarie’s intuition is that the law is not a determinate system – in other words, it will never be possible to forecast all cases accurately. However, he does think that the system can and will move a long way towards being determinate from where it stands today, and that everyone will benefit from that happening.

Of course, the beneficent outcome is not guaranteed. The dark version of the legal singularity would have one or more AIs imposing harsh control on all of us, perhaps on behalf of an autocratic ruler, or perhaps in service of its own totalitarian logic. But Alarie is an optimist, and expects that the sunnier version will prevail, especially if enough people of good will start thinking about these issues soon, and working out how to steer the evolution of the law in the right direction.

Published on October 12, 2023 20:06

What’s new in Longevity? With Martin O’Dea

Martin O’Dea is the CEO of Longevity Events Limited, and the principal organiser of the annual Longevity Summit Dublin. In a past life, O’Dea lectured on business strategy at Dublin Business School. He has been keeping a close eye on the longevity space for more than ten years, and is well placed to speak about how the field is changing. O’Dea sits on a number of boards including the LEV Foundation, which was set up by Aubrey de Grey with a mission to prevent and reverse human age-related disease. O’Dea joined the London Futurists Podcast to discuss what we can expect from the forthcoming Longevity Summit in Dublin.

Long-lived animals

O’Dea is understandably reluctant to pick favourites among the speakers appearing in the four days of the summit, but when pushed, he nominates two speakers who will talk about animals with very long lifespans. Emma Teeling specialises in research on bats, which have much longer lifespans than you would expect given their size. Steve Austad has recently published a very well-received book, “The Methuselah Zoo”, which points out that evolution has developed a wide range of strategies to avoid cancer in long-lived species. Scientific research has tended to focus on short-lived species because the impact of interventions can more easily be studied in them, so we still have a lot to learn from longer-lived animals.

Another highlight for O’Dea will be a talk by Michael Levin, who researches the electrophysiology of the cell, which involves stimulating cells with electrical impulses to alter the development of an organism.

A four-day conference sounds like a lot of stage time to fill, but O’Dea insists that the real problem was reducing the number of speakers to fit the time available. Longevity is one of the world’s fastest-growing and most exciting areas of scientific research, and this is increasingly understood by investors, the media, and members of the general public.

How mainstream is longevity science?

The focus of the Dublin summit is the harder problems of longevity – the problems that cannot easily be addressed by commercial organisations. Aubrey de Grey has pioneered this kind of research for decades, and there have been ups and downs in that time. 2013 was a particularly interesting year, with the publication of seminal research about the hallmarks of aging, and also Google’s foundation of Calico, a surprisingly secretive organisation using big data to try to understand the mechanisms of aging.

O’Dea has the sense that the idea of science giving us all much longer lifespans and much better healthspans is on the cusp of becoming mainstream. Every few years there is a new breakthrough which gets us closer to that tipping point, but it is impossible to know what will finally get us across the threshold.

A few years ago it was big news if a research team received a million-pound grant. Now that is commonplace. Last year one group raised £180m, and it was not a major news story within the longevity community.

The media is a little behind the investment community. The Dublin Summit will be covered by the New Scientist, and a couple of significant documentaries will be filmed there. Mainstream outlets like the BBC, CNN, and the world’s major newspapers are still not devoting much attention to the summit, but O’Dea feels sure it won’t be long before they do.

Lifespan and healthspan

As for the general public, O’Dea acknowledges that the idea of radically extended lifespans is still too much to swallow for most people, but the idea of defeating some of the major diseases that afflict us as we age is not. It is ironic that most people would be delighted to learn that heart disease, cancer, and dementia had all been overcome, even though they look askance at calls to stop aging itself, which is what causes those three major killers.

Tackling aging is not only important because it can stop us all dying from this trio of fatal diseases. It is also vital to make our later years enjoyable, indeed endurable. Sadly, most people don’t die quietly and suddenly in their sleep. Most of us will endure years of pain and worry as we fight one or more of the three killer diseases. These afflictions also impose huge financial burdens on the taxpayer. Most of the money that your country’s health service will ever spend on you is spent in your final years – indeed, often in your final year – and if we could improve healthspan as well as lifespan, we could remove this burden.

Aubrey de Grey’s current project is to achieve robust mouse rejuvenation, which means giving an extra year of life to middle-aged mice. The project is a large study costing a great deal of money, and O’Dea argues that it is the most important piece of scientific research in the longevity field – and perhaps any scientific field. The study’s 1,000 mice have not yet lived long enough to announce any major results at the summit, but there may be important findings to talk about next year.

$100 billion

Although there is an encouraging increase in the amount of money dedicated to longevity research, we still need multiples more, because the mechanisms of aging are fantastically complex. Instead of hundreds of millions of dollars, we need hundreds of billions. In a previous podcast, Andrew Steele (who will also be at the summit) argued that we should not speculate about how many years it will take to reach longevity escape velocity (the moment when science gives you an additional year of life every year that passes). Instead we should talk about how much money it will take. His best guess is that the amount required is in the ballpark of $100 billion.
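The escape velocity idea can be made concrete with a toy calculation (all numbers here are invented for illustration): if research adds g years of remaining life expectancy for every calendar year that passes, your remaining expectancy changes by g − 1 each year, so for g ≥ 1 it never falls.

```python
def remaining_expectancy(start_years, annual_gain, horizon):
    """Toy model: each year, one year of life is used up, but ongoing
    research adds annual_gain years of remaining expectancy."""
    remaining = start_years
    trajectory = [remaining]
    for _ in range(horizon):
        remaining = remaining - 1 + annual_gain
        trajectory.append(remaining)
    return trajectory

# Below escape velocity (gain of 0.3 years per year): expectancy falls.
falling = remaining_expectancy(40, 0.3, 10)

# At escape velocity (gain of 1.0 years per year): expectancy holds steady.
steady = remaining_expectancy(40, 1.0, 10)
```

The model is trivially simple, but it captures why the threshold matters so much: below a gain of one year per year, extra lifespan merely delays the end, while at or above it, remaining expectancy stops shrinking altogether.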

The best part of any conference is always said to be the networking, and O’Dea says this is particularly true of the Longevity Summits: no-one is obliged by corporate non-disclosure agreements to keep information from anyone else, and the community’s underlying purpose is exciting and energising.


Published on October 12, 2023 19:58

August 11, 2023

Investing in AI, With John Cassidy

Kindred Capital

Venture capital is the lifeblood of technology startups, including young companies deploying advanced AI. John Cassidy is a Partner at Kindred Capital, a UK-based venture capital firm. Before he became an investment professional, he co-founded CCG.ai, a precision oncology company which he sold to Dante Labs in 2019. He joined the London Futurists Podcast to discuss how venture capital firms are approaching AI today.

Kindred Capital was founded in 2015 by Mark Evans, Russell Buckley, and Leila Zegna. It has raised three funds, each of around $100 million, and is focused on early-stage investments, known in the industry as pre-seed and seed rounds. It likes to invest in platforms and in ‘picks and shovels’ – businesses which can become part of the essential infrastructure for many larger companies. Its preferred sectors are ‘techbio’ (by which Cassidy means tech-focused biotech businesses), software (especially software as a service, or SaaS), energy, and fintech. Its main geographies are Europe, the UK, and Israel.

Among its recent AI investments is Scarlet, which is building a continuous compliance infrastructure for companies operating in the highly regulated medical software industry. Another is Cradle Bio, a generative AI tool which allows protein engineers to use deep learning AI systems and models like AlphaFold to identify new and better proteins for medicines and industrial enzymes.

Bubbles and reality

The venture capital industry is highly cyclical, and notoriously prone to excess. In recent years it has applied over-exuberant valuations to blockchain companies, and to companies offering ten-minute delivery services, but the dotcom bubble at the turn of the century was perhaps the most infamous example. Cassidy hopes the current wave of excitement about AI is different from those situations. There is some exaggeration of the capabilities of transformer AIs, and some people argue breathlessly that they are virtually artificial general intelligence (AGI) systems, which is not true. But underlying that hubris, large language models and generative AI are starting to demonstrate the transformational capabilities that will ensure this is no bubble, because they can create real efficiencies, and generate real money.

It is often said that an economic boom is like a gold rush, and in a gold rush you are better off selling picks and shovels to the miners, than digging or panning for gold yourself. Nvidia is a great example of a company doing the equivalent of selling picks and shovels to miners, and its valuation is exuberant. Cradle Bio, Kindred’s portfolio company that helps protein engineers use generative AI to design molecules for medicines and industrial enzymes, is also in the picks and shovels business. Cassidy says the number of proteins which scientists have studied so far is vanishingly small compared to the number of all possible proteins: it’s like the ratio between a single grain of sand and all the sand in the world. So there is a lot to go for.

The trick for investors, of course, is to identify which of the companies operating in the new value chains will be successful, and which are built on castles of sand. Some of the biggest companies in the world today, like Amazon and Google, were formed during the dotcom bubble, but a great many more disappeared without trace, taking large pools of capital with them.

Founders

At the pre-seed stage, the factor which matters most is the capability of the founder or founders. During the journey from startup to successful exit (stock market flotation, or sale to a bigger company), everything about the company will change, including its technology, its product, and its business model. Pretty much the only thing that can remain constant is the founder. Cassidy spends his time trying to identify and develop relationships with founders and potential founders who have the spark (“the creative destruction in their being”), which means they have an outside chance of starting a company and guiding it through all the enormous changes and challenges that lie between the start point and the finish point.

These founders are extraordinarily talented and driven, but that is not enough. They have to be irrational enough to believe that they can change the world – that they can lift themselves by tugging on their own shoelaces – while also having great judgement, which tells them which strategies and tactics will work in a given situation, and which ones won’t.

Cassidy suggests it is useful here to apply the model of fluid and crystallised intelligence, which was first suggested in 1963 by the psychologist Raymond Cattell. Crystallised intelligence is the trump card of older people, who have seen many of the possible strategies deployed, and learned from experience what works and what doesn’t. They also know the written and unwritten rules which guide organisations. Fluid intelligence is the ability – more evident in younger people – to solve problems from first principles, and to ask “why do we do it this way?” when everyone else takes a sub-optimal approach for granted. The best founders possess both these types of intelligence.

Lessons from Silicon Valley

Cambridge is where Cassidy went to do his PhD, and he was enchanted by the geeky conversations he overheard in pubs, where people talked about how to engineer new proteins. As he was growing his precision oncology business, CCG.ai, he also spent a lot of time in Silicon Valley, where the conversation in bars was all about how to create new types of company, and how to be successful in new and creative ways. He thinks Cambridge (and indeed, Europe as a whole) has a lot to learn from Silicon Valley, and there is much to do in order to build the availability of growth capital, and a helpful institutional environment. But he is confident it can be done, because of the exceptional talent emerging all the time from universities there.

There is still a fear of failure in Europe, whereas in Silicon Valley if you start a company, raise some money, but fold the company again six months or a year later, nobody holds that against you. This should be second nature to scientists, who make progress by disproving one hypothesis in order to develop a better one.

Europe is also in danger of hobbling its tech industry by regulating both the products and services it develops, and also the mergers and acquisitions that enable it to reward success. If the only way to exit a successful high-growth business is to float it on the NASDAQ, then Europe cannot expect to build a cluster of home-grown tech giants.

Another factor often cited to explain why Europe has no tech giants is that its single market remains a work in progress, with Brexit being a big step backwards. Cassidy argues that the US’ single market is also imperfect, at least in his area of healthcare, as individual states have different regulatory frameworks. He also argues that any company that wants to scale must learn how to work in different environments, and starting a company in Europe can mean you simply acquire the skills to do that sooner.

Focusing AI on clinical trials

Cassidy is excited about the future of AI in biotechnology. Much of the current action in healthcare AI is devoted to designing new molecules, but the biggest hurdles to getting new drugs to market lie in the clinical trial process that lies downstream of protein engineering. This is where pharmaceutical companies spend the vast majority of their budgets – and their time. AI could enable efficiency improvements – large and small – which would collectively get drugs to patients much faster and much more cheaply.


Published on August 11, 2023 09:01

The Death of Death. With Jose Cordeiro

An enthusiastic transhumanist

One of the most intriguing possibilities raised by the exponential growth in the power of our technology is that within the lifetimes of people already born, death may become optional. This idea was championed with exuberant enthusiasm by Jose Cordeiro on the London Futurists Podcast.

Jose Cordeiro was born in Venezuela, to parents who fled Franco’s dictatorship in Spain. He has closed the circle, by returning to Spain (via the USA) while another dictatorship grips Venezuela. His education and early career as an engineer were thoroughly blue chip – MIT, Georgetown University, INSEAD, then Schlumberger and Booz Allen.

Today, Cordeiro is the most prominent transhumanist in Spain and Latin America, and indeed a leading light in transhumanist circles worldwide. He is a loyal follower of the ideas of Ray Kurzweil, and in 2018 he co-wrote “The Death of Death” with David Wood.

Immortal cells and organisms

Cordeiro has been described as “a hopeless optimist always bursting with energy”. He proclaims that life is beautiful, and we should all enjoy more of it than nature has endowed us with. Some of his optimism about the prospects for longevity stems from the existence of immortal cells in our bodies, and the existence of immortal organisms, like bacteria, some hydras, and some kinds of jellyfish. They don’t age, so if they are not killed by predators or accidents, they can live indefinitely. Bacteria are the oldest life form on the planet, so life on Earth actually started without aging built in.

Ray Kurzweil is a polarising figure, but he deserves much credit for alerting many people to the astonishing impact of Moore’s Law, which is the observation that $1,000-worth of compute gets twice as powerful every 18 months. Moore’s Law means that compute power is growing exponentially, and Kurzweil realised decades ago that this could give us machines with all the cognitive capabilities of adult humans within his lifetime. In the 1980s, Kurzweil was working at MIT with Marvin Minsky, one of the founding fathers of the science of artificial intelligence. Cordeiro studied there, and when he took some courses with Minsky, he came across Kurzweil, and read his book, “The Age of Intelligent Machines”.
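The power of this observation lies in the compounding. A quick sketch (my illustration of the arithmetic, using the 18-month doubling period stated above) shows why exponential growth surprises linear intuitions:

```python
# If $1,000 of compute doubles in power every 18 months, the cumulative
# improvement over a career-length span is enormous.

def power_multiple(years: float, doubling_months: float = 18.0) -> float:
    """How many times more powerful $1,000 of compute becomes after `years`."""
    return 2 ** (years * 12 / doubling_months)

# Over 30 years that is 20 doublings: roughly a million-fold improvement.
print(power_multiple(30))  # 1048576.0
```

Twenty doublings in thirty years is what allowed Kurzweil to project, decades in advance, that machines could reach human-level cognitive capability within his lifetime.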

Living with death

It’s an odd fact that many people are blasé about the idea of radically extended longevity. There is a very common tendency to say that 80 years is a good and proper length of time to live, and wanting more is greedy and inappropriate. Cordeiro thinks this attitude arises from our need to make death less horrifying. We convince ourselves that death gives meaning to life, and so, to coin a phrase, we are able to live with death.

But is there any reason to believe that humans could be given radically longer lifespans in the near term? The oldest person who ever lived died at the age of 122 back in 1997, and average life expectancy in the US and the UK has actually declined in recent years.

Methuselah worms

Cordeiro argues that in the last decade or so, exciting progress has been made on extending the lifespans of various animal models: the lifespans of some mice have been doubled. Some fruit flies have had their lifespans multiplied by four, and some worms by ten, so there are now so-called “Methuselah worms” that have lived the human equivalent of 1,000 years.

No human has had their lifespan extended like this, but some human cells have been rejuvenated. The 2012 Nobel Prize for Medicine was given to a Japanese scientist called Shinya Yamanaka. His team have proved that skin cells can be rejuvenated, and now he is working on eyes, which are relatively small organs, without many connections to the rest of the body. They have succeeded with mice and with monkeys, and human tests are starting.

The most recent advances have taken scientists by surprise, because they are enabled by the exponential growth in the power of computer technology, and of new techniques like CRISPR-Cas9. This exponential growth also means that future advances will come much faster than most of us expect.

If cancer can stop aging, so can we

Most of the cells in our bodies age, but cells known as germ cells, which are responsible for reproduction, do not. They make eggs in women and sperm in men, and they exist in all multi-cellular organisms. The other type of cells, which do age, are called somatic cells, or body cells. If somatic cells mutate and become cancerous, then they do not age either. Cordeiro jokes that if cancer can learn how to stop aging, then so can we.

There is no single theory about how and why aging happens that is universally accepted. Instead there is vigorous debate between the protagonists of a variety of theories. For instance some people think that aging is like the wear and tear of a car. Parts of a car get rusty, or fall off because a screw works loose, and similar processes occur at the cellular level in biological organisms. Other people think that aging is built-in obsolescence. Over millions of years, evolution has repeatedly “discovered” that a species thrives when its older members die, not least because this allows younger, improved members of the species to take over.

Cordeiro takes a radical approach to this debate: he dismisses it as unimportant. He argues that all we need to do is to work out how the cells and organisms that do not age manage to avoid it, and then copy those techniques.

Evolution was wrong

Cordeiro also has no time for the argument that evolution arranged for us to age, so there must be a good reason for it. He points out that evolution has endowed us with many defects that science has enabled us to overcome, such as disease, and deteriorating eyesight. He adds that aging takes such varied forms that it cannot have a single purpose. Even within the class of vertebrates called mammals, there are mice which live two years, and whales that live hundreds of years. Aging must be doing very different things in these animals to have such different manifestations.

The optimism that longevity research will make great advances in the coming years stems partly from the exponential rate of improvement of technologies that it is using, and also partly from the fact that so much more resource is being applied to it now. A few years ago, the amount of money invested in the research was in the millions of dollars. Today it is in the billions, and soon it will be in the trillions. Cordeiro believes that within a few years, longevity medicine will be the largest industry in the history of humanity. He is convinced that Ray Kurzweil is right to believe that by 2029 we will achieve longevity escape velocity (LEV), which means that every year that passes, science gives you an extra year of life to offset the year you just spent. The implication of this is that if you manage to live to 2030, death should become optional for you.

Death and politicians

Politicians really should pay attention to these developments. Not just because the end of aging would be the most significant development in human history, but also because there is a huge longevity dividend. Age and the diseases it causes – heart disease, dementia and cancer – consume most of the health budget of every country on the planet. And health services are barely managing to cope. If we can cure aging we can slash this cost.

The most useful contribution that a region or a country could make, Cordeiro argues, would be to declare aging a curable disease. This would attract massive funding, and an influx of scientific talent. 90% of human deaths are caused by aging and age-related diseases. All the other causes – malaria, suicide, drugs, war, famine, and so on – account for only 10%.


Published on August 11, 2023 08:56

AI and professional services. With Shamus Rae

Collar colour

Not long ago, people assumed that repetitive, blue-collar jobs would be the first to be disrupted by advancing artificial intelligence. Since the arrival of generative AI, it looks like white-collar jobs will be impacted first. Jobs like accounting, management consulting, and the law. Who would have guessed that lawyers would find themselves at the cutting edge of technology?

Shamus Rae is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit firms. Shamus joined the London Futurists Podcast to discuss how AI will impact professional services in the next few years.

Shamus was ideally placed to launch Engine B, having spent 13 years as a partner at the audit firm KPMG, where he was Head of Innovation and Digital Disruption. But his background is in technology, not accounting. Back in the 1990s he founded and sold a technology-oriented outsourcing business, and then built a 17,000-strong outsourcing business for IBM in India from scratch.

Data

The top priority for Engine B is data. Shamus argues that unless an organisation’s data is up-to-date, accurate, and held in standardised formats, advanced AI can’t do anything useful with it. So Engine B spends a lot of its time obtaining the right data from clients, and making it comply with those standards. Getting the plumbing right, as Shamus puts it.

Most of this data is used by the kind of pattern recognition deep learning models that were introduced by the 2012 Big Bang in AI. But Engine B is also starting to use the generative AI models that were introduced by the 2017 Big Bang, and is building co-pilots for its smaller clients – the larger firms are building their own.

The audit firms used to think there was competitive advantage in their data models, and their individual approaches to handling client data, but when the ICAEW reviewed the approaches, they found they were all pretty much the same. This shouldn’t be surprising: data science is not a core skill for accountants, and nor should it be.

This does not mean that data is not important and confidential. Engine B is religious about never looking at the content of client data, and never copying it, anonymising it, or storing it.

Data swamps and data lakes

Most of the data held by most companies is in a bad state, and has to be cleaned up and regularised before it can be used. To coin a phrase, data swamps have to be transformed into data lakes. For example, a large company will lease many buildings; each of these leases is likely to have evolved over time, and it may not be immediately obvious which lease is the current and applicable one. You can find this out by correlating information from payment records, and this can be done automatically, without a human nosing around in the data.
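The lease example can be made concrete with a small sketch. The code below is hypothetical (the field names and matching rule are my invention, not Engine B's actual method): it votes for the lease whose rent matches the most recent payment records, inferring which lease is current without a human reading the underlying data.

```python
# Hypothetical illustration: infer the current lease for a building by
# matching payment records against candidate lease documents.

from collections import Counter

leases = [
    {"lease_id": "L-2015-03", "monthly_rent": 10_000},
    {"lease_id": "L-2019-07", "monthly_rent": 12_500},  # renegotiated later
]

payments = [  # recent payment records for the building
    {"month": "2023-04", "amount": 12_500},
    {"month": "2023-05", "amount": 12_500},
    {"month": "2023-06", "amount": 12_500},
]

def infer_current_lease(leases, payments):
    """Vote for the lease whose rent matches the most recent payments."""
    votes = Counter()
    for p in payments:
        for lease in leases:
            if p["amount"] == lease["monthly_rent"]:
                votes[lease["lease_id"]] += 1
    lease_id, _ = votes.most_common(1)[0]
    return lease_id

print(infer_current_lease(leases, payments))  # L-2019-07
```

A real pipeline would correlate many more signals (dates, counterparties, indexation clauses), but the principle is the same: the answer is recoverable from the data itself, automatically.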

There are about 300 accounting systems used by large companies around the world, and most of them can be tailored to particular client requirements. In addition, some clients – like Tesla – actually write their own accounting systems rather than using the industry standards like SAP and Oracle. So the variety of accounting systems that Engine B has to tap into is enormous. Nevertheless, it claims to be able to start extracting useful data from almost any accounting system within an hour.

Engine B currently has paying clients in the US and the UK, and works on the audits for 50,000 of the companies that its clients work for. Its clients are global firms, and they will shortly be rolling out the service elsewhere in Europe and the rest of the world. It expects to be working on around 200,000 audits in a year’s time, so growth is fast. Shamus says the company is more advanced in the accounting sector than the legal sector, but that the arrival of generative AI is changing the balance.

Training future partners

When the prospect is raised of AI automating the simpler, more repetitive tasks in auditing, the question is always asked: how will young accountants get trained? Shamus replies that the skills acquired during years of “ticking and bashing” can be acquired less painfully and more quickly. In future, accountants might suspect that their predecessors were put through that process mainly as a sort of therapy for the generation that had endured it before them.

A comparable process for lawyers was to wade through thousands of legal documents in a “deal room” during the review of a transaction like an investment or an acquisition. Much of this “disclosure” work has now been automated, with no apparent loss of expertise within the legal profession.

New business models

But the automation of ticking and bashing does give the audit firms a problem, as it undermines their funding model, in which clients are charged a significant sum for a mass of juniors to carry out grunt work, and partners earn a share of this income to add to the larger fees they charge for their own more limited time.

Shamus thinks the professional services firms will have to abandon their current triangle-shaped organigrams, with a lot of junior people at the bottom, a smaller number of managers in the middle, and a very small number of partners at the top. They will have to adopt a diamond-shaped organigram, because most of the junior jobs will have been automated.

Lawyers and accountants will also have to learn how to sell more than just billable hours. They will have to sell the value of the AI systems which are replacing much of the work previously done by junior humans.

In Shamus’ experience, senior people in professional services do appreciate that GPT technology means their industries are about to experience dramatic change. But there is still a level of denial: people often think that everyone else’s job will change, but not theirs.


Published on August 11, 2023 08:50

July 13, 2023

AI and new styles of learning. With David Giron

The education sector may well be impacted by advanced AI more profoundly than any other. This is partly because of the obvious potential benefit of applying more intelligence to education, and partly because education has resisted so much change in the past.

42 as the meaning of … learning

David Giron is the Director of one of the world’s most innovative educational institutions, 42 Codam College in Amsterdam. He was previously the head of studies at Codam’s parent school 42 in Paris, which was founded in 2013, so he has now spent 10 years putting the school’s radical ideas into practice. He joined the London Futurists Podcast to explain how 42 works, and how the world of education will be impacted by technology in general, and by generative AI in particular.

42 is a software engineering school, in which all learning is completely peer-to-peer. There are no teachers or lecturers. The learning process is hands-on: students don’t talk about programming; they learn by doing it. The recipe has proved successful: the school now has 50 campuses around the world, in 30 countries, with 18,000 students currently enrolled. As you may have already guessed, it is named after the famous joke in Douglas Adams’ “Hitch-hiker’s Guide to the Galaxy” that 42 is the meaning of life.

Placing students at the centre

Giron says the philosophy of 42 is not antagonistic towards more traditional approaches to education, but it sees the student as passive and peripheral in them, whereas it seeks to place the student at the centre of the learning process. Rather than receiving learning, they have to seek it.

Examination and evaluation is modelled on academic peer review: students are selected randomly (within constraints) to review each other’s work.

Mastery learning

The 42 school practises “competency-based learning” or “mastery learning”, which was advocated by the educationalist Sir Ken Robinson in one of the most-watched TED talks ever. This means that students do not proceed from one module to the next until they have demonstrated mastery of the first one.

This is particularly important in maths, and maths-related subjects like software engineering, because failure to understand one module means that your understanding of everything that follows will be shaky at best. Therefore some students at 42 finish the course in six months while others take two years. There is no stigma attached to this: it is not a race.

Broader applicability

Giron believes that 42’s approach is applicable to many other subjects – perhaps all subjects, but most of the subjects where it has been tried are technical ones. The same hurdles keep cropping up: equipment and consumables. Software engineering requires no capital investment and no material inputs. This is obviously not true of other branches of engineering, like chemical engineering, or woodworking.

Although Giron notes that 42’s approach has been very successful, and could be applied more widely, he does not claim that it should be adopted universally. Every student is different, and what works for one will not necessarily work for the next. 42 is simply offering one new approach to the educational mix. 42 receives frequent visits from other educationalists who are curious to learn about its approach, but as a previous guest on the London Futurists Podcast commented, education is a bit of a slow learner.

Metrics and failure

The most important measurement of success for Giron is the enthusiasm of employers to hire 42’s graduates, including employers who have already hired some in the past. The second measurement is the satisfaction of the students themselves, and this is tested regularly.

Some elite schools claim that if no students are failed, then the bar is being set too low. Others argue there is no reason why every student should not succeed, at least if they were able to gain admittance in the first place. Giron says he adopts a third approach, which is that students should experience failure, but that this should happen within the school, and it should not mean they have to leave. The experience of failure can inculcate resilience, but it should not be allowed to undermine the fundamental confidence of a student.

Covid

Face-to-face contact between students was seen as an important element of 42’s approach, so Covid was especially challenging. When the lockdowns hit, the school took a month to re-design the learning process to be online-only, but the level of drop-outs soared, and the students who persevered took longer to complete the course. It also turned out that for many students, 42 provided the whole of their social life, and when they were no longer able to see each other at school, some of them had no social contact whatsoever.

Re-adjusting to normal life after Covid was also bumpy, but Giron reports that everything is pretty much back to normal now.

Generative AI

Initial responses to ChatGPT and similar models were often polarised. Some people immediately said that we have entered a new world and everything will change. Others demurred, dismissing the excited talk as hype. Enough time has now passed since the launches of ChatGPT and GPT-4 to make a more balanced judgement. Giron does believe that these models will have enormous impact. For instance, some simple software engineering tasks, such as building static websites, will probably be completely automated. But he says that the adoption of GPT technology will be slower than many people expect, and it will not replace humans in most software engineering roles.

Surprisingly, only 3 or 4 of every 10 students at Codam are using GPTs regularly. Adoption is picking up, but it remains gradual.

An interesting consequence of GPTs is that the sequence of coding is sometimes reversed. Previously, you would write some code, and if you followed best practice you would then write some commentary on it to help future engineers use or debug the code. Now you can write the commentary, and have GPTs write the code based on that. Effectively, this is programming in natural language.
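The inversion described above can be sketched as follows. In this hypothetical example (the spec, function name, and generated body are all my illustration), the engineer writes the commentary first, and a code model produces an implementation to match it:

```python
# Comment-first programming: the specification is written in natural
# language, and the implementation is generated from it. The body of
# `top_words` stands in for model-generated code.

import re
from collections import Counter

SPEC = """
Return the n most frequent words in `text`, lower-cased,
ignoring punctuation, as (word, count) pairs in descending order.
"""

# --- what a code model might generate from the spec above ---
def top_words(text: str, n: int):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

print(top_words("The cat sat on the mat. The cat slept.", 2))
# [('the', 3), ('cat', 2)]
```

The commentary, once an afterthought written for future maintainers, becomes the primary artefact: effectively, programming in natural language.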

Agility

It is too soon to know exactly what impacts GPTs and other advanced AIs will have on education, and the impact will be very different depending on the timescale. The change in the next year will be eclipsed by the change in the next five years, and again in the next decade. In a period of exponential technological progress, the most important characteristic to cultivate is agility.


Published on July 13, 2023 00:45

July 7, 2023

AI-developed drug breakthrough. With Alex Zhavoronkov

Healthcare is one of the sectors likely to see the greatest benefits from the application of advanced AI. A number of companies are now using AI to develop drugs faster, cheaper, and with fewer failures along the way. One of the leading members of this group is Insilico Medicine, which has just announced the first AI-developed drug to enter phase 2 clinical trials. Alex Zhavoronkov, co-founder of Insilico Medicine, joined the London Futurists Podcast to explain the significance of this achievement.

Idiopathic Pulmonary Fibrosis

The drug in question is designed to tackle Idiopathic Pulmonary Fibrosis, or IPF. “Fibrosis” means thickening or scarring of tissue, and “pulmonary” refers to the lungs. The walls of the lungs are normally thin and lacy, but IPF makes them stiff and scarred. It is a common disease among the over-60s, and is often fatal.

Insilico is unusual among the community of AI drug development companies in that most of them go after well-known proteins, whereas Insilico has identified a new one. In 2019, Insilico’s AIs identified a number of target proteins which could be causing IPF, by scouring large volumes of data. They whittled the number down to 20, and tested five of them, which resulted in one favoured candidate. They proceeded to use another set of AI models to identify molecules which could disrupt the activity of the target protein. This second step involved the relatively new type of AI that is called generative AI.

GANs and GPTs

The first generative AIs were introduced in 2014 (the same year that Zhavoronkov founded Insilico Medicine), and are known as Generative Adversarial Networks, or GANs. This involves two AI models competing with each other – one generating an image, and the other judging whether it is real or machine-made – until the generated images are essentially indistinguishable from real ones. The second, and better-known class of generative AIs are transformer AIs, which were introduced in a 2017 paper by Google researchers called “Attention is all you need.” These are familiar to us all from ChatGPT and GPT-4: GPT stands for Generative Pre-trained Transformer.

To identify a molecule which can disrupt the target protein, Insilico gives the crystal structure of the protein to as many as 500 different generative AI models, and instructs them to design molecules which will bind with the protein productively. Over a few days, these models compete to find the best molecule for the job. Human chemists in around 40 Contract Research Organisations (CROs), mostly in China and India, review the most promising 100 or so of the resulting molecules, and around 15-20 of them are synthesised and tested. The characteristics of the best performing molecules are fed back into the array of generative AI systems for further review. This was all done in 2019.
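The generate-test-feedback loop described above can be sketched in miniature. Everything in this toy version is illustrative rather than Insilico's actual pipeline: "molecules" are single numbers, the "lab" is a scoring function, and the feedback step simply re-centres the ensemble of generators on the best tested candidates.

```python
# Highly simplified sketch of a generate / test / feedback design loop:
# an ensemble of generators proposes candidates, the most promising are
# "synthesised and tested", and results steer the next round.

import random

random.seed(42)

def make_generator(bias: float):
    """Stand-in for one generative model; proposes candidates clustered
    around the generator's current bias."""
    def generate():
        return random.gauss(bias, 0.2)
    return generate

def lab_score(candidate: float) -> float:
    """Stand-in for synthesis and wet-lab testing; best candidates
    score near 1.0 in this toy landscape."""
    return -abs(candidate - 1.0)

generators = [make_generator(random.uniform(0, 2)) for _ in range(50)]

best = None
for round_ in range(5):
    # Each model proposes a candidate; keep the most promising for "testing".
    candidates = sorted((g() for g in generators), key=lab_score, reverse=True)
    tested = candidates[:10]
    round_best = tested[0]
    if best is None or lab_score(round_best) > lab_score(best):
        best = round_best
    # Feedback: re-centre the ensemble on the best tested candidates.
    centre = sum(tested) / len(tested)
    generators = [make_generator(random.gauss(centre, 0.1)) for _ in range(50)]

print(round(best, 2))  # converges towards the optimum at 1.0
```

The real system differs in every detail, but the structure is the same: many generators compete, a much more expensive testing step filters their output, and the test results feed back to improve the next round of generation.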

Clinical trials

The resulting molecules were tested for both efficacy and safety in mice and other animals, including dogs. By 2021 the company was ready for phase zero of the clinical trial process, which was a preliminary test for safety in humans, conducted on eight healthy volunteers in Australia. This was followed by a phase one clinical trial, a fuller test for safety in humans. This was carried out on healthy volunteers in New Zealand and China, and had to be particularly thorough because IPF is a chronic condition rather than an acute one, so people will be taking a drug for it for years rather than weeks or months.

Now Insilico is able to proceed to the phase two study, dosing patients with IPF in China and the USA. Part of the challenge at this point is finding enough patients with good life expectancy, and the company is still recruiting.

Savings and consolidation

Overall, Zhavoronkov thinks that Insilico has shaved a couple of years off the six-year discovery and development process. More importantly, around 99% of candidate molecules fail, so the biggest improvement offered by AI drug discovery and development lies in reducing this failure rate.

A couple of years ago, the community of companies applying AI to drug development consisted of 200 or so organisations. Biotech was a hot sector during Covid, with lots of money chasing a relatively small number of genuine opportunities. Some of that heat has dissipated, and investors have got better at understanding where the real opportunities lie, so a process of consolidation is under way in the industry. Zhavoronkov thinks that perhaps only a handful will survive, including companies like Schrödinger Inc., which has been selling software since the 1990s, and has moved into drug discovery.

New technologies, new opportunities

For the companies that survive this consolidation process, the opportunities are legion. For instance, Zhavoronkov is bullish about the prospects for quantum computing, and thinks it will make a significant impact within five years, possibly within two. Insilico is using 50-qubit machines from IBM, a company he commends for having learned a lesson about over-hyping technology from its unfortunate experience with Watson, an AI product suite which fell far short of expectations. Microsoft and Google also have ambitious plans for the technology. Generative AI for drug development might turn out to be one of the first really valuable use cases for quantum computing.

The arrival of GPTs has made Zhavoronkov a little more optimistic that his underlying goal of curing aging could be achieved in his lifetime. Not through AI-led drug discovery, which is still slow and expensive, even if faster and cheaper than the traditional approach. Instead, GPTs and other advanced AIs hold out the promise of understanding human biology far better than we do today. Pharmaceuticals alone probably won’t cure aging any time soon, but if people in their middle years today stay healthy, they may enjoy very long lives, thanks to the technologies being developed today.


The post AI-developed drug breakthrough. With Alex Zhavoronkov first appeared on .

Published on July 07, 2023 07:17