Calum Chace's Blog, page 4
April 14, 2023
What do professional futurists do? With Nikolas Badminton
We’re only a few weeks into 2023, but there has been a sea-change in the thinking of many business people regarding the future. GPT-4 and similar systems look likely to usher in major changes to the way many of us work and play, and they will probably have significant impacts on markets, economies, politics, and international relations. How can businesses become more effective in anticipating and managing these changes in their business landscapes?
A new book which addresses this question of managing rapid change is Nikolas Badminton’s “Facing our Futures: How foresight, futures design and strategy creates prosperity and growth”. Over the last few years, Badminton has worked with over 300 organizations, including Google, Microsoft, NASA, the United Nations, American Express, and Rolls Royce. He also advised Robert Downey Jr.’s team for the “Age of AI” documentary series. Badminton joined the London Futurists Podcast to discuss what it means to be a professional futurist, and how to become one.
Becoming a futurist
At the age of eight, Badminton was captivated by “The Usborne Book of the Future”. Published in 1980, this book described what life would be like in 2000, at the start of the next century. It got some things right, including wearable computing, but it predicted other things which have still not come to pass, including colonies on the moon and under the ocean. Around the same time, he started playing with computers, and he carried on working with them at university and in his early career, which focused on the use of data in advertising. Before becoming a futurist, Badminton spent most of his career as a consultant, advising clients on technology and business strategy.
In 2008, Badminton moved to Canada. In North America there was much more interest in how technology would shape our futures, and for the first time he encountered people calling themselves futurists. Coincidentally, though, it was after giving a presentation at a London Futurists event in 2015 that Badminton decided to become a full-time futurist.
The nature of futurism
He describes a typical engagement. Working for a large technology company with 180,000 employees, he said “here are the signals, here are the trends, here are some scenarios we can expect to see, and this is how your objectives and your strategies will be impacted.” Together with an executive – who would become a futurist-in-residence – he reviewed a wide range of outcomes, from demographic trends such as population shrinkage, to government policy on gun ownership and gun crime.
Another example is an engagement for a frozen food company which was poised to make an investment in Valencia, in southern Spain. Badminton drew their attention to a report which described a difficult future of excess heat in that part of the world, and a corresponding “apocalypse windfall” in northern Europe. The company reviewed and revised its investment plans accordingly.
The work generally consists of workshops, and collaborative projects to write fiction that captures the most likely scenarios and brings them to life. Clients range from large corporates to startups, and they are all over the world. Badminton works on these engagements with colleagues in an organisation called the Futurist Think Tank. In the last three years, this has grown to 60% of his work, whereas previously most of his revenue came from keynote talks. So far, he is finding it fascinating and rewarding.
How futurists are trained
If you are an engineer, or a government administrator, and you want to become a strategy consultant, you typically do an MBA and join McKinsey or one of the other strategy firms. There is as yet no equivalent pathway for futurists. There are universities which now offer degrees in foresight, but Badminton’s advice is to travel and experience different types of life and work. You need a broad horizon to be a futurist, and most people will not glean that from the pages of books.
The key is to be open-minded and curious, and shift your mindset from “what-is” to “what-if”. You need to be prepared to ask difficult questions, and see where the logic goes. If the people who championed Brexit had sincerely considered the potential downsides, they might not have pushed as hard for the extreme form of Brexit that was eventually imposed. Who knows, maybe David Cameron, the UK prime minister who launched the referendum which led to the decision, might not have done so.
Dark Futures
Lots of people who call themselves futurists peddle what could be called “future porn”. They gush about what a wonderful world we will live in when we have self-driving cars and robot valets, and they fail to think rigorously about the timelines for the individual technologies, the potential for resistance to their adoption, and the possible downsides. When he hears these people talk, Badminton often wants to put his head in his hands and weep: these glib accounts don’t help people think seriously about the challenges ahead.
We are, he says, born of struggle, perennially subject to war and taxes. He is sceptical that we can reach a society of abundance any time soon. He describes the world envisaged by Peter Diamandis and others as a “pay-to-play situation”, which will not be available to the large swathes of humanity who struggle under a calorific deficit, and are beset by wars and corrupt government.
There are plenty of ways that technology could enable or create negative economic and social outcomes. Badminton has run around 45 “Dark Futures” meetings, in which guests give short presentations about bad or dangerous scenarios such as inappropriate use of surveillance technology, or dirty money being laundered through the property markets of desirable cities like Vancouver and London. These meetings have been held in Vancouver, Toronto, and San Francisco. Expansion to New York and London was stymied by the pandemic.
However, Badminton is an optimist. He thinks our societies are in a constant state of collapse, but we are very resilient, and we keep bouncing back in better shape than before. Alarmism about climate change and capitalism is overblown, he thinks. We do face a number of existential risks like nuclear weapons and asteroids, but there are few of these, and he likes to think of futurists as “hope engineers”.
De-growth
Jason Hickel and others argue that rich humans are placing too much strain on the Earth’s ecosystems. They think we should shrink our economies and become more sustainable. Badminton is attracted to parts of this ideology, but he regards it as “entirely imaginary”, and therefore a useful tool to help develop new ways of thinking, new planning scenarios and new stories.
He does think we should wean ourselves off fossil fuels more quickly, and build circular economies where nothing is wasted and everything is recycled. He would like to see cities built for pedestrians rather than cars (although he is cynical about the Saudi project to build a car-free city called The Line), and greater equality of income.
But unless abundance for all is just around the corner, slowing economic growth is an affordable indulgence for wealthy people in the West, and a disaster for hungry people in the global south, and for those just clawing their way into a middle-class existence.
Futurism improves company performance
The growth of interest in futurism as a career is a good thing, he thinks, but too few large companies employ full-time futurists. A study carried out in Belgium in 2018 by Rene Rohrbeck and Menes Etingue Kum found that over an eight-year period, future-prepared firms outperformed the average with 33% higher profitability and 200% higher growth. If there is a causal link, then whichever direction it runs, the finding is positive for futurists.
April 6, 2023
GPT-4. Commotion and controversy
On the day that a London Futurists Podcast episode dedicated wholly to OpenAI’s GPT-4 system dropped, the Future of Life Institute published an open letter about the underlying technology. Signed by Stuart Russell, Max Tegmark, Elon Musk, Jaan Tallinn, and hundreds of other prominent AI researchers and commentators, the letter called for a pause in the development of large language models like OpenAI’s GPT-4 and Google’s Bard.
It was surprising to see the name of Sam Altman, OpenAI’s CEO, on the list, and indeed it soon disappeared again. At the time of writing, there were no senior signatories from either of the two AGI labs, OpenAI and DeepMind, or from any of the AI-driven tech giants: Google, Meta, Microsoft, Amazon, and Apple. There was also no representation from the Chinese tech giants Baidu, Alibaba, and Tencent.
Whatever you think of the letter’s prospects for success, and even the desirability of its objective, it was a powerful demonstration of the excitement and concern being generated in AI circles about GPT-4 and the other large language models. The excitement about GPT-4 is not overdone. The model is a significant advance on any previous natural language processing system.
Two big bangs
The last time there was this much excitement about AI was in 2016, when DeepMind’s AlphaGo system beat Lee Sedol, the world’s best player of the board game Go. That achievement was the result of the Big Bang in AI which occurred four years earlier, in 2012, when a neural network developed by Geoff Hinton and colleagues won the ImageNet competition. That victory started the deep learning revolution, and for the first time ever, AI began to make serious money. Interestingly, Hinton’s colleagues included Ilya Sutskever, who went on to help found OpenAI and become its Chief Scientist.
GPT is the result of what may come to be called the second big bang in AI, which happened in 2017, when Google researchers published a paper called “Attention is all you need”. It described a new type of deep learning architecture called the Transformer. Transformers enable systems like Dall-E and Midjourney, which generate photorealistic images from short instructions in natural language. They also enable natural language systems like GPT-4.
More attention needed
This renewed public focus on AI is a good thing. The impact of AI on every aspect of life over the coming years and decades will be so profound that the more people are thinking about it in advance, the better. Even with GPT-4 hitting headlines, and obsessing many technophiles, AI still isn’t getting the wider public attention it really deserves. It still gets beaten by populist attacks on transgender people and other somewhat spurious stories, but we must be grateful for whatever progress we can get.
Peekaboo
The operation of Transformers is often summarised as token prediction. They are trained on vast corpora of text – all of Wikipedia, and millions of copyright-free books, for instance. They ingest this text and select tokens (words, or parts of words) to be “masked”, or hidden. Based on their model of how language works, they guess what each masked token is, and according to whether the guess was right or wrong, they adjust and update the model. By doing this billions of times, Transformers get very good at predicting the next word in a sentence. To avoid generating repetitive text, they make some controlled random tweaks to the probabilities. A system tuned to make more of these tweaks is said to have a higher “temperature”.
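The effect of temperature can be sketched in a few lines of Python. This is a simplified illustration, not OpenAI’s implementation: the logits below are invented scores for three hypothetical candidate tokens.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw scores into probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate next tokens
logits = [4.0, 2.0, 0.5]

cold = softmax_with_temperature(logits, temperature=0.5)  # sharper distribution
warm = softmax_with_temperature(logits, temperature=2.0)  # flatter distribution
```

At a low temperature the favourite token dominates and the output becomes predictable; at a high temperature the distribution flattens, so sampling from it produces more varied (and more error-prone) text.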
Critically, this masking process does not require the training data to be labelled. The systems are engaged in self-supervised training. This is unlike the deep learning systems trained on massive datasets like ImageNet, where each image has been labelled by humans.
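A toy sketch of that self-supervision, with whitespace tokenisation and a literal "[MASK]" symbol standing in for the real machinery (both are simplifications for illustration): the training target comes straight from the text itself, with no human labelling.

```python
import random

def make_masked_example(tokens, rng):
    """Hide one token; the hidden token itself becomes the training target."""
    i = rng.randrange(len(tokens))
    masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
    return masked, tokens[i]

rng = random.Random(0)  # fixed seed so the example is repeatable
sentence = "the cat sat on the mat".split()
masked, target = make_masked_example(sentence, rng)
# 'masked' is the sentence with one token replaced by [MASK];
# 'target' is the hidden token the model must learn to predict.
```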
There is a human component in the training of Transformers, though: Reinforcement Learning from Human Feedback, or RLHF. After the masking training is complete, the system’s responses to prompts are evaluated by humans for a period, and the evaluations are fed back into the system in order to minimise error and bias.
GPT-3, 3.5, 4
GPT stands for generative pre-trained transformer. GPT-3 was launched in 2020 and boasted what was then an unheard-of 175 billion parameters (loosely analogous to the synapses in a human brain). GPT-4 was released on 14 March 2023, and its number of parameters has not been disclosed. OpenAI has been criticised for reversing its policy of publishing as much as possible about its systems. It replies, not unreasonably, that if these models can cause harm in the wrong hands, it would be silly to make it easier for the bad guys to replicate them.
What is known is that the number of tokens that GPT-4 can handle – 32,000 – is much larger than the 4,100 that GPT-3 could manage. Among other things, this enables it to work with longer texts.
ChatGPT was a chatbot based on an intermediate system, GPT-3.5. It was released in November 2022, and within two months it had 100m users – the fastest adoption of any app or platform to date.
OpenAI’s short but turbulent history
The story of OpenAI is as dramatic as its products are impressive. The company was formed in San Francisco in 2015 by Elon Musk, Sam Altman, and friends, who pledged $1 billion of their own money to get the non-profit started. Musk stood down in 2018; the reason given at the time was a potential conflict of interest with his car company Tesla, which is also a significant developer of AI technology.
More recently there are reports that he left because he feared that OpenAI was failing to compete with the other leading AGI lab, DeepMind. (I call these two labs AGI labs because they are both explicitly targeting the development of artificial general intelligence, an AI with all the cognitive abilities of an adult human.) He offered to lead the company himself, and invest a further $1bn of his own money. When his leadership bid was declined, he left, taking the $1bn with him. OpenAI was unable to pay for the AI talent it needed, and its management decided it had to become, in part, a for-profit organisation.
Microsoft is very interested in GPT technology. It contributed $2bn before the launch of ChatGPT, and has agreed to invest a further $10bn since. OpenAI’s parent company is still a non-profit, and the returns to investors in the revenue-generating subsidiary are capped at 100x. Sam Altman, it turns out, has zero financial interest in the company. He doesn’t mind: he is already a rich man.
Musk has become a trenchant critic of OpenAI, especially on Twitter. Altman has expressed continued deep respect for his former business partner, but has also observed that he sometimes behaves like a jerk.
GPT-4’s advances
OpenAI’s latest system makes fewer mistakes than its predecessors – in the jargon, it hallucinates less. It is better at passing exams too: it passed the US bar exam with a score in the top 10% of candidates, whereas GPT-3.5 only managed the bottom 10%. This doesn’t tell us whether the system could actually be an effective lawyer, but it is impressive.
Unlike earlier systems, GPT-4 also seems to have learned simple maths. And it often appears to be doing something indistinguishable from reasoning. This was not expected from what are essentially pattern recognition systems. It has even led a group of Microsoft employees to publish an article claiming that GPT-4 shows the first sparks of AGI, although that has been characterised as hype.
Revised timelines and threat estimates
GPT-4 is impressive enough to be causing well-informed people to revise their timelines for the arrival of AGI, and of lasting widespread unemployment. Geoff Hinton, often called the godfather of deep learning, remarked in a recent interview that he used to think AGI was at least 20 years away, and very possibly 50. Now he thinks it might be less than 20 years. He also (for the first time, as far as I know) said that it is “not inconceivable” that advanced AI could cause human extinction.
March 30, 2023
Benign superintelligence, and how to get there. With Ben Goertzel
During a keynote talk that Ben Goertzel gave recently, the robot that accompanied him on stage went mute. The fault lay not with the robot, but with a human who accidentally kicked a cable out of a socket backstage. Goertzel quips that in the future, the old warning against working with children and animals may be extended to a caution against working with any humans at all.
GPT-4 heralds an enormous productivity boost, and a wrenching transformation of work
ChatGPT woke the world up to the importance of artificial intelligence last year. The media has not been so full of talk about AI since DeepMind’s AlphaGo system beat the world’s best Go player in 2016. Launched at the end of November, ChatGPT wasn’t the best AI in the world, as the prominent AI researcher Yann LeCun pointed out. But it was the first time the general public got to play with such a...
March 25, 2023
What does a Good Future look like? With futurist keynote speaker Gerd Leonhard
Polls suggest that most Millennials think the future will be terrible, or at least worse than the past, not least due to climate change and war. Gerd Leonhard fears that such a negative outlook can create a negative future, and he is exploring how to create what he calls The Good Future. By this he does not mean that everyone is rich, but that everyone’s fundamental needs are fulfilled: health...
March 14, 2023
Why you should be getting ready now for a world with quantum computers. With Ignacio Cirac
Any organisation which handles sensitive data should start preparing now for the arrival of quantum computing. The technology is unlikely to be ready for widespread use for years – maybe another couple of decades – but it has been known for some time that when it is, it will crack the encryption used by governments and armies, banks and hospitals. Messages sent today will become insecure overnight.
ChatGPT raises old and new concerns about AI. A conversation with Francesca Rossi
The latest generative AI models are sharpening up some long-standing debates about privacy, bias and transparency of AI systems. These issues are often called AI Ethics, or Responsible AI. Francesca Rossi points out that among other things, previous systems were trained on data sets which were more heavily curated, so attempts could at least be made to minimise the amount of bias they displayed.
March 8, 2023
ChatGPT has woken up the House of Commons. A conversation with Tim Clement-Jones
Some people have biographical summaries which wear you out just by reading them. Lord Clement-Jones is one of those people. He has been a very successful lawyer, holding senior positions at ITV and Kingfisher among others, and later becoming London Managing Partner of law firm DLA Piper. He is better known as a senior politician, becoming a life peer in 1998. He has been the Liberal Democrats’...
China and AI – fearsome dragon or paper tiger?
Advanced AI is currently pretty much a duopoly between the USA and China. The US is the clear leader, thanks largely to its tech giants – Google, Meta, Microsoft, Amazon, and Apple. China also has a fistful of tech giants – Baidu, Alibaba, and Tencent are the ones usually listed, but the Chinese government has also taken a strong interest in AI since DeepMind’s AlphaGo system beat the world’s...
February 22, 2023
Peter James: best-selling author and transhumanist
Peter James is one of the world’s most successful crime writers. His Roy Grace series, about a detective in Brighton, England, has produced 19 consecutive Sunday Times Number One bestsellers. His legions of devoted fans await each new release eagerly, but most of them probably don’t know that James is also a transhumanist. James has written 36 novels altogether, in several genres...