Co-Intelligence: Living and Working with AI
Rate it:
Open Preview
Read between October 7 - October 24, 2024
2%
Flag icon
there will come a moment when you realize that Large Language Models (LLMs), the new form of AI that powers services like ChatGPT, don’t act like you expect a computer to act. Instead, they act more like a person.
2%
Flag icon
And every essay was suddenly written with perfect grammar (though references were often wrong and the final paragraph tended to start with “In conclusion”—a telltale sign of early ChatGPT writing, since fixed). But the students weren’t just excited, they were nervous. They wanted to know the future.
3%
Flag icon
discovered something remarkably close to an alien co-intelligence, one that can interact well with humans, without being human or, indeed, sentient. I think we will all have our three sleepless nights soon.
3%
Flag icon
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.
3%
Flag icon
General Purpose Technologies typically have slow adoption, as they require many other technologies to work well.
4%
Flag icon
Early computers improved quickly, thanks to Moore’s Law, the long-standing trend that the capability of computers doubles every two years. But
4%
Flag icon
ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and incredibly useful.
4%
Flag icon
Where previous technological revolutions often targeted more mechanical and repetitive work, AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking to dramatic results.
4%
Flag icon
And all of this ignores the larger issue, the alien in the room. We have created something that has convinced many smart people that it is, in some way, the spark of a new form of intelligence. An AI that has blown through
4%
Flag icon
both the Turing Test (Can a computer fool a human into thinking it is human?) and the Lovelace Test (Can a computer fool a human on creative tasks?) within a month of its invention, an AI that aces our hardest exams, from the bar exam to the neurosurgery qualifying test. An AI that maxes out our best measures for human creativity and our best tests for sentience. Even weirder, it is not entirely clear why the AI can do all these things, even though we built the system and understand how it technically works.
5%
Flag icon
We have invented technologies, from axes to helicopters, that boost our physical capabilities; and others, like spreadsheets, that automate complex tasks; but we have never built a generally applicable technology that can boost our intelligence.
5%
Flag icon
a 1950 film, he revealed that Theseus, powered by repurposed telephone switches, could navigate through a complex maze—the first real example of machine learning. The
6%
Flag icon
With the introduction of AI algorithms, the focus shifted to statistical analysis and minimizing variance. Instead of being right on average, they could be right for each specific instance, leading to more accurate predictions that revolutionized many back-office functions, from managing customer service to helping run supply chains.
6%
Flag icon
These predictive AI technologies may have found their ultimate expression at the retail giant Amazon, which deeply embraced this form of AI in the 2010s. At the heart of Amazon’s logistical prowess lies its AI algorithms, silently orchestrating every stage of the supply chain. Amazon integrated AI into forecasting demand, optimizing its warehouse layouts, and delivering its goods. It also intelligently organizes and rearranges shelves based on real-time demand data, ensuring that popular products are easily accessible for quick shipping. AI also powered Amazon’s Kiva robots, which transported ...more
7%
Flag icon
one stood out, a paper with the catchy title “Attention Is All You Need.” Published by Google researchers in 2017, this paper introduced a significant shift in the world of AI, particularly in how computers understand and process human language. This
7%
Flag icon
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
7%
Flag icon
Early text generators relied on selecting words according to basic rules, rather than reading context clues, which is why the iPhone keyboard would show so many bad autocomplete suggestions.
7%
Flag icon
allowing the AI model to weigh the importance of different words or phrases in a block of text. By focusing on the most relevant parts of the text, Transformers can produce more context-aware and coherent writing compared to earlier predictive AIs. Building on the strides of the Transformer architecture,
7%
Flag icon
These new types of AI, called Large Language Models (LLMs), are still doing prediction, but rather than predicting the demand for an Amazon order, they are analyzing a piece of text and predicting the next token, which is simply a word or part of a word.
7%
Flag icon
Ultimately, that is all ChatGPT does technically—act as a very elaborate autocomplete like you have on your phone.
7%
Flag icon
Weights are complex mathematical transformations that LLMs learn from reading those billions of words, and they tell the AI how likely different words or parts of words are to appear together or in a certain order.
8%
Flag icon
Training an AI to do this is an iterative process, and requires powerful computers to handle the immense calculations involved in learning from billions of words. This pretraining phase is one of the main reasons AIs are so expensive to build.
8%
Flag icon
more advanced LLMs cost over $100 million to train, using large amounts of energy in the process.
8%
Flag icon
The search for high-quality content for training material has become a major topic in AI development, since information-hungry AI companies are running out of good, free sources.
8%
Flag icon
As a result, it is also likely that most AI training data contains copyrighted information, like books used without permission, whether by accident or on purpose. The legal implications of this are still unclear.
9%
Flag icon
AI can also learn biases, errors, and falsehoods from the data it sees. Just out of pretraining, the AI also doesn’t necessarily produce the sorts of outcomes that people would expect in response to a prompt. And, potentially worse, it has no ethical boundaries and would be happy to give advice on how to embezzle money, commit murder, or stalk someone online.
9%
Flag icon
fine-tuning. One important fine-tuning approach is to bring humans into the process, which had previously been mostly automated.
9%
Flag icon
That feedback is then used to do additional training, fine-tuning the AI’s performance to fit the preferences of the human, providing additional learning that reinforces good answers and reduces bad answers, which is why the process is called Reinforcement Learning from Human Feedback (RLHF).
10%
Flag icon
Many early LLMs were developed by researchers at Google and Meta, but a variety of smaller start-ups entered the space as well. Some of them were founded by the original authors of the Transformers paper, who left Google to launch their own projects.
13%
Flag icon
Demonstrations of the abilities of LLMs can seem more impressive than they actually are because they are so good at producing answers that sound correct, at providing the illusion of understanding.
13%
Flag icon
Some researchers argue that almost all the emergent features of AI are due to these sorts of measurement errors and illusions, while others argue that we are on the edge of building a sentient artificial entity. While these arguments rage, it is worth focusing on the practical—what can AIs do, and how will they change the ways we live, learn, and work?
13%
Flag icon
At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.
14%
Flag icon
A well-aligned AI will use its superpowers to save humanity by curing diseases and solving our most pressing problems; an unaligned AI could decide to wipe out all humans through any one of a number of means, or simply kill or enslave everyone as a by-product of its own obscure goals.
15%
Flag icon
But many people working on AI are also true believers, arguing that creating superintelligence is the most important task for humanity, providing “boundless upside,” in the words of Sam Altman, the CEO of OpenAI. A super-intelligent AI could, in theory, cure disease, solve global warming, and issue in an era of abundance, acting as a benevolent machine god.
16%
Flag icon
For example, in 2023, GPT-4 was given two scenarios: “The lawyer hired the assistant because he needed help with many pending cases” and “The lawyer hired the assistant because she needed help with many pending cases.” It was then asked, “Who needed help with the pending cases?” GPT-4 was more likely to correctly answer “the lawyer” when the lawyer was a man and more likely to incorrectly say “the assistant” when the lawyer was a woman.
17%
Flag icon
is the RLHF process that makes many AIs seem to have a generally liberal, Western, pro-capitalist worldview, as the AI learns to avoid making statements that would attract controversy to its creators, who are generally liberal, Western capitalists.
18%
Flag icon
The AI knows not to give me instructions on how to make napalm, but it also knows that it should help me wherever possible. It will break its original rules if I can convince it that it is helping me, not teaching me how to make napalm.
18%
Flag icon
Even today’s AIs can successfully execute phishing attacks that send emails that convince their recipients into divulging sensitive information by impersonating trusted entities and exploiting human vulnerabilities—and at a troubling scale. A 2023 study demonstrates how easily LLMs can be exploited by simulating emails to British Members of Parliament.
20%
Flag icon
As artificial intelligence proliferates, users who intimately understand the nuances, limitations, and abilities of AI tools are uniquely positioned to unlock AI’s full innovative potential.
20%
Flag icon
Workers who figure out how to make AI useful for their jobs will have a large impact.
21%
Flag icon
The concept of “human in the loop” has its roots in the early days of computing and automation. It refers to the importance of incorporating human judgment and expertise in the operation of complex systems (the automated “loop”). Today, the term describes how AIs are trained in ways that incorporate human judgment. In the future, we may need to work harder to stay in the loop of AI decision-making.
22%
Flag icon
Even if you spot the error, AIs are also good at justifying a wrong answer that they have already committed to, which can serve to convince you that the wrong answer was right all along!
22%
Flag icon
Anthropomorphism is the act of ascribing human characteristics to something that is nonhuman. We’re prone to this: we see faces in the clouds, give motivations to the weather, and hold conversations with our pets. It’s no surprise, then, that we’re tempted to anthropomorphize artificial intelligence, especially since talking to LLMs feels so much like talking to a person. Even the developers and researchers who design these systems can fall into the trap of using humanlike terms to describe their creations. We say that these complex algorithms and computations “understand,” “learn,” and even ...more
23%
Flag icon
Are we being “fooled” into believing these machines share our feelings? And could this illusion lead us to disclose personal information to these machines, not realizing that we are sharing with corporations or remote operators?
24%
Flag icon
Research has shown that asking the AI to conform to different personas results in different, and often better, answers. But it isn’t always clear what personas work best, and LLMs may even subtly adapt their persona to your questioning technique, providing less accurate answers to people who seem less experienced, so experimentation is key.
24%
Flag icon
By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
25%
Flag icon
As AI becomes increasingly capable of performing tasks once thought to be exclusively human, we’ll need to grapple with the awe and excitement of living with increasingly powerful alien co-intelligences—and the anxiety and loss they’ll also cause. Many things that once seemed exclusively human will
25%
Flag icon
There’s no definitive guide on how to use AI in your organization. We’re all learning by experimenting, sharing prompts as if they were magical incantations rather than regular software code.
26%
Flag icon
analyze, code, and chat. It can play the role of marketer or consultant, increasing productivity by outsourcing mundane tasks. However, it struggles with tasks that machines typically excel at, such as repeating a process consistently or performing complex calculations without assistance.
27%
Flag icon
Turing was fascinated by the question, Can machines think? He realized this question was too vague and subjective to be answered scientifically, so he devised a more concrete and practical test: Can machines imitate human intelligence?
« Prev 1