Co-Intelligence: Living and Working with AI
Kindle Notes & Highlights
Read between April 14 - April 16, 2025
3%
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.
4%
Where previous technological revolutions often targeted more mechanical and repetitive work, AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking, with dramatic results.
4%
Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing.
5%
We have invented technologies, from axes to helicopters, that boost our physical capabilities; and others, like spreadsheets, that automate complex tasks; but we have never built a generally applicable technology that can boost our intelligence.
7%
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
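The "attention mechanism" the highlight describes can be sketched as scaled dot-product attention, the core operation inside the Transformer. This toy NumPy version is only illustrative (no multiple heads, no learned projections): each token's query scores every other token's key, and the resulting weights decide how much of each value flows into the output, which is how the model "concentrates on the most relevant parts of a text."

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; the softmaxed scores weight
    how much of each value contributes to that token's output."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # relevance of every token to every other
    weights = softmax(scores, axis=-1)  # each row is an attention distribution
    return weights @ V, weights

# Toy example: 3 tokens, each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` sums to 1, so every token's output is a weighted blend of the values it "attends" to most.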
7%
To teach AI how to understand and generate humanlike writing, it is trained on a massive amount of text from various sources, such as websites, books, and other digital documents. This is called pretraining, and unlike earlier forms of AI, it is unsupervised, which means the AI doesn’t need carefully labeled data. Instead, by analyzing these examples, AI learns to recognize patterns, structures, and context in human language.
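The "unsupervised" pretraining idea can be illustrated with a deliberately tiny stand-in: a bigram model that learns next-word statistics from raw, unlabeled text. Real LLMs use deep neural networks over vastly more data, but the training signal is the same in spirit: the text is its own supervision, and prediction of what comes next is the task.

```python
from collections import Counter, defaultdict

# Toy stand-in for pretraining: count which word follows which,
# using nothing but raw text. No human-labeled data is needed.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # the text itself provides the "labels"

def predict_next(word):
    # Predict the most frequently observed continuation.
    return counts[word].most_common(1)[0][0]
```

Here `predict_next("the")` returns `"cat"`, because "cat" followed "the" more often than "mat" did in the corpus.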
9%
That feedback is then used to do additional training, fine-tuning the AI’s performance to fit the preferences of the human, providing additional learning that reinforces good answers and reduces bad answers, which is why the process is called Reinforcement Learning from Human Feedback (RLHF).
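The RLHF loop the highlight describes can be sketched in miniature. In this illustrative (not production) version, human raters compare pairs of answers, a reward score is fit to those preferences with a Bradley-Terry-style update, and the highest-reward answer is what reinforcement would then push the model toward. All names and numbers here are invented for the example.

```python
import math

# Candidate answers and their (initially neutral) reward scores.
answers = ["helpful answer", "rude answer", "vague answer"]
reward = {a: 0.0 for a in answers}

# Simulated human feedback: (preferred, rejected) pairs.
feedback = [
    ("helpful answer", "rude answer"),
    ("helpful answer", "vague answer"),
    ("vague answer", "rude answer"),
]

LR = 0.5
for chosen, rejected in feedback:
    # Probability the current rewards assign to the human's choice;
    # widen the margin when the model under-predicts that choice.
    p = 1 / (1 + math.exp(reward[rejected] - reward[chosen]))
    reward[chosen] += LR * (1 - p)    # reinforce good answers
    reward[rejected] -= LR * (1 - p)  # penalize bad answers

best = max(reward, key=reward.get)  # the behavior RL would amplify
```

After training on the comparisons, `best` is `"helpful answer"`: the feedback has been distilled into a reward signal that prefers it.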
12%
Despite being just predictive models, the frontier AI models, trained on the largest datasets with the most computing power, seem to do things that their programming should not allow—a concept called emergence.
13%
Some researchers argue that almost all the emergent features of AI are due to these sorts of measurement errors and illusions, while others argue that we are on the edge of building a sentient artificial entity.
14%
Since we don’t even know how to build a superintelligence, figuring out how to align one before it is made is an immense challenge. AI alignment researchers, using a combination of logic, mathematics, philosophy, computer science, and improvisation, are trying to figure out approaches to this problem. A lot of research is going into considering how to design AI systems aligned with human values and goals, or at least that do not actively harm them.
14%
Moreover, there is no guarantee that an AI system will keep its original values and goals as it evolves and learns from its environment.
15%
The CEOs of the major AI companies even signed a single-sentence statement in 2023 stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
15%
The truth is that there is a wide variety of potential ethical concerns that also might fit under the broader category of alignment.
16%
The fact that the material used for pretraining represents only an odd slice of human data (often, whatever the AI developers could find and assume was free to use) introduces another set of risks: bias.
16%
So human biases also work their way into the training data.
16%
The result gives AIs a skewed picture of the world, as their training data is far from representing the diversity of the population of the internet, let alone the planet.
16%
AI companies have been trying to address this bias in a number of ways, with differing levels of urgency.
17%
In trying to get AIs to act ethically, these companies pushed the ethical boundaries with their own contract workers.
19%
Even absent ill intent, the very characteristics enabling beneficial applications also open the door to harm.
19%
Aligning an AI requires not just stopping a potential alien god but also considering these other impacts and the desire to build an AI that reflects humanity. Therefore, the alignment issue is not just something that AI companies can address on their own, though they obviously need to play a role. They have financial incentives to continue AI development, and far fewer incentives to make sure those AIs are well aligned, unbiased, and controllable.
19%
Instead, the path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society. We need agreed-upon norms and standards for AI’s ethical development and use, shaped through an inclusive process representing diverse voices. Companies must make principles like transparency, accountability, and human oversight central to their technology. Researchers need support and incentives to prioritize beneficial AI alongside raw capability gains. And governments need to enact sensible regulations to ensure public interest prevails over profit motives.
19%
Most important, the public needs education on AI so they can pressure for an aligned future as informed citizens.
21%
The key is to keep humans firmly in the loop—to use AI as an assistive tool, not as a crutch.
22%
You can lead an AI, even unconsciously, down a creepy path of obsession, and it will sound like a creepy obsessive.
23%
While anthropomorphism might serve a useful purpose in the short term, it raises ethical questions about deception and emotional manipulation.
23%
Treating AI like a person can create unrealistic expectations, false trust, or unwarranted fear among the public, policymakers, and even researchers themselves.
24%
By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
27%
The point here is that AI can assume different personas rapidly and easily, emphasizing the importance of both developer and user to these models.
29%
Though Siri, Alexa, and Google’s chatbots would all crack an occasional joke, the Tay catastrophe spooked companies from developing chatbots that could pass for people, especially those that used machine learning rather than scripts.
29%
But the disturbing realism of these AI interactions showed that it was no longer really a question of whether an AI could pass the Turing Test—these new Large Language Models were genuinely convincing, and passing the test was just a matter of time—but what AI passing the Turing Test meant for us.
35%
The biggest issue limiting AI is also one of its strengths: its notorious ability to make stuff up, to hallucinate.
35%
It does not care if the words are true, meaningful, or original. It just wants to produce a coherent and plausible text that makes you happy. Hallucinations sound likely and contextually appropriate enough to make it hard to tell lies from the truth.
35%
There is no definitive answer to why LLMs hallucinate, and the contributing factors may differ among models.
36%
But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
40%
Tirelessly generating concepts is something AIs are uniquely good at.
40%
Another key aspect of idea generation is to embrace variance. Research shows that, to find good novel ideas, we likely have to come up with many bad novel ideas because most new ideas are pretty bad. Fortunately, we are good at filtering out low-quality ideas, so if we can generate novel ideas quickly and at low cost, we are more likely to generate at least some high-quality gems. So we want the AI answers to be weird.
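The "embrace variance" strategy described above can be sketched numerically: generate many high-variance candidates cheaply, then filter hard. The `quality` here is just a random number standing in for human judgment of an AI's ideas; the point is only that widening the spread of many cheap draws raises the odds that a few gems land in the tail.

```python
import random

random.seed(42)

def generate_idea():
    # High variance ("weird" answers): most draws are mediocre,
    # but the tails occasionally produce high-quality gems.
    return random.gauss(0.0, 2.0)  # stand-in for an idea's quality score

# Generate many novel ideas quickly and at low cost...
ideas = [generate_idea() for _ in range(200)]

# ...then rely on cheap human filtering to keep only the best few.
gems = sorted(ideas, reverse=True)[:5]
```

With 200 draws, the top five are far above the mean of zero, even though most individual ideas were "pretty bad."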
41%
Upon closer inspection, a surprisingly large amount of work is actually creative work in the form that AI is good at. Situations in which there is no right answer, where invention matters and small errors can be caught by expert users, abound.
41%
Even things that don’t initially appear to be creative can be.
42%
The meaning of art is an old debate and one unlikely to be resolved in this book or any other. And the anxiety that artists face may soon be felt by many other professions as AI overlaps with their jobs. Yet this may turn out to be a reinvigoration of creativity and art rather than its collapse.
43%
AI is trained on vast swaths of humanity’s cultural heritage, so it can often best be wielded by people who have a knowledge of that heritage. To get the AI to do unique things, you need to understand parts of the culture more deeply than everyone else using the same AI systems.
43%
Creating something interesting with AI requires you to invoke these connections to create a novel image.
44%
A lot of work is time-consuming by design. In a world in which the AI gives an instant, pretty good, near universally accessible shortcut, we’ll soon face a crisis of meaning in creative work of all kinds. This is, in part, because we expect creative work to take careful thought and revision, but also because time often operates as a stand-in for work.
46%
Only 36 job categories out of 1,016 had no overlap with AI.
46%
You will notice that these are highly physical jobs, ones in which the ability to move in space is critical. It highlights the fact that AI, for now at least, is disembodied. The boom in artificial intelligence is happening much faster than the evolution of practical robots, but that may change soon.
46%
AI has the potential to automate mundane tasks, freeing us for work that requires uniquely human traits such as creativity and critical thinking—or, possibly, managing and curating the AI’s creative output, as we discussed in the last chapter.
47%
Dell’Acqua developed a mathematical model to explain the trade-off between AI quality and human effort.
47%
They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. He called this “falling asleep at the wheel.”
48%
We may reserve Just Me Tasks for personal or ethical reasons, such as raising our children, making important decisions, or expressing our values.
49%
The next category of tasks is Delegated Tasks. These are tasks that you assign the AI and may carefully check (remember, the AI makes stuff up all the time), but ultimately do not want to spend a lot of time on. This is usually stuff you really don’t want to do because it is of low importance or time-consuming. The perfect Delegated Task is tedious, repetitive, or boring for humans but easy and efficient for AI.
49%
Then there are Automated Tasks, ones you leave completely to the AI and don’t even check on.