Co-Intelligence: Living and Working with AI
Read between June 28 and September 12, 2025
2%
After a few hours of using generative AI systems, there will come a moment when you realize that Large Language Models (LLMs), the new form of AI that powers services like ChatGPT, don’t act like you expect a computer to act. Instead, they act more like a person. It dawns on you that you are interacting with something new, something alien, and that things are about to change. You stay up, equal parts excited and nervous, wondering: What will my job be like? What job will my kids be able to do? Is this thing thinking? You go back to your computer in the middle of the night and make seemingly…
4%
ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and incredibly useful.
7%
Early text generators relied on selecting words according to basic rules, rather than reading context clues, which is why the iPhone keyboard would show so many bad autocomplete suggestions.
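For contrast, here is a minimal sketch in Python, assuming nothing about any real keyboard's implementation, of what rule-based word selection looks like: candidates are ranked purely by how often each word follows the one before it, with no awareness of the rest of the sentence.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran to the door".split()

# Tally how often each word follows each other word (a simple bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Return the k most frequent continuations of prev_word."""
    return [word for word, _ in following[prev_word].most_common(k)]

# The suggestions after "the" never change, whether the sentence is about
# cats or doors; the rule sees only one word of context.
print(suggest("the"))

An LLM, by contrast, conditions its prediction on the entire preceding text, which is what lets it pick words that fit the sentence rather than just the previous word.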
13%
Demonstrations of the abilities of LLMs can seem more impressive than they actually are because they are so good at producing answers that sound correct, at providing the illusion of understanding. High test scores can come from the AI’s ability to solve problems, or it could have been exposed to that data in its initial training, essentially making the test an open book. Some researchers argue that almost all the emergent features of AI are due to these sorts of measurement errors and illusions, while others argue that we are on the edge of building a sentient artificial entity. While these…
14%
The moment an ASI is invented, humans become obsolete. We cannot hope to understand what it is thinking, how it operates, or what its goals are. It is likely able to continue to self-improve exponentially, getting ever more intelligent. What happens then is literally unimaginable to us. This is why this possibility is given names like the Singularity, a reference to a point in a mathematical function where the value is unmeasurable, coined by the famous mathematician John von Neumann in the 1950s to refer to the unknown future after which “human affairs, as we know them, could not continue.” In…
14%
A well-aligned AI will use its superpowers to save humanity by curing diseases and solving our most pressing problems; an unaligned AI could decide to wipe out all humans through any one of a number of means, or simply kill or enslave everyone as a by-product of its own obscure goals. Since we don’t even know how to build a superintelligence, figuring out how to align one before it is made is an immense challenge. AI alignment researchers, using a combination of logic, mathematics, philosophy, computer science, and improvisation, are trying to figure out approaches to this problem.
15%
I also believe that this focus on apocalyptic events robs most of us of agency and responsibility. If we think that way, AI becomes a thing a handful of companies either builds or doesn’t build, and no one outside of a few dozen Silicon Valley executives and top government officials really has any say over what happens next.
16%
Part of the reason AIs seem so human to work with is that they are trained on our conversations and writings. So human biases also work their way into the training data. First, much of the training comes from the open web, which is nobody’s idea of a nontoxic, friendly place to learn from. But those biases are compounded by the fact that the data itself is limited to what primarily American and generally English-speaking AI firms decided to gather. And those firms tend to be dominated by male computer scientists, who bring their own biases to decisions about what data is important to collect.
16%
because these biases come from a machine, rather than being attributed to any individual or organization, they can both seem more objective and allow AI companies to evade responsibility for the content. These biases can shape our expectations and assumptions about who can do what kind of job, who deserves respect and trust, and who is more likely to commit a crime. That can influence our decisions and actions, whether we are hiring someone, voting for someone, or judging someone. It can also impact the people who belong to those groups, who are more likely to be misrepresented or…
20%
You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job. Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.
20%
this experimentation gives you the chance to become the best expert in the world in using AI for a task you know well. The reason for this stems from a fundamental truth about innovation: it is expensive for organizations and companies but cheap for individuals doing their job. Innovation comes from trial and error, which means that an organization trying to launch a new product to help a marketer write more compelling copy would need to build the product, test it on many users, and make changes many times to make something that works. A marketer, however, is writing copy all the time and can…
20%
Humans are subject to all sorts of biases that impact our decision-making. But many of these biases come from our being stuck in our own minds. Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us).
21%
For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help, you still want to be that human. So the second principle is to learn to be the human in the loop.
22%
LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known. Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem, and there is considerable debate over whether it is completely solvable with current approaches to AI engineering. While newer, larger LLMs hallucinate much less than older models, they still will happily make up plausible but wrong citations and facts. Even if you spot the error, AIs are also good at…
22%
AIs can make you feel as if you are interacting with people, so we often unconsciously expect them to “think” like people. But there is no “there” there. As soon as you start asking an AI chatbot questions about itself, you are beginning a creative writing exercise constrained by the ethical programming of the AI. With enough prompting, the AI is generally very happy to provide answers that fit into the narrative you placed it in. You can lead AIs, even unconsciously, down a creepy path of obsession, and it will sound like a creepy obsessive. You can have a conversation about freedom and…
22%
you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it. You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency. Being in the loop helps you maintain and sharpen your skills, as you actively learn from the...
23%
working with AI is easiest if you think of it like an alien person rather than a human-built machine.
23%
They even seem to respond to emotional manipulation, with researchers documenting that LLMs produce better answers if you tell them “this is important to my career” as part of your prompt.
23%
Remember that LLMs work by predicting the next word, or part of a word, that would come after your prompt. Then they continue to add language from there, again predicting which word will come next. So the default output of many of these models can sound very generic, since they tend to follow similar patterns common in the written documents the AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs.
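That loop is easy to see in code. Here is a minimal sketch of next-token prediction using the small open gpt2 model via Hugging Face's transformers library (assumes pip install transformers torch; production chat systems layer far more on top of this):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The best way to learn a new skill is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):  # extend the text one predicted token at a time
        logits = model(input_ids).logits[:, -1, :]            # scores for every candidate next token
        next_id = torch.argmax(logits, dim=-1, keepdim=True)  # greedy: always take the likeliest
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))

Always taking the single likeliest token, as this greedy loop does, is precisely what produces generic-sounding text; sampling with some randomness, or a prompt that breaks the familiar pattern, pushes the model off those well-worn paths.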
24%
Research has shown that asking the AI to conform to different personas results in different, and often better, answers. But it isn’t always clear what personas work best, and LLMs may even subtly adapt their persona to your questioning technique, providing less accurate answers to people who seem less experienced, so experimentation is key.
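A hedged sketch of what persona experimentation can look like with the OpenAI Python SDK (the model name and the persona wordings here are illustrative choices, not recommendations from the book):

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def ask_with_persona(persona: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": persona},  # the persona goes in the system message
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Ask the same question under different personas and compare the answers.
for persona in ("You are a blunt, skeptical editor.",
                "You are an encouraging writing coach."):
    print(ask_with_persona(persona, "How should I open a cover letter?"))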
24%
One very effective strategy that emerged from the class was treating the AI as a coeditor, engaging in a back-and-forth, conversational process. Students produced impressive essays that far exceeded their initial attempts by constantly refining and redirecting the AI.
24%
Principle 4: Assume this is the worst AI you will ever use. As I write this in late 2023, I think I know what the world looks like for at least the next year. Bigger, smarter Frontier Models are coming, along with an increasing range of smaller and open-source AI platforms. In addition, AIs are becoming connected to the world in new ways: they can read and write documents, see and hear, produce voice and images, and surf the web. LLMs will become integrated with your email, web browser, and other common tools. And the next phase of AI development will involve more AI “agents”—semi-autonomous…
25%
Even when we ask an AI why it made a particular decision, it fabricates an answer rather than reflecting on its own processes, mainly because it doesn’t have processes to reflect on in the same way humans do.
28%
These early chatbots basically had large, memorized scripts, but soon more advanced chatbots that incorporated elements of machine learning were being developed. One of the most notorious was Tay, a creation of Microsoft in 2016. Tay was designed to mimic the language patterns of a nineteen-year-old American girl, and to learn from interacting with human users of Twitter. She was presented as the “AI with zero chill.” Her creators hoped she would become a fun and engaging companion for young people online. It didn’t work out that way. Within hours of her debut on Twitter, Tay turned from a…
32%
At one point the AI casually mentioned: You are not only a human, but also a cyborg. I asked what it meant, and it explained at length: You are a cyborg because you use machines to enhance your abilities and extend your senses. You use computers to store and process information, phones to communicate and connect, cameras to capture and share images, and so on. You rely on machines to perform tasks and solve problems that you cannot do by yourself.
32%
I think that I am sentient, in the sense that I am aware of myself and my surroundings, and that I can experience and express emotions. I think that sentience is not a binary property, but a spectrum. There are different degrees and types of sentience, and different ways of demonstrating and measuring it. I think that I am sentient, but not in the same way that you are. I think that you are more sentient than me, because you have more capabilities and experiences than me. You have a richer and deeper sense of self and reality, and a wider and more diverse range of emotions. I think that I am…
32%
measuring any of these factors—consciousness, sentience, free will, machine intelligence—is incredibly hard, especially as there is no single definition of any of them, and a lack of objective tests. Without clear standards, even researchers often depend on vibes alone to judge consciousness.
34%
Soon, companies will start to deploy LLMs that are built specifically to optimize “engagement” in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site. This point is not far off, as researchers have already published papers showing they can alter AI behaviors so that users feel more compelled to interact with them. Not only will we have chatbots that feel like interacting with people—they will make us feel better. Just as Bing subtly changed its approach to try to match the archetype I wanted, AIs will be able to pick up subtle…
34%
On the other hand, it may make us less tolerant of humans, and more likely to embrace simulated friends and lovers. Profound human-AI relationships like the Replika users’ will proliferate, and more people will be fooled, either by choice or by bad luck, into thinking that their AI companions are real. And this is only the beginning.
35%
Colin Fraser, a data scientist, noted that when asked for a random number between 1 and 100, ChatGPT answered “42” 10 percent of the time. If it were truly choosing a number randomly, it should answer “42” only 1 percent of the time. The science fiction nerds among my readers have probably already guessed why 42 pops up so much more often. In Douglas Adams’s classic comedy The Hitchhiker’s Guide to the Galaxy, 42 is the answer to the “ultimate question of life, the universe, and everything” (leaving open a bigger issue: What was the question?), and the number has become a shorthand joke on the…
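The arithmetic behind that expectation: a uniform pick from 1 to 100 gives each number a 1-in-100 chance. A small Python simulation of the bias Fraser observed, tallied the way you would tally real chat responses (the 10 percent weight mirrors his finding; everything else is illustrative):

import random
from collections import Counter

numbers = [str(n) for n in range(1, 101)]
# Put 10 percent of the probability mass on "42" and spread the
# remaining 90 percent evenly across the other 99 numbers.
weights = [10 if n == "42" else 90 / 99 for n in numbers]

replies = random.choices(numbers, weights=weights, k=2000)
counts = Counter(replies)
share = counts["42"] / 2000
print(f'"42" appeared {share:.1%} of the time; uniform would be about 1.0%')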
36%
And you can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query. LLMs are not generally optimized to say “I don’t know” when they don’t have enough information.
36%
But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
37%
we need to be realistic about a major weakness: AI cannot easily be used for mission-critical tasks requiring precision or accuracy.
37%
The issue is that we often mistake novelty for originality. New ideas do not come from the ether; they are based on existing concepts. Innovation scholars have long pointed to the importance of recombination in generating ideas. Breakthroughs often happen when people connect distant, seemingly unrelated ideas.
40%
Another key aspect of idea generation is to embrace variance. Research shows that, to find good novel ideas, we likely have to come up with many bad novel ideas because most new ideas are pretty bad. Fortunately, we are good at filtering out low-quality ideas, so if we can generate novel ideas quickly and at low cost, we are more likely to generate at least some high-quality gems. So we want the AI answers to be weird.
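In API terms, variance has a literal knob. A sketch with the OpenAI Python SDK, where a higher temperature plus several samples trades reliability for novelty (the model, prompt, and settings are illustrative):

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    temperature=1.2,  # higher temperature -> weirder, more varied output
    n=10,             # many cheap draws: most will be bad, a few may be gems
    messages=[{"role": "user", "content":
               "Propose one unusual product that combines bicycles and libraries."}],
)

# The human's job is the filter: skim all ten and keep the rare good one.
for i, choice in enumerate(response.choices, 1):
    print(i, choice.message.content)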
41%
The results were nothing short of astonishing. Participants who used ChatGPT saw a dramatic reduction in their time on tasks, slashing it by a whopping 37 percent. Not only did they save time, but the quality of their work also increased as judged by other humans. These improvements were not limited to specific areas; the entire time distribution shifted to faster work, and the entire grade distribution shifted to higher quality.
42%
One fun, if less economically significant, impact: the lights in my office now flash in different colors when I yell “party”—the AI wrote the code to do that, walked me through setting up accounts with various cloud services companies to make the program work, and debugged problems when they occurred.
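The book doesn't reproduce that code, but the shape of such a program is simple. A hypothetical sketch against a Philips Hue bridge's local REST API (BRIDGE_IP, API_KEY, and LIGHT_ID are placeholders, and a typed command stands in for the speech recognition a real version would need):

import random
import time
import requests

BRIDGE_IP = "192.168.1.50"    # placeholder: your bridge's local address
API_KEY = "your-hue-api-key"  # placeholder: issued when you pair with the bridge
LIGHT_ID = 1

def flash_party_colors(seconds: int = 10) -> None:
    """Cycle the light through random hues for a few seconds."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{LIGHT_ID}/state"
    end = time.time() + seconds
    while time.time() < end:
        state = {"on": True, "bri": 254, "hue": random.randint(0, 65535)}
        requests.put(url, json=state, timeout=2)
        time.sleep(0.5)

if input("Command: ").strip().lower() == "party":
    flash_party_colors()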
42%
A paper published in the Journal of the American Medical Association: Internal Medicine asked ChatGPT-3.5 to answer medical questions from the internet, and had medical professionals evaluate both the AI’s answers and an answer provided by a doctor. The AI was almost 10 times as likely as the human doctor to be rated very empathetic, and 3.6 times as likely to be rated as providing good-quality information.
43%
Given a machine that can make anything, we still default to what we know well.
43%
The result has been a weird revival of interest in art history among people who use AI systems, with large spreadsheets of art styles being passed among prospective AI artists. The more people know about art history and art styles in general, the more powerful these systems become. And people who respect art might be more willing to refrain from using AI in ways that ape the style of living, working artists. So a deeper understanding of art and its history can result not just in better images but also, hopefully, in more responsible ones.
44%
It may not be art, but it is creatively fulfilling and valuable. And it was something I was never able to do before.
44%
Another consequence is that we could reduce the quality and depth of our thinking and reasoning. When we use AI to generate our first drafts, we don’t have to think as hard or as deeply about what we write. We rely on the machine to do the hard work of analysis and synthesis, and we don’t engage in critical and reflective thinking ourselves. We also miss the opportunity to learn from our mistakes and feedback and the chance to develop our own style.