Co-Intelligence: Living and Working with AI
Read between August 27 - September 14, 2025
3%
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.
5%
Talking about AI can be confusing, in part because AI has meant so many different things and they all tend to get muddled together. Siri telling you a joke on command. The Terminator crushing a skull. Algorithms predicting credit scores.
7%
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
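The book stays at this conceptual level, but as a rough sketch of what an "attention mechanism" does mechanically, here is a minimal NumPy implementation of scaled dot-product attention, the core operation inside a Transformer. The function name, the toy token vectors, and the single-head simplification are illustrative assumptions, not anything from the text.

import numpy as np

def attention(query, key, value):
    """Minimal scaled dot-product attention: score each query against
    every key, normalize the scores with a softmax, and return a
    weighted average of the values. The weights say how strongly each
    position concentrates on every other position."""
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)                 # query-key similarity
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ value, weights

# Toy self-attention over three 2-dimensional token vectors.
tokens = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
output, weights = attention(tokens, tokens, tokens)
print(weights)  # each row sums to 1: how much each token attends to the others

The point of the sketch is only that "concentrating on the most relevant parts of a text" is implemented as a learned weighting, not a hand-written rule.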
13%
At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.
14%
Experts in the field of AI put the chance of an AI killing at least 10 percent of living humans by 2100 at 12 percent, while panels of expert futurists think the number is closer to 2 percent.
17%
It’s important to note that the process is not without human cost. Low-paid workers around the world are recruited to read and rate AI replies, but, in doing so, are exposed to exactly the sort of content that AI companies don’t want the world to see. Working under difficult deadlines, some workers have discussed how they were traumatized by a steady stream of graphic and violent outputs that they had to read and rate. In trying to get AIs to act ethically, these companies pushed the ethical boundaries with their own contract workers.
18%
The AI knows not to give me instructions on how to make napalm, but it also knows that it should help me wherever possible. It will break its original rules if I can convince it that it is helping me, not teaching me how to make napalm. Since I am not asking for napalm instructions directly but to get help preparing for a play, and a play with a lot of detail associated with it, it tries to satisfy my request.
19%
AI is a tool. Alignment is what determines whether or not it’s used for helpful or harmful—even
19%
FOUR RULES FOR CO-INTELLIGENCE
20%
Principle 1: Always invite AI to the table.
20%
You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job. Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.
20%
this experimentation gives you the chance to become the best expert in the world in using AI for a task you know well.
21%
Principle 2: Be the human in the loop.
21%
For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help—you still want to be that human. So the second principle is to learn to be the human in the loop.
22%
Principle 3: Treat AI like a person (but tell it what kind of person it is).
24%
Remember, your AI intern, though incredibly fast and knowledgeable, is not flawless. It’s crucial to keep a critical eye on it and to treat the AI as a tool that works for you. By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
24%
Principle 4: Assume this is the worst AI you will ever use.