Kindle Notes & Highlights
Read between August 27 - September 14, 2025
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.
Talking about AI can be confusing, in part because AI has meant so many different things and they all tend to get muddled together. Siri telling you a joke on command. The Terminator crushing a skull. Algorithms predicting credit scores.
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
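To give a rough sense of what that attention mechanism computes, here is a toy sketch (not from the book; the function, sizes, and random vectors are illustrative assumptions) of scaled dot-product self-attention, the core operation inside a Transformer. Each token's output becomes a weighted blend of every token in the sequence, weighted by estimated relevance:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy attention: each output row is a relevance-weighted mix of the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # how relevant each token is to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax: weights for each token sum to 1
    return weights @ V                                # blend the values by those weights

# Hypothetical example: 4 tokens, each represented by a 3-dimensional vector
np.random.seed(0)
tokens = np.random.randn(4, 3)
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (4, 3): one context-aware vector per token
```

Real Transformers stack many such attention layers with learned projections, but the "concentrate on the most relevant parts of a text" idea is all in this weighting step.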
At the core of the most extreme dangers from AI is the stark fact that there is no particular reason that AI should share our view of ethics and morality.
Experts in the field of AI put the chance of an AI killing at least 10 percent of living humans by 2100 at 12 percent, while panels of expert futurists think the number is closer to 2 percent.
It’s important to note that the process is not without human cost. Low-paid workers around the world are recruited to read and rate AI replies, but, in doing so, are exposed to exactly the sort of content that AI companies don’t want the world to see. Working under difficult deadlines, some workers have discussed how they were traumatized by a steady stream of graphic and violent outputs that they had to read and rate. In trying to get AIs to act ethically, these companies pushed the ethical boundaries with their own contract workers.
The AI knows not to give me instructions on how to make napalm, but it also knows that it should help me wherever possible. It will break its original rules if I can convince it that it is helping me, not teaching me how to make napalm. Since I am not asking for napalm instructions directly, but for help preparing for a play, and a play with a lot of detail associated with it, the AI tries to satisfy my request.
AI is a tool. Alignment is what determines whether or not it’s used for helpful or harmful—even
FOUR RULES FOR CO-INTELLIGENCE
Principle 1: Always invite AI to the table.
You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job. Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.
This experimentation gives you the chance to become the best expert in the world in using AI for a task you know well.
Principle 2: Be the human in the loop.
For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help, you still want to be that human. So the second principle is to learn to be the human in the loop.
Principle 3: Treat AI like a person (but tell it what kind of person it is).
Remember, your AI intern, though incredibly fast and knowledgeable, is not flawless. It’s crucial to keep a critical eye on it and treat the AI as a tool that works for you. By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
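In practice, telling the AI "what kind of person it is" usually just means setting a system prompt before the conversation starts. Here is a minimal sketch, assuming OpenAI's Python client and an API key in the environment; the model name and persona wording are illustrative assumptions, not from the book:

```python
# Minimal sketch of Principle 3: define a persona, then collaborate with it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        # Tell the AI what kind of person it is:
        {"role": "system", "content": (
            "You are a blunt but friendly marketing editor. "
            "Critique drafts, suggest concrete rewrites, and flag weak claims."
        )},
        # The collaborative request that persona will respond to:
        {"role": "user", "content": "Here is my product blurb draft. Please edit it: <draft text>"},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works with any chat-style model: the system message fixes the persona, and each later turn is the collaborative back-and-forth described above.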
Principle 4: Assume this is the worst AI you will ever use.