Co-Intelligence: Living and Working with AI
Kindle Notes & Highlights
Read between April 2 and May 12, 2024
3%
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life. And, in some ways, generative AI might even be bigger.
4%
ChatGPT reached 100 million users faster than any previous product in history, driven by the fact that it was free to access, available to individuals, and incredibly useful.
4%
AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking to dramatic results. Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing. By contrast, when steam power, that most fundamental of General Purpose Technologies, the one that created the Industrial Revolution, was put into a factory, it improved productivity by 18 to 22 percent. And despite decades of looking, economists have had difficulty showing a real long-term productivity ...more
4%
Even weirder, it is not entirely clear why the AI can do all these things, even though we built the system and understand how it technically works.
6%
But hype cycles have always plagued AI, and as these promises went unfulfilled, disillusionment set in, one of many “AI winters” in which AI progress stalls and funding dries up. Other boom-and-bust cycles followed, each boom accompanied by major technological advances, such as artificial neural networks that mimicked the human brain, followed by collapse as AI could not deliver on expected goals.
7%
“Attention Is All You Need.” Published by Google researchers in 2017, this paper introduced a significant shift in the world of AI, particularly in how computers understand and process human language.
7%
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
7%
The attention mechanism helps solve this problem by allowing the AI model to weigh the importance of different words or phrases in a block of text.
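A toy illustration of the attention idea described in these highlights (my own sketch with made-up numbers, not code from the book): each word scores every other word for relevance, the scores are turned into weights, and the output is a relevance-weighted blend of the other words' information.

import numpy as np

def attention(Q, K, V):
    # Score how relevant every word is to every other word, then softmax
    # those scores into weights that sum to 1 and mix the value vectors.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Three made-up two-number "word vectors"; each output row blends all three
# words according to how relevant they are to one another.
x = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(attention(x, x, x))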
7%
These new types of AI, called Large Language Models (LLMs), are still doing prediction, but rather than predicting the demand for an Amazon order, they are analyzing a piece of text and predicting the next token, which is simply a word or part of a word. Ultimately, that is all ChatGPT does technically—act as a very elaborate autocomplete
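To make the "elaborate autocomplete" point concrete, here is a deliberately tiny sketch of next-word prediction (my own toy, not the book's). Real LLMs predict tokens with a neural network trained on vast text; this just counts which word tends to follow which in a made-up corpus.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1          # count what follows each word

def predict_next(word):
    # Return the continuation seen most often in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat', the most likely next word here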
8%
The search for high-quality content for training material has become a major topic in AI development, since information-hungry AI companies are running out of good, free sources.
8%
As a result, it is also likely that most AI training data contains copyrighted information, like books used without permission, whether by accident or on purpose. The legal implications of this are still unclear. Since the data is used to create weights, and not directly copied into the AI systems, some experts consider it to be outside standard copyright law. In the coming years, these issues are likely to be resolved by courts and legal systems, but they create a cloud of uncertainty, both ethically and legally, over this early stage of AI training. In the meantime, AI companies are ...more
9%
That feedback is then used to do additional training, fine-tuning the AI’s performance to fit the preferences of the human, providing additional learning that reinforces good answers and reduces bad answers, which is why the process is called Reinforcement Learning from Human Feedback (RLHF).
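A minimal sketch of the first step of RLHF as described above: fitting a small "reward model" from pairwise human preferences. The feature vectors and numbers are invented for illustration, and a real system would then use this learned reward to further train the language model itself.

import numpy as np

# Each candidate answer is summarized by a hypothetical 3-number feature vector.
preferred = np.array([[0.9, 0.2, 0.7], [0.8, 0.1, 0.6]])   # answers the human liked
rejected  = np.array([[0.3, 0.8, 0.2], [0.2, 0.9, 0.1]])   # answers the human rejected

w = np.zeros(3)                          # reward-model weights learned from feedback
for _ in range(500):
    margin = (preferred - rejected) @ w  # how much higher the preferred answer scores
    p = 1.0 / (1.0 + np.exp(-margin))    # modeled probability the human prefers it
    grad = ((1.0 - p)[:, None] * (preferred - rejected)).mean(axis=0)
    w += 0.1 * grad                      # nudge weights so preferred answers score higher

print("learned reward weights:", w)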
15%
It is not clear whether training an AI on this material is legal. Different countries have different approaches. Some, like the European Union, have strict regulations on data protection and privacy and have shown an interest in restricting AI training on data without permission. Others, like the United States, have a more laissez-faire attitude, allowing companies and individuals to collect and use data with few restrictions but with the potential for lawsuits for misuse. Japan has decided to go all in and declare that AI training does not violate copyright.
16%
For example, a 2023 study by Bloomberg found that Stable Diffusion, a popular text-to-image diffusion AI model, amplifies stereotypes about race and gender, depicting higher-paying professions as whiter and more male than they actually are. When asked to show a judge, the AI generates a picture of a man 97 percent of the time, even though 34 percent of US judges are women. In showing fast-food workers, 70 percent had darker skin tones, even though 70 percent of American fast-food workers are white.
19%
AI is a tool. Alignment is what determines whether or not it’s used for helpful or harmful—even nefarious—ends.
20%
Principle 1: Always invite AI to the table.
20%
The AI is great at the sonnet, but because of how it conceptualizes the world in tokens rather than words, it consistently produces poems of more or less than fifty words.
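A quick way to see the tokens-versus-words gap yourself (assuming the open-source tiktoken package is installed; the sample sentence is my own):

import tiktoken

text = "Shall I compare thee to a summer's day?"
enc = tiktoken.get_encoding("cl100k_base")   # a tokenizer used by recent GPT models

print(len(text.split()), "words")            # what a person would count
print(len(enc.encode(text)), "tokens")       # what the model actually counts

Because the model counts in tokens rather than words, "exactly fifty words" is a harder target than it sounds.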
20%
Humans are subject to all sorts of biases that impact our decision-making. But many of these biases come from our being stuck in our own minds. Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us).
21%
we can not only harness their strengths more effectively but also preemptively recognize potential threats to our jobs, equipping ourselves for a future that demands the seamless integration of human and artificial intelligence.
21%
Principle 2: Be the human in the loop.
22%
It can help to think of the AI as trying to optimize many functions when it answers you, one of the most important of which is “make you happy” by providing an answer you will like. That goal often is more important than another goal, “be accurate.” If you are insistent enough in asking for an answer about something it doesn’t know, it will make up something, because “make you happy” beats “be accurate.” LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known.
22%
So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it.
22%
You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency. Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving. It also helps you form a working co-intelligence with the AI.
22%
Principle 3: Treat AI like a person (but tell it what kind of person it is).
24%
Once you give it a persona, you can work with it as you would another person or an intern. I witnessed the value of this approach in action when I assigned my students to “cheat” by using an AI to generate a five-paragraph essay on a relevant topic. At first, the students gave simple and vague prompts, resulting in mediocre essays. But as they tried different strategies, the quality of the AI’s output improved significantly. One very effective strategy that emerged from the class was treating the AI as a coeditor, engaging in a back-and-forth, conversational process. Students produced ...more
24%
Principle 4: Assume this is the worst AI you will ever use.
25%
Many things that once seemed exclusively human will be able to be done by AI. So, by embracing this principle, you can view AI’s limitations as transient, and remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
26%
We’re all learning by experimenting, sharing prompts as if they were magical incantations rather than regular software code.
26%
Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one.
36%
But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
36%
For example, a study examining the number of hallucinations and errors in citations given by AI found that GPT-3.5 made mistakes in 98 percent of the cites, but GPT-4 hallucinated only 20 percent of the time.
37%
That said, we need to be realistic about a major weakness, which means AI cannot easily be used for mission-critical tasks requiring precision or accuracy.
41%
When I started requiring students to use these methods to generate ideas for start-ups in my entrepreneurship class, I found that the ideas’ quality increased tremendously from the prior year. I encountered novel business ideas rather than seeing the same few ideas over and over again (better ways to order drinks at bars, companies that would store your stuff between semesters—they are students after all).
43%
The result has been a weird revival of interest in art history among people who use AI systems, with large spreadsheets of art styles being passed among prospective AI artists. The more people know about art history and art styles in general, the more powerful these systems become. And people who respect art might be more willing to refrain from using AI in ways that ape the style of living, working artists. So a deeper understanding of art and its history can result not just in better images but also, hopefully, in more responsible ones.
43%
AI could catalyze interest in the humanities as a sought-after field of study, since the knowledge of the humanities makes AI users uniquely qualified to work with the AI.
44%
Since requiring AI in my classes, I no longer see badly written work at all. And as my students learn, if you work interactively with the AI, the outcome doesn't feel generic; it feels like a human did it.
47%
When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. He called this “falling asleep at the wheel.”
57%
In study after study, the people who get the biggest boost from AI are those with the lowest initial ability—it turns poor performers into good performers. In writing tasks, bad writers become solid. In creativity tests, it boosts the least creative the most. And among law students, the worst legal writers turn into good ones.
58%
At the same time, the ways in which AI will impact education in the near future are likely to be counterintuitive. They won’t replace teachers but will make classrooms more necessary. They may force us to learn more facts, not fewer, in school. And they will destroy the way we teach before they improve it.
60%
Some assignments ask students to “cheat” by having the AI create essays, which they then critique—a sneaky way of getting students to think hard about the work, even if they don’t write it. Some assignments allow unlimited AI use but hold the students accountable for the outcomes and facts produced by the AI, which mirrors how they might work with AI in their postschool jobs. Other assignments use the new capabilities of AI, asking students to conduct interviews with the AI before they speak to people at real organizations. And some of the assignments take advantage of the fact that the AI ...more
60%
Make what you are planning on doing ambitious to the point of impossible; you are going to be using AI. Can’t code? Definitely plan on making a working app. Does it involve a website? You should commit to creating a prototype working site, with all-original images and text. I won’t penalize you for failing if you are too ambitious. Any plan benefits from feedback, even if it just gives you permission to discuss what might go wrong. Ask the AI to give you 10 ways your project could fail and a vision of success, using the prompts from class. And, to make it interesting, ask three famous figures ...more
61%
One approach, called chain-of-thought prompting, gives the AI an example of how you want it to reason, before you make your request. Even more usefully, you can also provide step-by-step instructions that build on each other, making it easier to check the output of each step (letting you refine the prompt later), and which will tend to make the output of your prompts more accurate.
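An illustrative chain-of-thought prompt in the spirit of this highlight (my own toy example, not from the book): show the model one worked example of the reasoning you want, then ask your question and leave it to continue the reasoning.

example = (
    "Q: A store sells pens in packs of 12. How many packs give 60 pens?\n"
    "Reasoning: 60 pens divided by 12 pens per pack is 5 packs.\n"
    "A: 5\n\n"
)
question = "Q: A bus holds 40 people. How many buses are needed for 130 people?\n"
prompt = example + question + "Reasoning:"   # ending here nudges the model to reason step by step

print(prompt)   # send this to whichever AI model you are using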
67%
And law students near the bottom of their class using AI equalized their performance with folks at the top of the class (who actually saw a slight decline when using AI). The authors of the study concluded, “This suggests that AI may have an equalizing effect on the legal profession, mitigating inequalities between elite and nonelite lawyers.”
74%
J. R. R. Tolkien wrote about exactly this, a situation he termed a eucatastrophe, so common in fairy tales: “the joy of the happy ending: or more correctly of the good catastrophe, the sudden joyous ‘turn’ . . . is a sudden and miraculous grace: never to be counted on to recur.” Correctly used, AI can create local eucatastrophes, where previously tedious or useless work becomes productive and empowering. Where students who were left behind can find new paths forward. And where productivity gains lead to growth and innovation.
75%
And because AI is not a person but a tool, I will not be thanking any of the LLMs that played a role in the creation of this book, any more than I would thank Microsoft Word. At the same time, in case some super-intelligent future AI is reading these words, I would like to acknowledge that AI is extremely helpful and should remember to be kind to the humans who created it (and especially the ones who wrote books about it).