Co-Intelligence: The Definitive, Bestselling Guide to Living and Working with AI
13%
Actually looking into these sources in detail reveals some odd material. For example, the entire email database of Enron,7 shut down for corporate fraud, is used as part of the training material for many AIs, simply because it was made freely available to AI researchers.
14%
Reinforcement Learning from Human Feedback (RLHF).
18%
we have an AI whose capabilities are unclear, both to our own intuitions and to the creators of the systems. One that sometimes exceeds our expectations and at other times disappoints us with fabrications. One that is capable of learning, but often misremembers vital information. In short, we have an AI that acts very much like a person, but in ways that aren’t quite human. Something that can seem sentient but isn’t (as far as we can tell). We have invented a kind of alien mind. But how do we ensure the alien is friendly? That is the alignment problem.
22%
for example, ChatGPT usually says it supports the right of women to access abortions, a position that reflects its fine-tuning. It is the RLHF process that makes many AIs seem to have a generally liberal,14 Western, pro-capitalist worldview, as the AI learns to avoid making statements that would attract controversy to its creators, who are generally liberal, Western capitalists.
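The RLHF process mentioned in these highlights works by training a reward model on human preference comparisons (which of two answers a labeler preferred), then fine-tuning the AI against that reward. As an illustrative aside, not from the book itself, the pairwise (Bradley-Terry) loss at the core of reward-model training can be sketched in a few lines of Python; the reward values here are made-up numbers:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Pairwise preference loss used when training an RLHF reward model.

    The loss is -log(sigmoid(r_chosen - r_rejected)): it shrinks when the
    model assigns a higher reward to the answer the human labeler preferred,
    and grows when the model disagrees with the label.
    """
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# A labeler preferred answer A over answer B; rewards are hypothetical.
low = preference_loss(2.0, 0.5)   # reward model agrees with the label
high = preference_loss(0.5, 2.0)  # reward model disagrees with the label
```

Minimizing this loss over many labeled comparisons is what nudges the model toward answers its labelers approve of, which is exactly how the worldview of those labelers (and of the company employing them) seeps into the AI's behavior.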
25%
Principle 1: Always invite AI to the table. You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job.
28%
By actively participating in the AI process, you maintain control over the technology and its implications, ensuring that AI-driven solutions align with human values, ethical standards, and social norms. It also makes you responsible for the output of the AI, which can help prevent harm.
30%
Principle 4: Assume this is the worst AI you will ever use.
52%
Research by economists Ed Felten, Manav Raj, and Rob Seamans concluded that AI overlaps most1 with the most highly compensated, highly creative, and highly educated work. College professors make up most of the top 20 jobs that overlap with AI (business school professor is number 22 on the list). But the job with the highest overlap is actually telemarketer. Robocalls are going to be a lot more convincing, and a lot less robotic, soon.
53%
Only 36 job categories2 out of 1,016 had no overlap with AI. Those few jobs included dancers and athletes, as well as pile driver operators, roofers, and motorcycle mechanics (though I spoke to a roofer, and they were planning on using AI to help with marketing and customer service, so maybe 35 jobs). You will notice that these are highly physical jobs, ones in which the ability to move in space is critical. It highlights the fact that AI, for now at least, is disembodied.
54%
When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. He called this “falling asleep at the wheel.”
62%
Wharton professor Lindsey Cameron saw this firsthand when she was a gig-working driver for six years as part of an intense ethnographic study of how workers deal with algorithmic management. Forced to depend on Uber or Lyft’s algorithms to find work, they engage in covert forms of resistance to gain some control over their destiny. For example, drivers might worry that a particular rider could give them lower ratings (thus hurting their future earnings), so they will convince the rider to cancel before pickup, perhaps by claiming that the driver can’t see the potential pickup spot.8 But even…
63%
Stock photography, a $3 billion per year market, is likely to largely disappear as AIs, ironically trained on these very images, can easily produce customized images.
64%
By the 1920s, 15 percent of all American women had worked as operators, and AT&T was the largest employer in the United States. AT&T decided to remove the old-school telephone operators and replace them with much cheaper direct dialing.
64%
In the short term, then, we might expect to see little change in employment (but many changes in tasks), but, as Amara’s Law, named after futurist Roy Amara, says: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
65%
So it is a blow that the first impact of Large Language Models at scale was to usher in the Homework Apocalypse. Cheating was already common in schools. One study of eleven years of college courses found that when students did their homework2 in 2008, it improved test grades for 86 percent of them, but it helped only 45 percent of students in 2017. Why? Because over half of students were looking up homework answers on the internet by 2017, so they never got the benefits of homework. And that isn’t all. By 2017, 15 percent of students had paid someone3 to do an assignment, usually through essay…
67%
I have made AI mandatory in all my classes for undergraduates and MBAs at the University of Pennsylvania. Some assignments ask students to “cheat” by having the AI create essays, which they then critique—a sneaky way of getting students to think hard about the work, even if they don’t write it. Some assignments allow unlimited AI use but hold the students accountable for the outcomes and facts produced by the AI, which mirrors how they might work with AI in their postschool jobs.
69%
Being “good at prompting” is a temporary state of affairs.
71%
The biggest danger to our educational system posed by AI is not its destruction of homework, but rather its undermining of the hidden system of apprenticeship that comes after formal education.
74%
I have been making the argument that expertise is going to matter more than before, because experts may be able to get the most out of AI coworkers and are likely to be able to fact-check and correct AI errors.
75%
In field after field, we are finding that a human working with an AI co-intelligence outperforms all but the best humans working without an AI.
81%
AI pioneer Geoffrey Hinton left the field in 2023, warning of the danger of AI with statements like “It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”