Kindle Notes & Highlights
1 CREATING ALIEN MINDS
responsible for the fact that more advanced LLMs cost over $100 million to train, using large amounts of energy in the process.
2 ALIGNING THE ALIEN
Artificial Ethics for Alien Minds
When asked to show a judge, the AI generates a picture of a man 97 percent of the time, even though 34 percent of US judges are women. In showing fast-food workers, 70 percent had darker skin tones, even though 70 percent of American fast-food workers are white.
3 FOUR RULES FOR CO-INTELLIGENCE
Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.
4 AI AS A PERSON
5 AI AS A CREATIVE
But in many ways, hallucinations are a deep part of how LLMs work. They don’t store text directly; rather, they store patterns about which tokens are more likely to follow others. That means the AI doesn’t actually “know” anything. It makes up its answers on the fly. Plus,
if it sticks too closely to the patterns in its training data, the model is said to be overfitted to that training data. Overfitted LLMs may fail to generalize to new or unseen inputs and generate irrelevant or inconsistent text—in short, their results are always similar and uninspired.
most AIs add extra randomness in their answers, which correspondingly raises the lik...
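The randomness the highlight mentions is usually a "temperature" setting on next-token sampling. A minimal sketch (invented for illustration, not from the book): the model assigns scores to candidate tokens, and higher temperature flattens the resulting probabilities, making unlikely continuations more probable.

```python
# Toy sketch of temperature sampling (illustrative only; the scores
# below are hypothetical, not real model outputs).
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample one token from temperature-scaled softmax probabilities."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r, cum = rng.random(), 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical scores for continuations of "The judge entered the ..."
logits = {"courtroom": 4.0, "room": 2.5, "spaceship": 0.1}
```

At low temperature the most likely token almost always wins; at high temperature, rare (and possibly hallucinated) tokens get sampled far more often.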
hallucinations can also come from the source material of the AI, which can be biased, incomplete, contradictory, or even wrong in ways that we discussed in chapter 2. The model has no way of distin...
These technical issues are compounded because LLMs rely on patterns, rather than a storehouse of data, to create answers.
LLMs are connection machines. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper meaning.
The implications of having AI write our first drafts (even if we do the work ourselves, which is not a given) are huge. One consequence is that we could lose our creativity and originality. When we use AI to generate our first drafts, we tend to anchor on the first idea that the machine produces, which influences our future work.
When we use AI to generate our first drafts, we don’t have to think as hard or as deeply about what we write.
6 AI AS A COWORKER
Research by economists Ed Felten, Manav Raj, and Rob Seamans concluded that AI overlaps most with the most highly compensated, highly creative, and highly educated work. College professors make up most of the top 20 jobs that overlap with AI.
Jobs are composed of bundles of tasks.
Getting rid of some tasks doesn’t mean the job disappears.
most participants didn’t even bother editing the AI’s output once it was created for them. It is a problem I see repeatedly when people first use AI: they just paste in the exact question they are asked and let the AI answer it.
Recruiters with higher-quality AI were worse than recruiters with lower-quality AI. They spent less time and effort on each résumé, and blindly followed the AI recommendations. They also did not improve over time. On the other hand, recruiters with lower-quality AI were more alert, more critical, and more independent.
When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. The researcher called this "falling asleep at the wheel."
future of understanding how AI impacts work involves understanding how human interaction with AI changes, depending on where tasks are placed on this frontier and how the frontier will change. That takes time and experience, which is why it is important to stick with the principle of inviting AI to everything, letting us learn the shape of the Jagged Frontier and how it maps onto the unique complex of tasks that comprise our individual jobs.
Workers, while worried about AI, tend to like using it because it removes the most tedious and annoying parts of their job, leaving them with the most interesting tasks.
How might this insight be useful in an educational setting? Can it be adapted to settings in which learners must struggle to develop essential skills? This combination of issues would seem like something interesting to think about.
You need to ask: What is your vision about how AI makes work better rather than worse? And this is where organizations with high degrees of trust and good cultures will have an advantage. If your employees don’t believe you care about them, they will keep their AI use hidden.
Rather, LLMs could help us flourish by making it impossible to ignore the truth any longer: a lot of work is really boring and not particularly meaningful. If we acknowledge that, we can turn our attention to improving the human experience of work.
That doesn’t mean new technologies never displace workers en masse, though. In fact, it has happened to one of the biggest job categories ever held by women—telephone operators. By the 1920s, 15 percent of all American women had worked as operators.
But the women with the most experience as operators took a larger hit to their long-term earnings, as their tenure in a now extinct job did not translate to other fields.
In study after study, the people who get the biggest boost from AI are those with the lowest initial ability—it turns poor performers into good performers.
With lower-cost workers doing the same work in less time, mass unemployment, or at least underemployment, becomes more likely, and we may see the need for policy solutions, like a four-day workweek or universal basic income, that reduce the floor for human welfare.
7 AI AS A TUTOR
we have long known how to supercharge education; we just can’t quite pull it off. Benjamin Bloom, an educational psychologist, published a paper in 1984 called “The 2 Sigma Problem.”
So it is not surprising that a powerful, adaptable, and cheap personalized tutor is the holy grail of education.
Unhappily for students, however, research shows that both homework and tests are actually remarkably useful learning tools.
One study of eleven years of college courses found that when students did their homework in 2008, it improved test grades for 86 percent of them, but it helped only 45 percent of students in 2017. Why? Because over half of students were looking up homework answers on the internet by 2017.
one practical implication is that it can help to give the AI explicit instructions that go step by step through what you want. One approach, called chain-of-thought prompting, gives the AI an example of how you want it to reason, before you make your request.
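As a concrete sketch of what chain-of-thought prompting can look like (the worked exemplar below is invented for illustration, not taken from the book), the prompt shows the model one fully reasoned answer before posing the real question:

```python
# Illustrative chain-of-thought prompt. The arithmetic exemplar is
# made up; the point is the structure: one worked, step-by-step answer
# followed by the real question in the same format.
EXAMPLE = """\
Q: A store sells pens at $2 each. How much do 3 pens cost?
A: Let's think step by step.
1. Each pen costs $2.
2. 3 pens cost 3 x $2 = $6.
The answer is $6.
"""

def chain_of_thought_prompt(question):
    """Prepend the worked exemplar so the model imitates its reasoning."""
    return f"{EXAMPLE}\nQ: {question}\nA: Let's think step by step.\n"

prompt = chain_of_thought_prompt(
    "A bus has 4 wheels. How many wheels do 5 buses have?"
)
```

The exemplar both demonstrates the reasoning style and cues the model to produce intermediate steps before its final answer.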
Flipped Classrooms and AI Tutors
In the near term, AI can help instructors prepare lectures that are grounded in content and take into account how students learn.
So how can active learning and passive learning coexist?
One solution to incorporating more active learning is by “flipping” classrooms. Students would learn new concepts at home, typically through videos or other digital resources, and then apply what they’ve learned in the classroom through collaborative activities, discussions, or problem-solving exercises.
Khan Academy’s Khanmigo goes beyond the passive videos and quizzes that made Khan Academy famous by including AI tutoring. Students can ask the tutor to explain concepts, of course, but it is also capable of analyzing patterns of performance to guess at why a student is struggling with a topic, providing much deeper help.
Once the exclusive privilege of million-dollar budgets and expert teams, education technology now rests in the hands of educators.
8 AI AS A COACH
Learning any skill and mastering any domain requires rote memorization, careful skills building, and purposeful practice, and the AI (and future generations of AI) will undoubtedly be better than a novice at many early skills.
The temptation, then, might be to outsource these basic skills to the AI.