Kindle Notes & Highlights
Read between October 7 and October 24, 2024
Aside from the uncanny feeling of the whole exchange, note that the AI appears to be identifying the feelings and motivations of Kevin Roose. The ability to predict what others are thinking is called theory of mind, and it is considered exclusive to humans (and possibly, under some circumstances, great apes). Some tests suggest that AI does have theory of mind, but, like many other aspects of AI, that remains controversial, as it could be a convincing illusion.
It’s possible that these personalized AIs might ease the epidemic of loneliness that ironically affects our ever more connected world, just as the internet and social media connected dispersed subcultures. On the other hand, they may make us less tolerant of humans, and more likely to embrace simulated friends and lovers. Profound human-AI relationships like the Replika users’ will proliferate, and more people will be fooled, either by choice or by bad luck, into thinking that their AI companions are real.
I failed my own Turing Test: I was fooled by an AI version of myself into thinking it was accurately quoting me, when in fact it was making it all up.
Hallucinations are a deep part of how LLMs work. They don’t store text directly; rather, they store patterns about which tokens are more likely to follow others. That means the AI doesn’t actually “know” anything; it makes up its answers on the fly. And if it sticks too closely to the patterns in its training data, the model is said to be overfitted to that training data. Overfitted LLMs may fail to generalize to new or unseen inputs and generate irrelevant or inconsistent text; in short, their results are always similar and uninspired. To avoid this, most AIs add extra randomness in their …
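To make that “extra randomness” concrete, here is a minimal sketch of temperature-based next-token sampling; the five-token vocabulary and its scores are invented for illustration, not taken from any real model.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick the next token given per-token scores (logits).

    Low temperature sharpens the distribution (nearly always the top
    token); high temperature flattens it, adding the extra randomness
    the passage describes.
    """
    scaled = [score / temperature for score in logits.values()]
    max_s = max(scaled)
    exps = [math.exp(s - max_s) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Invented scores for tokens that might follow "The cat sat on the"
logits = {"mat": 4.0, "sofa": 3.1, "roof": 2.5, "moon": 0.8, "equation": 0.1}

print(sample_next_token(logits, temperature=0.2))  # almost always "mat"
print(sample_next_token(logits, temperature=1.5))  # occasionally "moon"
```

Run repeatedly at each temperature and the pattern shows itself: the model never “looks up” an answer, it samples one, and the temperature dial trades reliability for variety.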
These technical issues are compounded because LLMs rely on patterns, rather than a storehouse of data, to create answers.
Anything that requires exact recall is likely to result in a hallucination, though giving AI the ability to use outside resources, like web searches, might change this equation.
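As a hedged sketch of how outside resources might change that equation, the outline below retrieves text first and asks the model to answer only from what was retrieved; web_search and call_llm are hypothetical stand-ins, not any particular product’s API.

```python
def web_search(query: str) -> list[str]:
    # Hypothetical stand-in for a real search API; returns canned snippets here.
    return ["Example snippet one about the query.", "Example snippet two."]

def call_llm(prompt: str) -> str:
    # Stand-in for whichever model you use; here it just echoes the prompt.
    return f"[model response to:]\n{prompt}"

def answer_with_sources(question: str) -> str:
    """Retrieval-augmented answering: fetch real text first, then make
    the model answer only from what was fetched, instead of relying on
    its hallucination-prone exact recall."""
    snippets = web_search(question)
    sources = "\n".join(f"Source {i + 1}: {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say you don't know.\n\n"
        f"{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer_with_sources("When was the company founded?"))
```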
We need to be realistic about a major weakness: AI cannot easily be used for mission-critical tasks requiring precision or accuracy.
As a result, researchers have argued that it is the jobs with the most creative tasks, rather than the most repetitive, that tend to be most impacted by the new wave of AI.
LLMs generate relationships between tokens that may seem unrelated to humans but represent some deeper meaning. Add in the randomness that comes with AI output, and you have a powerful tool for innovation. The AI seeks to generate the next word in a sequence by finding the next likely token, no matter how weird the previous words were.
One such test is known as the Alternative Uses Test (AUT). This measures an individual’s ability to come up with a wide variety of uses for a common object. In this test, a participant is presented with an everyday object, such as a paper clip, and is asked to come up with as many different uses for the object as possible. For example, a paper clip can hold papers together, pick locks, or fish small objects out of tight spaces. The AUT is often used to evaluate an individual’s ability to think divergently and to come up with unconventional ideas.
This is part of the concern about using AI for creative work; since we can’t easily tell where the information comes from, the AI may be using elements of work that might be copyrighted or patented or just taking someone’s style without permission.
The concepts may be good, even excellent, but they can start to seem a little same-y after seeing enough of them. Thus, a large group of creative humans will usually generate a wider diversity of ideas than the AI. All of this suggests that humans still have a large role to play in innovation . . . but that they would be foolish not to include AI in that process, especially if they don’t consider themselves highly creative.
Upon closer inspection, a surprisingly large amount of work is actually creative work in the form that AI is good at. Situations in which there is no right answer, where invention matters and small errors can be caught by expert users, abound.
One striking result concerns productivity inequality: participants who scored lower on the first round without AI assistance benefited more from using ChatGPT, narrowing the gap between low and high scorers.
The idea of programming by intent, by asking the AI to do something and having it create the code, is likely to have significant impacts in an industry whose workers earn a total of $464 billion a year.
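As a loose illustration of programming by intent, the exchange below pairs a plain-English request with code an AI might plausibly return; both the request and the function are invented for this example, not taken from any real session.

```python
# Intent given to the AI, in plain English:
#   "Write a function that takes a list of purchase amounts and returns
#    the total, ignoring any negative or missing values."

# Code the AI might plausibly generate in response:
def total_purchases(amounts):
    """Sum purchase amounts, skipping None and negative entries."""
    return sum(a for a in amounts if a is not None and a >= 0)

print(total_purchases([19.99, None, -5.00, 42.50]))  # 62.49
```

The human never writes the loop or the filter; they describe the goal, then check the result, which is exactly the division of labor the passage anticipates.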
AI is also good at summarizing data since it is adept at finding themes and compressing information, though at the ever-present risk of error. As an example, I added tiny science fiction references …
A study by researchers at the University of Chicago used ChatGPT to analyze the conference-call transcripts of large companies, asking the AI to summarize the risks that companies faced. Risk obviously plays a big role in stock market returns, so financial firms have spent a lot of time and money using specialized, older forms of machine learning to try to identify the uncertainties associated with various corporations. ChatGPT, without any specialized stock market knowledge, tended to outperform these more specialized models, working as a “powerful predictor of future stock price volatility.”
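A minimal sketch of that kind of task follows; the transcript snippet is invented and the prompt wording is my own illustration, not the study’s actual prompt.

```python
# An invented snippet of an earnings-call transcript:
transcript = """\
CFO: Supply-chain costs rose faster than expected this quarter.
Analyst: How exposed are you to the pending regulation?
CEO: We are monitoring it closely; litigation remains possible.
"""

# Ask the model to extract risks, the core of the task in the study:
prompt = (
    "You are reading an earnings-call transcript. List the main risks "
    "this company faces, one per line, labeling each as operational, "
    "regulatory, or financial.\n\n" + transcript
)

print(prompt)  # send this to whichever LLM you use
```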
Of course, the unresolved question is whether AI is more or less accurate than humans, and whether its extended abilities to do creative, human work make up for its errors. The trade-offs are often surprising.
But what happens when AI touches on the most deeply human creative tasks—art? Artists have been reacting with alarm to the rapid encroachment of AI tools. Some of that concern is aesthetic. “A grotesque mockery of what it is to be human” is how famed musician Nick Cave described an AI attempt to create lyrics “in the style of a Nick Cave song.” Animator Hayao Miyazaki called AI art “an insult to life itself.” When one artist won a competition with an AI-generated piece, it caused an outcry, but the winning artist defended the AI’s work. “Art is dead, dude. It’s over. A.I. won. Humans lost.”
So now, in many ways, humanities majors can produce some of the most interesting “code.” Writers are often the best at prompting AI for written material because they are skilled at describing the effects they want prose to create (“end on an ominous note,” “make the tone increasingly frantic”). They are good editors, so they can provide instructions back to the AI (“make the second paragraph more vivid”). They can quickly run experiments with audiences and styles by knowing many examples of both (“make this like something in The New Yorker,” “do this in the style of John McPhee”). And they can …
AI image generators have been trained deeply on past paintings and watercolors, architecture and photographs, fashion and historical images. Creating something interesting with AI requires you to invoke these connections to produce a novel image.
AI could catalyze interest in the humanities as a sought-after field of study, since knowledge of the humanities makes AI users uniquely qualified to work with the AI.
One researcher found that recruiters who used high-quality AI became lazy, careless, and less skilled in their own judgment. They missed out on some brilliant applicants and made worse decisions than recruiters who used low-quality AI or no AI at all.
When the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt human learning, skill development, and productivity. He called this “falling asleep at the wheel.”
The future of delegation will require further reductions in hallucination rates, and better transparency of AI decision-making, so that we can trust it more. The whole goal of delegation is to save us time and allow us to focus on tasks where we can be, or want to be, of value.
As AIs start to act more like agents, capable of executing on goals autonomously, we will see more automation of tasks, but that is still a work in progress. For example, I gave an early form of AI agent (with the cute but slightly worrying name of BabyAGI) the goal of writing the best closing sentence to this paragraph on the future of agents. It lost its way a bit in the process, developing a twenty-one-step plan for solving the problem of writing a single sentence (with steps like “Explore methods to ensure AI agents are used responsibly to improve economic decision-making”) and going down …
All this shadow use leads to the final concern, the justified worry that workers might be training their own replacements by figuring out how to work with AI. If someone has figured out how to automate 90 percent of a particular job, and they tell their boss, will the company fire 90 percent of their coworkers? Better not to speak up.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically. It may take a while to sort it out, but we will do so. In fact, we must do so—it’s too late to put the genie back in the bottle.