More on this book
Community
Kindle Notes & Highlights
Where previous technological revolutions often targeted more mechanical and repetitive work, AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking to dramatic results. Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing.
We have invented technologies, from axes to helicopters, that boost our physical capabilities; and others, like spreadsheets, that automate complex tasks; but we have never built a generally applicable technology that can boost our intelligence. Now humans have access to a tool that can emulate how we think and write, acting as a co-intelligence to improve (or replace) our work.
But among the many papers on different forms of AI being published by industry and academic experts, one stood out, a paper with the catchy title “Attention Is All You Need.” Published by Google researchers in 2017, this paper introduced a significant shift in the world of AI, particularly in how computers understand and process human language. This paper proposed a new architecture, called the Transformer, that could be used to help a computer better process how humans communicate. Before the Transformer, other methods were used to teach computers to understand language, but they had
...more
Despite being just a predictive model, the Frontier AI models, trained on the largest datasets with the most computing power, seem to do things that their programming should not allow—a concept called emergence.
But how do we ensure the alien is friendly? That is the alignment problem.
This is part of the reason that a number of scientists and influential figures have called for a halt to the development of AI. AI research, in their mind, is akin to the Manhattan Project, meddling with forces that can cause the extinction of humanity, for unclear benefits.
The complication is that AI does not really plagiarize, in the way that someone copying an image or a block of text and passing it off as their own is plagiarizing. The AI stores only the weights from its pretraining, not the underlying text it trained on, so it reproduces a work with similar characteristics but not a direct copy of the original pieces it trained on. It is, effectively, creating something new, even if it is a homage to the original.
human biases also work their way into the training data.
compounded
American and generally English-speaking AI firms
male computer sci...
This highlight has been truncated due to consecutive passage length restrictions.
amplifies stereotypes about race and gender,
advanced LLMs are often more subtle, in part because the models are fine-tuned to avoid obvious stereotyping. The biases are still there, however.
AI companies have been trying to address this bias in a number of ways, with differing levels of urgency. Some of them just cheat,
Reinforcement Learning from Human Feedback (RLHF) process,
allows human raters to penalize the AI for producing harmful content (whether racist or incoherent) and reward it for producing good content.
But biases do not necessarily go away.
For all that, RHLF is not foolproof. The AI does not always have clear rules and can be manipulated into acting badly. One technique for doing so is called prompt injection,
It is also possible to jailbreak
And it can’t be done just by governments, though regulation is certainly needed. While the Biden administration issued an executive order setting some initial rules for managing AI development, and world governments have made coordinated statements about the responsible use of AI, the devil is in the details. Government regulation is likely to continue to lag the actual development of AI capabilities, and might stifle positive innovation in an attempt to stop negative outcomes. Plus, as international competition heats up, the question of whether national governments are willing to slow down
...more
four principles of working with AI:
Principle 1: Always invite AI to the table.
You should try inviting AI to help you in everything you do, barring legal or ethical barriers. As you experiment, you may find that AI help can be satisfying, or frustrating, or useless, or unnerving. But you aren’t just doing this for help alone; familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job. Given that AI is a General Purpose Technology, there is no single manual or instruction book that you can refer to in order to understand its value and its limits.
The key is to keep humans firmly in the loop—to use AI as an assistive tool, not as a crutch.
Principle 2: Be the human in the loop. For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help—you still want to be that human. So the second principle is to learn to be the human in the loop.
Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem, and there is considerable debate over whether it is completely solvable with current approaches to AI engineering.
So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it.
Principle 3: Treat AI like a person (but tell it what kind of person it is).
Treating AI like a person can create unrealistic expectations, false trust, or unwarranted fear among the public, policymakers, and even researchers themselves.
as imperfect as the analogy is, working with AI is easiest if you think of it like an alien person rather than a human-built machine.
To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle.
It can help to tell the system “who” it is, because that gives it a perspective.
One very effective strategy that emerged from the class was treating the AI as a coeditor, engaging in a back-and-forth, conversational process.
Principle 4: Assume this is the worst AI you will ever use.
We are playing Pac-Man in a world that will soon have PlayStation 6s.
So, by embracing this principle, you can view AI’s limitations as transient, and remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
AI is terrible at behaving like traditional software.
AI doesn’t act like software, but it does act like a human being. I’m not suggesting that AI systems are sentient like humans, or that they will ever be. Instead, I’m proposing a pragmatic approach: treat AI as if it were human because, in many ways, it behaves like one. This mindset, which echoes my “treat it like a person” principle of AI, can significantly improve your understanding of how and when to use AI in a practical, if not technical, sense.
There is no definitive answer to why LLMs hallucinate, and the contributing factors may differ among models. Different LLMs may have different architectures, training data, and objectives. But in many ways, hallucinations are a deep part of how LLMs work. They don’t store text directly; rather, they store patterns about which tokens are more likely to follow others. That means the AI doesn’t actually “know” anything. It makes up its answers on the fly. Plus, if it sticks too closely to the patterns in its training data, the model is said to be overfitted to that training data. Overfitted LLMs
...more
Anything that requires exact recall is likely to result in a hallucination, though giving AI the ability to use outside resources, like web searches, might change this equation.
And you can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query. LLMs are not generally optimized to say “I don’t know” when they don’t have enough information.
...more
These small hallucinations are hard to catch because they are completely plausible. I was able to notice the issues only after an extremely close reading and research on every fact and sentence in the output. I may still have missed something (sorry, whoever fact-checks this chapter). But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
we need to be realistic about a major weakness, which means AI cannot easily be used for mission-critical tasks requiring precision or accuracy.
This is the paradox of AI creativity. The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful. The real question becomes how to use AI to take advantage of its strengths while avoiding its weaknesses. To do that, let us consider how AI “thinks” creatively.
LLMs are connection machines. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper meaning. Add in the randomness that comes with AI output, and you have a powerful tool for innovation.
We aren’t completely out of an innovation job, however, as other studies find that the most innovative people benefit the least from AI creative help. This is because, as creative as the AI can be, without careful prompting, the AI tends to pick similar ideas every time. The concepts may be good, even excellent, but they can start to seem a little same-y after seeing enough of them. Thus, a large group of creative humans will usually generate a wider diversity of ideas than the AI. All of this suggests that humans still have a large role to play in innovation . . . but that they would be
...more
AI is excellent at generating a long list of ideas. From a practical standpoint, the AI should be invited to any brainstorming session you hold.
Upon closer inspection, a surprisingly large amount of work is actually creative work in the form that AI is good at.
AI works tremendously well as a coding assistant because writing software code combines elements of creativity with pattern matching.
AI is also good at summarizing data since it is adept at finding themes and compressing information, though at the ever-present risk of error.