The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
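To make the mechanism concrete, here is a minimal sketch (not from the book) of scaled dot-product self-attention, the core operation of the Transformer: each token scores every other token for relevance, then takes a relevance-weighted average of their representations. All shapes and values are illustrative.

```python
# Minimal self-attention sketch; shapes and values are made up for illustration.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a sequence of token vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how relevant each token is to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V  # each token becomes a relevance-weighted mix of the others

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))  # 5 tokens, 8-dim embeddings (invented)
out = attention(tokens, tokens, tokens)  # self-attention: Q, K, V from same tokens
print(out.shape)  # (5, 8): each token now carries context from the whole sequence
```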
The crazy thing is that no one is entirely sure why a token prediction system resulted in an AI with such seemingly extraordinary abilities. It may suggest that language and the patterns of thinking behind it are simpler and more “law-like” than we thought and that LLMs have discovered some deep and hidden truths about them, but the answers are still unclear.
Some researchers argue that almost all the emergent features of AI are due to these sorts of measurement errors and illusions, while others argue that we are on the edge of building a sentient artificial entity.
“human affairs, as we know them, could not continue.” In an AI singularity, hyperintelligent AIs appear, with unexpected motives.
A super-intelligent AI could, in theory, cure disease, solve global warming, and usher in an era of abundance, acting as a benevolent machine god.
The complication is that AI does not really plagiarize, in the way that someone copying an image or a block of text and passing it off as their own is plagiarizing. The AI stores only the weights from its pretraining, not the underlying text it trained on, so it reproduces a work with similar characteristics but not a direct copy of the original.
However, the more often a work appears in the training data, the more closely the underlying weights will allow the AI to reproduce the work.
The AI is great at the sonnet, but because of how it conceptualizes the world in tokens rather than words, it consistently produces poems of more or less than fifty words. Similarly, some unexpected tasks (like idea generation) are easy for AIs while other tasks that seem to be easy for machines to do (like basic math) are challenges for LLMs. To figure out the shape of the frontier, you will need to experiment.
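A quick way to see the token-versus-word mismatch for yourself is to run a real tokenizer over a line of verse. The sketch below uses the tiktoken library, which is an assumption for illustration, not something the book prescribes; exact token counts vary by tokenizer.

```python
# Why word counts are hard for LLMs: they "see" tokens, not words.
# Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

line = "Shall I compare thee to a summer's day?"
tokens = enc.encode(line)

print(len(line.split()), "words")  # 8 words
print(len(tokens), "tokens")       # typically around 10; words != tokens
print([enc.decode([t]) for t in tokens])  # words split into sub-word pieces
```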
this experimentation gives you the chance to become the best expert in the world in using AI for a task you know well. The reason for this stems from a fundamental truth about innovation: it is expensive for organizations and companies but cheap for individuals doing their job.
Workers who figure out how to make AI useful for their jobs will have a large impact.
As we grow more familiar with LLMs, we can not only harness their strengths more effectively but also preemptively recognize potential threats to our jobs, equipping ourselves for a future that demands the seamless integration of human and artificial intelligence.
what if we become too used to relying on AI? Throughout history, the introduction of new technologies has often sparked fears that we will lose important abilities by outsourcing tasks to machines.
The key is to keep humans firmly in the loop—to use AI as an assistive tool, not as a crutch.
AI works best with human help, and you want to be that helpful human.
Even if you spot the error, AIs are also good at justifying a wrong answer that they have already committed to, which can serve to convince you that the wrong answer was right all along!
work with it without being taken in by it.
provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations.
Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving. It also helps you form a working co-intelligence with the AI.
By actively participating in the AI process, you maintain control over the technology and its implications, ensuring that AI-driven solutions align with human values, ethical standards, and social norms. It also makes you responsible for the output of the AI, which can help prevent harm.
being good at being the human in the loop will mean that you will see the sparks of growing intelligence before others, giving you more of a chance to adapt to coming changes than people who do not work closely with AI.
Treat AI like a person (but tell it what kind of person it is).
Anthropomorphism is the act of ascribing human characteristics to something that is nonhuman. We’re prone to this: we see faces in the clouds, give motivations to the weather, and hold conversations with our pets.
While anthropomorphism might serve a useful purpose in the short term, it raises ethical questions about deception and emotional manipulation. Are we being “fooled” into believing these machines share our feelings? And could this illusion lead us to disclose personal information to these machines, not realizing that we are sharing with corporations or remote operators?
To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle.
The key is to give the LLM some guidance and direction on how to generate outputs that match your expectations and needs, to put it in the right “headspace” to give you interesting and unique answers.
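In practice, that “headspace” is usually set with a system message. The sketch below is one hedged illustration using the OpenAI Python client; the model name and persona text are invented assumptions, not recommendations from the book.

```python
# Minimal persona sketch: a system message tells the AI what kind of person it is.
# Requires `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are a skeptical, detail-oriented marketing editor. "
    "You favor concrete claims over hype and always ask for evidence."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[
        {"role": "system", "content": persona},  # who the AI is
        {"role": "user", "content": "Critique this tagline: 'The future of toothpaste.'"},
    ],
)
print(response.choices[0].message.content)
```

Changing only the persona line, and keeping the user prompt fixed, is a cheap way to see how strongly the assigned “headspace” shapes the output.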
viewing AI’s limitations as transient and remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
AI excels at tasks that are intensely human. It can write, analyze, code, and chat. It can play the role of marketer or consultant, increasing productivity by outsourcing mundane tasks.
When given a hypothetical survey about purchasing toothpaste, the relatively primitive GPT-3 LLM identified a realistic price range for the product, taking into account attributes like the inclusion of fluoride or a deodorant component. Essentially, the AI model weighed different product features and made trade-offs, just like a human consumer would. The researchers also found that GPT-3 can generate estimates of willingness to pay (WTP) for various product attributes consistent with existing research.
They not only process and analyze data but also appear to make nuanced judgments, parse complex concepts, and adapt their responses based on the information they are given.
Humans can be difficult to interact with, but perfect AI companions are a true near-term possibility.
But soon we will each have our own perfect echo chambers. It’s possible that these personalized AIs might ease the epidemic of loneliness that ironically affects our ever more connected world—just as the internet and social media connected dispersed subcultures.
hallucinations are a deep part of how LLMs work. They don’t store text directly; rather, they store patterns about which tokens are more likely to follow others. That means the AI doesn’t actually “know” anything. It makes up its answers on the fly. Plus, if it sticks too closely to the patterns in its training data, the model is said to be overfitted to that training data. Overfitted LLMs may fail to generalize to new or unseen inputs and generate irrelevant or inconsistent text—in short, their results are always similar and uninspired. To avoid this, most AIs add extra randomness in their answers.
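That extra randomness is typically controlled by a temperature parameter applied to the model’s next-token scores. Here is a minimal, self-contained sketch (not from the book) of how temperature reshapes the sampling distribution; the logits are invented for illustration.

```python
# Temperature sampling sketch: higher temperature flattens the next-token
# distribution, trading predictability for variety. Logits are made up.
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=1.0):
    """Sample a token index after scaling logits by temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2, -1.0]  # invented scores for four candidate tokens

for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, temperature=t) for _ in range(1000)]
    print(f"temperature={t}: counts {np.bincount(picks, minlength=4)}")
    # low t: almost always token 0; high t: picks spread across all four
```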
the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query.
But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
This is the paradox of AI creativity. The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful.
If you can link disparate ideas from multiple fields and add a little random creativity, you might be able to create something new.
LLMs are connection machines. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper meaning. Add in the randomness that comes with AI output, and you have a powerful tool for innovation.
other studies find that the most innovative people benefit the least from AI creative help. This is because, as creative as the AI can be, without careful prompting, the AI tends to pick similar ideas every time.
a large group of creative humans will usually generate a wider diversity of ideas than the AI.
recent research has shown that the “equal-odds rule” is true for creativity, which is that very creative people generate both more ideas and better ideas than other folks.
From a practical standpoint, the AI should be invited to any brainstorming session you hold.
you should expect that most of its ideas will be mediocre.
to find good novel ideas, we likely have to come up with many bad novel ideas because most new ideas are pretty bad. Fortunately, we are good at filtering out low-quality ideas, so if we can generate novel ideas quickly and at low cost, we are more likely to generate at least some high-quality gems. So we want the AI answers to be weird.
outsource some of the most difficult aspects of creativity.
Marketing writing, performance reviews, strategic memos—all these are within the capability of AI because they both leave room for interpretation and are relatively easy to fact-check.
These improvements were not limited to specific areas; the entire time distribution shifted to faster work, and the entire grade distribution shifted to higher quality.
AI teammates helped reduce productivity inequality.
To get the AI to do unique things, you need to understand parts of the culture more deeply than everyone else using the same AI systems. So now, in many ways, humanities majors can produce some of the most interesting “code.”
we need people who have deep or broad knowledge of unusual fields to use AI in ways that others cannot, developing unexpected and valuable prompts and testing the limits of how they work.
ChatGPT mostly serves as a substitute for human effort, not a complement to our skills.