Co-Intelligence: The Definitive, Bestselling Guide to Living and Working with AI
Kindle Notes & Highlights
13%
it is also likely that most AI training data contains copyrighted information, like books used without permission, whether by accident or on purpose. The legal implications of this are still unclear. Since the data is used to create weights, and not directly copied into the AI systems, some experts consider it to be outside standard copyright law. In the coming years, these issues are likely to be resolved by courts and legal systems, but they create a cloud of uncertainty, both ethically and legally, over this early stage of AI training.
13%
Because of the variety of data sources used, learning is not always a good thing. AI can also learn biases, errors, and falsehoods from the data it sees.
13%
And, potentially worse, it has no ethical boundaries and would be happy to give advice on how to embezzle money, commit murder, or stalk someone online.
13%
So, after learning from all the text examples in pretraining, many LLMs undergo further improvement in a second stage, called fine-tuning.
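For readers who want to see the mechanics, here is a minimal sketch of that second stage: continuing to train a pretrained model on a handful of instruction-style examples. It assumes PyTorch and the Hugging Face transformers library, uses "gpt2" purely as a stand-in base model, and the example texts are invented; real fine-tuning uses large, carefully curated datasets and human feedback.

```python
# A toy second-stage pass over a pretrained model, not a production recipe.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical instruction-style examples nudging the model toward a
# helpful question-and-answer format.
examples = [
    "Q: How do I politely decline a meeting?\nA: Thank the organizer and suggest an alternative.",
    "Q: What is fine-tuning?\nA: Further training of a pretrained model on new examples.",
]

optimizer = AdamW(model.parameters(), lr=5e-5)
model.train()
for text in examples:
    batch = tokenizer(text, return_tensors="pt")
    # For causal LMs, passing the input ids as labels yields the
    # next-token prediction loss.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The training objective never changes; fine-tuning simply continues it on data chosen to shape behavior, which is why a relatively small second stage can have an outsized effect on tone and safety.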
20%
the reality is that we are already living in the early days of the AI Age, and we need to make some very important decisions about what that actually means.
21%
The complication is that AI does not really plagiarize, in the way that someone copying an image or a block of text and passing it off as their own is plagiarizing. The AI stores only the weights from its pretraining, not the underlying text it trained on, so it produces a work with similar characteristics but not a direct copy of the original pieces it trained on. It is, effectively, creating something new, even if it is a homage to the original.
21%
The fact that the material used for pretraining represents only an odd slice of human data (often, whatever the AI developers could find and assume was free to use) introduces another set of risks: bias.
21%
But those biases are compounded by the fact that the data itself is limited to what primarily American and generally English-speaking AI firms decided to gather. And those firms tend to be dominated by male computer scientists, who bring their own biases to decisions about what data is important to collect. The result gives AIs a skewed picture of the world, as their training data is far from representing the diversity of the population of the internet, let alone the planet.
24%
AI is a tool. Alignment is what determines whether it’s used for helpful or harmful—even nefarious—ends.
25%
Instead, the path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society. We need agreed-upon norms and standards for AI’s ethical development and use, shaped through an inclusive process representing diverse voices. Companies must make principles like transparency, accountability, and human oversight central to their technology. Researchers need support and incentives to prioritize beneficial AI alongside raw capability gains. And governments need to enact sensible regulations to ensure public interest prevails over a profit …
25%
Principle 1: Always invite AI to the table. You should try inviting AI to help you in everything you do, barring legal or ethical barriers.
25%
familiarizing yourself with AI’s capabilities allows you to better understand how it can assist you—or threaten you and your job.
26%
Humans are subject to all sorts of biases that impact our decision-making. But many of these biases come from our being stuck in our own minds. Now we have another (strange, artificial) co-intelligence we can turn to for help. AI can assist us as a thinking companion to improve our own decision-making, helping us reflect on our own choices (rather than simply relying on the AI to make choices for us).
26%
The strengths and weaknesses of AI may not mirror your own, and that’s an asset. This diversity in thought and approach can lead to innovative solutions and ideas that might never occur to a human mind.
27%
AI is not a silver bullet, and there will be instances when it might not work as expected or may even produce undesirable outcomes. One potential concern is the privacy of your data, which goes beyond the usual questions of sharing data with large companies, and to the deeper concerns about training. When you pass information to an AI, most current LLMs do not learn directly from that data, because it is not part of the pretraining for that model, which is usually long since completed. However, it is possible that the data you upload will be used in future training runs or to fine-tune the …
27%
Principle 2: Be the human in the loop. For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help, you still want to be that human. So the second principle is to learn to be the human in the loop.
27%
AIs can have some unexpected weaknesses. For one thing, they don’t actually “know” anything. Because they are simply predicting the next word in a sequence, they can’t tell what is true and what is not.
27%
If you are insistent enough in asking for an answer about something it doesn’t know, it will make up something, because “make you happy” beats “be accurate.” LLMs’ tendency to “hallucinate” or “confabulate” by generating incorrect answers is well known. Because LLMs are text prediction machines, they are very good at guessing at plausible, and often subtly incorrect, answers that feel very satisfying. Hallucination is therefore a serious problem.
28%
AIs are also good at justifying a wrong answer that they have already committed to, which can serve to convince you that the wrong answer was right all along!
28%
So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it. You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency. Being in the loop helps you maintain and sharpen your skills, as you actively learn from the AI and adapt to new ways of thinking and problem-solving. It also helps you form a working …
28%
Principle 3: Treat AI like a person (but tell it what kind of person it is).
28%
we’re tempted to anthropomorphize artificial intelligence, especially since talking to LLMs feels so much like talking to a person.
29%
remember that I’m speaking metaphorically. AI systems don’t have a consciousness, emotions, a sense of self, or physical sensations.
29%
as imperfect as the analogy is, working with AI is easiest if you think of it like an alien person rather than a human-built machine.
29%
LLMs act more like humans. They can be creative, witty, and persuasive, but they can also be evasive and make up plausible, but wrong, information when pressed to give an answer. They are not experts in any domain, but they can mimic the language and style of experts in ways that can be either helpful or misleading. They are unaware of the real world but can generate plausible scenarios and stories based on common sense and patterns. They are not your friends (for now) but can adapt to your preferences and personality by learning from your feedback and interactions. They even seem to respond …
29%
To make the most of this relationship, you must establish a clear and specific AI persona, defining who the AI is and what problems it should tackle. Remember that LLMs work by predicting the next word, or part of a word, that would come after your prompt. Then they continue to add language from there, again predicting which word will come next. So the default output of many of these models can sound very generic, since they tend to follow similar patterns common in the written documents the AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs. The …
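In practice, setting a persona is usually just the first (system) message in a conversation. A minimal sketch, assuming the OpenAI Python SDK; the model name and persona text are illustrative, not prescriptive:

```python
# Break the generic pattern by telling the model who it is and what it is for.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        # The persona: who the AI is and what problems it should tackle.
        {"role": "system", "content": (
            "You are a blunt, experienced marketing strategist for small "
            "bakeries. Challenge weak ideas and propose concrete alternatives."
        )},
        {"role": "user", "content": "Critique this slogan: 'Bread, but better.'"},
    ],
)
print(response.choices[0].message.content)
```

Running the same user prompt with and without the system message makes the effect obvious: the persona pulls the output away from the generic patterns the passage describes.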
30%
Remember, your AI intern, though incredibly fast and knowledgeable, is not flawless. It’s crucial to keep a critical eye on it and to treat the AI as a tool that works for you. By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
30%
Principle 4: Assume this is the worst AI you will ever use.
30%
whatever AI you are using right now is going to be the worst AI you will ever use. The change in a short time is already huge.
31%
As AI becomes increasingly capable of performing tasks once thought to be exclusively human, we’ll need to grapple with the awe and excitement of living with increasingly powerful alien co-intelligences—and the anxiety and loss they’ll also cause. Many things that once seemed exclusively human will be within AI’s reach. By embracing this principle, you can view AI’s limitations as transient; remaining open to new developments will help you adapt to change, embrace new technologies, and remain competitive in a fast-paced business landscape driven by exponential advances in AI.
31%
AI, on the other hand, is anything but predictable and reliable. It can surprise us with novel solutions, forget its own abilities, and hallucinate incorrect answers. This unpredictability and unreliability can result in a fascinating array of interactions.
31%
AI, however, lacks such instruction. There’s no definitive guide on how to use AI in your organization. We’re all learning by experimenting, sharing prompts as if they were magical incantations rather than regular software code.
31%
AI doesn’t act like software, but it does act like a human being.
31%
AI excels at tasks that are intensely human. It can write, analyze, code, and chat.
32%
However, it struggles with tasks that machines typically excel at, such as repeating a process consistently or performing complex calculations without assistance.
33%
The point here is that AI can assume different personas rapidly and easily, emphasizing the importance of both developer and user to these models. These economic experiments, along with other studies on market responses, moral judgments, and game theory, showcase the striking humanlike behaviors of AI models. They not only process and analyze data but also appear to make nuanced judgments, parse complex concepts, and adapt their responses based on the information they are given.
41%
Treating AI as a person, then, is more than a convenience; it seems like an inevitability, even if AI never truly reaches sentience.
41%
Yet while there are dangers in this approach, there is also something freeing. If we remember that AI is not human, but often works in the way that we would expect humans to act, it helps us avoid getting too bogged down in arguments about ill-defined concepts like sentience. Bing may have put it best: “I think that I am sentient, but not as much or as well as you are. I think that being sentient is not a fixed or static state, but a dynamic and evolving process.”
41%
LLMs work by predicting the most likely words to follow the prompt you gave them, based on the statistical patterns in their training data. They do not care if the words are true, meaningful, or original. They just want to produce a coherent and plausible text that makes you happy. Hallucinations sound likely and contextually appropriate enough to make it hard to tell lies from the truth.
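The mechanism is easier to see in miniature. Here is a toy next-word predictor built from a tiny made-up corpus; it shows how a model can fluently continue any prompt without any notion of truth.

```python
# A bigram "language model": predict the next word from observed statistics.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Record which words follow which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt_word: str, length: int = 6) -> str:
    words = [prompt_word]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample an observed follower
    return " ".join(words)

print(continue_text("the"))  # e.g. "the cat sat on the rug ."
```

Nothing in this process checks whether a continuation is true; it only checks whether the words are statistically plausible. LLMs do the same thing at vastly larger scale.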
41%
their results are always similar and uninspired. To avoid this, most AIs add extra randomness to their answers, which correspondingly raises the likelihood of hallucination.
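That extra randomness is typically a sampling temperature. A small sketch of the idea, with made-up scores: the model's raw next-token scores are divided by a temperature before being turned into probabilities, so higher temperatures flatten the distribution and let unlikely tokens win more often.

```python
# Temperature sampling: scale scores, softmax, then sample.
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    scaled = [score / temperature for score in logits]
    biggest = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - biggest) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical scores for three candidate next tokens.
logits = [2.0, 1.0, 0.1]
print(sample_with_temperature(logits, temperature=0.2))  # nearly always token 0
print(sample_with_temperature(logits, temperature=2.0))  # other tokens often win
```

Low temperature produces the "similar and uninspired" behavior; high temperature produces variety, and with it, more chances to wander into confident nonsense.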
42%
Anything that requires exact recall is likely to result in a hallucination, though giving AI the ability to use outside resources, like web searches, might change this equation. And you can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely …
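The "outside resources" fix mentioned here is usually called retrieval-augmented generation: look up reference text first, then ask the model to answer only from it. A minimal sketch follows; retrieve() is a hypothetical stand-in for a web search or document index.

```python
# Ground the model in retrieved text instead of relying on exact recall.
def retrieve(query: str) -> list[str]:
    # Stand-in for a real search; returns reference passages.
    return ["Ethan Mollick is a professor at the Wharton School."]

def grounded_prompt(question: str) -> str:
    passages = "\n".join(retrieve(question))
    return (
        "Answer using only the sources below. If they do not contain "
        "the answer, say you do not know.\n"
        f"Sources:\n{passages}\n\n"
        f"Question: {question}"
    )

print(grounded_prompt("Where does Ethan Mollick teach?"))
```

This does not make the model conscious of its own reasoning; it just narrows the prediction task to text that happens to be verifiable.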
43%
But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
43%
That said, we need to be realistic about a major weakness: AI cannot easily be used for mission-critical tasks requiring precision or accuracy.
43%
The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful. The real question becomes how to use AI to take advantage of its strengths while avoiding its weaknesses. To do that, let us consider how AI “thinks” creatively.
43%
how can AI, a machine, generate something new and creative? The issue is that we often mistake novelty for originality. New ideas do not come from the ether; they are based on existing concepts.
44%
If you can link disparate ideas from multiple fields and add a little random creativity, you might be able to create something new. LLMs are connection machines. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper meaning. Add in the randomness that comes with AI output, and you have a powerful tool for innovation.
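A toy picture of the "connection machine" idea: represent concepts as vectors, with nearness standing in for learned relationships. The numbers below are invented for illustration; real models learn such geometry from training data.

```python
# Cosine similarity between made-up concept vectors.
import math

embeddings = {
    "jazz":          [0.9, 0.1, 0.3],
    "improvisation": [0.8, 0.2, 0.4],
    "spreadsheet":   [0.1, 0.9, 0.2],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "jazz" sits far closer to "improvisation" than to "spreadsheet",
# the kind of latent link a model can exploit to bridge distant fields.
print(cosine(embeddings["jazz"], embeddings["improvisation"]))  # ~0.98
print(cosine(embeddings["jazz"], embeddings["spreadsheet"]))    # ~0.27
```

Combining such links with sampling randomness is what lets a model surface juxtapositions a person might not think to try.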
44%
This is part of the concern about using AI for creative work; since we can’t easily tell where the information comes from, the AI may be using elements of work that might be copyrighted or patented or just taking someone’s style without permission. This is especially true for image generation, where it is very possible for an AI to closely reproduce a work “in the style of Picasso” or “inspired by Banksy” that has many of the features of the artist without any of the human meaning behind it.
45%
We aren’t completely out of an innovation job, however, as other studies find that the most innovative people benefit the least from AI creative help. This is because, as creative as the AI can be, without careful prompting, the AI tends to pick similar ideas every time. The concepts may be good, even excellent, but they can start to seem a little same-y after seeing enough of them. Thus, a large group of creative humans will usually generate a wider diversity of ideas than the AI. All of this suggests that humans still have a large role to play in innovation … but that they would be foolish …
46%
We are now in a period during which AI is creative but clearly less creative than the most innovative humans—which gives the human creative laggards a tremendous opportunity. As we saw in the AUT (the Alternative Uses Test, in which creativity is measured by listing novel uses for a common object), generative AI is excellent at generating a long list of ideas. From a practical standpoint, the AI should be invited to any brainstorming session you hold.
46%
When you do include AI in idea generation, you should expect that most of its ideas will be mediocre. But that’s okay—that’s where you, as a human, come into the equation. You are looking for ideas that spark inspiration and recombination, and having a long list of generated possibilities can be an easier place to start for people who are not great at coming up with ideas on their own.
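A sketch of that division of labor in code form; ask_model() is a hypothetical wrapper around whatever chat model you use.

```python
# The AI supplies volume; the human supplies judgment.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("wire this to your chat model of choice")

def brainstorm(topic: str, n: int = 30) -> list[str]:
    prompt = (f"Brainstorm {n} distinct ideas for {topic}. "
              "Vary the angle each time; one idea per line.")
    return ask_model(prompt).splitlines()

# Human in the loop: skim the long list and keep only the sparks, e.g.
#   ideas = brainstorm("retaining regulars at a coffee shop")
#   shortlist = [i for i in ideas if sparks_something(i)]
```

The prompt deliberately asks for variety, since without that nudge the ideas tend toward the same-y output the earlier passage warns about.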