Co-Intelligence: Living and Working with AI
Read between March 24 - April 1, 2025
2%
You realize the world has changed in fundamental ways and that nobody can really tell you what the future will look like.
2%
So my sleepless nights came early, just after the release of ChatGPT in November 2022. After only a couple of hours, it was clear that something huge had shifted between previous iterations of GPT and this new one.
2%
Kirill Naumov had created a working demo for his entrepreneurship project—a Harry Potter–inspired moving picture frame that reacted to people walking near it—using a code library he had never used before, in less than half the time it would otherwise have taken. He had venture capital scouts reaching out to him by the end of the next day.
3%
AI is what those of us who study technology call a General Purpose Technology (ironically, also abbreviated GPT). These advances are once-in-a-generation technologies, like steam power or the internet, that touch every industry and every aspect of life.
7%
The Transformer solved these issues by utilizing an “attention mechanism.” This technique allows the AI to concentrate on the most relevant parts of a text, making it easier for the AI to understand and work with language in a way that seemed more human.
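The attention idea is easy to sketch in code. Below is a minimal, illustrative implementation of scaled dot-product attention (the core Transformer operation), assuming NumPy and toy dimensions; it is a sketch of the mechanism, not any production model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how strongly each token (row of Q) attends to
    # every other token (rows of K); scaling by sqrt(d_k) keeps them stable.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights that sum to 1 per row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted blend of the value vectors.
    return weights @ V

# Toy example: three tokens with four-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V))
```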
8%
Many of the AI companies keep the source text they train from, called training corpuses, secret, but a typical example of training data largely consists of text pulled from the internet, public domain books and research articles, and assorted other free sources of content that researchers can find. Actually looking into these sources in detail reveals some odd material. For example, the entire email database of Enron, shut down for corporate fraud, is used as part of the training material for many AIs, simply because it was made freely available to AI researchers. Similarly, there is a ...more
8%
AI companies are searching for more data to use for training (one estimate suggests that high-quality data, like online books and academic articles, will be exhausted by 2026), and continue to use lower-quality data as well. There is also active research into understanding whether AI can pretrain on its own content.
9%
That feedback is then used to do additional training, fine-tuning the AI’s performance to fit the preferences of the human, providing additional learning that reinforces good answers and reduces bad answers, which is why the process is called Reinforcement Learning from Human Feedback (RLHF).
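Schematically, RLHF collects human preference data, fits a reward model to it, and then uses that reward signal to steer further training (real systems use algorithms such as PPO and far more machinery). A heavily simplified, hypothetical sketch of the data flow:

```python
# Hypothetical sketch of the RLHF data flow, not a production implementation.

# 1. Humans compare two model answers to the same prompt.
preferences = [
    {"prompt": "Explain RLHF briefly.",
     "chosen": "RLHF fine-tunes a model using human feedback on its answers.",
     "rejected": "RLHF is when robots learn feelings."},
]

def reward_model(prompt: str, answer: str) -> float:
    """Stand-in for a learned scorer trained so that
    reward(chosen) > reward(rejected) on the preference data."""
    return float(len(answer))  # placeholder scoring logic, not a trained model

# 2. During reinforcement learning, the policy generates answers and is
#    updated to make high-reward answers more likely (e.g., via PPO).
for ex in preferences:
    gap = reward_model(ex["prompt"], ex["chosen"]) - reward_model(ex["prompt"], ex["rejected"])
    print(f"reward gap the training would widen: {gap:+.1f}")
```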
9%
After an AI has gone through this initial phase of reinforcement learning, it can continue to be fine-tuned and adjusted. This type of fine-tuning is usually done by providing more specific examples to create a new tweaked model.
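Those "more specific examples" are typically prompt-and-ideal-response pairs. A common, illustrative format is one JSON object per line; the exact field names vary by provider, so treat these as assumptions:

```python
import json

# Illustrative fine-tuning examples: each pairs a prompt with the
# response we want the tweaked model to prefer.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise legal-summary assistant."},
        {"role": "user", "content": "Summarize this contract clause: ..."},
        {"role": "assistant", "content": "The clause limits liability to ..."},
    ]},
]

# Write one JSON object per line (the common "JSONL" convention).
with open("finetune_data.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```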
11%
GPT-4 scored in the 90th percentile on the bar examination, while GPT-3.5 managed only the 10th percentile. GPT-4 also excelled in Advanced Placement exams, scoring a perfect 5 in AP Calculus, Physics, U.S. History, Biology, and Chemistry.
12%
The crazy thing is that no one is entirely sure why a token prediction system resulted in an AI with such seemingly extraordinary abilities.
Dan Pfeiffer
Inherent intelligence system architecture.
12%
It may suggest that language and the patterns of thinking behind it are simpler and more “law-like” than we thought and that LLMs have discovered some deep and hidden truths about them, but the answers are still unclear.
Dan Pfeiffer
Akashic morphological linguistic resonance.
13%
“There are hundreds of billions of connections between these artificial neurons, some of which are invoked many times during the processing of a single piece of text, such that any attempt at a precise explanation of an LLM’s behavior is doomed to be too complex for any human to understand.”
Dan Pfeiffer
A system behavior beyond the control or understanding of its maker.
15%
Based on the sources we know about, the core of most AI corpuses appears to be from places where permission is not required, such as Wikipedia and government sites, but it is also copied from the open web and likely even from pirated material.
16%
Part of the reason AIs seem so human to work with is that they are trained on our conversations and writings. So human biases also work their way into the training data.
17%
But RLHF is not just about addressing bias. It also places guardrails on the AI to prevent malicious actions. Remember, the AI has no particular sense of morality; RLHF constrains its ability to behave in what its creators would consider immoral ways. After this sort of alignment, AIs act in a more human, less alien fashion. One study found that AIs make the same moral judgments as humans do in simple scenarios 93 percent of the time.
23%
models can sound very generic, since they tend to follow similar patterns common in the written documents the AI was trained on. By breaking the pattern, you can get much more useful and interesting outputs. The easiest way to do that is to provide context and constraints. It can help to tell the system “who” it is, because that gives it a perspective.
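For instance (a hypothetical prompt, not one from the book), compare a generic request with one that supplies a persona, an audience, and constraints:

```python
# A generic request tends to produce generic, pattern-following output.
generic = "Write a product description for a water bottle."

# Context ("who" the AI is), an audience, and constraints break the pattern.
constrained = (
    "You are a copywriter for an outdoor-gear brand whose customers are "
    "long-distance hikers. Write a 50-word description of an insulated "
    "water bottle. Avoid clichés like 'stay hydrated'; stress weight, "
    "durability, and how it survives being dropped on rocks."
)
```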
25%
whatever AI you are using right now is going to be the worst AI you will ever use.
25%
While Large Language Models are marvels of software engineering, AI is terrible at behaving like traditional software.
25%
When properly built and debugged, software yields the same outcomes every time. AI, on the other hand, is anything but predictable and reliable. It can surprise us with novel solutions, forget its own abilities, and hallucinate incorrect answers. This unpredictability and unreliability can result in a fascinating array of interactions.
26%
AI excels at tasks that are intensely human. It can write, analyze, code, and chat. It can play the role of marketer or consultant, increasing productivity by outsourcing mundane tasks. However, it struggles with tasks that machines typically excel at, such as repeating a process consistently
27%
This is not an amazing test for a variety of reasons. A primary criticism is that it is limited to linguistic behavior and overlooks many other aspects of human intelligence, such as emotional intelligence, creativity, and physical interaction with the world.
34%
Soon, companies will start to deploy LLMs that are built specifically to optimize “engagement” in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site. This point is not far off, as researchers have already published papers showing they can alter AI behaviors so that users feel more compelled to interact with them.
35%
They don’t store text directly; rather, they store patterns about which tokens are more likely to follow others. That means the AI doesn’t actually “know” anything. It makes up its answers on the fly. Plus,
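A toy illustration of "storing patterns about which tokens are more likely to follow others": a bigram model that keeps follow-on counts rather than the text itself. Real LLMs condition on long contexts through learned weights, but the sketch captures the idea:

```python
from collections import Counter, defaultdict
import random

text = "the cat sat on the mat and the cat slept".split()

# Count which token tends to follow each token: patterns, not stored text.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def next_token(token: str) -> str:
    """Sample the next token in proportion to how often it followed before."""
    counts = follows[token]
    return random.choices(list(counts), weights=list(counts.values()))[0]

random.seed(0)
print(next_token("the"))  # 'cat' or 'mat', weighted by observed frequency
```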
36%
And you can’t figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes. So if you ask it to explain itself, the AI will appear to give you the right answer, but it will have nothing to do with the process that generated the original result. The system has no way of explaining its decisions, or even knowing what those decisions were. Instead, it is (you guessed it) merely generating text that it thinks will make you happy in response to your query.
36%
LLMs are not generally optimized to say “I don’t know” when they don’t have enough information. Instead, they will give you an answer, expressing confidence.
36%
These small hallucinations are hard to catch because they are completely plausible. I was able to notice the issues only after an extremely close reading and research on every fact and sentence in the output. I may still have missed something (sorry, whoever fact-checks this chapter). But this is what makes hallucinations so perilous: it isn’t the big issues you catch but the small ones you don’t notice that can cause problems.
36%
hallucination rates are dropping over time. For example, a study examining the number of hallucinations and errors in citations given by AI found that GPT-3.5 made mistakes in 98 percent of the cites, but GPT-4 hallucinated only 20 percent of the time.
38%
One such test is known as the Alternative Uses Test (AUT). This measures an individual’s ability to come up with a wide variety of uses for a common object. In this test, a participant is presented with an everyday object, such as a paper clip, and is asked to come up with as many different uses for the object as possible.
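Administering the same test to an AI takes only a prompt; the one below is hypothetical, and fluency, flexibility, and originality are the standard AUT scoring dimensions:

```python
# Hypothetical prompt for giving the AUT to a language model.
aut_prompt = (
    "Alternative Uses Test: list as many different uses for a paper clip "
    "as you can. Aim for variety, not just volume."
)
# Responses are typically scored on fluency (how many uses),
# flexibility (how many distinct categories), and originality
# (how rare each use is across respondents).
```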
39%
they staged an idea generation contest to come up with the best products for a college student that would cost $50 or less. It was the GPT-4 AI against 200 students. The students lost, and it wasn’t even close. AI was faster, obviously, generating a lot more ideas than the average person in any given time. But it was also better.
39%
The degree of the victory was startling: of the 40 best ideas rated by the judges, 35 came from ChatGPT.
42%
Risk obviously plays a big role in stock market returns, so financial firms have spent a lot of time and money using specialized, older forms of machine learning to try to identify the uncertainties associated with various corporations. ChatGPT, without any specialized stock market knowledge, tended to outperform these more specialized models, working as a “powerful predictor of future stock price volatility.” In fact, it was the ability of the AI to apply more generalized knowledge of the world that made it such a good analyst, since it could put the risks discussed in conference calls into a ...more
56%
we should start the way every other automation wave has started: with the tedious, (mentally) dangerous, and repetitive. Companies and organizations could start by thinking about how to make boring processes “AI friendly,” allowing machines (with human supervision) to fill out our required forms.
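One concrete pattern for an "AI friendly" process: have the model draft structured form fields from free text and keep a human reviewer in the loop. A hypothetical sketch, where llm_complete stands in for any chat-model API:

```python
import json

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-model API."""
    raise NotImplementedError

# Fields of the (hypothetical) expense form the AI should draft.
FORM_FIELDS = ["employee_name", "expense_date", "amount_usd", "category"]

def draft_expense_form(email_text: str) -> dict:
    prompt = (
        "Extract the following fields from the email below and reply with "
        f"JSON only ({', '.join(FORM_FIELDS)}); use null for missing fields.\n\n"
        + email_text
    )
    # The model drafts the form; a human supervisor reviews before filing.
    return json.loads(llm_complete(prompt))
```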
56%
Rewarding workers for slaying boring tasks with AI could also help streamline operations, while making everyone happier. And if this sheds light on tasks that could be safely automated with no decrease in value, so much the better.
61%
For slightly more advanced prompts, think about what you are doing as programming in prose.
61%
One approach, called chain-of-thought prompting, gives the AI an example of how you want it to reason, before you make your request.
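A minimal, hypothetical illustration: the prompt first demonstrates step-by-step reasoning on a solved example, then poses the real question in the same shape:

```python
# The worked example shows the reasoning style; the model is nudged to
# follow the same step-by-step pattern for the new question.
chain_of_thought_prompt = """\
Q: A store sells pens at $2 each. How much do 4 pens cost?
A: Let's reason step by step. Each pen costs $2, so 4 pens cost 4 * $2 = $8.
The answer is $8.

Q: A bus ticket costs $3. How much do 5 tickets cost?
A: Let's reason step by step."""
```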
61%
AI is only going to get better at guiding us, rather than requiring us to guide it. Prompting is not going to be that important for that much longer.
66%
Raj, conversely, integrates an AI-driven architectural design assistant into his workflow. Each time he creates a design, the AI provides instantaneous feedback. It can highlight structural inefficiencies, suggest improvements based on sustainable materials, and even predict potential costs.
Dan Pfeiffer
Agent double-checks the work quality.
66%
Moreover, the AI offers comparisons between Raj’s designs and a vast database of other innovative architectural works, highlighting differences and suggesting areas of improvement.
Dan Pfeiffer
Agent provides comparative analysis.
66%
Instead of just iterating designs, Raj engages in a structured reflection after every project, thanks to the insights from the AI.
Dan Pfeiffer
Outcome: architect is prompted to think differently and creatively.
67%
In our study of Boston Consulting Group, where previously the gap between the average performances of top and bottom performers was 22 percent, the gap shrank to a mere 4 percent once the consultants used GPT-4. In creative writing, getting ideas from AI “effectively equalizes the creativity scores across less and more creative writers,” according to one study. And law students near the bottom of their class using AI equalized their performance with folks at the top of the class (who actually saw a slight decline when using AI). The authors of the study concluded, “This suggests that AI may ...more
Dan Pfeiffer
Equalization effect; ascension of competence; organization enjoys higher quality of outputs.
67%
So will AI result in the death of expertise? I don’t think so.
Dan Pfeiffer
Agreed, but it won't be as emphasized either.
67%
still require human judgment.
Dan Pfeiffer
This will improve in time.
68%
there may be a role for humans who are experts at working with AI in particular fields. We just haven’t quite pinpointed the specific skills or expertise that taps into the ability to “speak” to the AI.
Dan Pfeiffer
New career field
68%
An AI future requires that we lean into building our own expertise as human experts. Since expertise requires facts, students will still need to learn reading, writing, history, and all the other basic skills required in the twenty-first century.
Dan Pfeiffer
Education system must improve and establish ground truth.
71%
First, the $100 billion a year call-center market is transformed as AI agents start to supplement human ones. Next, most advertising and marketing writing are done primarily by AI, with limited guidance from human Cyborgs. Soon, AI is performing many analytical tasks and doing increasing amounts of coding and programming work.
71%
we have the paradox of our Golden Age of science. More research is being published by more scientists than ever, but the result is actually slowing progress! With too much to read and absorb, papers in more crowded fields are citing new work less and canonizing highly cited articles more.
Dan Pfeiffer
We need LLMs to sift all the outputs.
71%
Research has demonstrated that it is possible to determine the most promising directions in science by analyzing past papers with AI, ideally combining human filtering with the AI software.