Co-Intelligence: Living and Working with AI
Read between July 5 - July 7, 2025
Amara’s Law, named after futurist Roy Amara, says: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
Benjamin Bloom, an educational psychologist, published a paper in 1984 called “The 2 Sigma Problem.” In this paper, Bloom reported that the average student tutored one-to-one performed two standard deviations better than students educated in a conventional classroom environment. This means that the average tutored student scored higher than 98 percent of the students in the control group (though not all studies of tutoring have found as large an impact). Bloom called this the two sigma problem, because he challenged researchers and teachers to find methods of group instruction that could …
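The link between “two standard deviations better” and “higher than 98 percent of the control group” follows from the standard normal distribution; a quick illustrative check (not from the book, and assuming normally distributed scores):

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# A student two standard deviations above the control-group mean
# outscores this fraction of a normally distributed control group:
percentile = normal_cdf(2.0)
print(f"{percentile:.4f}")  # 0.9772, i.e. roughly the 98th percentile
```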
Every school or instructor will need to think hard about what AI use is acceptable: Is asking AI to provide a draft of an outline cheating? Requesting help with a sentence that someone is stuck on? Is asking for a list of references or an explainer about a topic cheating? We need to rethink education. We did it before, if in a more limited way.
As education researcher Sarah J. Banks writes, in the early days of their popularity in the mid-1970s, many teachers were eager to incorporate calculators into their classrooms, recognizing the potential for increased student motivation and engagement.
A mid-1970s survey found that 72 percent of teachers and laypeople did not approve of seventh-grade students using calculators. One concern was the inability to help students understand and identify their errors, for the calculators did not log the buttons that students pressed, making it difficult for teachers to see and correct mistakes. Early research similarly found that parents were worried that their children would become dependent on the technology and forget basic mathematical skills. Doesn’t that sound familiar?
We’ll find a practical consensus that will allow AI to be integrated into the learning process without compromising the development of critical skills. Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically.
I have made AI mandatory in all my classes for undergraduates and MBAs at the University of Pennsylvania. Some assignments ask students to “cheat” by having the AI create essays, which they then critique—a sneaky way of getting students to think hard about the work, even if they don’t write it. Some assignments allow unlimited AI use but hold the students accountable for the outcomes and facts produced by the AI, which mirrors how they might work with AI in their postschool jobs. Other assignments use the new capabilities of AI, asking students to conduct interviews with the AI before they …
Rather than distorting our education system around learning to work with AI via prompt engineering, we need to focus on teaching students to be the humans in the loop, bringing their own expertise to bear on problems. We know how to teach expertise. We try to do it in school all the time, but it is a hard process. AI might make it easier.
People have traditionally gained expertise by starting at the bottom. The carpenter’s apprentice, the intern at a magazine, the medical resident. These are usually pretty horrible jobs, but they serve a purpose. Only by learning from more experienced experts in a field, and trying and failing under their tutelage, do amateurs become experts. But that is likely to change rapidly with AI. As much as the intern or first-year lawyer doesn’t like being yelled at for doing a bad job, their boss usually would rather just see the job done fast than deal with the emotions and errors of a real human …
The closer we move to a world of Cyborgs and Centaurs in which the AI augments our work, the more we need to maintain and nurture human expertise. We need expert humans in the loop.
So let’s consider what it takes to build expertise. First, it requires a basis of knowledge.
Humans actually have many memory systems, and one of them, our working memory, is the brain’s problem-solving center, our mental workspace. We use our working memory’s stored data to search our long-term memory (a vast library of what we have learned and experienced) ...
It isn’t just a certain amount of practice time that is important (10,000 hours is not a magical threshold, no matter what you have read), but rather, as psychologist Anders Ericsson discovered, the type of practice. Experts become experts through deliberate practice, which is much harder than merely repeating a task multiple times. Instead, deliberate practice requires serious engagement and a continual ratcheting up of difficulty.
Raj, conversely, integrates an AI-driven architectural design assistant into his workflow. Each time he creates a design, the AI provides instantaneous feedback. It can highlight structural inefficiencies, suggest improvements based on sustainable materials, and even predict potential costs. Moreover, the AI offers comparisons between Raj’s designs and a vast database of other innovative architectural works, highlighting differences and suggesting areas of improvement. Instead of just iterating designs, Raj engages in a structured reflection after every project, thanks to the insights from the …
In field after field, we are finding that a human working with an AI co-intelligence outperforms all but the best humans working without an AI.
But a new type of expert may be arising. While, as we discussed in the last chapter, prompt crafting is unlikely to be useful for most people, that doesn’t mean it is entirely useless. Working with AI may itself be a form of expertise, and some people may simply be very good at it. They can adopt Cyborg practices better than others and have a natural (or learned) gift for working with LLM systems. For them, AI is a huge blessing that changes their place in work and society. Other people may get a small gain from these systems, but these new …
An AI future requires that we lean into building our own expertise as human experts. Since expertise requires facts, students will still need to learn reading, writing, history, and all the other basic skills required in the twenty-first century.
And besides, we need to continue to have educated citizens rather than delegate all our thinking to machines.
Humans, walking and talking bags of water and trace chemicals that we are, have managed to convince well-organized sand to pretend to think like us.
Regardless of which direction we head in, even without advances in AI, the way we relate to information will change.
Current systems are not good enough in their understanding of context, nuance, and planning. That is likely to change.
The possibility that we will soon hit technical limits for Large Language Models, as a number of scientists, including professor (and chief AI scientist at Meta) Yann LeCun, have argued.
However it happens, this slower improvement would still represent an impressive rate of change, though one we can understand. Think of how televisions get a bit better every year. You don’t need to throw out your old TV, but new ones are likely quite a bit better and cheaper than the one you bought a few years ago. With this sort of linear change, we can see the future coming and plan for it.
Early incidents where AIs are used to generate dangerous chemicals or weapons could result in effective regulation to slow down the proliferation of dangerous uses.
The $100 billion a year call-center market is transformed as AI agents start to supplement human ones.
Moore’s Law, which has seen the processing capability of computer chips double roughly every two years, has been true for fifty years.
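Doubling every two years compounds to an enormous multiplier over fifty years; a small illustrative arithmetic sketch (not a figure from the book):

```python
# Moore's Law: processing capability doubles roughly every two years.
years = 50
doublings = years // 2          # 25 doublings in fifty years
growth = 2 ** doublings
print(f"{doublings} doublings -> about {growth:,}x the original capability")
# 25 doublings -> about 33,554,432x
```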
There is some early evidence that LLMs may help us overcome the barriers that have made building working robots so challenging.
We will need to find meaningful new ways to occupy our free time, since so much of our current life is focused around work. In some ways, however, this shift has already been occurring. In 1865 the average British man worked 124,000 hours over his lifetime, as did people in the US and Japan. By 1980, British workers spent only 69,000 hours at work, despite living longer. In the US, we went from spending 50 percent of our lives working to 20 percent. Work hours have continued to fall since 1980, though more slowly. Still, UK workers now work 115 hours less a year than they did then, a decline of 6 …
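The lifetime figures above imply a steep drop in hours worked; a quick illustrative check of the arithmetic (the percentage is computed here, not quoted from the book):

```python
hours_1865 = 124_000   # average British lifetime work hours, 1865
hours_1980 = 69_000    # average British lifetime work hours, 1980
decline = 1 - hours_1980 / hours_1865
print(f"Lifetime work hours fell by about {decline:.0%}")  # about 44%
```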
The end of human dominance need not be the end of humanity. It may even be a better world for us, but it is no longer a world where humans are at the top, ending a good two-million-year run.
Achieving this level of machine intelligence means that AIs, not humans, are in charge. We have to hope they are properly aligned to human interests. They may then decide to watch over us as “machines of loving grace,” as the poem goes, solving our problems and making our lives better. Or they can view us as a threat, or an inconvenience, or a source of valuable molecules.
If we don’t get all the way to superintelligence, even a truly sentient machine would challenge much of what we think about what it means to be human.
The truth is that we don’t know if there is a straight road from today’s LLMs to building a true AGI. And we don’t know if AGI would help or hurt us, or how it would do either.
One of the godfathers of AI, Geoffrey Hinton, left the field in 2023, warning of the danger of AI with statements like “It’s quite conceivable that humanity is just a passing phase in the evolution of intelligence.”
Focusing solely on the risks or benefits of building super-intelligent machines robs us of the ability to consider the more likely second and third scenarios: worlds where AI is ubiquitous but very much in human control. And in those worlds, we get to make choices about what AI means.
Rather than worrying about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring. Unimaginative or stressed leaders may decide to use these new tools for surveillance and for layoffs. The less fortunate in developing countries may be disproportionately hurt by the shift in jobs. Educators may decide to use AI in ways that leave some students behind. And those are just obvious problems.