Kindle Notes & Highlights
As much as the intern or first-year lawyer doesn’t like being yelled at for doing a bad job, their boss usually would rather just see the job done fast than deal with the emotions and errors of a real human being. So they will do it themselves with AI, which, if not yet the equivalent of a senior professional in many tasks, is often better than a new trainee. This could create a major training gap.
The closer we move to a world of Cyborgs and Centaurs in which the AI augments our work, the more we need to maintain and nurture human expertise. We need expert humans in the loop.
This difference in approach and outcome illustrates the gap between mere repetition and deliberate practice. The latter, with its elements of challenge, feedback, and incremental progression, is the true path to mastery.
Instead of just iterating designs, Raj engages in a structured reflection after every project, thanks to the insights from the AI. It’s akin to having a mentor watching over his shoulder at every step, nudging him toward excellence.
I have been making the argument that expertise is going to matter more than before, because experts may be able to get the most out of AI coworkers and are likely to be able to fact-check and correct AI errors. But even with deliberate practice, not everyone can become an expert in everything. Talent also plays a role.
There may be a role for humans who are experts at working with AI in particular fields. We just haven't quite pinpointed the specific skills or expertise that tap into the ability to "speak" to the AI. An AI future requires that we lean into building our own expertise as human experts. Since expertise requires facts, students will still need to learn reading, writing, history, and all the other basic skills required in the twenty-first century. We have already seen how this broad-based knowledge can help people get the most out of AI. And besides, we need to continue to have educated citizens…
What happens next is science fiction—or rather, science fictions, because there are many possible futures. I see four clear possibilities for what will happen in the next few years in the world of AI.
Scenario 1: As Good as It Gets
Slightly more plausible is a world where regulatory or legal action stops future AI development. Maybe AI safety experts convince governments to ban AI development, complete with threats of force against any who dare breach these limits. But given that most governments are only just beginning to consider regulation, and that there is no international consensus, it seems extremely unlikely that a global ban will happen soon or that regulation will make AI development grind to a halt.
Our already fragile consensus about what facts are real is likely to fall apart, quickly. Technological solutions are unlikely to save us.
Perhaps there will be a resurgence of trust in mainstream media, which might be able to act as arbiters of what images and stories are real, carefully tracking the provenance of each story and artifact. But that seems unlikely. A second option is that we further divide into tribes, believing the information we want to believe and ignoring as fake any information we don’t want to pay attention to. Soon, even the most basic facts will be in dispute.
AI has been increasing in ability at an exponential pace, but most exponential growth in technology eventually slows down. AI could hit that barrier soon.
Slower growth would also allow time for effective regulation to slow down the proliferation of dangerous uses. Coalitions of companies and governments, or perhaps open-source privacy advocates, could have the time to develop usage rules that allow people to establish their identity in a way that can be verified, removing some of the threat of impersonation.
Scenario 3: Exponential Growth
Not all technological growth slows down quickly. Moore’s Law, which has seen the processing capability of computer chips double roughly every two years, has been true for fifty years. AI might continue to accelerate in this way. One reason this might occur is the so-called flywheel—AI companies might use AI systems to help them create the next generation of AI software. Once this process starts, it may be hard to stop. And at this pace, AI becomes hundreds of times more capable in the next decade.
In this scenario, risks are more severe and less predictable.
And unlike in the previous scenario, our current governmental systems do not have time to adjust in the usual way. Instead, bad actors armed with AI are held in check by "good" AIs. But there is an Orwellian tinge to this solution.
There is also the danger of AI-tocracy, as ubiquitous surveillance enables both dictators and democracies to establish more control over their citizens. The world looks more like a cyberpunk struggle between authorities and hackers, all using AI systems.
Scenario 4: The Machine God
In this fourth scenario, machines reach artificial general intelligence (AGI) and some form of sentience. Human supremacy ends.
And we don't know whether AGI would help or hurt us, or how it would do either. Enough serious experts believe this risk is real that we need to take it seriously.
My agent, Rafe Sagalyn, gave me guidance at every step of the way, as well as a crash course on book proposals that helped me connect with the wonderful team at Portfolio.