Kindle Notes & Highlights
Read between September 16 and October 1, 2023
Almost every foundational technology ever invented, from pickaxes to plows, pottery to photography, phones to planes, and everything in between, follows a single, seemingly immutable law: it gets cheaper and easier to use, and ultimately it proliferates, far and wide.
AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years.
Moreover, attempting to ban development of new technologies is itself a risk: technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress.
This is the core dilemma: that, sooner or later, a powerful generation of technology leads humanity toward either catastrophic or dystopian outcomes. I believe this is the great meta-problem of the twenty-first century.
They finished with an alarming thought: a single person today likely “has the capacity to kill a billion people.” All it takes is motivation.
Why wasn’t I, why weren’t we all, taking it more seriously? Why do we awkwardly sidestep further discussion? Why do some get snarky and accuse people who raise these questions of catastrophizing or of “overlooking the amazing good” of technology? This widespread emotional reaction I was observing is something I have come to call the pessimism-aversion trap: the misguided analysis that arises when you are overwhelmed by a fear of confronting potentially dark realities, and the resulting tendency to look the other way.
Pessimism aversion is an emotional response, an ingrained gut refusal to accept the possibility of seriously destabilizing outcomes. It tends to come from those in secure and powerful positions with entrenched worldviews, people who can superficially cope with change but struggle to accept any real challenge to their world order.
The various technologies I’m speaking of share four key features that explain why this isn’t business as usual: they are inherently general and therefore omni-use, they hyper-evolve, they have asymmetric impacts, and, in some respects, they are increasingly autonomous.
Our ancestors, whose strong jaws constrained skull growth, spent their time relentlessly chewing and digesting food like primates today. Liberated from this mundane necessity by fire, they could spend more time doing interesting things like hunting energy-rich foods, fashioning tools, or building complex social networks. The campfire became a central hub of human life, helping establish communities and relationships and organizing labor. The evolution of Homo sapiens rode these waves. We are not just the creators of our tools. We are, down to the biological, the anatomical level, a product of …
Proliferation is catalyzed by two forces: demand and the resulting cost decreases, each of which drives technology to become even better and cheaper.
Although its history is one of enabling people to do more, increasing capabilities, driving improvements in well-being, it’s not a one-sided story: Technology creates more lethal and destructive weapons as well as better tools. It produces losers, eliminates some jobs and ways of life, and creates harm up to the planetary, existential scale of climate change. New technologies can be unsettling and destabilizing, alien and invasive. Technology causes problems, and always has.
As long as a technology is useful, desirable, affordable, accessible, and unsurpassed, it survives and spreads and those features compound. While technology doesn’t tell us when, or how, or whether to walk through the doors it opens, sooner or later we do seem to walk through them. There is no necessary relationship here, just a persistent empirical linkage throughout history.
In the space of around a hundred years, successive waves took humanity from an era of candles and horse carts to one of power stations and space stations. Something similar is going to occur in the next thirty years. In the coming decades, a new wave of technology will force us to confront the most foundational questions our species has ever faced. Do we want to edit our genomes so that some of us can have children with immunity to certain diseases, or with more intelligence, or with the potential to live longer? Are we committed to holding on to our place at the top of the evolutionary …
AI is still in an early phase. It may look smart to claim that AI doesn’t live up to the hype, and it’ll earn you some Twitter followers. Meanwhile, talent and investment keep pouring into AI research. I cannot imagine how this will not prove transformative in the end.
Remember that DNA is itself the most efficient data storage mechanism we know of—capable of storing data at millions of times the density of current computational techniques with near-perfect fidelity and stability. Theoretically, the entirety of the world’s data might be stored in just one kilogram of DNA.
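As a rough back-of-the-envelope check on that last claim (the figures below are standard illustrative values, not taken from the book), assume about 2 bits of information per base pair and a base-pair mass of roughly 650 daltons:

AVOGADRO = 6.022e23              # base pairs per mole
BP_MASS_G = 650 / AVOGADRO       # grams per base pair, roughly 1.1e-21 g
BITS_PER_BP = 2                  # four possible bases encode about 2 bits each

bytes_per_kg = (BITS_PER_BP / BP_MASS_G) * 1000 / 8
print(f"{bytes_per_kg / 1e21:.0f} zettabytes per kilogram")   # roughly 230 ZB

Estimates of the world’s total stored data run to a hundred zettabytes or more, so one kilogram of DNA really is in the right ballpark.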
The truth is that the curiosity of academic researchers or the will of motivated governments is insufficient to propel new breakthroughs into the hands of billions of consumers. Science has to be converted into useful and desirable products for it to truly spread far and wide. Put simply: most technology is made to earn money.
Technology entered a virtuous circle of creating wealth that could be reinvested in further technological development, all of which drove up living standards. But none of these long-term goals were really the primary objective of any single individual. In chapter 1, I argued that almost everything around you is a product of human intelligence. Here’s a slight correction: much of what we see around us is powered by human intelligence in direct pursuit of monetary gain.
AI scientists and engineers are among the best-paid people in the world, and yet what really gets them out of bed is the prospect of being first to a breakthrough or seeing their name on a landmark paper. Love them or hate them, technology magnates and entrepreneurs are viewed as unique lodestars of power, wealth, vision, and sheer will. Critics and fawning fans alike see them as expressions of ego, excelling at making things happen.
Find a successful scientist or technologist and somewhere in there you will see someone driven by raw ego, spurred on by emotive impulses that might sound base or even unethical but are nonetheless an under-recognized part of why we get the technologies we do. The Silicon Valley mythos of the heroic start-up founder single-handedly building an empire in the face of a hostile and ignorant world is persistent for a reason. It is the self-image technologists too often still aspire to, an archetype to emulate, a fantasy that still drives new technologies.
Even as the nation-state grows more powerful and entangled with everyday life, its grand bargain is that not only can centralized power enable peace and prosperity, but this power can be contained using a series of checks, balances, redistributions, and institutional forms. We often take for granted the delicate balance that has to be struck between extremes to maintain this. On the one hand the most dystopian excesses of centralized power must be avoided, and on the other we must accept regular intervention to maintain order.
Our system of nation-states isn’t perfect, far from it. Nonetheless, we must do everything to bolster and protect it. This book, in part, is my attempt to rally to its defense. Nothing else—no other silver bullet—will arrive in time to save us, to absorb the destabilizing force of the wave. There simply isn’t another option in the medium term.
Today, no matter how wealthy you are, you simply cannot buy a more powerful smartphone than is available to billions of people. This phenomenal achievement of civilization is too often overlooked. In the next decade, access to ACIs will follow the same trend. Those same billions will soon have broadly equal access to the best lawyer, doctor, strategist, designer, coach, executive assistant, negotiator, and so on. Everyone will have a world-class team on their side and in their corner.
The coming AIs make it easier than ever to identify and exploit weaknesses. They could even find legal or financial means of damaging corporations or other institutions, hidden points of failure in banking regulation or technical safety protocols. As the cybersecurity expert Bruce Schneier has pointed out, AIs could digest the world’s laws and regulations to find exploits, arbitraging legalities. Imagine a huge cache of documents from a company leaked. A legal AI might be able to parse this against multiple legal systems, figure out every possible infraction, and then hit that company with …
These tools will only temporarily augment human intelligence. They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing. They will eventually do cognitive labor more efficiently and more cheaply than many people working in administration, data entry, customer service (including making and receiving phone calls), writing emails, drafting summaries, translating documents, creating content, copywriting, and so on. In the face of an abundance of ultra-low-cost equivalents, the days of this kind of …
Economists like David Autor argue that new technology consistently raises incomes, creating demand for new labor. Technology makes companies more productive; it generates more money, which then flows back into the economy. Put simply, demand is insatiable, and this demand, stoked by the wealth technology has generated, gives rise to new jobs requiring human labor.
As we saw in chapter 4, AI’s rate of improvement is well beyond exponential, and there appears to be no obvious ceiling in sight. Machines are rapidly imitating all kinds of human abilities, from vision to speech and language. Even without fundamental progress toward “deep understanding,” new language models can read, synthesize, and generate eye-wateringly accurate and highly useful text. There are literally hundreds of roles where this single skill alone is the core requirement, and yet there is so much more to come from AI.
Demand for masseurs, cellists, and baseball pitchers won’t go away. But my best guess is that new jobs won’t come in the numbers or timescale to truly help. The number of people who can get a PhD in machine learning will remain tiny in comparison to the scale of layoffs. And, sure, new demand will create new work, but that doesn’t mean it all gets done by human beings.
The fully omni-use nature of the coming wave means it is found at every level: in every sector, business, subculture, group, and bureaucracy, in every corner of our world. It produces trillions of dollars in new economic value while also destroying certain existing sources of wealth. Some individuals are greatly enabled; others stand to lose everything.
Whatever the end point, we are heading to a place where unprecedented powers and abilities are out there, in the hands of already powerful actors who’ll no doubt use them to amplify their reach and further their own agenda.
Such concentrations will enable vast, automated megacorporations to transfer value away from human capital—work—and toward raw capital. Put all the inequalities resulting from concentration together, and it adds up to another great acceleration and structural deepening of an existing fracture. Little wonder there is talk of neo- or techno-feudalism—a direct challenge to the social order, this time built on something beyond even stirrups.
Imagine a future where small groups—whether in failing states like Lebanon or in off-grid nomad camps in New Mexico—provide AI-empowered services like credit unions, schools, and health care, services at the heart of the community often reliant on scale or the state. Where the chance to set the terms of society at a micro level becomes irresistible: come to our boutique school and avoid critical race theory forever, or boycott the evil financial system and use our DeFi product.
Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia. This is the essential dilemma of our age.
Nor is building safe and contained technology in itself sufficient. Solving the question of AI alignment doesn’t mean doing so once; it means doing it every time a sufficiently powerful AI is built, wherever and whenever that happens. You don’t just need to solve the question of lab leaks in one lab; you need to solve it in every lab, in every country, forever, even while those same countries are under serious political strain.
I think it’s easy to discount how much of our way of life is underwritten by constant technological improvements. Those historical precedents—the norm, remember, for every prior civilization—are screaming loud and clear. Standstill means a meager future: at best decline, more likely an implosion that could spiral alarmingly.
It was a foundational lesson for me: shareholder capitalism works because it is simple and clear, and governance models too have a tendency to default to the simple and clear. In the shareholder model, lines of accountability and performance tracking are quantified and very transparent. It may be possible to design more modern structures in theory, but operating them in practice is another story.
In the future, taxation needs to switch emphasis toward capital, not only funding a redistribution toward those adversely affected, but creating a slower and fairer transition in the process. Fiscal policy is an important valve in controlling this transition, a means of exercising control over those choke points and building state resilience at the same time.
A carefully calibrated shift in the tax burden away from labor would incentivize continued hiring and cushion disruptions in household life. Tax credits topping up the lowest incomes could be an immediate buffer in the face of stagnating or even collapsing incomes.
Some measure of anti-proliferation is necessary. And yes, let’s not shy away from the facts; that means real censorship, possibly well beyond national borders. There are times when this will be seen—perhaps rightly—as unbridled U.S. hegemony, Western arrogance, and selfishness. Quite honestly, I’m not always sure where the right balance is, but I now firmly believe that complete openness will push humanity off the narrow path.
Too many visions of the future start with what technology can or might do and work from there. That’s completely the wrong foundation. Technologists should focus not just on the engineering minutiae but on helping to imagine and realize a richer, social, human future in the broadest sense, a complex tapestry of which technology is just one strand. Technology is central to how the future will unfold—that’s undoubtedly true—but technology is not the point of the future, or what’s really at stake. We are.