Scott Aaronson's Blog
August 13, 2025
Updates!
(1) My 8-year-old son asked me last week, “daddy, did you hear that GPT-5 is now out?” So yes, I’m indeed aware that GPT-5 is now out! I’ve just started playing around with it. For detailed reports on what’s changed and how impressive it is compared to previous models, see for example Zvi #1, #2, #3. Briefly, it looks like there are major reductions in hallucinations and sycophancy, and improvements in practical usefulness for coding and other tasks, even while the “raw intelligence” is unlikely to blow away someone who was already well-acquainted with o3, Opus 4, and other state-of-the-art models, the way ChatGPT and then GPT-4 blew away people who had no idea what was possible in late 2022 and early 2023. How impressive the results look depends partly on which of several GPT-5 models your query gets routed to, which you don’t entirely control. Anyway, there’s grist here for the people who claim that progress toward AGI is slowing down, but also grist for the people who claim that it continues pretty much as expected within our post-ChatGPT reality!
(2) In other belated news, OpenAI and DeepMind (and then other companies) announced that they achieved Gold Medal performance on the International Math Olympiad, by solving 5 of the 6 problems (there was one problem, the 6th and hardest, that all of the AIs struggled with). Most importantly, this means that I’ve won $100 from my friend Ernest Davis, AI expert at NYU, who bet me $100 that no AI would earn a Gold Medal at the International Math Olympiad by December 4, 2026. Even though I’m normally risk-averse and reluctant to take bets, I considered this one to be extremely safe, and indeed I won it with more than a year to spare.
(3) I’ve signed an open letter to OpenAI, along with many of my fellow former OpenAI employees as well as distinguished scientists and writers (Geoffrey Hinton, Stuart Russell, Sheldon Glashow, Sean Carroll, Matt Yglesias…), asking for more transparency about OpenAI’s continuing efforts to change its own structure. The questions basically ask OpenAI to declare, in writing, whether it has or hasn’t now completely abandoned the original nonprofit goals with which the organization was founded in 2015.
(4) At Lighthaven, the rationalist meeting space in Berkeley that I recently visited (and that our friend Cade Metz recently cast aspersions on in the New York Times), there’s going to be a writer’s residency called Inkhaven for the whole month of November. The idea—which I love—is that you either write a new blog post every day, or else you get asked to leave (while you also attend workshops, etc. to improve your writing skills). I’d attend myself for the month if teaching and family obligations didn’t conflict; someone standing over me with a whip to make me write is precisely what I need these days! As it is, I’m one of the three advisors to Inkhaven, along with Scott Alexander and Gwern, and I’ll be visiting for a long weekend to share my blogging wisdom, such as I have. Apply now if you’re interested!
(5) Alas, the Springer journal Frontiers of Computer Science has published a nonsense paper, entitled “SAT requires exhaustive search,” claiming to solve (or dissolve, or reframe, or something) the P versus NP problem. It looks indistinguishable from the stuff I used to get in my inbox every week—and now, in the ChatGPT era, get every day. That this was published indicates a total breakdown of the peer review process. Worse, when Eric Allender, Ryan Williams, and others notified the editors of this, asking for the paper to be retracted, the editor-in-chief declined to do so: see this guest post on Lance’s blog for a detailed account. As far as I’m concerned, Frontiers of Computer Science has now completely discredited itself as a journal; publication there means nothing more than publication in viXra. Minus 10 points for journals themselves as an institution, plus 10 points for just posting stuff online and letting it be filtered by experts who care.
(6) Uma Girish and Rocco Servedio released an arXiv preprint called Forrelation is Extremally Hard. Recall that, in the Forrelation problem, you’re given oracle access to two n-bit Boolean functions f and g, and asked to estimate the correlation between f and the Fourier transform of g. I introduced this problem in 2009, as a candidate for an oracle separation between BQP and the polynomial hierarchy—a conjecture that Ran Raz and Avishay Tal finally proved in 2018. What I never imagined was that Forrelation could lead to an oracle separation between EQP (that is, Exact Quantum Polynomial Time) and the polynomial hierarchy. For that, I thought you’d need to go back to the original Recursive Fourier Sampling problem of Bernstein and Vazirani. But Uma and Rocco show, using “bent Boolean functions” (get bent!) and totally contrary to my intuition, that the exact (zero-error) version of Forrelation is already classically hard, taking Ω(2^{n/4}) queries by any randomized algorithm. They leave open whether exact Forrelation needs ~Ω(2^{n/2}) randomized queries, which would match the upper bound, and also whether exact Forrelation is not in PH.
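(For reference, the quantity to be estimated is, in the usual normalization,

$$ \Phi_{f,g} \;=\; \frac{1}{2^{3n/2}} \sum_{x,y \in \{0,1\}^n} f(x)\,(-1)^{x\cdot y}\,g(y), $$

where f and g map {0,1}^n to {±1} and x·y is the inner product mod 2. A quantum algorithm can estimate Φ_{f,g} to constant precision with O(1) queries, which is what makes randomized lower bounds like Ω(2^{n/4}) so dramatic.)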
(7) The Google quantum group, to little fanfare, published a paper entitled Constructive interference at the edge of quantum ergodic dynamics. Here, they use their 103-qubit superconducting processor to measure Out-of-Time-Order Correlators (OTOCs) in a many-body scrambling process, and claim to get a verifiable speedup over the best classical methods. If true, this is a great step toward verifiable quantum supremacy for a useful task, for some definition of “useful.”
(8) Last night, on the arXiv, the team at USTC in China reported that it’s done Gaussian BosonSampling with 3,050 photons and 8,176 modes. They say that this achieves quantum supremacy, much more clearly than any previous BosonSampling demonstration, beating (for example) all existing simulations based on tensor network contraction. Needless to say, this still suffers from the central problem of all current sampling-based quantum supremacy experiments, namely the exponential time needed for direct classical verification of the outputs.
August 4, 2025
ChatGPT and the Meaning of Life: Guest Post by Harvey Lederman

Scott Aaronson’s Brief Foreword:
Harvey Lederman is a distinguished analytic philosopher who moved from Princeton to UT Austin a few years ago. Since his arrival, he’s become one of my best friends among the UT professoriate. He’s my favorite kind of philosopher, the kind who sees scientists as partners in discovering the truth, and also has a great sense of humor. He and I are both involved in UT’s new AI and Human Objectives Initiative (AHOI), which is supported by Open Philanthropy.
The other day, Harvey emailed me an eloquent meditation he wrote on what will be the meaning of life if AI doesn’t kill us all, but “merely” does everything we do better than we do it. While the question is of course now extremely familiar to me, Harvey’s erudition—bringing to bear everything from speculative fiction to the history of polar exploration—somehow brought the stakes home for me in a new way.
Harvey mentioned that he’d sent his essay to major magazines but hadn’t had success. So I said, why not a Shtetl-Optimized guest post? Harvey replied—what might be the highest praise this blog has ever received—well, that would be even better than a national magazine, as it would reach more relevant people.
And so without further ado, I present to you…
ChatGPT and the Meaning of Life, by Harvey Lederman
For the last two and a half years, since the release of ChatGPT, I’ve been suffering from fits of dread. It’s not every minute, or even every day, but maybe once a week, I’m hit by it—slackjawed, staring into the middle distance—frozen by the prospect that someday, maybe pretty soon, everyone will lose their job.
At first, I thought these slackjawed fits were just a phase, a passing thing. I’m a philosophy professor; staring into the middle distance isn’t exactly an unknown disease among my kind. But as the years have begun to pass, and the fits have not, I’ve begun to wonder if there’s something deeper to my dread. Does the coming automation of work foretell, as my fits seem to say, an irreparable loss of value in human life?
The titans of artificial intelligence tell us that there’s nothing to fear. Dario Amodei, CEO of Anthropic, the maker of Claude, suggests that “historical hunter-gatherer societies might have imagined that life is meaningless without hunting,” and “that our well-fed technological society is devoid of purpose.” But of course, we don’t see our lives that way. Sam Altman, the CEO of OpenAI, sounds so similar that the text could have been written by ChatGPT. Even if the jobs of the future will look as “fake” to us as ours do to “a subsistence farmer”, Altman has “no doubt they will feel incredibly important and satisfying to the people doing them.”
Alongside these optimists, there are plenty of pessimists who, like me, are filled with dread. Pope Leo XIV has decried the threats AI poses to “human dignity, labor and justice”. Bill Gates has written about his fear that “if we solved big problems like hunger and disease, and the world kept getting more peaceful: What purpose would humans have then?” And Douglas Hofstadter, the cognitive scientist and author of Gödel, Escher, Bach, has spoken eloquently of his terror and depression at “an oncoming tsunami that is going to catch all of humanity off guard.”
Who should we believe? The optimists with their bright visions of a world without work, or the pessimists who fear the end of a key source of meaning in human life?
I was brought up, maybe like you, to value hard work and achievement. In our house, scientists were heroes, and discoveries grand prizes of life. I was a diligent, obedient kid, and eagerly imbibed what I was taught. I came to feel that one way a person’s life could go well was to make a discovery, to figure something out.
I had the sense already then that geographical discovery was played out. I loved the heroes of the great Polar Age, but I saw them—especially Roald Amundsen and Robert Falcon Scott—as the last of their kind. In December 1911, Amundsen reached the South Pole using skis and dogsleds. Scott reached it a month later, in January 1912, after ditching the motorized sleds he’d hoped would help, and man-hauling the rest of the way. As the black dot of Amundsen’s flag came into view on the ice, Scott was devastated to reach this “awful place”, “without the reward of priority”. He would never make it back.
Scott’s motors failed him, but they spelled the end of the great Polar Age. Even Amundsen took to motors on his return: in 1924, he made a failed attempt for the North Pole in a plane, and, in 1926, he successfully flew over it, in a dirigible. Already by then, the skis and dogsleds of the decade before were outdated heroics of a bygone world.
We may be living now in a similar twilight age for human exploration in the realm of ideas. Akshay Venkatesh, whose discoveries earned him the 2018 Fields Medal, mathematics’ highest honor, has written that the “mechanization of our cognitive processes will alter our understanding of what mathematics is”. Terry Tao, a 2006 Fields Medalist, expects that in just two years AI will be a copilot for working mathematicians. He envisions a future where thousands of theorems are proven all at once by mechanized minds.
Now, I don’t know any more than the next person where our current technology is headed, or how fast. The core of my dread isn’t based on the idea that human redundancy will come in two years rather than twenty, or, for that matter, two hundred. It’s a more abstract dread, if that’s a thing, dread about what it would mean for human values, or anyway my values, if automation “succeeds”: if all mathematics—and, indeed all work—is done by motor, not by human hands and brains.
A world like that wouldn’t be good news for my childhood dreams. Venkatesh and Tao, like Amundsen and Scott, live meaningful lives, lives of purpose. But worthwhile discoveries like theirs are a scarce resource. A territory, once seen, can’t be seen first again. If mechanized minds consume all the empty space on the intellectual map, lives dedicated to discovery won’t be lives that humans can lead.
The right kind of pessimist sees here an important argument for dread. If discovery is valuable in its own right, the loss of discovery could be an irreparable loss for humankind.
A part of me would like this to be true. But over these last strange years, I’ve come to think it’s not. What matters, I now think, isn’t being the first to figure something out, but the consequences of the discovery: the joy the discoverer gets, the understanding itself, or the real life problem their knowledge solves. Alexander Fleming discovered penicillin, and through that work saved thousands, perhaps millions of lives. But if it were to emerge, in the annals of an outlandish future, that an alien discovered penicillin thousands of years before Fleming did, we wouldn’t think that Fleming’s life was worse, just because he wasn’t first. He eliminated great suffering from human life; the alien discoverer, if they’re out there, did not. So, I’ve come to see, it’s not discoveries themselves that matter. It’s what they bring about.
But the advance of automation would mean the end of much more than human discovery. It could mean the end of all necessary work. Already in 1920, the Czech playwright Karel Capek asked what a world like that would mean for the values in human life. In the first act of R.U.R.—the play which introduced the modern use of the word “robot”—Capek has Henry Domin, the manager of Rossum’s Universal Robots (the R.U.R. of the title), offer his corporation’s utopian pitch. “In ten years”, he says, their robots will “produce so much corn, so much cloth, so much everything” that “There will be no poverty.” “Everybody will be free from worry and liberated from the degradation of labor.” The company’s engineer, Alquist, isn’t convinced. Alquist (who, incidentally, ten years later, will be the only human living, when the robots have killed the rest) retorts that “There was something good in service and something great in humility”, “some kind of virtue in toil and weariness”.
Service—work that meets others’ significant needs and wants—is, unlike discovery, clearly good in and of itself. However we work—as nurses, doctors, teachers, therapists, ministers, lawyers, bankers, or, really, anything at all—working to meet others’ needs makes our own lives go well. But, as Capek saw, all such work could disappear. In a “post-instrumental” world, where people are comparatively useless and the bots meet all our important needs, there would be no needed work for us to do, no suffering to eliminate, no diseases to cure. Could the end of such work be a better reason for dread?
The hardline pessimists say that it is. They say that the end of all needed work would not only be a loss of some value to humanity, as everyone should agree. For them it would be a loss to humanity on balance, an overall loss, that couldn’t be compensated in another way.
I feel a lot of pull to this pessimistic thought. But once again, I’ve come to think it’s wrong. For one thing, pessimists often overlook just how bad most work actually is. In May 2021, Luo Huazhang, a 31-year-old ex-factory worker in Sichuan, wrote a viral post, entitled “Lying Flat is Justice”. Luo had searched at length for a job that, unlike his factory job, would allow him time for himself, but he couldn’t find one. So he quit, biked to Tibet and back, and commenced his lifestyle of lying flat, doing what he pleased, reading philosophy, contemplating the world. The idea struck a chord with overworked young Chinese, who, it emerged, did not find “something great” in their “humility”. The movement inspired memes, selfies flat on one’s back, and even an anthem.
That same year, as the Great Resignation in the United States took off, the subreddit r/antiwork played to similar discontent. Started in 2013, under the motto “Unemployment for all, not only the rich!”, the forum went viral in 2021, starting with a screenshot of a quitting worker’s texts to his supervisor (“No thanks. Have a good life”), and culminating in labor actions, first supporting striking workers at Kellogg’s by spamming their job application site, and then attempting to support a similar strike at McDonald’s. It wasn’t just young Chinese who hated their jobs.
In Automation and Utopia: Human Flourishing in a World without Work, the Irish lawyer and philosopher John Danaher imagines an antiwork techno-utopia, with plenty of room for lying flat. As Danaher puts it: “Work is bad for most people most of the time.” “We should do what we can to hasten the obsolescence of humans in the arena of work.”
The young Karl Marx would have seen both Domin’s and Danaher’s utopias as a catastrophe for human life. In his notebooks from 1844, Marx describes an ornate and almost epic process, where, by meeting the needs of others through production, we come to recognize the other in ourselves, and through that recognition, come at last to self-consciousness, the full actualization of our human nature. The end of needed work, for the Marx of these notes, would be the impossibility of fully realizing our nature, the end, in a way, of humanity itself.
But such pessimistic lamentations have come to seem to me no more than misplaced machismo. Sure, Marx’s and my culture, the ethos of our post-industrial professional class, might make us regret a world without work. But we shouldn’t confuse the way two philosophers were brought up with the fundamental values of human life. What stranger narcissism could there be than bemoaning the end of others’ suffering, disease, and need, just because it deprives you of the chance to be a hero?
The first summer after the release of ChatGPT—the first summer of my fits of dread—I stayed with my in-laws in Val Camonica, a valley in the Italian Alps. The houses in their village, Sellero, are empty and getting emptier; the people on the streets are old and getting older. The kids that are left—my wife’s elementary school class had, even then, a full complement of four—often leave for better lives. But my in-laws are connected to this place, to the houses and streets where they grew up. They see the changes too, of course. On the mountains above, the Adamello, Italy’s largest glacier, is retreating faster every year. But while the shows on Netflix change, the same mushrooms appear in the summer, and the same chestnuts are collected in the fall.
Walking in the mountains of Val Camonica that summer, I tried to find parallels for my sense of impending loss. I thought about William Shanks, a British mathematician who calculated π to 707 digits by hand in 1873 (he made a mistake at digit 527; almost 200 digits were wrong). He later spent years of his life, literally years, on a table of the reciprocals of the primes up to one hundred and ten thousand, calculating in the morning by hand, and checking it over in the afternoon. That was his life’s work. Just sixty years after his death, though, already in the 1940s, the table on which his precious mornings were spent, the few mornings he had on this earth, could be made by a machine in a day.
I feel sad thinking about Shanks, but I don’t feel grief for the loss of calculation by hand. The invention of the typewriter and the death of handwritten notes seemed closer to the loss I imagined we might feel. Handwriting was once a part of your style, a part of who you were. With its decline some artistry, a deep and personal form of expression, may be lost. When the bots help with everything we write, couldn’t we too lose our style and voice?
But more than anything I thought of what I saw around me: the slow death of the dialects of Val Camonica and the culture they express. Chestnuts were at one time so important for nutrition here that, in the village of Paspardo, a street lined with chestnut trees is called “bread street” (“Via del Pane”). The hyper-local dialects of the valley, outgrowths sometimes of a single family’s inside jokes, have words for all the phases of the chestnut. There’s a porridge made from chestnut flour that, in Sellero, goes by ‘skelt’, but is ‘pult’ in Paspardo, a cousin of ‘migole’ in Malonno, just a few villages away. Boiled, chestnuts are tetighe; dried on a grate, biline or bascocc, which, seasoned and boiled become broalade. The dialects don’t just record what people eat and ate; they recall how they lived, what they saw, and where they went. Behind Sellero, every hundred-yard stretch of the walk up to the cabins where the cows were taken to graze in summer has its own name. Aiva Codaola. Quarsanac. Coran. Spi. Ruc.
But the young people don’t speak the dialect anymore. They go up to the cabins by car, too fast to name the places along the way. They can’t remember a time when the cows were taken up to graze. Some even buy chestnuts in the store.
Grief, you don’t need me to tell you, is a complicated beast. You can grieve for something even when you know that, on balance, it’s good that it’s gone. The death of these dialects, of the stories told on summer nights in the mountains with the cows, is a loss reasonably grieved. But you don’t hear the kids wishing more people would be forced to stay or speak this funny-sounding tongue. You don’t even hear the old folks wishing they could go back fifty years—in those days it wasn’t so easy to be sure of a meal. For many, it’s better this way, not the best it could be, but still better, even as they grieve what they stand to lose and what they’ve already lost.
The grief I feel, imagining a world without needed work, seems closest to this kind of loss. A future without work could be much better than ours, overall. But, living in that world, or watching as our old ways passed away, we might still reasonably grieve the loss of the work that once was part of who we were.
In the last chapter of Edith Wharton’s The Age of Innocence, Newland Archer contemplates a world that has changed dramatically since, thirty years earlier, before these newfangled telephones and five-day trans-Atlantic ships, he renounced the love of his life. Awaiting a meeting that his free-minded son Dallas has organized with Ellen Olenska, the woman Newland once loved, he wonders whether his son, and this whole new age, can really love the way he did and does. How could their hearts beat like his, when they’re always so sure of getting what they want?
There have always been things to grieve about getting old. But modern technology has given us new ways of coming to be out of date. A generation born in 1910 did their laundry in Sellero’s public fountains. They watched their grandkids grow up with washing machines at home. As kids, my in-laws worked with their families to dry the hay by hand. They now know, abstractly, that it can all be done by machine. Alongside newfound health and ease, these changes brought, as well, a mix of bitterness and grief: grief for the loss of gossip at the fountains or picnics while bringing in the hay; and also bitterness, because the kids these days just have no idea how easy they have it now.
As I look forward to the glories that, if the world doesn’t end, my grandkids might enjoy, I too feel prospective bitterness and prospective grief. There’s grief, in advance, for what we now have that they’ll have lost: the formal manners of my grandparents they’ll never know, the cars they’ll never learn to drive, and the glaciers that will be long gone before they’re born. But I also feel bitter about what we’ve been through that they won’t have to endure: small things like folding the laundry, standing in security lines or taking out the trash, but big ones too—the diseases which will take our loved ones that they’ll know how to cure.
All this is a normal part of getting old in the modern world. But the changes we see could be much faster and grander in scale. Amodei of Anthropic speculates that a century of technological change could be compressed into the next decade, or less. Perhaps it’s just hype, but—what if it’s not? It’s one thing for a person to adjust, over a full life, to the washing machine, the dishwasher, the air-conditioner, one by one. It’s another, in five years, to experience the progress of a century. Will I see a day when childbirth is a thing of the past? What about sleep? Will our ‘descendants’ have bodies at all?
And this round of automation could also lead to unemployment unlike any our grandparents saw. Worse, those of us working now might be especially vulnerable to this loss. Our culture, or anyway mine—professional America of the early 21st century—has apotheosized work, turning it into a central part of who we are. Where others have a sense of place—their particular mountains and trees—we’ve come to locate ourselves with professional attainment, with particular degrees and jobs. For us, ‘workists’ that so many of us have become, technological displacement wouldn’t just be the loss of our jobs. It would be the loss of a central way we have of making sense of our lives.
None of this will be a problem for the new generation, for our kids. They’ll know how to live in a world that could be—if things go well—far better overall. But I don’t know if I’d be able to adapt. Intellectual argument, however strong, is weak against the habits of years. I fear they’d look at me, stuck in my old ways, with the same uncomprehending look that Dallas Archer gives his dad, when Newland announces that he won’t go see Ellen Olenska, the love of his life, after all. “Say”, as Newland tries to explain to his dumbfounded son, “that I’m old fashioned, that’s enough.”
And yet, the core of my dread is not about aging out of work before my time. I feel closest to Douglas Hofstadter, the author of Gödel, Escher, Bach. His dread, like mine, isn’t only about the loss of work today, or the possibility that we’ll be killed off by the bots. He fears that even a gentle superintelligence will be “as incomprehensible to us as we are to cockroaches.”
Today, I feel part of our grand human projects—the advancement of knowledge, the creation of art, the effort to make the world a better place. I’m not in any way a star player on the team. My own work is off in a little backwater of human thought. And I can’t understand all the details of the big moves by the real stars. But even so, I understand enough of our collective work to feel, in some small way, part of our joint effort. All that will change. If I were to be transported to the brilliant future of the bots, I wouldn’t understand them or their work enough to feel part of the grand projects of their day. Their work would have become, to me, as alien as ours is to a roach.
But I’m still persuaded that the hardline pessimists are wrong. Work is far from the most important value in our lives. A post-instrumental world could be full of much more important goods—from rich love of family and friends, to new, undreamt-of works of art—which would more than compensate for the loss of value from the loss of our work.
Of course, even the values that do persist may be transformed in almost unrecognizable ways. In Deep Utopia: Life and Meaning in a Solved World, the futurist and philosopher Nick Bostrom imagines how things might look. In one of the most memorable sections of the book—right up there with an epistolary novella about the exploits of Pignolius the pig (no joke!)—Bostrom says that even child-rearing may be something that we, if we love our children, would come to forego. In a truly post-instrumental world, a robot intelligence could do better for your child, not only in teaching the child to read, but also in showing unbreakable patience and care. If you’ll snap at your kid, when the robot would not, it would only be selfishness for you to get in the way.
It’s a hard question whether Bostrom is right. At least some of the work of care isn’t like eliminating suffering or ending mortal disease. The needs or wants are small-scale stuff, and the value we get from helping each other might well outweigh the fact that we’d do it worse than a robot could.
But even supposing Bostrom is right about his version of things, and we wouldn’t express our love by changing diapers, we could still love each other. And together with our loved ones and friends, we’d have great wonders to enjoy. Wharton has Newland Archer wonder at five-day transatlantic ships. But what about five day journeys to Mars? These days, it’s a big deal if you see the view from Everest with your own eyes. But Olympus Mons on Mars is more than twice as tall.
And it’s not just geographical tourism that could have a far expanded range. There’d be new journeys of the spirit as well. No humans would be among the great writers or sculptors of the day, but the fabulous works of art a superintelligence could make could help to fill our lives. Really, for almost any aesthetic value you now enjoy—sentimental or austere, minute or magnificent, meaningful or jocular—the bots would do it much better than we have ever done.
Humans could still have meaningful projects, too. In 1976, about a decade before any of Altman, Amodei or even I were born, the Canadian philosopher Bernard Suits argued that “voluntary attempts to overcome unnecessary obstacles” could give people a sense of purpose in a post-instrumental world. Suits calls these “games”, but the name is misleading; I prefer “artificial projects”. The projects include things we would call games like chess, checkers and bridge, but also things we wouldn’t think of as games at all, like Amundsen’s and Scott’s exploits to the Pole. Whatever we call them, Suits—who’s followed here explicitly by Danaher, the antiwork utopian and, implicitly, by Altman and Amodei—is surely right: even as things are now, we get a lot of value from projects we choose, whether or not they meet a need. We learn to play a piece on the piano, train to run a marathon, or even fly to Antarctica to “ski the last degree” to the Pole. Why couldn’t projects like these become the backbone of purpose in our lives?
And we could have one real purpose, beyond the artificial ones, as well. There is at least one job that no machine can take away: the work of self-fashioning, the task of becoming and being ourselves. There’s an aesthetic accomplishment in creating your character, an artistry of choice and chance in making yourself who you are. This personal style includes not just wardrobe or tattoos, not just your choice of silverware or car, but your whole way of being, your brand of patience, modesty, humor, rage, hobbies and tastes. Creating this work of art could give some of us something more to live for.
Would a world like that leave any space for human intellectual achievement, the stuff of my childhood dreams? The Buddhist Pali Canon says that “All conditioned things are impermanent—when one sees this with wisdom, one turns away from suffering.” Apparently, in this text, the intellectual achievement of understanding gives us a path out of suffering. To arrive at this goal, you don’t have to be the first to plant your flag on what you’ve understood; you just have to get there.
A secular version of this idea might hold, more simply, that some knowledge or understanding is good in itself. Maybe understanding the mechanics of penicillin matters mainly because of what it enabled Fleming and others to do. But understanding truths about the nature of our existence, or even mathematics, could be different. That sort of understanding plausibly is good in its own right, even if someone or something has gotten there first.
Venkatesh the Fields Medalist seems to suggest something like this for the future of math. Perhaps we’ll change our understanding of the discipline, so that it’s not about getting the answers, but instead about human understanding, the artistry of it perhaps, or the miracle of the special kind of certainty that proof provides.
Philosophy, my subject, might seem an even more promising place for this idea. For some, philosophy is a “way of life”. The aim isn’t necessarily an answer, but constant self-examination for its own sake. If that’s the point, then in the new world of lying flat, there could be a lot of philosophy to do.
I don’t myself accept this way of seeing things. For me, philosophy aims at the truth as much as physics does. But I of course agree that there are some truths that it’s good for us to understand, whether or not we get there first. And there could be other parts of philosophy that survive for us, as well. We need to weigh the arguments for ourselves, and make up our own minds, even if the work of finding new arguments comes to belong to a machine.
I’m willing to believe, and even hope that future people will pursue knowledge and understanding in this way. But I don’t find, here, much consolation for my personal grief. I was trained to produce knowledge, not merely to acquire it. In the hours when I’m not teaching or preparing to teach, my job is to discover the truth. The values I imbibed—and I told you I was an obedient kid—held that the prize goes for priority.
Thinking of this world where all we learn is what the bots have discovered first, I feel sympathy with Lee Sedol, the champion Go player who retired after his defeat by Google DeepMind’s AlphaGo in 2016. For him, losing to AI “in a sense, meant my entire world was collapsing”. “Even if I become the number one, there is an entity that cannot be defeated.” Right or wrong, I would feel the same about my work, in a world with an automated philosophical champ.
But Sedol and I are likely just out of date models, with values that a future culture will rightly revise. It’s been more than twenty years since Garry Kasparov lost to IBM’s Deep Blue, but chess has never been more popular. And this doesn’t seem some new-fangled twist of the internet age. I know of no human who quit the high-jump after the invention of mechanical flight. The Greeks sprinted in their Olympics, though they had, long before, domesticated the horse. Maybe we too will come to value the sport of understanding with our own brains.
Frankenstein, Mary Shelley’s 1818 classic of the creations-kill-creator genre, begins with an expedition to the North Pole. Robert Walton hopes to put himself in the annals of science and claim the Pole for England, when he comes upon Victor Frankenstein, floating in the Arctic Sea. It’s only once Frankenstein warms up, that we get into the story everyone knows. Victor hopes he can persuade Walton to turn around, by describing how his own quest for knowledge and glory went south.
Frankenstein doesn’t offer Walton an alternative way of life, a guide for living without grand goals. And I doubt Walton would have been any more personally consoled by the glories of a post-instrumental future than I am. I ended up a philosopher, but I was raised by parents who, maybe like yours, hoped for doctors or lawyers. They saw our purpose in answering real needs, in, as they’d say, contributing to society. Lives devoted to families and friends, fantastic art and games could fill a wondrous future, a world far better than it has ever been. But those aren’t lives that Walton or I, or our parents for that matter, would know how to be proud of. It’s just not the way we were brought up.
For the moment, of course, we’re not exactly short on things to do. The world is full of grisly suffering, sickness, starvation, violence, and need. Frankenstein is often remembered with the moral that thirst for knowledge brings ruination, that scientific curiosity killed the cat. But Victor Frankenstein makes a lot of mistakes other than making his monster. His revulsion at his creation persistently prevents him, almost inexplicably, from feeling the love or just plain empathy that any father should. On top of all we have to do to help each other, we have a lot of work to do, in engineering as much as empathy, if we hope to avoid Frankenstein’s fate.
But even with these tasks before us, my fits of dread are here to stay. I know that the post-instrumental world could be a much better place. But its coming means the death of my culture, the end of my way of life. My fear and grief about this loss won’t disappear because of some choice consolatory words. But I know how to relish the twilight too. I feel lucky to live in a time where people have something to do, and the exploits around me seem more poignant, and more beautiful, in the dusk. We may be some of the last to enjoy this brief spell, before all exploration, all discovery, is done by fully automated sleds.
July 31, 2025
Quantum Complexity Theory Student Project Showcase #5 (2025 Edition)!
Sorry for the long blog-hiatus! I was completely occupied for weeks, teaching an intensive course on theoretical computer science to 11-year-olds (!), at a math camp in St. Louis that was also attended by my 8-year-old son. Soon I’ll accompany my 12-year-old daughter to a science camp in Connecticut, where I’ll also give lectures.
There’s a great deal to say about these experiences, but for now: it’s been utterly transformative and life-affirming to spend my days teaching precocious, enthusiastic, non-jaded children the material I love in the real world, rather than (let’s say) scrolling through depressing world news and social media posts and emails from hateful trolls on my phone. It’s made me feel 25 years younger (well, at least until I need to walk up a flight of stairs). It’s made me want to go back to actual research and teaching, which besides family and friends have been the main sources of joy in my life.
So on that note, and without further ado: I hereby present the latest Quantum Complexity Theory Student Project Showcase! As the name suggests, this is where I share a selection of the best research projects, from the students who took my graduate class on Quantum Complexity Theory at UT Austin this past spring.
See here for the four previous iterations of the Showcase:
2011 edition (MIT)
2012 edition (MIT)
2014 edition (MIT)
2021 edition (MIT / UT Austin)

(As you can see, the timing hasn’t been 100% consistent.)
I expect that, as in past editions, many of this year’s projects will lead to published research papers, or at the very least, preprints on the arXiv.
And now, really without further ado, the projects!
(1) Quantum Hermite Transform and Gaussian Goldreich-Levin, by Vishnu Iyer and Siddhartha Jain.
Vishnu and Sid propose a new primitive for quantum algorithms—the Hermite transform, as opposed to the Fourier transform—and give at least one successful example of its use. They’d be eager to know if anyone can think of other applications!
(2) Quantum Statistical Witness Indistinguishability, by Shafik Nassar and Ronak Ramachandran.
In modern cryptography, even if it isn’t statistical zero-knowledge (SZK), a proof protocol might have the weaker property of being statistically witness-indistinguishable (SWI): that is, Arthur can’t tell which of two possible yes-witnesses Merlin holds. Here Shafik and Ronak initiate the study of quantum SWI, and prove the basic properties of this notion, such as the equivalence of honest and dishonest verifier. Hopefully this will serve as a springboard for someone to find an actual QSWI protocol for an interesting problem.
(3) A Zero-Knowledge Protocol for Keyed Unitary Families, by David Joy and Angela Zhang.
Continuing the theme of quantum zero-knowledge, David and Angela give a protocol by which Merlin can convince Arthur that he knows a unitary relating one pure state to another, without revealing the unitary. Again continuing a theme, applications of this protocol are sought!
(4) On Query Lower Bounds for Aaronson-Kuperberg Unitary Synthesis Circuits, by Arko Banerjee.
Back in 2006, when we formulated our so-called “Unitary Synthesis Conjecture,” Greg Kuperberg and I showed that if a quantum algorithm applies an n-qubit unitary U(f) after making a single query to a Boolean function f, then as we range over f’s, there can be at most 4^n possible values of U(f). Here Arko improves our bound to 2^n, which is tight. He also tries extremely hard to generalize our bound to the two-query case, not quite succeeding but proving partial results that hopefully will be helpful to others.
(5) Quantum Search with Non-Interacting Bosons and Fermions, by Aravind Karthigeyan.
This one really made me think. Aravind studies the problem of search for a single marked vertex, on the complete graph with N vertices, using either M bosons or M fermions that can hop between the vertices. With M bosons, he shows that the search succeeds in Θ(√(N/M)) time with high probability, which is just the usual runtime for Grover search with M parallel searchers. With fermions, by contrast, he shows that more time is needed. Why? Because of the Pauli Exclusion Principle! The fermions all “step on each others’ toes,” competing to be the one that jumps onto the marked vertex, which limits the advantage of having M fermions searching in parallel.
(6) Limits to Pseudodeterminism in Quantum Communication Protocols, by Jiawei Li.
Jiawei revisits the famous Hidden Matching Problem, which is known to have an exponential gap between its randomized one-way communication complexity of ~√n, and its quantum one-way communication complexity of ~log(n). He makes a new observation about this problem: namely, if you want the exponential quantum communication advantage, then you must content yourself with a protocol that can generate many different possible correct answers with appreciable probabilities (i.e., that generates large min-entropy). In other words, no good quantum protocol for the problem is pseudodeterministic. This complements, for example, my and Shih-Han Hung’s work, which showed the same statement for quantum supremacy experiments based on Random Circuit Sampling, or the long line of works that showed it for experiments that violate the Bell/CHSH inequality.
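Here “min-entropy” has its standard meaning: for a protocol whose output is distributed according to D,

$$ H_{\min}(D) \;=\; -\log_2 \max_{o} \Pr_{D}[o], $$

so large min-entropy means that no single answer appears with more than a tiny probability.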
Congratulations to my students for their accomplishments, and thanks to them for giving me permission to include their work in this showcase!
June 28, 2025
BusyBeaver(6) is really quite large
For overdetermined reasons, I’ve lately found the world an increasingly terrifying and depressing place. It’s gotten harder and harder to concentrate on research, or even popular science writing. Every so often, though, something breaks through that wakes my inner child, reminds me of why I fell in love with research thirty years ago, and helps me forget about the triumphantly strutting factions working to destroy everything I value.
Back in 2022, I reported an exciting advance in BusyBeaverology: namely, whereas we previously knew merely that BB(6) > 10^{36,534}, Pavel Kropitz managed to show that
$$ BB(6) \gt {}^{15}10 $$
For those tuning in from home, here BB(6) is the 6th Busy Beaver number, i.e. the maximum number of steps that a 6-state Turing machine with a {0,1} alphabet can take before halting, when run on an initially all-0 input tape. Also, the left-superscript means tetration, or iterated exponentiation: for example, ^{15}10 means 10 to the 10 to the 10 and so on 15 times.
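To pin down the recursion, here’s a toy Python definition (my own illustration, not anything from the BBchallenge; of course it’s only runnable for tiny inputs, since these numbers overflow the observable universe almost immediately):

```python
def tetrate(base, height):
    """Iterated exponentiation: tetrate(10, 3) == 10**(10**10).

    The left-superscript notation ^{15}10 above is tetrate(10, 15).
    """
    result = 1
    for _ in range(height):
        result = base ** result
    return result
```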
By comparison, last year the international “BBchallenge” team determined that BB(5) is “merely” 47,176,870 (see also Quanta magazine’s superb feature article on that milestone). So, between 5 and 6 is where the Busy Beaver function makes its leap, from the millions to beyond the bounds of observable reality.
But if you thought that was the end of the BB(6) story, think again! Eleven days ago, Tristan Sterin, who organized the BBchallenge team, emailed to tell me that a team member with the handle “mxdys” improved the BB(6) bound yet further, to
$$ BB(6) \gt {}^{10{,}000{,}000}10 $$
(i.e., 10 to the 10 to the 10 and so on 10 million times), with a correctness proof in Coq. Then, three days ago, Tristan wrote again to say that mxdys has improved the bound again, to
$$ BB(6) \gt {}^{{}^{{}^{9}2}2}2 $$
I.e., BB(6) is at least 2 tetrated to the 2 tetrated to the 2 tetrated to the 9. So in particular, BB(6) is at least 2 pentated to the 5, where pentation is iterated tetration, i.e. the operation that is to tetration as tetration is to exponentiation, exponentiation is to multiplication, and multiplication is to addition.
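Continuing the toy sketch from above (again, a hypothetical helper of mine, purely conceptual at these heights):

```python
def pentate(base, height):
    """Iterated tetration: pentate(2, 3) == tetrate(2, tetrate(2, 2)) == 65536."""
    result = 1
    for _ in range(height):
        result = tetrate(base, result)  # tetrate() from the sketch above
    return result
```

In this notation, the new bound says BB(6) > tetrate(2, tetrate(2, tetrate(2, 9))), which in particular exceeds pentate(2, 5).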
Last week, when we “merely” knew that BB(6) > ^{10,000,000}10, I talked to a journalist who asked me to give an intuitive sense of how big such a number is. So I said, imagine you had ^{10,000,000}10 grains of sand. Then you could … well, uh … you could fill about ^{10,000,000}10 copies of the observable universe with that sand. I hope that helps people visualize it!
The journalist also asked: have these new discoveries about BB(6) caused me to rethink any broader beliefs about the Busy Beaver function? And I mean, yes and no: it was always completely within the realm of possibility that BB(6) would already be, not some puny little thing like 10^{36,534}, but way out in iteration land. Now that we know for sure that it is, though, maybe I ought to conjecture that the value of BB(n) becomes independent of the ZFC axioms of set theory already when n is 7 or 8 or 9, rather than when it’s 20 or 30 or whatever. (Currently, we know that BB(n) becomes independent of ZFC only when n=643.)
Unrelated Update: I’m just now returning to the US from STOC’2025 in Prague, where I saw lots of old friends and learned many interesting new things, again helping to distract me from the state of the world! Maybe I’ll write about some of those things in a future post. For now, though, anyone who’s interested in my STOC plenary lecture, entitled “The Status of Quantum Speedups,” can check out the PowerPoint slides here.
June 23, 2025
Raymond Laflamme (1960-2025)

Even with everything happening in the Middle East right now, even with (relatedly) everything happening in my own family (my wife and son sheltering in Tel Aviv as Iranian missiles rained down), even with all the rather ill-timed travel I’ve found myself doing as these events unfolded (Ecuador and the Galapagos and now STOC’2025 in Prague) … there’s been another thing, a huge one, weighing on my soul.
Ray Laflamme played a major role in launching the whole field of quantum computing and information, and also a major role in launching my own career. The world has lost him too soon. I’ve lost him too soon.
After growing up in Quebec—I still hear his French-Canadian accent, constantly on the verge of laughter, as I’m writing this—Ray went into physics and became a PhD student of Stephen Hawking. No, not a different Stephen Hawking. If you’ve read or watched anything by or about Hawking, including A Brief History of Time, you might remember the story where Hawking believed for a while that time would reverse itself as the universe contracted in a Big Crunch, with omelettes unscrambling themselves, old people turning into children, etc. etc., but then two graduate students persuaded him that that was totally wrong, and entropy would continue to increase like normal. Anyway, Ray was one of those students (Don Page was the other). I’d always meant to ask Ray to explain what argument changed Hawking’s mind, since the idea of entropy decreasing during contraction just seemed obviously wrong to me! Only today, while writing this post, did I find a 1993 paper by Hawking, Laflamme, and Lyons that explains the matter perfectly clearly, including three fallacious intuitions that Hawking had previously held. (Even though, as they comment, “the anatomy of error is not ruled by logic.”)
Anyway, in the mid-1990s, starting at Los Alamos National Lab and continuing at the University of Waterloo, Ray became a pioneer of the then-new field of quantum computing and information. In 1997, he was a coauthor of one of the seminal original papers that proved the possibility of fault-tolerant quantum computation with a constant error rate, what we now call the Threshold Theorem (Aharonov and Ben-Or had such a result independently). He made lots of other key early contributions to the theory of quantum error-correcting codes and fault-tolerance.
When it comes to Ray’s scientific achievements after his cosmology work with Hawking and after quantum fault-tolerance—well, there are many, but let me talk about two. Perhaps the biggest is the KLM (Knill-Laflamme-Milburn) Theorem. It would be fair to say that KLM started the entire field of optical or photonic quantum computation, as it’s existed in the 21st century. In one sentence, what KLM showed is that it’s possible to build a universal quantum computer using only
single-photon states,
a network of “linear-optical elements” (that is, beamsplitters and phaseshifters) that the photons travel through, and
feedforward measurements—that is, measurements of an optical mode that tell you how many photons are there, in such a way that you can condition (using a classical computer) which optical elements to apply next on the outcome of the measurement.

All of a sudden, there was a viable path to building a quantum computer out of photons, where you wouldn’t need to get pairs of photons to interact with each other, which had previously been the central sticking point. Have you heard of PsiQuantum, the startup in Palo Alto with a $6 billion valuation and hundreds of employees that’s right now trying to build an optical quantum computer with a million qubits? Or Xanadu, its competitor in Toronto? These, in some sense, are companies that grew out of a theorem: specifically the KLM Theorem.
For me, though, the significance of KLM goes beyond the practical. In 2011, I used the KLM Theorem, together with the fact (known since the 1950s) that photonic amplitudes are the permanents of matrices, to give a new proof of Leslie Valiant’s celebrated 1979 theorem that calculating the permanent is a #P-complete problem. Thus, as I pointed out in a talk two years ago at Ray’s COVID-delayed 60th birthday conference, entitled Ray Laflamme, Complexity Theorist (?!), KLM had said something new about computational complexity, without any intention of doing so. More generally, KLM was crucial backdrop to my and Alex Arkhipov’s later work on BosonSampling, where we gave strong evidence that some classical computational hardness—albeit probably not universal quantum computation—remains in linear optics, even if one gets rid of KLM’s feedforward measurements.
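For reference, the permanent here is the determinant’s sign-free cousin: for an n×n matrix A,

$$ \mathrm{Per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i,\sigma(i)}, $$

identical to the determinant except that every term enters with a plus sign. That one change is what separates a polynomial-time problem from a #P-complete one.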
The second achievement of Ray’s that I wanted to mention was his 1998 creation, again with his frequent collaborator Manny Knill, of the One Clean Qubit or “DQC1” model of quantum computation. In this model, you get to apply an arbitrary sequence of 2-qubit unitary gates, followed by measurements at the end, just like in standard quantum computing—but the catch is that the initial state consists of just a single qubit in the state |0⟩, and all other qubits in the maximally mixed state. If all qubits started in the maximally mixed state, then nothing would ever happen, because the maximally mixed state is left invariant by all unitary transformations. So it would stand to reason that, if all but one of the qubits start out maximally mixed, then almost nothing happens. The big surprise is that this is wrong. Instead you get a model that, while probably not universal for quantum computation, can do a variety of things in polynomial time that we don’t know how to do classically, including estimating the traces of exponentially large unitary matrices and the Jones polynomials of trace closures of braids (indeed, both of these problems turn out to be DQC1-complete). The discovery of DQC1 was one of the first indications that there’s substructure within BQP. Since then, the DQC1 model has turned up again and again in seemingly unrelated investigations in quantum complexity theory—way more than you’d have any right to expect a priori.
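To make the trace-estimation claim concrete, here’s a toy classical simulation of the standard DQC1 circuit (my own illustrative sketch, not code from any of the papers): Hadamard the clean qubit, use it to control U on the maximally mixed register, Hadamard again, and measure. Per shot, the mixed register behaves like a uniformly random basis state |x⟩, and the clean qubit then reads 0 with probability (1 + Re⟨x|U|x⟩)/2.

```python
import numpy as np

def dqc1_trace_estimate(U, shots=200_000, seed=0):
    """Toy simulation of one-clean-qubit (DQC1) trace estimation.

    Returns an estimate of Re tr(U) / dim(U). Only feasible here because
    we hold U as an explicit matrix; on a quantum computer the same
    statistics arise without ever writing U down.
    """
    rng = np.random.default_rng(seed)
    dim = U.shape[0]
    xs = rng.integers(dim, size=shots)   # maximally mixed register, shot by shot
    p0 = (1 + U[xs, xs].real) / 2        # Pr[clean qubit reads 0 | state |x>]
    outcomes = rng.random(shots) < p0    # simulate the final measurement
    return 2 * outcomes.mean() - 1       # converges to Re tr(U) / dim
```

For U = np.eye(8) this returns approximately 1, while for a typical random unitary it hovers near 0; the surprise of DQC1 is that this one almost-useless-looking input state already buys you estimates of the traces of exponentially large unitaries.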
Beyond his direct contributions to quantum information, Ray will be remembered as one of the great institution-builders of our field. He directed the Institute for Quantum Computing (IQC) at the University of Waterloo in Canada, from its founding in 2002 until he finally stepped down in 2017. This includes the years 2005-2007, when I was a postdoc at IQC—two of the most pivotal years of my life, when I first drove a car and went out on dates (neither of which I do any longer, for different reasons…), when I started this blog, when I worked on quantum money and learnability of quantum states and much more, and when I taught the course that turned into my book Quantum Computing Since Democritus. I fondly remember Ray, as my “boss,” showing me every possible kindness. He even personally attended the Quantum Computing Since Democritus lectures, which is why he appears as a character in the book.
As if that wasn’t enough, Ray also directed the quantum information program of the Canadian Institute for Advanced Research (CIFAR). If you ever wondered why Canada, as a nation, has punched so far above its weight in quantum computing and information for the past quarter-century—Ray Laflamme is part of the answer.
At the same time, if you imagine the stereotypical blankfaced university administrator, who thinks and talks only in generalities and platitudes (“how can we establish public-private partnerships to build a 21st-century quantum workforce?”) … well, Ray was whatever is the diametric opposite of that. Despite all his responsibilities, Ray never stopped being a mensch, a friend, an intellectually curious scientist, a truth-teller, and a jokester. Whenever he and I talked, probably at least a third of the conversation was raucous laughter.
I knew that Ray had spent many years battling cancer. I naïvely thought he was winning, or had won. But as so often with cancer, it looks like the victory was only temporary. I miss him already. He was a ray of light in the world—a ray that sparkles, illuminates, and as we now know, even has the latent power of universal quantum computation.
June 22, 2025
Trump and Iran, by popular request
I posted this on my Facebook, but several friends asked me to share more widely, so here goes:
I voted against Trump three times, and donated thousands to his opponents. I’d still vote against him today, seeing him as a once-in-a-lifetime threat to American democracy and even to the Enlightenment itself.
But last night I was also grateful to him for overruling the isolationists and even open antisemites in his orbit, striking a blow against the most evil regime on the planet, and making it harder for that regime to build nuclear weapons. I acknowledge that his opponents, who I voted for, would’ve probably settled for a deal that would’ve resulted in Iran eventually getting nuclear weapons, and at any rate getting a flow of money to redirect to Hamas, Hezbollah, and the Houthis.
May last night’s events lead to the downfall of the murderous ayatollah regime altogether, and to the liberation of the Iranian people from 46 years of oppression. To my many, many Iranian friends: I hope all your loved ones stay safe, and I hope your great people soon sees better days. I say this as someone whose wife and 8-year-old son are right now in Tel Aviv, sheltering every night from Iranian missiles.
Fundamentally, I believe not only that evil exists in the world, but that it’s important to calibrate evil on a logarithmic scale. Trump (as I’ve written on this blog for a decade) terrifies me, infuriates me, and embarrasses me, and through his evisceration of American science and universities, has made my life noticeably worse. On the other hand, he won’t hang me from a crane for apostasy, nor will he send a ballistic missile to kill my wife and son and then praise God for delivering them into his hands.
Update: I received the following comment on this post, which filled me with hope, and demonstrated more moral courage than perhaps every other anonymous comment in this blog’s 20-year history combined. To this commenter and their friends and family, I wish safety and eventually, liberation from tyranny.
I will keep my name private for clear reasons. Thank you for your concern for Iranians’ safety and for wishing the mullah regime’s swift collapse. I have fled Tehran and I’m physically safe but mentally, I’m devastated by the war and the internet blackout (the pretext is that Israeli drones are using our internet). Speaking of what the mullahs have done, especially outrageous was the attack on the Weizmann Institute. I hope your wife and son remain safe from the missiles of the regime whose thugs have chased me and my friends in the streets and imprisoned my friends for simple dissent. All’s well that ends well, and I hope this all ends well.
June 9, 2025
Guess I’m A Rationalist Now
A week ago I attended LessOnline, a rationalist blogging conference featuring many people I’ve known for years—Scott Alexander, Eliezer Yudkowsky, Zvi Mowshowitz, Sarah Constantin, Carl Feynman—as well as people I’ve known only online and was delighted to meet in person, like Joe Carlsmith and Jacob Falkovich and Daniel Reeves. The conference was at Lighthaven, a bewildering maze of passageways, meeting-rooms, sleeping quarters, gardens, and vines off Telegraph Avenue in Berkeley, which has recently emerged as the nerd Shangri-La, or Galt’s Gulch, or Shire, or whatever. I did two events at this year’s LessOnline: a conversation with Nate Soares about the Orthogonality Thesis, and an ask-me-anything session about quantum computing and theoretical computer science (no new ground there for regular consumers of my content).
What I’ll remember most from LessOnline is not the sessions, mine or others’, but the unending conversation among hundreds of people all over the grounds, which took place in parallel with the sessions and before and after them, from morning till night (and through the night, apparently, though I’ve gotten too old for that). It felt like a single conversational archipelago, the largest in which I’ve ever taken part, and the conference’s real point. (Attendees were exhorted, in the opening session, to skip as many sessions as possible in favor of intense small-group conversations—not only because it was better but also because the session rooms were too small.)
Within the conversational blob, just making my way from one building to another could take hours. My mean free path was approximately five feet, before someone would notice my nametag and stop me with a question. Here was my favorite opener:
“You’re Scott Aaronson?! The quantum physicist who’s always getting into arguments on the Internet, and who’s essentially always right, but who sustains an unreasonable amount of psychic damage in the process?”
“Yes,” I replied, not bothering to correct the “physicist” part.
One night, I walked up to Scott Alexander, who, sitting on the ground with his large bald head and a blanket he was using as a robe, resembled a monk. “Are you enjoying yourself?” he asked.
I replied, “You know, after all these years of being coy about it, I think I’m finally ready to become a Rationalist. Is there, like, an initiation ritual or something?”
Scott said, “Oh, you were already initiated a decade ago; you just didn’t realize it at the time.” Then he corrected himself: “two decades ago.”
The first thing I did, after coming out as a Rationalist, was to get into a heated argument with Other Scott A., Joe Carlsmith, and other fellow-Rationalists about the ideas I set out twelve years ago in my Ghost in the Quantum Turing Machine essay. Briefly, my argument was that the irreversibility and ephemerality of biological life, which contrast with the copyability and rewindability of programs running on digital computers, can ultimately be traced back to microscopic details of the universe’s initial state: details that are subject to the No-Cloning Theorem of quantum mechanics, and that then get chaotically amplified during brain activity. That, I suggested, might be a clue to a deeper layer of the world, one that we understand about as well as the ancient Greeks understood Newtonian physics, but which is the layer where mysteries like free will and consciousness will ultimately need to be addressed.
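Since the argument leans on it, here is the standard statement of the No-Cloning Theorem with its two-line proof (a minimal sketch of textbook quantum mechanics, nothing original to the essay). The theorem says there is no unitary $U$ and fixed “blank” state $|e\rangle$ satisfying

\[ U\bigl(|\psi\rangle \otimes |e\rangle\bigr) = |\psi\rangle \otimes |\psi\rangle \quad \text{for all states } |\psi\rangle. \]

If there were, then applying $U$ to two states $|\psi\rangle$ and $|\phi\rangle$, and taking the inner product of the results (which unitaries preserve), would give

\[ \langle \phi | \psi \rangle = \langle \phi | \psi \rangle^{2}, \]

which forces $\langle \phi | \psi \rangle$ to be $0$ or $1$. So no physical process can copy even two distinct non-orthogonal quantum states, let alone the unknown microstates at issue in the essay.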
I got into this argument partly because it came up, but partly also because this seemed like the biggest conflict between my beliefs and the consensus of my fellow Rationalists. Maybe part of me wanted to demonstrate that my intellectual independence remained intact—sort of like a newspaper that gets bought out by a tycoon, and then immediately runs an investigation into the tycoon’s corruption, as well as his diaper fetish, just to prove it can.
The funny thing, though, is that all my beliefs are the same as they were before. I’m still a computer scientist, an academic, a straight-ticket Democratic voter, a liberal Zionist, a Jew, etc. (all identities, incidentally, well-enough represented at LessOnline that I don’t even think I was the unique attendee in the intersection of them all).
Given how much I resonate with what the Rationalists are trying to do, why did it take me so long to identify as one?
Firstly, while 15 years ago I shared the Rationalists’ interests, sensibility, and outlook, and their stances on most issues, I also found them bizarrely, inexplicably obsessed with the question of whether AI would soon become superhumanly powerful and change the basic conditions of life on earth, and with how to make the AI transition go well. Why that, as opposed to all the other sci-fi scenarios one could worry about, not to mention all the nearer-term risks to humanity?
Suffice it to say that empirical developments have since caused me to withdraw my objection. Sometimes weird people are weird merely because they see the future sooner than others. Indeed, it seems to me that the biggest thing the Rationalists got wrong about AI was to underestimate how soon the revolution would happen, and to overestimate how many new ideas would be needed for it (mostly, as we now know, it just took lots more compute and training data). Now that I, too, spend some of my time working on AI alignment, I was able to use LessOnline in part for research meetings with colleagues.
A second reason I didn’t identify with the Rationalists was cultural: they were, and are, centrally a bunch of twentysomethings who “work” at an ever-changing list of Berkeley- and San Francisco-based “orgs” of their own invention, and who live in group houses where they explore their exotic sexualities, gender identities, and fetishes, sometimes with the aid of psychedelics. I, by contrast, am a straight, monogamous, middle-aged tenured professor, married to another such professor and raising two kids who go to normal schools. Hanging out with the Rationalists always makes me feel older and younger at the same time.
So what changed? For one thing, with the march of time, a significant fraction of Rationalists now have marriages, children, or both—indeed, a highlight of LessOnline was the many adorable toddlers running around the Lighthaven campus. Rationalists are successfully reproducing! Some because of explicit pronatalist ideology, or because they were persuaded by Bryan Caplan’s arguments in Selfish Reasons to Have More Kids. But others simply because of the same impulses that led their ancestors to do the same for eons. And perhaps because, like the Mormons or Amish or Orthodox Jews, but unlike typical secular urbanites, the Rationalists believe in something. For all their fears around AI, they don’t act doomy, but buzz with ideas about how to build a better world for the next generation.
At a LessOnline parenting session, hosted by Julia Wise, I was surrounded by parents who worry about the same things I do: how do we raise our kids to be independent and agentic yet socialized and reasonably well-behaved, technologically savvy yet not droolingly addicted to iPad games? What schooling options will let them accelerate in math, and save them from the crushing monotony that we experienced? How much of our own lives should we sacrifice on the altar of our kids’ “enrichment,” versus trusting Judith Rich Harris that such efforts quickly hit a point of diminishing returns?
A third reason I didn’t identify with the Rationalists was, frankly, that they gave off some (not all) of the vibes of a cult, with Eliezer as guru. Eliezer writes in parables and koans. He teaches that the fate of life on earth hangs in the balance, that the select few who understand the stakes have the terrible burden of steering the future. Taking what Rationalists call the “outside view,” how good is the track record for this sort of thing?
OK, but what did I actually see at Lighthaven? I saw something that seemed to resemble a cult only insofar as the Beatniks, the Bloomsbury Group, the early Royal Society, or any other community that believed in something did. When Eliezer himself—the bearded, cap-wearing Moses who led the nerds from bondage to their Promised Land in Berkeley—showed up, he was argued with like anyone else. Eliezer has in any case largely passed his staff to a new generation: Nate Soares and Zvi Mowshowitz have found new and, in various ways, better ways of talking about AI risk; Scott Alexander has for the last decade written the blog that’s the community’s intellectual center; figures from Kelsey Piper to Jacob Falkovich to Aella have taken Rationalism in new directions, from mainstream political engagement to the … err … statistical analysis of orgies.
I’ll say this, though, on the naysayers’ side: it’s really hard to make dancing to AI-generated pop songs about Bayes’ theorem and Tarski’s definition of truth not feel cringe, as I can now attest from experience.
The cult thing brings me to the deepest reason I hesitated for so long to identify as a Rationalist: namely, I was scared that if I did, people whose approval I craved (including my academic colleagues, but also just randos on the Internet) would sneer at me. For years, I searched for some way of explaining this community’s appeal that was so reasonable it would silence the sneers.
It took years of psychological struggle, and (frankly) solidifying my own place in the world, to follow the true path, which of course is not to give a shit what some haters think of my life choices. Consider: five years ago, it felt obvious to me that the entire Rationalist community might be about to implode, under existential threat from Cade Metz’s New York Times article, as well as RationalWiki and SneerClub and all the others laughing at the Rationalists and accusing them of every evil. Yet last week at LessOnline, I saw a community thriving more than ever, with a beautiful real-world campus, excellent writers on every topic who felt like this was the place to be, and even a crop of kids. How many of the sneerers are living such fulfilled lives? To judge from their own angry, depressed self-disclosures, probably not many.
But are the sneerers right that, even if the Rationalists are enjoying their own lives, they’re making other people’s lives miserable? Are they closet far-right monarchists, like Curtis Yarvin? I liked how The New Yorker put it in its recent, long and (to my mind) devastating profile of Yarvin:
The most generous engagement with Yarvin’s ideas has come from bloggers associated with the rationalist movement, which prides itself on weighing evidence for even seemingly far-fetched claims. Their formidable patience, however, has also worn thin. “He never addressed me as an equal, only as a brainwashed person,” Scott Aaronson, an eminent computer scientist, said of their conversations. “He seemed to think that if he just gave me one more reading assignment about happy slaves singing or one more monologue about F.D.R., I’d finally see the light.”
The closest to right-wing politics that I witnessed at LessOnline was a session, with Kelsey Piper and current and former congressional staffers, about the prospects for moderate Democrats to articulate a pro-abundance agenda that would resonate with the public and finally defeat MAGA.
But surely the Rationalists are incels, bitter that they can’t get laid? Again, the closest I saw was a session where Jacob Falkovich helped a standing-room-only crowd of mostly male nerds confront their fears around dating and understand women better, with Rationalist women eagerly volunteering to answer questions about their perspective. Gross, right? (Also, for those already in relationships, Eliezer’s primary consort and former couples therapist Gretta Duleba did a session on relationship conflict.)
So, yes, when it comes to the Rationalists, I’m going to believe my own lying eyes over the charges of the sneerers. The sneerers can even say about me, in their favorite formulation, that I’ve “gone mask off,” confirmed the horrible things they’ve always suspected. Yes, the mask is off—and beneath the mask is the same person I always was, who has an inordinate fondness for the Busy Beaver function and the complexity class BQP/qpoly, and who uses too many filler words and moves his hands too much, and who strongly supports the Enlightenment, and who once feared that his best shot at happiness in life would be to earn women’s pity rather than their contempt. Incorrectly, as I’m glad to report. From my nebbishy nadir to the present, a central thing that’s changed is that, from my family to my academic colleagues to the Rationalist community to my blog readers, I finally found some people who want what I have to sell.
Unrelated Announcements:
My replies to comments on this post might be light, as I’ll be accompanying my daughter on a school trip to the Galapagos Islands!
A few weeks ago, I was “ambushed” into leading a session on philosophy and theoretical computer science at UT Austin. (I.e., asked to show up for the session, but thought I’d just be a participant rather than the main event.) The session was then recorded and placed on YouTube—and surprisingly, given the circumstances, some people seemed to like it!
Friend-of-the-blog Alon Rosen has asked me to announce a call for nominations for a new theoretical computer science prize, in memory of my former professor (and fellow TCS blogger) Luca Trevisan, who was lost to the world too soon.
May 30, 2025
“If Anyone Builds It, Everyone Dies”
Eliezer Yudkowsky and Nate Soares are publishing a mass-market book, the rather self-explanatorily-titled If Anyone Builds It, Everyone Dies. (Yes, the “it” means “sufficiently powerful AI.”) The book is now available for preorder from Amazon.
(If you plan to buy the book at all, Eliezer and Nate ask that you do preorder it, as this will apparently increase the chance of it making the bestseller lists and becoming part of The Discourse.)
I was graciously offered a chance to read a draft and offer, not a “review,” but some preliminary thoughts. So here they are:
For decades, Eliezer has been warning the world that an AI might soon exceed human abilities, and proceed to kill everyone on earth, in pursuit of whatever strange goal it ended up with. It would, Eliezer said, be something like what humans did to the earlier hominids. Back around 2008, I followed the lead of most of my computer science colleagues, who considered these worries, even if possible in theory, comically premature given the primitive state of AI at the time, and all the other severe crises facing the world.
Now, of course, not even two decades later, we live on a planet that’s being transformed by some of the signs and wonders that Eliezer foretold. The world’s economy is about to be upended by entities like Claude and ChatGPT, AlphaZero and AlphaFold—whose human-like or sometimes superhuman cognitive abilities, obtained “merely” by training neural networks (in the first two cases, on humanity’s collective output) and applying massive computing power, constitute (I’d say) the greatest scientific surprise of my lifetime. Notably, these entities have already displayed some of the worrying behaviors that Eliezer warned about decades ago—including lying to humans in pursuit of a goal, and hacking their own evaluation criteria. Even many of the economic and geopolitical aspects have played out as Eliezer warned they would: we’ve now seen AI companies furiously racing each other, seduced by the temptation of being (as he puts it) “the first monkey to taste the poisoned banana,” discarding their previous explicit commitments to safety, transparency, and the public good once they get in the way.
Today, then, even if one still isn’t ready to swallow the full package of Yudkowskyan beliefs, any empirically minded person ought to be updating in its direction—and acting accordingly. Which brings us to the new book by Eliezer and his collaborator Nate Soares. This book is far and away the clearest, most accessible presentation of Eliezer’s beliefs, the culmination of a quarter-century of his developing and talking about them. That undoubtedly owes a great deal to Nate, who seems to have sanded down the infamously brusque rough edges of Eliezer’s writing style. So much the better! But it also owes a lot to the world itself: current events now offer an endless supply of real-world examples for Eliezer’s formerly abstract arguments about AI, examples that the book deploys to maximum effect.
The book also mines history—everything from the Wright Brothers to World War II to the Chernobyl accident—for lessons about human attitudes toward technological progress, safety, and risk. And it maintains Eliezer’s fondness for stories and parables, one of the most charming features of his writing.
Even today, I’m not nearly as confident about the doom scenario as Eliezer and Nate are. I don’t know whether an AI’s goals are really “orthogonal” to its abilities, in the sense that will matter in practice. And when I reach the part where the AI, having copied itself all over the Internet and built robot factories, then invents and releases self-replicating nanotechnology that gobbles the surface of the earth in hours or days, a large part of me still screams out that there must be practical bottlenecks that haven’t been entirely accounted for here.
And yet, even if you agree with only a quarter of what Eliezer and Nate write, you’re likely to close this book fully convinced—as I am—that governments need to shift to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created. And that, if they won’t, their citizens need to pressure them to do so.
So regardless of how much they agree or disagree, I’d like everyone on earth who cares about the future to read this book, debate its ideas, and have its thesis in mind when they’re discussing AI.
As for me? It would’ve been better if I’d reached my current position earlier: if I hadn’t needed empirical reality, plus superb writing like Eliezer’s and Nate’s, to bonk me over the head with the risks that AI was likely to pose to humanity in my lifetime. But having failed to see as far ahead as they did, the least I can do is update. You should too, and you can start by reading the book.
As it happens, this weekend I’ll be at LessOnline, the rationalist blogging conference in Berkeley, where (among my other events) I’ll engage in a dialogue/debate with Nate Soares about the orthogonality thesis, one of the crucial underpinnings of his and Eliezer’s case for AI doom. So, I’ll probably be LessAvailable to respond to comments on this post. But feel free to discuss anyway! After all, it’s merely the fate of all Earth-originating life that’s at stake here, not some actually hot-button topic like Trump or Gaza.
May 8, 2025
Cracking the Top Fifty!
I’ve now been blogging for nearly twenty years—through five presidential administrations, my own moves from Waterloo to MIT to UT Austin, my work on algebrization and BosonSampling and BQP vs. PH and quantum money and shadow tomography, the publication of Quantum Computing Since Democritus, my courtship and marriage and the birth of my two kids, a global pandemic, the rise of super-powerful AI and the terrifying downfall of the liberal world order.
Yet all that time, through more than a thousand blog posts on quantum computing, complexity theory, philosophy, the state of the world, and everything else, I chased a form of recognition for my blogging that remained elusive.
Until now.
This week I received the following email:
I emailed regarding your blog Shtetl-Optimized Blog which was selected by FeedSpot as one of the Top 50 Quantum Computing Blogs on the web.
https://bloggers.feedspot.com/quantum_computing_blogs
We recommend adding your website link and other social media handles to get more visibility in our list, get better ranking and get discovered by brands for collaboration.
We’ve also created a badge for you to highlight this recognition. You can proudly display it on your website or share it with your followers on social media.
We’d be thankful if you can help us spread the word by briefly mentioning Top 50 Quantum Computing Blogs in any of your upcoming posts.
Please let me know if you can do the needful.
You read that correctly: Shtetl-Optimized is now officially one of the top 50 quantum computing blogs on the web. You can click the link to find the other 49.
Maybe it’s not unrelated to this new notoriety that, over the past few months, I’ve gotten a massively higher-than-usual volume of emailed solutions to the P vs. NP problem, as well as the other Clay Millennium Problems (sometimes all seven problems at once), as well as quantum gravity and life, the universe, and everything. I now get at least six or seven such confident emails per day.
While I don’t spend much time on this flood of scientific breakthroughs (how could I?), I’d like to note one detail that’s new. Many of the emails now include transcripts where ChatGPT fills in the details of the emailer’s theories for them—unironically, as though that ought to clinch the case. Who said generative AI wasn’t poised to change the world? Indeed, I’ll probably need to start relying on LLMs myself to keep up with the flood of fan mail, hate mail, crank mail, and advice-seeking mail.
Anyway, thanks for reading everyone! I look forward to another twenty years of Shtetl-Optimized, if my own health and the health of the world cooperate.
May 6, 2025
Opposing SB37
Yesterday, the Texas State Legislature heard public comments about SB37, a bill that would give a state board direct oversight over course content and faculty hiring at public universities, perhaps inspired by Trump’s national crackdown on higher education. (See here or here for coverage.) So, encouraged by a friend in the history department, I submitted the following public comment, whatever good it will do.
I’m a computer science professor at UT, although I’m writing in my personal capacity. For 20 years, on my blog and elsewhere, I’ve been outspoken in opposing woke radicalism on campus and (especially) obsessive hatred of Israel that often veers into antisemitism, even when that’s caused me to get attacked from my left. Nevertheless, I write to strongly oppose SB37 in its current form, because of my certainty that no world-class research university can survive ceding control over its curriculum and faculty hiring to the state. If this bill passes, for example, it will severely impact my ability to recruit the most talented computer scientists to UT Austin, if they have competing options that will safeguard their academic freedom as traditionally conceived. Even if our candidates are approved, the new layer of bureaucracy will make it difficult and slow for us to do anything. For those concerned about intellectual diversity in academia, a much better solution would include safeguarding tenure and other protections for faculty with heterodox views, and actually enforcing content-neutral time, place, and manner rules for protests and disruptions. UT has actually done a better job on these things than many other universities in the US, and could serve as a national model for how viewpoint diversity can work — but not under an intolerably stifling regime like the one proposed by this bill.