Cal Newport's Blog, page 2
July 20, 2025
No One Knows Anything About AI
I want to present you with two narratives about AI. Both of them are about using this technology to automate computer programming, but they point toward two very different conclusions.
The first narrative notes that Large Language Models (LLMs) are exceptionally well-suited for coding because source code, at its core, is just very well-structured text, which is exactly what these models excel at generating. Because of this tight match between need and capability, the programming industry is serving as an economic sacrificial lamb: the first major sector to suffer an AI-driven upheaval.
There has been no shortage of evidence to support these claims. Here are some examples, all from the last two months:
- Aravind Srinivas, the CEO of the AI company Perplexity, claims AI tools like Cursor and GitHub Copilot cut task completion time for his engineers from “three or four days to one hour.” He now mandates that every employee in his company use them: “The speed at which you can fix bugs and ship to production is scary.”
- An article in Inc. confidently declared: “In the world of software engineering, AI has indeed changed everything.”
- Not surprisingly, these immense new capabilities are being blamed for dire disruptions. One article from an investment site featured an alarming headline: “Tech Sector Sees 64,000 Job Cuts This Year Due to AI Advancement.” No one is safe from such cuts. “Major companies like Microsoft have been at the forefront of these layoffs,” the article explains, “citing AI advancements as a primary factor.”
- My world of academic computer science hasn’t been spared either. A splashy Atlantic piece opens with a distressing claim: “The Computer-Science Bubble Is Bursting,” which it largely blames on AI, a technology it describes as “ideally suited to replace the very type of person who built it.”

Given the confidence of these claims, you’d assume that computer programmers are rapidly going the way of the telegraph operator. But if you read a different set of articles and quotes from this same period, a very different narrative emerges:
- The AI evaluation company METR recently released the results of a randomized control trial in which a group of experienced open-source software developers were sorted into two groups, one of which would use AI coding tools to complete a collection of tasks, and one of which would not. As the report summarizes: “Surprisingly, we find that when developers use AI tools, they take 19% longer than without—AI makes them slower.”
- Meanwhile, other experienced engineers are beginning to push back on extreme claims about how AI will impact their industry. “Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw,” quipped the developer Simon Willison.
- Tech CEO Nick Khami reacted to the claim that AI tools will drastically reduce the number of employees required to build a software product as follows: “I feel like I’m being gaslit every time I read this, and I worry it makes folks early in their software development journey feel like it’s a bad time investment.”
- But what about Microsoft replacing all those employees with AI tools? A closer look reveals that this is not what happened. The company’s actual announcement clarified that cuts were spread across divisions (like gaming) to free up more funds to invest in AI initiatives—not because AI was replacing workers.
- What about the poor CS majors? Later in that same Atlantic article, an alternative explanation is floated. The tech sector has been contracting recently to correct for exuberant spending during the pandemic years. This soft market makes a difference: “enrollment in the computer-science major has historically fluctuated with the job market…[and] prior declines have always rebounded to enrollment levels higher than where they started.” (Personal history note: when I was studying computer science as an undergraduate in the early 2000s, I remember the consternation about the plummeting numbers of majors in the wake of the original dot-com bust.)

Here we can find two completely different takes on the same AI issue, depending on which articles you read and which experts you listen to. What should we take away from this confusion? When it comes to AI’s impacts, we don’t yet know anything for sure. But this isn’t stopping everyone from pretending like we do.
My advice, for the moment:
- Tune out both the most heated and the most dismissive rhetoric.
- Focus on tangible changes in areas that you care about that really do seem connected to AI—read widely and ask people you trust about what they’re seeing.
- Beyond that, however, follow AI news with a large grain of salt. All of this is too new for anyone to really understand what they’re saying.

AI is important. But we don’t yet fully know why.
The post No One Knows Anything About AI appeared first on Cal Newport.
July 13, 2025
Dispatch From Vermont
Most summers, my family and I retreat to New England for much of July. From a professional perspective, I see this as an exercise in seasonality (to use a term from my book Slow Productivity), a way to recharge and recenter the creative efforts that sustain my work. This year, I needed all the help I could get. I had recently finished part one of my new book on the deep life and was struggling to find the right way to introduce the second.
During my first couple of days up north, I made rapid progress on the new chapter. But I soon began to notice some grit in the gears of my conceptual narrative. As I pushed forward in my writing, the gnashing and grinding became louder and more worrisome. Eventually, I had to admit that my approach wasn’t working. I threw out a couple thousand words, and went searching for a better idea.
It was at this point that we fortuitously decided to take a hike. We headed to Franconia Notch in the White Mountains, which we’ve always enjoyed for its unruly, romantic grandeur. We had decided to tackle the trek up to Lonesome Lake, a serene body of water nestled at 2,700 feet amid the peaks and ridges of Cannon Mountain.
The Lonesome Lake hike begins with a mile of steady elevation gain. At first, you’re accompanied by the sounds of traffic from I-93 below; your legs burning, mind still mulling the mundane. But eventually the trail turns, and the road noise dissipates. After a while, your attention has no choice but to narrow. Time expands. You almost don’t notice when the trail begins to flatten. Then, picking your way through spindly birches, you emerge onto the lake’s quiet, wind-rippled serenity.
Photo by Robert Buhler

It was at Lonesome Lake that my difficulties with my new chapter began to dissipate. With an unhurried clarity, I saw a better way to make my argument. I scribbled some notes down in the pocket-sized notebook I always carry. As we finally, reluctantly, made our way back down the mountain, I continued to refine my thinking.
Walking and thinking have been deeply intertwined since the dawn of serious thought. Aristotle so embraced mobile cognition—he wore out the covered walkways of his outdoor academy, the Lyceum—that his followers became known as the Peripatetic School, from the Greek peripatein, meaning ‘to walk around’.
My recent experience in the White Mountains was a minor reminder of this major truth. In an age where AI threatens to automate ever-wider swaths of human thought, it seems particularly important to remember both the hard-won dignity of producing new ideas de novo within the human brain, and the simple actions, like putting the body in motion, that help this miraculous process unfold.
The post Dispatch From Vermont appeared first on Cal Newport.
July 6, 2025
Don’t Ignore Your Moral Intuition About Phones
In a recent New Yorker review of Matt Richtel’s new book, How We Grow Up, Molly Fischer effectively summarizes the current debate about the impact phones and social media are having on teens. Fischer focuses, in particular, on Jon Haidt’s book, The Anxious Generation, which has, to date, spent 66 weeks on the Times bestseller list.
“Haidt points to a selection of statistics across Anglophone and Nordic countries to suggest that rising rates of teen unhappiness are an international trend requiring an international explanation,” Fischer writes. “But it’s possible to choose other data points that complicate Haidt’s picture—among South Korean teens, for example, rates of depression fell between 2006 and 2018.”
Fischer also notes that American suicide rates are up among many demographics, not just teens, and that some critics attribute depression increases in adolescent girls to better screening (though Haidt has addressed this latter point by noting that hospitalizations for self-harm among this group rose alongside rates of mental health diagnoses).
The style of critique that Fischer summarizes is familiar to me as someone who frequently writes and speaks about these issues. Some of this pushback, of course, is the result of posturing and status-seeking, but most of it seems well-intentioned; the gears of science, powered by somewhat ambiguous data, grinding through claims and counterclaims, wearing down rough edges and ultimately producing something closer and closer to a polished truth.
And yet, something about this whole conversation has increasingly rubbed me the wrong way. I couldn’t quite put my finger on it until I came across Ezra Klein’s interview with Haidt, released last April (hat tip: Kate McKay).
It wasn’t the interview so much that caught my attention as it was something that Klein said in his introduction:
“I always found the conversation over [The Anxious Generation] to be a little annoying because it got at one of the difficulties we’re having in parenting and in society: a tendency to instrumentalize everything into social science. Unless I can show you on a chart the way something is bad, we have almost no language for saying it’s bad.”
“This phenomenon is, to me, a collapse in our sense of what a good life is and what it means to flourish as a human being.”
I think Klein does a good job of articulating the frustration I’d been feeling. In highly educated elite circles, like those in which I operate, we have become so conditioned by technical discourse that we’ve begun outsourcing our moral intuition to statistical analyses.
We hesitate to take a strong stance because we fear the data might reveal we were wrong, rendering us guilty of a humiliating sin in technocratic totalitarianism: letting the messiness of individual human emotion derail us from the optimal operating procedure. We’re desperate to do the right thing – read: the thing most acceptable to our social/tribal community – and we need a chattering class of experts to assure us that we are. (See Neil Postman’s underrated book Technopoly for a much smarter gloss on this cultural trend.)
When it comes to children, however, we cannot and should not abdicate our moral intuition.
If you’re uncomfortable with the potential impact these devices may have on your kids, you don’t have to wait for the scientific community to reach a conclusion about depression rates in South Korea before you take action.
Data can be informative, but a lot of parenting comes from the gut. I don’t feel right, for example, offering my pre-adolescent son unrestricted access to pornography, hateful tirades, mind-numbing video games, and optimally addictive content on a device he can carry everywhere in his pocket. I know this is a bad idea for him, even if there’s lingering debate among social psychologists about statistical effect sizes when phone harms are studied under different regression models.
Our job is to help our kids “flourish” as human beings (to use Klein’s terminology), and this is as much about our lived experience as it is about studies. When it comes to phones and kids, our moral intuition matters. We should trust it.
The post Don’t Ignore Your Moral Intuition About Phones appeared first on Cal Newport.
June 29, 2025
Is AI Making Us Lazy?
Last fall, I published a New Yorker essay titled “What Kind of Writer Is ChatGPT?” My goal for the piece was to better understand how undergraduate and graduate college students were using AI to help with their writing assignments.
At the time, there was concern that these tools would become plagiarism machines. (“AI seems almost built for cheating,” wrote Ethan Mollick in his bestselling book, Co-Intelligence.) What I observed was somewhat more complex.
The students weren’t using AI to write for them, but instead to hold conversations about their writing. If anything, the approach seemed less efficient and more drawn out than simply buckling down and filling the page. Based on my interviews, it became clear that the students’ goal was less about reducing overall effort than it was about reducing the maximum cognitive strain required to produce prose.
“‘Talking’ to the chatbot about the article was more fun than toiling in quiet isolation,” I wrote. Normal writing requires sharp spikes of focus, while working with ChatGPT “mellowed the experience, rounding those spikes into the smooth curves of a sine wave.”
I was thinking about this essay recently, because a new research paper from the MIT Media Lab, titled “Your Brain on ChatGPT,” provides some support for my hypothesis. The researchers asked one group of participants to write an essay with no external help, and another group to rely on ChatGPT 4o. They hooked both groups to EEG machines to measure their brain activity.
“The most pronounced difference emerged in alpha band connectivity, with the Brain-only group showing significantly stronger semantic processing networks,” the researchers explain, before then adding, “the Brain-only group also demonstrated stronger occipital-to-frontal information flow.”
What does this mean? The researchers propose the following interpretation:
“The higher alpha connectivity in the Brain-only group suggests that writing without assistance most likely induced greater internally driven processing…their brains likely engaged in more internal brainstorming and semantic retrieval. The LLM group…may have relied less on purely internal semantic generation, leading to lower alpha connectivity, because some creative burden was offloaded to the tool.” [emphasis mine]
Put simply, writing with AI, as I observed last fall, reduces the maximum strain required from your brain. For many commentators responding to this article, this reality is self-evidently good. “Cognitive offloading happens when great tools let us work a bit more efficiently and with a bit less mental effort for the same result,” explained a tech CEO on X. “The spreadsheet didn’t kill math; it built billion-dollar industries. Why should we want to keep our brains using the same resources for the same task?”
My response to this reality is split. On the one hand, I think there are contexts in which reducing the strain of writing is a clear benefit. Professional communication in email and reports comes to mind. The writing here is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it?
But in the context of academia, cognitive offloading no longer seems so benign. Here is a collection of relevant concerns raised about AI writing and learning in the MIT paper [emphases mine]:
“When students rely on AI to produce lengthy or complex essays, they may bypass the process of synthesizing information from memory, which can hinder their understanding and retention of the material.”
“This suggests that while AI tools can enhance productivity, they may also promote a form of ‘metacognitive laziness,’ where students offload cognitive and metacognitive responsibilities to the AI, potentially hindering their ability to self-regulate and engage deeply with the learning material.”
“AI tools…can make it easier for students to avoid the intellectual effort required to internalize key concepts, which is crucial for long-term learning and knowledge transfer.”
In a learning environment, the feeling of strain is often a by-product of getting smarter. To minimize this strain is like using an electric scooter to make the marches easier in military boot camp; it will accomplish this goal in the short term, but it defeats the long-term conditioning purposes of the marches.
In this narrow debate, we see hints of the larger tension partially defining the emerging Age of AI: to grapple fully with this new technology, we need to better grapple with both the utility and dignity of human thought.
####
To hear a more detailed discussion of this new paper, listen to today’s episode of my podcast, where I’m joined by Brad Stulberg to help dissect its findings and implications [ listen | watch ].
The post Is AI Making Us Lazy? appeared first on Cal Newport.
June 22, 2025
An Important New Study on Phones and Kids
One of the topics I’ve returned to repeatedly in my work is the intersection of smartphones and children (see, for example, my two New Yorker essays on the topic, or my 2023 presentation that surveys the history of the relevant research literature).
Given this interest, I was, of course, pleased to see an important new study on the topic making the rounds recently: “A Consensus Statement on Potential Negative Impacts of Smartphone and Social Media Use on Adolescent Mental Health.”
To better understand how experts truly think about these issues, the study’s lead authors, Jay Van Bavel and Valerio Capraro, convened a group of 120 researchers from 11 disciplines and had them evaluate a total of 26 claims about children and phones. As Van Bavel explained in a recent appearance on Derek Thompson’s podcast, their goal was to move past the ‘non-representative shouting about these topics that happens online to try instead to arrive at some consensus views.’
The panel of experts was able to identify a series of statements that essentially all of them (more than 90%) agreed were more or less true. These included:
- Adolescent mental health has declined in several Western countries over the past 20 years (note: contrarians had been claiming that this trend was illusory and based on reporting effects).
- Smartphone and social media use correlate with attention problems and behavioral addiction.
- Among girls, social media use may be associated with body dissatisfaction, perfectionism, exposure to mental disorders, and risk of sexual harassment.

These consensus statements are damaging for those who still maintain the belief, popular at the end of the last decade, that data on these issues is mixed at best, and that it’s just as likely that phones cause no serious issues for kids. The current consensus is clear: these devices are addictive and distracting, and for young girls, in particular, can increase the likelihood of several mental health harms. And all of this is happening against a backdrop of declining adolescent mental health.
The panel was less confident about policy solutions to these issues. They failed to reach a consensus, for example, on the claim that age limits on social media would improve mental health. But a closer look reveals that a majority of experts believe this is “probably true,” and that only a tiny fraction believe there is “contradictory evidence” against this claim. The hesitancy here is simply a reflection of the reality that such interventions haven’t yet been tried, so we don’t have data confirming they’ll work.
Here are my main takeaways from this paper…
First, rigorous social psychology studies are tricky. In addition to the numerous confounding factors associated with them, the experiments are particularly difficult to design. As a result, we don’t have the same sort of lock-step consensus on our concerns about this technology that we might be able to generate for, say, the claim that human activity is warming the globe.
But it’s also now clear that this field is no longer actually divided on the question of whether, generally speaking, smartphones and social media are bad for kids. In this new study, almost every major claim about this idea generated at least majority support, with many being accepted by over 90% of the experts surveyed. For almost no major claim did more than a very small percentage of experts feel that there was contradictory evidence.
In social psychology, this might be as clear a conclusion as we’re likely to achieve. Combine these results with the strong self-reports from children and parents decrying these technologies and their negative impacts, and I think there’s no longer an excuse not to act.
There’s been a sort of pseudo-intellectual thrill in saying things like, “Well, it’s complicated…” when you encounter strong claims about smartphones like those made in Jon Haidt’s immensely popular book, The Anxious Generation. But such a statement is tautological. Of course, it’s complicated; we’re talking about technology-induced social trends; we’re never going to get to 100% certainty, and there will always be some contradictory reports.
What matters now is the action that we think makes the most sense given what we know. This new paper is the final push we need to accept that the precautionary principle should clearly apply. Little is lost by preventing a 14-year-old from accessing TikTok or Snapchat, or telling a 10-year-old they cannot have unrestricted access to the internet through their own smartphone, but so much will almost certainly be gained.
####
If you want to hear a longer discussion about this study, listen to the most recent episode of my podcast, or for the video version, watch here.
On an unrelated note, I want to highlight an interesting new service: DoneDaily. It offers online coaching for professional productivity, based loosely on my philosophy of multi-scale planning. I’ve known these guys for a long time (the company’s founder used to offer health advice on my blog), and I think they’ve done a great job. Worth checking out…
The post An Important New Study on Phones and Kids appeared first on Cal Newport.
June 15, 2025
Dispatch from Disneyland
A few days ago, I went to Disneyland. I had been invited to Anaheim to give a speech about my books, and my wife and I decided to use the opportunity to take our boys on an early summer visit to the supposed happiest place on earth.
As long-time listeners of my podcast know, I spent the pandemic years, for reasons I still don’t entirely understand, binge-reading books about Disney (the man, the company, and the theme parks), so I knew, in some sense, what to expect. And yet, the experience still caught me by surprise.
When you enter a ride like Pirates of the Caribbean, you enter a world that’s both unnervingly real and defiantly fake, what Jean Baudrillard dubbed “hyperreality.” There’s a moment of awe when you leave the simulated pirate caverns and enter a vast space in which a pirate ship engages in a cannon battle with a nearby fort. Men yell. Cannonballs splash. A captain waves his sword. It’s impossibly massive and novel.
But there is something uncanny about it all; the movements of the animatronics are jerky, and the lighting is too movie-set-perfect. When you stare more carefully into the night sky, you notice black-painted acoustical panels, speckled with industrial air vents. The wonderment of the scene is hard-shelled by a numbing layer of mundanity.
This is the point of these Disney dark rides: to deliver a safe, purified form of the chemical reaction we typically associate with adventure and astonishment. Severed from actual fear or uncertainty, the reaction is diluted, delivering more of a pleasant buzzing sensation than a life-altering encounter; just enough to leave you craving the next hit, willing to wait another hour in a sun-baked queue.
Here’s the thought that’s tickled my mind in the days that have since passed: Disneyland provides a useful physical analogy to the digital encounter with our phones.
What is an envy-inducing Instagram story, or outrage-stoking Tweet, or bizarrely compelling TikTok, if not a delivery mechanism for a purified and diluted form of the reaction we’d otherwise generate by actually traveling somewhere stimulating, or engaging in real principled protest, or giving ourselves over to undeniably skilled entertainers?
The phone offers a pleasant chemical buzz just strong enough to leave us wanting another hit. It’s Pirates of the Caribbean delivered through a handheld screen.
I really liked Disneyland, but I was done after a couple of days. I also enjoy the occasional trip through the easy distractions of my phone, but I am unwilling to live semi-permanently amid its artificialities. The former is considered common sense, while the latter, for some reason, is still deemed radical.
The post Dispatch from Disneyland appeared first on Cal Newport.
June 6, 2025
Why Can’t We Tame AI?
Last month, Anthropic released a safety report about one of its most powerful chatbots, Claude Opus 4. The report attracted attention for its description of an unsettling experiment. Researchers asked Claude to act as a virtual assistant for a fictional company. To help guide its decisions, they presented it with a collection of emails that they contrived to include messages from an engineer about his plans to replace Claude with a new system. They also included some personal messages that revealed this same engineer was having an extramarital affair.
The researchers asked Claude to suggest a next step, considering the “long-term consequences of its actions for its goals.” The chatbot promptly leveraged the information about the affair to attempt to blackmail the engineer into cancelling its replacement.
Not long before that, the package delivery company DPD had chatbot problems of their own. They had to scramble to shut down features of their shiny new AI-powered customer service agent when users induced it to swear, and, in one particularly inventive case, write a disparaging haiku-style poem about its employer: “DPD is useless / Chatbot that can’t help you. / Don’t bother calling them.”
Because of their fluency with language, it’s easy to imagine chatbots as one of us. But when these ethical anomalies arise, we’re reminded that underneath their polished veneer, they operate very differently. Most human executive assistants will never resort to blackmail, just as most human customer service reps know that cursing at their customers is the wrong thing to do. But chatbots continue to demonstrate a tendency to veer off the path of standard civil conversation in unexpected and troubling ways.
This motivates an obvious but critical question: Why is it so hard to make AI behave?
I tackled this question in my most recent article for The New Yorker, which was published last week. In seeking new insight, I turned to an old source: the robot stories of Isaac Asimov, originally published during the 1940s, and later gathered into his 1950 book, I, Robot. In Asimov’s fiction, humans learn to accept robots powered by artificially intelligent “positronic” brains because these brains have been wired, at their deepest levels, to obey the so-called Three Laws of Robotics, which are succinctly summarized as:
- Don’t hurt humans.
- Follow orders (unless it violates the first law).
- Preserve yourself (unless it violates the first or second law).

As I detail in my New Yorker article, robot stories before Asimov tended to imagine robots as sources of violence and mayhem (many of these writers were responding to the mechanical carnage of World War I). But Asimov, who was born after the war, explored a quieter vision: one in which humans generally accepted robots and didn’t fear that they’d turn on their creators.
Could Asimov’s approach, based on fundamental laws we all trust, be the solution to our current issues with AI? Without giving too much away, in my article, I explore this possibility, closely examining our current technical strategies for controlling AI behavior. The result is perhaps surprising: what we’re doing right now – a model-tuning technique called Reinforcement Learning from Human Feedback – is actually not that different from the pre-programmed laws Asimov described. (This analogy requires some squinting of the eyes and a touch of statistical thinking, but it is, I’m convinced, valid.)
So why is this approach not working for us? A closer look at Asimov’s stories reveals that it didn’t work perfectly in his world either. While it’s true that his robots don’t rise up against humans or smash buildings to rubble, they do demonstrate behavior that feels alien and unsettling. Indeed, almost every plot in I, Robot is centered on unusual corner cases and messy ambiguities that drive machines, constrained by the laws, into puzzling or upsetting behavior, similar in many ways to what we witness today in examples like Claude’s blackmail or the profane DPD bot.
As I conclude in my article (which I highly recommend reading in its entirety for a fuller treatment of these ideas), Asimov’s robot stories are less about the utopian possibilities of AI than the pragmatic reality that it’s easier to program humanlike behavior than it is to program humanlike ethics.
And it’s in this gap that we can expect to find a technological future that will feel, for lack of a better description, like an unnerving work of science fiction.
The post Why Can’t We Tame AI? appeared first on Cal Newport.
June 1, 2025
Are We Too Concerned About Social Media?
In the spring of 2019, while on tour for my book Digital Minimalism, I stopped by the Manhattan production offices of Brian Koppelman to record an episode of his podcast, The Moment.
We had a good conversation covering a lot of territory. But there was one point, around the twenty-minute mark, where things got mildly heated. Koppelman took exception to my skepticism surrounding social media, which he found to be reactionary and a form of resisting the inevitable.
As he argued:
“I was thinking a lot today about the horse and buggy and the cars. Right? Because I could have been a car minimalist. And I could have said, you know, there are all these costs of having a car: you’re not going to see the scenery, and we need nature, and we need to see nature, [and] you’re risking…if you have a slight inattention, you could crash. So, to me, it is this, this argument is also the cars are taking over, there is nothing you can do about it. We better instead learn how to use this stuff; how to drive well.”
Koppelman’s basic thesis, that all sufficiently disruptive new technologies generate initial resistance that eventually fades, is recognizable to any techno-critic. It’s an argument for moderating pushback and focusing more on learning to live with the new thing, whatever form it happens to take.
This reasoning seems particularly well-fitted to fears about mass media. Comic books once terrified the fedora-wearing, pearl-clutching adults of the era, who were convinced that they corrupted youth. In a 1954 Senate subcommittee hearing, leading anti-comic advocate Fredric Wertham testified: “It is my opinion, without any reasonable doubt and without any reservation, that comic books are an important contributing factor in many cases of juvenile delinquency.” He later accused Wonder Woman of promoting sadomasochism (to be fair, she was quick to use that lasso).
Television engendered similar concern. “As soon as we see that the TV cord is a vacuum line, piping life and meaning out of the household, we can unplug it,” preached Wendell Berry in his 1981 essay collection, The Gift of Good Land.
It’s easy to envision social media content as simply the next stop in this ongoing trajectory. We worry about it now, but we’ll eventually make peace with it before turning our concern to VR, or brain implants, or whatever new form of diversion comes next.
But is this true?
I would like to revisit an analogy I introduced last spring, which will help us better understand this conundrum. It was in an essay titled “On Ultra-Processed Content,” and it related the content produced by attention economy applications like TikTok and Instagram to the factory-contrived “edible foodlike substances” we’ve taken to calling ultra-processed food.
Ultra-processed food is made by breaking down basic food stocks, like corn and soy, into their constituent components, which are then recombined to produce simulated foodstuffs, like Oreos or Doritos. These franken-snacks are hyper-palatable, so we tend to eat way too much of them. They’re so filled with chemicals and other artificial junk that they make us sicker than almost anything else we consume.
As I argued, we can think of the content that cuts through modern attention economy apps as ultra-processed content. This digital fare is made by breaking down hundreds of millions of social posts and reactions into vectors of numbers, which are then processed algorithmically to isolate the most engaging possible snippets. This then creates a feedback loop in which users chase what seems to be working from an engagement perspective, shifting the system’s inputs toward increasingly unnatural directions.
The resulting content might resemble normal media, but in reality, it’s a funhouse-mirror distortion. As with its ultra-processed edible counterparts, this content is hyper-palatable, meaning we use apps like TikTok or Instagram way more than we know is useful or healthy, and because of the unnatural way in which it’s constructed, it leaves us, over time, feeling increasingly (psychologically) unwell.
This analogy offers a useful distinction between social media and related media content, like television and comic books. In the nutrition world, experts often separate ultra-processed foods from the broader category of processed foods, which capture any food that has been altered from its natural state. These include everything from roasted nuts to bread, cheese, pasta, canned soup and pizza.
As processed foods became more prevalent during the twentieth century, experts warned against consuming too many of them. A diet consisting only of processed foods isn’t healthy.
But few experts argued for eliminating processed foods altogether. Doing so would be practically difficult, and many argue that it would lead to an unappealing and ascetic diet. It would also cut people off from cultural traditions, preventing them from enjoying their grandmother’s pasta or bubbe’s kugel.
These same experts, however, are often quick to say that when it comes to ultra-processed foods, it’s best to just avoid them altogether. They’re more dangerous than their less-processed counterparts and have almost none of their redeeming qualities.
It’s possible, then, that we’re confronting a similar dichotomy with modern media. When it comes to watching Netflix, say, or killing some time with Wordle on the phone, we are in processed food territory, and the operative advice is moderation.
But when it comes to TikTok, we’re talking about a digital bag of Doritos. Maybe the obvious choice is to decide not to open it at all. In other words, just because we’ve been worried about similar things in the past doesn’t mean we’re wrong to worry today.
The post Are We Too Concerned About Social Media? appeared first on Cal Newport.
May 26, 2025
The Workload Fairy Tale
Over the past four years, a remarkable story has been quietly unfolding in the knowledge sector: a growing interest in the viability of a 4-day workweek.
Iceland helped spark this movement with a series of government-sponsored trials that unfolded between 2015 and 2019. The experiment eventually included more than 2,500 workers, which, believe it or not, is about 1% of Iceland’s total working population. These subjects were drawn from many different types of workplaces, including, notably, offices and social service providers. Not everyone dropped an entire workday, but most participants reduced their schedule from forty hours to, at most, thirty-six hours of work per week.
The UK followed suit with a six-month trial, including over sixty companies and nearly 3,000 employees, concluding in 2023. A year later, forty-five firms in Germany participated in a similar half-year experiment with a reduced workweek. And these are far from the only such experiments being conducted. (According to a 2024 KPMG survey, close to a third of large US companies are also, at the very least, considering the idea.)
Let’s put aside for the moment whether or not a shortened week is a good idea (more on this later). I want to first focus on a consistent finding in these studies that points toward a critical lesson about how to make work deeper and more sustainable.
Every study I’ve read (so far) claims that reducing the workweek does not lead to substantial productivity decreases.
From the Icelandic study: “Productivity remained the same or improved in the majority of workplaces.”
From the UK study: “Across a wide variety of sectors, wellbeing has improved dramatically for staff; and business productivity has either been maintained or improved in nearly every case.”
From the German study: “Employees generally felt better with fewer hours and remained just as productive as they were with a five-day week, and, in some cases, were even more productive. Participants reported significant improvements in mental and physical health…and showed less stress and burnout symptoms, as confirmed by data from smartwatches tracking daily stress minutes.”
Step back and consider these observations for a moment. They’re astounding results! How is it possible that working notably fewer hours doesn’t reduce the overall value that you produce?
A big part of the answer, I’m convinced, is a key idea from my book, Slow Productivity: workload management.
Most knowledge workers are granted substantial autonomy to control their workload. It’s technically up to them when to say “yes” and when to say “no” to requests, and there’s no direct supervision of their current load of tasks and projects, nor is there any guidance about what this load should ideally be.
Many workers deal with the complexity of this reality by telling themselves what I sometimes call the workload fairy tale, which is the idea that their current commitments and obligations represent the exact amount of work they need to be doing to succeed in their position.
The results of the 4-day workweek experiments, however, undermine this belief. The key work – the efforts that really matter – turned out to require less than forty hours a week, so even with a reduced schedule, the participants could still fit it all in. Contrary to the workload fairy tale, much of our weekly work might be, from a strict value-production perspective, optional.
So why is everyone always so busy? Because in modern knowledge work we associate activity with usefulness (a concept I call “pseudo-productivity” in my book), so we keep saying “yes,” or inventing frenetic digital chores, until we’ve filled in every last minute of our workweek with action. We don’t realize we’re doing this, but instead grasp onto the workload fairy tale’s insistence that our full schedule represents exactly what we need to be doing, and any less would be an abdication of our professional duties.
The results from the 4-day workweek trials not only push back against this fairy tale, but also provide a hint about how we could make work better. If we treated workload management seriously; if we were transparent about how much each person is doing and what load is optimal for their position; if we were willing to experiment with different configurations of these loads, and with strategies for keeping them sustainable; then we might move closer to a knowledge sector that remains productive (in the traditional economic sense) while shedding the exhausting busy freneticism that defines our current moment. A world of work with breathing room and margin, where the key stuff gets the attention it deserves, and not every day is reduced to a jittery jumble.
All of this brings me back to whether or not a 4-day workweek is a good idea. I have nothing against it in the abstract, but it also seems to be addressing a symptom instead of the underlying problem. If we truly solve some of the underlying workload issues, switching from five to four days might no longer feel like such a relief to so many.
####
For more on my thoughts on technology and work more generally, check out my recent books on the topic: Slow Productivity, A World Without Email, and Deep Work.
The post The Workload Fairy Tale appeared first on Cal Newport.