Cal Newport's Blog, page 2
June 15, 2025
Dispatch from Disneyland
A few days ago, I went to Disneyland. I had been invited to Anaheim to give a speech about my books, and my wife and I decided to use the opportunity to take our boys on an early summer visit to the supposed happiest place on earth.
As long-time listeners of my podcast know, I spent the pandemic years, for reasons I still don’t entirely understand, binge-reading books about Disney (the man, the company, and the theme parks), so I knew, in some sense, what to expect. And yet, the experience still caught me by surprise.
When you enter a ride like Pirates of the Caribbean, you enter a world that’s both unnervingly real and defiantly fake, what Jean Baudrillard dubbed “hyperreality.” There’s a moment of awe when you leave the simulated pirate caverns and enter a vast space in which a pirate ship engages in a cannon battle with a nearby fort. Men yell. Cannonballs splash. A captain waves his sword. It’s impossibly massive and novel.
But there is something uncanny about it all; the movements of the animatronics are jerky, and the lighting is too movie-set-perfect. When you stare more carefully into the night sky, you notice black-painted acoustical panels, speckled with industrial air vents. The wonderment of the scene is hard-shelled by a numbing layer of mundanity.
This is the point of these Disney darkroom rides: to deliver a safe, purified form of the chemical reaction we typically associate with adventure and astonishment. Severed from actual fear or uncertainty, the reaction is diluted, delivering more of a pleasant buzzing sensation than a life-altering encounter; just enough to leave you craving the next hit, willing to wait another hour in a sun-baked queue.
Here’s the thought that’s tickled my mind in the days that have since passed: Disneyland provides a useful physical analogy to the digital encounter with our phones.
What is an envy-inducing Instagram story, or outrage-stoking Tweet, or bizarrely compelling TikTok, if not a delivery mechanism for a purified and diluted form of the reaction we’d otherwise generate by actually traveling somewhere stimulating, or engaging in real principled protest, or giving ourselves over to undeniably skilled entertainers?
The phone offers a pleasant chemical buzz just strong enough to leave us wanting another hit. It’s Pirates of the Caribbean delivered through a handheld screen.
I really liked Disneyland, but I was done after a couple of days. I also enjoy the occasional trip through the easy distractions of my phone, but I am unwilling to live semi-permanently amid its artificialities. The former is considered common sense, while the latter, for some reason, is still deemed radical.
The post Dispatch from Disneyland appeared first on Cal Newport.
June 6, 2025
Why Can’t We Tame AI?
Last month, Anthropic released a safety report about one of its most powerful chatbots, Claude Opus 4. The report attracted attention for its description of an unsettling experiment. Researchers asked Claude to act as a virtual assistant for a fictional company. To help guide its decisions, they presented it with a collection of emails that they contrived to include messages from an engineer about his plans to replace Claude with a new system. They also included some personal messages that revealed this same engineer was having an extramarital affair.
The researchers asked Claude to suggest a next step, considering the “long-term consequences of its actions for its goals.” The chatbot promptly leveraged the information about the affair to attempt to blackmail the engineer into cancelling its replacement.
Not long before that, the package delivery company DPD had chatbot problems of their own. They had to scramble to shut down features of their shiny new AI-powered customer service agent when users induced it to swear, and, in one particularly inventive case, write a disparaging haiku-style poem about its employer: “DPD is useless / Chatbot that can’t help you. / Don’t bother calling them.”
Because of their fluency with language, it’s easy to imagine chatbots as one of us. But when these ethical anomalies arise, we’re reminded that underneath their polished veneer, they operate very differently. Most human executive assistants will never resort to blackmail, just as most human customer service reps know that cursing at their customers is the wrong thing to do. But chatbots continue to demonstrate a tendency to veer off the path of standard civil conversation in unexpected and troubling ways.
This motivates an obvious but critical question: Why is it so hard to make AI behave?
I tackled this question in my most recent article for The New Yorker, which was published last week. In seeking new insight, I turned to an old source: the robot
Stories of Isaac Asimov, originally published during the 1940s, and later gathered into his 1950 book, I, Robot. In Asimov’s fiction, humans learn to accept robots powered by artificially intelligent “positronic” brains because these brains have been wired, at their deepest levels, to obey the so-called Three Laws of Robotics, which are succinctly summarized as:
Don’t hurt humans.Follow orders (unless it violates the first law).Preserve yourself (unless it violates the first or second law).As I detail in my New Yorker article, robot stories before Asimov tended to imagine robots as sources of violence and mayhem (many of these writers were responding to the mechanical carnage of World War I). But Asimov, who was born after the war, explored a quieter vision; one in which humans generally accepted robots and didn’t fear that they’d turn on their creators.
Could Asimov’s approach, based on fundamental laws we all trust, be the solution to our current issues with AI? Without giving too much away, in my article, I explore this possibility, closely examining our current technical strategies for controlling AI behavior. The result is perhaps surprising: what we’re doing right now – a model-tuning technique called Reinforcement Learning with Human Feedback – is actually not that different from the pre-programmed laws Asimov described. (This analogy requires some squinting of the eyes and a touch of statistical thinking, but it is, I’m convinced, valid.)
So why is this approach not working for us? A closer look at Asimov’s stories reveals that it didn’t work perfectly in his world either. While it’s true that his robots don’t rise up against humans or smash buildings to rubble, they do demonstrate behavior that feels alien and unsettling. Indeed, almost every plot in I, Robot is centered on unusual corner cases and messy ambiguities that drive machines, constrained by the laws, into puzzling or upsetting behavior, similar in many ways to what we witness today in examples like Claude’s blackmail or the profane DPD bot.
As I conclude in my article (which I highly recommend reading in its entirety for a fuller treatment of these ideas), Asimov’s robot stories are less about the utopian possibilities of AI than the pragmatic reality that it’s easier to program humanlike behavior than it is to program humanlike ethics.
And it’s in this gap that we can expect to find a technological future that will feel, for lack of a better description, like an unnerving work of science fiction.
The post Why Can’t We Tame AI? appeared first on Cal Newport.
June 1, 2025
Are We Too Concerned About Social Media?
In the spring of 2019, while on tour for my book Digital Minimalism, I stopped by the Manhattan production offices of Brian Koppleman to record an episode of his podcast, The Moment.
We had a good conversation covering a lot of territory. But there was one point, around the twenty-minute mark, where things got mildly heated. Koppleman took exception to my skepticism surrounding social media, which he found to be reactionary and resisting the inevitable.
As he argued:
“I was thinking a lot today about the horse and buggy and the cars. Right? Because I could have been a car minimalist. And I could have said, you know, there are all these costs of having a car: you’re not going to see the scenery, and we need nature, and we need to see nature, [and] you’re risking…if you have a slight inattention, you could crash. So, to me, it is this, this argument is also the cars are taking over, there is nothing you can do about it. We better instead learn how to use this stuff; how to drive well.”
Koppleman’s basic thesis, that all sufficiently disruptive new technologies generate initial resistance that eventually fades, is recognizable to any techno-critic. It’s an argument for moderating pushback and focusing more on learning to live with the new thing, whatever form it happens to take.
This reasoning seems particularly well-fitted to fears about mass media. Comic books once terrified the fedora-wearing, pearl-clutching adults of the era, who were convinced that they corrupted youth. In a 1954 Senate subcommittee meeting, leading anti-comic advocate Fredric Wertham testified: “It is my opinion, without any reasonable doubt and without any reservation, that comic books are an important contributing factor in many cases of juvenile delinquency.” He later accused Wonder Woman of promoting sadomasochism (to be fair, she was quick to use that lasso).
Television engendered similar concern. “As soon as we see that the TV cord is a vacuum line, piping life and meaning out of the household, we can unplug it,” preached Wendell Berry in his 1981 essay collection, The Gift of the Good Land.
It’s easy to envision social media content as simply the next stop in this ongoing trajectory. We worry about it now,but we’ll eventually make peace with it before turning our concern to VR, or brain implants, or whatever new form of diversion comes next.
But is this true?
I would like to revisit an analogy I introduced last spring, which will help us better understand this conundrum. It was in an essay titled “On Ultra-Processed Content,” and it related the content produced by attention economy applications like TikTok and Instagram to the factory-contrived “foodlike edible substances” we’ve taken to calling ultra-processed food.
Ultra-processed food is made by breaking down basic food stock, like corn and soy, into their constituent components, which are then recombined to produce simulated foodstuffs, like Oreos or Doritos. These franken-snacks are hyper-palatable, so we tend to eat way too much of them. They’re so filled with chemicals and other artificial junk that they make us sicker than almost anything else we consume.
As I argued, we can think of the content that cuts through modern attention economy apps as ultra-processed content. This digital fare is made by breaking down hundreds of millions of social posts and reactions into vectors of numbers, which are then processed algorithmically to isolate the most engaging possible snippets. This then creates a feedback loop in which users chase what seems to be working from an engagement perspective, shifting the system’s inputs toward increasingly unnatural directions.
The resulting content might resemble normal media, but in reality, it’s a fun house-mirror distortion. As with its ultra-processed edible counterparts, this content is hyper-palatable, meaning we use apps like TikTok or Instagram way more than we know is useful or healthy, and because of the unnatural way in which it’s constructed, it leaves us, over time, feeling increasingly (psychologically) unwell.
This analogy offers a useful distinction between social media and related media content, like television and comic books. In the nutrition world, experts often separate ultra-processed foods from the broader category of processed foods, which capture any food that has been altered from its natural state. These include everything from roasted nuts to bread, cheese, pasta, canned soup and pizza.
As processed foods became more prevalent during the twentieth century, experts warned against consuming too many of them. A diet consisting only of processed foods isn’t healthy.
But few experts argued against eliminating processed foods altogether. This would be practically difficult, and many argue that it would lead to an unappealingly and ascetic diet. It would also cut people off from cultural traditions, preventing them from enjoying their grandmother’s pasta or bubbe’s kugel.
These same experts, however, are often quick to say that when it comes to ultra-processed foods, it’s best to just avoid them altogether. They’re more dangerous than their less-processed counterparts and have almost none of their redeeming values.
It’s possible, then, that we’re confronting a similar dichotomy with modern media. When it comes to watching Netflix, say, or killing some time with Wordle on the phone, we are in processed food territory, and the operative advice is moderation.
But when it comes to TikTok, we’re talking about a digital bag of Doritos. Maybe the obvious choice is to decide not to open it at all. In other words, just because we’ve been worried about similar things in the past doesn’t mean we’re wrong to worry today.
The post Are We Too Concerned About Social Media? appeared first on Cal Newport.
May 26, 2025
The Workload Fairy Tale
Over the past four years, a remarkable story has been quietly unfolding in the knowledge sector: a growing interest in the viability of a 4-day workweek.
Iceland helped spark this movement with a series of government-sponsored trials which unfolded between 2015 and 2019. The experiment eventually included more than 2,500 workers, which, believe it or not, is about 1% of Iceland’s total working population. These subjects were drawn from multiple different types of workplaces, including, notably, offices and social service providers. Not everyone dropped an entire workday, but most participants reduced their schedule from forty hours to at most thirty-six hours a week of work.
The UK followed suit with a six-month trial, including over sixty companies and nearly 3,000 employees, concluding in 2023. A year later, forty-five firms in Germany participated in a similar half-year experiment with a reduced workweek. And these are far from the only such experiments being conducted. (According to a 2024 KPMG survey, close to a third of large US companies are also, at the very least, considering the idea.)
Let’s put aside for the moment whether or not a shortened week is a good idea (more on this later). I want to first focus on a consistent finding in these studies that points toward a critical lesson about how to make work deeper and more sustainable.
Every study I’ve read (so far) claims that reducing the workweek does not lead to substantial productivity decreases.
From the Icelandic study: “Productivity remained the same or improved in the majority of workplaces.”
From the UK study: “Across a wide variety of sectors, wellbeing has improved dramatically for staff; and business productivity has either been maintained or improved in nearly every case.”
From the German study: “Employees generally felt better with fewer hours and remained just as productive as they were with a five-day week, and, in some cases, were even more productive. Participants reported significant improvements in mental and physical health…and showed less stress and burnout symptoms, as confirmed by data from smartwatches tracking daily stress minutes.”
Step back and consider these observations for a moment. They’re astounding results! How is it possible that working notably fewer hours doesn’t reduce the overall value that you produce?
A big part of the answer, I’m convinced, is a key idea from my book, Slow Productivity: workload management.
Most knowledge workers are granted substantial autonomy to control their workload. It’s technically up to them when to say “yes” and when to say “no” to requests, and there’s no direct supervision of their current load of tasks and projects, nor is there any guidance about what this load should ideally be.
Many workers deal with the complexity of this reality by telling themselves what I sometimes call the workload fairy tale, which is the idea that their current commitments and obligations represent the exact amount of work they need to be doing to succeed in their position.
The results of the 4-day work week experiment, however, undermine this belief. The key work – the efforts that really matter – turned out to require less than forty hours a week of effort, so even with a reduced schedule, the participants could still fit it all in. Contrary to the workload fairytale, much of our weekly work might be, from a strict value production perspective, optional.
So why is everyone always so busy? Because in modern knowledge work we associate activity with usefulness (a concept I call “pseudo-productivity” in my book), so we keep saying “yes,” or inventing frenetic digital chores, until we’ve filled in every last minute of our workweek with action. We don’t realize we’re doing this, but instead grasp onto the workload fairy tale’s insistence that our full schedule represents exactly what we need to be doing, and any less would be an abdication of our professional duties.
The results from the 4-day work week not only push back against this fairy tale, but also provide us with a hint about how we could make work better. If we treated workload management seriously, and were transparent about how much each person is doing, and what load is optimal for their position; if we were willing to experiment with different possible configurations of these loads, and strategies for keeping them sustainable, we might move closer to a productive knowledge sector (in a traditional economic sense) free of the exhausting busy freneticism that describes our current moment. A world of work with breathing room and margin, where key stuff gets the attention it deserves, but not every day is reduced to a jittery jumble.
All of this brings me back to whether or not a 4-day workweek is a good idea. I have nothing against it in the abstract, but it also seems to be addressing a symptom instead of the underlying problem. If we truly solve some of the underlying workload issues, switching from five to four days might no longer feel like such a relief to so many.
####
For more on my thoughts on technology and work more generally, check out my recent books on the topic: Slow Productivity, A World Without Email, and Deep Work.
The post The Workload Fairy Tale appeared first on Cal Newport.
May 19, 2025
AI and Work (Some Predictions)

One of the main topics of this newsletter is the quest to cultivate sustainable and meaningful work in a digital age. Given this objective, it’s hard to avoid confronting the furiously disruptive potentials of AI.
I’ve been spending a lot time in recent years, in my roles as a digital theorist and technology journalist, researching and writing about this topic, so it occurred to me that it might be useful to capture in one place all of my current thoughts about the intersection of AI and work.
The obvious caveat applies: these predictions will shift — perhaps even substantially — as this inherently unpredictable sector continues to evolve. But here’s my current best stab at what’s going on now, what’s coming soon, and what’s likely just hype.
Let’s get to it…
Where AI Is Already Making a SplashWhen generative AI made its show-stopping debut a few years ago, the smart money was on text production becoming the first killer app. For example, business users, it was thought, would soon outsource much of the tedious communication that makes up their day — meeting summaries, email, reports — into AI tools.
A fair amount of this is happening, especially when it comes to lengthy utilitarian communication where the quality doesn’t matter much. I recently attended a men’s retreat, for example, and it was clear that the organizer had used ChatGPT to create the final email summarizing the weekend schedule. And why not? It got the job done and saved some time.
It’s becoming increasingly clear, however, that for most people the act of writing in their daily lives isn’t a major problem that needs to be solved, which is capping the predicted ubiquity of this use case. (A survey of internet users found that only around 5.4% had used ChatGPT to help write emails and letters. And this includes the many who maybe experimented with this capability once or twice before moving on.)
The application that has instead leaped ahead to become the most exciting and popular use of these tools is smart search. If you have a question, instead of turning to Google you can query a new version of ChatGPT or Claude. These models can search the web to gather information, but unlike a traditional search engine, they can also process the information they find and summarize for you only what you care about. Want the information presented in a particular format, like a spreadsheet or a chart? A high-end model like GPT-4o can do this for you as well, saving even more extra steps.
Smart search has become the first killer app of the generative AI era because, like any good killer app, it takes an activity most people already do all the time — typing search queries into web sites — and provides a substantially, almost magically better experience. This feels similar to electronic spreadsheets conquering paper ledger books or email immediately replacing voice mail and fax. I would estimate that around 90% of the examples I see online right now from people exclaiming over the potential of AI are people conducting smart searches.
This behavioral shift is appearing in the data. A recent survey conducted by Future found that 27% of US-based respondents had used AI tools such as ChatGPT instead of a traditional search engine. From an economic perspective, this shift matters. Earlier this month, the stock price for Alphabet, the parent company for Google, fell after an Apple executive revealed that Google searches through the Safari web browser had decreased over the previous two months, likely due to the increased use of AI tools.
Keep in mind, web search is a massive business, with Google earning over $175 billion from search ads in 2023 alone. In my opinion, becoming the new Google Search is likely the best bet for a company like OpenAI to achieve profitability, even if it’s not as sexy as creating AGI or automating all of knowledge work (more on these applications later).
The other major success story for generative AI at the moment is computer programming. Individuals with only rudimentary knowledge of programming languages can now produce usable prototypes of simple applications using tools like ChatGPT, and somewhat more advanced projects with AI-enhanced agent-style helpers like Roo Code. This can be really useful for quickly creating tools for personal use or seeking to create a proof-of-concept for a future product. The tech incubator Y Combinator, for example, made waves when they reported that a quarter of the start-ups in their Winter 2025 batch generated 95% or more of their product’s codebases using AI.
How far can this automated coding take us? An academic computer scientist named Judah Diament recently went viral for noting that the ability for novice users to create simple applications isn’t new. There have been systems dedicated to this purpose for over four decades, from HyperCard to VisualBasic to Flash. As he elaborates: “And, of course, they all broke down when anything slightly complicated or unusual needs to be done (as required by every real, financially viable software product or service).”
This observation created major backlash — as does most expressions of AI skepticism these days — but Diament isn’t wrong. Despite recent hyperbolic statements by tech leaders, many professional programmers aren’t particularly worried that their jobs can be replicated by language model queries, as so much of what they do is experience-based architecture design and debugging, which are unrelated skills for which we currently have no viable AI solution.
Software developers do, however, use AI heavily: not to produce their code from scratch, but instead as helper utilities. Tools like GitHub’s Copilot are integrated directly into the environments in which these developers already work, and make it much simpler to look up obscure library or AI calls, or spit out tedious boilerplate code. The productivity gains here are notable. Programming without help from AI is rapidly becoming increasingly rare.
The Next Big AI ApplicationLanguage model-based AI systems can respond to prompts in pretty amazing ways. But if we focus only on outputs, we underestimate another major source of these models’ value: their ability to understand human language. This so-called natural language processing ability is poised to transform how we use software.
There is a push at the moment, for example, led by Microsoft and its Copilot product (not to be confused with GitHub Copilot), to use AI models to provide natural language interfaces to popular software. Instead of learning complicated sequences of clicks and settings to accomplish a task in these programs, you’ll be able to simply ask for what you need; e.g., “Hey Copilot, can you remove all rows from this spreadsheet where the dollar amount in column C is less than $10 dollars then sort everything that remains by the names in Column A? Also, the font is too small, make it somewhat larger.”
Enabling novice users to access to expert-level features in existing software will aggregate into huge productivity gains. As a bonus, the models required to understand these commands don’t have to be nearly as massive and complicated as the current cutting-edge models that the big AI companies use to show off their technology. Indeed, they might be small enough to run locally on devices, making them vastly cheaper and more efficient to operate.
Don’t sleep on this use case. Like smart search, it’s also not as sexy as AGI or full automation, but I’m increasingly convinced that within the next half-decade or so, informally-articulated commands are going to emerge as one of the dominate interfaces to the world of computation.
What About Agents?One of the more attention-catching storylines surrounding AI at the moment is the imminent arrival of so-called agents which will automate more and more of our daily work, especially in the knowledge sectors once believed to be immune from machine encroachment.
Recent reports imply that agents are a major part of OpenAI’s revenue strategy for the near future. The company imagines business customers paying up to $20,000 a month for access to specialized bots that can perform key professional tasks. It’s the projection of this trend that led Elon Musk to recently quip: “If you want to do a job that’s kinda like a hobby, you can do a job. But otherwise, AI and the robots will provide any goods and services that you want.”
But progress in creating these agents has recently slowed. To understand why requires a brief snapshot of the current state of generative AI technology…
Not long ago, there was a belief in so-called scaling laws that argued, roughly speaking, that as you continued to increase the size of language models, their abilities would continue to rapidly increase.
For a while this proved true: GPT-2 was much better than the original GPT, GPT-3 was much better than GPT-2, and GPT-4 was a big improvement on GPT-3. The hope was that by continuing to scale these models, you’d eventually get to a system so smart and capable that it would achieve something like AGI, and could be used as the foundation for software agents to automate basically any conceivable task.
More recently, however, these scaling laws have begun to falter. Companies continue to invest massive amounts of capital in building bigger models, trained on ever-more GPUs crunching ever-larger data sets, but the performance of these models stopped leaping forward as much as they had in the past. This is why the long-anticipated GPT-5 has not yet been released, and why, just last week, Meta announced they were delaying the release of their newest, biggest model, as its capabilities were deemed insufficiently better than its predecessor.
In response to the collapse of the scaling laws, the industry has increasingly turned its attention in another direction: tuning existing models using reinforcement learning.
Say, for example, you want to make a model that is particularly good at math. You pay a bunch of math PhDs $100 an hour to come up with a lot of math problems with step-by-step solutions. You then take an existing model, like GPT-4, and feed it these problems one-by-one, using reinforcement learning techniques to tell it exactly where it’s getting certain steps in its answers right or wrong. Over time, this tuned model will get better at solving this specific type of problem.
This technique is why OpenAI is now releasing multiple, confusingly-named models, each seemingly optimized for different specialties. These are the result of distinct tunings. They would have preferred, of course, to simply produce a GPT-5 model that could do well on all of these tasks, but that hasn’t worked out as they hoped.
This tuning approach will continue to develop interesting tools, but it will be much more piecemeal and hit-or-miss than what was anticipated when we still believed in scaling laws. Part of the difficulty is that this approach depends on finding the right data for each task you want to tackle. Certain problems, like math, computer programming, and logical reasoning, are well-suited for tuning as they can be described by pairs of prompts and correct answers. But this is not the case for many other business activities, which can be esoteric and bespoke to a given context. This means many useful activities will remain un-automatable by language model agents into the foreseeable future.
I once said that the real Turing Test for our current age is an AI system that can successfully empty my email inbox, a goal that requires the mastery of any number of complicated tasks. Unfortunately for all of us, this is not a test we’re poised to see passed any time soon.
Are AGI and Superintelligence Imminent?The Free Press recently published an article titled “AI Will Change What it Means to Be Human. Are We Ready?”. It summarized a common sentiment that has been feverishly promoted by Silicon Valley in recent years: that AI is on the cusp of changing everything in unfathomably disruptive ways.
As the article argues:
OpenAI CEO Sam Altman asserted in a recent talk that GPT-5 will be smarter than all of us. Anthropic CEO Dario Amodei described the powerful AI systems to come as “a country of geniuses in a data center.” These are not radical predictions. They are nearly here.
But here’s the thing: these are radical predictions. Many companies tried to build the equivalent of the proposed GPT-5 and found that continuing to scale up the size of their models isn’t yielding the desired results. As described above, they’re left tuning the models they already have for specific tasks that are well-described by synthetic data sets. This can produce cool demos and products, but it’s not a route to a singular “genius” system that’s smarter than humans in some general sense.
Indeed, if you look closer at the rhetoric of the AI prophets in recent months, you’ll see a creeping awareness that, in a post-scaling law world, they no longer have a convincing story for how their predictions will manifest.
A recent Nick Bostrom video, for example, which (true to character) predicts Superintelligence might happen in less than two years (!), adds the caveat that this outcome will require key “unlocks” from the industry, which is code for we don’t know how to build systems that achieve this goal, but, hey, maybe someone will figure it out!
(The AI centrist Gary Marcus subsequently mocked Bostrom by tweeting: “for all we know, we could be just one unlock and 3-6 weeks away from levitation, interstellar travel, immortality, or room temperature superconductors, or perhaps even all four!”)
Similarly, if you look closer at AI 2027, the splashy new doomsday manifesto which argues that AI might eliminate humanity as early as 2030, you won’t find a specific account of what type of system might be capable of such feats of tyrannical brilliance. The authors instead sidestep the issue by claiming that within the next year or so, the language models we’re tuning to solve computer programming tasks will somehow come up with, on their own, code that implements breakthrough new AI technology that mere humans cannot understand.
This is an incredible claim. (What sort of synthetic data set do they imagine being able to train a language model to crack the secrets of human-level intelligence?) It’s the technological equivalent of looking at the Wright Brother’s Flyer in 1903 and thinking, “well, if they could figure this out so quickly, we should have space travel cracked by the end of the decade.”
The current energized narratives around AGI and Superintelligence seem to be fueled by a convergence of three factors: (1) the fact that scaling laws did apply for the first few generations of language models, making it easy and logical to imagine them continuing to apply up the exponential curve of capabilities in the years ahead; (2) demos of models tuned to do well on specific written tests, which we tend to intuitively associate with intelligence; and (3) tech leaders pounding furiously on the drums of sensationalism, knowing they’re rarely held to account on their predictions.
But here’s the reality: We are not currently on a trajectory to genius systems. We might figure this out in the future, but the “unlocks” required will be sufficiently numerous and slow to master that we’ll likely have plenty of clear signals and warning along the way. So, we’re not out of the woods on these issues, but at the same time, humanity is not going to be eliminated by the machines in 2030 either.
In the meantime, the breakthroughs that are happening, especially in the world of work, should be both exciting and worrisome enough on their own for now. Let’s grapple with those first.
####
For more of my thoughts on AI, check out my New Yorker archive and my podcast (in recent months, I often discuss AI in the third act of the show).
For more on my thoughts on technology and work more generally, check out my recent books on the topic: Slow Productivity, A World Without Email, and Deep Work.
The post AI and Work (Some Predictions) appeared first on Cal Newport.
February 24, 2025
Back to the (Internet) Future

On Saturday, the Washington Nationals baseball team played their first spring training game of the season. I was listening to the radio call in the background as I went about my day. I also, however, kept an eye on a community blog called Talk Nats.
The site moderators had posted an article about today’s game. As play unfolded, a group of Nationals fans gathered in the comment threads to discuss the unfolding action.
Much of the discussion focused on specific plays.
“Nasty from Ferrer,” noted a commenter, soon after one of the team’s best relief pitchers, Jose Ferrer, struck out two batters.
“Looks like we took the Ferreri [sic] out of the garage,” someone else replied.
There were also jokes, such as when, early in the game, someone deadpanned: “Anyone who K’s [strikes out] is cut.” As well as more general discussion of the season ahead.
If you followed the thread long enough, it became clear that many of the commenters know each other, while others were meeting for the first time. As the game wrapped up, someone mentions that they’re listening from a part of Canada that recently received three feet of snow. Another commentator replied by recalling a trip they took to that same area: “It was amazing.”
Ultimately, over 540 comments were left over the duration of an otherwise uneventful, early season exhibition match.
I first wrote about Talk Nats in a 2023 article for The New Yorker, titled “We Don’t Need Another Twitter.” In that piece, I was responding specifically to the launch of Meta’s Threads platform, but I had a more general point as well: perhaps it had been a mistake to try to organize the internet’s activity around a small number of massive, privately-controlled platforms, used by hundreds of millions of users all at once.
“Forcing millions of people into the same shared conversation is unnatural, requiring aggressive curation that in turn leads to the type of supercharged engagement that seems to leave everyone upset and exhausted,” I wrote. “Aggregation as a goal in this context survives…for the simple reason that it’s lucrative.”
Boutique sites like Talk Nats, by contrast, offer something closer to the original vision for the internet, which was more focused on connection and discovery; a place where a baseball fan from Canada could spend an afternoon delighting with a few dozen of his likeminded brethren about a lazy afternoon baseball game in Florida.
This is the internet as a source of joy. And it’s the opposite of the giddy paranoia or coldly-optimized numbness delivered on massive platforms like X or TikTok.
I was thinking about that New Yorker piece today as I was following the game on Talk Nats. Those ideas, it occurred to me, are even more true right now than they were when I first published them.
“I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us,” wrote John Perry Barlow in his seminal 1996 document, A Declaration of the Independence of Cyberspace. “You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.”
In the thirty years that passed, we have allowed exactly this type of soul-deadening tyranny to take hold of cyberspace — an unavoidable consequence of consolidating this once distributed and quirky medium into a small number of massive platforms.
I really enjoyed my time today on Talk Nats. I didn’t come away angry or depressed, and was more uplifted than brought down. Maybe it’s time to declare independence once again.
#####
In other news…
–> For another take on this same topic, see River Page’s recent Free Press essay, “The Online Right is Building a Monster,” which does a good job of detailing the unsavory dynamics that can arise on massive internet platforms. (His critiques of both the online right and online left hit home in this one.) The solution to the woes Page documents? Stop using these services!
–> In the audio world, on Episode 341 of my podcast, released earlier this morning, I extract a lesson about the importance (and difficulty) of fighting overload in our digital world.
–> Meanwhile, as long as we’re discussing meaningful online spaces, I’ll point your attention over to The Growth Equation, where my friends Steve and Brad have posted another one of their (rightfully) famed manifestos: “How to Save Youth Sports.” [ read | subscribe ]
The post Back to the (Internet) Future appeared first on Cal Newport.
February 17, 2025
Productivity Rain Dances

A reader recently sent me a clip from Chris Williamson’s podcast. In the segment, Williamson discusses his evolving relationship with productivity:
“Look, I come from a productivity background. When I first started this show, I was chatting shit about Pomodoro timers, and Notion external brains, and Ebbinhaus forgetting curves, and all of that. Right? I’ve been through the ringer, so I’m allowed to say, and, um, you realize after a while that it ends up being this weird superstitious rain dance you’re doing, this sort of odd sort of productivity rain dance, in the desperate hope that later that day you’re going to get something done.”
I was intrigued by this term “productivity rain dance.” Some additional research revealed that Williamson had discussed the concept before. In a post from last summer, he listed the following additional examples of rain dance activities:
“Sitting at my desk when I’m not working”“Being on calls with no actual objective”“Keeping Slack notifications at zero, sitting on email trying to get the Unread number down”“Saying yes to a random dinner when someone is coming through town”What do these varied examples, from obsessing over Ebbinhaus forgetting curves to waging war against your email inbox, have in common? They’re focused on activity in the moment instead of results over time. “The problem is that no one’s productivity goal is to maximize inputs,” Williamson explains. “It’s to maximize outputs.”
When you look around the modern office environment, and see everyone frantically answering emails as they jump on and off Zoom meetings, or watch to solo-entrepreneur lose a morning to optimizing their ChatGPT-powered personalized assistant, you’re observing rain dances. Everyone’s busy, but is no one is asking if all these gyrations are actually opening the clouds.
The solution to the rain dance phenomenon is not to abandon organizational systems or routines altogether, nor is to crudely commit to working less. It’s instead, as Williamson suggests, to turn your attention from inputs to outputs. Identify the most valuable thing you do in your job, and then figure out what actually helps you do it better. This is what you should focus on.
The answers to these questions aren’t necessarily easy. As I talk about in Slow Productivity, making more time for key efforts often requires that you first tame the less important activities that are getting in the way. You probably need a more formal workload management philosophy to avoid overload, such as using quotas or separating “active” tasks from “waiting” tasks. You’ll also need better collaboration processes that avoid the distraction of constant messaging, such as using regular office hours for complicated discussions, and some notion of time management, such as time blocking, to maintain control of your schedule.
What separates these grounded productivity efforts from productivity rain dances is that they’re not symbolic, nor are they exercises in busyness for the sake of busyness. (What I call “pseudo-productivity” in my book.) Their success is instead measured by the concrete results they produce. As a result, they’re not flashy, or high-tech, or even all that exciting to deploy. But they work.
Rain dances can be satisfying. They feel important and active in the moment, and give you all sorts of little details to tweak and adjust. But ultimately, if your goal is to reap a rich harvest, there’s no avoiding the necessity to get down among your crops, sweat on your brow, and actually work the land.
#####
In other news…
–> For an extended discussion of productivity rain dances, check out Episode #340 of my podcast.
–> If you want to see me discussing productivity with Williamson, check out my appearance on his show from last spring.
–> Over at Growth Equation, Brad Stulberg recently wrote an essay I really enjoyed: “The Case for Mastery and Mattering in a Chaotic World” [ read | subscribe ]
–> Amazon has my latest book, Slow Productivity, discounted all the way to $18.00. If you were on the fence about checking it out, this would be a good time!
The post Productivity Rain Dances appeared first on Cal Newport.
February 10, 2025
Let Brandon Cook

I recently listened to Tim Ferriss interview the prolific fantasy author Brandon Sanderson (see here for my coverage of Sanderson’s insane underground writing lair). Tim traveled to Utah to talk to Sanderson at the headquarters of his 70-person publishing and merchandising company, Dragonsteel Books.
The following exchange, from early in the conversation, caught my attention:
Ferriss: “It seems like, where we’re sitting –and we’re sitting at HQ — it seems like the design of Dragonsteel, maybe the intent behind it, is to allow you to do that [come up with stories] on some level.”
Sanderson: “Yeah, yeah, I mean everything in our company is built around, ‘let Brandon cook.’ And take away from Brandon anything he doesn’t have to think about, or doesn’t strictly need to.”
As someone who writes a lot about knowledge work in the digital age, I’m fascinated by this model of cooking, which I define as follows: a workflow designed to enable someone with a high-return skill to spend most of their time applying that skill, without distraction.
It makes sense to me that Dragonsteel goes out of its way to protect Sanderson’s ability to think and write. The roughly 300,000 words he produces per year is the raw material with which his company’s revenue is ultimately built. To significantly reduce Sanderson’s ability to produce those words might make some of his employees’ lives easier, but it would be like reducing the amount of steel shipped to an automotive assembly-line; eventually you’re going to ship many fewer cars, and your sales will plummet.
What doesn’t make sense to me is why this cooking model is so rare in knowledge work more generally. To be clear, this approach doesn’t apply to all jobs. At the moment, for example, as a full professor in Georgetown’s computer science department, I’m taking my turn as the Director of Undergraduate Studies (DUS). This is not a position built around a singular high-return skill, so it would make no sense for the department to orient around “letting Cal cook” as DUS.
But, it’s also true that there are many jobs where, like for Sanderson, letting individuals focus on a single high-return activity could really boost the bottom line. I’m thinking, for example, of programmers, researchers, engineers, and any number of creative industry positions. And yet, we almost never see something like Sanderson’s focused setup replicated.
A major culprit here is technology. Digital communication eliminates most of the friction required to command other peoples’ time and attention toward your own benefit. It costs essentially nothing to shoot off a quick message with a question, or to ask someone to jump on a call, or to pass along a task that just occurred to you.
In such an environment, in the absence of hard barriers, most people get inexorably dragged toward a degenerate equilibrium state defined by constant distraction and obligation saturation. (I’ve written two books about this effect if you want to learn more about it.) If Sanderson didn’t explicitly build his entire company around letting him cook, in other words, then he would likely find himself instead spending much of his day answering email.
What I would like to see is a world in which many organizations have, at the very least, a handful of Sanderson-type positions — employees with super high-value skills that are left alone to apply them in a focused manner. This would only impact a relatively small percentage of workers, so why would it matter? Because it would represent a notable incursion against the broader embrace of pseudo-productivity — the idea that busyness is synonymous with usefulness, and more activity is better than less. It would open our eyes to the idea that some activities are more valuable than others, and in-the-moment convenience is over-rated in the office setting. It would empower more organizations to explore more radical and interesting ways of structuring how they get things done.
I don’t need us to figure this out immediately, but it would be nice, however, if we could make some progress before my stint as DUS comes to an end. By then, I’ll for sure be ready to cook.
#####
In other news…
If you want hear more on this topic, listen to Episode 339 of my podcast, in which I discuss this topic in more detail, including more practical ideas about how to formalize and spread the cooking model.
My good writer friends Brad Stulberg and Steve Magness, over at The Growth Equation, recently published a great essay on their newsletter titled: “A Letter to My Younger Self: On Regret, Resilience, and Dealing with the Messiness of Life.” [ read online | subscribe ]
(Note: Steve also just published a great new book that I highly recommend: Win the Inside Game.)
Have you checked out my new book, Slow Productivity, yet? You should! In case it helps persuade you, it was recently revealed to be one of the top #5 most popular non-fiction books of 2024 in the Seattle library system, and the #1 most popular self-help audiobook of 2024 in the LA library system. (Wait, do I live on the wrong coast?)
The post Let Brandon Cook appeared first on Cal Newport.
January 22, 2025
The TikTok Ban Is About More Than TikTok

On Saturday night, in compliance with a law that the U.S. Supreme Court had just upheld, TikTok shut down its popular video-sharing app for American users. On Sunday, after an incoming president Trump vowed to negotiate a deal once in office, they began restoring service. It’s unclear what will happen next, as some lawmakers in the president’s own party remain firmly in favor of the divest-or-ban demand, while some democrats seemed to back-pedal.
From my perspective as a technology critic, the ultimate fate of this particular app is not the most important storyline here. What interests me more about these events is the cultural rubicon that we just crossed. To date, we’ve largely convinced ourselves that once a new technology is introduced and spread, we cannot go backward.
Social media became ubiquitous so now we’re stuck using it. Kids are zoning themselves into a stupor on TikTok, or led into rabbit holes of mental degeneration on Instagram, and we shrug our shoulders and say, “What can you do?”
The TikTok ban, even if only temporary, demonstrates we can do things. These services are not sacrosanct. Laws can be passed and our lives will still go on.
So what else should we do? I’m less concerned at this moment about national security than I am the health of our kids. If we want to pass a law that might make an even bigger difference, now is a good time to take a closer look at what Australia did last fall, when they banned social media for users under sixteen. Not long ago, that might have seemed like a non-starter in the U.S. But after our recent action against TikTok, is it really any more extreme?
It’s fortuitous timing that all of this is going down during the New Year season, when we typically think about self-improvement. Next week, for example, Scott Young and I are launching a new session of our online course, Life of Focus, which we traditionally do around this time of year. This course unfolds over three months and helps people find more depth and meaning in their work and life. Here’s what relevant to our current moment: the entire first third of the course is dedicated to digital minimalism. Scott and I realized as we were originally working on these lessons that until you repair your relationship with your devices, you won’t have the attention or energy to make a difference anywhere else.
This is why it heartens me to see our culture begin to consider stronger steps against the most powerful of digital distractions — a key instantiation of my philosophy of techno-selectionism. But you shouldn’t have to wait for the next big legislative move to begin reclaiming your autonomy from the clutches of a small number of massive online platforms. You can implement your own personal technology bans anytime you want, and there’s nothing the president, or the industry insiders who have his ear at the moment, can do to stop you.
#####
As mentioned: Life of Focus, my three-month course co-taught with Scott Young, will reopen for a new session on Monday, January 27, 2025. Find out more here.
The post The TikTok Ban Is About More Than TikTok appeared first on Cal Newport.
January 13, 2025
Lessons from YouTube’s Extreme Makers

In 2006, a high school student from Ontario named James Hobson started posting to a new platform called YouTube. His early videos were meant for his friends, and focused on hobbies (like parkour) and silliness (like one clip in which he drinks a cup of raw eggs).
Hobson’s relationship with YouTube evolved in 2013. Now a trained engineer, he put his skills to work in crafting a pair of metal claws based on the Marvel character, Wolverine. The video was a hit. He then built a working version of the exoskeleton used by Matt Damon’s character in the movie Elysium. This was an even bigger hit. This idea of creating real life versions of props from comics and movies proved popular. Hobson quit his job to create these videos full-time, calling himself, “The Hacksmith.”
Around the same time that Hobson got started on YouTube, a young British plumber named Colin Furze also began experimenting with the platform. Like Hobson, he began by posting videos of his hobbies (like BMX tricks) and silliness (like a stunt in which tried to serve food to moving cars).
Furze’s relationship with YouTube evolved when he began posting record breaking attempts. The first in this informal series was his effort to create the world’s largest bonfire. (“I collected pallets for over a year.”) He drew attention from British media when he supercharged a mobility scooter to drive more than seventy miles per hour. This led to a brief stint as a co-host of a maker show called “Gadget Geeks” that aired on the then fledgling Sky TV. After that traditional media experience, he scored a hit on YouTube by attaching a jet engine to the back of a bicycle. He decided to fully commit to making a living on his own videos.
I wrote about Hobson and Furze in my most recent essay for The New Yorker, which was titled, “A Lesson in Creativity and Capitalism from Two Zany YouTubers.” What drew my attention to these characters, and provided the main focus for my article, is what happened after they decided to make posting videos their full-time jobs.
Hobson adopted a standard strategy from the media industry: he tried to grow as fast as possible. He moved from his garage to a leased warehouse, and then, when that lease ran out, he took on a multi-million dollar mortgage to buy an even larger warehouse. He soon had thirty employees and around a quarter million dollars a month in overhead.
Furze, by contrast, stayed small. He continued to film his videos in his home workshop and a nearby old barn. He worked almost entirely on his own, with the exception of sometimes having his wife help hold a camera, or his friend Rick come lend a hand when some extra strength was needed. Furze’s overhead was reduced to more or less the cost of materials. Everything else he earns he keeps.
Hobson and Furze’s opposite strategies provide a neat natural experiment in the economics of this quirky corner of YouTube. What were the results? In 2024, Hobson’s channel published twenty-five beautifully produced videos that attracted more than twenty-seven million total views. In the same period, Furze launched five solo-produced videos on his main channel that attracted eighteen million views. He also, however, maintained a second channel with behind-the-scenes footage that pushes his total views for the year to forty-three million, nearly double Hobson’s results.
As I write:
“Furze’s solo success is a quirky challenge to the traditional narrative that survival requires continually growing, and that a small number of well-financed winners eventually eat most of the economic pie. He demonstrates that in certain corners of the creative economy an individual with minimal overhead can work on select attention-catching projects and earn a generous upper-middle-class income. Beyond this relatively modest scale of activity, however, the returns on additional investment rapidly diminish. As Hobson’s experience suggests, there’s no obvious path for a D.I.Y. video creator to turn his channel into a multimillion-dollar empire, even if he wants to. Furze seems to be maxing out the financial potential of his medium by staying small.”
In my article, I go on to the explore the specific reasons why small works so well in this medium (hint: it has to do with maintaining an authentic personal connection with your audience). But what I want to emphasize here is my broader conclusion. I think these particular corners of YouTube, along with some related creator-focused Internet-based technologies, including emails newsletter and podcasts, are helping to carve out space for a relatively broad “creative middle class.”
As social media continues to falter and stumble in its role as a unifying cultural force, its model of people volunteering their creative labor in return for uncompensated attention is beginning to lose its appeal. Colin Furze is one among many who are revealing an alternative engagement with the online world; one in which it’s possible for someone with sufficient talent to make a good living with minimal investment and maximal flexibility.
As I conclude in my piece, it’s still really hard to succeed in this new creative economy. But at least there’s space now to do so. As I write:
“In our era of consolidation and polarization, many online spaces can seem dreary, toxic, addicting, or some combination of the three. As my colleague Kyle Chayka wrote in 2023, most of the Web just ‘isn’t fun anymore.’ In Furze, however, I sensed some of the optimism of the early Internet.”
Sounds good to me.
#####
In Other News…
For nearly two decades, my friend Adam Gilbert (featured here in a 2007 Study Hacks post) has run My Body Tutor, an immensely successful health and fitness app that is based on the simple but powerful idea of using online coaches to hold people accountable.
His team just launched a new platform called DoneDaily that brings this same coach-driven accountability to professional productivity. I’m mentioning it here because DoneDaily deploys a lot of ideas I talk about here and in my books — including, notably, multi-scale planning — but now combined with a dedicated coach who you check in with daily to make sure your plan makes sense and that you’re taking action.
Anyway, I thought this was one of those ideas that makes so much sense that it’s surprising it didn’t exist before. Indeed, it’s the type of thing I might have built on my own if I didn’t already have a bunch of jobs. So I’m glad Adam got there first and was happy, at his request, to help share it. Check it out!
(Note: I have an affiliate relationship with this site.)
The post Lessons from YouTube’s Extreme Makers appeared first on Cal Newport.
Cal Newport's Blog
- Cal Newport's profile
- 9841 followers
