Cal Newport's Blog

September 4, 2023

On Tools and the Aesthetics of Work

In the summer of 2022, an engineer named Keegan McNamara, who was at the time working for a fundraising technology startup, found his way to the Arms and Armor exhibit at the Met. He was struck by the unapologetic mixture of extreme beauty and focused function captured in the antique firearms on display. As reported in a recent profile of McNamara published in The Verge, this encounter with the past sparked a realization about the present:

“That combination of craftsmanship and utility, objects that are both thoroughly practical and needlessly outrageously beautiful, doesn’t really exist anymore. And it especially doesn’t exist for computers.”

Aesthetically, contemporary digital devices have become industrial and impersonal: grey and black rectangles carved into generically modern clean lines. Functionally, they offer the hapless user a cluttered explosion of potential activity, windows piling on top of windows, command bars thick with applications. Standing in the Arms and Armor exhibit, McNamara began to wonder if there was a way to rethink the PC; to save it from a predictable maximalism.

The result was The Mythic I, a custom computer that McNamara handcrafted over the year or so that followed that momentous afternoon at the Met. The machine is housed in a swooping hardwood frame carved using manual tools. An eight-inch screen is mounted above a 1980s IBM-style keyboard with big clacking keys that McNamara carefully lubricated to achieve exactly the right sound on each strike: “if you have dry rubbing of plastic, it doesn’t sound thock-y. It just sounds cheap.” Below the keyboard is an Italian leather hand rest. To turn it on, you insert and turn a key, then flip a toggle switch.

Equally notable is what happens once the machine is activated. McNamara designed the Mythic for three specific purposes: writing a novel, writing occasional computer code, and writing his daily journal. Accordingly, it runs a highly modular version of Linux called NixOS that he’s customized to offer only Emacs, a text-based editor popular among hacker types, launched from a basic green command line. You can’t go online, or create a PowerPoint presentation, or edit a video. It’s a writing machine, and like the antique arms that inspired it, the Mythic implements this functionality with a focused, beautiful utilitarianism.

In his critical classic, Amusing Ourselves to Death, Neil Postman argued that the form taken by the technologies we use impacts the fundamental nature of our cognition. When we switched media consumption from long newspaper articles to television soundbites, for example, our understanding of news lost its heft and became more superficial and emotionally-charged.

When pondering Keegan McNamara and the Mythic, I can’t help but apply Postman’s framework to the machines that organize our professional activities. The modern computer, with its generic styling and overloaded activity, creates a cognitive environment defined by urgent, bland, Sisyphean widget cranking — work as endless Slack and email and Zoom and “jumping on” calls, in which there is always too much to do, but no real sense of much of importance actually being accomplished.

In Keegan’s construction we find an alternative understanding of work, built instead on beauty, craftsmanship, and focus. Replacing everyone’s MacBook with custom-carved hardwood, of course, is not enough on its own to transform how we think about our jobs, as these issues have deeper roots. But the Mythic is a useful reminder that the rhythms of our professional lives are not pre-ordained. We craft the world in which we work, even if we don’t realize it.

#####

In other news: My longtime friend Brad Stulberg has a great new book out this week. It’s called Master of Change: How to Excel When Everything is Changing — Including You. In my cover blurb, I noted that this “immensely wise and timely book provides a roadmap for a tumultuous world.” I really mean it! The idea of preparing yourself to thrive, and not crumble, when faced with inevitable change is self-evidently important, and Brad does a great job of delivering the goods on this timely theme.

Pro-tip: if you do buy the book this week, go to Brad’s website to claim a bunch of cool pre-order bonuses that he’s offering through the first full week of publication.

Published on September 04, 2023 06:03

August 26, 2023

We Don’t Need a New Twitter

In July, Meta announced Threads, a new social media service that was obviously designed to steal market share from Twitter (which I still refuse to call X). You can’t blame Meta for trying. In the year or so that’s passed since Elon Musk vastly overpaid for the iconic short-text posting service, Twitter has been struggling, its cultural capital degrading rapidly alongside its infrastructure.

Meta’s plan with Threads is to capture the excitement of Twitter without all the controversy. Adam Mosseri, the executive in charge of Threads, recently said they were looking to provide “a less angry place for conversations.” His boss, Chris Cox, was more direct: “We’ve been hearing from creators and public figures who are interested in having a platform that is sanely run.”

Can Meta succeed with this plan to create a nicer Twitter? In my most recent article for The New Yorker, published earlier this month, I looked closer at this question and concluded the answer was probably “no.” At the core of Twitter’s ability to amplify the discussions that are most engaging to the internet hive mind at any one moment is its reliance on its users to help implement this curation.

As I explain in my piece:

The individual decision to retweet or quote a message, when scaled up to millions of active users, turns out to produce an eerily effective distributed selection process. A quip or observation that hits the Internet just right can quickly spark an information cascade, where retweets spawn more retweets—the original message branching exponentially outward until it reaches, seemingly all at once, an extensive readership.

This cybernetic approach to selecting trends embeds a Faustian bargain: it will generate engagement, but this engagement will be inevitably biased toward negativity and rancor, as, in the game of initiating information cascades, the more charged the missive, the more likely it is to succeed. Given the reality of these techno-dynamics, my conclusion is that Threads will not succeed with its mission. It can make its platform less angry by relying more on algorithms than humans to figure out what to share, but the result will be a more sanitized and boring experience, like a text-based Instagram feed, full of anodyne comments and bland influencer drivel.
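The cascade dynamic in the quoted passage is essentially a branching process, and it can be illustrated with a toy simulation. This is my own sketch with invented parameters, not a model from the article or the underlying research: each exposed user reshares with a probability that rises with the message's emotional "charge," and once the expected reshares per exposure crosses one, cascades tip from fizzling out to exploding.

```python
# Toy branching-process sketch of an information cascade. All parameters
# here are invented for illustration -- not from the article.
import random

def cascade_size(charge, followers=20, base_p=0.02, seed=0, cap=100_000):
    """Simulate one cascade; return the total number of users reached.

    Each exposed user reshares with probability p, which rises with the
    message's emotional "charge"; each reshare exposes `followers` more users.
    """
    rng = random.Random(seed)
    p = min(1.0, base_p * (1 + charge))
    frontier = followers            # users exposed by the original post
    reached = frontier
    while frontier and reached < cap:
        reshares = sum(rng.random() < p for _ in range(frontier))
        frontier = reshares * followers
        reached += frontier
    return reached

# The tipping point: when followers * p crosses 1, cascades flip from
# reliably fizzling out to (often) exploding toward the cap.
for charge in (0, 2, 5):
    print(charge, cascade_size(charge))
```

With these made-up numbers, an uncharged message has a branching factor of 0.4 (subcritical, so it dies quickly), while a highly charged one exceeds 1 and can branch outward until it saturates, which is the "eerily effective distributed selection process" in miniature.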

In the second half of my piece, I turn my attention to the bigger question: should we care? In other words, is it important that the internet host a successful global conversation platform on which hundreds of millions of people gather to discuss anything and everything on a common massive feed? I’ll point you toward my full article for my detailed examination of this issue, but if you’re a longtime reader of my newsletter, you can likely already guess where I’ll end up.

#####

In other news: The recent launch of the new spiral-bound version of my Time Block Planner was a big success. The positive feedback I’ve been receiving has been gratifying. If you’re still interested in learning more, I want to point you toward this recent podcast episode in which I provide a detailed overview of time blocking and the new planner, and then provide some advanced tips for getting the most out of a blocking discipline.

Published on August 26, 2023 06:13

August 6, 2023

Edsger Dijkstra’s One-Day Workweek

Within my particular subfield of theoretical computer science there’s perhaps no individual more celebrated than Edsger Dijkstra. His career spanned half a century, beginning with a young Dijkstra formulating and solving the now-classic shortest paths problem while working as a computer programmer at the Mathematical Center in Amsterdam, and ending with him as a renowned full professor holding a prestigious chair in the computer science department of the University of Texas at Austin.
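That shortest paths problem, and the algorithm Dijkstra devised for it, remains a staple of every algorithms course. A minimal sketch of the classic priority-queue formulation (the example graph is my own invention):

```python
# Dijkstra's single-source shortest paths algorithm, in its standard
# priority-queue (min-heap) formulation.
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source to every reachable node.

    graph: dict mapping node -> list of (neighbor, weight) pairs, with
    non-negative edge weights (the algorithm's key assumption).
    """
    dist = {source: 0}
    heap = [(0, source)]              # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                  # stale entry; a shorter path was found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest routes in a small weighted graph.
g = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],
}
print(dijkstra(g, "A"))  # -> {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Note how the direct A→C edge (weight 4) loses to the A→B→C route (weight 3): the heap always expands the closest unsettled node first, which is what makes the greedy choice safe when weights are non-negative.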

During this period, Dijkstra introduced some of the biggest ideas in distributed and concurrent computing, from semaphores and deadlock, to nondeterminacy and fairness. In 2003, the year after his death, the annual award given by the top conference in my field was renamed The Dijkstra Prize in his honor.

This is all to say that I was intrigued when an alert reader recently pointed my attention to a fascinating observation about Dijkstra’s career. In 1973, fresh off winning a Turing Award, the highest prize in all of computer science, Dijkstra accepted a research fellow position that the Burroughs Corporation created specifically for him. As his colleagues later recalled:

“[Dijkstra’s] duties consisted of visiting some of the company’s research centers a few times a year and carrying on his own research, which he did in the smallest Burroughs research facility, namely, his study on the second floor of his house in Nuenen.”

Dijkstra maintained an academic appointment during this period, but ramped down his involvement with his university so that he only visited campus one day per week, on Tuesdays, during which he would gather likeminded colleagues to read papers and discuss ideas. He even pulled back on the time-consuming task of preparing papers for peer-reviewed publication, capturing more of his ideas directly in hand-written, sequentially-numbered reports that he called “EWDs”, referencing his initials.

At this point, Dijkstra had become the opposite of busy. He spent almost all of his time thinking and recording his ideas. He only came to campus on Tuesdays. And yet, as Dijkstra’s colleagues noted:

“The Burroughs years saw him at his most prolific in output of research articles. He wrote nearly 500 documents in the EWD series.”

In this specific case study we see hints of a general observation about slow productivity. Busyness is not the engine of production. It can, in many cases, instead be the obstacle to accomplishing your best work.

#####

As you may have noticed, this newsletter took a little break over the summer, during which time I’ve been serving as a Montgomery Fellow up here at Dartmouth College in Hanover, New Hampshire. As the summer quarter winds down and my family and I prepare to move back from the idyllic Montgomery House here on campus to our home in Takoma Park, I’ve now restarted the newsletter, which should return to something like its normal rhythm of 2+ essays per month.

A couple quick administrative notes to share:

- The second edition of my Time Block Planner is launching on August 15th. I’ll probably post more about it closer to that date, but I’ll mention now that the new edition has spiral binding (!), a beautiful new grade of paper, and an extra month’s worth of planning pages. If you’re already thinking about ordering one, you might consider pre-ordering it, as my editor warned me that once we sell out the first printing it might take a minute until we receive the next one (due to supply chain nonsense).
- We recently revealed the cover for my upcoming book on Slow Productivity, which is scheduled to come out in March. I made a conscious choice with this design to separate myself from the standard vernacular of business and advice guides and instead emphasize this book’s aspirational focus on crafting a more humane and sustainable life. I’ll of course be talking a lot more about all of this as we get closer to the release date next year.

Published on August 06, 2023 10:47

June 20, 2023

When Work Didn’t Follow You Home

In a recent article written for Slate, journalist Dan Kois recounts the shock his younger coworkers expressed when they discovered that he had, earlier in his career, earned a master’s degree while working a full-time job. “It was easy,” he explained:

“I worked at a literary agency during the day, I got off work at 5 p.m., and I studied at night. The key was that this was just after the turn of the millennium. ‘But what would you do when you had work emails?’ these coworkers asked. ‘I didn’t get work emails,’ I said. ‘I barely had the internet in my apartment.'”

In his article, Kois goes on to interview other members of Generation X about their lives in the early 2000s, before the arrival of smartphones or even widely available internet. They shared tales of coming home and just watching whatever show happened to be on TV (maybe “7th Heaven,” or “Law & Order”). They also talked about going to the movies on a random weekday evening because they had nothing else to do, or just heading to a bar where they hoped to run into friends, and often would.

The threads that kept catching my attention, however, were about work communication. “The very idea that, once work hours were over, no one could get hold of you—via email, text, Slack, whatever—is completely alien to contemporary young people,” Kois explained. But this reality made a huge difference when it came to the perception of busyness and exhaustion. When work was done at work, and there was no chance of continuing your labors at home, your job didn’t seem nearly as all-consuming or onerous.

There’s a lot about early 2000s culture I’m not eager to excavate, but this idea of the constrained workday certainly seems worthy of nostalgia.

Published on June 20, 2023 12:38

June 13, 2023

On the Slow Productivity of John Wick

I found myself recently, as one does, watching the mini-documentary featurettes included on the DVD for the popular 2014 Keanu Reeves movie, John Wick — an enjoyably self-aware neon noir revenge-o-matic, filmed cinematically on anamorphic lenses.

At the core of John Wick’s success are the action sequences. The movie’s director, Chad Stahelski, is a former stuntman who served as Reeves’s double in The Matrix trilogy and subsequently made a name for himself as a second unit director specializing in filming fights. When Reeves asked Stahelski to helm Wick, he had exactly this experience in mind. Stahelski rose to the challenge, making the ambitious choice to feature a visually arresting blend of judo, jiu-jitsu, and tactical 3-gun shooting. In contrast to the hand-held, chaotic, quick-cutting style that defines the Bourne and Taken franchises, Stahelski decided to capture his sequences in long takes that emphasized the balletic precision of the fighting.

The problem with this plan, of course, is that it required Keanu Reeves to become sufficiently good at judo, jiu-jitsu, and tactical 3-gun shooting so as not to look clumsy for Stahelski’s stable camera. Reeves was game. According to the featurette I watched, to prepare for production, he trained eight hours a day, four months in a row. The effort paid off. The action set pieces in the movie were show-stopping, and after initially struggling to find a distributor, the film, made on a modest budget, went on to earn $86 million, kicking off a franchise that has since brought in hundreds of millions more.

What struck me as I watched this behind-the-scenes feature is how differently creatives who work in the arts think about productivity as compared to creatives who work in office jobs. For Keanu Reeves, it was obvious that the most productive path was to focus all of his attention on a single goal: becoming really good at Stahelski’s innovative brand of gun fu. Doing this, and basically only this, month after month, materialized hundreds of millions of dollars of profit out of the entertainment ether.

In office jobs, by contrast, productivity remains rooted in notions of busyness and multi-faceted activity. The most productive knowledge workers are those who stay on top of their inboxes and somehow juggle the dozens of obligations, from small tasks to major projects, hurled in their direction every week. Movie-making is of course different from, say, being a marketing executive, or professor, or project manager, but creating things that are too good to be ignored, regardless of the setting, is an activity that almost without exception requires undivided attention. Are we so sure that the notion of “productive” that dominates knowledge work really is the most profitable use of our talents?

John Wick may be shallow entertainment, but the story of its success highlights some deep lessons about what the rest of us might be missing in our pursuit of a job well done.

Published on June 13, 2023 09:25

May 22, 2023

The End of Screens?

Image by Sightful

Believe it or not, one of the most important technology announcements of the past few months had nothing to do with artificial intelligence. While critics and boosters continue to stir and fret over the latest capabilities of ChatGPT, a largely unknown 60-person start-up, based out of Tel Aviv, quietly began demoing a product that might foretell an equally impactful economic disruption.

The company is named Sightful and their new offering is Spacetop: “the world’s first augmented reality laptop.” Spacetop consists of a standard computer keyboard tethered to a pair of goggles, styled like an unusually chunky pair of sport sunglasses. When you put on the goggles, the Spacetop technology inserts multiple large virtual computer screens into your visual field, floating above the keyboard as if you were using a computer connected to large external monitors.

As opposed to virtual reality technology, which places you into an entirely artificial setting, Spacetop is an example of augmented reality (AR), which places virtual elements into the real world. The goggles are transparent: when you put them on at your table in Starbucks you still see the coffee shop all around you. The difference is now there are also virtual computer screens floating above your macchiato.

To be clear, I don’t believe that this specific product, which is just now entering a limited, 1000-person beta testing phase, will imminently upend the technology industry. The goggles are still too big and unwieldy (more Google Glass than Ray Ban), and the field of vision for their virtual projections remains too limited to fully support the illusion of screens that exist in real space.

But I increasingly believe that Sightful may have stumbled into the right strategy for finally pushing AR into the mainstream. Unlike Magic Leap, the over-hyped Google-backed start-up that burned through $4 billion trying to develop a general-purpose AR device that could do all things for all people, Sightful has remained much more focused with their initial product.

Spacetop solves a narrow problem that’s perfectly suited for AR: limited screen space for mobile computing. Their initial audience will likely be power users who desperately crave monitor real estate. (As I learned researching a 2021 New Yorker article about working in virtual reality, computer programmers, in particular, will happily embrace even the most wonky of cutting-edge technologies if it allows them to use more windows simultaneously.)

This narrowness simplifies many of the technical issues that afflicted the general-purpose AR technologies developed by companies like Magic Leap. Projecting virtual screens is much easier than trying to render arbitrary 3D objects in a real space, as you don’t have to worry about matching the ambient lighting. Furthermore, the keyboard base provides a familiar user interface and vastly simplifies the process of tracking head movements.

In other words, this is a problem that AR has a chance to convincingly solve. And once this door is open, and AR emerges as a legitimate profitable consumer technology, significant disruption might soon follow.

Imagine the following scenario:

- In the third generation of their technology, Sightful achieves a small enough form factor and large enough field of vision for their AR goggles to appeal to the much broader market segment of business users looking for more screen space when working away from the office.
- Seeing the potential, Apple invests several hundred million dollars to develop the iGlass: a pair of fashion-forward AR goggles, connected wirelessly to an elegant, foldable base on which you can touch or type, marketed as a replacement for the iPad and MacBook that can fit in your pocket while still providing you a screen bigger than their biggest studio monitors.
- Spooked, Samsung scrambles to release a high-end AR television experience that allows you to enjoy a virtual 200-inch television in any room.
- Apple smells blood and adds television functionality as a software update to iGlass. Soon Samsung’s market drastically shrinks. This sets off the first of multiple cataclysmic consolidations in the consumer electronics sector.

Within a decade, we find ourselves in a world largely devoid of screens. Computation unfolds in the cloud and is presented to us as digital projections on thin plastic optical wave-guides positioned inches from our eyes.

I don’t, at this point, mean this prognostication to be either optimistic or dystopian. I want only to emphasize that in a moment in which we’re all so enthralled with the question of whether or not autoregressive token predictors might take our jobs, there are some other major technological fault lines that are beginning to rumble and might very well be close to radically shifting.

#####

In other news:

- Speaking of a potential AR revolution, I talked about Apple’s upcoming splashy entrance into this space during the final segment of Episode 249 of my podcast, Deep Questions.
- My friend Adam Alter, who I quoted extensively in Digital Minimalism, has a fantastic new book out titled Anatomy of a Breakthrough. Here’s my blurb from the back cover: “A deeply researched and compelling guide to breaking through the inevitable obstacles on the path to meaningful accomplishment.” Check it out!

Published on May 22, 2023 14:57

May 4, 2023

On Kids and Smartphones

Not long ago, my kids’ school asked me to give a talk to middle school students and their parents about smartphones. I’ve written extensively on the intersection of technology and society in both my books and New Yorker articles, but the specific issue of young people and phones is one I’ve only tackled on a small number of occasions (e.g., here and here). This invited lecture therefore provided me a great opportunity to bring myself up to speed on the research relevant to this topic.

I was fascinated by what I discovered.

In my talk, I ended up not only summarizing the current state-of-the-art thinking about kids and phones, but also diving into the history of this literature, including how it got started, evolved, adjusted to criticism, and, over the last handful of years, ultimately coalesced around a rough consensus.

Assuming that other people might find this story interesting, I recorded a version of this talk for Episode 246 of my podcast, Deep Questions. Earlier today, I also released it as a standalone video. If you’re concerned about, or even just interested in, what researchers currently believe to be true about the dangers involved in giving a phone to a kid before they’re ready, I humbly suggest watching my presentation.

In the meantime, I thought it might be useful to summarize a few of the more interesting observations that I uncovered:

- Concern that young people were becoming more anxious, and that smartphones might be playing a role, began to bubble up among mental health professionals and educators starting around 2012. It was, as much as anything else, Jean Twenge’s 2017 cover story for The Atlantic, titled “Have Smartphones Destroyed a Generation?”, that subsequently shoved this concern into the broader cultural conversation.
- Between 2017 and 2020, a period I call The Data Wars, there were many back-and-forth fights in the research literature, in which harms would be identified, followed by critics pushing back and arguing that the harms were exaggerated, followed then by responses to these critiques. This was normal and healthy: exactly the empirical thrust and parry you want to see in the early stages of an emerging scientific hypothesis.
- Over the last few years, a rough consensus has emerged that there really are significant harms in giving young people unrestricted access to the internet through smartphones. This is particularly true for pre-pubescent girls. This consensus arose in part because the main critiques raised during The Data Wars were resoundingly answered, and because, more recently, multiple independent threads of inquiry (including natural experiments, randomized controlled trials, and self-report data) all pointed toward the same indications of harm.
- The research community concerned about these issues is converging on the idea that the safe age to give a kid unrestricted access to a smartphone is 16. (The Surgeon General recently suggested something similar.)
- You might guess that the middle school students who attended my talk balked at this conclusion, but reality is more complicated. They didn’t fully embrace my presentation, but they didn’t reject it either. Many professed to recognize the harms of unrestricted internet access at their age and are wary about it. (My oldest son, by contrast, who is 10, is decidedly not happy with me for spreading these vile lies at his school.)

This is clearly a fascinating and complicated topic that seems to be rapidly evolving. If you’re struggling with these developments, I hope you find my talk somewhat useful. I’m convinced that our culture will eventually adapt to these issues. Ten years from now, there won’t be much debate about what’s appropriate when it comes to kids and these technologies. Until then, however, we’re all sort of on our own, so the more we know, the better off we’ll be.

Published on May 04, 2023 15:22

April 28, 2023

Danielle Steel and the Tragic Appeal of Overwork

Based on a tip from a reader, I recently tumbled down an esoteric rabbit hole aimed at the writing habits of the novelist Danielle Steel. Even if you don’t read Steel, you’ve almost certainly heard of her work. One of the best-selling authors of all time, Steel has written more than 190 books that have cumulatively sold over 800 million copies. She publishes multiple titles per year, often juggling up to five projects simultaneously. Unlike James Patterson, however, who also pushes out multiple books per year, Steel writes every word of every manuscript by herself.

How does she pull this off? She works all the time. According to a 2019 Glamour profile, Steel starts writing at 8:30 am and will continue all day and into the night. It’s not unusual for her to spend 20 to 22 hours at her desk. She eats one piece of toast for breakfast and nibbles on bittersweet chocolate bars for lunch. A sign in her office reads: “There are no miracles. There is only discipline.”

These details fascinate me. Steel is phenomenally successful, but her story reads like a Greek tragedy. She could, of course, decide to only write a single book per year, and still be a fabulously bestselling author, while also, you know, sleeping. Indeed, her cultural impact might even increase if she slowed down, as this extra breathing room might allow her to more carefully apply her abundant talent.

But there’s a primal action-reward feedback loop embedded into the experience of disciplined effort leading to success. Once you experience its pleasures it’s natural to crave more. For Steel, this dynamic seems to have spiraled out of control. Like King Midas, lost in his gilded loneliness, Steel cannot leave the typewriter. She earned everything she hoped for, but in the process she lost the ability to step away and enjoy it.

I think this dynamic, to one degree or another, impacts anyone who has been fortunate enough to experience some success in their field. Doing important work matters and sometimes this requires sacrifices. But there’s also a deep part of our humanity that responds to these successes — and the positive feedback they generate — by pushing us to seek this high at ever-increasing frequencies.

One of the keys to cultivating a deep life seems to be figuring out how to ride this razor’s edge; to avoid the easy cynicism of dismissing effort altogether, while also avoiding Steel’s 20-hour days. This is an incredibly hard challenge, yet it’s one that receives limited attention and generates almost no formal instruction. I don’t have a simple solution, but I thought it was worth emphasizing. For a notable subset of talented individuals, burnout is less about their exploitation by others than about their uneasy dialogue with themselves.

Published on April 28, 2023 14:51

April 13, 2023

My Thoughts on ChatGPT

In recent months, I’ve received quite a few emails from readers expressing concerns about ChatGPT. I remained quiet on this topic, however, as I was writing a big New Yorker piece on this technology and didn’t want to scoop my own work. Earlier today, my article was finally published, so now I’m free to share my thoughts.

If you’ve been following the online discussion about these new tools you might have noticed that the rhetoric about their impact has been intensifying. What started as bemused wonder about ChatGPT’s clever answers to esoteric questions moved to fears about how it could be used to cheat on tests or eliminate jobs before finally landing on calls, in the pages of the New York Times, for world leaders to “respond to this moment at the level of challenge it presents,” buying us time to “learn to master AI before it masters us.”

The motivating premise of my New Yorker article is the belief that this cycle of increasing concern is being fueled, in part, by a lack of a deep understanding about how this latest generation of chatbots actually operate. As I write:

“Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?”

I then spend several thousand words trying to detail the key ideas that explain how the large language models that drive tools like ChatGPT really function. I’m not, of course, going to replicate all of that exposition here, but I do want to briefly summarize two relevant conclusions:

- ChatGPT is almost certainly not going to take your job. Once you understand how it works, it becomes clear that ChatGPT’s functionality is crudely reducible to the following: it can write grammatically correct text about an arbitrary combination of known subjects in an arbitrary combination of known styles, where “known” means it encountered it sufficiently many times in its training data. This ability can produce impressive chat transcripts that spread virally on Twitter, but it’s not useful enough to disrupt most existing jobs. The bulk of the writing that knowledge workers actually perform tends to involve bespoke information about their specific organization and field. ChatGPT can write a funny poem about a peanut butter sandwich, but it doesn’t know how to write an effective email to the Dean’s office at my university with a subtle question about our hiring policies.
- ChatGPT is absolutely not self-aware, conscious, or alive in any reasonable definition of these terms. The large language model that drives ChatGPT is static. Once it’s trained, it does not change; it’s a collection of simply-structured (though massive in size) feed-forward neural networks that do nothing but take in text as input and spit out new words as output. It has no malleable state, no updating sense of self, no incentives, no memory. It’s possible that we might one day create a self-aware AI (keep an eye on this guy), but if such an intelligence does arise, it will not be in the form of a large language model.
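The "static model" point can be made concrete with a toy sketch of the autoregressive loop. This is my own illustration, with a trivial hand-written bigram table standing in for the billions of frozen neural-network weights in a real large language model: the model maps the text so far to a distribution over next words, samples one, appends it, and repeats; nothing in the model itself ever changes during generation.

```python
# Toy illustration of the autoregressive loop: a frozen "model" is queried
# repeatedly, and only the growing text changes. A tiny bigram table stands
# in for the massive feed-forward networks of a real LLM.
import random

# "Trained" model: fixed next-word probabilities, never updated at use time.
BIGRAMS = {
    "the": [("cat", 0.5), ("dog", 0.5)],
    "cat": [("sat", 1.0)],
    "dog": [("ran", 1.0)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(prompt, n_words, seed=0):
    """Repeatedly map the text so far to a distribution, sample, append."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = BIGRAMS.get(words[-1])
        if not options:              # no known continuation: stop
            break
        r, acc = rng.random(), 0.0
        for word, p in options:      # sample the next word
            acc += p
            if r < acc:
                words.append(word)
                break
    return " ".join(words)

print(generate("the", 3))
```

The point of the sketch is what it lacks: there is no memory between calls, no state that updates, no goal. Swap the bigram table for billions of learned weights and the loop is, structurally, the same "take in text, spit out new words" machine.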

I’m sure that I will have more thoughts to share on AI going forward. In the meantime, I recommend that you check out my article, if you’re able. For now, however, I’ll leave you with some concluding thoughts from my essay.

“It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy,” I wrote. “ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem.”

The post My Thoughts on ChatGPT appeared first on Cal Newport.

Published on April 13, 2023 17:51

April 2, 2023

On Taylor Koekkoek’s Defiant Disconnection

An article appearing last month in the Los Angeles Times book section opens with a nondescript picture of a young man in a Hawaiian shirt standing in front of a brick wall. The caption is arresting: “Taylor Koekkoek is one of the best short-story writers of his (young) generation. So why haven’t you heard of him?”

On March 21st, Koekkoek (pronounced, cook-cook) published his debut short story collection, Thrillville, USA. Those who have read it seem to love it. The Paris Review called it a “raw and remarkable debut story collection.” The author of the LA Times piece braved a blizzard in a rental car just for the chance to interview Koekkoek at his Oregon house. And yet, the book has so far escaped wide notice: At the time of this writing, its Amazon rank is around 175,000.

The LA Times provides some insight into this state of affairs:

“A Google search reveals very little about the writer: a few published stories, no social media trail, author bios at a handful of universities that feature the same photo of an amiable-looking young white man in a Hawaiian shirt. If one were to make up an identity for a fictitious writer, the results would resemble something like the sum total of Koekkoek’s online experience.” [emphasis mine]

It’s possible that Koekkoek will go on to make the standard moves for someone his age: engaging in social media, making waves online, brashly carving out an audience. (Indeed, since his book came out, he seems to have started an Instagram account that currently features three posts.) But there’s a part of me that hopes he resists this well-worn path; that he continues to let his soulful words speak for themselves, and that, ultimately, the sheer quality of what he’s doing wins him grand recognition.

This would be a nice counterpoint to our current moment of instinctive self-promotion. A reminder for the rest of us, nervous about slipping into digital oblivion, that what ultimately matters is the fundamental value of what we produce. Everything else is distraction.

In other news…

My apologies for my recent radio silence on this newsletter. In a coincidence of scheduling, of the type that happens now and again, I had academic, magazine, and book-related deadlines all fall into the same three-week period, so something had to give. I should be back to a more normal pace of posting now.

I suppose I should mention that, a few weeks ago, I was profiled by the Financial Times Weekend Magazine. Believe it or not, I was the cover story (!?). You can read it here.

The post On Taylor Koekkoek’s Defiant Disconnection appeared first on Cal Newport.

Published on April 02, 2023 17:24
