Cal Newport's Blog
January 7, 2019
Are Smartphones Necessary Anymore?

When I was researching Digital Minimalism, I came across an interesting article written by Vlad Savov for The Verge. It was titled: “It’s time to bring back the dumb phone.”
I’ve both read and written numerous articles about the negative aspects of the modern smartphone, and have interviewed many people who have returned to a simpler alternative with few regrets.
But what caught my attention about Savov’s piece was the following new (to me) argument he made in favor of stepping back from these devices:
“This is not as drastic a regression as you might think — or as it might have been a few years ago. In the age before paper-thin tablets and laptops, your smartphone truly was the only viable connected device you could carry around everywhere.
But nowadays? I have paper pads thicker and heavier than the Apple MacBook…[y]ou can tuck a tablet discreetly into a large jacket pocket, and it can connect to LTE networks.”
Like Proust’s madeleine, this comment sparked in me memories of the early smartphone era: a time when laptops were large, bulky affairs and accessible WiFi connections scarce. In this context, a “smart” phone that might allow you to send an email or perform rudimentary document edits could significantly improve your productivity when away from the office.
But as Savov notes, there are now many other affordable, portable, connected devices that offer much better productivity experiences than even the largest phone.
So why do smartphones persist in a world where their original rationale has dissipated?
My current theory: Steve Jobs.
When Jobs returned to Apple Computer in an interim position in 1996, the company was still competing in the business productivity market. Indeed, one of Jobs’s first actions was to accept a $150 million investment from Microsoft and form a partnership to maintain a Mac version of Microsoft Office.
But starting with the 1998 release of the iMac, Jobs began executing his vision to transform the company into a consumer brand. By 2000, he had effectively eliminated the Mac clone market and shut down the productivity-focused Newton and OpenDoc projects.
In 2001, the iPod was introduced. In 2006, the iTunes store sold its billionth song.
It’s in this context that Apple began developing the iPhone: a smartphone conceived from the start as a consumer lifestyle product. People lined up outside Apple stores for this device’s initial release not because they cared about getting more done on the go, but because they wanted to be a part of the shiny new chrome-case digital culture it represented.
We’re used to this idea today, but it really was radical back in 2007.
This history is important to revisit because it reminds us of the shaky foundation on which our current culture’s compulsive smartphone habits are built.
The original smartphones solved a real problem: how do I check in on work when away from my office computer? This problem is now better solved by more recent innovations.
The communication devices that dominate our time and attention today, by contrast, are mainly used for novel behaviors, like compulsive social media checking, that were developed specifically to exploit the trend of non-business users craving fancier phones.
Which brings us back to Vlad Savov’s article. He asks if it’s time to bring back the dumb phone. If we return to thinking of these gadgets in a more purely instrumental sense — that is, asking what important problems they solve — then, perhaps to our surprise, we might find ourselves wondering why the appropriate answer is not just a simple “yes.”
(Photo by Leon Lee.)
December 29, 2018
Join Analog Social Media

A phenomenon I noticed when researching Digital Minimalism is that many people are confused by the creeping unease they feel about their digital lives. This confusion is caused in part by problems of scope.
When you take an activity like social media, for example, and zoom in close, you isolate behaviors like commenting on a friend’s picture, or encountering an interesting link, that seem mildly positive. What harm could there possibly be in clicking a heart icon?
When you zoom out, however, the cumulative effect of all this swiping and tapping seems to add up to something distinctly negative. Few are happy, for example, after allowing yet another movie night to devolve into side-by-side iPad idling.
The dynamic at play here is that digital activities that are mildly positive in isolation combine to crowd out other real world activities that are potentially much more satisfying. This is what allows you to love Twitter in the moment when you discover a hilarious tweet, but at the end of the day fear that the app is degrading your soul.
Understanding this dynamic is critical because it tells you that you cannot improve your life by focusing exclusively on digital tools. Triaging your apps, or cutting back phone time, will not by itself make you happier. You must also aggressively fill in the space this pruning creates with the type of massively satisfying, real world activities that these tools have been increasingly pushing out of your life.
It is with this in mind, and in the spirit of the New Year, that I suggest you make a simple resolution: join analog social media.
As I’ve discussed before, analog social media describes organizations, activities and traditions that require you to interact with interesting people and encounter interesting things in the real world.
Here are some examples:
- Join a local political group that meets regularly to organize on issues relevant to your local community, or serve as a volunteer on the election campaign of a local politician you know and like.
- Join a social fitness group, like a running club, or local CrossFit box.
- Become a museum or theater member and attend openings.
- Go to at least one author talk per month at a local bookstore.
- Create a book club, or poker group, or gaming club.
- Join a committee at your church/temple/mosque.
- Establish a weekly brunch or happy hour with your close friends.
These types of activities tend to provide significantly more value in your life than their digital counterparts. Indeed, tools like online social media are probably best understood as weak online simulacrums of the analog encounters that we know deep down we need to thrive as humans.
Equally important, as I learned during last year’s big digital declutter experiment (summarized here; detailed here), the more analog social media you introduce into your life, the more bulwarks you establish against the creeping demands of the digital.
With nothing else in place to fill your time, your phone will become increasingly irresistible, regardless of your intentions to spend more time disconnected. When you instead introduce meaningful analog activity into your regular routine, the appeal of the screen suddenly diminishes.
To summarize: if you’re vaguely unhappy with your digital life, respond by introducing much more positive real world activity. If you embrace analog social media, you’ll soon be wondering how you ever dedicated so much time to its inferior digital equivalent.
December 19, 2018
From the Hyperlink to the Stream: Hossein Derakshan’s Critique of the Internet in the Age of Social Media

The Six Year Transformation
A friend recently pointed me toward an essay published on Medium in 2015. It’s written by Hossein Derakshan, a Canadian-Iranian blogger who helped instigate the Persian-language blogging revolution during the first decade of the 21st century, and whose online truth-telling eventually led to his imprisonment in Tehran’s notorious Evin Prison from 2008 to 2014.
In his essay, Derakshan explores the radical shift in internet culture that occurred between when he entered prison in 2008 and his release six years later. As Derakshan explains, in 2008, the source of the internet’s potency was the hyperlink:
“The hyperlink was my currency six years ago…[it] provided a diversity and decentralisation that the real world lacked. The hyperlink represented the open, interconnected spirit of the world wide web…a way to abandon centralization — all the links, lines and hierarchies — and replace them with something more distributed, a system of nodes and networks.”
If the hyperlink was “currency,” as Derakshan elaborates, then blogs were the market in which this currency was exchanged. You might start a web browsing session at a site you knew well, but a few dozen clicks later might find yourself at a novel corner of the blogosphere, digesting insights from a bright mind you would have never otherwise known existed.
When Derakshan emerged from prison in 2014, however, the internet had changed. Social media had dethroned the blog, and in doing so, replaced the hyperlink’s central position within online culture with something altogether new, “The Stream.”
As he details:
“The Stream now dominates the way people receive information on the web. Fewer users are directly checking dedicated webpages, instead getting fed by a never-ending flow of information that’s picked for them by complex — and secretive — algorithms.
The Stream means you don’t need to open so many websites any more. You don’t need numerous tabs. You don’t even need a web browser. You open Twitter or Facebook on your smartphone and dive deep in. The mountain has come to you. Algorithms have picked everything for you.”
What we lost in this shift from the hyperlink to The Stream was the ability to encounter diverse ideas, radical insight, and transformative new perspectives. What we got instead was more of what we already know, delivered like a pre-masticated paste, easy to digest and sure to please:
“[N]ot only do the algorithms behind the Stream equate newness and popularity with importance, they also tend to show us more of what we’ve already liked. These services carefully scan our behaviour and delicately tailor our news feeds with posts, pictures and videos that they think we would most likely want to see.”
Derakshan’s analysis provides a sharp take on some of the issues I’ve been discussing in recent posts. This shift from the wild and exciting decentralized web of the 1990s and 2000s to the creepy, Huxley-esque walled gardens of today’s social media monopolies has many different consequences, touching everything from privacy to distraction to manipulation.
But as Derakshan emphasizes, perhaps one of the biggest impacts of this transformation is that it’s denuding the internet of many of the attributes that made it so disruptive and exciting in the first place.
December 15, 2018
Is YouTube Fundamental or Trivial?

The YouTube Conundrum
As a public critic of social media, I’m often asked if my concerns extend to YouTube. This is a tricky question.
As I’ve written, platforms such as Facebook and Instagram didn’t offer something fundamentally different than the world wide web that preceded them. Their main contribution was to make this style of online life more accessible and convenient.
My first independently owned and operated web site from the 1990s, for example, required me to learn HTML and upload files to a server at a local ISP using FTP. Ten years later, expressing yourself online became as easy as using your student email address to open an account at thefacebook.com, and then answering some questions about your relationship status and favorite movies.
YouTube seems different.
Before it came along, there were not many options for individuals to publish original video content online. Now this can be done for free with the click of a button, which is an important shift. Many content creators I know see the democratization of video as a force that’s shaping up to be as disruptive to traditional media as the preceding arrival of web sites.
And yet, at the same time, many of the people I spoke with while researching Digital Minimalism admitted that idle YouTube browsing is devouring more and more of their discretionary time, and they’re not happy about it.
So what’s the right way to think about YouTube: is it fundamental to the internet revolution, or just another source of social media distraction?
The best answer I can come up with for now is both.
On the positive side, video is powerful. Enabling more people to create and publish video will therefore unleash powerful creative innovation. (It will also, of course, enable the creation of more insipid and brain-dulling content, but this is an unavoidable feature of any publishing revolution, from Gutenberg onward.)
On the negative side, YouTube’s attention economy revenue model, supercharged with statistical recommendation algorithms, creates a browsing experience that can suck you into a powerful vortex of distraction and creeping extremism that cannot possibly be healthy.
A Better Way Forward
Perhaps the best way to emphasize the positives of online video while diminishing its negatives is to deploy a hybrid indie web approach.
Imagine an online world in which people hosted their innovative video on large, big-infrastructure platforms like YouTube or Vimeo, but then embedded the players on their own independent web sites. This would allow users to find interesting new video content by leveraging the same style of decentralized trust hierarchies that structure the blogosphere, instead of relying on artificial statistical algorithms tuned to optimize attention extraction.
Because YouTube came along at exactly the moment when broadband penetration made online video practical, we never had a period of indie experimentation before the market consolidated into platform monopolies. I think it’s worth exploring what we missed.
December 7, 2018
On Blogs in the Social Media Age

Twitter Defector
Earlier this week, Glenn Reynolds, known online as Instapundit, published an op-ed in USA Today about why he recently quit Twitter. He didn’t hold back, writing:
“[I]f you set out to design a platform that would poison America’s discourse and its politics, you’d be hard pressed to come up with something more destructive than Twitter.”
What really caught my attention, however, was when Reynolds began discussing the advantages of the blogosphere as compared to walled garden social media platforms.
He notes that blogs represent a loosely coupled system, where the friction of posting and linking slows down the discourse enough to preserve context and prevent the runaway reactions that are possible in tightly coupled systems like Twitter, where a tweet can be retweeted, then retweeted again and again, forming an exponential explosion of pure reactive id.
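To see why that coupling distinction matters mathematically, consider a toy branching-process sketch (the follower counts and reshare rates below are purely illustrative assumptions, not real Twitter data): if each wave of sharers reaches a fixed audience and even a modest fraction of that audience reshares, expected reach multiplies every generation, while the friction of reading and deliberately linking keeps the blog-style multiplier close to one.

```python
def expected_reach(followers_per_share, reshare_rate, generations):
    """Expected cumulative audience in a simple branching model: each sharer
    reaches `followers_per_share` people, and a fraction `reshare_rate` of
    that audience passes the post along in the next generation."""
    reach, sharers = 0, 1  # generation zero: the original poster
    for _ in range(generations):
        audience = sharers * followers_per_share
        reach += audience
        sharers = audience * reshare_rate  # expected sharers in the next wave
    return reach

# Tightly coupled (one-tap retweets): 200 followers, 5% reshare -> 10x per wave.
print(f"{expected_reach(200, 0.05, 6):,.0f}")   # roughly 22 million impressions
# Loosely coupled (blog-style friction): 200 readers, 0.5% relink -> 1x per wave.
print(f"{expected_reach(200, 0.005, 6):,.0f}")  # roughly 1,200 impressions
```

The point isn’t the specific numbers, which I made up, but the shape of the growth: once the per-wave multiplier creeps above one, reach explodes, which is exactly the runaway reaction Reynolds worries about.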
As a longtime blogger myself, I found that Reynolds’s op-ed got me thinking about other differences between social media and the blogosphere…
Attention Markets
One of these differences that has consistently caught my attention is the way in which social media reconstructed the market for online attention.
Blogs implement a capitalist attention market. If you want attention for your blog you have to earn it through a combination of quality, in the sense that you’re producing something valuable for your readers, and trust, in the sense that you’ve produced enough good stuff over time to establish a good reputation with the fellow bloggers whose links will help grow your audience.
Succeeding in this market, like succeeding with a business venture, can be ruthlessly difficult. There’s lots of competition for the attention you’re trying to attract, and even skilled writers often find that something about their voice, or the timing of their topic, fails to catch on.
Social media, by contrast, implements a collectivist attention market, where the benefits of receiving attention are redistributed more uniformly to all users.
A key dynamic driving the popularity of platforms like Facebook and Instagram, for example, is the following notion: if you like me, I’ll like you. As I noted in Deep Work, if you took the contents of the standard Facebook or Instagram feed and published it on a blog, it wouldn’t attract any readers, or comments, or links. But put this content on a Facebook wall and there’s an implicit social contract in place to motivate the people you know to click a like button, or leave a nice comment in the anticipation that you’ll do the same.
Twitter is a little more complicated. A key dynamic on this platform is deconstructing “content” into small chunks that exist largely independently of the type of slowly accreting, decentralized trust hierarchies that throttle information flow in the blogosphere.
These tweets are easy to write and publish, and they can be acknowledged just as easily with a quick tap of a retweet or heart icon. By drastically lowering the bar for what “content creation” requires, and allowing content to spread in a homogenous, fluid interaction graph, many more people can experience the positive feeling of having someone pay attention to something they said.
Quality vs. Satisfaction
It’s not self-evident that one type of online media is better than the other. One advantage of a collectivist market, for example, is that it feels nice to receive attention, so spreading this experience to more people seems like a worthwhile endeavor.
Collectivist markets also potentially bring more voices into the online conversation, as the obstacles to finding an audience in the blogosphere are severe enough that some people who otherwise have something interesting to say might not bother trying to say it.
Capitalist attention markets, on the other hand, offer one decidedly important advantage: better content. To state the obvious, there are plenty of bad blogs. But in the blogosphere it’s easy to filter these from the more serious contributors that, through the traits of quality and trust cited above, distinguish themselves as worthwhile.
As any serious blog consumer can attest, a carefully curated blog feed, covering niches that matter to your life, can provide substantially more value than the collectivist ping-ponging of likes and memes that make up so much of social media interaction.
In other words, Glenn Reynolds was on to something when he stepped away from Twitter and began to reminisce about what once made blogging seem so exciting.
#####
As longtime readers know, I’m a big fan of Mouse Books, which prints classic books in a smartphone-sized format — allowing you to pull a deeper source of distraction out of your pocket during moments of boredom. (I actually feature Mouse Books in Digital Minimalism.)
Anyway, they just launched a Kickstarter campaign to fund their “second season” (series of books). Definitely check it out…
December 4, 2018
My New Book: Digital Minimalism
A Manual for a Focused Life
I’m excited to officially announce my new book, Digital Minimalism: Choosing a Focused Life in a Noisy World, which will be published on February 5th.
My last book, Deep Work, tackled the impact of new technologies on the world of work. After it came out, many readers began asking me about the equally important impact of these tools on their personal lives. This new book is my response.
In it, I argue that we have been too casual in adopting alluring new technologies, and as a result our quality of life is diminishing. To solve this problem I propose a philosophy of technology use called digital minimalism, in which you radically reduce the time you spend staring at screens, focusing on a small number of digital activities that strongly support things you deeply value, and then happily ignoring the rest.
In addition to arguing why minimalism is a necessary answer to our increasing digital discontentment, I take the reader inside the vibrant subculture of digital minimalists who have already found great satisfaction and authentic meaning in taking back control of their technological lives — highlighting the key principles they use to succeed in adopting this philosophy.
(Among other things, you’ll learn the detailed story of the digital declutter experiment I ran last January as part of my book research, which ended up growing to over 1,600 participants and receiving coverage in the New York Times.)
I will, of course, be writing quite a bit more about these ideas and this book in the weeks ahead. My purpose for now is mainly to bring you up to speed on what I’m up to.
As a final logistical note: if you have already preordered this book, or are planning to preorder it, hold on to your digital receipt, as I’ll soon be offering a large preorder package — including advance content from the book, a detailed look inside my personal productivity systems, and access to private Q&As — as my way of saying thank you. (Preorders are incredibly important for a book launch, so I’m incredibly grateful for anyone who takes the time to support me in this way.)
Stay tuned!
November 27, 2018
Is Facebook the AOL of the 2010s? A Skeptical Examination of Social Media Network Effects.

The Law
In economics, a network effect describes the extra value created when a new user buys a product or joins a service: each additional user makes the product or service more valuable to everyone already using it. In the context of computer networks, these benefits are commonly believed to scale quickly with the number of users.
In technology circles, perhaps the best known instantiation of network effects is Metcalfe’s Law, named for Ethernet co-inventor Bob Metcalfe, who was likely inspired by similar theories developed at Bell Telephone in the early 20th century.
This law concerned the value of the Ethernet network cards sold by Metcalfe’s company 3Com. It states that given a network with N users, buying one additional Ethernet card provides you with N new possible network connections (e.g., from the new card to each of the N existing users).
It then follows, roughly speaking, that the value of N network cards grows as N^2 instead of N. Once a network achieves a certain critical size, therefore, the value it returns will quickly begin to far exceed the cost of joining it, creating a powerful positive feedback loop.
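To make this arithmetic concrete, here’s a minimal sketch (the per-connection value and per-user cost below are made-up illustrative numbers, not figures from Metcalfe or 3Com): the value side counts the N(N-1)/2 possible pairwise connections, which grows as N^2, while the cost side grows only linearly with N, so past some critical size value races far ahead of cost.

```python
def metcalfe_value(n_users, value_per_connection=0.01):
    """Network value under Metcalfe's Law: proportional to the number of
    possible pairwise connections, n*(n-1)/2, which grows as n^2."""
    return value_per_connection * n_users * (n_users - 1) / 2

def network_cost(n_users, cost_per_user=100):
    """Cost of building out the network grows only linearly with users."""
    return cost_per_user * n_users

# Past a critical size, quadratic value overwhelms linear cost.
for n in [100, 1_000, 10_000, 100_000]:
    v, c = metcalfe_value(n), network_cost(n)
    print(f"N={n:>7,}  value={v:>13,.0f}  cost={c:>11,.0f}  value > cost? {v > c}")
```

With these toy numbers the crossover happens a little past 20,000 users; the point is only the shape of the two curves, which is the intuition Silicon Valley then applies to Facebook’s billion-plus accounts.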
Metcalfe’s Law is incredibly influential in Silicon Valley, where it’s often applied to justify the monopoly status of the social media conglomerates. If a network like Facebook has over 1,000,000,000 users, the law tells us, then its value to users grows as (1,000,000,000)^2 — a quantity so vast that any attempt to compete with this giant must be futile.
It’s widely believed among many Silicon Valley types that this calculus helps explain the lack of venture capital investment in new social media start-ups in recent years. The power of network effects in this sector, in other words, is treated as unimpeachable.
But should it be?
AOL Redux
I’ve long harbored suspicions about how network effects are invoked to justify the dominance of massive social media conglomerates.
If you examine the canonical examples of these effects, such as 3Com’s Ethernet cards or Bell’s telephones, you’ll notice that joining the network in question is the only way to connect to other people in that general manner.
In 1908, if you didn’t own a Bell telephone, you couldn’t talk in real time to people over distance. In 1988, if your computer didn’t have an Ethernet card, it couldn’t connect to other devices in your office. In these scenarios, buying the relevant product shifted you from completely disconnected to massively connected.
Social media, however, is different.
In 2018, joining a network like Facebook enables you to connect with or monitor the status of people you know using digital networks. Unlike telephones or Ethernet cards, however, you don’t need a private network like Facebook for these benefits. Both the Internet and SMS, among other technologies, already provide many different tools, protocols, and services for connecting and disseminating information digitally.
Case in point: I’ve never had a social media account, and yet I constantly enjoy connecting to people, and posting and monitoring information using digital networks.
So what exactly, then, do massive social media platforms like Facebook provide? A more honest answer is that they offer a more convenient experience than the wilder, less centralized social internet, but not something fundamentally unique.
There’s value in convenience, but not Metcalfe’s Law-level, market-dominating value. In some sense, Facebook is to the social internet today what AOL was to the world wide web in the 1990s — a walled garden that provides a gentle on-ramp to the capabilities of a more exuberant decentralized network roiling beyond its boundaries.
These are thoughts I’m in the early stages of developing, so I’m interested in any pointers you can share about people smarter than me exploring similar ideas. But it increasingly seems to me that social media giants like Facebook offer at best network enhancements to their users, not the mythical network effects that helped make the monopolies of past eras so inescapable.
November 18, 2018
On Bryce Harper and the Impact of Social Media on Athletes
#teamnoscroll
As a big-time Washington Nationals fan, I’ve been watching Bryce Harper play here in D.C. since he was first brought up to the majors at the age of 19. As you might therefore imagine, I’ve been closely following his free agency this fall.
It was due to this hardball diligence that I recently noticed a small sports-page news item that intersects with the types of topics we like to discuss here. A couple of weeks ago, Harper declared he was going on a social media fast. He even ironically (oxymoronically?) introduced a hashtag for his effort: #teamnoscroll.
I applaud Harper for his public step back from social media, especially during a period of intense scrutiny where checking the latest buzz would only increase his anxiety.
But reading about #teamnoscroll prompted an interesting thought: Why aren’t more superstar athletes permanently disengaged from social media?
At the elite level, athletes differentiate themselves by maximizing every physical and cognitive advantage (for more on this, see Ben Bergeron’s Chasing Excellence). Superstars become superstars, in large part, because they pair once-in-a-generation gifts with relentless training.
There’s a reason LeBron James spends $1.5 million a year on his off-season training: the return on investment is worth it.
But then there’s social media. These services create cognitive drag by subjecting you to a compulsive mix of drama and distraction. If you’re famous, this drag is even more pronounced.
For the average user, this reality might prove a nuisance, but for athletes performing at the top levels of their sports, the result could be the difference between a solid career and the Hall of Fame; between a 5-year, $25 million deal and a 10-year, $350 million deal.
And yet, many of the same athletes that measure their food on a scale are somehow fine scrolling on a whim.
In my role as an author who writes about focus, I’ve had the opportunity to discuss deep work with front-office types in professional sports. The general sense I’m getting is that pro teams are becoming increasingly wary about the impact of technology on cognitive fitness.
Some have told me that it’s the sports agents who are exacerbating this problem by encouraging their clients to “build their brand” through social media. I have a hard time believing this is true because it’s self-defeating. I can’t imagine that star agents like Scott Boras, with his binders full of advanced analytics breaking down every contribution of his clients, would be blind to the extra edge provided by an unusually focused mind.
I wouldn’t be surprised, in other words, if we start to see more of a systematic move away from social media by top figures in sports as the full impact of this technology is better understood.
To make this more concrete: few things would make me happier than to see much less of Bryce Harper on Instagram, and many more years of his ferocious swing here at Nats Park.
###
Speaking of social connection and work, my friend Dan Schawbel has a new book out this week called Back to Human. It provides a compelling and evidence-based argument for the importance of old-fashioned, non-technological interaction in leading successful teams. I’ve always liked Schawbel’s work, but this book hits particularly close to the types of issues I write about here — in particular, the unintended consequences of tools like email, and what we can do to mitigate these issues. Check it out…
(Photo by Keith Allison.)
November 7, 2018
On Physician Burnout and the Plight of the Modern Knowledge Worker
On Screens and Surgeons
Atul Gawande has a fascinating article in the most recent issue of the New Yorker about the negative consequences of the electronic medical records revolution. There are many points in this piece that are relevant to the topics we discuss here, but there was one observation in particular that I found alarming.
Gawande introduces the Berkeley psychologist Christina Maslach, who is one of the leading experts on occupational burnout: her Maslach Burnout Inventory has been used for almost four decades to track worker well-being.
One of the striking findings from Maslach’s research is that the burnout rate among physicians has been rapidly rising over the last decade. Interestingly, this rate differs across specialties — sometimes in unexpected ways.
Neurosurgeons, for example, report lower levels of burnout than emergency physicians, even though the surgeons work longer hours and experience poorer work-life balance than ER doctors.
As Gawande reports, this puzzle was partly solved when a research team from the Mayo Clinic looked closer at the causes of physician burnout. Their discovery: one of the strongest predictors of burnout was how much time the doctor spent staring at a computer screen.
Surgeons spend most of their clinical time performing surgeries. Emergency physicians, by contrast, spend an increasing amount of this time wrangling information into electronic medical systems. Gawande cites a 2016 study that finds the average physician now spends two hours at a computer screen for every hour they spend working with patients.
Incomplete Solutions
Electronic medical records present a complicated case. As Gawande emphasizes, this technology undoubtedly represents the future of medical care — it solves many problems, and going back to ad hoc, handwritten systems is no more viable than the acolytes of Ned Ludd demanding the return of hand-driven looms.
The solutions Gawande outlines include two major themes. The first is making these systems smaller, more agile, and more responsive to the way specific physicians actually practice, instead of trying to introduce massive, monolithic software that generically applies to many different specialties.
The second theme is introducing more administrative help to mediate between the doctor’s clinical work and interactions with the electronic systems (cf. my recent article on intellectual specialization).
What caught my attention as I read this article, however, is that many knowledge work fields have experienced a similar shift where individuals now spend increasing amounts of their day interacting with screens instead of performing the high-value activities for which they were trained (just ask any professor, computer programmer or lawyer).
For us, it’s email and instant messenger instead of electronic medical systems, but there’s no reason to believe that the effect wouldn’t be the same: more ancillary screen time produces less well-being and, eventually, more burnout.
In the rarified and focused world of medical care, there are solutions to this screen creep problem. But where are the solutions for the rest of us? This is arguably one of the biggest problems facing our increasingly knowledge-based economy, and yet few currently take it seriously.
November 1, 2018
You Are Not a Talent Agent (So Why Do You Work Like One?)
The CAA Way
I’m currently reading Michael Ovitz’s engaging new memoir. Even if you don’t know Ovitz, you definitely know his clients’ work. He’s the super-agent who co-founded the domineering CAA talent agency, and during the 1980s and ’90s became one of the most powerful figures in Hollywood.
In his memoir, Ovitz emphasizes the importance of communication in the talent business. For a talent agent, he notes, your time is one of the primary resources you have to offer, so to succeed in this field, you have to constantly talk to clients, potential clients, ex-clients you might want back, and all the assorted figures in the entertainment world orbit who might have information helpful to your clients.
One of the cardinal rules during the early years of CAA was that you always returned every call the same day. Ovitz personally exemplified this rule. He would start making calls as soon as he woke up and continue making calls until right before he went to bed. He would make hundreds of calls every day.
These touches were so important that he had a small sign that read “communicate” placed on every phone in the I. M. Pei-designed CAA headquarters.
Here’s what struck me as I read about this: in the late 1970s, when Ovitz was helping CAA gain a toehold in the entertainment industry, the need to be constantly communicating was an artificial and unnatural behavior — something that had to be purposefully instilled and enforced in his agents.
Today, by contrast, almost every knowledge worker acts like a CAA agent. We may have replaced telephones with email and instant messenger, but the underlying behavior is the same: a constant whirring of contact from when we first wake up to right before we go to bed.
The problem, of course, is that most knowledge workers are not CAA agents. Indeed, for most knowledge workers, constant communication probably makes them worse at doing the thing they supposedly do best.
Viewed with some objective distance, this is a puzzling development.
I can’t help but wonder when some new Michael Ovitz-style figure will arise, in a sector like computer programming or academia where unbroken concentration unambiguously produces value, and once again help drive his or her organization to immense success by putting small signs on each employee’s desk — except this time, they’ll read: “think, don’t talk.”