Cal Newport's Blog
September 6, 2016
A Productivity Lesson from a Classic Arcade Game
The Distracted Gamer
A reader recently shared with me an interesting observation from his own life.
To provide some context, this reader is a fan of the classic arcade game Snake. This game is hard: as your snake grows, it requires an increasing amount of concentration to avoid twisting back on yourself and ending the round.
What this reader noticed was that whenever he paused the game for a quick interruption (e.g., answering a text or talking to someone who walked into the room), he became significantly more likely to fail soon after returning to play.
These arcade struggles might not sound that surprising, but they turn out to be a great example of a psychological effect that every knowledge worker should know about: attention residue.
The Most Important Theory You’re Ignoring
The research literature on attention residue, which was pioneered by business professor Sophie Leroy, reveals that there’s a cost to switching your attention — even if the switch is brief.
When you turn your attention from one target to another, the original target leaves a “residue” that reduces cognitive performance for a non-trivial amount of time to follow.
This was likely the effect that was tanking my reader’s arcade performance: when he switched his attention to the new target presented by an interruption, and then back to his game, the resulting attention residue reduced his cognitive performance and therefore his game play suffered.
As I argued in Deep Work, this effect can have a profoundly negative impact on knowledge worker productivity.
In more detail, most knowledge workers who claim to single-task are actually primarily working on one thing at a time, but punctuating this work with a frequent series of “just checks” (quick glances at text messages, email inboxes, Slack channels, social media feeds, etc., just in case something important has arrived).
This type of pseudo-focus might seem better than old school multitasking (in which you try to work on multiple primary tasks simultaneously), but attention residue theory teaches us that it might be just as bad.
Each one of those “just checks” shifts your attention. Even if this shift is brief (think: twenty seconds in an inbox), it’s enough to leave behind a residue that reduces your cognitive capacity for a non-trivial amount of time to follow.
Just as our reader above lost his ability to play Snake at a high level, your ability to write/code/strategize at a high level is significantly diminished every time you let your attention drift.
If, like most people, you rarely go more than 10 to 15 minutes without a “just check,” you have effectively put yourself in a persistent state of self-imposed cognitive handicap. The flip side, of course, is to imagine the relative cognitive enhancement that would follow from minimizing this effect.
To put this another way: if you commit to long blocks without any interruption (not even the quickest of glances), you’ll be shocked by how much sharper and more productive you feel.


August 30, 2016
Technology Alone Won’t Make You Better at What You Do
Some Insights from a Geek’s Heresy
I recently began reading Kentaro Toyama’s 2015 book, Geek Heresy: Rescuing Social Change from the Cult of Technology.
To provide some background, in 2005 Toyama cofounded Microsoft Research India, which focused on applying technology to social issues. He then left for academia where he began to study such efforts from an objective distance. Geek Heresy describes what he found.
I’m only through around 100 pages, but so far Toyama’s conclusions have been bracing.
He leverages a blend of research and firsthand experience to dismiss the cult-like belief (common in Silicon Valley) that hard social problems can be solved with the application of the “right” technology (an illustrative target of Toyama’s critique is Nicholas Negroponte’s belief in the power of cheap laptops to cure all that ails the developing world).
For the purposes of this post, however, I want to highlight a powerful observation detailed in Chapter 2. It’s here that Toyama introduces what he calls the Law of Amplification, which he defines as follows:
[T]echnology’s primary effect is to amplify human forces. Like a lever, technology amplifies people’s capacities in the direction of their intentions.
As Toyama elaborates, you cannot expect a technology to transcend existing social forces or transform existing intentions; it tends instead to amplify whatever tendencies are already in place. Consider social media, which, instead of transforming us into a newly informed global community, supercharged our existing biases toward gossip, self-aggrandizement, and easy distraction.
It seems to me that among many applications, this “law” is quite relevant for understanding some of the issues concerning technology in the workplace that I’ve been grappling with over the past few years.
Consider, in particular, my favored target of the moment: workplace email.
To understand email’s impact in the context of this law we need to step back and understand the forces at play right before this tool’s arrival.
Fortunately, we can get an insightful look at this period in another book I recently read, Leslie Perlow’s 1997 treatise, Finding Time: How Corporations, Individuals, and Families Can Benefit from New Work Practices, which describes nine months Perlow spent observing software developers in a Fortune 500 company.
Finding Time paints a picture of a pre-email workplace in which many of the problems that define our current age are incipient. Managers interrupt workers constantly to check in and unexpectedly shift priorities. There’s little structure to how the day unfolds, and in its place are an atmosphere of crisis, fragmented time, long hours, and a sense of very little “real” work getting done.
(Interestingly, later in the study, Perlow ends up convincing the developers to deploy prescheduled deep work blocks into their day — to much success.)
The arrival of email into the workplace was generally accepted by our culture as a good thing. Making communication faster and people more accessible, it was argued, should only increase options and opportunity (which, as Kevin Kelly explains, is What Technology Wants).
According to Toyama’s Law of Amplification, however, this new technology should instead be expected to amplify the types of existing intentions and trends Perlow documented.
Which is exactly what it did.
As I mentioned in a recent post, digital communication certainly did simplify some existing processes in a positive manner. But its main impact has been to take the culture of crisis and interruption Perlow found bubbling below the surface in the late 1980s and early 1990s, and erupt it into the new, stress-inducing whirlwind that defines 21st-century knowledge work.
To step back: Toyama’s book primarily addresses the folly of assuming technology in isolation can solve social problems. But I’m arguing that this warning also applies to the world of work.
As the history of email teaches us, a new technology, no matter how slick, cannot by itself transform a workplace for the better. This remains a deeply human endeavor. We must figure out what it means to work well in the 21st century and then slot in technologies only where and how they fit this plan. These tools cannot do this hard work for us.
August 25, 2016
A Brief Note on Tenure
I don’t like talking about myself (outside discussions of hyper-specific productivity techniques), so I’ll keep this announcement brief…
At some point early on in my graduate student career I set two somewhat arbitrary goals for my academic trajectory: to become a professor by the age of 30 and tenured by the age of 35.
I ended up starting at Georgetown at the age of 29, and earlier this summer I earned tenure at the age of 33 (though I’ve since turned 34).
There are many factors that help fuel an academic career, and many fell outside my direct control.
But reflecting on these past five years, it’s easy for me to identify what was by far the highest ROI activity in my professional life: deep work.
I know I’ve said similar things a million times before. And it’s not sexy. And it’s not a contrarian “hack.”
But in my case, focusing intensely on hard things that people unambiguously value, day after day, week after week, was more or less the whole ball game.
August 21, 2016
Email is Most Useful When Improving a Process that Existed Before Email
Connectivity Contradictions
Recently, I’ve been collecting stories from people who held the same type of job before and after the introduction of email. Something that struck me as I sorted through these recollections is their variety.
Email was a miracle to some.
For example, I talked to a woman who has spent many years in mergers and acquisitions. These deals, it turns out, require large contracts to be received and sent with urgency at unexpected times.
Before email, this meant weekends camped out at the office.
“If I was expecting a new version of a merger agreement, I would have to stand outside the fax room waiting for my 200-page document and then call to ask the other side to re-fax any missing pages,” my source recalled.
“If there was even a possibility that I would be needed, it made no sense to go home…people would sleep at the office.”
With email, these same urgent documents could suddenly reach her anywhere — greatly reducing time wasted squatting by the warmth of a fax modem and increasing time with her family.
“Email has been a plus,” she concludes.
But email was also a curse to many others.
One teacher I spoke with, for example, told me about how the arrival of email made teachers at her school suddenly available to parents in a way they never had been before.
The school eventually instituted a policy that all such emails must be answered within 48 hours.
“Email exploded,” my source recalled. “My planning period was spent reading and answering emails…forget planning. [It became] a huge distraction from the already very difficult job of teaching.”
A Useful Heuristic
How do we make sense of these contradictions?
As I sorted through more stories like the ones above, an interesting pattern emerged.
Email seems to be at its best when it directly replaces a professional behavior or process that existed before email’s rise.
For example, in mergers and acquisitions, the urgent and hard to predict delivery of complicated contracts has long been a necessary and important behavior. Sending these documents by email is much easier than relying on fax machines.
On the other hand, email seems to be at its worst when it helps instigate the sudden arrival of a new behavior or process that didn’t exist before.
In teaching, for example, pre-email parents neither had nor expected ubiquitous access to their children’s teachers. There was no pressing pedagogical or parental need for such access.
Once teachers got email addresses, however, this new behavior emerged essentially ex nihilo and began to cause problems.
On reflection, this heuristic makes sense…
If a behavior or process has been around for a long time in a given profession, it probably serves a useful purpose. Therefore, if a technology like email can make it strictly more efficient/easy, then it’s a clear win.
By contrast, when email helps instigate a new behavior or process, this development tends to occur in a bottom-up fashion. That is, no one identifies in advance the new behavior or process as being something that’s useful — it’s instead driven by in-the-moment convenience and happenstance. (For more on this idea of unguided emergence, see Leslie Perlow’s discussion of “cycles of responsiveness” in Sleeping with Your Smartphone.) This is not a great way to evolve professional practices, so we shouldn’t be surprised that the results are often exhausting and counterproductive to those forced to live with them.
This is not to say that we shouldn’t develop new behaviors and processes in our professional lives. But we should be wary of those that emerge without our explicit consent. Email, in this accounting, should be a source of concern not because it’s intrinsically bad, but because it’s so easy and convenient that it tends to encourage the emergence of these new unguided and often draining behaviors.
August 2, 2016
On Primal Productivity
A Primal Movement
The primal/paleo philosophy argues that we’d all be better off behaving more like cavemen.
In slightly more detail, this school of thought notes that humankind evolved over hundreds of thousands of years to thrive with a paleolithic lifestyle. The neolithic revolution, which started with agriculture and quickly (on evolutionary timescales) spawned today’s modern civilizations, is much too recent for our species to have caught up.
By this argument, we should look to paleolithic behavior to shape our basic activities such as eating, exercising, and socializing. To eat bread, or sit all day, or center our social life on a small electronic screen, is to fight our genetic heritage.
Or something like that.
This philosophy attracts both righteous adherents and smug critics. And they both have a point.
I maintain, however, that this type of thinking is important. Not necessarily because it’s able to credibly identify “optimum” behaviors, but because it poses clear thought experiments that are worthy of discussion.
An Interesting Thought Experiment
It’s with this spirit of exploration in mind that I pose the following prompt: what would the primal/paleo movement have to say about productivity?
I’m no paleoanthropologist, but pulling from a common sense understanding of this era, I would point toward the following three dictums as a reasonable approximation of what it might mean to work like a caveman:
Rule #1: Work on one thing intensely with plenty of rest surrounding this effort.
Rule #2: Develop an expert skill/craft from which your status and value to the tribe is primarily derived.
Rule #3: Work closely with a small team oriented toward the same goal, with outside communication nonexistent or rare.
Of course, this is all somewhat pie in the sky, but what strikes me about this thought experiment is how far modern knowledge work has drifted from how we likely spent hundreds of thousands of years approaching our daily labor.
For those interested in the concept of workflow engineering, questions like the above are important. It’s not that we can figure out exactly how paleolithic man functioned, nor would we want to follow that script precisely in the modern era.
But this thought experiment forces us to confront, and ultimately justify, the mismatch between how we’ve been wired over the eons to function and the recently emerged and somewhat arbitrary work patterns of the digital age.
#####
If you’re interested in reading detailed summaries of the top books on these primal/paleo ideas (such as this and this and this), consider trying out Study Hacks’ sponsor Blinkist — my new secret weapon for quickly figuring out which books are worth my time and attention.
(Photo by Steve Schroeder)
July 20, 2016
No Email, No Problem: A Workflow Engineering Case Study
An Insightful Tale
I recently ate lunch with an executive who manages several teams at a large biomedical organization. He told me an interesting story.
Not long ago, he hired someone new to help tackle an important project. A logistical problem, however, delayed some paperwork processing for the new employee.
The result was that he spent his first week with no company email address.
In isolation, this is just a story of minor HR bungling. But what caught my attention was what happened as a result of this accidental experiment in email freedom: nothing bad.
The Workflow Engineer
The fact that the new employee had no email address had no discernible impact on his productivity. In his first week, he jumped right in and became a valuable contributor — even though he couldn’t be reached by digital means.
The secret to this surprising outcome is the mindset of the executive who made the hire. It turns out that this executive is a supporter of workflow engineering (though he wouldn’t use that term).
He rejects the conventional wisdom that the best way to manage knowledge workers is to give everyone an email address or Slack ID and then just rock and roll — figuring things out as they arise in an unstructured, incessant flow of messages.
He instead asks, “what’s the best way for you to get your job done?”, and is willing to experiment relentlessly to validate his intuitions.
Scrum over Email
This brings us back to the new hire without an email address. The executive managed him using a variation of the scrum project management methodology (for more on scrum, read this book or this book or this book — all three of which I devoured after hearing this story over lunch).
In more detail, the executive had the new hire externalize his obligations onto a physical board split into columns for tasks in waiting, tasks underway, and tasks completed.
Each morning, the executive holds a quick in-person meeting with the hire. They look at the board and discuss what the hire will be working on that day and what he needs to succeed. A plan is agreed upon and the hire then turns his attention to execution.
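The board-plus-standup workflow described above can be sketched in a few lines of code. This is a toy illustration only: the post mentions nothing beyond three columns and a morning review, so the `Board` API and the task names below are invented for the example.

```python
from dataclasses import dataclass, field

# The post describes three columns: tasks in waiting, underway, and completed.
COLUMNS = ("waiting", "underway", "completed")


@dataclass
class Board:
    columns: dict = field(default_factory=lambda: {c: [] for c in COLUMNS})

    def add(self, task: str) -> None:
        """New obligations get externalized onto the board, in the waiting column."""
        self.columns["waiting"].append(task)

    def advance(self, task: str) -> None:
        """Move a task one column to the right (waiting -> underway -> completed)."""
        for i, col in enumerate(COLUMNS[:-1]):
            if task in self.columns[col]:
                self.columns[col].remove(task)
                self.columns[COLUMNS[i + 1]].append(task)
                return
        raise ValueError(f"{task!r} not found or already completed")

    def standup(self) -> list:
        """What the quick morning meeting reviews: everything not yet done."""
        return self.columns["waiting"] + self.columns["underway"]


board = Board()
board.add("draft assay protocol")      # hypothetical task names
board.add("review vendor quotes")
board.advance("draft assay protocol")  # waiting -> underway
print(board.standup())                 # → ['review vendor quotes', 'draft assay protocol']
```

The point of making the workflow explicit like this is that every obligation lives in exactly one visible place, rather than as a pointer buried in an inbox.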
Many knowledge workers implicitly implement similar workflows using email. The tasks on their plate exist only as pointers in obtuse messages lurking in an overflowing inbox, and coordination and planning take place throughout the day in a lazy exchange of dashed-off notes and questions. This approach works well enough, but it’s exhausting, it fragments attention, and, in general, it’s riddled with inefficiency.
The executive and the new hire made those implicit workflows explicit. And once they did, it was clear that in this instance email was not an optimal implementation of what needed to get done.
The Power of Workflow Engineering
My goal in this post is not to promote this particular scrum-style approach to knowledge work. It might be a good fit for some jobs but not for others. What I do want to promote is the workflow engineering mindset that generated it.
The executive in this story wasn’t content to simply accept the sugar-coated convenience of email-based management. He instead made the effort necessary to investigate the best way to complete a given knowledge work objective, and the result was a worker who, by all accounts, is exceptionally productive, and probably much less stressed than his inbox-slaved counterparts.
(Photo by barcoo)
July 14, 2016
From Descartes to Pokemon: Matthew Crawford’s Quest to Reclaim our Attention
The Crawford Prescription
Matthew Crawford is one of my favorite social critics.
(Damon Linker got it right when he quipped in The Week: “Reading [Crawford] is like putting on a pair of perfectly suited prescription glasses after a long period of squinting one’s way through life.”)
Crawford’s 2009 book, Shop Class as Soulcraft, which I draw from in Deep Work, takes on the bewildering, dehumanizing mess that is the knowledge economy.
His 2015 follow-up, The World Beyond Your Head, takes on the natural next topic: the attention economy.
This book is complicated and ambitious. But there’s one thread in particular that I think is worth underscoring. Crawford notes that the real problem with the current distracted state of our culture is not the prevalence of new distracting technologies. These are simply a reaction to a more fundamental reality:
“[W]e are agnostic on the question of what is worth paying attention to — that is, what to value.”
In the absence of strongly-held answers to this question our attention remains adrift and unclaimed — we cannot, therefore, be surprised that app-peddlers and sticky websites swooped in to aggressively feast on this abundant resource.
Ecologies of Attention
Crawford’s solution (which echoes the concluding chapter of the also excellent All Things Shining) is that we need to apprentice ourselves to existing ecologies of attention — his term for the well-defined frameworks, suffusing most craft endeavors, that specify what matters, what doesn’t, and why.
He spends time profiling short-order cooks, hockey players, and a team that custom builds baroque-style pipe organs. As you master these well-structured crafts, he argues, you don’t lose autonomy, you instead gain it.
A cook in his kitchen or a hockey player on the ice makes clear distinctions between what’s worth paying attention to and what is not. They perceive their surroundings with a nuance and richness lost to the uninitiated. They exert agency, in other words, on the formation of the world beyond their head.
When you exist outside of such ecologies, by contrast, you lose this agency and your world is instead shaped for you by other, often mechanistic means — be it Facebook’s newsfeed or the algorithm Niantic Labs deploys to locate Pokemon on your block.
This idea is subversive. Since Descartes, we’ve lionized the individual’s ability to create his or her own value judgments. But when it comes to the question of what we pay attention to, Crawford argues, we need help.
My intuition is that many in my generation are increasingly craving a return to this attentional autonomy. There’s something dehumanizing about the endless attention-engineered feeds scrolling by on smartphones. We’re ready for the hard, structured work required to take back control of our mental lives.
#####
If you’re interested in Crawford, a good starting point is Brett McKay’s recent podcast interview with the author — it’s quite good.
If you’re interested in these types of books, but would like smart summaries of the main points before deciding whether or not to buy, I recommend trying Blinkist. They recently sent me a free membership and I’ve been loving it.


July 7, 2016
If You Don’t Choose Your Work Habits, Your Habits Will Choose You
The Nature of Our Business
When Harvard Business School professor Leslie Perlow began her multi-year study of consultants at the high-pressure Boston Consulting Group (BCG), she was quick to identify a defining behavior of her subjects: they were always connected. The pressure for them to check their email at all waking hours was intense — a point captured in the title of Perlow’s 2012 book on her research, Sleeping with Your Smartphone.
As Perlow summarized in an HBR article on the topic, the BCG consultants, like many knowledge workers, see this constant connectivity simply as “the nature of our business.”
To me, however, the important question lurking behind this topic is how did this behavior become so natural?
And it’s here that Perlow’s research on BCG uncovers an interesting answer…
The Cycle of Responsiveness
There was never a BCG board meeting or company wide strategy session where it was decided that significantly increasing consultant connectivity would boost profits or improve morale.
Instead, as Perlow documents, this behavior emerged in an undirected and incremental manner within the organization…
It would start with a legitimate need: perhaps a consultant (let’s call her Alice in this example) begins checking her email later at night to match the distant time zone of a client.
Once online, Alice might answer a few stray emails from her coworkers to get ahead on her inbox processing.
These coworkers, seeing that Alice seems to be online at night, develop an expectation that a message sent late in the day will be answered before the next morning…so they start to send more messages of increased urgency after hours.
Alice responds to this pressure by spending more time online during the evening to answer these messages more quickly.
This resets the expectations of Alice’s peers to the point where they now expect a response within an hour at any point in the evening.
And so on…
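The ratchet in the steps above can be captured in a toy simulation. The numbers are purely illustrative (the 0.7 tightening factor and the 12-hour starting expectation are invented, not drawn from Perlow's data); the point is only the feedback structure: observed response times reset expectations, which compress response times further.

```python
def responsiveness_cycle(expected_hours: float, rounds: int) -> list:
    """Track how peers' expected response time shrinks over repeated cycles.

    Each round, Alice beats the current expectation to relieve the pressure,
    and her peers reset their expectation to what they just observed.
    """
    history = [expected_hours]
    for _ in range(rounds):
        actual = expected_hours * 0.7   # Alice replies a bit faster than expected
        expected_hours = actual         # peers now expect that speed as the norm
        history.append(expected_hours)
    return history


# Start with "a reply by the next morning" (roughly 12 hours); after seven
# cycles the norm has compressed to a reply within the hour.
print(responsiveness_cycle(12.0, 7))
```

No single step in this loop is unreasonable, which is exactly why the end state arrives without anyone ever deciding on it.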
Repeat similar “cycles of responsiveness” (to use Perlow’s term) all across an organization and it doesn’t take long before a company evolves a culture of constant connectivity — the new behavior, seemingly overnight, becoming “the nature of our business.”
The key point in Perlow’s description is that this cycle is haphazard. It’s driven by convenience, expectations, and human nature — not a careful process analysis. We have, in other words, no reason to believe that this behavior is a particularly effective way to conduct business — and yet most modern knowledge workers accept it as an immutable law.
The Need for Workflow Engineering
A few months ago, I introduced the concept of workflow engineering. The basic idea is that knowledge workers should subject their workflows to the same metric-driven analyses that massively increased the profitability of the industrial sector in the early 20th century.
The cycle of responsiveness summarized above underscores the importance of this philosophy. When you don’t subject your professional behavior to scrutiny, and instead just accept the emergent status quo as the “nature of our business,” you may not be happy with what evolves.
In the end, it may turn out that our smartphone addictions are a boon to our productivity (though I doubt it), but as it stands we don’t know either way, as this is just one of many behaviors in 21st century knowledge work that we’re content to allow to emerge without constraint or evaluation.
Put another way: until we’re willing to work seriously on how we work, we’re in for an unpredictable and bumpy ride.


June 28, 2016
Aziz Ansari Ignores His Email
Deep Thoughts with Aziz Ansari
Last summer, comedian and actor Aziz Ansari was a guest on Stephen Dubner’s Freakonomics Radio show.
The stated purpose was to discuss Ansari’s book, Modern Romance, but the conversation wandered toward a wide-ranging exploration of Ansari’s complicated relationship with the Internet. I thought I would excerpt some choice quotes below.
Here’s Ansari on email versus depth:
“I would just get so many emails. And then when I started filming my TV show I just set up a thing that said, this email is dead. I’m not checking email…And I had an assistant on my show and I was like, you can call her…And you know what you realize is, all that shit people email you about all the time, all day, none of it is important. None of it is pressing…I found that I’m much more focused when I don’t have those little questions. And then at the end of the day I just have someone fill me in on everything or I call someone on the phone.”
And here he is on his social media habits:
“I deleted Twitter and Instagram off my phone. I mean I use them to like post stuff but I don’t have them on my phone. I don’t have, like, a feed. I don’t follow anyone. And I used to read that stuff a lot. And now I don’t read it. I don’t see those pictures. And I don’t miss it.”
And on why people spend so much time online:
“What you’re reading it for, and this is just my personal theories about this stuff, what you’re reading it for is a hit of this drug called the Internet.”
And his novel idea for putting the value of most Internet content into perspective:
“Like, here’s a test, OK. Take, like, your nightly or morning browse of the Internet, right? Your Facebook feed, Instagram feed, Twitter, whatever. OK if someone every morning was like, I’m gonna print this and give you a bound copy of all this stuff you read so you don’t have to use the Internet. You can just get a bound copy of it. Would you read that book? No! You’d be like, this book sucks. There’s a link to some article about a horse that found its owner somehow. It’s not that interesting.”
These insights, of course, all lead me toward an insistent question: How can I get this man a copy of Deep Work?


June 21, 2016
Milton Friedman’s Deep Work Seasons
Deep Economics
I’m always looking for particularly inspiring or exotic examples of deep work habits. With this in mind, I was pleased when an alert reader named Stepan recently sent me an interesting case study concerning the Nobel Prize winning economist Milton Friedman.
The following quote is taken from an interview with Friedman published in a macroeconomics textbook:
“[W]e typically spent three solid months in the country at our second home in New Hampshire to begin with and later on in Vermont. Then later on I split my life 50–50: we spent six months a year in Chicago and six months a year in Vermont. Almost all of my writing was done in Vermont or in New Hampshire, relatively little during the actual school year.”
Friedman goes on to elaborate how he maximized depth during his periods away from Chicago:
“I managed pretty much to keep down outside activities. I didn’t go away from Vermont or New Hampshire to make speeches or to address committee meetings or hearings. There were occasional exceptions but for the most part I made it an absolute rule. When I look at my remaining diaries from that period I am shocked by how full the pages are when I am in Chicago and how empty they are when I’m up in Vermont or New Hampshire [laughter]. So that’s the only reason I was able to write as much as I did.”
Readers of Deep Work will recognize this as an extreme version of the bimodal method deployed by deep thinkers as varied as Adam Grant and Carl Jung.
It also reminds me of my time at MIT. When summer rolled around, it sometimes seemed as if most every important professor at Harvard and MIT would decamp to northern New England to do the type of thinking that made them important professors in the first place.
It has always surprised me that these bimodal rhythms aren’t more widely used in other fields where elite-level deep thinking produces high-value results. Put another way, the key in the above quotes is not how much work Friedman accomplished at his country house, but instead how little was accomplished at his office.
#####
P.S., for another interesting discussion of deep work, listen to Tim Ferriss’s recent interview with Jamie Foxx. About an hour into the interview, Ferriss details his theory about how social media is hurting some high-level creatives more than it helps them by crippling their ability to go deep. Foxx, who knows a little something about high-level creative output, enthusiastically agrees.
