Some More Thoughts on Grad School
Re-Reflection
In 2009, as I was approaching the end of my PhD program, I wrote a blog post titled Some Thoughts on Grad School. It described lessons I learned during my time at MIT.
Since then, I’ve received many requests to revisit the theme. Now that I’m a professor — albeit a new one — I thought I’d once again reflect publicly on what I did well and what I wish I’d done better.
With this in mind, I want to offer a pair of thoughts on a topic of particular importance to my path as an academic: complexity.
Thought #1: Avoid Complexity When Seeking Problems
Early in my graduate school experience, I had a mentor named Rachid — a well-known distributed algorithms specialist from EPFL. I learned many things from Rachid. For example, I once asked him for advice on a summer internship I was considering, making various arguments about the value of gaining connections and learning about industry.
“If you want my personal opinion,” he replied, “your time is better spent at MIT, preparing the next STOC/SOSP/JACM paper.”
To put this in context: STOC, SOSP, and JACM are acronyms for some of the most elite conferences and journals in the field of computer science. The lesson Rachid offered — which I’ve since strongly embraced — is that in the end, hard results are all that count.
But the Rachid lesson I want to emphasize here is about the danger of complexity. His approach was always to reduce a problem to its purest, simplest form. This, he believed, is what leads to true understanding of the mathematical reality underlying the issue. Only once you're armed with this understanding can you add back details (and the complexity they require) with confidence.
If you want to see this philosophy in action, take a look at this paper I co-authored with Rachid and another graduate student from MIT. The big picture problem that interested us was messy: how do parties work together to solve problems when their only means of communication is a broadcast channel where a malicious adversary can both jam and spoof messages?
You’ll notice in the paper, however, that we immediately reduce this down to the simplest possible expression of what makes this setting difficult: two players, Alice and Bob, trying to communicate a single bit, while a third player, Collin (the collider), tries to disrupt things.
All of the results in the paper build on our deep understanding of this simple three-player game.
(For what it’s worth, the paper has since been cited around 50 times.)
The problem here is that most graduate students tend toward the opposite of this approach. Their biggest fear is that they'll propose a result and someone more knowledgeable will look at it, declare it "trivial," and thereby confirm their nagging imposter syndrome. Accordingly, students tend to rush to add technical complexity right away, as if a page full of math validates their ability.
This approach is flawed because it's hard to make an impact in a technical field without deep understanding, and it's hard to build deep understanding of anything that's not dead simple to describe. This is why the most respected professors are often those who are most likely to interrupt you and say, "slow down, and explain this to me like I don't understand anything."
They don’t want equations, they want insight.
Bottom Line: Hold off on complexity as long as possible when studying a problem. It will inevitably enter the scene, but the later the entrance, the more insight you'll develop.
Thought #2: Seek Complexity in Your Technical Skills
My first thought concerned something I think I do pretty well. My second thought concerns something I didn’t do enough as a graduate student, and that I’m only now, painfully, learning to embrace.
The value of a graduate student (not to mention, an assistant professor), I’ve come to realize, is directly proportional to the quantity and complexity of their technical tool kit. If you study algorithms, for example, the more corners of the literature you’ve mastered, and the more mathematical analysis techniques you’re comfortable with, the more problems you’ll be able to solve. And the more problems you’re able to solve, the more likely that you’ll solve some hard ones — the key currency for an academic career.
This thought doesn’t contradict the first thought (though it might seem to). When tackling a problem, you want to start with its simplest expression. To find a good problem and then make sense of its simplest expression, however, you need the most powerful possible combination of knowledge and skills.
The trickiness here is that mastering new knowledge and learning new technical skills is like learning to play a new instrument: it’s difficult, and frustrating, and takes a long time.
All graduate students are forced to develop a basic tool kit through the deliberate practice required to pass courses and contribute to their first publications. The students who thrive, however, don't stop there; they keep pushing themselves to learn more.
I didn’t do nearly enough of this.
It took me two years to get decent at solving a certain class of problems concerning deterministic distributed algorithms (roughly 2004 – 2006). There was then a two-year period where I was satisfied to use only this hammer and seek out nails, no matter how hard they became to find.
The issue I faced was that my field was moving forward. Randomization was where the interesting new work was being done, and my approach was in danger of becoming dated.
It wasn't until 2008 that I began the dreary effort of teaching myself probability theory. In this early paper, for example, you can see the beginning of the transition: the majority of the results are deterministic, but they draw on a tentative, randomized sub-routine. (This is where I reintroduced myself to Dr. Chernoff.)
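For readers who haven't yet met Dr. Chernoff: the Chernoff bound is a standard concentration inequality saying that a sum of independent 0/1 random variables stays sharply concentrated around its mean. One common form (there are many variants) is:

```latex
% X = X_1 + ... + X_n, where the X_i are independent 0/1 random
% variables, and mu = E[X]. Then for 0 < delta <= 1:
\Pr\big[X \ge (1+\delta)\mu\big] \le e^{-\delta^2 \mu / 3},
\qquad
\Pr\big[X \le (1-\delta)\mu\big] \le e^{-\delta^2 \mu / 2}.
```

The key feature for algorithm analysis: the failure probability drops exponentially in the mean, which is what lets a randomized sub-routine succeed "with high probability."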
The next year I published this paper, which pushed me forward in my learning, but was also a terrible strain. A significant fraction of its results came from the following process:
- I would get stuck because I didn't know enough probability theory.
- I would go talk with one of my co-authors, who would reply by filling a whiteboard with a bunch of inequalities.
- I would scramble back to my office and try to recreate the argument from scratch, filling in the details before it slipped my mind.
- I would return to my co-author, only to discover that I had fouled up my dependencies in some terrible way whose repair would likely involve something called a "Martingale."
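As an aside for the curious: a standard martingale tool for taming exactly this kind of dependency trouble is the Azuma-Hoeffding inequality (I won't claim it's the precise argument my co-author used, but it's representative). It recovers Chernoff-style concentration even when the individual steps are not independent, as long as no single step moves the total too far:

```latex
% X_0, X_1, ..., X_n is a martingale with bounded differences
% |X_i - X_{i-1}| <= c_i for each i. Then for any t > 0:
\Pr\big[\,|X_n - X_0| \ge t\,\big]
\le 2\exp\!\left(\frac{-t^2}{2\sum_{i=1}^{n} c_i^2}\right).
```

This is why "bounded dependence" arguments keep reducing to finding the right martingale: once you have one, concentration comes for free.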
This was pretty brutal. But I learned quite a bit.
I am realizing now, however, that my pace was still too slow. For example, I should have shot past independent probabilities and mastered techniques for bounded dependence. This is a natural — though difficult — next step that I avoided for too long.
Over the past year, I've been systematically increasing my pace of skill learning (more on this soon), but if I had committed to this mindset with more purpose back in 2006, I'm embarrassed to think of the extraordinary impact it might have had on my work by now.
Bottom Line: Treat your time as a graduate student like a professional musician treats his or her performance repertoire. If you’re not constantly straining yourself to learn more and perform better, you’re in danger of being left behind.
(Photo by Nietnagel)


Cal Newport's Blog