Kindle Notes & Highlights
Read between June 2, 2020 and March 6, 2025
Writing documentation takes time and effort that could be spent on coding, and the benefits that result from that work are not immediate and are mostly reaped by others. Asymmetrical trade-offs like these are good for the organization as a whole given that many people can benefit from the time investment of a few, but without good incentives, it can be challenging to encourage such behavior.
Could analytics and votes help with this? Knowing that 200 people per month view the page might make us more likely to bother to contribute an update. Or it may help us know we had an impact (for self-satisfaction and perf).
Code documentation is one way to share knowledge; clear documentation not only benefits consumers of the library, but also future maintainers. Similarly, implementation comments transmit knowledge across time: you’re writing these comments expressly for the sake of future readers (including Future You!).
> including Future You!
Relevant xkcd, of course: https://xkcd.com/1421/
nb: Randall visited the Goodreads office the day before he put out this comic. As part of the Q&A I asked him what his code comments were like. I.e., I catalyzed this comic 😛!
Code reviews (see Chapter 9) are often a learning opportunity for both author(s) and reviewer(s). For example, a reviewer’s suggestion might introduce the author to a new testing pattern, or a reviewer might learn of a new library by seeing the author use it in their code. Google standardizes mentoring through code review with the readability process, as detailed in the case study at the end of this chapter.
I guess this is a good time to plug pair programming.
Two people sitting down together to code is a great way to learn and teach. As anyone who's done any sort of teaching knows, the act of teaching (or even just preparing for or attempting it) often results in the teacher learning, too.
The Leadership section of Google’s software engineering job ladder outlines this clearly: Although a measure of technical leadership is expected at higher levels, not all leadership is directed at technical problems. Leaders improve the quality of the people around them, improve the team’s psychological safety, create a culture of teamwork and collaboration, defuse tensions within the team, set an example of Google’s culture and values, and make Google a more vibrant and exciting place to work. Jerks are not good leaders.
I actually think Amazon's Leadership Principles are more effective at instilling the importance of this in daily work. Perhaps it's not so much the LPs themselves, but how they're used: they're promoted even more prominently than Google promotes its Three Respects, and they're built strongly into hiring and perf.
These studies showed that readability has a net positive impact on engineering velocity. CLs by authors with readability take statistically significantly less time to review and submit than CLs by authors who do not have readability.
But isn't that with readability as a requirement? I.e., of course CLs by authors with readability are reviewed and submitted faster than CLs by authors without it, since the latter need an extra readability approval. But what about if readability isn't a requirement at all?
Plus, if only 1% have Readability, then the "time saved" by those with readability almost certainly does not outweigh the readability cost incurred by the other 99%.
I'm not arguing against readability, but this is a poor argument that readability has net positive impact on engineering velocity.
“Build for everyone” is a Google brand statement, but the truth is that we still have a long way to go before we can claim that we do. One way to address these problems is to help the software engineering organization itself look like the populations for whom we build products.
Another way — not to detract from this one — is to make diversity a requirement and part of the development process, like we do for security and accessibility.
For example, if you interview other engineers for positions at your company, it is important to learn how biased outcomes happen in hiring. There are significant prerequisites for understanding how to anticipate harm and prevent it. To get to the point where we can build for everyone, we first must understand our representative populations. We need to encourage engineers to have a wider scope of educational training. The first order of business is to disrupt the notion that as a person with a computer science degree and/or work experience, you have all the skills you need to become an
…
Really, that's the first order of business?
Other issues in hiring not even mentioned:
- Bias based on name: "Minorities Who 'Whiten' Job Resumes Get More Interviews" (https://hbswk.hbs.edu/item/minorities-who-whiten-job-resumes-get-more-interviews). Possible recommendation: hide the name in parts of the process, e.g., when the HM screens candidates.
- Photos of candidates available on LinkedIn. Possible recommendation: mostly show photos only of people you're already connected with or are requesting a connection with (many of us often need a visual aid to remember people).
It’s critical that on your journey to becoming an exceptional engineer, you understand the innate responsibility needed to exercise power without causing harm. The first step is to recognize the default state of your bias caused by many societal and educational factors. After you recognize this, you’ll be able to consider the often-forgotten use cases or users who can benefit or be harmed by the products you build.
What I don't see for discrimination/bias are solid frameworks for identifying, evaluating, and countering bias.
An example of a framework, for another area, is STRIDE for security (https://en.m.wikipedia.org/wiki/STRIDE_(security)).
I don't know of as succinct a framework for accessibility, but that field often has a pretty solid set of checks for text size, touch target size, contrast, and more.
We need to understand whether the software systems we build will eliminate the potential for entire populations to experience shared prosperity and provide equal access to technology. Historically, companies faced with a decision between completing a strategic objective that drives market dominance and revenue and one that potentially slows momentum toward that goal have opted for speed and shareholder value.
This is more a law of nature than anything else. With so many factors affecting the outcome (timing, competition, limited resources, etc.), we have to set priorities. That can often mean focusing on a smaller group with money before "building for all"; after some success, you can then build for all, or at least work towards it.
One example of this is Tesla, which started with the Roadster (a very niche market) and then the Model S (a luxury vehicle). It was only through that path that Tesla could reach a point where it had a viable mass-market-ish strategy (Model 3).
The point: sometimes you have to focus on a few and be successful there; only then are you in a position to build for all (resources, infrastructure, etc.).
Even when we do have representation, a training set can still be biased and produce invalid results. A study completed in 2016 found that more than 117 million American adults are in a law enforcement facial recognition database.
Here's a better example, I think: "Algorithms Should’ve Made Courts More Fair. What Went Wrong?" https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/: "A 2011 Kentucky law requires judges to consult an algorithm when deciding whether defendants must post cash bail. More whites were allowed to go home, but not blacks."
For example, as a hiring software engineer manager, you’re accountable for ensuring that your candidate slates are balanced. Are there women or other underrepresented groups in the pool of candidates’ reviews? After you hire someone, what opportunities for growth have you provided, and is the distribution of opportunities equitable?
Another suggestion: ensure your interviewing team is diverse.
1. This ensures you're providing that training/opportunity to everyone, and
2. Represents diversity to your candidates better, improving your chances of successful hiring.
For example, if you are an engineering manager who wants to hire more women, don’t just focus on building a pipeline. Focus on other aspects of the hiring, retention, and progression ecosystem and how inclusive it might or might not be to women. Consider whether your recruiters are demonstrating the ability to identify strong candidates who are women as well as men.
> Consider whether your recruiters are demonstrating the ability to identify strong candidates who are women as well as men.
Isn't that focusing on the pipeline?
candidates who had received a poor performance rating were likely to overcome the poor rating if they found a new team. In fact, they were just as likely to receive a satisfactory or exemplary performance rating as candidates who had never received a poor rating. In short, performance ratings are indicative only of how a person is performing in their given role at the time they are being evaluated.
Another unsaid thing behind this: a poor rating might not be due just to the role at the time, but to the manager at the time.
I see a lot written about the importance of good managers. But what I don't see very often is how to deal with bad managers. And in practice, I can't remember a bad manager ever being dealt with...
Sometimes, a TLM is a more senior person, but more often than not, the role is taken on by someone who was, until recently, an individual contributor. At Google, it’s customary for larger, well-established teams to have a pair of leaders — one TL and one engineering manager — working together as partners. The theory is that it’s really difficult to do both jobs at the same time (well) without completely burning out, so it’s better to have two specialists crushing each role with dedicated focus.
> more often than not, the role is taken by someone who was, until recently, an individual contributor... it's difficult to do both jobs at the same time
So, we take the hardest possible role — one juggling two roles — and tend to fill it with someone who doesn't even have experience with one of the roles?
Another big reason for not becoming a manager is often unspoken but rooted in the famous “Peter Principle,” which states that “In a hierarchy every employee tends to rise to his level of incompetence.”
While the Peter Principle may come into play when moving into management, it's not unique to that move. It can apply to levels within a ladder (e.g., SWE L5 → SWE L6). Switching to the manager ladder shouldn't be considered "rising" to incompetence. Just as a person should demonstrate the level above before promotion, a person should demonstrate enough competence at some key manager responsibilities before they transfer _over_ to another ladder.
There are, however, great reasons to consider becoming a TL or manager. First, it’s a way to scale yourself. Even if you’re great at writing code, there’s still an upper limit to the amount of code you can write. Imagine how much code a team of great engineers could write under your leadership!
IMHO, this is not the message to send about considering being a manager. Managing is not about self, or about having people do what you want done the way you would do it (scaling self). Also, the responsibilities of a manager and of the people they're managing are different. So a manager _manages scale_, but isn't _scaling self_ (except when managing managers).
IC and manager are very different roles, and this makes it sound like becoming a manager is an effective way to become a super IC ("scale" your ICness by managing others to be your mini-me's).
This practically contradicts the following section; "managing is serving first."
Whereas the assembly-line worker of years past could be trained in days and replaced at will, software engineers working on large codebases can take months to get up to speed on a new team. Unlike the replaceable assembly-line worker, these people need nurturing, time, and space to think and create.
A key part of this equation is employee mobility. If the employee doesn't like their manager/company, they are much more likely than their blue-collar or historic counterparts to be able to find a new, great job.
If there’s one thing you remember from this chapter, make it this: Traditional managers worry about how to get things done, whereas great managers worry about what things get done (and trust their team to figure out how to do it).
If you want this in book form: First, Break All the Rules (https://www.goodreads.com/book/show/20417449-first-break-all-the-rules)
It’s never any fun to pull teeth. We’ve seen team leaders do all the right things to build incredibly strong teams only to have these teams fail to excel (and eventually fall apart) because of just one or two low performers. We understand that the human aspect is the most challenging part of writing software, but the most difficult part of dealing with humans is handling someone who isn’t meeting expectations.
This is a good chance to scare you away from management so you are more aware and sure of what you're getting into: dealing with low performers is unpleasant, hard, and time consuming.
It's unpleasant letting someone know they're a low performer, which you may have to do. Worse, firing someone is also unpleasant, both when you wish you didn't have to and when you're glad to finally get to. (If you don't find it unpleasant, I'm inclined to believe you're not a good manager 😬 because we should have sympathy even for those who "deserve it.")
It's hard. It's a difficult skill to let someone know they're under-performing in a healthy and productive way.
And it's time consuming: learning how to deal with it effectively, managing performance, handling PIPs, etc. Meanwhile, it pulls you away from where you want to be spending your time: with your best performers (learning from them as well as doing what you can to empower them and make them even more effective).
This will usually lead the employee to the answer,6
This also reinforces (especially with the footnote here, "Rubber duck debugging") why I believe in pair programming: as you work through a problem with your pair or explain it to them, you often end up learning something yourself (at the very least, learning how to teach, or just honing your technical communication skills).
the manager had it solved in less than two hours simply because he knew the right person to contact to discuss the matter. Another time, a team needed some server resources and just couldn’t get them allocated. Fortunately, the team’s manager was in communication with other teams across the company and managed to get the team exactly what it needed that very afternoon.
None of these examples showed the manager unblocking _because they were a manager_. These were successful examples simply because they knew the right people. Non-managers can do that too; sure, managers might sometimes be in a position to know more such people, or being a manager may correlate with more tenure and thus knowing more people.
Better examples would be using their manager position to escalate the issue (e.g., your skip-level takes you more seriously than they would take your report).
Dan Pink explains that the way to make people the happiest and most productive isn’t to motivate them extrinsically (e.g., throw piles of cash at them); rather, you need to work to increase their intrinsic motivation. Dan claims you can increase intrinsic motivation by giving people three things: autonomy, mastery, and purpose.
Another useful framework here is ikigai (https://en.wikipedia.org/wiki/Ikigai). A way to think about aligning passion, value, skill, and work.
Leading a team is a different task than that of being a software engineer. As a result, good software engineers do not always make good managers, and that’s OK — effective organizations allow productive career paths for both individual contributors and people managers.
At every step, this process is frustrating: you mourn the loss of these details, and you come to realize that your prior engineering expertise is becoming less and less relevant to your job. Instead, your effectiveness depends more than ever on your general technical intuition and ability to galvanize engineers to move in good directions.
It's not even just galvanizing engineers; it's getting their managers on board, and their product and UX teams, too. And working against the processes they have in place (or just aversion to new processes coming from 2+ degrees away)…
So, assuming that we understand the basics of leadership, what does it take to scale yourself into a really good leader?
Again, I protest framing this as "scaling yourself." Do the ICs of the org see themselves as "scales" of their management chain? Or is Sundar scaling _himself_? No, he's leading an organization of people not like himself.
First, you need to identify the blinders; next, you need to identify the trade-offs; and then you need to decide and iterate on a solution.
My strategy is different.
1) Identify the success/evaluation criteria (including deal-breakers). That is, what things are the most important. (This is closest to their #2, trade-offs; but we won't see trade-offs in my strategy until #3).
2) What are all of the options? List everything. This is for everyone's good; don't leave room for "but why aren't you considering <x>?" This should address blinders, their #1.
3) Evaluate the options (#2) against the criteria (#1). That is, decide which option has the trade-offs that best fit the success criteria (including best avoid the deal breakers). As I've said before, it's not about _finding_ the right answer, but figuring out which has the best trade-offs per your priorities.
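Here's a toy sketch of those three steps, purely illustrative; the criteria, weights, and options below are hypothetical, not from the book:

```python
# Step 1: success/evaluation criteria, weighted by importance, plus deal-breakers.
criteria = {"latency": 3, "migration_cost": 2, "operational_burden": 1}
deal_breakers = {"requires_downtime"}

# Step 2: list *every* option, so nobody can ask "why aren't you considering <x>?"
options = {
    "rewrite_service": {"latency": 5, "migration_cost": 1, "operational_burden": 2, "flags": set()},
    "add_cache_layer": {"latency": 4, "migration_cost": 4, "operational_burden": 3, "flags": set()},
    "vertical_scaling": {"latency": 2, "migration_cost": 5, "operational_burden": 4, "flags": {"requires_downtime"}},
}

# Step 3: evaluate each option against the criteria; deal-breakers disqualify outright.
def score(option):
    if option["flags"] & deal_breakers:
        return float("-inf")
    return sum(weight * option[name] for name, weight in criteria.items())

best = max(options, key=lambda name: score(options[name]))
print(best)  # whichever option's trade-offs best fit the stated priorities
```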
A simple way to depict the trade-offs is to draw a triangle of tension between Good (Quality), Fast (Latency), and Cheap (Capacity), as illustrated in Figure 6-1.
I think the illustration is better with the dimensions on the edges. Then your choice is a point within or on the triangle, where distance from an edge indicates how much you're not prioritizing that dimension (closer to an edge means choosing it, further away means not). Choosing a point at a vertex is thus choosing the two edges that meet there completely over the third edge (that point is as far as possible from the third edge).
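One nice property of that framing (my addition; it assumes the triangle is drawn equilateral): by Viviani's theorem, the distances from any interior point to the three edges sum to a constant,

$$d_{\text{Good}} + d_{\text{Fast}} + d_{\text{Cheap}} = h \quad (\text{the triangle's altitude}),$$

so moving your point closer to one edge (choosing that dimension) necessarily pushes it further from at least one of the others.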
It’s easy to improve any one of these traits by deliberately harming at least one of the other two. For example, you can improve quality by putting more data on the search results page — but doing so will hurt capacity and latency. You can also do a direct trade-off between latency and capacity by changing the traffic load on your serving cluster.
This allowed them to construct a metric that pitted quality-driven improvements to short-term user engagement against latency-driven damage to long-term user engagement. This approach allows us to make more data-driven decisions about product changes. For example, if a small change improves quality but also hurts latency, we can quantitatively decide whether the change is worth launching or not.
Quality-driven and latency-driven weren't even the bottom line metrics. Short-term and long-term engagement metrics were.
In other words, you could have bypassed the quality vs latency test and evaluated against what were apparently the actual trade-off metrics: engagement.
Suppose that you’re working diligently through your inbox, responding to problems, and then you decide to put 20 minutes aside to fix a longstanding and nagging issue. But before you carry out the task, be mindful and stop yourself. Ask this critical question: Am I really the only one who can do this work?
There's a simple analogy I like (here's my bad attempt at it, for now): A real estate agent could mow the lawn in 15 minutes, but pays someone to do it who will take 30. And they could clean the pool in 30, but will pay someone who takes 45. This allows them to focus on doing the things only they can do (listing, creating compelling marketing materials, etc.) and to handle more properties at once.
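To put rough numbers on the opportunity cost (these are entirely made up, just to show the shape of the argument): if an hour of agent-only work (listing, marketing) is worth about $200, then mowing the lawn yourself for 15 minutes consumes roughly $50 of that value to avoid, say, a $30 fee. Delegating wins even though the hired person takes twice as long, because the comparison that matters is the value of what only you can do, not who does the chore fastest.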
The team clings to its blinders, because the solution has become part of the team’s identity and self-worth. If the team instead owns the problem (e.g., “We are the team that provides version control to the company”), it is freed up to experiment with different solutions over time.
Related problem: people can get fixated on a potential solution, and get into "solution looking for a problem" mode.
As you moved into leadership, though, you might have noticed that your main mode of work became less predictable and more about firefighting. That is, your job became less proactive and more reactive. The higher up in leadership you go, the more escalations you receive. You are the “finally” clause in a long list of code blocks!
Goal: Engineers write higher-quality code as a result of the readability process.
Signals: Engineers who have been granted readability judge their code to be of higher quality than engineers who have not been granted readability. The readability process has a positive impact on code quality.
What, really? Researchers think that "Engineers who have been granted readability judge their code to be of higher quality than engineers who have not been granted readability" is a valid signal? People have a tendency to validate their investments (not to mention themselves).
Goal: Engineers complete work tasks faster and more efficiently as a result of the readability process.
Signals: Engineers who have been granted readability judge themselves to be more productive than engineers who have not been granted readability. Changes written by engineers who have been granted readability are faster to review than changes written by engineers who have not been granted readability.
That's not at all close to "The net effect on _everyone is faster_ with the Readability process in place," especially considering that fewer than 10% have readability; i.e., only that minority is faster.
For readability, we had a decision of either using a poor proxy and possibly making a decision based on it, or simply acknowledging that this is a point that cannot currently be measured. Ultimately, we decided not to capture this as a quantitative measure, though we did ask engineers to self-rate their code quality.
You could teach someone incorrectly how to tie a knot, swing a golf club, or bake a cake (teaching them bad technique that does more harm than good), yet if you survey them, they will probably self-rate their skill higher.
I think you can do better (that's not to say it's necessarily worth it): ask five people to rate the readability of the engineers' code. Don't ask them, "How would you rate the Readability of it?" but "How would you rate how easy it was to understand and maintain?"
Readability Survey: Proportion of engineers reporting that not having readability reduces team engineering velocity
But that's only because it's required; i.e., Readability is required to submit, so of course submitting will be faster if there's more Readability available on the team. But if there's no Readability requirement, then velocity would be even faster.
Logs data: Median shepherding time for CLs from authors with readability and without readability
Why isn't this metric used with the "The readability process does not have a negative impact on engineering velocity." Signal? Multiply the difference in velocity by the adoption to get an idea of the net effect of Readability on velocity.
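Roughly what I mean, as a back-of-the-envelope formula (the symbols are my own, not from the study; p is the fraction of CLs authored by engineers with readability):

$$\text{net velocity effect} \approx p \cdot \Delta t_{\text{review time saved}} \;-\; (1 - p) \cdot \Delta t_{\text{readability overhead}}$$

With p small (the notes above suggest well under 10% of engineers have readability), the overhead term dominates unless the per-CL savings are very large.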
For readability, our study showed that it was overall worthwhile: engineers who had achieved readability were satisfied with the process and felt they learned from it.
But was it worth the cost to the rest of the engineers (those the process slowed down), or to the organization? Overall it could have resulted in slower delivery of features, and thus less market share and revenue; even if faster delivery comes at the cost of quality, sometimes that trade-off is the right one. Perhaps GCP could have benefited from faster time-to-market with good-enough-for-initial-releases quality.
So why do we have rules? The goal of having rules in place is to encourage “good” behavior and discourage “bad” behavior. The interpretation of “good” and “bad” varies by organization, depending on what the organization cares about. Such designations are not universal preferences; good versus bad is subjective, and tailored to needs. For some organizations,
"Good" and "bad" also vary by individuals. We each think differently and process code differently. A "style" that one considers easier to understand (and therefore easier to modify/maintain) might be the opposite for someone else.
These could range from purely stylistic differences (e.g., alignment of parameters on wrapped lines) to more "functional" ones (e.g., a short-circuiting sequence of `if` statements vs. nested `if`s).
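For example, here's the kind of "functional" difference I mean; the two functions below behave identically (hypothetical validation logic), and which one is "easier to understand" is genuinely a matter of individual taste:

```python
from collections import namedtuple

User = namedtuple("User", ["email", "verified"])  # hypothetical stand-in type

# Style A: a short-circuiting sequence of ifs (guard clauses).
def validate_a(user):
    if user is None:
        return "missing user"
    if not user.email:
        return "missing email"
    if not user.verified:
        return "unverified"
    return "ok"

# Style B: nested ifs; same behavior, different shape.
def validate_b(user):
    if user is not None:
        if user.email:
            if user.verified:
                return "ok"
            else:
                return "unverified"
        else:
            return "missing email"
    else:
        return "missing user"

assert validate_a(User("a@b.c", True)) == validate_b(User("a@b.c", True)) == "ok"
```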
Another principle of our rules is to optimize for the reader of the code rather than the author. Given the passage of time, our code will be read far more frequently than it is written. We’d rather the code be tedious to type than difficult to read. In our Python style guide, when discussing conditional expressions, we recognize that they are shorter than if statements and therefore more convenient for code authors. However, because they tend to be more difficult for readers to understand than the more verbose if statements, we restrict their usage. We value “simple to read” over “simple to
…
Another factor not mentioned here (nor in the linked Python style guide): safety / error-proneness.
Applying that to the example here: conditional expressions have the pro that they are less error prone, because a value is guaranteed to be assigned to the variable for every outcome. The same is not necessarily true using `if` statements.
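A minimal sketch of that point (the `User` class here is hypothetical):

```python
class User:
    def __init__(self, is_admin: bool):
        self.is_admin = is_admin

user = User(is_admin=False)

# Conditional expression: `label` is guaranteed a value on every path.
label = "admin" if user.is_admin else "member"

# Statement form: both branches must be written out; if the `else` were
# forgotten, `label` could be left unassigned (a NameError later) or
# keep a stale value from earlier code.
if user.is_admin:
    label = "admin"
else:
    label = "member"
```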
For example, our Java, JavaScript, and C++ style guides mandate use of the override annotation or keyword whenever a method overrides a superclass method.
This shouldn't be considered a matter of style; it's more a matter of safety. With the @Override annotation, the compiler flags the annotated method as erroneous if either the overridden or the overriding method changes such that they no longer match.
Meanwhile, the style aspect of this rule contradicts another of Google's Java style rules: not prefixing names to indicate their type (e.g., "m" for member fields: https://google.github.io/styleguide/javaguide.html#s5.1-identifier-names), because why put into code what we should expect an IDE to indicate for us (syntax highlighting, icons in the margin, etc.)?
Python’s dynamic nature allows such behavior, and in very limited circumstances, using hasattr() and getattr() is valid. In most cases, however, they just cause obfuscation and introduce bugs. Although these advanced language features might perfectly solve a problem for an expert who knows how to leverage them, power features are often more difficult to understand and are not very widely used.
To be clearer, what's best, in order, is what can be apparent at:
1. Coding time
2. Compile time
3. Testing time
4. Runtime
Reflection makes things unapparent for 1-3
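A tiny illustration (hypothetical class, just for the example): the reflective call hides the dependency from the reader, the IDE, and static checks, so a typo only shows up at runtime, and only if that path is exercised.

```python
class ReportJob:
    def export_csv(self):
        return "csv written"

job = ReportJob()

# Direct call: apparent at coding time (autocomplete, find-usages),
# caught by static checkers, and exercised by almost any test.
job.export_csv()

# Reflective call: the method name is just a string, so the typo below
# isn't apparent at coding, "compile", or even testing time unless this
# exact branch runs; hasattr() quietly swallows the mistake.
method_name = "export_cvs"  # typo: should be "export_csv"
if hasattr(job, method_name):
    getattr(job, method_name)()  # silently never executes
```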
We need all of our engineers able to operate in the codebase, not just the experts.
It's not just for the sake of non-experts, either: even "experts" can't know about every little nook and cranny of the codebase, nor should they have to perform exhaustive searches to make simple changes.
It's about safety, efficiency, and convenience for novice and expert alike.
In fact, the C++ style arbiter group currently consists of four members. This might seem strange: having an odd number of committee members would prevent tied votes in case of a split decision. However, because of the nature of the decision making approach, where nothing is “because I think it should be this way” and everything is an evaluation of trade-off, decisions are made by consensus rather than by voting. The four-member group is happily functional as-is.
But even "evaluation of tradeoffs" is subjective. That is, there might not be consensus on which trade-off is better than another for this problem, or the speculative frequency or severity the trade-off will have on the many individuals.
The question of line lengths has stopped being interesting.13 Engineers just run the style checkers and keep moving forward. When formatting is done the same way every time, it becomes a non-issue during code review, eliminating the review cycles that are otherwise spent finding, flagging, and fixing minor style nits.
Definitely think this point is understated. Time saved, and perhaps even arguments avoided and friendships spared.
Comprehension of Code
A code review typically is the first opportunity for someone other than the author to inspect a change.
I don't think the importance of comprehension is sold well enough (or early enough). Code _will_ need to be modified at some point. The first, essential step in modifying code is understanding it. Code that's hard to understand is not just harder to modify, it's more dangerous (did the modifier understand everything the code does, so they can avoid unintentional changes in behavior?).
So, that's why it's valuable for code review to ensure comprehensibility.

