How Google Tests Software Quotes

How Google Tests Software by James A. Whittaker
“Scarcity brings clarity.”
James A. Whittaker, How Google Tests Software
“As test documentation goes, test plans have the briefest actual lifespan of any test artifact. Early in a project, there is a push to write a test plan [...]. Indeed, there is often an insistence among project managers that a test plan must exist and that writing it is a milestone of some importance. But, once such a plan is written, it is often hard to get any of those same managers to take reviewing and updating it seriously. The test plan becomes a beloved stuffed animal in the hands of a distracted child. We want it to be there at all times. We drag it around from place to place without ever giving it any real attention. We only scream when it gets taken away.”
James A. Whittaker, How Google Tests Software
“Time is the strangest substance known to man. You can’t see, touch, hear, smell, taste or avoid it. Time makes you stronger-minded but weaker-bodied, gradually transforming you from blushing grape to ornery, grouching raisin. Time is the most precious thing you have, yet you’re happiest when you’re wasting it. Time will outlive you, your offspring, your offspring’s robots and your offspring’s robots’ springs.”
James A. Whittaker, How Google Tests Software
“Small tests lead to code quality. Medium and large tests lead to product quality.”
James A. Whittaker, How Google Tests Software
“Each of the sub-experiments was run similarly—with each engineer owning his mission and design and collaborating at will with their peers. The primary management pull is simply to ensure that their work is reusable, shareable, and avoids limitations. Continually ask them to think bigger even as they code individual features. Google’s culture of sharing ideas—supporting bottom-up experimentation—and organizational flexibility creates a fertile world for test innovation. Whether it ends up valuable or not, you will never know unless you build it and try it out on real-world engineering problems. Google gives engineers the ability to try if they want to take the initiative, as long as they know how to measure success.”
James A. Whittaker, How Google Tests Software
“BITE stands for Browser Integrated Test Environment. BITE is an experiment in bringing as much of the testing activity, testing tools, and testing data into the browser and cloud as possible, and showing this information in context. The goal is to reduce distraction and make the testing work more efficient. A fair amount of tester time and mental energy is spent doing all this manually.”
James A. Whittaker, How Google Tests Software
“BITE tries to address many of these issues and lets the engineer focus on actual exploratory and regression testing—not the process and mechanics. Modern jet fighters have dealt with this information overload problem by building Heads Up Displays (HUDs). HUDs streamline information and put it in context, right over the pilot’s field of view. Much like moving from propeller-driven aircraft to jets, the frequency with which we ship new versions of software at Google also adds to the amount of data and the premium on the speed at which we can make decisions. We’ve taken a similar approach with BITE for regression and manual testing. We implemented BITE as a browser extension because it allows us to watch what the tester is doing (see Figure 3.35) and examine the inside of the web application (DOM). It also enables us to project a unified user experience in the toolbar for quick access to data while overlaying that data on the web application at the same time, much like a HUD.”
James A. Whittaker, How Google Tests Software
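The passage above gestures at BITE's architecture: a browser extension that watches what the tester does, reads the application's DOM, and overlays information like a HUD. The fragment below is a minimal, invented TypeScript sketch of those two ideas; it is not BITE's actual source, and the locator format and HUD contents are assumptions.

```typescript
// Hypothetical content-script sketch; not BITE's actual implementation.

// 1) Watch what the tester is doing: record each click with a rough locator.
document.addEventListener("click", (event) => {
  const target = event.target as HTMLElement | null;
  if (!target) return;
  const locator =
    target.tagName.toLowerCase() + (target.id ? `#${target.id}` : "");
  console.log(`tester clicked: ${locator}`);
});

// 2) HUD-style overlay: project a small panel on top of the page under test.
const hud = document.createElement("div");
hud.style.cssText =
  "position:fixed;top:0;right:0;z-index:99999;padding:4px 8px;" +
  "background:rgba(0,0,0,0.7);color:#fff;font:12px monospace";
hud.textContent = `DOM elements on this page: ${document.querySelectorAll("*").length}`;
document.body.appendChild(hud);
```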
“Candidates who mindlessly jump into coding problems will do the same with testing problems. If we pose a problem to add test variations to modules, we don’t want them to start listing tests until we tell them to stop; we want the best tests first. An SET’s time is limited. We want candidates to take a step back and find the most efficient way to solve the problem, and the previous function definition can use some improvement. A good SET looks at a poorly defined API and turns it into something beautiful while testing it.”
James A. Whittaker, How Google Tests Software
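The function definition the quote refers to comes from the interview question and is not reproduced here. As a purely hypothetical illustration of turning a loosely specified API into something tighter while testing it, a sketch might look like the following; the function, its types, and the tests are invented, and the Node test runner is an assumption.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// A loosely specified original might look like:
//   function getRange(input: any): any
// The redesign makes inputs, outputs, and failure behavior explicit.
function getRange(values: readonly number[]): { min: number; max: number } {
  if (values.length === 0) {
    throw new RangeError("getRange requires at least one value");
  }
  return { min: Math.min(...values), max: Math.max(...values) };
}

// Tests written while redesigning the API pin down its contract.
test("getRange returns the minimum and maximum of its input", () => {
  assert.deepEqual(getRange([3, 1, 2]), { min: 1, max: 3 });
});

test("getRange rejects empty input explicitly", () => {
  assert.throws(() => getRange([]), RangeError);
});
```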
“Bugs that make it out to customers are an important measure for me to monitor the impact I am bringing to a team. I like that number to be around 0, roughly! Also, I really take to heart what the buzz is about my project. If my project has a reputation for bugs or a crappy UI, say, in user forums (watch user forums closely!), I take that as signal for me to improve the level of impact I have on a project. A project can also suffer from bug debt, old bugs that never get fixed, so I also measure my impact by how many old bugs exist that still affect users today. I make an effort to escalate these and use the time a bug has existed as part of the justification for upping its priority.”
James A. Whittaker, How Google Tests Software
“Quality cannot be tested in” is so cliché it has to be true. From automobiles to software, if it isn’t built right in the first place, then it is never going to be right. Ask any car company that has ever had to do a mass recall how expensive it is to bolt on quality after the fact. Get it right from the beginning or you’ve created a permanent mess. However, this is neither as simple nor as accurate as it sounds. Although it is true that quality cannot be tested in, it is equally evident that without testing, it is impossible to develop anything of quality. How does one decide if what you built is high quality without testing it? The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Test isn’t a separate practice; it’s part and parcel of the development process itself.”
James A. Whittaker, How Google Tests Software
“I think the gating point for determining when you can stop testing is when you can feel confident that if any bugs do remain, they are in components (or features or browsers or devices) that have a relatively low usage and thus a low impact on users if they are broken in some way. This is where prioritizing the functionality and supported environments for the application really comes into play.”
James A. Whittaker, How Google Tests Software
“Being an advocate for bug resolution is an important facet of my work. I am constantly battling against feature development versus developer time for bug fixes. So I am sure to make user feedback my ally when I am justifying a bug fix. The more user complaints I can find about a bug that might not otherwise be fixed, the more I can prove that the developer’s time is not wasted by fixing the issue instead of starting on that new feature.”
James A. Whittaker, How Google Tests Software
“Specifically, engineering roles that enable developers to do testing efficiently and effectively have to exist. At Google, we have created roles in which some engineers are responsible for making other engineers more productive and more quality-minded. These engineers often identify themselves as testers, but their actual mission is one of productivity. Testers are there to make developers more productive and a large part of that productivity is avoiding re-work because of sloppy development. Quality is thus a large part of that productivity. We are going to spend significant time talking about each of these roles in detail in subsequent chapters.”
James A. Whittaker, How Google Tests Software
“At times, testing is interwoven with development to the point that the two practices are indistinguishable from each other, and at other times, it is so completely independent that developers aren’t even aware it is going on.”
James A. Whittaker, How Google Tests Software
“Google has also benefitted from being at the inflection point of software moving from massive client-side binaries with multi-year release cycles to cloud-based services that are released every few weeks, days, or hours. This confluence of happy circumstances has endowed us with some similarities to the utopian software development process. Google SWEs are feature developers, responsible for building components that ship to customers. They write feature code and unit test code for those features. Google SETs are test developers, responsible for assisting SWEs with the unit test portion of their work and also in writing larger test frameworks to assist SWEs in writing small and medium tests to assess broader quality concerns. Google TEs are user developers, responsible for taking the users’ perspectives in all things that have to do with quality. From a development perspective, they create automation for user scenarios and from a product perspective, they assess the overall coverage and effectiveness of the ensemble of testing activity performed by the other engineering roles. It is not utopia, but it is our best attempt at achieving it in a practical way where real-world concerns have a way of disrupting best intentions in the most unforeseen and unforgiving way.”
James A. Whittaker, How Google Tests Software
“One thing is for sure: Testing must not create friction that slows down innovation and development.”
James A. Whittaker, How Google Tests Software
“Development and Test Workflow Before we dig into the workflow specific to SETs, it is helpful to understand the overall development context in which SETs work. SETs and SWEs form a tight partnership in the development of a new product or service and there is a great deal of overlap in their actual work. This is by design because Google believes it is important that testing is owned by the entire engineering team and not just those with the word test in their job title. Shipping code is the primary shared artifact between engineers on a team. It is the organization of this code, its development, care, and feeding that becomes the focus of everyday effort.”
James A. Whittaker, How Google Tests Software
“Whether it’s a test framework or test cases, keep it simple and iterate on the design as your project evolves. Don’t try to solve everything upfront. Be aggressive about throwing things away. If tests or automation are too hard to maintain, toss them and build some better ones that are more resilient. Watch out for maintenance and troubleshooting costs of your tests down the road. Observe the 70-20-10 rule: 70 percent small unit tests that verify the behavior of a single class or function, 20 percent medium tests that validate the integration of one or more application modules, and 10 percent large tests (commonly referred to as “system tests” and “end-to-end” tests) that operate on a high level and verify the application as a whole is working. Other than that, prioritize and look for simple automation efforts with big payoffs, always remembering that automation doesn’t solve all your problems, especially when it comes to frontend projects and device testing. You always want smart, exploratory testing and to track test data.”
James A. Whittaker, How Google Tests Software
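As a rough illustration of the small and medium buckets in the 70-20-10 rule, the sketch below pairs a small test of a single function with a medium test of two modules working together; the module, its names, and the Node test runner are assumptions rather than anything from the book. A large test is only described in a comment because it requires a deployed system.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Small test: a single function, no dependencies, runs in milliseconds.
function applyDiscount(price: number, percent: number): number {
  return price * (1 - percent / 100);
}

test("small: applyDiscount takes 50% off", () => {
  assert.equal(applyDiscount(200, 50), 100);
});

// Medium test: two modules working together (Cart delegates to applyDiscount).
class Cart {
  private items: number[] = [];
  add(price: number): void {
    this.items.push(price);
  }
  total(discountPercent: number): number {
    const sum = this.items.reduce((a, b) => a + b, 0);
    return applyDiscount(sum, discountPercent);
  }
}

test("medium: Cart total integrates with applyDiscount", () => {
  const cart = new Cart();
  cart.add(60);
  cart.add(140);
  assert.equal(cart.total(50), 100);
});

// A large (system / end-to-end) test would drive the deployed application
// through its real UI or API, so it is only described here, not coded.
```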
“It takes a diverse family of testers to raise an amazing product.”
James A. Whittaker, How Google Tests Software
“As test engineers (TEs) and software engineers in test (SETs) labor to support the user and developer, respectively, there is one role that ties them together, the test engineering manager (TEM). The TEM is an engineering colleague taking on important work as an individual contributor and a single point of contact through which all the support teams (development, product management, release engineers, document writers, and so on) liaise. The TEM is probably the most challenging position in all of Google, requiring the skills of both the TE and SET. He also needs management skills to support direct reports in career development.”
James A. Whittaker, How Google Tests Software
“Scarcity of resources brings clarity to execution and it creates a strong sense of ownership by those on a project. Imagine raising a child with a large staff of help: one person for feeding, one for diapering, one for entertainment, and so on. None of these people is as vested in the child’s life as a single, overworked parent. It is the scarcity of the parenting resource that brings clarity and efficiency to the process of raising children. When resources are scarce, you are forced to optimize. You are quick to see process inefficiencies and not repeat them. You create a feeding schedule and stick to it. You place the various diapering implements in close proximity to streamline steps in the process. It’s the same concept for software-testing projects at Google. Because you can’t simply throw people at a problem, the tool chain gets streamlined. Automation that serves no real purpose gets deprecated. Tests that find no regressions aren’t written. Developers who demand certain types of activity from testers have to participate in it. There are no make-work tasks. There is no busy work in an attempt to add value where you are not needed.”
James A. Whittaker, How Google Tests Software
“SETs are the engineers involved in enabling testing at all levels of the Google development process we just described. SETs are Software Engineers in Test. First and foremost, SETs are software engineers and the role is touted as a 100 percent coding role in our recruiting literature and internal job promotion ladders. It’s an interesting hybrid approach to testing that enables us to get testers involved early in a way that’s not about touchy-feely “quality models” and “test plans” but as active participants in designing and creating the codebase. It creates an equal footing between feature developers and test developers that is productive and lends credibility to all types of testing, including manual and exploratory testing that occurs later in the process and is performed by a different set of engineers.”
James A. Whittaker, How Google Tests Software
“Leading and managing testers at Google is likely the thing most different from other testing shops. There are several forces at work at Google driving these differences, namely: far fewer testers, hiring competent folks, and a healthy respect for diversity and autonomy. Test management at Google is much more about inspiring than actively managing. It is more about strategy than day-to-day or week-to-week execution. That said, this leaves engineering management in an open-ended and often more complex position than that typical of the places we’ve worked before. The key aspects of test management and leadership at Google are leadership and vision, negotiation, external communication, technical competence, strategic initiatives, recruiting and interviewing, and driving the performance reviews of the team.”
James A. Whittaker, How Google Tests Software
“The general rule in the case where a TEM can choose to take a project is simply to avoid toxic projects. Teams that are unwilling to be equal partners in quality should be left on their own to do their own testing. Teams unwilling to commit to writing small tests and getting good unit-level coverage should be left to dig their graves in peace.”
James A. Whittaker, How Google Tests Software
“A pirate’s ship is a useful analogy for managing test engineering teams at Google. Specifically, the test organization is a world where engineers are by nature questioning, wanting conclusive data, and constantly measuring their lead’s and manager’s directives. One of the key aspects we interview for is being a self-starter and self-directed—so how do you manage these folks? The answer is much like how I imagine the captain of a pirate ship maintains order. The truth is the captain cannot “manage” the ship through brute force or fear as he is outnumbered, and everyone is armed to the teeth with technical talent and other offers for work. He also cannot manage through gold alone, as these pirates often have more than they need for sustenance. What truly drives these pirates is the pirate way of life and the excitement of seeing what they can capture next. Mutiny is always a real possibility, too, as Google’s organizations are dynamic. Engineers are even encouraged to move between teams frequently. If the ship isn’t finding lots of treasure, or if it’s not that fun of a place to work, engineering “pirates” get to step off the ship at the next port and not return when it’s time to sail. Being an engineering leader means being a pirate engineer yourself and knowing just a bit more about what is on the horizon, which ships are sailing nearby, and what treasure they might hold. Leading through technical vision, promises of exciting technical adventures, and interesting ports of call. You always sleep with one eye open as an engineering manager at Google!”
James A. Whittaker, How Google Tests Software
“A funny thing about code is that when left alone, it gets moldy and breaks of its own accord. This is true of product code and test code. A large part of maintenance engineering is about monitoring quality, not looking for new issues.”
James A. Whittaker, How Google Tests Software
“At Google, this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Because the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. If a product breaks in the field, the first point of escalation is the developer who created the problem, not the tester who didn’t catch it.”
James A. Whittaker, How Google Tests Software
“Test code is only useful in the context of being executable in a way that accelerates development and doesn’t slow it down. Thus, it has to be integrated into the actual development process in a way that makes it part of development and not separate from it. Functional code never exists in a vacuum. Neither should test code.”
James A. Whittaker, How Google Tests Software
“We wanted to change Google’s developer culture to include testing as part of every feature developer’s responsibility. They shared their positive experiences with testing and encouraged teams to write tests. Some teams were interested, but didn’t know how to start. Others would put “improve testing” on their team objectives and key results (OKRs), which isn’t always actionable. It was like putting “lose weight” on your New Year’s resolutions. That’s a fine, lofty goal, but if that’s all you say, don’t be surprised if it never happens.”
James A. Whittaker, How Google Tests Software
“Define steps small enough that teams can see and show their progress. Don’t get bogged down trying to create the perfect system with a perfect set of measurements. Nothing will ever be perfect for everyone. Agreeing on something reasonable and moving forward is important when the alternative is being paralyzed. Be flexible where it makes sense to be, but hold the line where you shouldn’t bend.”
James A. Whittaker, How Google Tests Software
