Kindle Notes & Highlights
Read between September 1, 2019 and January 11, 2020
thus, update their software many times a day instead of once every few months, increasing their ability to use software to explore the market, respond to events, and release features faster than their competition.
Their evidence refutes the bimodal IT notion that you have to choose between speed and stability—instead, speed depends on stability, so good IT practices give you both.
Common traps included a top-down mandate to adopt Agile, assuming one size fits all, failing to measure (or measuring the wrong things), leadership behavior that did not change, and treating the transformation as a program rather than building a learning organization (which is never done).
Each year we emailed invitations to our mailing lists and leveraged social media, including Twitter, LinkedIn, and Facebook. Our invitations targeted professionals working in technology, especially those familiar with software development and delivery paradigms and DevOps. We encouraged our readers to invite friends and peers who might also work in software development and delivery to help us broaden our reach. This is called snowball sampling.
Some key research questions were: What does it mean to deliver software, and can it be measured? Does software delivery impact organizations? Does culture matter, and how do we measure it? What technical practices appear to be important?
technical, process, and cultural.
To remain competitive and excel in the market, organizations must accelerate: delivery of goods and services to delight their customers; engagement with the market to detect and understand customer demand; anticipation of compliance and regulatory changes that impact their systems; and response to potential risks such as security threats or changes in the economy.
The key to successful change is measuring and understanding the right things with a focus on capabilities—not on maturity.
While maturity models are very popular in the industry, we cannot stress enough that maturity models are not the appropriate tool to use or mindset to have. Instead, shifting to a capabilities model of measurement is essential for organizations wanting to accelerate software delivery. This is due to four factors.
Maturity models assume that “Level 1” and “Level 2” look the same across all teams and organizations, but those of us who work in technology know this is not the case.
Third, capability models focus on key outcomes and how the capabilities, or levers, drive improvement in those outcomes—that is, they are outcome based. This provides technical leadership with clear direction and strategy on high-level goals (with a focus on capabilities to improve key outcomes).
the gap between high performers and low performers narrowed for tempo (deployment frequency and change lead time) and widened for stability (mean time to recover and change failure rate).
First, they focus on outputs rather than outcomes. Second, they focus on individual or local measures rather than team or global ones. Let’s take three examples: lines of code, velocity, and utilization.
in reality we would prefer a 10-line solution to a 1,000-line solution to a problem.
Velocity is designed to be used as a capacity planning tool; for example, it can be used to extrapolate how long it will take the team to complete all the work that has been planned and estimated. However, some managers have also used it as a way to measure team productivity, or even to compare teams.
First, velocity is a relative and team-dependent measure, not an absolute one.
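To make the intended use concrete, here is a minimal sketch of velocity-based capacity planning (the numbers are hypothetical, not from the book). Because point estimates are calibrated per team, the same arithmetic says nothing when applied across teams:

```python
# Hypothetical sketch: velocity as a capacity planning tool.
# Story points are relative to the team that estimated them, so this
# extrapolation is only meaningful within a single team.
from statistics import mean

past_velocities = [21, 18, 24, 20]   # points completed in recent sprints
remaining_points = 130               # estimated points left in the backlog

avg_velocity = mean(past_velocities)             # 20.75 points per sprint
sprints_remaining = remaining_points / avg_velocity
print(f"~{sprints_remaining:.1f} sprints of planned work remaining")
```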
Queueing theory tells us that as utilization approaches 100%, lead times approach infinity; in other words, once you reach very high levels of utilization, it takes teams exponentially longer to get anything done.
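To see why, a standard illustration is the single-server M/M/1 queue (my choice of model; the highlight names no specific one), where mean time in the system diverges as utilization approaches 1:

```latex
% M/M/1 queue: arrival rate \lambda, service rate \mu,
% utilization \rho = \lambda / \mu < 1. Mean time in system:
W = \frac{1}{\mu - \lambda} = \frac{1}{\mu \, (1 - \rho)}
% W = 2/\mu at \rho = 0.5, 10/\mu at \rho = 0.9, 100/\mu at \rho = 0.99,
% and W \to \infty as \rho \to 1.
```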
A successful measure of performance should have two key characteristics. First, it should focus on a global outcome to ensure teams aren’t pitted against each other.
Second, our measure should focus on outcomes, not output: it shouldn’t reward people for putting in large amounts of busywork that doesn’t actually help achieve organizational goals.
In our search for measures of delivery performance that meet these criteria, we settled on four: delivery lead time, deployment frequency, time to restore service, and change fail rate.
Lead time is the time it takes to go from a customer making a request to the request being satisfied.
The second metric to consider is batch size. Reducing batch size is another central element of the Lean paradigm—indeed, it was one of the keys to the success of the Toyota production system. Reducing batch sizes reduces cycle times and variability in flow, accelerates feedback, reduces risk and overhead, improves efficiency, increases motivation and urgency, and reduces costs and schedule growth (Reinertsen 2009, Chapter 5).
a key metric when making changes to systems is what percentage of changes to production (including, for example, software releases and infrastructure configuration changes) fail.
lead to service impairment or outage, or require a hotfix, a rollback, a fix-forward, or a patch). The four measures selected are shown in Figure 2.1.
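As a rough sketch of how the four measures might be computed from a deployment log (the record layout and field names are my own illustration, not from the book):

```python
# Hypothetical sketch: the four delivery measures from a change log.
from dataclasses import dataclass
from datetime import datetime
from statistics import median
from typing import Optional

@dataclass
class Deployment:
    committed_at: datetime                  # when the change was committed
    deployed_at: datetime                   # when it reached production
    failed: bool                            # impaired service / needed remediation
    restored_at: Optional[datetime] = None  # when service was restored, if failed

def delivery_metrics(deps: list[Deployment], window_days: int = 30) -> dict:
    failures = [d for d in deps if d.failed]
    restore_times = [d.restored_at - d.deployed_at
                     for d in failures if d.restored_at]
    return {
        "lead_time": median(d.deployed_at - d.committed_at for d in deps),
        "deploys_per_day": len(deps) / window_days,
        "time_to_restore": median(restore_times) if restore_times else None,
        "change_fail_rate": len(failures) / len(deps),
    }
```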
This is precisely what the Agile and Lean movements predict, but much dogma in our industry still rests on the false assumption that moving faster means trading off against other performance goals, rather than enabling and reinforcing them.
The fact that software delivery performance matters provides a strong argument against outsourcing the development of software that is strategic to your business, and for bringing this capability into the core of your organization.
“whenever there is fear, you get the wrong numbers”
making sure the items are read and interpreted similarly by those who take the survey.
We should emphasize that bureaucracy is not necessarily bad. As Mark Schwartz points out in The Art of Business Value, the goal of bureaucracy is to “ensure fairness by applying rules to administrative behavior.”
Westrum’s theory posits that organizations with better information flow function more effectively.
Finally, teams with these cultural norms are likely to do a better job with their people, since problems are more rapidly discovered and addressed.
“what my . . . experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave—what they do”
In continuous delivery, we invest in building a culture supported by tools and people where we can detect any issues quickly, so that they can be fixed straight away when they are cheap to detect and resolve.
By splitting work up into much smaller chunks that deliver measurable business outcomes quickly for a small part of our target market, we get essential feedback on the work we are doing so that we can course correct.
A key goal of continuous delivery is changing the economics of the software delivery process so the cost of pushing out individual changes is very low.
Computers perform repetitive tasks; people solve problems. One important strategy to reduce the cost of pushing out changes is to take repetitive work that takes a long time, such as regression testing and software deployments, and invest in simplifying and automating this work. Thus, we free up people for higher-value problem-solving work, such as improving the design of our systems and processes in response to feedback.
Relentlessly pursue continuous ...
Everyone is responsible. As we learned from Ron Westrum, in bureaucratic organizations teams tend to focus on departmental goals rather than organizational goals. Thus, development focuses on throughput, testing on quality, and operations on stability. However, in reality these are all system-level outcomes, and they can only be achieved by close collaboration between everyone involved in the software delivery process.
Integrating all these branches requires significant time and rework. Following our principle of working in small batches and building quality in, high-performing teams keep branches short-lived (less than one day’s work) and integrate them into trunk/master frequently. Each change triggers a build process that includes running unit tests. If any part of this process fails, developers fix it immediately.
Automated unit and acceptance tests should be run against every commit to version control to give developers fast feedback on their changes. Developers should be able to run all automated tests on their workstations in order to triage and fix defects.
All types of tests should be runnable on developer machines so that developers get feedback faster and can act on it.
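One common way to make that practical is to tag slower acceptance tests so the fast suite runs locally on every change (a sketch using pytest markers; the marker name and functions are hypothetical, and the book prescribes no specific tool):

```python
# Hypothetical sketch: separating fast unit tests from slow acceptance
# tests with a pytest marker. Register the marker in pytest.ini
# ("markers = acceptance: slow end-to-end tests") to avoid warnings.
import pytest

def apply_discount(price: float, rate: float) -> float:
    """Toy function under test."""
    return price * (1 - rate)

def test_discount_is_applied():
    # Fast unit test: cheap enough to run on every local change.
    assert apply_discount(100.0, 0.1) == pytest.approx(90.0)

@pytest.mark.acceptance
def test_checkout_flow_end_to_end():
    # Slower end-to-end check, excluded from the local fast loop.
    ...

# Locally:  pytest -m "not acceptance"   (fast feedback)
# In CI:    pytest                       (full suite on every commit)
```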
Implementing continuous delivery means creating multiple feedback loops to ensure that high-quality software gets delivered to users more frequently and more reliably.
Fast feedback on the quality and deployability of the system is available to everyone on the team, and people make acting on this feedback their highest priority.
We show these relationships in Figure 4.1 (caption: Drivers of Continuous Delivery).
“Quality is value to some person”
We found that the amount of time spent on new work, unplanned work or rework, and other kinds of work was significantly different between high performers and low performers.
and 27% on unplanned work or rework. Unplanned work and rework are useful proxies for quality because they represent a failure to build quality into our products.
The theory behind this is that when developers are involved in creating and maintaining acceptance tests, there are two important effects. First, the code becomes more testable when developers write tests. This is one of the main reasons why test-driven development (TDD) is an important practice—it forces developers to create more testable designs. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them.
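A toy sketch of that design pressure (names are hypothetical): writing the test first forces the current date to be an injectable parameter rather than a hidden call buried inside the function, which is exactly what makes the design more testable:

```python
# Hypothetical sketch: test-first pressure toward testable design.
# Because the test is written first, 'today' becomes a parameter the
# test controls, instead of a hidden datetime.now() inside the function.
from datetime import date

def is_subscription_active(expires_on: date, today: date) -> bool:
    return today <= expires_on

def test_subscription_active_until_expiry():
    assert is_subscription_active(date(2020, 1, 31), today=date(2020, 1, 31))
    assert not is_subscription_active(date(2020, 1, 31), today=date(2020, 2, 1))
```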
Successful teams had adequate test data to run their fully automated test suites and could acquire test data for running automated tests on demand. In addition, test data did not limit the automated tests they could run.
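One way teams get test data on demand (a sketch with a hypothetical factory; the book prescribes no mechanism) is to generate each test's data in the test itself instead of depending on a shared database dump:

```python
# Hypothetical sketch: on-demand test data via a factory function,
# so no test depends on a pre-loaded shared dataset.
import itertools
from dataclasses import dataclass, field

_ids = itertools.count(1)

@dataclass
class Customer:
    id: int = field(default_factory=lambda: next(_ids))
    name: str = "Test Customer"
    balance: float = 0.0

def make_customer(**overrides) -> Customer:
    """Each test asks for exactly the data it needs."""
    return Customer(**overrides)

def test_overdrawn_customer_is_flagged():
    customer = make_customer(balance=-50.0)   # fresh data, created on demand
    assert customer.balance < 0
```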
Teams that did well had fewer than three active branches at any time, their branches had very short lifetimes (less than a day) before being merged into trunk, and they never had “code freeze” or stabilization periods.
we agree that working on short-lived branches that are merged into trunk at least daily is consistent with commonly accepted continuous integration practices.