Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations
4%
thus, update their software many times a day instead of once every few months, increasing their ability to use software to explore the market, respond to events, and release features faster than their competition.
4%
Their evidence refutes the bimodal IT notion that you have to choose between speed and stability—instead, speed depends on stability, so good IT practices give you both.
5%
Organizations fell into common traps: mandating Agile adoption from the top down, assuming it was one size fits all, failing to measure (or measuring the wrong things), leaving leadership behavior unchanged, and treating the transformation as a finite program rather than as the creation of a learning organization (which is never done).
7%
each year we emailed invitations to our mailing lists and leveraged social media, including Twitter, LinkedIn, and Facebook. Our invitations targeted professionals working in technology, especially those familiar with software development and delivery paradigms and DevOps. We encouraged our readers to invite friends and peers who might also work in software development and delivery to help us broaden our reach. This is called snowball sampling,
8%
Some key research questions were: What does it mean to deliver software, and can it be measured? Does software delivery impact organizations? Does culture matter, and how do we measure it? What technical practices appear to be important?
9%
technical, process, and cultural.
10%
To remain competitive and excel in the market, organizations must accelerate: delivery of goods and services to delight their customers; engagement with the market to detect and understand customer demand; anticipation of compliance and regulatory changes that impact their systems; and response to potential risks such as security threats or changes in the economy.
11%
The key to successful change is measuring and understanding the right things with a focus on capabilities—not on maturity.
11%
While maturity models are very popular in the industry, we cannot stress enough that maturity models are not the appropriate tool to use or mindset to have. Instead, shifting to a capabilities model of measurement is essential for organizations wanting to accelerate software delivery. This is due to four factors.
11%
Maturity models assume that “Level 1” and “Level 2” look the same across all teams and organizations, but those of us who work in technology know this is not the case.
11%
Third, capability models focus on key outcomes and how the capabilities, or levers, drive improvement in those outcomes—that is, they are outcome based. This provides technical leadership with clear direction and strategy on high-level goals (with a focus on capabilities to improve key outcomes).
12%
the gap between high performers and low performers narrowed for tempo (deployment frequency and change lead time) and widened for stability (mean time to recover and change failure rate).
13%
First, they focus on outputs rather than outcomes. Second, they focus on individual or local measures rather than team or global ones. Let’s take three examples: lines of code, velocity, and utilization.
13%
in reality we would prefer a 10-line solution to a 1,000-line solution to a problem.
13%
Velocity is designed to be used as a capacity planning tool; for example, it can be used to extrapolate how long it will take the team to complete all the work that has been planned and estimated. However, some managers have also used it as a way to measure team productivity, or even to compare teams.
13%
First, velocity is a relative and team-dependent measure, not an absolute one.
13%
Queue theory in math tells us that as utilization approaches 100%, lead times approach infinity—in other words, once you get to very high levels of utilization, it takes teams exponentially longer to get anything done.
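The classic M/M/1 queueing formula makes this concrete. As an illustrative sketch (my own example, not from the book): with service rate mu and utilization rho, the average time an item spends in the system is 1/(mu * (1 - rho)), which grows without bound as rho approaches 1.

```python
# Illustrative M/M/1 queueing sketch (my own example, not from the book):
# with service rate mu (items per day) and utilization rho, the average time
# an item spends in the system is 1 / (mu * (1 - rho)).
def lead_time(mu: float, rho: float) -> float:
    return 1.0 / (mu * (1.0 - rho))

# Doubling utilization from 50% to 99% does not double lead time;
# it multiplies it fiftyfold.
for rho in (0.50, 0.90, 0.99):
    print(f"utilization {rho:.0%}: lead time {lead_time(1.0, rho):.0f} days")
```

At 50% utilization work takes 2 days in this toy model, at 90% it takes 10 days, and at 99% it takes 100 days: the exponential blow-up the highlight describes.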
13%
successful measure of performance should have two key characteristics. First, it should focus on a global outcome to ensure teams aren’t pitted against each other.
14%
Second, our measure should focus on outcomes not output: it shouldn’t reward people for putting in large amounts of busywork that doesn’t actually help achieve organizational goals.
14%
In our search for measures of delivery performance that meet these criteria, we settled on four: delivery lead time, deployment frequency, time to restore service, and change fail rate.
14%
Lead time is the time it takes to go from a customer making a request to the request being satisfied.
14%
The second metric to consider is batch size. Reducing batch size is another central element of the Lean paradigm—indeed, it was one of the keys to the success of the Toyota production system. Reducing batch sizes reduces cycle times and variability in flow, accelerates feedback, reduces risk and overhead, improves efficiency, increases motivation and urgency, and reduces costs and schedule growth (Reinertsen 2009, Chapter 5
15%
a key metric when making changes to systems is what percentage of changes to production (including, for example, software releases and infrastructure configuration changes) fail.
15%
lead to service impairment or outage, require a hotfix, a rollback, a fix-forward, or a patch). The four measures selected are shown in Figure 2.1.
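The four measures can be computed from simple deployment records. Below is a minimal sketch (my own illustration with a hypothetical record shape, not the book's survey methodology):

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records (my own example): when each change was
# committed, when it reached production, whether it caused a failure, and
# when service was restored.
deploys = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17),
     "failed": False, "restored": None},
    {"committed": datetime(2024, 1, 2, 9), "deployed": datetime(2024, 1, 3, 9),
     "failed": True, "restored": datetime(2024, 1, 3, 10)},
]

# Delivery lead time: commit -> running in production.
delivery_lead_time = median(d["deployed"] - d["committed"] for d in deploys)
# Deployment frequency: deployments in the observed period.
deployment_frequency = len(deploys)
# Change fail rate: fraction of deployments that caused a failure.
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)
# Time to restore: outage start -> service restored, over failed deployments.
time_to_restore = median(d["restored"] - d["deployed"] for d in deploys if d["failed"])

print(delivery_lead_time, deployment_frequency, change_fail_rate, time_to_restore)
```

Medians are used rather than means because delivery timings are typically skewed by a few outliers.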
16%
This is precisely what the Agile and Lean movements predict, but much dogma in our industry still rests on the false assumption that moving faster means trading off against other performance goals, rather than enabling and reinforcing them.
17%
The fact that software delivery performance matters provides a strong argument against outsourcing the development of software that is strategic to your business, and instead bringing this capability into the core of your organization.
18%
“whenever there is fear, you get the wrong numbers”
20%
making sure the items are read and interpreted similarly by those who take the survey. This
21%
should emphasize that bureaucracy is not necessarily bad. As Mark Schwartz points out in The Art of Business Value, the goal of bureaucracy is to “ensure fairness by applying rules to administrative behavior. The rules
21%
Westrum’s theory posits that organizations with better information flow function more effectively.
21%
hierarchical. Finally, teams with these cultural norms are likely to do a better job with their people, since problems are more rapidly discovered and addressed.
22%
“what my . . . experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave—what they do”
23%
In continuous delivery, we invest in building a culture supported by tools and people where we can detect any issues quickly, so that they can be fixed straight away when they are cheap to detect and resolve.
23%
By splitting work up into much smaller chunks that deliver measurable business outcomes quickly for a small part of our target market, we get essential feedback on the work we are doing so that we can course correct.
23%
A key goal of continuous delivery is changing the economics of the software delivery process so the cost of pushing out individual changes is very low.
23%
Computers perform repetitive tasks; people solve problems. One important strategy to reduce the cost of pushing out changes is to take repetitive work that takes a long time, such as regression testing and software deployments, and invest in simplifying and automating this work. Thus, we free up people for higher-value problem-solving work, such as improving the design of our systems and processes in response to feedback.
23%
Relentlessly pursue continuous ...
This highlight has been truncated due to consecutive passage length restrictions.
23%
Everyone is responsible. As we learned from Ron Westrum, in bureaucratic organizations teams tend to focus on departmental goals rather than organizational goals. Thus, development focuses on throughput, testing on quality, and operations on stability. However, in reality these are all system-level outcomes, and they can only be achieved by close collaboration between everyone involved in the software delivery process.
23%
Integrating all these branches requires significant time and rework. Following our principle of working in small batches and building quality in, high-performing teams keep branches short-lived (less than one day’s work) and integrate them into trunk/master frequently. Each change triggers a build process that includes running unit tests. If any part of this process fails, developers fix it immediately.
Sudhanshu
Add more kinds of tests so that integration is quicker and breakage causes fewer problems.
24%
process. Automated unit and acceptance tests should be run against every commit to version control to give developers fast feedback on their changes. Developers should be able to run all automated tests on their workstations in order to triage and fix defects.
Sudhanshu
All types of tests should be runnable on developer machines, so that feedback arrives faster and can be acted on sooner.
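As a toy illustration of that practice (my own example, not from the book): production code and its fast automated test live side by side, so a developer can run the suite on a workstation before every commit.

```python
# Toy example (mine, not the book's): a unit of production code plus a fast
# automated test a developer can run locally on every change.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(19.99, 0) == 19.99

if __name__ == "__main__":
    # Fast local run: triage and fix defects before pushing to trunk.
    test_apply_discount()
    print("all local tests passed")
```

The same test file would also run in the commit-triggered build, so local and CI feedback stay consistent.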
24%
Implementing continuous delivery means creating multiple feedback loops to ensure that high-quality software gets delivered to users more frequently and more reliably.
25%
Fast feedback on the quality and deployability of the system is available to everyone on the team, and people make acting on this feedback their highest priority.
25%
We show these relationships in Figure 4.1.
Figure 4.1: Drivers of Continuous Delivery
25%
“Quality is value to some person”
26%
We found that the amount of time spent on new work, unplanned work or rework, and other kinds of work, was significantly different between high performers and low performers.
26%
and 27% on unplanned work or rework. Unplanned work and rework are useful proxies for quality because they represent a failure to build quality into our products.
26%
The theory behind this is that when developers are involved in creating and maintaining acceptance tests, there are two important effects. First, the code becomes more testable when developers write tests. This is one of the main reasons why test-driven development (TDD) is an important practice—it forces developers to create more testable designs. Second, when developers are responsible for the automated tests, they care more about them and will invest more effort into maintaining and fixing them.
27%
successful teams had adequate test data to run their fully automated test suites and could acquire test data for running automated tests on demand. In addition, test data was not a limit on the automated tests they could run.
27%
Teams that did well had fewer than three active branches at any time, their branches had very short lifetimes (less than a day) before being merged into trunk, and they never had “code freeze” or stabilization periods.
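Those norms can be checked mechanically. A hypothetical sketch (the record shape and thresholds are my assumptions, informed by the numbers above, not the book's code):

```python
from datetime import datetime, timedelta

def branch_health(branches, now):
    """branches: dicts with 'created' (datetime) and 'merged' (bool).
    Flags patterns that deviate from trunk-based norms: three or more
    active branches, or any active branch older than one day."""
    active = [b for b in branches if not b["merged"]]
    stale = [b for b in active if now - b["created"] > timedelta(days=1)]
    return {"too_many_active": len(active) >= 3, "stale": len(stale)}

now = datetime(2024, 1, 5, 12)
branches = [
    {"created": datetime(2024, 1, 5, 9), "merged": False},  # fresh, fine
    {"created": datetime(2024, 1, 2, 9), "merged": False},  # stale: > 1 day old
    {"created": datetime(2024, 1, 4, 9), "merged": True},   # already merged
]
print(branch_health(branches, now))
```

A team could run a check like this against its version control metadata to spot long-lived branches before integration pain sets in.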
27%
we agree that working on short-lived branches that are merged into trunk at least daily is consistent with commonly accepted continuous integration practices.