Kindle Notes & Highlights
Common traps included trying a top-down mandate to adopt Agile, thinking it was one size fits all, not focusing on measurement (or the right things to measure), leadership behavior not changing, and treating the transformation like a program instead of creating a learning organization (which is never done).
outcome-based team structures,
Senior leaders need to demonstrate their commitment to creating a learning organization.
Improvement Is Possible for Everyone

Our quest to understand how to measure and improve software delivery was full of insights and surprises. The moral of the story, borne out in the data, is this: improvements in software delivery are possible for every team and in every company, as long as leadership provides consistent support—including time, actions, and resources—demonstrating a true commitment to improvement, and as long as team members commit themselves to the work.
we included additional questions to help us understand how technical practices influence human capital: employee Net Promoter Score (eNPS) and work identity—a factor that is likely to decrease burnout. These were our research questions:
using small teams that work in short cycles and measure feedback from users to build products and services that delight their customers and rapidly
To remain competitive and excel in the market, organizations must accelerate: delivery of goods and services to delight their customers; engagement with the market to detect and understand customer demand; anticipation of compliance and regulatory changes that impact their systems; and response to potential risks such as security threats or changes in the economy.
The key to successful change is measuring and understanding the right things with a focus on capabilities—not on maturity.
technology transformations should follow a continuous improvement paradigm.
A successful measure of performance should have two key characteristics. First, it should focus on a global outcome to ensure teams aren’t pitted against each other. The classic example is rewarding developers for throughput
measure should focus on outcomes not output:
In our search for measures of delivery performance that meet these criteria, we settled on four: delivery lead time, deployment frequency, time to restore service, and change fail rate.
Lead time is the time it takes to go from a customer making a request to the request being satisfied.
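The four measures are concrete enough to compute once deployments and incidents are recorded. The following is a minimal Python sketch, not from the book, that derives them from hypothetical records; the field names, the sample data, and the 30-day window are assumptions for illustration, and lead time here covers only the delivery portion (code committed to code running in production).

```python
# Minimal sketch (illustrative, not from the book): compute the four delivery
# performance measures from hypothetical deployment and incident records.
from datetime import datetime
from statistics import median

# Hypothetical data: each deploy records commit time, deploy time, and whether
# it degraded service; each incident records start and restoration times.
deploys = [
    {"committed": datetime(2024, 5, 1, 9, 0),  "deployed": datetime(2024, 5, 1, 10, 30), "failed": False},
    {"committed": datetime(2024, 5, 2, 14, 0), "deployed": datetime(2024, 5, 2, 14, 45), "failed": True},
    {"committed": datetime(2024, 5, 3, 11, 0), "deployed": datetime(2024, 5, 3, 11, 20), "failed": False},
]
incidents = [
    {"started": datetime(2024, 5, 2, 15, 0), "restored": datetime(2024, 5, 2, 15, 40)},
]
window_days = 30  # assumed measurement window

# Lead time for changes (delivery portion): commit to running in production.
lead_time = median(d["deployed"] - d["committed"] for d in deploys)

# Deployment frequency: deployments per day over the window.
deployment_frequency = len(deploys) / window_days

# Time to restore service: incident start to service restored.
time_to_restore = median(i["restored"] - i["started"] for i in incidents)

# Change fail rate: share of deployments that degraded service.
change_fail_rate = sum(d["failed"] for d in deploys) / len(deploys)

print(lead_time, deployment_frequency, time_to_restore, change_fail_rate)
```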
Reducing batch size is another central element of the Lean paradigm—
Table 2.2 Software Delivery Performance for 2016

Deployment Frequency
  High performers: On demand (multiple deploys per day)
  Medium performers: Between once per week and once per month
  Low performers: Between once per month and once every six months

Lead Time for Changes
  High performers: Less than one hour
  Medium performers: Between one week and one month
  Low performers: Between one month and six months

MTTR
  High performers: Less than one hour
  Medium performers: Less than one day
  Low performers: Less than one day*

Change Failure Rate
  High performers: 0-15%
  Medium performers: 31-45%
  Low performers: 16-30%

Table 2.3 Software Delivery Performance for 2017

Deployment Frequency
  High performers: On demand (multiple deploys per day)
  Medium performers: Between once per week and once per month
  Low performers: Between once per week and once per month*

Lead Time for Changes
  High performers: Less than one hour
  Medium performers: Between one week and one month
  Low performers: Between one week and one month*

MTTR
  High performers: Less than one hour
  Medium performers: Less than o… (remainder of the highlight truncated)
Astonishingly, these results demonstrate that there is no tradeoff between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures.
The DevOps mantra of continuous improvement is both exciting and real, pushing companies to be their best, and leaving behind those who do not improve.
medium performers do worse than low performers on change fail rate
medium performers are working along their technology transformation journey and dealing with the challenges that come from large-scale rearchitecture work, such as transitioning legacy code bases.
we found that medium performers spend more time on unplanned rework than low performers—because they report spending a greater proportion of time on new work.
We believe this new work could be occurring at the expense of ignoring critical rework, thus racking up technical debt, which in turn leads to more fragile system...
The fact that software delivery performance matters provides a strong argument against outsourcing the development of software that is strategic to your business, and instead bringing this capability into the core of your organization.
As Deming said, “whenever there is fear, you get the wrong numbers.”
Westrum's Typology of Organizational Culture

Pathological (Power-Oriented)
  Low cooperation
  Messengers “shot”
  Responsibilities shirked
  Bridging discouraged
  Failure leads to scapegoating
  Novelty crushed

Bureaucratic (Rule-Oriented)
  Modest cooperation
  Messengers neglected
  Narrow responsibilities
  Bridging tolerated
  Failure leads to justice
  Novelty leads to problems

Generative (Performance-Oriented)
  High cooperation
  Messengers trained
  Risks are shared
  Bridging encouraged
  Failure leads to inquiry
  Novelty implemented
an indicator of the level of organizational culture that prioritizes trust and collaboration in the team—to be both valid and reliable.
Westrum’s theory posits that organizations with better information flow function more effectively.
First, a good culture requires trust and cooperation between people across the organization, so it reflects the level of collaboration and trust inside the organization.
Second, better organizational culture can indicate higher quality decision-making.
However, when we tried to turn these four metrics into a construct, we ran into a problem: the four measures don’t pass all of the statistical tests of validity and reliability. Analysis showed that only lead time, release frequency, and time to restore together form a valid and reliable construct. Thus, in the rest of the book, when we talk about software delivery performance, it is defined using only the combination of those three metrics.
How organizations deal with failures or accidents is particularly instructive. Pathological organizations look for a “throat to choke”: Investigations aim to find the person or persons “responsible” for the problem, and then punish or blame them. But in complex adaptive systems, accidents are almost never the fault of a single person who saw clearly what was going to happen and then ran toward it or failed to act to prevent it. Rather, accidents typically emerge from a complex interplay of contributing factors. Failure in complex systems is, like other types of behavior in such systems,
Even though working in small chunks adds some overhead, it reaps enormous rewards by allowing us to avoid work that delivers zero or negative value for our organizations.
One important strategy to reduce the cost of pushing out changes is to take repetitive work that takes a long time, such as regression testing and software deployments, and invest in simplifying and automating this work.
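As a rough illustration of that strategy, the repeated “run the regression suite, then deploy” sequence can be collapsed into one scripted step so it is cheap to do often. This sketch is mine, not the book's; run_regression.sh and deploy.sh are hypothetical stand-ins for whatever a team currently runs by hand.

```python
# Illustrative sketch (not from the book): automate a repetitive
# test-then-deploy sequence as a single command.
import subprocess
import sys

def run(step_name: str, command: list[str]) -> None:
    """Run one pipeline step; abort the whole pipeline if it fails."""
    print(f"--> {step_name}")
    result = subprocess.run(command)
    if result.returncode != 0:
        sys.exit(f"{step_name} failed; aborting deployment.")

if __name__ == "__main__":
    run("regression tests", ["./run_regression.sh"])            # hypothetical script
    run("deploy to production", ["./deploy.sh", "production"])  # hypothetical script
    print("Change deployed.")
```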
A key objective for management is making the state of these system-level outcomes transparent, working with the rest of the organization to set measurable, achievable, time-bound goals for these outcomes, and then helping their teams work toward them.
Our application code is in a version control system. Our system configurations are in a version control system. Our application configurations are in a version control system. Our scripts for automating build and configuration are in a version control system.
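As one way to check a codebase against those statements, a team could verify that each kind of artifact is actually tracked in version control. The sketch below is my own, not from the book; the paths are hypothetical placeholders for a team's application code, system and application configuration, and automation scripts.

```python
# Illustrative sketch (not from the book): check that key artifacts are
# tracked in Git using `git ls-files --error-unmatch`, which exits non-zero
# when a path is not under version control.
import subprocess

ARTIFACTS = [
    "src/",             # application code (hypothetical path)
    "config/system/",   # system configuration (hypothetical path)
    "config/app.yaml",  # application configuration (hypothetical path)
    "scripts/build.sh", # build/configuration automation scripts (hypothetical path)
]

for path in ARTIFACTS:
    result = subprocess.run(
        ["git", "ls-files", "--error-unmatch", path],
        capture_output=True,
    )
    status = "tracked" if result.returncode == 0 else "NOT in version control"
    print(f"{path}: {status}")
```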