Kindle Notes & Highlights
Read: November 3, 2019
The principles make it clear that those four values have consequences beyond their “Mom and apple pie” connotation.
Anyone, even a manager, can predict that next week the team will get about 45 points done. Over the next ten weeks, they ought to get about 450 points done. That’s power! It’s especially powerful if the managers and the team have a good feel for the number of points in the project. In fact, good Agile teams capture that information on yet another graph on the wall.
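The projection described here is simple arithmetic: average the recent iteration velocities and divide the remaining points by that average. A minimal sketch, assuming a list of recent per-iteration point totals (all names here are illustrative, not from the book):

```python
import math

def average_velocity(velocities):
    """Average points completed per iteration."""
    return sum(velocities) / len(velocities)

def iterations_remaining(points_left, velocities):
    """Rough forecast: iterations until the remaining points are done.

    Rounded up, because a partial iteration still has to happen.
    """
    return math.ceil(points_left / average_velocity(velocities))

# A team averaging about 45 points per iteration, 450 points to go:
velocities = [42, 47, 45, 46, 44]
print(average_velocity(velocities))            # 44.8
print(iterations_remaining(450, velocities))   # 11
```

This is deliberately naive: it assumes future velocity resembles past velocity, which is exactly the feedback the wall charts are meant to check.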
One of the driving motivations for Agile software development is to provide the data that managers need to decide how to set the coefficients on the Iron Cross and drive the project to the best possible outcome.
Agile development is first and foremost a feedback-driven approach.
Sprint is the term used in Scrum. I dislike the term because it implies running as fast as possible. A software project is a marathon, and you don’t want to sprint in a marathon.
Some folks take this to mean that Agile is just a series of mini-Waterfalls. That is not the case. Iterations are not subdivided into three sections. Analysis is not done solely at the start of the iteration, nor is the end of the iteration solely implementation. Rather, the activities of requirements analysis, architecture, design, and implementation are continuous throughout the iteration.
And Agile is a way to provide an early and continuous dose of cold, hard reality as a replacement for hope.
Some folks think that Agile is about going fast. It’s not. It’s never been about going fast. Agile is about knowing, as early as possible, just how screwed we are.
The best possible outcome may be very disappointing to the stakeholders who originally commissioned the project.
But the best possible outcome is, by definition, the best they are going to get.
If the system is technically ready to deploy at the end of every iteration, then deployment is a business decision, not a technical decision. The business may decide there aren’t enough features to deploy, or they may decide to delay deployment for market reasons or training reasons. In any case, the system quality meets the technical bar for deployability.
I’ve got some news for you, sunshine. If a change to the requirements breaks your architecture, then your architecture sucks.
So how do you eliminate that fear? Imagine that you own a button that controls two lights: one red, the other green. Imagine that when you push this button, the green light is lit if the system works, and the red light is lit if the system is broken. Imagine that pushing that button and getting the result takes just a few seconds. How often would you push that button? You’d never stop. You’d push that button all the time. Whenever you made any change to the code, you’d push that button to make sure you hadn’t broken anything.
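The "button" in this metaphor is a fast automated test suite: run it after every change, and green means the system still works. A minimal sketch using Python's built-in unittest module (the `apply_discount` function and its tests are hypothetical, invented for illustration):

```python
import unittest

# Hypothetical production code under test.
def apply_discount(price, percent):
    """Return the price reduced by the given percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

class DiscountTests(unittest.TestCase):
    """Pushing the 'button' = running this suite. Green light = all pass."""

    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_leaves_price_alone(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    # Runs in seconds and reports green (OK) or red (FAILED).
    unittest.main(exit=False)
```

The point is not the assertions themselves but the feedback loop: because the suite answers in seconds, there is no reason not to "push the button" after every change.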
No! Iterations do not fail. The purpose of an iteration is to generate data for managers. It would be nice if the iteration also generated working code, but even when it doesn't, it has still generated data.
This is the practice of Acceptance Tests. The practice says that, to the degree practicable, the requirements of the system should be written as automated tests.
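One common way to write a requirement as an automated test is Given/When/Then style. A minimal sketch in plain Python for an invented requirement, "a withdrawal may not overdraw the account" (the `Account` class and exception are illustrative assumptions, not from the book):

```python
# The requirement "a withdrawal may not overdraw the account",
# written as an executable acceptance test.

class InsufficientFunds(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise InsufficientFunds()
        self.balance -= amount

def test_withdrawal_may_not_overdraw_account():
    # Given an account holding 100
    account = Account(balance=100)
    # When a withdrawal of 150 is attempted
    try:
        account.withdraw(150)
        refused = False
    except InsufficientFunds:
        refused = True
    # Then the withdrawal is refused and the balance is unchanged
    assert refused
    assert account.balance == 100

test_withdrawal_may_not_overdraw_account()
```

Written this way, the requirement is unambiguous: the system either passes the test or it doesn't, which is what lets QA move to the beginning of the process rather than the end.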
Moving QA to the beginning and automating the tests solves another huge problem. When QA operates manually at the end, they are the bottleneck. They must finish their work before the system can be deployed. Impatient managers and stakeholders lean on QA to finish up so that the system can be deployed.
“Principles without practices are empty shells, whereas practices without principles tend to be implemented by rote, without judgement.”