David Scott Bernstein's Blog, page 11
April 17, 2019
Measure Efficiency of Feedback Loops
All of the things that I suggest we measure on software development teams, in this series of seven blog posts as well as elsewhere in my writing, are ways of improving the efficiency of our software development process. We measure to understand and improve.
Agile is all about getting and learning from feedback and there are several feedback systems in Agile that we can draw upon.
Perhaps the most obvious form of feedback is the feedback that we get from our customers and users as we're building software, through our iteration demos as well as product releases. This is oftentimes where we learn the truth of our work and how it impacts our users. It's easy to delude ourselves and think that we're building exactly what is needed as we're building it, but sometimes reality sets in.
Sometimes, when we deliver the software that we’ve built to our users we find it isn’t what they were expecting. Language is a blunt instrument and describing a feature in English doesn’t give enough fidelity for developers to know exactly how the feature should be implemented and behave. As a result, it’s not uncommon to deliver a feature and have the customer say, “That’s not what I asked for. That’s not what I wanted.”
There are other feedback loops that are also important in Agile. Code reviews and iteration demos are an important way that the team gets feedback on what they’re building.
But perhaps the most important feedback that we get at the team level happens during retrospectives. Retrospectives are a vitally important part of the Agile software development process, and when we get good feedback from our retrospectives that we can act on, the team can improve very quickly. It's better to make minor improvements regularly than to attempt huge improvements that may not actually stick. Retrospectives are a way of helping the team identify what small improvements can be made and to check in and measure that progress.
Agile is all about feedback and getting feedback at many levels. For me as a developer, the most important feedback loop that I engage in is the red/green/refactor loop of test-driven development and my continuous integration server.
I run this loop constantly, many times every hour when I'm developing, so the efficiency of this loop is critically important for my personal productivity as a developer. I can run a local build and execute all of the unit tests in typically just a few seconds. When my local build succeeds, it is automatically promoted to my build server, which can often run in just a few minutes and tell me if there's any conflict between the work I've just done and the rest of the system.
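As a small illustration of what one pass through this loop can look like (the function and test names here are hypothetical, and I'm using Python's built-in unittest purely as an example):

```python
import unittest

def apply_discount(price, percent):
    # Green: the simplest implementation that makes the tests pass.
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Red: these tests are written first and fail until the
    # behavior above exists.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

if __name__ == "__main__":
    # Refactor: with the suite green, we can safely clean up names
    # and structure, rerunning the tests after each small change.
    unittest.main()
```

A suite like this runs in well under a second locally, which is exactly what makes it practical to run many times an hour.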
This is the feedback that I find most valuable as a software developer engaged in Agile software development practices. The build is our eyes and our ears that allow us to pay attention to the health of our code and this is why I feel that the build is one of the most important feedback loops for Agile software development.
Frequent feedback is generally better than infrequent feedback. Clear feedback is generally better than vague feedback. Feedback that we can act on is far more valuable than feedback on problems that cannot be resolved. The quality and efficiency of our feedback loops are the critical factors in making our processes work efficiently.
Finding out about a defect immediately when I create it, and being able to fix it in a matter of seconds rather than having it slip into production and become a major crisis, is a huge value of these practices. Knowing that I always have a buildable system is also hugely valuable for letting me see the real progress that I make as I build a system.
April 10, 2019
Measure Costs of Not Delivering Features
In my last blog post, called Measure Customer Value of Features, I discuss the importance of seeing the value of features from the customer's perspective. But there is another perspective that the customer can sometimes have that is also important for us to see: the cost of not delivering a feature.
Sometimes we’re in time critical situations where we’re trying to beat our competition and get a feature out before they do or we’re trying to address a new market that’s exploding and the longer we delay the less opportunity we have.
Time-critical situations like this can be very hard for a business to evaluate because there are opportunity costs as well as production costs. Markets are fickle, and in some markets being the first to market can make all the difference. This isn't always true, and in many markets being the highest quality, even if you're not first to market, can give you the edge that allows you to ultimately dominate that market.
Opportunity cost can be real and in some situations can be measurable. Under these situations, I find that measuring these opportunity costs can be really helpful to management for making decisions.
Ironically, the one cost that I often find managers not noticing is their largest fixed cost: the cost of the team itself. If you have a team of 24 developers in your organization and their average annual salary is, for the sake of discussion, somewhere on the order of $100,000, then with benefits, overhead, and all the other costs related to having an employee, you are probably spending $200,000 a year on that employee. For a group of 24 developers, that works out to roughly $100,000 a week, every week, or about five million dollars a year. That's your burn rate.
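Here's that back-of-the-envelope arithmetic spelled out (the figures are the illustrative ones from above, not real data; the exact weekly number comes out just under $100,000):

```python
# Back-of-the-envelope burn rate for a development team.
# These figures are illustrative, matching the example above.
developers = 24
average_salary = 100_000                # annual salary per developer
fully_loaded_cost = 2 * average_salary  # benefits, overhead, etc.

annual_burn = developers * fully_loaded_cost
weekly_burn = annual_burn / 52

print(f"Annual burn rate: ${annual_burn:,.0f}")  # about $4.8 million
print(f"Weekly burn rate: ${weekly_burn:,.0f}")  # roughly $92,000 a week
```

The point isn't the precise number; it's that a team of this size costs on the order of $100,000 every single week, whether it ships value or not.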
The question is, are we creating at least that much value? Ideally, we would like to be creating ten times that value from the work we're doing. Running a development team is expensive, but for many teams, losing opportunities can be far more expensive. Just a few years ago I was working with a client and they saw an opportunity to modify some of their systems and capture a new market. The following year, one of the developers told me that they had made over $100 million by introducing just that one new product over the course of the year.
In some situations, measuring the cost of not delivering a feature can have a big impact on the decisions management makes about what features to build next. This is important because the metrics that we track should be used to influence our decisions.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
April 3, 2019
Measure Customer Value of Features
I always say that doing the right thing is far more important than doing the thing right, if we have to choose only one of them. Of course, we don't; we can choose to do both, and that's when software development is at its best.
It doesn’t matter how elegant our code is or how beautifully designed a feature is if the customer can’t use that feature or if it doesn’t address their needs. We got to give the customer what they need and deliver it so that it’s stable an extensible as well as independently testable.
But how do we know that we’re hitting the mark with our customers for the features that we’re building? How do we know that our customers are gaining maximum value from the features that we build for them? The answer is, of course, that we measure it.
Measuring the value of features to customers involves both subjective and objective indicators. How customers feel about features is just as important as how productive they are when using those features. If customers don’t feel productive with the features then they won’t use them.
If you want to know whether a user likes a feature, just ask them. Users are often quite eager to let us know what they think of the software they use, especially if there's room for improvement. Surveys are a great way to measure what our customers think of the software we give them. Also, because our software interacts with the user, we can see with relatively little effort how our programs are being used by building telemetry into our software. We can do this anonymously so we're not violating any privacy agreements, and seeing how our software is being used can oftentimes give us great insight for improving usability, which can make our customers even happier.
This could be as simple as tracking the number of times a feature is invoked through a menu and then comparing the results at the end of the quarter or the end of the year to see which features got the most usage and which got the least. There are many opportunities to get feedback from our customers on the value of what we deliver, and that feedback is of paramount importance if we are to create new features that continue to address the needs of our users.
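The simplest version of this kind of telemetry can be little more than a counter keyed by feature name; here's a sketch (the feature names and the record_feature_use helper are hypothetical):

```python
from collections import Counter

# A minimal, anonymous feature-usage counter: we record *which*
# feature was invoked, never *who* invoked it.
usage = Counter()

def record_feature_use(feature_name):
    usage[feature_name] += 1

# Each menu handler would call this when its feature is invoked:
record_feature_use("export_pdf")
record_feature_use("export_pdf")
record_feature_use("spell_check")

# At the end of the quarter, rank features from most to least used.
for feature, count in usage.most_common():
    print(feature, count)
```

In a real product these counts would be batched and sent to a server rather than printed, but the principle is the same: count invocations, not identities.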
Understanding what our customers want can often be challenging for many software companies. We assume that they want the things we want, but very often the customers of a product see it very differently than its creators do, so staying in touch with what the customer sees and experiences is a very healthy thing for the team if we are to continue to address the needs of our users.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
March 27, 2019
Measure Time to Detect Defects
When I was young I read a study by TRW done in the '60s that measured the cost of fixing a defect detected at different points in the development cycle. If the developer who wrote the defect found it immediately after writing it, we can assign one unit of effort for resolving that defect. By that measure, it takes something like seven units of effort to resolve the defect during the testing phase, 15 units of effort to resolve it just before release, and a whopping 67 units of effort to resolve it if it makes it into the hands of the customer. In other words, the cost of fixing a defect grows exponentially with the time between when the defect was created and when it was resolved.
It’s always cheaper to fix defects sooner. This is because the longer we wait from when a defect was created the less we are familiar with it. The vast majority of time and effort in debugging isn’t involved in fixing the defect. Some defects do require a significant amount of reengineering but most defects are small problems that can be quickly and easily fixed. What can’t be done quickly and easily often is finding of the defect in the first place. Much of the time, finding a defect is where we spend most of our effort and once we locate the defect it’s usually trivial to fix.
My friend Llewelyn Falco likes to put it this way: being told that there's a defect in a program is like being told that there's a misspelled word in the dictionary. Fixing a misspelled word is easy, but finding it in the dictionary can be tedious and time-consuming. This is what debugging is often like, so having ways to help us find defects and make them more findable can be a great asset to a program.
It is almost universally true that the sooner we find defects, the more straightforward and cost-effective it is to resolve them. Because of this, I pay a lot of attention to finding defects as early as possible. I find that having a good suite of unit tests can help immensely with this, and it's one of the reasons I'm a big advocate of test-first development: it gives us a set of regression tests that we can use to find defects and validate that our software is working as we expected.
Of course, regardless of how quickly you find defects, it’s even cheaper if you don’t create them in the first place and I find that doing test-first development helps me eliminate a huge range of defects that might show up in my code otherwise. These range from fat-finger typing to conceptual and logical mistakes.
My unit tests are my sanity check, and I find that I can build far more stable and dependable code with far less effort using test-driven development. Like any discipline, TDD requires skill, and it's very possible to do it incorrectly. Doing TDD incorrectly has very little value, just like doing any activity incorrectly has very little value. But when we do test-first development well, building good, unique, implementation-independent tests that support refactoring and validate that our features work as we expect, it gives us a tremendous amount of confidence in our code and allows us to work fast.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
March 20, 2019
Measure Defect Density
One of the things that you're not going to find in this seven-blog-post series on measuring the software development process is measuring velocity. I hate velocity because I've seen it misdirect managers and team members far more often than I've seen it provide valuable information.
Rather than spend time teaching teams about story points and velocity, I'd much rather show teams how to break features down into small, homogeneous tasks that can each be completed in less than four hours. I know that some work doesn't lend itself to being done in small chunks, but in most situations I've been able to find ways of breaking tasks down, and I find it very valuable when I do.
If every task is a small, manageable piece of work that can be done in less than four hours, then we don't have to assign story points to tasks because everything is small. Instead we ask how many tasks we can do in this iteration, and very likely that number will be similar to the number of tasks completed in the last iteration, or perhaps the one before that. This is usually more than enough fidelity to get a sense of how much work the team can accomplish in the present iteration.
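A sketch of that forecasting approach, using hypothetical task counts and a simple average of recent iterations:

```python
# History of completed task counts per iteration, where every task
# was sized at under four hours. These numbers are hypothetical.
completed_per_iteration = [31, 28, 33, 30]

def forecast_capacity(history, window=3):
    # Forecast the next iteration's capacity as the average of the
    # last few iterations' completed task counts.
    recent = history[-window:]
    return round(sum(recent) / len(recent))

print(forecast_capacity(completed_per_iteration))  # 30
```

No story points needed: because all tasks are roughly the same small size, a plain count carries the same forecasting information velocity was meant to provide.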
Story points and velocity were designed to be fine-tuned for accuracy over short time increments. As a result, velocity cannot be compared across different teams, and it can't even be used to compare the performance of the same team over different time periods, because the very meaning of a story point shifts for a team through time. The team may still estimate that they can finish a hundred story points in an iteration, but the amount of work that a hundred story points represents today will be very different than the amount it represented six months ago for the same team.
This actually improves accuracy in the short term, but it means that velocity can't be used for measuring productivity through time or across teams, which is why we like to say that velocity is not a productivity measurement but a capacity measurement. Rather than dealing with all the caveats and addenda related to velocity, let's just throw it out and stop tracking it.
But there is one statistic that I do track across different teams and through time, because I find that it scales well in both dimensions. That metric is defect density.
I define a defect (or error, or bug) as an incident or problem that escaped into production. Some teams track internal errors, and if there is a separate quality assurance phase this can be helpful for measuring the effectiveness of the QA effort, but often I opt to look only at defects that escape to production. I rarely look at the severity of a defect, and treat all defects as equal.
When comparing defects across product lines or across teams, it may make sense to look at the impact of defects or just the raw number of defects, but often I find the fairest way to compare is to look at defect density, which I define as the number of defects per thousand lines of code. This helps normalize comparisons between small projects and very large ones.
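As a quick illustration of that normalization (the project sizes and defect counts here are made up):

```python
# Defect density: escaped defects per thousand lines of code (KLOC).
def defect_density(defects, lines_of_code):
    return defects / (lines_of_code / 1000)

# A small project with few defects vs. a large project with many:
# the raw counts favor the small project, but density tells a
# different story.
small = defect_density(defects=12, lines_of_code=40_000)
large = defect_density(defects=90, lines_of_code=600_000)

print(f"Small project: {small:.2f} defects/KLOC")  # 0.30
print(f"Large project: {large:.2f} defects/KLOC")  # 0.15
```

Here the large project produced many more defects in absolute terms yet has half the defect density, which is exactly the kind of comparison raw counts would get wrong.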
Of course, defect density is a powerful measure of the effectiveness of a software development process. When bugs consistently escape to production, it tells us that something is seriously wrong in our software development process. Teams who adopt test-driven development and the other Extreme Programming practices often see a huge drop in defect density. Defects are expensive, so a drop in defect density also drops costs. Reducing defect density can be a big and early win for teams adopting Extreme Programming practices.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
March 13, 2019
Measure Time Spent Coding
One of the most important metrics for the effectiveness of a software development team is one that I often find managers pay little attention to. The metric is how much time developers actually spend coding.
If our goal as software developers is to produce features in software, then our process has to support us by allowing us time to create those features. But oftentimes I find that writing software is just a part of the software developer's duties in many companies. Developers are also responsible for attending meetings, writing test plans, interpreting requirements, and dozens of other duties. Some developers I know spend so much time in meetings that they can't get their work done and have to code during their lunch hour because the rest of their day is packed with other responsibilities.
It only makes sense that if you're not given much time to write software, you won't write much software. The other thing I find, which is almost universally true, is that most software developers would prefer to be writing software than doing just about anything else. Writing code is what we do. It's how we create value. And we know it. Forcing a developer to go to work but not write code is like keeping a racehorse in the stables all day. We want to get out and flex our muscles.
This is one of the main reasons that I am an advocate for Extreme Programming practices. Virtually all of the Extreme Programming practices resolve to some form of writing software. One of the central practices of XP is test-driven development, which is not a testing activity but really a development activity. It's writing software in the form of tests.
Doing TDD greatly reduces the amount of other, non-coding work that developers have to do. Unit tests call software the way it is intended to be used, so we find there is much less need to write internal programmer documentation for the code we're developing. And that's great, because who likes writing internal programmer documentation? Very few of us.
We also find that when we have unit tests supporting us as we develop software, we make far fewer mistakes, so we're spending less time debugging code and more time writing code. We developers like that as well.
When software developers recognize that doing test-first development has them doing more development and fewer activities that aren't coding, they tend to get really on board with doing TDD on projects.
So, how much time should developers spend writing software every day? In some ways this is an individual decision, but I think it's very unrealistic to assume that an average software developer will write code for eight hours a day, five days a week. That is an unrealistic expectation for managers to place on developers, or for developers to place on themselves. Writers don't write starting at 8 AM and ending at 5 PM every day with a break for lunch in between. Some days the ideas flow and other days they don't.
If I get four good hours of focused work done a day, I'm really happy. Of course, I'm an old guy, and when I was younger I could put in eight-, ten-, or even twelve-hour days on occasion. I don't think many of us could do this consistently without burning out, but like many young people I could burn the midnight oil for several days in a row before I needed a break.
Today, I believe that if we want to have a true industry around developing software, then it has to be sustainable. Overtime and long hours can't be the norm or we will burn out. We have to treat our profession like a real profession. Additionally, study after study has shown that when you make developers work long, hard hours, the number of bugs they produce increases exponentially, which ultimately slows the project down even more.
I generally believe that it makes more sense to measure our software development process rather than individuals, because very often it's our process that has the most room for improvement. When I'm looking to measure the efficiency of a software development process, the very first place I start is by asking how much quality time developers actually spend writing code. I find that by increasing the quality or the quantity of that time, we can pick a lot of low-hanging fruit for improving the efficiency of a software development process.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
March 6, 2019
Measure Time-to-Value
I am especially excited to share with you the next seven blog posts, which are based on seven strategies for measuring value in software from my book Beyond Legacy Code: Nine Practices to Extend the Life (and Value) Of Your Software. You can find the original post that summarizes the seven strategies here: https://tobeagile.com/2014/10/08/seven-strategies-for-measuring-value-in-software/.
I’m often asked what are the most valuable metrics to track on a team. In my experience, teams either don’t track any metrics or they track way too many metrics and wind up getting overwhelmed with the data.
I find that it’s best just to track 2 to 4 metrics, but which metrics you track really depends upon where your focus is and where your development process is at.
In these next seven blog posts I will propose seven different metrics to track, but you don't have to track them all, and if you try, it can be an awfully big burden. Instead, pick a couple that make sense and go with those.
The first metric that I'd like to discuss is, I feel, the most important. If you're going to track something, track this, because it is the most representative of the value that a software development team creates. It's where you'll see the biggest impact from improvements.
The metric that I’m referring to is time-to-value. What I mean by time-to-value is how long does it take from the time a feature is requested to when that feature is delivered and the customer is deriving value from it. For some teams it makes more sense to measure time-to-value as how long did it take from the time we started to develop the feature until the customer was deriving value from it. The former way of measuring time-to-value includes the time it takes for a request in the system two be started. Some organizations find this valuable to track while others have inflexible requirements processes and they’re more interested in focusing on just the development process so they measure time-to-value based on when a feature is started in development to when the customer begins to derive value from it.
Time-to-value, regardless of how you define it, avoids the major trap in collecting performance data, which is local optimization. Optimizing just one part of our software development process is often useless in the long run if it doesn't speed up the entire process. If there are five sequential steps in a process and step three is optimized but step four is not designed to handle the additional flow from step three, then the overall workflow has not improved.
Local optimizations seem good on paper but end up providing very little value. I would say this is the number one trap I see managers fall into when they look at ways of improving their software development process. We have to find ways of improving the overall process in order to derive value from individual process improvements. Focusing on time-to-value keeps us focused on the right thing: the overall flow of our development process. When we find ways of improving this, we see big results.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Measuring Software Development.
February 27, 2019
Why Practice 2: Build in Small Batches
In a lot of ways, I think that Practice 2, Build in Small Batches, from Beyond Legacy Code is the core practice that made agile and Scrum great.
I was breaking down my software projects into smaller pieces as early as the 1980s. I didn't call it agile back then. In fact, I was a bit embarrassed by it. I had trouble holding the entire program in my head at once, so I broke it into little pieces and built each piece separately, integrating them as soon as possible as I went. This turned out to be a much safer way of building software, so I just stuck with it.
But there is an art to breaking down a large project into small pieces. You don't just chop things up randomly, of course. We need to decompose concepts so that they recompose easily later and come together without a lot of rework.
There is definitely an art to story breakdown as well as a science to it. When we build small, we get more opportunities for feedback. We can also build more decoupled components that are independently verifiable.
Building small is really what makes agile great, and it's why we time-box in agile. We have teams do two-week iterations because it forces them to build small units of behavior from start to finish so that customers and stakeholders can see steady progress.
The seven strategies I want to discuss from my book on Practice 2 are based on one of my favorite blog posts. It’s called Seven Strategies for Measuring Software Development and I often refer clients, students, and associates to this blog post for some of my favorite metrics for measuring progress on software development teams.
But before we dive into these metrics, I want to say that it's easy to overdo it, and we always have to be careful when using statistics to analyze complex systems like teams or a software development process.
At best, metrics are indicators, and in addition to the metrics themselves we have to look very closely at how collecting them incentivizes us to do certain things and disincentivizes us from doing others. As long as these incentives are in alignment with our greater purpose, there's no conflict, but too often when they don't match up, we get into trouble. My advice is to start with just a few metrics rather than trying to track a lot of them. Look at the ones that will affect you the most.
One metric that you won't find me recommending you track is velocity. I think velocity focuses us on the wrong things. Rather than estimating work, I prefer to work through a few concrete examples of the features we want to build so that we get clear on exactly what we want each feature to do and can address any questions right away.
So, with those caveats, I present to you the next seven blog posts based on Seven Strategies for Measuring Software Development from my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software. Enjoy.
February 20, 2019
Support Refactoring
I’d like to conclude this series of
blog posts on Seven Strategies for Product Owners with a final strategy that
often gets overlooked on development teams but is vitally important: support refactoring.
Sometimes Product Owners resist the
team’s desire to refactor because they don’t get any new features after
refactoring but often times refactoring is the fastest way to actually get new
features in because a clean codebase is far simpler to work with than a messy
one.
Just think of your kitchen. Is it easier to make a five-course meal in a messy kitchen or a clean one? This analogy holds true for code. The cleaner and more straightforward the design is, the faster it is to make changes to it.
I have had the privilege of working with some of the top software development teams in the world, and I can tell you that one of the key characteristics of all of them is that they pay attention to the quality of their code and constantly refactor, both in the small and in the large.
Software developers should be refactoring in the small all the time. What I mean by that is that as we build a feature, before we release it we should make sure it's supportable. If we're doing test-first development, there's a step in the process for refactoring our code.
In test-first development we build software in three distinct, tiny steps: first writing a test for the behavior we want to create, then implementing that behavior to make the test pass, and finally refactoring the code and the tests to make them supportable. This includes coming up with good names for the behaviors we're building, introducing design patterns when appropriate, and making our logic clear.
One of the things that I love about doing test-first development is that I'm constantly paying attention to refactoring my code and keeping it clean as I move through the red-green-refactor cycle throughout my day.
But refactoring in the small is not always enough, and every once in a while I still have to do a larger refactoring. This is in order to, as Ward Cunningham puts it, incorporate our learning back into the code.
We don’t learn about our system in
tiny increments but rather in chunks and we want to have our code reflect our
understanding of the system. So, every 3 to 6 months or so I end up taking time
to refactor my code and incorporate some of the major learnings that I’ve had
back into the code. Sometimes this happens more frequently, like weekly or even
daily, depending upon how much I’m learning.
But I don’t encourage refactoring
code indiscriminately because we’re far too busy for that. My criterion for refactoring
code is typically when I have to go back into it in order to do something with
it, such as extending it or fixing a bug. At those times it makes sense to
clean my code up before starting to dig in and tear it apart to make it do
something that it didn’t do before. This is when I find refactoring most
valuable and also most cost-effective.
Scrum has the Golden Triangle of the Product Owner, the ScrumMaster, and the team. The Product Owner is an advocate for the product. The ScrumMaster is an advocate for the team and the health of the team. The team and the Product Owner must be advocates for the health of the product itself, and part of the product's health involves regular refactoring. Everything else in the known universe requires maintenance, so why shouldn't software?
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Product Owners.
February 13, 2019
Remove Dependencies
One key characteristic of great Product Owners is that they help remove dependencies whenever the team encounters them.
Dependencies can show up in many different forms. Our team may need some code from another team in our company, but the other team hasn't built it yet. Or one feature we have to build may depend on another feature that needs to be built first. Product Owners have to take these kinds of details into account because they are the ones responsible for defining the next pieces of work to be done, and that work has to be ready to be consumed, which means that all dependencies have to be resolved by the time it goes into an iteration.
In many ways I think of this as a management role that I take on as the Product Owner, but the Product Owner's responsibilities are more about removing technical dependencies, whereas the ScrumMaster's responsibilities are more about removing team impediments that relate to physical dependencies.
The ScrumMaster, for example, would be more responsible for helping a team member get a needed computer to do their work or office space to work in, whereas a Product Owner might be more focused on helping a team member resolve an internal or product dependency.
Some Product Owners are technical and can help team members with dependencies on libraries and frameworks. Other Product Owners are non-technical and can help team members understand their users' needs more fully and clearly. It's not uncommon for different parts of the system to have different levels of dependencies, and identifying these up front can often help us resolve issues more quickly.
I like to say that my job as a Product Owner is to remove dependencies for the team and get out of their way. Good teams operate just fine on their own as long as they have the information and tools they need to do their jobs. My job as a Product Owner is to get them that information and those tools and then let them do their jobs.
The role of Product Owner doesn't require having been a software developer, but I find that software developers can make excellent Product Owners because they understand the needs of developers and so are able to address them more directly.
Being the Product Owner on a product that people use is kind of like being the writer of a famous screenplay. You get to be the puppet master. You define the rules of the game. Before the implementation or even the design, you get to provide the context.
A good Product Owner is the product champion. They understand WHAT the product is, WHO wants it and WHY it is wanted. They understand their product and are able to convey that understanding to the team.
A simple but highly effective way of building software is to start with the features most valuable to the user, because this allows us to focus our attention on the most important things first. However, some features depend on other features, and in situations like that we have to have a strategy for building those features in the way that is most efficient and effective. The shortest distance between two points is not always a straight line. Sometimes it makes sense to provide services that other features can use, but more often than not it's more efficient and effective to start by fulfilling a single need and only later redesign a service to be more general-purpose.
When building software, developers sometimes get lost in all the details, so having a Product Owner who holds the vision and keeps the big picture in mind can be helpful. Holding the product vision and making sure the team has everything they need to do their work effectively are vital parts of a Product Owner's job.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software called Seven Strategies for Product Owners.