David Scott Bernstein's Blog
July 11, 2018
The Agile Community
Have you ever stopped to consider how fortunate we are? We are part of a community of Agile practitioners and there are many benefits to being part of this community.
I am fortunate that I get to speak at many of the Agile conferences, including the big Agile Conference, the deliver:Agile Conference, Scrum Gatherings, Better Software Conference, and many others.
In my experience, these conferences are different from other industry conferences that I’ve attended. They’re different from product-oriented conferences like COMDEX or language- and framework-oriented conferences like JavaOne. We focus on improving and sharing techniques around software development, and that’s really exciting because it means that we’re still learning and growing a great deal as we continue to discover better ways of building software.
The software development community is really unique and special. Most of us are “makers” which means that we are fed by creating things, making things. We like learning new things and creating new things. We have this element of constantly learning and constantly discovering new and better ways of building software. This attracts a certain kind of individual to the field.
We are a relatively small community, especially in proportion to the amount of influence that we have in the world. All businesses are driven by software and so discovering better ways of creating it and maintaining it is of paramount importance to our society and serves a critical role for business.
Another key characteristic that I find in the Agile community is kindness. People are excited to share what they’ve learned and there’s a real sense of progress on the industry level. There are a lot of people in the Agile community that I can reach out to when I have questions or want to share ideas.
The other Agile consultants I know and I are constantly sharing ideas and new material with each other, and that is really exciting. We are wise enough to know that we aren’t competition for each other; our only real competition is ignorance. When we look at the tens of thousands of companies that could immediately benefit from the Agile movement but are unaware of it, then it makes a lot of sense to help each other. Helping each other is Agile, after all.
So, my sense of the Agile community is that it’s sincere. We really want to help and we believe that we can. Looking at other communities, I don’t always see that that’s possible. It’s hard to be a doctor or a lawyer these days and still really care, and I admire those who do while working within so many constraints.
I think that software and the software development profession is one of the few professions that still values creativity and allows us to keep growing and expanding and learning. I’m excited and honored to be part of this community and to help software development become a true discipline and a valuable profession.
July 4, 2018
Independence
Well, happy Independence Day.
Functional independence is one of the key characteristics of testable code so I thought it fitting to discuss today. We want to make each piece of our code as functionally independent as possible and make those pieces as small as possible. This is not always easy to do but most of the time it’s the right thing to do. Code that is functionally independent is more straightforward to verify and to extend.
But a lot of code is not functionally independent. A lot of code is intertwined with itself and nearly impossible to break apart. A lot of code depends on global state and therefore can be very difficult to test. Sometimes we have to use global state but if we can avoid it then we should. Again, good unit tests are functionally independent of each other so they should have no dependencies on anything external to the test itself.
The constraints around writing good functionally independent and testable code are the same constraints for building maintainable software. When code is functionally independent it’s far more straightforward to test and, because it injects its dependencies, new versions can inject new variations of dependencies and therefore extend the system in a very safe and effective way.
Independence in code means less concrete coupling and more abstract coupling. Much of this is about separating out what we do from how we do it. The features we present should provide what they do while exposing as few implementation details as possible, because the more we can hide, the freer we are in the future to change those implementation details without affecting the other parts of the code that depend on them.
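As a minimal sketch of this idea, here is one way abstract coupling and dependency injection might look. All of the names (Notifier, OrderService, and so on) are hypothetical, invented for illustration; the point is that the caller depends on the abstract "what" rather than the concrete "how", and a test double can be injected to verify behavior independently.

```python
from abc import ABC, abstractmethod

# Abstract coupling: callers know only the "what" (Notifier),
# never the "how" (email, SMS, etc.).
class Notifier(ABC):
    @abstractmethod
    def notify(self, message: str) -> None: ...

class EmailNotifier(Notifier):
    def notify(self, message: str) -> None:
        print(f"emailing: {message}")   # stand-in for a real email send

class FakeNotifier(Notifier):
    """A test double we can inject to test OrderService in isolation."""
    def __init__(self) -> None:
        self.sent: list[str] = []

    def notify(self, message: str) -> None:
        self.sent.append(message)

class OrderService:
    def __init__(self, notifier: Notifier) -> None:
        self.notifier = notifier          # the dependency is injected

    def place_order(self, item: str) -> None:
        self.notifier.notify(f"order placed: {item}")
```

Because OrderService only knows the abstract Notifier, a test can inject FakeNotifier and verify the behavior with no external dependencies at all.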
There are many ways to separate out the “what” from the “how” and much of the art of computer software development is based on this. It’s about how we abstract and think about problems and then how we model them in code so that they’re understandable to us as well as functional in a computer program.
Independence in software is a good thing because we want to compose complex behaviors from simpler behaviors, and we want to do this because each of the different layers can be tested independently. Small tests tend to be simpler tests, so we like to test things at the unit level rather than at the integration or system level. Unit tests also run a lot faster than integration or system tests.
Independence in software is a good thing, it’s a virtue. The more independent we make our code the more straightforward it is to verify and the fewer dependencies are required in order to test it. This helps us build code in functionally independent components and make software more standalone.
June 27, 2018
Introducing TDD to Managers
In my last blog post, I discussed different ways of introducing test-driven development to teams. The bottom line is to give developers an experience of benefiting from doing TDD. Once they see how their tests catch bugs that otherwise would have perhaps escaped and had to be fixed later, and how TDD helps them keep focused on building software that fulfills acceptance criteria, and gives them the opportunity to refactor and clean up their code, then developers begin to see the benefit for themselves and want to continue to do it.
But convincing some managers, especially those who really don’t understand what software development is all about, can be a challenge. I believe that constructing software is fundamentally different from constructing physical things. Software is unique, and very often the techniques we use to manage other kinds of projects don’t work for managing software projects. It takes a while to gain experience as a manager in the software industry.
Quality is important in virtually every field but quality in software and software construction is critically important. But we have to understand what quality means in the context of software. Reliability, security, and performance are the effects or result of deeper causes that come from good developer principles and practices at work.
Volumes have been written on programming and how to do it well. If I had to sum it up in one word that word would be testability. When we make software testable it is also of high quality and so the cost of maintaining and extending this kind of software is lower than untestable code.
When managers ask me what’s the most important thing to hold their developers to I always say testability. Have your developers write testable code because that code will be more cost-effective to maintain and extend. Given that industry spends five times more on software after it’s released than it initially cost to create, it’s clear that maintenance is very important. Since testable software costs less to maintain we want to encourage developers to write testable software.
The best way I’ve found to encourage developers to write testable software is to have them do test-first development. With this one practice, we encourage code that is not only of high quality but also independently verifiable. If we build good tests then we will also be building a reliable continuous integration system that will allow us to automate regression testing.
There is value in catching bugs early. There is value in having a reliable continuous integration server. There is value in being able to confidently make last-minute changes to code. These are the valuable things that we get from doing test-driven development and when managers see those benefits in action they often get on board with having their teams do test-driven development. But just like developers, managers have to see the value of these practices for themselves.
June 20, 2018
Introducing TDD to Teams
When managers find out that I teach test-driven development to software development teams, they sometimes ask me how they can introduce TDD to their team. Unfortunately, it’s not always easy.
We software developers are jaded. And for good reason. It seems like nearly every day we are hit with new technologies or methodologies that are going to save the industry and make our lives so much better. Most of these don’t pan out. The things that we discovered that do work for us are valuable and we’re not about to abandon them for something untested.
I find that we can discuss all the benefits and reasoning behind doing test-driven development, and we can all agree intellectually that it’s valuable, but still not be willing to implement it on our team. The only way that I’ve seen teams adopt test-driven development is when they experience the benefits of it for themselves, preferably together.
But learning TDD and experiencing its benefits requires the right kind of environment. Doing TDD with existing code that already carries a lot of technical debt adds another level of complexity. TDD can be very effective with legacy software, but that requires advanced techniques, so it’s always better to start learning on a greenfield project with a clean code base.
It always seems that new Scrum teams form and immediately want to hit the ground running, so they don’t take much time to think about technical practices or development standards. Learning new skills like TDD gets put off until later. But when later comes, we already have so much legacy code that it’s quite difficult to dig ourselves out of the technical debt we’ve accumulated.
I’ve seen this vicious cycle play out time and time again on Scrum teams and, of course, the solution to this challenge is to be prepared. Give teams the skills and the baseline knowledge for the kind of development that you want to have happen in your organization. If technical excellence is important to your organization then there should be some sort of consensus and common understanding of what that means and how that gets incorporated into the software being built.
I see TDD as a core software development practice, and many teams are discovering how important it is for correctly implementing continuous integration and becoming truly Agile. Test-driven development is perhaps the most advanced practice of Extreme Programming. It’s easy to do it in a less than optimal way, and there aren’t many good sources for learning how to do it well.
The best book I’ve found on the subject of TDD is the first book written on it, Kent Beck’s Test-Driven Development: By Example. It’s an excellent book, but no single book can cover all of TDD and how to do it properly. We need a dozen excellent books on the subject.
I try to share as much as I can about what I’ve learned that works for me in TDD on my blog but it’s hard to communicate an entire body of knowledge in a series of blog posts. We can go much deeper with three to five days of doing TDD in my developer classes.
One way to learn TDD is by mobbing where the whole team gets together and works on the same story using new techniques or approaches like test-driven development. Mobbing and pair programming are some of the most powerful ways that I know to propagate new practices throughout a team. Training and attending conferences can also help.
June 13, 2018
Code Coverage
Like most metrics in software development, code coverage can be a good indicator or it can be heavily abused.
I know many teams that have a code coverage standard. For example, 80% of their code must be covered by unit tests. The problem with having a standard for the percentage of code coverage is that not all code is straightforward to test and sometimes what happens is developers will write tests for the code that’s easy to cover, such as getters and setters, and leave complex computations that are harder to test uncovered.
I don’t believe in having a percentage of code coverage as a standard. I think that all code that we write should be covered by tests, because I see the unit tests that I write when I do TDD as a form of specifying the behavior of the system. My tests are executable specifications, and if that’s true then everything I would put in the spec should go in my tests. If my specification says that a name field has a getter and setter, then I also want to have tests for that getter and setter, not because I’m concerned that this code needs tests but because I think of my tests as specifications.
I am very concerned with building practices that are shareable so that we can build consensus across our industry. Seeing unit tests as a form of specification helps me get really clear on the number and kind of tests I need to write to specify any behavior. I find this approach extremely valuable, so I don’t mind taking it to its logical conclusion.
When doing TDD, if you always write the test first and always write implementation to make a failing test pass then by definition you will always have 100% code coverage. This is why I rarely use code coverage tools when I’m building code test first. However, I do use code coverage tools when I refactor my code to make sure that my code coverage doesn’t change. If, when I refactor my code, I see my coverage change then it tells me something is wrong. I either have dead code in my project or my tests are not doing what I think they’re doing.
I know that a lot of developers feel that writing tests for simple code like getters and setters is a waste of time, and if you feel that way then I’m not going to argue with you. As long as your getters and setters are simple and don’t contain any additional business rules, then you really don’t have to write tests for them. However, I still do because, again, I see my tests as specifications and I like the consistency of having 100% code coverage. I even have unit tests for the constants in my system. I want to have tests for anything in my code that could change the behavior of the system.
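Here is a minimal sketch of what such specification-style tests might look like. The Customer class and its names are hypothetical, invented for illustration; the point is that even a trivial accessor and a constant get a test, because the tests serve as the executable spec.

```python
# Hypothetical Customer class, used to illustrate tests-as-specifications.
class Customer:
    MAX_NAME_LENGTH = 40   # a constant the specification pins down

    def __init__(self) -> None:
        self._name = ""

    @property
    def name(self) -> str:
        return self._name

    @name.setter
    def name(self, value: str) -> None:
        self._name = value

# Even a trivial getter/setter gets a test, because the tests ARE the spec.
def test_name_has_getter_and_setter():
    customer = Customer()
    customer.name = "Ada"
    assert customer.name == "Ada"

# Pinning a constant means any change to it shows up as a failing spec.
def test_max_name_length_is_forty():
    assert Customer.MAX_NAME_LENGTH == 40

test_name_has_getter_and_setter()
test_max_name_length_is_forty()
```

Tests like these cost almost nothing to write, and together they read as a specification of the class rather than as a safety net for risky code.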
While I don’t rely heavily on code coverage tools when writing new code, I tend to rely very heavily on them when I’m working with legacy code. One of the first things I want to do when working with legacy code is to get that code under test so I can refactor it safely. In these situations, code coverage tools can be immensely valuable for showing me areas of the code that are safe and areas that are unsafe for refactoring.
So, code coverage tools can be valuable and they have their place, but they can also encourage developers to do the wrong thing. If you must have code coverage standards, then insist that complex code is always covered.
June 6, 2018
Make Your Test Fail First
The test-first development cycle means that first we write a failing test and prove that it fails by running it and seeing the red bar. Then we implement the test so that it passes and we see the green bar. Finally, we refactor the code and the test for quality and maintainability.
That’s the test-first development cycle. It’s pretty simple and straightforward, but sometimes people try to short-circuit it. They think that the goal of doing TDD is to get to the green bar, but this is actually not true. The goal, initially, of doing TDD is to turn the red bar green. The key distinction here is that before we get to the green bar we have to have the red bar.
Always starting with a failing test can seem like busywork when we’re first learning how to do TDD. After all, we know the test is going to fail initially because we haven’t yet written the code to make it pass. So what’s the purpose of making the test fail and seeing it fail?
Think about making the test fail and seeing it fail as your test of the test. Often, people ask me: if your tests verify your code, then what verifies your tests? The answer is actually twofold. The test tests the code and the code tests the test. This is very much like the kind of confirmation we get from double-entry bookkeeping, where each credit is offset by a debit. When code and tests agree on the same result, then it’s pretty likely that they are in alignment.
But there’s another test of the test that we do when doing TDD. It’s a very simple test but it’s an important one that we don’t want to overlook. We verify that the test can actually fail by making it fail first and then we prove that the test passes for the reason that we intend it to by implementing the behavior and seeing the test turn green.
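The cycle described above might look something like the following sketch. The leap_year function and its test are hypothetical, invented for illustration; the first stub exists only to prove the test can fail, and the second definition is the implementation that turns the bar green.

```python
# Red first: a deliberately unimplemented stub proves the test CAN fail.
def leap_year(year: int) -> bool:
    raise NotImplementedError   # running the test now shows the red bar

def test_2024_is_a_leap_year():
    assert leap_year(2024) is True

# Green: the implementation that turns the red bar green,
# for the reason we intended.
def leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

test_2024_is_a_leap_year()   # passes now that the behavior exists
```

Had the test passed against the stub, we would know immediately that the test itself was broken, before it ever had a chance to lie to us.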
Every once in a while I’ll write a test that I will expect will fail but when I run it, it passes. Often, this means that the implementation that I was about to create already exists in the system. That’s good news because it means that I don’t have to implement that behavior myself. It could also mean that I wrote a bad test and if that’s the case I want to know that right up front. I don’t want to be putting bad tests in my code because a test that cannot fail is worse than no test at all. It’s a lie. I don’t like putting lies in my code.
So, always start by writing a failing test first and watch it fail. It may seem like busywork at first but the first time it surprises you and you get the green bar when you were expecting the red bar, you’ll find that it was worth the effort.
I love watching Uncle Bob Martin write code and, fortunately, he has several videos where he demonstrates doing TDD for various coding activities. One of his habits when doing TDD that I really love is that before he clicks to run his tests, which he does quite frequently, he says what he expects to see happen. So, he’ll make some changes to the code, and then before he clicks to run his tests he’ll say, “I expect this to fail,” or, “I expect this to pass.” He sets an expectation for himself and whoever else is coding with him.
Let’s face it, it only takes a moment to see a test fail and the benefit of knowing a test is not doing what we expect far outweighs the extra effort it takes.
May 30, 2018
Don’t Write All Your Tests Upfront
I find that different people have different ideas about what test-driven development really means.
Some people think that test-driven development is about writing all of your tests after you write your code. To me, that is not test-driven development; I call that test-after development. When we do TDD, we write our tests before we write our code.
Some people think that test-driven development is about writing all of your tests upfront and then making each one pass, one at a time. I don’t consider this test-driven development either, because one of the main benefits of doing TDD is the higher-level perspective you get by thinking in the context of a single test. It’s the feedback of going back and forth between code and test that gives us this insight. By writing all our tests up front and then making each one pass, we lose some of the insight we would otherwise have gotten by building software iteratively.
I believe that software development in its highest form should be a discovery process, but I know that those words make some people a bit anxious. I can certainly relate. I remember times in my early career when I didn’t have enough information to start coding but I started anyway and ended up basically wasting my time and my customer’s money. My solution back then was to do more planning upfront, like any good Waterfall developer. But there are some things that you just cannot foresee in upfront planning, no matter how hard you try.
Building software requires deep thought, and I find that I get a great deal of insight as I’m coding, as opposed to thinking about coding up front. There’s a huge difference between building a plan and executing the plan. There’s a difference between reading a map and taking a journey. Very often, what we think is on the map doesn’t fully represent the actual journey.
When I do TDD, I focus on creating a small behavior in the system by first defining a test for that behavior. When I do this, I set an expectation of what that behavior will do, as well as what parameters I need to pass to the API that invokes the behavior and what return value I expect back. In other words, I get crystal clear on what it is I’m about to implement. That’s what writing a test does when you do it before you write the code. It sets your expectation, and that can be very powerful.
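As a small sketch of this, consider the following hypothetical example. The parse_duration name, its parameter, and its return value are all inventions for illustration; the point is that the test, written first, pins down every one of those decisions before any implementation exists.

```python
# The test is written first and pins down the API: the function's name,
# the parameter it takes, and the value it returns.
def test_parse_duration_converts_minutes_and_seconds():
    assert parse_duration("2:30") == 150   # expectation set before coding

# Only now do we write the implementation the test demands.
def parse_duration(text: str) -> int:
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)

test_parse_duration_converts_minutes_and_seconds()
```

Before a single line of parse_duration existed, the test had already answered the design questions: what it is called, what it accepts, and what it returns.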
The saying goes, “ready, aim, fire,” not “fire, aim, ready.” If we want to hit a target we have to aim for that target and writing the test before writing the implementation is the process of focusing my attention on what it is I want to create.
Many developers who have not been properly exposed to test-driven development see writing tests up front as busywork, but I couldn’t disagree more. Writing a test before writing the implementation for that test is like having guardrails: it keeps me focused and in the channel where I can implement behavior that does what I want it to do, is independently testable, and has good code quality. That’s exactly what I should be focused on when building software.
May 23, 2018
It’s Not a Test if You Write it First
Although I am an advocate of test-first development, I also acknowledge that there’s a bit of a conundrum in the name test-first development. How can you write a test for something before you write the something? There’s nothing yet to test so how can it be a test?
The answer to that question is that it’s not a test. It just looks like a test. How can it be a test if there isn’t anything to test? Calling it a test at this stage before the code it’s supposed to test is written is a bit of a misnomer. So then what is it?
At this point, I’d like to think about it as a hypothesis. We’re hypothesizing that when we write the code it will behave as we intend it. We’re also hypothesizing that the way we call the routine we want to test is embodied within the test, as well as what we’re expecting to be returned. I think of my test as a hypothesis and the code that I use to make the test pass as my experiment, which proves or disproves my hypothesis.
To me this makes sense. Scientists don’t randomly mix things together in the laboratory. They run experiments and their experiments are based on a hypothesis that they are trying to prove or disprove. It would be reckless to try to run a scientific experiment without a hypothesis or something that you’re trying to prove or disprove. I say that it’s equally reckless to write code without knowing what your goal is. Far too often, the goals of writing code are not explicitly clear to us. This is the reason that most software developers write more cruft than actual working code.
How many times have you boarded a flight without knowing or caring where it was going? Never, I hope. And I hope the same is true when you code. Having a clear sense of what we’re trying to accomplish when we’re writing software drives us to get the job done and keeps us from getting lost. I find that when I do test-first development that my tests serve that purpose and keep me on track and focused so that I’m only building valuable software and the amount of cruft that I write is minimal.
Therefore, the role of a test when we’re doing test-first development is actually dual. A test has two purposes. When we first write the test, its purpose is to assert a hypothesis; then, when we make the test pass, it becomes a real test that provides value for us ever after by verifying that a behavior is working as expected.
The unit tests that I write when I do test-first development allow me to regression-test behaviors in a program and prove that it works whenever I make even a minor change to the code base. That level of confidence is priceless. My tests have my back, so to speak, and have saved me more times than I can count. I find that I’m able to take far more risks in my code because I know that if I make a mistake my tests will likely catch it, and I’ll have the opportunity to fix it immediately, before anyone else sees what I did.
Of course, to have a high degree of confidence in my code and my tests, I can’t just write any tests. I have to follow a methodology to support me in writing reliable tests. These are learnable skills if we’re willing to learn them. And once we master these skills development gets a lot less stressful and a lot more fulfilling.
May 16, 2018
Why Write the Test First
As you can probably tell by reading my blog, I am a proponent of TDD. But my enthusiasm for test-first development took a long time coming. It took me a long time to convince myself that TDD had great value and was worth the effort. Software developers already have too much to do and not enough time so adding an additional practice to our already tight schedules is something that caused me great trepidation.
But I believe that doing TDD, if we do it correctly, can simultaneously improve quality while reducing our workloads. This happens because adding the practice of TDD can let us remove several things that were actually major impediments to the software development process.
I don’t usually find that developers like to read and interpret long-winded requirements, and statistically, requirements consume 30 to 50% of software development effort if we also include the time spent capturing them. I find I spend far less time reading and interpreting requirements when I have acceptance tests driving my development.
The place that developers lose most of their time is typically in debugging and it’s not a very fun activity. I used to be an expert with the debugger but now I find that I’m rarely in it these days because my unit tests find problems for me before they can even become defects.
As soon as I type a mistake, my unit tests tell me there’s a problem while it’s still fresh in my mind and I can quickly fix it. I hardly notice defects when I’m doing this, and in just a few seconds I fix what could have become a showstopper defect had I been made aware of it only long after I’d forgotten the code I was working on, which for me is just a few days.
Very few developers remain intimately familiar with code they wrote even a few weeks ago, and so there is a learning process, or I should say a re-learning process, to get ourselves back up to speed with code before we can start to debug it. That’s time-consuming, as well as being a fairly unpleasant activity.
Developers love to develop. It’s what we’re good at and what we’re paid to do. So, many of us feel a bit frustrated when our company’s development process has us in meetings or writing programmer documentation or interpreting requirements or estimating work to be done.
These activities can be valuable but they’re never nearly as valuable as producing working software. That is what we are paid to do and that is what we love doing but unfortunately, in many organizations programming can be a programmer’s last responsibility. I know some companies who keep their developers in meetings so much of the time that they only have about 20% of their time available to actually write code. A good software development process has most of the developer’s time devoted to writing software.
One thing that I love about doing test-first development is that it is coding. Tests are code and writing tests is writing code. When developers get that they can spend time doing TDD instead of writing programmer documentation or doing extensive debugging then they recognize a lot of the value inherent in TDD. I’ve written a lot about the value of automated regression tests and this is yet another benefit from doing TDD.
But perhaps the biggest benefit of doing test-first software development is that we’re always writing testable code because the process forces us to. In TDD, we write the test first and then we write the implementation to make the test pass. In this context, it’s pretty much impossible to write an implementation that isn’t testable. So, all the code we build doing TDD is testable code. And testable code is high-quality software that’s more straightforward to maintain and extend than untestable code.
May 9, 2018
On Implementation Independence
I’ve been looking forward to writing this blog post for a while because I’ve been misinterpreted in the past on what I mean by implementation independence. And this is entirely understandable because the concept is slippery and difficult for people to easily grasp. Yet I feel implementation independence is one of the most important concepts in software development, not only in writing our tests but also in writing code, and even just to help us think more clearly.
In English and many other spoken languages, we conjugate verbs depending upon when they happened. For example, we say “walk,” “walking,” or “walked” depending upon when it happened. This lets the listener instantly know the timeframe that actions occur in.
But there’s no semantic distinction that we make in our language when we talk about WHAT, WHY, and HOW. These are fundamentally different perspectives that require different knowledge. Because of the way spoken language works, we tend to slip into one of these different perspectives without any warning to our listener. This can significantly lower the fidelity of communication between individuals, so I try to keep a “cohesion of perspectives” when I write or speak by only dealing with a single perspective at one time and making it clear when I shift perspectives.
There are many ways we can break down perspectives, and here we’ll just focus on one: the levels of perspective that the UML calls out for object-oriented programs. These are the conceptual, specification, and implementation perspectives.
The conceptual perspective talks about what we want but not about how we get it. Ideally, we would like the clients of services to have the conceptual perspective. This means that the services we provide should include APIs that only specify what they give and not how they give it. Again, this is not so easy to do because the way we speak and the way we think don’t distinguish these different levels of perspective, and so it’s really easy to pollute the conceptual perspective with implementation details.
This is not just a coding thing; it has a lot to do with how we conceptualize problems. Naming things can also leak implementation details if we’re not careful. Our names for APIs should focus on the results they provide and not how they get those results. Why hide implementation details? The answer is simple. If we can hide implementation details then we’re free to change those details later without breaking a bunch of code.
Dependencies can creep in in many subtle ways, including through the way we name things. If I’m using 256-bit Blowfish encryption, then the method I call to invoke encryption should be encrypt() and not encryptWith256BlowfishEncryption(), because the latter name would have to change if I decided to replace that encryptor with a different type of encryptor in the future. Keeping the name abstract also makes it easier for me to introduce polymorphism later: I can select from a wide variety of encryptors, and every encryptor I use polymorphically exposes the same encrypt() method.
There are really two worlds in software, the world of intentions and the world of implementation. We software developers get to straddle both worlds and everyone else should live in the world of intentions. This helps make code more flexible when it needs to change in the future.