David Scott Bernstein's Blog, page 7
February 5, 2020
Use Accurate Examples
Over these last eight blog posts, I discussed many benefits of using test-first development, but one stands out to me as the biggest: the tests that I write concretize abstract requirements. This makes requirements understandable because they are real and tangible and associated with specific behaviors that I can assert against. It’s amazing how much ambiguity simply dissolves when we have the clarity of good unit tests that draw on accurate examples.
I want the examples that I use in my unit tests to reflect how the system will actually be used, so that my tests are an accurate representation of the system in use. We understand so much better through examples. We think in concrete terms but we speak in generalizations, and when we talk and think in generalizations we’re constantly concretizing those abstractions in our heads in order to make sense of what is being said. Using concrete examples to illustrate concepts increases the fidelity of communication and helps everyone get on the same page.
We always want to have a verifiable system that we can test against. I remember working with one team that had hardcoded all of their calls to the database, which meant that their system was not very testable. They could write a test that simulated a user of the system, but they couldn’t write the small unit tests, each able to fail for only one reason, that they needed in order to have a really good continuous integration system.
When we started trying to write unit tests for their system, we found that we couldn’t mock out the database calls directly without breaking the system. Instead, we had to start by writing tests against a test database, knowing that once that was working, the next step was to leverage those tests to refactor the code again and replace the test database with mocks, so that we could write good unit tests and have a responsive continuous integration server.
As much as we wanted to go directly to having good unit tests and mocks, we found that we couldn’t do it without bringing the system down for weeks and you never want to be in that situation because it could be very hard to get the system back up again. All changes that we make to a system, especially a live system, have to be small, safe, and incremental. If we can’t compile or run our code for long periods, it’s like trying to code blindfolded. We don’t want to do that.
When we’re doing test-driven development or acceptance test-driven development we are essentially defining the behavior of our features with examples and this is an extremely valuable and useful way to build software. I think it is the best way of building software that I have encountered and that is why I am an advocate. Of course, if we’re going to build a system based upon examples then we want to make sure that our examples are accurate and make use of the system the way we intend the system to be used in production.
One of the many benefits that I find when doing test-first development is that I can build components of the system and exercise them in the way I intend to use them in production so that I have another way of validating that they work the way I intend them to. When I’m doing test-first development all of the code that I write gets called in two ways. My code gets called from production and my code gets called from my tests. I want to call my code from my tests and use it the way I intend for my code to be used in production.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
January 29, 2020
Avoid Over-Specifying Tests
The number one problem that I see developers have when practicing test-first development that impedes them from refactoring their code is that they over-specify behavior in their tests. This leads developers to write more tests than are needed, which can become a burden when refactoring code.
I am a big advocate of having a complete test suite, and even of erring on the side of caution when it comes to quality engineering and software validation, but that is not what we’re talking about here. What we’re talking about here are the tests that we write when we’re doing test-first development, and I’m proposing that writing those tests from the perspective of specifying the behaviors we want to create is a highly valuable way of writing tests, because it drives us to think at the right level of abstraction for creating behavioral tests, tests that allow us the freedom to refactor our code without breaking them.
So the question becomes how many tests are enough?
Have you ever played the game 20 questions? Most of us have played that game at one point in our lives. One person thinks of something that could be an animal, vegetable, or mineral and then they answer yes/no questions that are asked of them. The point of the game is to ask as few questions as possible in order to accurately guess what the person is thinking.
This is how I think of the unit tests I write to specify behavior as I’m doing test-first development. I ask: what are the fewest tests that I need to write in order to assert the behavior I want to create? Notice how doing this as part of the test-first methodology makes a lot of sense, because we’re essentially asking what assertions to create in order to drive us to build the behavior that makes those assertions pass.
For the type of behavioral testing that we’re talking about when doing test-first development, our goal is to make each test unique and so we’re only testing the main scenarios. We’re writing tests for the scenarios that drive us to write the code we want to create but again that isn’t necessarily all the tests that we need. We want to add additional tests as part of our quality engineering effort and not as part of the effort of doing test-first development.
Here’s the challenge. When I’m given only one example of a process, it can be difficult to generalize and write a good general algorithm for the process. In these situations, I’ll sometimes come up with a second scenario that I can use to refactor my first implementation into a more generalized one.
But when I do this I end up with a second test that is redundant with the first. Some people believe that second test is still valuable: they reason that since you needed it to create the production code, someone else will probably need it to understand the system in the future if they ever have to recreate the code from the tests, and that makes sense. But for me that additional test is redundant, so I typically delete it or move it into a different namespace where I keep my other quality assurance tests.
This has led me to follow a practice that I learned from Amir Kolsky, which involves separating what he calls red tests from green tests. Green tests are the kind of tests that we write when we’re doing test-first development. We use these tests to drive us to create the behavior we want in the system. Red tests, on the other hand, are tests that we write after we write our code to validate that it is working as expected.
Why separate out red tests from green tests? Because my green tests serve a fundamentally different purpose. They are there to act as a living specification, validating that the behaviors work as expected. Regardless of whether they are implemented in a unit testing framework or an acceptance testing framework, they are in essence acceptance tests because they’re based upon validating behaviors or acceptance criteria rather than implementation details. I call these developer tests because essentially these are the kind of tests that we write in the course of doing development. Conversely, red tests are tests I write after the code is written to lock down some implementation. Red tests are more QA tests.
When I refactor my code, I expect that none of my green tests will break. If red tests break then that’s okay because remember, my red tests can be implementation dependent and when I change an implementation it may cause some red tests to break. But it shouldn’t break any green tests. I find that this is a valuable distinction.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
January 22, 2020
Use Mocks to Test Workflows
Unit testing frameworks are simple but I find them highly valuable. They contain a collection of assertions that I can use to validate a range of values and behaviors in the code that I’m building. I use assertions to verify that values are within bounds, exceptions are called when expected and not called when unexpected, and that logic and behavior is correctly implemented. This covers a great deal of the things that I want to test in my system but it doesn’t cover everything.
The thing that unit testing frameworks cannot really test easily is workflows. We often find that in complex software we are doing a series of steps, some of which are predicated upon various conditions, and these kinds of complex workflows are not well suited for testing with assertion frameworks. Fortunately, there are tools that are well-suited for that, namely test doubles or mocks.
Technically speaking, mocks are a special kind of test double, but I often use the term mock generically to mean any kind of test double. I find that by just writing a simple mock that either subclasses the class I want to mock or implements the same interface, I can avoid using a mocking framework, and this can drop the overhead of my tests significantly.
Let’s say I have a collection of validators that I want to apply to a password, and I want to ensure that all the validators are called when I loop through the collection and invoke each one. In such a scenario, I might have a mock validator that I put at the end of the collection that merely keeps track of whether it was invoked, and later my test code will interrogate that mock to see whether it was called. If it was, then I know that all of the previous validators were also called. This is one way to use a mock to test workflows.
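Here is a minimal sketch of that idea in Java, assuming a hypothetical Validator interface and invented password rules; the hand-rolled mock simply records that it ran:

import static org.junit.Assert.assertTrue;
import java.util.List;
import org.junit.Test;

// Hypothetical interface for this sketch.
interface Validator {
    boolean validate(String password);
}

// A hand-rolled mock: it passes every check but remembers that it
// was invoked so the test can interrogate it afterward.
class MockValidator implements Validator {
    private boolean called = false;

    public boolean validate(String password) {
        called = true;
        return true;
    }

    public boolean wasCalled() {
        return called;
    }
}

public class PasswordValidationTest {
    @Test
    public void everyValidatorInTheCollectionIsInvoked() {
        MockValidator sentinel = new MockValidator();
        List<Validator> validators = List.of(
            pw -> pw.length() >= 8,       // minimum length rule
            pw -> pw.matches(".*\\d.*"),  // must contain a digit
            sentinel                      // mock at the end of the collection
        );

        // Trigger the workflow: loop through and call each validator.
        validators.forEach(v -> v.validate("s3cretPassword"));

        // If the sentinel ran, every validator before it ran too.
        assertTrue(sentinel.wasCalled());
    }
}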
Mocking frameworks have become quite sophisticated and it seems like every week a new one comes out. There are many different kinds of mocking frameworks or test double frameworks that you can draw on but I very much prefer to roll my own whenever I can.
If I have a DAO that I’m using to access a database, for example, then I can mock the DAO by subclassing it and overriding the calls that access the database so that they return hard-coded values instead, which makes the test run much faster. Alternatively, my production code can access the DAO through a common interface, which allows me to mock the interface and take the database out of the testing equation completely.
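As a sketch, assuming a hypothetical UserDao with a single database call, the subclassing approach looks like this:

// A hypothetical DAO: in production, findUserName hits the database.
class UserDao {
    public String findUserName(long id) {
        // ...real JDBC/ORM access would go here...
        throw new UnsupportedOperationException("needs a live database");
    }
}

// The test double: subclass the DAO and override the database call to
// return a hard-coded value, so tests run fast with no database at all.
class StubUserDao extends UserDao {
    @Override
    public String findUserName(long id) {
        return "Superman";
    }
}

The interface-based alternative works the same way: production code depends on an interface, and the test supplies an implementation that returns canned values.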
These kinds of techniques are super simple, but I often find they are sufficient for the simple tests that I need to write when I’m doing test-first development. Using a combination of mocks and an assertion testing framework, I find that I can write all the tests that I need to validate that all the behaviors in a system work as expected.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
January 15, 2020
Test Behaviors, Not Implementations
One of the keys to doing test-first development successfully, and to having the tests that you create support you in refactoring code rather than break when you refactor, is to write tests against the behaviors you want to create rather than against how you implement those behaviors.
This is an important insight but not always obvious when you’re building software. It’s easy when writing code to get lost in the weeds and one of the ways that I get myself out is to ask myself, “What am I trying to accomplish here? What is my goal or desired outcome?”
This usually helps refocus me on what I’m trying to do.
One of the main ways that I stay focused when building a feature is to write my unit tests around the behaviors that I want to create. When I do this I find that my unit tests are more flexible and when I refactor my code they don’t break and instead validate that my new implementations are consistent and work as expected. This is one of the main benefits of writing unit tests against behaviors rather than how we implement those behaviors.
When doing test-first development, it all starts with the test so we look at our requirements, the story that we are trying to fulfill, and we think about what the acceptance criteria should be. Then we simply write a test for those acceptance criteria using the “Given, When, Then” format. The test then becomes the guiding light for implementing that behavior.
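As a minimal sketch, a “Given, When, Then” test might look like the following; the Account class is a stand-in invented for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Account is a minimal stand-in defined here so the sketch compiles.
class Account {
    private int balance;
    Account(int openingBalance) { balance = openingBalance; }
    void withdraw(int amount) { balance -= amount; }
    int balance() { return balance; }
}

public class AccountTest {
    @Test
    public void withdrawingReducesTheBalanceByTheAmountWithdrawn() {
        // Given an account with an opening balance
        Account account = new Account(100);

        // When the holder withdraws part of it
        account.withdraw(40);

        // Then the balance reflects the withdrawal
        assertEquals(60, account.balance());
    }
}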
I see a lot of developers following the practice of creating a test class for every one of their production classes and a test method for every one of their public production methods. I don’t think this is generally good practice. This is not the kind of test coverage we want to get because we’re getting it at too low a level. Instead, we want our tests to cover behaviors.
This can show up in subtle ways. For example, a test called testConstructor() does not tell me much. A better name may be validateAndRetrieveValuesAfterConstruction().
Writing behavioral tests also sets me up to think about how I can encapsulate implementation and only expose what I need to in order for callers to use my services effectively. This allows me to create strong contracts and tight interfaces.
Focusing on the behaviors or perhaps better stated, the acceptance criteria that I’m trying to fulfill, helps me write more focused code. And even though I don’t explicitly test many of the component classes or methods directly, they end up getting tested indirectly and so my code coverage is high.
For example, a behavioral test that asserts a user can register in a system doesn’t need an explicit test to validate the factory for creating the user, if the creation of the user is part of registration, which it is. This means that we’ve indirectly tested the factory. The test asks whether a user can register or not; the mechanism used to make that happen doesn’t matter.
This also means that if we started our implementation by simply newing up a user and later decided to migrate the creation of users to a factory or a dependency injection framework then we would not have to change our existing test or write any additional tests because how we create a user is an implementation detail for a test that is validating that a created user can be registered. In fact, if we run a code coverage tool after refactoring the design to use a factory we will see that our test covers the factory instantiation without any additional changes.
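Here is a sketch of that point; RegistrationService, User, and UserFactory are hypothetical names invented for illustration. The test only names the behavior, so either implementation inside the service passes it unchanged:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Minimal stand-ins so the sketch compiles.
class User {
    User(String name, String password) { }
}

class RegistrationService {
    public boolean register(String name, String password) {
        // Either line below satisfies the test; user creation is an
        // implementation detail the test never sees:
        User user = new User(name, password);              // newing up directly
        // User user = UserFactory.create(name, password); // via a factory
        return true;
    }
}

public class RegistrationTest {
    @Test
    public void userCanRegister() {
        RegistrationService registration = new RegistrationService();
        assertTrue(registration.register("Clark", "kryptonite"));
    }
}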
This is the benefit of writing behavioral tests that are not dependent on implementation details. This is the clearest way that I’ve found to think about doing test-first development, and I find that writing behavioral tests in this way generates a good suite of regression tests as well as serving as a living specification of the system. What these tests do not do on their own is provide the complete set of regression tests that may be needed in some situations. In other words, the way I am advocating doing test-first software development is really valuable for developers and also helpful for regression testing, but it is not complete testing, so in many situations we need to go back and think about what could go wrong and write tests for those scenarios as well.
Quality assurance and quality engineering, which we typically do as a separate step from coding, can focus on testing implementation details and other nonfunctional requirements. But the tests that I write when I do test-first development, I try to keep focused on validating behaviors.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
January 8, 2020
Show What’s Important
Another aspect of using unit tests as specifications is to clearly show what’s important in each test. We do this primarily by naming things well and calling out generalizations and key concepts in the names of the symbols that we use.
Every test has a name. You never call it yourself, the system calls it, but that doesn’t mean the name of the test is meaningless. I see beginners sometimes name their tests test1, test2, etc., and I think this is a missed opportunity. The name of the test should express what it is that we’re exercising, the purpose of the test, in plain English, and it doesn’t matter to me how verbose the name is because I only ever have to type it once.
However, I hate redundancy, acronyms, and jargon in names so I prefer to omit the “test” prefix that is typically used in test methods of unit testing frameworks. I can clearly see that it is a test method because it’s associated with a test class.
Showing what’s important in the test is about making the test scenario as clear as possible. Typically, a test scenario has three components that we’re all familiar with: the setup, the trigger, and the verify.
Setup is all about changing the system from its initial state into a state that’s ready to trigger the behavior that we want to test. Every unit test runs in isolation and so every test will require its own setup before it can be executed.
The second phase of a test scenario is the trigger. The trigger is the single event or behavior that our test is verifying. We invoke that behavior at this point. This is the thing that we’re testing, and so we want to say what we’re testing as clearly as possible.
The final phase of our test scenario is called verify, and it’s all about checking that the behavior we triggered created the correct results and did not create any unwanted side effects. Verification is typically done by comparing the state of the system against what is expected.
I find that no matter how complex my tests are when I break them out into these three phases, they become much clearer and easier to understand.
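As a sketch of calling out the three phases explicitly, assuming an invented ShoppingCart class:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// A minimal ShoppingCart stand-in so the sketch compiles.
class ShoppingCart {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    void remove(String item) { items.remove(item); }
    int size() { return items.size(); }
    boolean contains(String item) { return items.contains(item); }
}

public class ShoppingCartTest {
    @Test
    public void removingAnItemShrinksTheCart() {
        // Setup: put the system into a state ready for the trigger.
        ShoppingCart cart = new ShoppingCart();
        cart.add("widget");
        cart.add("gadget");

        // Trigger: the single behavior this test verifies.
        cart.remove("widget");

        // Verify: correct results and no unwanted side effects.
        assertEquals(1, cart.size());
        assertFalse(cart.contains("widget"));
    }
}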
I also use constants in my tests, which gives me the opportunity to get rid of magic numbers in both my code and my tests, making them more readable. When I see the number 21 in code I have no idea what it means, but if I have a constant called MinimumAge that holds the number 21 then I have a much better idea of what it represents.
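A quick sketch of the difference; the age check here is invented for illustration:

// Replacing a magic number with a named constant.
public class AgePolicy {
    private static final int MINIMUM_AGE = 21;

    // Without the constant this would read "age >= 21", leaving
    // the reader to guess what 21 represents.
    public boolean mayPurchase(int age) {
        return age >= MINIMUM_AGE;
    }
}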
I’ve always been a big proponent of using good names in my code, but before I started doing test-first development I found that it was sometimes difficult to express how I wanted my code to be used through the symbols I used to define my code. When I do test-first development I don’t have that problem, because I can name the symbols in my tests in ways that express how I want my code to be used. This clarifies a great deal, so I think of it as a valuable form of documentation for my code.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
December 18, 2019
Use Helper Methods
What? Helper methods? Whenever I see a class called Helper in code I think that the developer who wrote it wasn’t willing to take the time to discover what objects were really responsible for those behaviors. Although we would like to believe otherwise, in reality, there is no benevolent helper class in the world that floats around and provides utility services, or at least we shouldn’t think about modeling it that way in our code.
In other words, I think that oftentimes helper methods and helper classes in production code are a code smell hiding entities that have yet to be discovered in the system. The good news is that we can use this code smell to help us discover what the missing entities are and thereby enrich the design of the system.
However, I have a very different opinion about helper methods when used in test code versus production code. Test code is different than production code. It serves a different purpose and unlike the object-oriented programs that I write, which endeavor to become models of whatever it is that I’m building, my test code is really scenarios for exercising those models in the ways that I intend to use them.
The intention is not to build a model in test code but rather to exercise the model through test scenarios. Helper methods for creating and maintaining test scenarios are entirely valid, and so I use them quite a bit when writing unit tests.
Refactoring is a major part of doing software development. Refactoring code is like editing prose. It’s always better to work with something rather than trying to start with a blank page. Any writer will tell you that. I believe it’s true for software development as well. When you understand how to do refactoring efficiently and effectively, it turns out to be a valuable technique for letting designs emerge safely and efficiently. But it does mean that I spend a significant portion of my time refactoring code. In addition to refactoring my code, I also spend time refactoring my tests. In fact, I tend to spend more time refactoring my tests than I spend refactoring my code.
When I’m refactoring my tests I’m doing a different activity than when I’m refactoring my code. When I refactor code, I’m usually doing it to improve the design and I draw on a variety of principles, design patterns, practices, and other techniques to create understandable and maintainable designs. When I am refactoring my tests I’m generally doing it to remove redundancy. This has a lot to do with how I create my tests in the first place.
Let’s say I’m creating a system that requires users to register and log in before they may access an online forum. Every test that exercises the behavior of the forum must first register and log in users. The setup for this may involve many steps and be quite complex, but we can encapsulate it in a helper method.
If there’s common setup or teardown that all tests in a particular test class need, then we can put it in a test-initialize or test-finalize method, which every unit testing framework supports, although the syntax differs slightly between frameworks. However, we often need to do additional setup in some tests that other tests don’t need, and this is where helper methods come in.
First of all, think about what these additional steps accomplish for the test scenarios and put that common behavior in a helper method owned by the test class and shared among multiple test methods. Give the helper method an intention-revealing name that describes what it does, as in the sketch below.
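Sketching the forum example from above; Forum, User, and the method names are all hypothetical stand-ins so the example compiles:

import static org.junit.Assert.assertEquals;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;

// Minimal stand-ins invented for this sketch.
class User {
    final String name;
    User(String name) { this.name = name; }
}

class Forum {
    private final Map<User, Integer> posts = new HashMap<>();
    User register(String name, String password) { return new User(name); }
    void logIn(User user, String password) { posts.put(user, 0); }
    void post(User user, String message) { posts.merge(user, 1, Integer::sum); }
    int postCountFor(User user) { return posts.getOrDefault(user, 0); }
}

public class ForumTest {
    private final Forum forum = new Forum();

    // Intention-revealing helper: every forum test needs a
    // registered, logged-in user before it can do anything else.
    private User registerAndLogInUser(String name, String password) {
        User user = forum.register(name, password);
        forum.logIn(user, password);
        return user;
    }

    @Test
    public void registeredUserCanPostToTheForum() {
        User user = registerAndLogInUser("Clark", "kryptonite");

        forum.post(user, "Hello, forum!");

        assertEquals(1, forum.postCountFor(user));
    }
}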
Creating helper test methods not only removes redundancy by taking common scenario elements and putting them in a single place, it also gives us the opportunity to give those common steps a name, thereby documenting their purpose. This helps our tests read more like specifications, making them more understandable to others.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
December 11, 2019
Instrument Your Tests
Here is the first of seven blog posts based on the section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Using Tests as Specifications. In this first post, I’ll discuss a technique called instrumentation that I learned from Scott Bain and Amir Kolsky at Net Objectives.
Instrumentation is a technique for writing your tests so that they’re more readable and understandable as specifications. We do this by replacing magic numbers and other values with constants or fields. This gives us an opportunity to assign meaningful names to the values we use to exercise the behaviors that we are testing, making our tests more readable.
For example, instead of doing this:
@Test
public void testConstructor() {
    User user = new User("Clark", "Kent", "user@example.com",
        "Superman", "kryptonite");
    assertEquals("Clark", user.firstName());
    assertEquals("Kent", user.lastName());
    assertEquals("user@example.com", user.eMail());
    assertEquals("Superman", user.userName());
    assertEquals("kryptonite", user.password());
}
We do this:
@Test
public void testRetrievingParametersAfterConstruction() {
    String firstName = "Clark";
    String lastName = "Kent";
    String eMail = "user@example.com";
    String userName = "Superman";
    String password = "kryptonite";
    User user = new User(firstName, lastName, eMail, userName, password);
    assertEquals(firstName, user.firstName());
    assertEquals(lastName, user.lastName());
    assertEquals(eMail, user.eMail());
    assertEquals(userName, user.userName());
    assertEquals(password, user.password());
}
This makes the test more readable and disambiguates what it is we are actually specifying. In the first example, there are lots of redundant strings that the compiler can’t check for consistency, so if a field is misspelled, it can’t be caught by the compiler. It also makes it hard to understand the meaning of the fields with just the contents of the strings. The second example is clearer. It makes the test read like a specification so it’s clear what the code does. Instrumentation is one of my favorite techniques for helping me write clear and understandable unit tests that read like specifications.
I find that the discipline of thinking about the behaviors that I want to create before I think about how I want to implement those behaviors has helped me build better systems. As a result, the code I write is more partitioned and independently verifiable. I find that the technique of instrumenting my tests is valuable for making it clear what my code is doing.
As a result, I find that I have to write far less developer documentation, because the way to use my APIs is demonstrated by how my tests consume them. With automated tests in place, I never have to run code through the debugger to verify that it works.
I find that one of the biggest challenges that developers face when adopting the test-first development methodology is knowing what tests are the correct tests to write. But if we think about our tests as a form of eliciting the behavior that we want to create, then it helps us build the right stuff and get immediate feedback that it works.
Instrumentation is a little extra effort but it gives me a great deal of value and allows me to turn an activity that I’m already doing, which is test-first development, into another activity—creating an executable specification for my code. I also get a suite of regression tests that validate that my features are written at the right level of abstraction to support me in refactoring my code later. To me, those are some pretty big benefits.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
December 4, 2019
Why Practice 7: Specify Behaviors with Tests
Developing software is a complex activity. Having worked with thousands of professional software developers, I recognize that there are many ways to implement any set of requirements and many are equally valid. This is true at least from the computer’s perspective but from the human perspective, we favor software that’s straightforward to understand and modify because the vast majority of costs in software have to do with extending it later.
For me, dropping the cost of development and ownership of the software we build is of paramount importance. Agile software development requires paying attention to what we’re doing when we build software, paying attention to the design, and paying attention to new and changing requirements. In my opinion, this is a far more disciplined approach than following a checklist as we did in Waterfall software development, where we really tuned out everything but an old, out-of-date requirements document.
In Agile, we want to learn as we go and we want to build systems incrementally so that we can see them emerge and evolve. The most valuable way I have found to do this is through test-first development. Doing test-first development is an advanced engineering practice and I’ve seen it implemented in many different ways in the industry, not all of which are effective. Misko Hevery says that many developers think they know how to write a good test but they really don’t, and I have similar experiences.
Good tests are actually not that easy or obvious to write but they are very important, not only for doing test-first development correctly but for contextualizing our thinking about the behaviors that we want to build in ways that are effective and independently verifiable.
I am a huge advocate for test-first development but not as a replacement for doing quality assurance. I think that these are totally different activities and I see developers misapply test-first development by thinking it’s some sort of QA activity. This drives them to write too many tests and implementation-dependent tests that break when they go to refactor their code.
One of the main benefits of doing test-first development is having a good set of behavioral tests that we can use to verify that features work as expected. These tests should be designed to allow us to refactor our code safely. I want my unit tests to have my back, so that if I accidentally change the behavior of a feature when refactoring it, my tests will tell me by failing. I don’t want my tests to break when I go to refactor my code because they’re brittle and implementation-dependent. That doesn’t help me; it slows me down, because now I have to fix tests as well as code when I refactor my design.
I see a lot of confusion out there around writing unit tests but I find that the kind of tests that we write when doing test-first development is of paramount importance. They help us not only with regression but also by helping us define and implement cohesive and decoupled behaviors in the system. This is the core of a successful continuous integration system that supports DevOps and drops the cost of building software.
If we simply think of the tests that we write when we do test-first development as a way of eliciting or expressing the behaviors in the system that we want to create, then we can start to use the language of our tests to assert the behaviors that we want in a system. It actually turns out that this is a very straightforward way of writing tests and it gives us guidance on how to write tests for any behavior that we can think of.
That’s the idea in a nutshell. Instead of thinking of them as tests, use them to specify the behaviors you want to create.
The following seven blog posts come from a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Using Tests as Specifications. These strategies help us build tests that maximize their value as both executable specifications of the system as well as helping everyone get on the same page with the minimum number of tests required to specify any behavior in a system.
Note: This blog post is based on one of the “Seven Strategies…” sections in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
November 25, 2019
Make Each Test Unique
I want to conclude this series of Seven Strategies for Great Acceptance Tests with the advice to make each test unique. I know this is easier said than done, but it gets to the very core of what quality software development is all about.
When our unit tests test units of behavior, then every unit test is also an acceptance test. We are testing behaviors in the system and not how we implement those behaviors. This is the key to making our tests support refactoring of our code and helps us drop the cost of extending the software that we built.
The number one reason that I see teams fail when practicing both acceptance test-driven development and test-driven development is that they think of these activities as testing activities and write far more tests than are needed to specify the behavior they want to create. They’re using those tests as a form of quality assurance rather than as a form of specifying behaviors.
The tests that we write when we do acceptance test-driven development represent ideal examples of the kind of scenarios we expect in the system. Acceptance tests are not meant to fully describe the system but rather to just call out the main acceptance criteria for a feature. We want to do this in the most implementation-independent way that we can. This allows us the freedom to change the feature’s implementation details later without breaking the acceptance test.
One of the main reasons that builds are slow is having redundant tests where we’re testing the same things over and over again. When constructing a test suite, we have to be sensitive to this issue and test each behavior once in the system. This is one of the hardest things I find for developers to practice. More tests are not always better.
I use tests as a way of articulating the behavior in a system. This is true for both acceptance tests at a high level and for unit tests at a much lower level when I’m implementing the behaviors of the system.
If I’m writing a blog post or a program specification I wouldn’t randomly repeat myself. If I’m writing a blog post or a program specification I wouldn’t randomly repeat myself (sorry). Why would I repeat myself in my test suite?
So what I’m saying is that if we think of our acceptance tests as specifying the behaviors of a system at a high level then we get a good grasp of how they should be written and what their benefits are.
Every test should validate a single intention in the system. That intention or behavior should exist in only one place in the system and therefore there should be only one test that exercises and validates that behavior. When I build a system using acceptance test-driven development and test-driven development, I find that I have far fewer redundant tests and consequently far fewer redundant implementations in my system. This means that I’m being more productive.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.
November 20, 2019
Split Behaviors on Acceptance Criteria
We think about what we build at different layers of abstraction. At the highest level, we have the minimum marketable feature set or MMF, which is composed of a list of features.
A feature is some value that someone gets from using the system. This may involve the fulfillment of several things in the system, which we can express as a set of acceptance tests. Each acceptance test expresses an acceptance criterion for the feature.
At this level, we are able to discuss the feature: what it does, who it’s for, and why they want it. All of these are good conversations to have.
Sometimes a feature will have many behaviors for components that could be built separately with their own acceptance criteria. In these cases, I like to split out these behaviors so that my milestones aren’t so big, and building out behaviors based on acceptance criteria keeps me focused on writing code that’s of maximum value to my users.
Writing software to fulfill acceptance criteria keeps us focused on producing software that generates value for users. I find that I get lost less and so I write far less cruft and the code that I do write tends to have fewer defects because I’m much clearer on what the code is supposed to do.
Even though I’m often just a team of one, I still find doing acceptance test-driven development and splitting behaviors on acceptance criteria to be useful practices.
Actually, I always work with a Product Owner when I do development. He is the Product Owner in my head. I have no trouble wearing the Product Owner hat when I do development but it took me many years to cultivate this skill.
Today I almost always do test-first development when I build software and I often get to do acceptance test-driven development, as well. I find that both are extremely valuable for understanding what needs to be built and giving me a context for building it in a way that allows me to verify my progress and understand it as I go so that I can rapidly implement it.
An acceptance test is an embodiment of a single acceptance criterion. We want to build out behaviors in the system based upon acceptance criteria because that’s when we derive the value from the behaviors we’re building.
In a lot of ways, acceptance test-driven development is about the value to me. It’s about the value of the features that we are building and being able to see our progress building out that value and getting on the same page with our Product Owners. This makes acceptance test-driven development one of the highest value activities that I engage in when I do software development.
Sometimes I write more acceptance tests, but mostly I want to ensure that the basic acceptance tests I need for a system are there, and those are the ones based upon the major acceptance criteria that my Product Owner gives me. This forces us to have a good conversation about where the most value lies in the work that our team can provide, and once we start to have this conversation, I find that the team gives the Product Owner more of what they want and the Product Owner is more able to articulate to the team what the system should do.
Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.


