David Scott Bernstein's Blog

June 26, 2019

Try It, You’ll Like It


When I was a kid growing up in the 1970s, there was a television commercial for Alka-Seltzer in which one guy gives another guy some scary-looking food and says, “Try it, you’ll like it.” That became the catchphrase for an entire generation, and it meant not to pass judgment on something until you have experienced it for yourself.





Inevitably, I find that people who think pair programming is a bad idea either have no experience doing it or did it incorrectly and so had a bad experience with it. I’m not saying that pair programming is for everyone or for every task, but very often pair programming represents an opportunity to learn, and that’s usually a good thing.





My posts sometimes appear on other websites, and whenever a post I’ve written on pair programming shows up on one of the programmer sites, I inevitably get a comment from one individual, who will remain nameless here, about what a stupid idea pair programming is. I guess it is a dumb idea to want to share knowledge and learn from each other. I mean, who would want to improve themselves as a developer? Who would want to learn new techniques for solving problems or gain other perspectives?





Wait a minute. I would.





And those are just a few of the things that I’ve learned recently from pairing with other software developers. Pair programming is the fastest technique I know for sharing knowledge among a team. Pair programming is a code review conducted as the code is being written. It can be amazingly productive and useful.





Pair programming can also be a nightmare if it’s done wrong. With the wrong expectations, or without the basic skills needed to pair successfully, we put ourselves at a loss just as we would in any technical practice. You wouldn’t expect to be able to do continuous integration or code refactoring without learning some techniques, and the same is true of collaboration. We need to learn some basic techniques for pairing and mobbing so we can do them successfully.





It turns out that convincing software developers of the value of pair programming is quite easy: just give them a positive experience of doing it and they’ll usually see the value and want to do more of it. No matter how good your description is, telling them about it is not nearly as compelling as experiencing it for themselves. So I strongly encourage developers to try it, but first learn some basic techniques for doing it well, or you could fall into the trap of thinking you’re doing pair programming when you’re actually just taking turns at the computer. Pairing is much more than that, so check out the rest of the posts in this series for more on how to do pair programming well.





If you’re a manager who is skeptical about pair programming, then I have to admit that you’re a bit more challenging to convince. Remember, the goal is not for developers to produce a lot of code. The goal is for developers to produce a lot of good code, and very often they need to discover together what good code means. The real bottlenecks in software development are not typing but understanding and interpreting requirements, as well as debugging code, and both of these activities can be done far more efficiently and effectively when pairing. Once we recognize that, we can start to see how pair programming can be a big time-saver in addition to providing all of the other benefits we’ve discussed.





But this isn’t something that you can force the team to do, and very often managers who try to force their developers to pair without the team’s agreement that it’s a practice they want to explore end up with a revolt on their hands. The best way that I’ve found to introduce teams to pair programming is simply to give them an experience of doing it. Maybe set up a practice time and do a code kata in pairs, or take a challenging problem you’re working on and work through it with a colleague as a pair. To get the most out of your pairing session, follow the guidelines I offer in my next six posts.









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


June 19, 2019

Why Practice Four: Collaborate


Practice four from my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, is Collaborate.





You may wonder why a technical book on agile software development practices would include a practice called Collaborate, but it turns out that even though we often don’t get formal training in school on collaboration, we developers spend a great deal of time collaborating with other team members in order to build enterprise software. And just like any other technical practice, collaboration requires skills.





Our industry desperately needs collaboration. We have to find ways of working together and sharing knowledge to a much greater degree because our industry is growing so rapidly. We need more opportunities for internships and apprenticeships within organizations. We need relevant education and training from schools that teach us valuable development practices instead of 20-year-old techniques that aren’t, or shouldn’t be, used in industry any longer.





Enterprise software construction is a social activity, and unlike the stereotype of the computer programmer nerd who is entirely antisocial, we software developers pride ourselves on our ability to communicate and collaborate.





The importance of collaboration is another thing that Extreme Programming got right. We must be able to learn from each other and share knowledge. This is a fundamental tenet of agile software development, because we need to be able to share knowledge if we’re going to grow as an industry and a discipline.





Think about what life would be like if doctors guarded their secrets for successful surgeries and didn’t share them with their colleagues. The medical community has an entire infrastructure designed to disseminate good ideas. Even accountants have efficient and effective mechanisms for getting the latest tax codes to their clients.





It should be no surprise that the field of software development requires constant learning. We’re constantly discovering new and better ways to construct software. This most certainly attracts a certain kind of person to the field. Most everyone that I know who is a professional software developer learned their most important skills on the job. We need more ways to learn from each other and so Extreme Programming embraces techniques and practices like pair programming, spiking, swarming, and mobbing.





I start the chapter on Practice Four: Collaborate in my book by saying, “The most valuable resource we have is each other.” I really believe this is true and that we can figure out so much more together than we can on our own.





The following seven posts come from the section Seven Strategies for Pair Programming in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software. While these posts are focused on pair programming, many of the concepts apply equally well to mobbing, swarming, and other collaborative activities. Enjoy!









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


June 12, 2019

Fix Broken Builds Immediately


There are good crutches and bad crutches in life. Being addicted to heroin is a bad crutch, but being addicted to a working build is a good, healthy crutch. This is the kind of crutch that I want to encourage.





A healthy, well-maintained build is the heartbeat of a project. Removing friction from the build can greatly improve developers’ productivity. We want to make sure that the build is easy to use, and if it breaks we want it to give us informative error messages so we can quickly diagnose and fix the problem.





To me, a healthy build is the very essence of Agile software development. I say this because the first principle of the Agile Manifesto says, “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”





Continuous delivery of valuable software requires continuous integration. We have to be able to automate not only the build but the entire verification process for the behavior we build into the system.





This is what it means to be agile in software development and to be able to respond to last-minute changes. If we have a manual process for assuring quality, then any minor change or addition to the system will often require a complete regression test of the entire system, making it prohibitively expensive.





My preferred way of building an automated system is through test-first development. Since all the code that I write when doing TDD is written to make a failing test pass, I usually have 100% code coverage. The tests that I write when doing TDD are behavioral unit tests, which I find are usually the best for validating features. I often find that it’s best to think about my unit tests, and testing in general, as happening in two phases and for two very different purposes.





The first set of tests that I write are behavioral tests, or acceptance tests, and I write them at the unit level around units of behavior. These become the acceptance tests for my project. They support me when I refactor my code because they should continue to pass when I correctly change the implementation. This gives me the support I need as a developer to refactor existing code, and it is one of the main values of having behavioral tests, which we get naturally when we do test-first development correctly.
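
To make this concrete, here is a minimal sketch of the kind of behavioral unit test I mean, written in Java with JUnit 5. The ShoppingCart class and its methods are hypothetical, invented for illustration rather than taken from the book; the point is that the assertion pins down an observable behavior, not the implementation behind it.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical domain class used only for illustration.
class ShoppingCart {
    private final java.util.List<Integer> pricesInCents = new java.util.ArrayList<>();

    void addItem(int priceInCents) {
        pricesInCents.add(priceInCents);
    }

    int total() {
        return pricesInCents.stream().mapToInt(Integer::intValue).sum();
    }
}

class ShoppingCartTest {
    @Test
    void totalReflectsAllItemsAdded() {
        ShoppingCart cart = new ShoppingCart();

        cart.addItem(250);
        cart.addItem(100);

        // The assertion is about observable behavior (the total), not about how
        // the cart stores its items, so the test keeps passing if the internal
        // list is later replaced by a running sum or some other representation.
        assertEquals(350, cart.total());
    }
}
```

Because the test only cares about the total, I could later refactor the cart’s internals and the test would keep passing, which is exactly the support I want when refactoring.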





These tests are great, but they may not be enough to validate all aspects of a feature, so I often build up a set of additional tests that reach a bit further into the implementation. I typically write these tests after I write my code, or someone on my QA team writes them with me. These tests may give me more confidence that the system is working as it’s supposed to, but they might break when I refactor the code, and if they do I generally delete them or rewrite them.





A healthy build is a prerequisite at nearly every client I work with. We depend so heavily on our build that when it breaks, it often means the entire team can’t make any progress. So keeping the build up and working is imperative.





Several years ago, my friend Thomas was showing me around his company one evening after everyone else had left. They were a true XP shop with team agreements on the walls and pairing stations everywhere. It was beautiful.





I noticed some heavy-duty hardware set up in the corner and asked my friend about it. “That’s our build server,” he said. “Want to see what happens when somebody breaks the build?”





He walked over to a workstation, logged in, and typed a few lines. Suddenly, all the fluorescent lights in the building went out. The emergency lighting came on and a siren began to sound. I thought it was the fire alarm and got up, ready to go, but my friend stopped me.





“This is not a fire,” he said. “This is what happens when somebody breaks the build. We don’t tolerate broken builds in our organization.”





When somebody breaks the build at his company, everybody knows it and it has to be fixed immediately. They can’t go to the bathroom or do anything else. They have to either fix the broken build or back out their changes so the system reverts to the state it was in before. That’s how dependent teams become when they have a reliable build.





A lot of crutches are bad, but I find that when I have a reliable build I can depend on, it gives me such great superpowers that the dependency is worth it.









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


June 5, 2019

Keep Test Coverage Where it is Needed


I’m not a believer in having standards for test coverage. I know teams that require 60%, 70%, or 80% test coverage for their code. I don’t like standards like this because different code has different requirements for testing.





Straightforward code like getters and setters doesn’t really need to be tested. However, more complex code that encapsulates business rules should be tested. When developers do “test after” development, writing their tests after they write their code, they typically look for the easiest code to test in order to meet their coverage standards, but often this is not the code that really needs to be covered.





If you do test-first development the way I recommend, then you never put any code into the system except to make a failing test pass. By that very definition, all the code that you add to the system is covered by unit tests, and indeed, when you do test-first development correctly you should achieve 100% test coverage.





That is in an ideal world, and there are a few things that can cause us to get dinged on test coverage, such as when we make calls to external libraries. These little exceptions aside, we should find that if we are doing test-first development well, we have 100% or nearly 100% test coverage.





People have argued with me that you don’t need unit tests for simple code like accessor methods, and I agree, in a sense. I don’t write unit tests for accessor methods because I’m trying to lock down their behavior. I write unit tests for accessor methods because I think of my unit tests as a form of specifying the behavior of my system. I would mention accessor methods in a written specification, if I were writing one, and since I think of my unit tests as living specifications, I want to “mention” them in my unit tests as well.
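
As a rough illustration of what “mentioning” an accessor in a living specification might look like, here is a tiny JUnit 5 sketch; the Invoice class and its customer-name accessors are hypothetical, not code from the book.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class used only to show the idea of "mentioning" accessors
// in a test that reads like a specification.
class Invoice {
    private String customerName;

    String getCustomerName()          { return customerName; }
    void setCustomerName(String name) { this.customerName = name; }
}

class InvoiceSpecification {
    @Test
    void remembersTheCustomerItWasIssuedTo() {
        Invoice invoice = new Invoice();
        invoice.setCustomerName("ACME Corp");

        // Not here to "lock down" trivial behavior, just to state, in an
        // executable form, that an invoice knows its customer.
        assertEquals("ACME Corp", invoice.getCustomerName());
    }
}
```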





Thinking about tests as a form of specifying and articulating the behavior of a system is a far more detailed and precise way of specifying behavior than a written specification, which carries all the ambiguities of spoken language.





Because I see my unit tests as specifications, I strive for 100% test coverage, but I understand if other people feel that’s unnecessary. I’m an idealist. I do find that thinking about test-first development as a form of specifying behavior in code really helps me understand how to write the right kind of behavioral tests for the system I’m building.





Of course, another great benefit of doing test-first development is that you achieve a high degree of test coverage and the code that’s produced is highly testable. These things tend to be good for software.





If you want to get really precise, then using tools like SonarQube you can look at test coverage based upon the code’s cyclomatic complexity. For example, we can say that we want 100% test coverage for all code whose cyclomatic complexity is above three. This approach may be a more accurate way to measure test coverage.
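
As a hypothetical illustration of that rule of thumb, using the common definition of cyclomatic complexity as one plus the number of independent decision points, the first method below scores well above three and is the kind of code such a policy would insist on covering, while the accessor beneath it would be exempt.

```java
// Hypothetical example: several branches push the cyclomatic complexity of
// shippingCostInCents well above three, so a complexity-weighted coverage
// rule would require unit tests for it.
class ShippingCalculator {

    int shippingCostInCents(int orderTotalInCents, boolean expedited, boolean international) {
        if (orderTotalInCents >= 10_000 && !expedited && !international) {
            return 0;                         // free standard domestic shipping on large orders
        }
        if (international) {
            return expedited ? 4_500 : 2_500;
        }
        return expedited ? 1_500 : 500;
    }

    // Complexity of one: a straight-line accessor like this adds little risk,
    // which is why weighting coverage targets by complexity makes sense.
    int freeShippingThresholdInCents() {
        return 10_000;
    }
}
```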





Ultimately, it’s the most complex code that we write that has the highest potential for defects, so that’s the code we most want covered by unit tests.









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


May 29, 2019

Write Testable Code


This has become my favorite subject, because it turns out that testability, above nearly every other metric, is the one to strive for in developing software. All other code qualities seem to be a reflection of testability.





When I say testable code, what I mean is code that can be verified programmatically and in a very granular way. When a test fails, we want it to tell us exactly why, so that we know what went wrong and can correct the problem immediately. This means that we’re verifying very small units of behavior. This not only helps us debug code when there are problems, it also helps keep us focused on building software that fulfills some desired behavior.





When we write our tests around behaviors rather than the way we implement those behaviors, we are free to change the implementation later, and if the behavior doesn’t change then our tests shouldn’t need to change either.





I know people who insist that every method must have a unit test. Well, I’m going to say a word here that I rarely say: that’s just WRONG. I don’t use that word often because there are usually exceptions or contraindications that make something that’s wrong in one situation right in another. But that is not the case here. If you write too many tests, or implementation tests, when doing TDD, then you’ll have trouble when you try to refactor your code later.





Forget about code coverage for a second and think about testing behaviors. We don’t need to test private methods for example, if they can be exercised through the public methods that call them.
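
Here is a small, hypothetical Java sketch of that idea: the private helper never gets a test of its own because the public method’s behavior exercises it completely.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical example: the private helper is an implementation detail,
// covered entirely through the public method that calls it.
class PriceFormatter {

    String display(int priceInCents) {
        return "$" + dollars(priceInCents) + "." + String.format("%02d", priceInCents % 100);
    }

    // Free to change, inline, or disappear during refactoring without
    // breaking any tests.
    private int dollars(int priceInCents) {
        return priceInCents / 100;
    }
}

class PriceFormatterTest {
    @Test
    void displaysCentsAsDollarsAndCents() {
        assertEquals("$12.05", new PriceFormatter().display(1205));
    }
}
```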





I find that a key characteristic of testable code is that it’s focused on doing one thing and produces some kind of externally visible result. Unit tests should be simple and easily verifiable.





If the only way you can test the system is by programmatically acting like a user, then you have essentially untestable code. System tests can be useful, but they shouldn’t be your only testing strategy. Thankfully, the days of printing call stacks and running the debugger are far less prominent, because unit testing frameworks are far more valuable and durable. Testing is an investment, and we want our investments to be valuable in the future.





Writing testable code means that the smallest components are independently verifiable. In order to do this, each component must have its dependencies injected into it. This means that code can’t reference global variables or use read/write singletons or service locators, and so on. This may be a slightly different way of thinking about building a program than you’re used to, but it can be a highly efficient and effective way of building software, and it can be programmatically verified.
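
A minimal sketch of that style in Java might look like the following; the OrderRepository and DailyOrderReport types are hypothetical, invented only to show constructor injection in place of globals, singletons, and service locators.

```java
// Hypothetical sketch of constructor injection: the report never reaches out
// to a global clock or a singleton repository, so a test can hand it fakes.
interface OrderRepository {
    int countOrdersOn(java.time.LocalDate day);
}

class DailyOrderReport {
    private final OrderRepository repository;
    private final java.time.Clock clock;

    DailyOrderReport(OrderRepository repository, java.time.Clock clock) {
        this.repository = repository; // injected, not looked up via a service locator
        this.clock = clock;           // injected, so "today" is controllable in a test
    }

    String summary() {
        java.time.LocalDate today = java.time.LocalDate.now(clock);
        return today + ": " + repository.countOrdersOn(today) + " orders";
    }
}
```

In a test, the repository can be a hand-rolled fake and the clock a fixed Clock, which is what makes this smallest component independently verifiable.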





What are some of the benefits of testable code? Well, for one thing, it means we can automate the verification process and keep our software in an always-ready-to-release state. It also makes last-minute changes, the kind of changes that are often very important on a project, trivial and essentially cost-free. These are the same kinds of changes that are often exorbitantly expensive to make late in a development project that doesn’t use automated regression testing.





Testable code is code of high quality. It’s cohesive and loosely coupled. It’s well encapsulated and in charge of its own state. In short, it’s singularly defined in its own proper place so that it’s straightforward to maintain and extend. Testability is one of my great teachers, revealing better options to me when I’m building software.





Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


May 22, 2019

Define Acceptance Criteria


“You are done, finished. Time to move on.”





How I long to hear those words when I’m working on a task. Specifications can be squishy, and it can sometimes be difficult to determine whether a task is complete or not. Software developers don’t want to gold-plate. The problem is that we often don’t know how our software is going to be used, and we don’t want it to fail in the field. As a result, we often try to make our code as robust as we can, and in that process we can end up overbuilding.





But there’s a solution to this dilemma, and it’s really simple: define up front what has to happen in order for the task to be considered complete. In other words, before you start, ask yourself how you will know when you’re done. It seems like a reasonable question. Before we embark on any journey or any task we should ask that question, because when we fail to ask it, it’s entirely possible that we’ll miss our mark. One of the problems with traditional specifications is that they often don’t clearly delineate exactly when a feature is finished.





But there is a way to do this. We call them acceptance tests, and whether they’re performed manually or they’re automated, they serve the same purpose: to tell us when we’re finished with a task so that we can move on. Moving on is important. It gives us the opportunity to work on other tasks and even come back to the first task and enhance it later.





Building maintainable, resilient systems requires thinking about the software development process a bit differently than most people do. It requires understanding the nature of change so that we can accommodate it when it happens.





When defining “done” we have to take several dimensions into account. In order for a feature to be considered done, it not only has to work as expected, it also has to be built in such a way that it’s straightforward to work with in the future. Just getting a feature to work is not good enough, not when that feature has to be maintained over time.





The software we write must be understandable not just to the person who wrote it but to other professional developers as well. That means we have to adopt common standards and practices for building code. If we’re working in an object-oriented language, then we have to know when to use inheritance and when not to. Unfortunately, the software industry is not as mature as other industries and professions, so we’re still struggling and reinventing the wheel in many places.





Today, acceptance criteria are central to my way of thinking about features. The very definition of a feature includes its acceptance criteria, because that’s how I’ll know when I’m done. I often express this using the “given, when, then” syntax: given some initial conditions, when a trigger occurs that invokes the behavior I want to create, then the system will be changed in specific ways. I can then compare these results to what I expect in order to determine whether my acceptance criteria have been met.
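
Expressed as a test, that structure might look like the following hypothetical JUnit 5 example; the Account class and the amounts are invented for illustration, with the given/when/then steps written as comments.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical example of acceptance criteria expressed as given/when/then.
class Account {
    private int balanceInCents;

    Account(int openingBalanceInCents) { this.balanceInCents = openingBalanceInCents; }

    void deposit(int amountInCents) { balanceInCents += amountInCents; }

    int balanceInCents() { return balanceInCents; }
}

class DepositAcceptanceTest {
    @Test
    void depositingIncreasesTheBalance() {
        // Given an account with an opening balance of $10.00
        Account account = new Account(1_000);

        // When $2.50 is deposited
        account.deposit(250);

        // Then the balance is $12.50
        assertEquals(1_250, account.balanceInCents());
    }
}
```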





Acceptance tests are just the formalization of our acceptance criteria, expressed in the form of a test or an assertion. They can be manual or automated. Again, we are simply asking the question, “How will I know when this feature is finished? What result will it produce?”





When I have clearly defined acceptance criteria for the features I’m building, I find I waste far less time overbuilding. Even more important, I find I can move on without worrying or feeling guilty that I haven’t done enough, because when my acceptance test passes, I know that I have accomplished what I set out to do.


May 15, 2019

Integrate Continuously


I believe that continuous integration is at the very heart of Agile software development. It allows us to go fast by going small. What I mean by this is that when we build in small increments, it’s more straightforward to create modular software that’s independently verifiable, and building smaller gives us more frequent feedback from our build, which helps keep us on track as we’re creating systems.





The first principle of the Agile Manifesto is:





“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”





Continuous integration is the way we implement “continuous delivery of valuable software.” I don’t believe the original authors of the Agile Manifesto meant that all software should be delivered on a continual basis, but I do believe that nearly all software should be built so that it is releasable at any point.





The way we built software in the Waterfall model was to put off integration until the end of the development cycle. We had already analyzed, designed, coded, and tested our individual modules, and then they sat in a queue somewhere waiting to be integrated. When we finally integrated all the modules in the system, we found the nastiest bugs. Integration was often a living hell that required long nights trying to find elusive defects that only showed up when we put the system together.





The best way I know, the only way I know, to avoid this integration problem is to not put integration off. Instead, do it all the time. When our build gives us fast, reliable feedback, we can integrate every little piece as we create it, and the system alerts us the moment there’s a problem so we can fix it immediately.





This makes a huge difference in a developer’s workflow, because they don’t have to spend 90% of their time reacquainting themselves with problem software that they didn’t know had a problem until long after they built it. This is the real beauty of continuous integration and, I think, one of the major values of Agile software development.





Scrum talks about feedback and how important it is. We get feedback from our customers. We get feedback from our Product Owners. But I think the most valuable feedback we get when we’re developing software is the feedback we get from our build. When we do continuous integration and have a good suite of unit tests that automates the verification of the features we’re building, we can confidently know our progress and know that major defects won’t creep into the system without us being alerted.





To me, that assurance is worth its weight in gold and it comes for free when you have a trustworthy continuous integration environment.









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


May 8, 2019

One-Click Builds End-to-End


We want to automate the build so that it is easy to use. We want it to be so brain-dead simple that we use it all the time and invoke it whenever we make even a minor change to the system. This is how we get the most value from our build: when we use it all the time.





To facilitate this we have the idea of the one-click build, which means that you can invoke the entire process with a single click. We can set this up so that when we save a change, it recompiles all of its dependencies on the local machine and then runs tests on the local machine. If those tests pass, the changes are pushed up to the build server and a more complete set of tests is run against them. All of this should happen automatically. Ideally, we want the build to run fast, and that means anywhere from a few seconds to a few minutes.





There is even a tool called NCrunch that watches your keystrokes, and when you enter an executable line of code it automatically invokes the build so you don’t even have to do the single click to kick it off. NCrunch is for .NET, and there’s an equivalent tool for the Java world called Infinitest.





The build should run fast and give clear results. A lot of the time when I’m developing, I only want to invoke my local build until I have a feature ready for integration. I still try to build the smallest behavior that I can so that I can quickly verify it and move on.





Our compilers, lint checkers, automated unit tests, and everything else we use in the field are powerful tools that require compilable and/or executable code. We need to be running these tools frequently, so I try to integrate the smallest pieces of functionality that I can in order to use these tools often.





As a result, I have become somewhat addicted to the green bar. I like to see the green bar when I’m coding, and very often if you ask me how I’m doing I’ll respond by saying, “Green bar!”





That’s my way of saying all is good.





I remember being at the office early one morning on a project about five years ago. I like coming in early in the morning because there’s no one there and I can get a lot of work done. Anyway, I was sitting there and I ran my tests, and then I sat there for a while and ran my tests again.





I didn’t know it, but one of my colleagues had come in and was sitting behind me, watching. He said he saw me run my tests and get a green bar twice without making any changes to the code, and he asked me why I did that. He was concerned that perhaps I thought the build was flaky and giving intermittent results, because running tests without making any changes shouldn’t change the test results. I said that wasn’t the issue. I said, “It just feels good.”





I must admit that I love being in the green bar. The green bar means safety, because I can freely refactor my code while I’m in the green bar and I know that if I make a mistake my tests will catch it. This is the value of having a good, reliable build that is invoked with a single click. It has your back, and what more could you ask for?









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


May 1, 2019

Use Version Control for Everything


Version control has been an important part of file management on every software development project I’ve worked on over the last two decades, regardless of the methodology. I’ve used Subversion, Git, and several others in the process of building software.





I always find it a bit surprising that other industries don’t use version control nearly as much as we do in the software industry. For example, I know several writers who know nothing about version control, and they would benefit greatly from it. The same is true for several artists and graphic designers I know. I find that version control is extremely easy to work with and gives me some key benefits.





Most people recognize that one of the major benefits of version control is being able to go back in time to before a feature was added, or to create branches that allow you to test and change different parts of the system without making those changes permanent. Another advantage I find in version control is that it allows me to see the order in which a system was built, and very often that tells me a lot about how it was designed, so that I better understand how to work with it.





I find that check-in messages are also useful when working with version control because they give me an opportunity to say what I did when adding a feature.





I remember teaching a class for a client with a very major web presence. They were having difficulty because there was some kind of discrepancy between their test environment and their production environment. They would test a release candidate and it would work perfectly, but when they released it, it didn’t behave as it had during testing, and they were really confused by this because it was causing issues for their customers.





They had developed a release strategy where they had two data centers: they would put a release candidate on one of the data centers and give it just 1% of the traffic. If nobody had any complaints, they would bleed off another percent, and then another, until slowly, over the course of three weeks, the release-candidate data center had 100% of the traffic. Then they would put the next release candidate on the other data center and start the process all over again. This was extremely painful and time-consuming.





I was teaching a class to their lead engineers, and I was talking about how important it is to version not just all of your code but everything involved in the build, including stored procedures, configuration files, database schemas, and everything else that the build depends on.





After I said this, three of the students in the class got up and walked out of the room. I was afraid that I had offended someone, and later, when I saw one of them on a break, I asked if I had said something wrong.





“No,” he said. They had been having issues with the consistency of their build environment, and they wanted to verify that all the things the build depended on were included under version control. They looked into it, and I got an email from him a few months later saying that they had discovered some of the files their build depended on were not included in version control. When they fixed this, they were able to get a consistent build environment, which made it far easier for them to put release candidates into the field.





Now that their test environment was the same as their production environment, they were able to put their tested release candidates into production with confidence that they would run. So remember: version everything that your build depends on, and try to keep your version control practices as frictionless as possible.





Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Agile Infrastructure.


April 24, 2019

Why Practice 3: Integrate Continuously


Once we define what we want and build the smallest increment, we want to integrate it into our system as soon as possible so that we can see it working and get a true measure of our progress.





Continuous integration is at the very heart of every Agile project, and so I wanted to introduce it as early as possible. To me, it’s the third central pillar of Agile software development. Continuous integration means having a fully automated build that not only compiles and runs the system but also fully tests it without the need for any human intervention.





When teams learn how to create a reliable build with dependable unit tests then they find that their confidence in their system goes way up and developers are able to freely change, refactor, and extend their code without the fear of constantly breaking it.





This is one of the main values of continuous integration: the ability to get instant feedback when you add a feature and know whether it breaks anything in the system. With that level of instant feedback, you can immediately back out your changes if they break something. This means you always have a buildable system.





When I worked on a Waterfall development team many years ago, we used to fear integration. We put it off as long as possible, to the end of the project, because we knew it would be painful. Continuous integration takes an entirely different approach. Instead of putting integration off, we do a little bit of it every day, several times a day in fact, and by doing so we turn what used to be a burden into a tremendous asset.





Agile and Scrum talk about all sorts of different kinds of feedback. We get feedback from our customers. We get feedback in our retrospectives. But to me, the most valuable kind of feedback in Agile software development is the feedback we get from a good suite of unit tests running on our build server. These tests help guide us as we build a system, making sure we don’t break anything as we go, and that feedback, immediate and highly interactive, is invaluable when constructing software.





The following seven blog posts are based on Seven Strategies for Agile Infrastructure from my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.





These next seven posts discuss some of my favorite tips and tricks for setting up a dependable and easy-to-use build server, along with other Agile infrastructure. You can set up a build server on any cheap PC or even in a virtual machine. It doesn’t have to cost much, and all the software you need is free or inexpensive, depending upon which route you take.





The other thing that I find valuable about Agile infrastructure, and about focusing on it early, is that it creates a context for all the other Agile technical practices. Having a build server that can invoke a suite of unit tests gives us a place to put the unit tests we create. And when we make the build easy to use, it incentivizes developers to use it, and they will. And when they do, not only will they benefit greatly but so will their organization.









Note: This blog post is based on a section in my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software, called Seven Strategies for Measuring Software Development.
