David Scott Bernstein's Blog, page 17

February 21, 2018

Keep Defects from Escaping

Toyota did not start out as a car company. They started by making industrial looms for weaving textiles. They became a leading loom manufacturer by building looms that required minimal human intervention, because they recognized that an operator’s time was expensive. If one operator could run a dozen looms instead of just one, production was far more cost-effective. Their customers quickly embraced this philosophy, and Toyota became a leader in automating loom operations. This philosophy is at the very core of Toyota, and when they started to manufacture cars, they applied very similar concepts.


One of the key differences between Toyota’s method of manufacturing cars and the traditional method was that traditional assembly lines tried to ensure quality through a series of rigorous inspections. But Toyota recognized that these inspections were a huge waste. Instead, they put their attention on finding ways to prevent defects in the first place.


Every workstation on a Toyota assembly line has a big red button designed to stop the line. If your job is to bolt seats into a car and it looks like you’re not going to be able to finish one before your car leaves your station, you push the red button. The first time you push the red button, a supervisor appears, and they don’t reprimand you. The supervisor’s job is to help you finish your task before the car leaves your station, so the supervisor dives in and helps. If it looks like even both of you won’t finish the task in time, you push the red button a second time, and that stops the line until your task is done.


Stopping the line means the car will not leave a station until the job of that station is complete. It also means that no cars will leave any other stations, because the entire assembly line is down. Because of this, there is no need for extensive inspections after a car comes off the assembly line.


Something else happens when an operator stops the line: everyone involved does a retrospective. The purpose of the retrospective is not to lay blame but rather to figure out how to improve the system so that the situation is never encountered again. In this way, Toyota is continuously improving and fine-tuning their process.


This may seem highly inefficient, but in practice the line does not stop very often.


This is a rich analogy that shows us how we can build software much more effectively. Traditional software development has a lot of similarities to traditional manufacturing. We often design, build, and test code in distinct phases. While this makes sense in the construction industry, it’s very inefficient to apply these same concepts to creating software. Lean teaches us to detect defects early because it’s cheaper.


Many years ago, I read a report from TRW that said if it takes one unit of effort to change a program in the design phase, it takes 67 units of effort to make that same change right before release. So we’ve known since the sixties that early detection is more cost-effective, but Waterfall methodology still waits until the end of the process to inspect and detect defects.


While early detection is cheaper than late detection, it’s cheaper still to prevent errors from happening in the first place, and this is what the practices of Extreme Programming are all about. With a good continuous integration server and reliable unit tests, we can catch defects as they’re written so they can be fixed immediately and often with minimal effort. This changes the dynamic of development: both quality and velocity start improving.
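As a rough sketch of what that dynamic looks like in practice, here is a tiny function guarded by a unit test. The pricing rule and all the names are invented for illustration; the point is that under continuous integration, a defect in the rule fails the build minutes after it is written rather than surfacing in a late inspection phase.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_apply_discount():
    # This test runs on every commit. If someone later breaks the rule,
    # the build goes red immediately -- the software equivalent of
    # stopping the line.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
```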

Published on February 21, 2018 08:59

February 14, 2018

Why Why

In my last post, I discussed barely sufficient documentation and I said that the best place to express what the code is doing is in the code itself. But there is another form of documentation that we often overlook. It’s actually the most important kind of documentation yet we rarely devote even a sentence to it. The documentation I mean is not about what the code is doing but why we’re doing it in the first place.


Code can express what it does, but it’s much harder for code to express why it’s doing that thing. The context for software is often outside of the software itself. The way most teams choose to address this is to ignore it completely, but understanding why we’re making certain decisions, or implementing a design in a certain way, can be very important for understanding how to extend that design correctly in the future.


When I ask developers why they made a particular design choice they always have a reason. We make decisions based upon the knowledge we have at the time, but very often the reasoning behind our decisions isn’t documented, so it’s lost to future developers who might have to maintain that code.


I strongly believe that although code should express what it’s doing, we still need external documents to describe why the code is doing what it’s doing. Understanding the reasoning behind a design can help us a lot when we go to extend or modify that design.


I distinguish between what I call “what” comments and “why” comments. The what comment describes what the code is doing. When I feel the need to write a what comment, I often step back and ask myself if there are ways to express what the code is doing more clearly in the code itself. Often this leads me to rewrite some of the code so it’s more expressive.


Comments that express why a design is the way it is are entirely valid. We might say that we’re conforming to a federal regulation or that the object we’re using is a collaborator in a specific design pattern. These kinds of comments can be very helpful to readers in the future.


Vision documents and other artifacts can also be helpful in expressing why the code is doing what it’s doing. By understanding that rationale, we’re often better equipped to work with the code in the future. I’ve worked on hundreds of projects in my career and rarely have I ever seen any documentation on why decisions were made, even though many of the developers I talk to agree that this kind of information would be very valuable.


So why don’t we create this kind of documentation? The answer is typically that we haven’t been asked to. If our Product Owner isn’t asking for it, we don’t do it.


An organization has to be aware that these kinds of activities can be valuable to those who will be maintaining the code in the future. Unlike “what” documentation, which often feels like busywork, documenting why we decided to do things the way we did can be a lot of fun. It gives developers a chance to express the reasoning behind their choices, and we all love doing that.


So the next time you make a design choice, ask yourself why you made that decision, then write down the answer. It could be very helpful for another developer in the future.

Published on February 14, 2018 08:08

February 7, 2018

Barely Sufficient Documentation

In Agile, when we say “barely sufficient documentation,” we’re not referring to user documentation. User documentation has been notoriously bad throughout the entire history of the software industry. As an industry, we must do much better with user documentation than we have in the past, but this post is not about user documentation, it’s about internal documentation. It’s about documenting the design and the code for other developers so that it will be more understandable and easier to work with later.


There are two important aspects to internal documentation, one of which is often neglected: documentation to express what the code is doing, and documentation to express why the code is doing what it’s doing.


In traditional software development, we spend an enormous amount of effort documenting what the code does. We do this with design diagrams, specification documents, and comments in the code. But this kind of documentation can actually become an impediment to maintainability rather than an aid to it.


The ultimate document that explains exactly what the code is doing is the code itself. Anything else is a distant second, and as we write more documents that express what the code does, or pepper our code with comments that say the same thing, there is a tendency to make the code itself less expressive. This is a mistake.


The reason we use programming languages rather than hexadecimal machine instructions to make a computer do something is so that the instructions are understandable to us, the people who write and maintain the code. We have a very different set of concerns than the computer does. The computer isn’t trying to understand the code we write, it simply executes it. But in order for us to change that code, we must understand it. And so we write our software in an intermediate form that’s comprehensible to us and can get compiled down into something the machine can execute.


If you look at the code generated by our compilers, it’s highly efficient. And if highly skilled programmers were willing to sit down for many hours, they might be able to write code that’s even more efficient. But efficiency is not our only concern. We’re also concerned with understandability. And if we have to trade some efficiency for greater understandability, it’s often worthwhile to do so.


As we take these ideas to their logical conclusion, we realize that the bulk of programming is about communication. Our job as programmers is to make our code expressive so that it’s understandable, not just to us but to others as well. We make code understandable by using metaphors and analogies, as well as good, intention-revealing names, to model the domain we’re working in. We should name methods for what they do, so the name of a method becomes the most important form of documentation.


In the past, managers have accused me of encouraging developers to write uncommented code, and while this is true, I have a good reason for it. Software developers don’t like redundancy, and if they know they have to write a comment that says what the code is doing, they’ll tend to rely on that comment to communicate the intention of the code rather than the code itself. When this happens, the code becomes less expressive, and it can be a chore to read.


Instead of writing a lot of comments in code, we should find ways of communicating that information with the code itself so our code is more expressive and there’s less need to comment on what the code is doing. However, using block comments to express why the code is doing what it’s doing can often be very helpful and appreciated by the people who maintain the code.
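As an invented illustration of the difference: the “what” comment below disappears once the code is made expressive, while the “why” comment earns its keep. The legal-age rule and every name here are hypothetical, stand-ins for whatever your domain requires.

```python
# Before: a "what" comment restating opaque code.
#     # check if user is old enough
#     if u.a >= 18:
#         ...
#
# After: an intention-revealing name makes the "what" comment unnecessary,
# and the only comment left explains "why".

# Why 18: assumed age-of-majority regulation in our (hypothetical) market.
LEGAL_AGE = 18


def is_of_legal_age(age_in_years: int) -> bool:
    return age_in_years >= LEGAL_AGE
```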

Published on February 07, 2018 09:15

January 31, 2018

Take Your Caller’s Perspective

As software developers, many of us have been trained to dive right into implementation. As soon as we learn about a problem, we think about how to code a solution.


But this is not always the right approach. Sometimes, it makes more sense to step back and look at the big picture, to try to see how the piece we’re working on will fit into the whole system. Many integration and incompatibility issues can be solved if we take a moment to consider them up front. The expression is “ready, aim, fire” not “fire, aim, ready.”


So then what do we need to know before we dive into implementation?


Start by taking your caller’s perspective. In other words, when writing an API, don’t think about what you want to get as parameters but rather think about what your client wants to give you. This advice is echoed in the fifth SOLID principle of software development, the Dependency Inversion Principle.


We normally think about implementing an API without giving much consideration to how our clients want to call it. But that can make things awkward for our callers, so instead we take their perspective and build a method signature for our API that corresponds to what they want to give us.


Of course, sometimes what our callers want to give us is not what we want to receive, and it’s part of our job to reconcile this. For example, if our caller wants us to parse a document on a server, they may want to give us the URL of that document. In this case, they’re asking us, the API provider, to do two things: first to locate and retrieve the document from the server, and then to parse it. This may make perfect sense from the caller’s perspective, but these two activities, when combined, make our API very difficult to test. What we should do instead is break them up internally using a simple technique called peeling.


Peeling can be useful in situations where the first thing we need to do is retrieve some data we’ll then act on. The first few lines of our API are involved in locating and accessing the data. If we peel these lines out into their own method, our API can call that method and then pass the result to the main method that parses the document. This breaks the dependency on needing a web server: we can test the parsing code by passing a document directly to the internal method rather than spinning up a server. Without the peel, testing our API would require bringing up a web server.
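Here is a minimal sketch of peeling. The class name is invented, and JSON stands in for whatever document format the real API would handle; the point is that only the thin peel touches the network, so the interesting logic is testable with a plain string.

```python
import json
import urllib.request


class DocumentParser:
    def parse_from_url(self, url: str) -> dict:
        # Public entry point: a thin "peel" that only fetches,
        # then hands off to the testable core.
        raw = self._fetch(url)
        return self.parse(raw)

    def _fetch(self, url: str) -> str:
        # The only code that needs a web server.
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8")

    def parse(self, raw: str) -> dict:
        # All the interesting logic lives here. Tests pass a string
        # directly instead of spinning up a server.
        return json.loads(raw)
```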


Most of the time, by taking our caller’s perspective, we can forge method signatures that are more convenient to our callers so they’re easier to use. This can greatly simplify the software we build.


Taking our caller’s perspective can also help us create better encapsulation, because it reinforces the idea that an object has an inside and an outside. This allows us to hide implementation details inside an object so that those outside are unaware of them, and when those details change, callers are not affected.


Taking our caller’s perspective helps us build out observable behaviors in a system, which can provide a great deal of overarching cohesion to our software.

Published on January 31, 2018 11:37

January 24, 2018

Slow Down to Go Faster

I know it seems like a paradox, but often we have to slow down in order to improve our productivity. We need to step back, look at what we’re doing, and see if we can find more effective ways to meet our goals.


A NIST study (http://www.cse.psu.edu/~gxt29/bug/localCopies/nistReport.pdf) showed that over 80% of software developers’ time is spent undoing the things they did in the first 20% of their time on the project. Developing software is fraught with inefficiencies, from the process of writing down and then interpreting requirements, to the separation of coding from testing, to the enormous amount of time lost to debugging.


Clearly, there must be better ways of constructing software than the way most people are doing it, and there are, but some of the techniques are not commonly known.


Extreme Programming (XP) has been promoting emergent design and test-first development since the early 2000s, and those of us who apply these practices well have been seeing their value and have been reaping great rewards. But this is a complex field and just following the practices of Extreme Programming doesn’t guarantee success. Writing software involves many skills and building it well involves even more skills.


I’m an advocate for test-first development because I find it helps me go faster by slowing me down.


Test-first makes me first think about the interface I want to create and then build a strong contract around that interface. Building strong contracts around interfaces is an important skill for good software architecture, and TDD has me focus on this as one of the first things I do when building a system. I like that.
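A sketch of what that looks like, using an invented temperature-conversion example: the test comes first and pins down the name, the parameters, and the contract of the interface before any implementation exists.

```python
import unittest


# Written first: the test decides the interface and its contract.
class TemperatureConverterTest(unittest.TestCase):
    def test_converts_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_converts_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)


# Written second: the simplest implementation that honors the contract.
def celsius_to_fahrenheit(celsius: float) -> float:
    return celsius * 9 / 5 + 32
```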


Another way TDD helps slow me down is by concretizing abstract requirements: it gives me examples of the behaviors I want to create. This is a very different way of thinking about building software than the one most of us were trained in.


In the past, if I wanted to handle image compression, for example, I would go off and build an image compression library. It would involve weeks of design and planning, and then even more weeks of coding, until enough of the system compiled that I could actually test it at some level, and that might be several months into a project. In many ways, I’m flying blind. I get no feedback from my tools until my code compiles and runs.


By contrast, building a system incrementally using test-first development and specification by example means you always have a working version of the system and you’re building it interactively. All the tools you have, including your compiler and all the unit tests you’ve written, are constantly running to support you and tell you when you make mistakes. This is a far preferable way of building software because it’s safer.


Which brings me to the best way to slow down: having a fast build.


Having a fast build is central to doing Extreme Programming effectively. If you have a slow build, you probably have technical debt that must be paid off for your system to run more efficiently, in which case it also pays to slow down and pay off that debt.


Understanding design options and principles also helps me slow down, because I know that it’s often trivial to refactor code from one design to another. I don’t have to feel rushed into committing to any particular design when I’m building a system, because I know that as I learn more I can refactor it fairly easily.


With techniques that allow me to build systems a piece at a time, allowing the design to emerge, I find that the quality of my work increases significantly. Emergent design does work—when we pay attention to some fundamental principles that make code more testable and straightforward to work with.

Published on January 24, 2018 08:17

January 17, 2018

Changeable Code

So, what are the practices that support changeability in code?


There are surprisingly few, and I’ve written about them extensively here in this blog, in my book, and I discuss them in detail in my courses: inject dependencies, embody the SOLID principles, keep code quality high, and understand design patterns… There’s a lot to it but the list is finite. If you’re aware of just a few dozen key concepts you’ll be well equipped to write changeable code.


We can sum up what good code is all about by saying that it is changeable. That’s why we care about code at all: we need the ability to change it. If it were sufficient to write software once and never touch it again, we could write it any way we wanted and the quality of the code wouldn’t matter. But since software will need to change over its lifetime, we must write code in such a way that it is changeable.


This is why we moved to object-oriented programming and away from procedural programming. By defining our programs as a collection of objects, we could limit the knowledge of each object in the system, thus decreasing the impact of change across a system.
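A small invented example of that knowledge-limiting: the account below hides how it stores its balance, so switching the internal representation later would not ripple out to callers.

```python
class Account:
    """Callers depend only on deposit() and balance(), never on how the
    balance is represented internally."""

    def __init__(self) -> None:
        # Internal detail: integer cents, to avoid floating-point drift.
        # We could change this representation without breaking any caller.
        self._cents = 0

    def deposit(self, dollars: float) -> None:
        self._cents += round(dollars * 100)

    def balance(self) -> float:
        return self._cents / 100
```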


That was the intent of coding to an object model, anyway. Object-oriented programs are far more encapsulated and maintainable than procedural programs because, if we create our objects well, they will encapsulate their implementations and give us the freedom to change them later without breaking a lot of other code.


But object-oriented programs have their own problems.


Just because we can write decoupled code in an object-oriented language doesn’t mean all developers do. Object-oriented programs are like power tools. You can cut more wood with a chainsaw than a hand saw but you can also hurt yourself much worse. With the great power of object-oriented programming also comes the great responsibility to use OO in ways that improve maintainability.


Most programs that I’ve seen written in an object-oriented language do not take advantage of any object-oriented features. They are procedural programs written in an object-oriented language, so they’re difficult to maintain and extend. Objects aren’t encapsulated so their responsibilities become spread out. Even when objects are encapsulated, we build dependencies in such a way that it’s nearly impossible to break them, so our code isn’t independently testable or reusable.


When we write procedural code in an object-oriented language we miss opportunities to increase maintainability. It’s like mixing metaphors. Ideas become hard to follow. Behaviors are intertwined with other objects. Code degrades into a mess.


Supporting changeability is all about writing functionally independent pieces that are independently verifiable. It’s about creating accurate models in code. The “right design” is revealed by the problem itself, so we have to learn to listen to our code and see what we need to do to improve it.


My suggestion is to start by paying attention to testability because when we can verify that a small behavior does what it’s supposed to do, independently from the rest of the system, it’s more likely that code will be straightforward to change in the future.

Published on January 17, 2018 08:29

January 10, 2018

Doing More of What We Love

Developers love writing code. It’s what we’re good at. It’s what we do.


But all sorts of things get in our way: Meetings. Reading and interpreting specifications. And the worst time-sink of all—debugging. Most of us would rather do anything other than debugging, but that’s where we end up spending a lot of our time.


Doing test-first development isn’t debugging, it’s writing code.


It replaces the need for extensive internal documentation because it exercises the system the way it will be used. It concretizes abstract requirements by forcing developers to create examples for the behavior they want to build through the act of writing tests.


This is equivalent to creating a rough sketch of what you want to build before you build it. Test-first development helps keep you on track and lets you know when you’re done building a feature. And knowing when to stop building a feature is important. We tend to overbuild not because we like overbuilding, but because we’re scared that a feature might be used in the field in ways we hadn’t anticipated.


Having a test that embodies the service we want to create, how we want to call it, and what we expect back, is equivalent to aiming a rifle before you fire it. Knowing your criteria for completion before you start is a good habit to get into.


All this is embodied in the unit test we write before we write the production code. Tests are the concretization of abstract requirements so they’re easier to understand and work with. Writing the test first isn’t busy work, it helps us nail down the behaviors we want to create and think about them apart from their implementation. This is central to properly encapsulating a system so it’s modular, verifiable, and extensible.
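For instance (the shipping rule here is invented), a requirement like “orders over $50 ship free” stays abstract until tests concretize it with actual numbers, including the boundary case.

```python
def shipping_cost(order_total: float) -> float:
    return 0.0 if order_total > 50 else 4.99


# The tests turn the abstract requirement into concrete, checkable examples,
# written before the implementation and apart from it.
def test_orders_over_fifty_ship_free():
    assert shipping_cost(50.01) == 0.0


def test_orders_at_or_under_fifty_pay_flat_rate():
    assert shipping_cost(50.00) == 4.99
```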


Writing the test first forces us to think of the services we write from our caller’s perspective. This also helps simplify the system. When we write the test first it guarantees that the code we write to make that test pass will be testable. It must be, by definition, so this means we’re always writing testable code.


Too often I see developers write their code first, only to find that writing a unit test for the existing code is so hard that they have to rewrite the code to make it more testable. If you write the test first, you never have this problem.


Doing TDD is writing code: test code and production code. We spend less time on written requirements and debugging and more time on what we love doing most: writing code.

Published on January 10, 2018 09:36

January 3, 2018

Don’t Be Scared to Touch Legacy Code

As Frank Herbert said in Dune, “Fear is the mind-killer.” He could have been talking about legacy code. As soon as we fear touching legacy code, we won’t do it.


As a consultant, I know that merely giving my clients tools and techniques for working with legacy code isn’t enough; I also have to help them get over the fear of working with it. That can often be more challenging than teaching the skills themselves.


Most developers fear working with legacy code for good reason. They’ve had the experience of making a tiny change to an existing system only to see it sprout several bugs. The way most systems are built, a tiny, seemingly innocuous change to a system can cause unanticipated consequences. Breaking a system in production costs time and money, so we try to avoid it.


Most existing code is so intertwined that it’s very difficult to change. Without good automated tests, we have to manually retest the system, which can be expensive and time-consuming. This discourages last-minute changes, which are often the most important to the customer.


It’s not easy, but we have to break this cycle. We have to find ways of restoring our confidence in legacy code and our ability to improve it. The problems that companies are facing with their existing code are not unique. I see the same challenges show up again and again on team after team. These are solvable problems. There are safe, disciplined ways of transforming legacy code into reliable, repeatable code. We just have to get over the fear of touching legacy code.


When we integrate continuously and have reliable automated tests for even the smallest bits of functionality, we can make changes to code with confidence, knowing our tests will tell us if something breaks.


When we get instant feedback from our build and test suite whenever we change any part of our system, the cost of making changes goes way down and developers no longer fear touching the code. With version control we can always revert the system to a previously known state, and with good test coverage that fear drops even further.


Once the fear of touching legacy code subsides, we realize there are ways of dealing with it. We can use known transformations to improve it safely. With effort, it is possible to improve legacy code so it’s more straightforward to work with and extend.


To learn more, check out my book, Beyond Legacy Code: Nine Practices to Extend the Life (and Value) of Your Software.

Published on January 03, 2018 08:58

December 20, 2017

Flying Blind

The way most software is developed—designing entire programs on paper and then typing them into a computer in their entirety before they run—is like flying blind.


Software is virtual. We can’t see it or touch it. The only feedback we get is when we compile and run our program. But on a Waterfall project that can take weeks or even months to happen.


If I were building a compression library using Waterfall, for example, I’d design the platform, code the services, and it might take several months before I could start running the program through a debugger. All of that time I’d be flying blind.


Without the feedback of my compiler, I won’t even know if I’m making syntax errors. I just have the fallible human process of checking by hand until I get enough of the program written that I can run it and see some results.


This is why I write software test first.


I start with the smallest test I can think of so I have something up and running right away. I then add the smallest bits of functionality I can think of as I build out the system, all the while compiling and running my tests as I go.


Doing TDD means that I always have a buildable version of the system as it’s being built, even on day one. My system may not yet be able to do anything meaningful on day one, but it’s buildable, and as the smallest bits of functionality get added, all of the previous tests get run to ensure there are no subtle interactions between components and to keep things functionally independent.
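As a sketch, assuming a toy compression library built this way: the first test is the smallest one imaginable, and the run-length encoding here stands in for whatever real algorithm would emerge test by test.

```python
def compress(text: str) -> str:
    """Simple run-length encoding, grown one small test at a time."""
    if not text:
        return ""
    out, run, count = [], text[0], 1
    for ch in text[1:]:
        if ch == run:
            count += 1
        else:
            out.append(f"{run}{count}")
            run, count = ch, 1
    out.append(f"{run}{count}")
    return "".join(out)


# Day one: the smallest test that proves the system builds and runs.
def test_empty_input():
    assert compress("") == ""


# Later increments, each added with the previous tests still running.
def test_repeated_characters():
    assert compress("aaab") == "a3b1"
```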


When I do TDD I can see into my system and always know what state it’s in: green or red. A green bar means everything’s great; a red bar, not so much. We never let our system stay red for very long, and since we make small, incremental changes, we know it’s usually the last change we made that broke the system, so we can easily back the change out with version control.


Our tools are only as good as we let them be. A shortwave radio can be used to call for help, but only if you turn it on. In order for any of our software tools to work (and this includes compilers; lint checkers; linkers; and unit, functional, integration, and system tests) you have to have a buildable version of your code so you can exercise it the way it will be used in the program. The fast and easy way to do this is to build a test harness. And if you’re going to have unit tests for your code, then you may as well write it test first.


Then you’ll be flying with your eyes wide open.

Published on December 20, 2017 07:37

December 13, 2017

Change Your Mind, Again

When I was a kid I was told that it was a woman’s prerogative to change her mind. For some people, though, changing your mind can be seen as wishy-washy. We like to stand by what we say. We like to be right.


But giving yourself the freedom to change your mind is one of the key characteristics of all great designers.


Assuming we can’t accurately predict the future—and we can’t—we must find ways of accommodating change when it inevitably happens. This is entirely possible if we make maintainability and changeability of our code a priority. But most developers build software that’s intertwined with itself and is hard to independently test and extend. We overuse inheritance, write concrete implementations, construct the services we use… and these things make it difficult to modularize our code.


But if we pay attention to good design principles and practices, we should be able to change our minds and refactor our designs without having to pay a high price later. This is the whole purpose of good object-oriented software. It’s not just to make the computer do something today, it’s to create a model that’s understandable and reliable so that it can be understood in the future and be adapted to the changing needs of the user. Building software so that it remains an asset not just right now but far into the future can be done through understanding what’s being modeled and modeling it in such a way as to reflect that understanding. This allows others to see how to extend the design naturally.


Being willing to change your mind means that you can start anywhere and refine as you go. It means you don’t need to figure it all out up front before you get started, and it turns out that figuring things out as you go can be much more efficient. The key to being able to change your mind effectively in software development is to know how to refactor code.


Usually, refactoring code from one design to another is simply a matter of repackaging the code, so refactoring a design to accommodate new features does not usually require a lot of extra work. When the code is under automated tests, there are ways to refactor it safely. Most of the time, moving from one design to another is quite straightforward, making it easy to change your mind.
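A minimal invented example of that repackaging: extracting the totaling logic into its own function changes the design without changing behavior, so tests written against the old shape still pass unchanged.

```python
# Before the refactoring, receipt() computed the total inline:
#     def receipt(prices):
#         return f"Total: ${sum(prices):.2f}"
#
# After: the total is repackaged into its own function, ready to be
# reused or extended, while observable behavior stays the same.

def order_total(prices: list[float]) -> float:
    return sum(prices)


def receipt(prices: list[float]) -> str:
    return f"Total: ${order_total(prices):.2f}"
```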


In Agile, we go into iterations with unknowns. We learn as we go, but sometimes we get blocked. Sometimes the problem appears to have one solution but then as we get new requirements we realize we need a more flexible solution. Once we know that, we can refactor our design into something better. We don’t have to make all the right decisions up front as long as we’re willing to go back and improve our designs in the light of new information.


Changing your mind tends to mean you’ve discovered a better solution—take advantage of it!

Published on December 13, 2017 08:59