Markus Gärtner's Blog

February 10, 2016

Testing inside one sprint’s time

Recently I was reminded of a blog entry from Kent Beck way back in 2008. He named the method he discovered during pairing the Saff Squeeze, after his pair partner David Saff. The general idea is this: write a failing test at whatever level you can, then inline the called code into the test, and remove everything that you don’t need to set up the test. Repeat this cycle until you have a minimal error-reproducing test procedure. I realized that this approach may be used in a more general way to enable faster feedback within a Sprint’s worth of time. I sensed a pattern there. That’s why I decided to get my thoughts down while they were still fresh – in a pattern format.



Testing inside one Sprint’s time

As a development team makes progress during the Sprint, the developed code needs to be tested to provide the overall team with the confidence to move forward. Testing helps to identify hidden risks in the product increment. If the team does not address these risks, the product might not be ready to ship for production use, or customers might shy away from it because it has too many problems that make it hard to use.


With every new Sprint, the development team implements more and more features. With every feature, the test demand – the number of tests that should be executed to avoid new problems with the product – rises quickly.


As more and more features pile up in the product increment, executing all the tests takes longer and longer up to a point where not all tests can be executed within the time available.


One usual way to deal with the ever-increasing test demand is to create a separate test team that executes all the tests in its own Sprint. This test team works separately from new feature development, working on the previous Sprint’s product increment to make it potentially shippable. This might help to overcome the testing demand in the short run. In the long run, however, that same test demand will keep piling up to a point where the separate test team can no longer execute all the tests within its own separate Sprint. Usually, at that point, the test team will ask for longer Sprint lengths, thereby widening the gap between the time new features are developed and the time their risks are addressed.


The separate test team also creates a hand-off between the team that implements the features and the team that addresses risks. It lengthens the feedback loop between introducing a bug and finding it, causing context-switching overhead for the people fixing the bugs.


In regulated environments, there are many standards the product should adhere to. The additional tests they require often take a long time to execute. Executing them on every Sprint’s product increment, therefore, is not a viable option. Still, to make the product increment potentially shippable, the development team needs to fulfill these standards.


Therefore:

Execute tests on the smallest level possible.


Especially when following object-oriented architecture and design, the product decomposes into smaller pieces that can be tested on their own. Smaller components usually lead to faster test execution times since fewer sub-modules are involved. In a large software system involving an application server with a graphical user interface and a database, the business logic of the application may be tested without involving the database at all. In hardware development, the side-impact system of a car may be tested without driving the car into an obstacle, by using physical simulations.
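As a software illustration of the database point, here is a sketch with hypothetical names: when the business logic depends only on an abstract repository, a test can substitute an in-memory fake for the real database.

```python
# Hypothetical business rule: orders above a threshold get a discount.
# The rule asks a repository for prices; the test passes an in-memory
# fake instead of anything backed by a real database.

class InMemoryPriceRepository:
    def __init__(self, prices):
        self._prices = prices  # article id -> unit price

    def price_of(self, article_id):
        return self._prices[article_id]

def order_total(repository, items, threshold=100.0, discount=0.1):
    # items is a list of (article_id, quantity) pairs.
    total = sum(repository.price_of(aid) * qty for aid, qty in items)
    return total * (1 - discount) if total > threshold else total

# Fast test of the business logic, no database involved:
repo = InMemoryPriceRepository({"A1": 40.0, "B2": 25.0})
assert order_total(repo, [("B2", 2)]) == 50.0             # below threshold
assert order_total(repo, [("A1", 2), ("B2", 1)]) == 105.0 * (1 - 0.1)
```

In production, the same `order_total` would receive a repository backed by the database; the test suite stays fast because it never touches it.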


One way to develop tests and move them to lower levels in the design and architecture starts with a test on the highest level possible. After verifying that this test fails for the right reasons, move it further down the design and architecture. In software, this may be achieved by inlining all production code into the test, and afterwards throwing out the unnecessary pieces. Programmers can then repeat this process until they reach the smallest level possible. For hardware products, similarly focused tests may be achieved by breaking the hardware apart into sub-modules with defined interfaces, and executing tests at the module level rather than the whole-product level.


By applying this approach, regulatory requirements can be broken down into individual pieces of the whole product, and can therefore be verified faster. Taking the requirements from the standards, defining them as tests, and being able to execute them at least on a Sprint cadence helps the development team receive quick feedback about their current progress.


In addition, these tests provide the team with the confidence to change individual sub-modules while making sure the functionality does not change.


This solution still introduces an additional risk. By executing each test on the smallest level possible, and making sure that each individual module works correctly, the development team may sub-optimize the testing approach. Even though each individual module works correctly according to its interface definition, the different pieces may not interact correctly with each other, or may work against diverging interface definitions. This risk should be addressed by carrying out additional tests focused on the interfaces between the individual modules, to avoid sub-optimization and non-working products. Fewer tests for the integration of the different modules will be necessary, though. The resulting tests will therefore still fit into a Sprint’s length of time.


Published on February 10, 2016 13:41

January 10, 2016

Interview with Jerry Weinberg

Last year, I interviewed Jerry Weinberg on Agile software development for the magazine that we produce at it-agile, the agile review. Since I translated it to German for the print edition, I thought: why not publish the English original here as well? Enjoy.



Markus:

Jerry, you have been around in software development for roughly the past 60 years. That’s a long time, and you have certainly seen one or another trend pass by in all these years. Recently you reflected on your personal impressions of Agile in a book that you called Agile Impressions. What are your thoughts about the recent rise of so-called Agile methodologies?


Jerry:

My gut reaction is “Another software development fad.” Then, after about ten seconds, my brain gets in gear, and I think, “Well, these periodic fads seem to be the way we advance the practice of software development, so let’s see what Agile has to offer.” Then I study the contents of the Agile approach and realize that most of it is good stuff I’ve been preaching about for those 60 years. I should pitch in and help spread the word.


As I observe teams that call themselves “Agile,” I see the same problems that other fads have experienced: people miss the point that Agile is a system. They adopt the practices selectively, omitting the ones that aren’t obvious to them. For instance, the team has a bit of trouble keeping in contact with their customer surrogate, so they slip back to the practice of guessing what the customers want. Or, they “save time” by not reviewing all parts of the product they’re building. Little by little, they slip into what they probably call “Agile-like” or “modified-Agile.” Then they report that “Agile doesn’t make all that much difference.”


Markus:

I remember an interview that you gave to Michael Bolton a while ago where you stated that you learned from Bernie Dimsdale how John von Neumann programmed. The description appeared to me to be pretty close to what we now call test-driven development (TDD). In fact, Kent Beck has always claimed that he simply re-discovered TDD. That made me wonder what happened in our industry between the 1960s and the 2000s that made us forget the ways of smart people. As a contemporary witness of those days, what are your insights?


Jerry:

It’s perfectly natural human behavior to forget lessons from the past. It happens in politics, medicine, conflicts—everywhere that human beings try to improve the future. Jefferson once said, “The price of liberty is eternal vigilance,” and that’s good advice for any sophisticated human activity.


If we don’t explicitly bolster and teach the costly lessons of the past, we’ll keep forgetting those lessons—and generally we don’t. Partly that’s because the software world has grown so fast that we never have enough experienced managers and teachers to bring those past lessons to the present. And partly it’s because we don’t adequately value what those lessons might do for us, so we skip them to make development “fast and efficient.” So, in the end, our development efforts are slower and more costly than they need to be.


Markus:

The industry currently talks a lot about how to bring lighter methods to larger companies. Since you worked on Project Mercury – the predecessor of NASA’s Project Apollo – you probably also worked in larger teams and larger companies. In your experience, what are the crucial factors for success in these endeavors, and what are the things to watch out for, as they may do more harm than good?


Jerry:

In the first place, don’t make the mistake of thinking that bigger is somehow automatically more efficient than smaller. You have to be much more careful with communications, and one small error can cause much more trouble than in a small project.


For one thing, when there are many people, there are many ways for new or revised requirements to leak into the project, so you need to be extra explicit about requirements. Otherwise, the project grows and grows, and troubles magnify.


It is very difficult to find managers who know how to manage a large project. Managers must know or learn how to control the big picture and avoid all sorts of micromanagement temptations.


Markus:

A current trend we see in the industry appears to revolve around new ways of working and different forms of running an organization. One piece of it appears to be the learning organization. For me, this connects deeply to Systems Thinking. Given that you published your first book on Systems Thinking in 1975, what have you seen as crucial for organizations to establish a learning culture?


Jerry:

First of all, management must avoid building or encouraging a blaming culture. Blame kills learning.


Second, allow plenty of time and other resources for individual learning. That’s not just classes, but includes time for reflecting on what happens, visiting other organizations, and reading.


Third, design projects so there’s time and money to correct mistakes, because if you’re going to try new things, you will make mistakes.


Fourth, there’s no such thing as “quick and dirty.” If you want to be quick, be clean. Be sure each project has sufficient slack time to process and socialize lessons learned.


Finally, provide some change artists to ensure that the organization actually applies what it learns.


Markus:

What would you like to tell the next generation(s) of people in the field of software development?


Jerry:

Study the past. Read everything you can get your hands on, talk to experienced professionals, study existing systems that are doing a good job, and take in the valuable lessons from these sources.


Then set all those lessons aside and decide for yourself what is valuable to know and practice.


Markus:

Thank you, Jerry.


Published on January 10, 2016 08:21

August 21, 2015

Save Our Scrum – Tools, Tips, and Techniques for Teams in Trouble

During the Agile 2014 conference in Orlando, I talked a lot with Matt Heusser. Over the conference we bounced one idea or another back and forth. In the end, we had an idea for a new book: Save Our Scrum – a self-help book on many of the troubles we see happening out there with this wide-spread approach. Our vision was to base it on some of the lessons we learned in our consulting work, and see how we might help others with them. That’s the whole vision.


Skip forward one year, and we have made some progress. We finished the first few chapters, with a more general introduction to Scrum itself along with some of the problems we are seeing. At one point we decided to put out what we had created thus far, in order to receive feedback from the people seeking such help. That’s why we recently put it up on LeanPub, so that folks can get access to it and help us continue the momentum with great feedback.


Matt and I are pretty busy in our consulting work. That slowed down progress a bit in the past months. Right now, though, we seem to be in a writing burst, with new content created constantly throughout the week. We started work on getting down the nuggets – that’s what we call the little lessons from teams all over the world struggling with Scrum.


That said, if you get the book now, you will receive weekly updates – that’s what we promise you. Every week we publish whatever we have created throughout the week. We hope to keep progress flowing. I think this week each of us worked on getting down at least four nuggets. That’s eight new lessons for you to read. If we can maintain this pace, we expect a good draft to be finished by the end of September.


But wait, there is more. You can get famous by helping us. We opened up feedback channels and created a Slack team for open discussions. This is not limited to typos and missing commas: you may also leave us your thoughts on nuggets that we forgot, or share struggles of your own to improve our book.


We really look forward to your feedback, ideas, and suggestions to advance our book. We hope you will enjoy it. And if not… well, at least you now know some channels to let us know.


Published on August 21, 2015 10:50

July 27, 2015

Working in a distributed company

In my courses, one or more of the participants almost always raise a question like this:



How do you set up a team across many sites?


Almost always, when digging deeper, I find out that they are currently working in a setting with many sites involved. Almost always they have a project organization with single-skill specialists involved. These single-skill specialists are almost always working on at least three different projects at the same time. In the better cases, the remote team members are spread across a single timezone. In the worst case I have seen so far, there was a timezone difference of ten hours.


I will leave how to deal with such a setting for a later blog entry. Today, I want to focus on some tips and tricks for working with remote team members and remote colleagues.


Tripit reported that I was on the road for 304 days in 2012. I can hardly believe that, since I stayed at home with our newborn son Nico for the whole of June that year. (I think they had a bug there.) But it was close. I have worked with remote team members and remote project teams in distributed companies since 2006. I hope I have some nuggets worth sharing.



Remote comes with a price

When it comes to distributed working, remoteness comes with a price. The hard thing most managers don’t realize is that it does not stem from the difference in wages. In fact, most companies I have seen out-source to far-distant locations only because they take the wage savings into account – but not the social loss that comes alongside.


The social loss is what happens when team members don’t know each other because they have never met in person.


What happens with social loss?


Richard Hackman provides some insights in his book Leading Teams. According to Hackman, teams are subject to several gains and losses over the course of their lifetime together. There are gains and losses on the subjects of effort, performance strategy, and knowledge and skill.


When it comes to effort, social loafing by team members may stand in the way of the development of high shared commitment to the team and its work. For performance strategy, mindless reliance on habitual routines can become an obstacle to the invention of innovative, task-appropriate work procedures. For knowledge and skill, inappropriate weighting of member contributions can become a drag against sharing of knowledge and development of member skills.


All three of these losses – social loafing, mindless reliance on habitual routines, and inappropriate weighting of member contributions – are more likely when team members are separated from each other. If they don’t know each other well enough, they can’t make good decisions about distributing the work. They are also less likely to have a shared understanding of the organization’s context, and won’t know how to come up with better work procedures for the task at hand. And finally, knowledge is less likely to be shared among team members. That is hard to do when you have only two hours of overlapping office hours between sites.


Besides the wage differences, these factors are hard to price. Thus, it’s even harder to compare the costs of the decision to work across remote sites. You can make these costs a bit more transparent if you ask what it would cost to bring the whole team together every other week. That’s unlikely? That’s hard to do? That’s expensive? Well, how expensive is it in the first place to not do that? A couple of years ago I worked with a client that flew in their team from Prague every second week. They were aware of the social costs attached – and they were willing to pay the price for it.


That’s no guarantee that it will work, of course. But it makes the failure of teams less likely.


But what if you don’t want to pay that price? Well, there’s always the option of creating teams local to one site. When you do that, make sure you keep the coordination effort between teams as small and painless as possible.


Video conferencing to the rescue

A couple of years ago, I found myself on a project team with some people set up in Germany and others in Malaysia. That’s a six hours timezone difference.


We were doing telephone conferences every other work day, and we were noticing problems in our communication. The phone went silent after one side had made an inquiry. Usually, that was an awkward silence. And more often than not – we found out later – that silence led to undone tasks. (Did I mention the project was under stress?)


At one point, I was able to convince the project manager on my side of the table to talk to his counterpart in the other location. They set up video conferencing. We still coupled that with a phone call, but at least we could see each other. From that day on we had a different meeting. Now we were able to see the faces of the others. We were able to see their facial expressions. We were able to see when they could not understand something that was said on our end, and we were able to repeat the message or paraphrase it to get the point across. Certainly, the same happened for the other side. That’s what changed the whole meeting for us.


So, if you decide to set up remote team members, also make sure they have the technology necessary to communicate and coordinate well between the different locations. One prospective client that I visited had taken over a whole meeting room. The room was no longer bookable by anyone in the organization. They had set up the various boards of all the teams in that meeting room. They also had a video projector and a 360° camera installed there. The whole setup was targeted at making cross-site communication easy. I can only urge you to invest in such technology. Your meetings will become more effective in the long run.


Transparency creates trust

Seeing each other during meetings is a specialization of a more general concept: transparency related to work- and task-oriented data creates trust. I have seen teams complaining about each other just because they no longer knew what “the other” team was doing. Trust also turns into distrust when there is a lack of transparency in certain regards, or when the transparency you get just confuses.


Unfortunately, creating transparency also takes effort in most cases. You have to provide the numbers that others want to see, like the percentage of code covered by unit tests or the net profit of new features. In software, those numbers may be provided by another program. In non-software teams, you may need to find other ways to provide such information. Either way, it will take effort.


Is that effort well spent? If my claim is correct that transparency creates trust (lots of it, actually), you should aim for the sweet spot between creating enough trust and the effort spent on providing the necessary transparency. In other words, ask yourself: is the effort I spend on creating transparency well spent for the trust that we gain as a group?


A couple of years ago, Kent Beck raised another point in this regard. He said that he always tried to be transparent in every regard, because hiding information takes effort. When you are completely transparent, you can save the effort that would go into hiding information and use it to provide value instead. I like that attitude.


One final thought: if creating transparency is indeed too much effort for you, remember there is always the option to work in a non-distributed setting. When you have chosen to work for a distributed company, the extra trust gained through transparency should be worth the price that you pay for it.


Published on July 27, 2015 13:15

June 14, 2015

So, I went for the board

At the end of August two years ago, I announced that I was going for the AST board. I kept my expectations pretty low, and I am glad that I did. Two years have passed, so I figured: let’s revisit that decision. Long story short: I won’t go for another two years. Read on to find out why.



The secretary

For two years, I have been the secretary of the board of directors of the Association for Software Testing. Pretty nice-sounding title, eh? Well, there are three in-person board member meetings a year. We also went from bi-weekly calls to monthly electronic meetings during my term.


Usually the in-person meetings are done over a weekend from Saturday morning until Sunday noon-ish. That’s where most of the decisions are made for the ongoing course. In between, we tried to get work done.


Well, we tried at least. Part of the problem is that when you depart from the in-person meetings and get back to your daily routine, many of the good intentions to move something forward are eaten up by daily business. That happened to me, that happened to other board members, and many of them admitted it. It’s a bit of a tragic situation, considering that folks get elected for having a loud voice on Twitter or in the community, but you don’t know for sure whether they will get anything done in a group of volunteers.


So, for anyone out there wanting to go for the board, keep in mind not to set yourself too high a target. Remember that you will be dealing with a totally mixed set of volunteers, and it’s very hard to predict if and how you can work together. Folks from different directions have different opinions about different things. It might be that you will totally rock the place. But in hindsight, after two years, I think that the context-driven testing community is made up of many opinionated people, which can make the “getting something done” side of business hard at times.


So, why am I stepping down?

Besides all the things that we kept rolling, I figured over the course of the last year that being the first European board member was causing me a lot of stress. Usually I took Fridays to fly in to board member meetings, and returned early on Monday mornings when we had an in-person meeting. That worked to some extent with my schedule, and I think that’s thanks to the company I work for and the freedom that I could take there.


And I think after two years, I have seen enough to recognize that it’s way harder to participate in online discussions with the timezone difference. It’s way harder to contribute to the board discussions that we have online or in email, and so on.


In September, I also became a CST, which made my life harder when it came to keeping track of online discussions. When you’re in class for two or three days straight, you don’t return to your hotel room to read the updates from the day.


So, over time, I figured that I couldn’t put in the amount of time that I felt was necessary. I also recognized that it became harder and harder for me to contribute. That’s when I decided that the AST would be better off if someone else had the chance to step in.


It’s not all bad

But it’s not all bad. During the last year we made the decision to update the BBST course material, to move the website to a new system, and to form a group working on standards and certifications. I think these are good steps, and they were long overdue.


That said, I look forward to the newly elected board members, and to how they will continue the work of the ten years of board members that came before them. I leave the group with one eye crying and one eye smiling.


Published on June 14, 2015 13:01

May 31, 2015

Why you should go to CAST

Last year, when the Association for Software Testing announced Grand Rapids, MI as the location for their next annual conference, CAST 2015, there was an uproar on social media and in back channels like Skype and private conversations. To my own surprise, I saw members of the context-driven testing community falling short of their very own principles. Rather than observing and interacting with people, some preferred to derive their knowledge about Grand Rapids from a prior CAST conference there. Experience may be a good starting point, but I found that I should trust the folks from the local area that I knew to put together an awesome conference – more so since they could explain to me why the past event was not so well received. For the October 2014 AST board member meeting, Pete Walen – the conference chair, and the guy who managed to send in a proposal prior to CAST 2014 so that the AST board could decide upon it – invited us to the conference location, so that it was easy for us to see where we were going with the proposal. Here is what I learned during my two nights in the conference venue – and why I think you should attend.



The Venue

The conference hotel is located right downtown in Grand Rapids. There was an art exhibition during my time there in October, with many tourists flooding the halls. The conference hotel itself was huge – not as huge as those Gaylord hotels, but quite huge. There were several meeting rooms on various floors. Pete made sure to pick all of us up at the airport and give us a tour of the hotel.


There are at least 15 floors with bedrooms, if I remember correctly. The main entrance hall had a chandelier hanging in the middle, with meeting rooms on the first floor. There was also a Starbucks, a restaurant where I ate the most delicious food in my short life so far, and, right across from the reception, a bar that stayed open until quite late into the night.


The first floor – I think in American counting it’s the second floor – appeared to be dedicated to all the conferences happening there. There is enough room to hold three CAST conferences at the same time – but I think we just booked enough conference rooms for one event. In the evening, there was a wedding happening in what I believe will be the main hall for CAST. It was a huge wedding, and so was the main hall.


I found the conference hotel pretty awesome. I especially like conferences where you stay at the same spot as speakers and attendees rather than spreading across the city. That way you can hang out with most folks without the “I need to get back to my hotel” crap, even if tester’s night or the challenges or whatever you have run late into the night.


The location

When you fly into Grand Rapids, you can certainly check out the famous park calc parking lot. But the location of the hotel is actually right downtown. Close to the hotel, right across the street, are two or three restaurants with any food you can think of. There is a brewery close by, and also museums. If that’s not enough, I am sure that some of the locals like Pete or Matt Heusser will be happy to point you to sights you should visit if you haven’t used up all your energy at the conference.


In stories and rumors I heard that Grand Rapids was a boring spot with nothing to do, and so on. Certainly, I saw a totally different city back in October.


The program

Oh, yeah, there is also a program for CAST. The main theme is “Moving testing forward”. Monday is usually tutorial day at CAST, and we are happy to offer tutorials from Fiona Charles, Christin Wiedemann, Rob Sabourin, and Dhanasekar Subramaniam. Fiona will deal with delivering difficult messages, while Christin will run a tutorial on questioning rules and overcoming your own biases and conventions. Rob has the basics for you if you want to become an experienced tester, and Dhanasekar will deal with mobile app testing. Tutorials usually fill up pretty fast. So if you want to join any of these, make sure to sign up soon.


Karen Johnson will talk about how to move testing forward in her Tuesday’s keynote. The keynote on Wednesday will be held by my old fellow weekend tester Ajay Balamurugadas. In it, he will explain why the future of testing is already here.


Just in case, make sure to check the schedule online. Oh, and I should probably also mention that this Friday, June 5th 2015, the Early Bird rate will end, meaning that prices will be higher if you wait longer.


As a closing to this blog entry, let me transpose the context-driven testing principles – deliberately – to conferences:



The value of any conference depends on its context.
There are good conferences in context, but there are no best conferences.
People, working together, are the most important part of any conference’s context.
Conferences unfold over time in ways that are often not predictable.
The learning is the solution. If the problem isn’t solved, the conference doesn’t work.
Good software testing conferences are a challenging intellectual process. Only through judgment and skill, exercised cooperatively throughout the entire conference, are we able to do the right things at the right times to effectively further our community.


If you plan to attend CAST 2015 with these in mind, I am certain you’ll get value out of the conference.


Published on May 31, 2015 12:21

May 10, 2015

Not so fast food

A while ago I considered adding a new type of post to my blog. There are many things that I notice in my leisure time, at all sorts of different spots, that most folks wouldn’t think twice about, yet I see a relation to some of the stuff that I learned while diving into Agile software development. This blog entry kicks off this new type of post.


I am not sure about other places on Earth, but in Germany the McDonald’s corporation recently started to re-design their restaurants. (Yes, I admit that at times McDonald’s is one of my choices to grab something to eat while switching trains.) They now offer two types of counters, together with self-service machines that take credit or bank cards. The in-person counters are split into a counter where you order and pay for your meal, and a counter where you then pick up your meal. Based upon subjective experience, I think this move is a dramatic step away from customer friendliness.


What’s the problem?


My observation across many of these encounters was that I needed to wait way longer to get food than was the case before the re-design. Waiting times used to be annoying to us customers even in the old design, but with the new design things got worse, according to my personal experience. Of course, it’s not much of a problem if there are few other customers. But imagine the situation when a full train arrives with hundreds of passengers during lunchtime at a local train station. At least some of them will walk into the nearest McDonald’s to grab some food.


In the old workflow design, there were usually three to five counters open during such peak times. Several queues formed in front of these counters; you would wait in line for your turn, order, pay, wait for your meal to be prepared, and then leave. Overall this took between 5 and 15 minutes for me.


Now let’s take a look at the new workflow design. You enter the restaurant, see two open counters with lots of folks waiting, and you see the full list of already ordered meals on the screen above the pick-up counter. If you’re like me, you check the self-service machines in such a case. The funny thing is that in 95% of the times I ran into such a situation, these were out of order.


So, we get in line in front of the in-person counters. We wait until it’s our turn, order, pay, and receive a receipt with a number for the pick-up counter. There you queue up until it’s your turn to pick up your meal. But wait, there is only one guy serving the pick-up counter. So, all the customers from the two in-person order counters end up in a single queue to get their food. It would be even worse if one or more of the three self-service machines actually worked, since their orders would funnel into the same single pick-up queue.


What’s happening here?


As Mary Poppendieck taught me a while ago, wherever there is a queue, there is certainly sub-optimization happening. What’s the sub-optimization here? From my point of view, McDonald’s seems to optimize for cash-flow and for sparing personnel costs. Two in-person counters and one guy serving the meals amount to three quarters of the salaries they used to pay before the re-design.


Of course, people wouldn’t buy into that without some benefits. The self-service machines and the new fancy restaurant layout seem to be the selling points here. But it seems that they have under-estimated the demand and the necessary queues for such a move.
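The queue effect can be sketched with a toy calculation. Every number below is invented for illustration (this is not real McDonald’s process data): it compares three independent counters that each take an order and hand over the meal, against two order counters feeding a single pick-up counter.

```python
def time_to_serve(customers, order_s=60, serve_s=60):
    """Toy model of clearing a lunch rush; every number is invented."""
    # Old layout: three independent counters, each one takes the order
    # AND hands over the meal, all working in parallel.
    per_counter = -(-customers // 3)                # ceiling division
    old = per_counter * (order_s + serve_s)

    # New layout: two order counters feed ONE pick-up counter, so the
    # pick-up clerk can hand over at most one meal per serve_s seconds.
    ordering_done = -(-customers // 2) * order_s
    pickup_done = order_s + customers * serve_s     # first meal needs one order first
    new = max(ordering_done, pickup_done)
    return old, new

print(time_to_serve(30))  # → (1200, 1860)
```

Even with one fewer salary, the single pick-up counter becomes the bottleneck and clears the same rush of 30 customers noticeably later, which matches my subjective waiting-time experience.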


So, what’s better about the old system?


Did you notice the hand-off between the two counters? Yeah, I think this is the major culprit. At face value, you saved one person’s salary with the new workflow. On the other hand, you made it way harder for people to support each other.


Consider this situation that I observed last week. The pick-up counter was heavily under stress, with 5-10 orders still in line. At some point one of the in-person counter clerks decided to help out, left the counter, and went into the kitchen. Unfortunately, the other cashier also had to leave for some reason I didn’t catch. I observed five new customers stepping into the restaurant, totally confused about where to get food, since none of the cashier stations was staffed.


The problem is that the new workflow makes it close to impossible for other people in the restaurant to help out the current bottleneck in the overall system. In the past I have observed folks from McCafe stepping in to help out others if there was a need. Now, it’s impossible.


All of this came with the specialization that slipped into the workflow by separating cashing from food preparation. The one additional hand-off made it less likely for me personally to enter a McDonald’s restaurant when I want to use my 15 minutes of train-switching time to grab something to eat. Maybe my wife will like this in the long run, but at face value, I think McDonald’s harmed their overall business more than necessary with this one additional hand-off and the specialization that happened alongside it.


I certainly don’t know any business numbers from McDonald’s, but I imagine that customer happiness with the new workflow restaurants dropped dramatically, probably resulting in fewer returning customers.


Now, think what happened when the software industry introduced hand-offs between Analysis, Design, Architecture, Coding, and Testing.


Published on May 10, 2015 14:04

November 2, 2014

How to sell TDD to ?

Every once in a while I read something like this:



Yeah, [TDD|BDD|ATDD] is great. But how do you convince [your manager|your employer|your colleagues] to get the time to do it?


In the past week I decided that I need something to point folks to when this question comes up again. So, here it is.


TDD does not work

First of all, I think it helps to apply a lesson that I learned years ago as a swimming trainer. I had several exercises in my repertoire that were a bit unusual, and at times hard to do. These included variations of swimming strokes in unusual positions.


Every now and then when I gave out one of the exercises to the kids, some or all of them were complaining: “this doesn’t work”, “I can’t swim like that”, etc.


What the kids didn’t know was that I had learned to try out these exercises on my own first to get a grasp of how difficult they were. Thereby I also knew that they were possible to do. Over time, I realized that “this doesn’t work” could be easily translated to “I don’t know how I can make this work”, and tada, let me see how I can help you with that.


Today, I apply the same lesson to TDD. Whenever someone tells me that TDD does not work on their code base, well, I make the mental translation to “I don’t know how to make TDD work on my code base”, and off we go.
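One practical way to make that translation concrete is to start with characterization tests: pin down what the legacy code does today, before changing anything. The function below is a made-up stand-in for whatever gnarly code you actually face; it is a sketch of the technique, not anybody’s real code base.

```python
# Hypothetical legacy function, invented for illustration; imagine it is
# the code in your code base that you do not dare to change yet.
def legacy_price(quantity, unit_price):
    total = quantity * unit_price
    if quantity > 10:
        total -= 10.0  # undocumented bulk discount nobody remembers adding
    return total

# Characterization tests: they pin down what the code does TODAY.
# The expected values come from running the code, not from a spec.
def test_small_order_has_no_discount():
    assert legacy_price(2, 5.0) == 10.0

def test_bulk_order_gets_the_undocumented_discount():
    assert legacy_price(20, 5.0) == 90.0

test_small_order_has_no_discount()
test_bulk_order_gets_the_undocumented_discount()
print("current behavior is pinned down; now refactoring is safer")
```

With that safety net in place, you can refactor toward testable seams and start driving new behavior test-first, which is usually how TDD begins to “work” on a code base that supposedly resists it.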


My boss won’t let me

Yeah, right. Here’s a hard message for you: are you telling your carpenter how to hold his hammer? Are you telling your plumber how to use the pipe wrench? Are you telling your car mechanic when to replace the oil filter?


Seriously, why is your boss, your project manager, or whatever excuse you have for not using TDD, telling you how to do your job? I thought you were a highly educated knowledge worker. If you are convinced about the effectiveness of TDD, then no boss or project manager should be telling you how to do your job.


Oh, sorry, there actually is one case where it might be appropriate for your boss to tell you: when you are not able to deliver working software that adheres to the business goals using TDD.


But remember: that’s feedback about how you use TDD, not about how bad your boss or project manager may be. So, better practice applying TDD and helpful design practices to be able to better serve the projects you are working on.


TDD does not work with my [language|framework|etc.]

Sure. That’s an easy excuse. Yeah, those darn language or framework programmers weren’t helpful. That’s how it’s going to work.


Uhm, wait a minute. How old do you think TDD actually is? A thing from the Smalltalk community? It turns out: not quite.


A while ago, Arialdo Martini wrote a blog entry on how old TDD actually is. Click that link, go there. Make sure to read it all the way to the end. I’ll wait here with my rant.


Surprised? So was I – to some extent. Besides the fact that things like TDD were mentioned in papers and publications by Dijkstra and at the first NATO conferences, TDD actually is way older than that.


Also note what Jerry Weinberg says in this interview with Michael Bolton about TDD:



Michael: [...] I’ve learned about both from conversations that I’ve had with you and other smart people. I remember once that Joshua Kerievsky asked you about why and how you tested in the old days—and I remember you telling Josh that you were compelled to test because the equipment was so unreliable. Computers don’t break down as they used to, so what’s the motivation for unit testing and test-first programming today?


Jerry: We didn’t call those things by those names back then, but if you look at my first book (Computer Programming Fundamentals, Leeds & Weinberg, first edition 1961 —MB) and many others since, you’ll see that was always the way we thought was the only logical way to do things. I learned it from Bernie Dimsdale, who learned it from von Neumann.


When I started in computing, I had nobody to teach me programming, so I read the manuals and taught myself. I thought I was pretty good, then I ran into Bernie (in 1957), who showed me how the really smart people did things. My ego was a bit shocked at first, but then I figured out that if von Neumann did things this way, I should.


John von Neumann was a lot smarter than I’ll ever be, or than most people will ever be, but all that means is that we should learn from him.[...]


So, the next time your boss approaches you asking to leave out those unit tests or stop that TDD thing, tell them the story of how Jerry Weinberg learned it from Bernie Dimsdale, who learned it from John von Neumann. Then tell them that you don’t consider yourself smarter than John von Neumann.


Oh, and besides, even though some of our hardware became more reliable, most of our software hasn’t. When answering the question why we ever gave up something that John von Neumann taught us, I wouldn’t accept that excuse either.


What if all of that doesn’t work?

A couple of years back, I attended a Code Retreat session in Bielefeld. We worked all day using TDD, in six consecutive sessions, always deleting our code at the end, since we strived to learn about TDD, not to come up with a beautiful solution to a long solved problem.


At the end of the day, we held a quick retrospective. Everyone shared what they learned that day, and what they would be doing differently back at work next Monday. One guy stepped forward and said that he would change jobs on Monday since he never would be able to use TDD at his current job. Now, after he experienced it, he never wanted to do anything else.


That said, of course the “change your organization or change your organization” phrase also applies to TDD. If you are convinced about the approach, and never want to do anything else, and your environment currently doesn’t support it, well, move ahead.


In case you want to learn more, attend one of the events for the Global Day of Code Retreat in two weeks. Since I always learn one little thing at each of these events, I will be attending the one in Bielefeld, Germany.


Published on November 02, 2014 10:45

August 26, 2014

On auditing, standards, and ISO 29119

Disclaimer:

Since I am publishing this on my personal blog, this is my personal view, the view of Markus Gärtner as an individual.


I think the first time I came across the ISO 29119 discussion was during the Agile Testing Days 2010, and probably also during Stuart Reid’s keynote at EuroSTAR 2010. Thinking back to that particular keynote, I remember he was visibly nervous during his whole talk, eventually delivering nothing worthy of a keynote. Yeah, I am still disappointed by that keynote four years later.


Recently, ISO 29119 started to be heavily debated in one of the communities I am involved in. Since I think that others have expressed their thoughts on the matter more eloquently and in more depth than I am going to, make sure to look further than my blog for a complete picture of the whole discussion. I am going to share my current state of thoughts here.



Audits

In my past I have been part of a few audits. I think it was ISO 9000 or ISO 9001 – I can’t tell, since people keep confusing the two.


These audits usually had a prelude. One or two weeks up-front, I was approached by someone asking whether I could show something during the audit that had to do with our daily work. I was briefed on what the auditor wanted to see. Usually we also prepared a presentation of some sort.


Then came the auditing. Usually I sat together with the auditor and a developer in a meeting room, and we showed what we did. Then we answered some questions from the auditor. That was it.


Usually a week later we received some final evaluation. Mostly there were points like “this new development method needs to be described in the tool where you put your processes in.” and so on. It didn’t affect my work.


More interestingly, what we showed usually didn’t have anything to do with the work we did once the auditor left the room. Mostly, we ignored most of the process in the process tool that floated around. I wasn’t sure how to read that stuff anyway. And of course, on every project there was someone willing to convince you that deviating from whatever process was described was fruitful in this particular situation and context.


Most interestingly, based upon the auditing process, people made claims about what was in the process description and what the auditor might want to see. No one ever talked to the auditors up-front (probably because it was believed not to be allowed). Oh, and of course, if you audit something to improve it, and that something isn’t what you’re actually doing when you’re not audited, then you’re auditing bogus. Auditing didn’t prevent us from running into this trap. Remember: if there is an incentive, the target will be hit. Yeah, that sounds like what we did. We hit the auditing target without changing anything real.


Skip forward a few years, and I see the same problems repeated within organizations that adopt CMMi, SPICE, you name it. Inherently, the fact that an organization has been standardized seems to lead to betrayal, misinformation, and ignorance when it comes to the processes that are described. To me, this seems to be a pattern among the companies I have seen that adopted a particular standard for their work. (I might be biased.)


Standards

How come, you ask, we adopt standards to start with? Well, there are a bunch of standards out there. For example, USB is standardized. So were PS/2, VGA, and serial and parallel ports. These standards solve the problem of two different vendors producing two pieces of hardware that need to work together. The standard defines their commonly used interface on a particular system.


This seems to work reasonably well for hardware. Hardware is, well, hard. You can make hard decisions about hardware. Software, on the other hand, is softer. It reacts flexibly, can be configured in certain ways, and usually involves a more creative process to get started with. When it comes to interfaces between two different systems, you can document them, but usually a particular way of interfacing between software components delivers some sort of competitive advantage for a particular vendor. Still, when working on the .NET platform, you have to adhere to certain standards. The same goes for stuff like JBoss, and whatever programming language you may use. There are things that you can work around, and there are others which you can’t.


Soft-skill-ware, i.e. humans, are even more flexible, and will react in sometimes unpredictable ways when challenged in difficult work situations. That said, people tend to diverge from anything formal to add their personal note, to achieve something, and to show their flexibility. With interfaces between humans, as in behavioral models, humans tend to trick the system and make it look like they adhere to the described behavior when they actually don’t.


ISO 29119

ISO 29119 tries to bring together some of the knowledge that is floating around. Based upon my experiences, I doubt that high-quality work stems from a good process description. In my experience, humans can outperform any mediocre process that is around, and perform dramatically better.


That said, good process descriptions appear to be one indicator of a good process, but I doubt that our field is old enough for us to stop looking for better ways. There certainly are better ways. And we certainly haven’t understood enough about software delivery to come up with any behavioral interfaces for two companies working on the same product.


Indeed, I have seen companies suffer from outsourcing parts of a process, like testing, to another vendor, or offshoring to other countries and/or timezones. Most of the clients I have been involved with suffered so much that they insourced back the efforts they had previously outsourced. The burden of the additional coordination was simply too high to warrant the results. (Yeah, there are exceptions where this worked. But they appear to be exceptions as of now.)


In fact, I believe that we are currently exploring alternatives to the traditional split between programmers and testers. One of the reasons we started with that split was Cognitive Dissonance. In the belief that only a split between programmers and testers overcomes Cognitive Dissonance, we created a profession of its own a couple of decades ago. Right now, with the rise of cross-functional teams in agile software development, we are finding out that that split wasn’t necessary to overcome Cognitive Dissonance. In short, you can keep an independent view if you maintain a professional mind-set, while still helping your team to develop better products.


The question I am asking: will a standard like ISO 29119 keep us from exploring further such alternatives? Should we give up exploring other models of delivering working software to our customers? I don’t think so.


So, what should I do tomorrow?

Over the years, I have made a conscious effort not to put myself into places where standards dominate. Simply speaking, I put myself into a position where I don’t need to care, and can still help deliver good software. Open source software is such an environment.


Of course, that won’t help you in the long run if the industry gets flooded with standards. ISO 29119 claims it is based upon internationally agreed viewpoints. It also claims that it tries to integrate Agile methods into the older standards that it’s going to replace. I don’t know which specialists they talked to in the German Agile community. It certainly wasn’t me. So, I doubt much good will come out of this.


And yet, I don’t see this as my battle. A while ago I realized that I probably put too much on my shoulders, and I try to decide which battles to pick. I certainly see the problems of ISO 29119, but it’s not a thing that I want to put active effort into.


Currently I am working on putting myself in a position where I don’t need to care about ISO 29119 at all anymore, whatever comes out of it. However, I think it’s important that the people who want to fight ISO 29119 more actively than me are able to do so. That is why they have my support from afar.


– Markus Gärtner


Published on August 26, 2014 04:07

May 28, 2014

Principles of ATDD: Single Responsibility Principle

The other day, I sat down with Kishen Simbhoedatpanday to talk about ATDD, and eventually about an upcoming class on the implementation side of ATDD. We talked about maintainable tests, and how you could refactor tests to yield better tests. Gojko wrote about the anatomy of a good acceptance test a while back, and I think we can be more explicit here. Then it struck me: the Single Responsibility Principle also applies to your automated examples. You just need to think about it on a conceptual level. Mixing business domain concerns with application concerns in your examples – and sometimes even with driver concerns (think Selenium) – is a terrible thing to do.


Let’s explore this idea further.



The problem

An example would be handy right now, so let’s take a look. Here is one that I found with a quick Google search:



Feature: As a user I should be able to perform a simple google search

Scenario: A simple google search scenario
  Given I am on the main google search
  When I fill in “q” with “ATDD by example”
  And I click “gbqfb” button
  And I click on the first result
  Then I should see “ATDD by Example: A Practical Guide to Acceptance Test-Driven Development (Addison-Wesley Signature Series (Beck))”


What’s wrong here? A while back I thought the problem lay in several levels of abstraction within the same textual description. But now I would like to stress the point that there are different concepts being dealt with.


For example, the line When I fill in “q” with “ATDD by example” deals with the business domain concept of the search term, together with the application domain of the HTML page, and with how to drive the browser using a driver library. Similarly, the line And I click “gbqfb” button deals with the application domain and the business domain. The business domain is “searching”; the application domain is the particular implementation detail that there is a button for searching.


The problem lies in the different concepts involved here. There are three problems with these:



They are harder to write
They are harder to read
They are harder to maintain

Harder to write

Wait, what? They reflect how the application is implemented, so they are quite easy to write, right? Wrong.


To be frank, I have written such tests a lot in my life. From those dark days, I remember a couple of things. What I remember mostly is that I got home pretty tired. I needed to put in so much effort to keep track of all the various contexts I dealt with: “If I exercise the application this way, and then see this happening in the domain, then there is a problem.”


Can you spot it? Yeah, it’s context switching at its best. So, writing these kinds of tests is a whole lot harder than grasping the business domain and expressing it clearly. You need to keep track of various things on the business domain side and the application domain side, and sometimes even on the technical driver domain side.


Now, compare the above piece with this example:



Feature: As a user I should be able to perform a simple google search

Scenario: A simple google search scenario
  When I search for “ATDD by example”
  Then I should see one result from “amazon.com”


Is that clearer? Maybe. How hard was it to write that thing? Not that hard. I didn’t need to remember the nifty details about the HTML page, where the search field is, and which button to press. I focus the test on the business domain: “search for something, and expect at least one result from that site”. Straightforward for me.


But, but, but, you can record these tests easily! That’s right. But think again: what happens when that test fails in five months?


Harder to read

So, basically the same argument applies to the reading part of the tests. You need to do a lot of context-switching between the domains being expressed. In the example above I counted at least three domains in five lines. And that’s merely one scenario, one test. How large is your test suite?


Clearly, unfocused tests can become a great deal of pain when you try to read them and find out what the problem is. Tests that deal with several domains at the same time are unfocused. You need to keep track of several domain concepts, and you need to translate between them along the way. It’s like reading three stories at the same time: the story from the business, the story from the application, and the story from the test automation.


Compare that with the second example I provided. Is that clearer? I think so. Since it’s focused on the business domain, we can focus on reading that piece. That also explains why I like to write unit tests for my test automation code: I need to deal with all the fluff that is in place there to express how I hooked up the application to my test automation code, and I want to make sure I have tests for that domain of my problem as well. Oh, and of course, you also need application unit tests in the end. We certainly shouldn’t forget those.
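Such a unit test for support code might look like this sketch (the helper name and the result-line format are invented for illustration): a tiny translator from raw driver output into business terms gets its own focused test, with no browser and no business scenario involved.

```python
# Invented support-code helper: translate a raw result line coming from
# the driver into the business term we care about (the site it is from).
def site_of(result_line):
    return result_line.rsplit(" - ", 1)[-1].strip()

# Focused checks for the support code itself.
assert site_of("ATDD by Example ... - amazon.com") == "amazon.com"
assert site_of("amazon.com") == "amazon.com"  # no separator: whole line
print("support code behaves as expected")
```

When such a helper breaks, a test like this tells you the automation glue changed, without dragging the business scenarios into the failure.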


Harder to maintain

But the biggest problem lies in the long-term maintainability. Think about it. If your examples express concerns from three different domains, your tests are highly coupled to these three domains. So, when you change the driver for your application, your tests might go bye-bye. If your application changes, your tests might go bye-bye. If there is a change in business rules, your tests might go bye-bye. Sounds like a lot of maintenance risk to me.


So, the way to go here, is to separate the three different concerns, each into its own place in the whole code base. The business domain should go with the examples. The application domain should go with the system under test, or if you need to translate between the driver for your tests and your application, then you want to address that in your support code. Oh, and you want to test your stuff that is using a particular test driver as well.
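As a sketch of that separation (all class and method names here are invented, and a stub stands in for a real driver such as Selenium), the business-level step delegates to a page object in the application domain, which in turn talks to the driver:

```python
class FakeBrowser:
    """Driver domain: a stub standing in for something like Selenium."""
    def __init__(self):
        self.fields = {}
        self.results = []
    def fill(self, locator, text):
        self.fields[locator] = text
    def click(self, locator):
        # Pretend the search ran; return one canned result line.
        self.results = ["ATDD by Example: A Practical Guide ... - amazon.com"]

class SearchPage:
    """Application domain: the only place that knows locators like 'q'."""
    def __init__(self, browser):
        self.browser = browser
    def search(self, term):
        self.browser.fill("q", term)
        self.browser.click("gbqfb")
    def result_sites(self):
        return [line.rsplit(" - ", 1)[-1] for line in self.browser.results]

# Business domain: steps that read like the scenario text.
def search_for(page, term):
    page.search(term)

def should_see_result_from(page, site):
    assert site in page.result_sites(), page.result_sites()

page = SearchPage(FakeBrowser())
search_for(page, "ATDD by example")
should_see_result_from(page, "amazon.com")
```

If the button id changes, only SearchPage changes; if the driver changes, only FakeBrowser’s real counterpart changes; and if the business rule changes, only the steps and scenarios change.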


Three different concerns, three different ways to tell you that something had changed there.




Separation of concerns is not a new concept in software development. The hard part is to spot when we mix concerns, and then separate them accordingly again. In the end, it will result in less work writing these tests, less work reading these tests, and less work maintaining these tests. So it sounds worthwhile to invest that tiny extra effort.

Can you spot more problems in the example above? Can you spot some Single Responsibility Principle violations in your own acceptance tests? We need to develop a sense for finding these kinds of smells.



Published on May 28, 2014 00:00