Markus Gärtner's Blog, page 19
August 6, 2012
The BDD pitfall
Last week I went to the 2nd SoCraTes conference, the German Software Craftsmanship and Testing conference. We did two days of open space discussions, and we all had great fun. One thing, though, that caused me quite some trouble was the number of sessions around BDD.
Some time ago, I wrote about the given/when/then fallacy. But this time was different. Despite the emphasis that BDD puts on the ubiquitous language, I was struck by the fact that folks were pointing to different things while talking about BDD. It seems BDD suffers from the same thing that it tries to prevent: the lack of a common understanding.
I don’t know where this particular confusion comes from, but I also saw a couple of bad scenarios when it comes to the usage of tools like Cucumber or JBehave. I don’t consider myself a BDD expert, and people have pointed out that I do something different around acceptance tests. Still, I thought I would share some of my thoughts on examples that I ran across recently and helped improve. Here’s my thought process on two of these scenarios.
I came across the first example while reviewing some sessions for the XP Days Germany later this year. I offered some help with improving it. Here is the original version of a scenario:
Given a recorded customer X
Given an existing product Y
and its availability should be two
When X orders Y once
Then expect a remaining availability of Y of one
Now, I see several things that could help make this scenario more independent of the particular (current) implementation. The first line appears to be there just for technical reasons, so let’s try to drop it. The second and the third line actually set up the context together; I think we should try to combine them. Such a combination could look like the following:
Given a product Y that is available “2” time(s) in stock
Now, we have the direct connection between the product and the availability. That should do it for the time being.
The fourth line is a bit troublesome to me. It seems hard to reuse later if we happen to decide to go for ordering multiple items of product Y. Anyway, we wanted to include the customer in there. How about this:
When a customer orders product Y “1” time(s)
A bit better. I don’t particularly like the “time(s)” reference, but I would go with it for the time being.
The final line suffers from a similar problem as the fourth line. Let’s try to remove that problem there as well, maybe trying to reference the given context we identified earlier. Here’s a first throw:
Then the product Y remains in stock “1” time(s)
Sort of ok. Let’s put the three together:
Given a product Y that is available “2” time(s) in stock
When a customer orders the product Y “1” time(s)
Then the product Y remains in stock “1” time(s)
This could be a good starting point. Maybe we need to give it some more thought; for my part, that would depend on how often we are going to reuse which part, what is likely to change in the next few weeks, and what is unlikely to change. Notice that most of the technically coupled details from the original scenario are still there, but I pushed them down one more level, into the concrete implementation of the step definitions. Now we won’t need to change all tests because we cluttered the example descriptions with those details. Instead, we will need to change the step definitions in case we change something about that binding.
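To make the idea of pushing the binding down into the step definitions concrete, here is a minimal sketch in Python. It is not Cucumber or JBehave code; the tiny regex dispatcher, the Stock class, and all other names are invented for illustration (and the scenario uses plain straight quotes for simplicity). Real step definitions in those tools work analogously: the wording stays stable, and only the step definitions know the system under test.

```python
import re

# Invented stand-in for the system under test: a trivial in-memory inventory.
class Stock:
    def __init__(self):
        self.available = {}

    def add(self, product, count):
        self.available[product] = count

    def order(self, product, count):
        self.available[product] -= count

STEPS = []

def step(pattern):
    """Register a step definition; the regex is the one place that knows the wording."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

@step(r'Given a product (\w+) that is available "(\d+)" time\(s\) in stock')
def given_product_in_stock(stock, product, count):
    stock.add(product, int(count))

@step(r'When a customer orders the product (\w+) "(\d+)" time\(s\)')
def when_customer_orders(stock, product, count):
    stock.order(product, int(count))

@step(r'Then the product (\w+) remains in stock "(\d+)" time\(s\)')
def then_remaining_stock(stock, product, count):
    assert stock.available[product] == int(count)

def run_scenario(lines):
    """Match each scenario line against the registered step definitions."""
    stock = Stock()
    for line in lines:
        for pattern, func in STEPS:
            match = pattern.match(line)
            if match:
                func(stock, *match.groups())
                break
        else:
            raise ValueError("no step definition matches: " + line)
    return stock

run_scenario([
    'Given a product Y that is available "2" time(s) in stock',
    'When a customer orders the product Y "1" time(s)',
    'Then the product Y remains in stock "1" time(s)',
])
```

If the binding changes later, say because the real system exposes a different ordering API, only the three step functions need to change; the scenario text stays untouched.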
I often fail at identifying the proper binding, and I try to learn from it each time I do. There is still coupling between the examples and the product implementation, though, and I hope that I picked the right trade-off here. I can’t know it, but it seems to be my best option so far. Maybe that is why I don’t try to automate everything, but tackle the more interesting pieces with easier-to-maintain tests run by hand, perhaps supported by the automation that is already there.
Passive scenarios
A second problem I see a lot in scenarios has to do with passiveness. Most writers (in English and German, at least in my experience) know that passive writing is boring, and it is usually edited out by copy-editors (just like the last part of this sentence would have been). In scenarios this seems to happen a lot once the word “I” is used, leaving the future reader of the scenario unaware of who that I is.
Here is a second scenario I saw on the flip charts at SoCraTes:
Given I have started my day
When I open the Sales App
And I want to tweet my location
Then my location is available on twitter for my customers
Notice that you don’t have a clue who started their day. Notice the amount of effort it takes you to follow the scenario without that knowledge. The person in question is mentioned in the user story above: it’s a hot dog seller scenario. Without that mention, you could actually replace the actor with anyone. So, let’s try to make the whole thing more active:
Given a hot dog seller
When he tweets his location at location Bielefeld, Germany
Then his customers will see Bielefeld, Germany as his location
Now, the actor is clear. It’s way harder to take this scenario out of its meaningful context the way I formulated it. There is still something that troubles me, but I will leave that for the interested reader to find out. Thereby I hope to expose the power of collaborating on the scenarios.
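As a side note on what the active wording buys us in automation: with an explicit actor, each step maps onto an unambiguous action against the system. A purely hypothetical sketch, with all names (SalesApp and its methods) invented for illustration:

```python
# Invented stand-in for the Sales App: remembers the last tweeted location per seller.
class SalesApp:
    def __init__(self):
        self.tweeted_locations = {}

    def tweet_location(self, seller, location):
        self.tweeted_locations[seller] = location

    def location_visible_to_customers(self, seller):
        return self.tweeted_locations.get(seller)

app = SalesApp()
seller = "hot dog seller"                         # Given a hot dog seller
app.tweet_location(seller, "Bielefeld, Germany")  # When he tweets his location
# Then his customers will see Bielefeld, Germany as his location
assert app.location_visible_to_customers(seller) == "Bielefeld, Germany"
```

With the passive “I” wording, it would be unclear which party a step definition should drive: the seller’s app, the customer’s view, or Twitter itself.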
Don’t overestimate BDD
I was a bit peeved the other day while listening to a rant about agile people testing, and its sole focus on BDD. Maybe I was just made aware of the problem. About five sessions in the SoCraTes open space on the topic, with no programmers really agreeing on what the term actually means, made me suspicious. If anything at all, you should take BDD as a tool: use it when it’s appropriate, don’t overuse it, and don’t overestimate its benefit. You will still leave gaps of things you didn’t consider. Maybe put more emphasis on the polymorphic actions that are hard to automate. Oh, and you can’t do that with Cucumber or JBehave.

July 31, 2012
My future wishes for software testing
Huib Schoots approached me late last year regarding a contribution for the TestNet Jubilee book he was putting together. The title of the book is “The Future of Software Testing”. I submitted something far too long, so most of it fell victim to the copy-editing process. However, I wanted to share my original thoughts as well. So, here they are; disclaimer: they might be outdated by now.
In the past I have had terrible experiences trying to predict the future of something. As a tester I have often failed to predict the future in my estimates. When the reality of a project kicked in, things were delayed more and more, up to the point where the deadline was past the point where I got the software that I was supposed to be testing. What have I learned from this experience about the future of testing? I think efforts to predict or estimate something are doomed. Instead of trying to predict the future, I prefer to state what I desire, and then derive my goals from that.
In order to consider my wishes, I have to look back on my experiences in the field. I remember back in April 2006 when I first entered the field of software development after having studied at university. Originally I applied for a different position, but I was hired as a software tester since the original position was already taken. I was completely unaware of what software testing consists of. Back in university, I didn’t even have the opportunity to take a course entirely on software testing. The only mention software testing got was a single sentence in a course on Software Engineering. That was in 2001. Having spent some time in this profession, I really wish that software testing becomes a field that is taught with respect in universities. A student shouldn’t be forced to take a course on software testing, but if he wants to dig deeper, he should get the opportunity to do so.
This wish also holds for the education of the future generation of software testers. As a profession we cannot wait until universities adapt their syllabi to meet the reality of software development. I really wish to see the field of professional education for software testers grow. By professional I don’t mean certain courses that include the word “profession” or “professional” and teach you only basics about our craft, sometimes even using misconceptions about practices and models. Instead I wish to see apprenticeship programs on software testing come up. Matt Heusser’s Miagi‑Do school of software testing is one approach to this. James Bach’s and Michael Bolton’s Skype coaching sessions are yet another. I wish to see more approaches like these evolve.
Last, I wish to see testers being treated with more respect. Good education for aspiring software testers will build the basis for this. With more and more great testers getting into our field, we will eventually become first‑class citizens. Imagine a world where testers are treated with honor and respect. A world where testers not only get the same pay as their programming counterparts, but where they work together face‑to‑face. A world where we are allowed to speak to the programmers, the customer, and the project manager as peers, not as subordinates. With the downfall of factory-school testing that treats testers as exchangeable parts of a machine, we will eventually see this happening.
These are my three pillars for the future of testing: university courses on software testing; education programs for software testers that don’t pay lip service to our craft, but actually teach people how to test, and how to do it well; and finally a world where testers are treated as peers by their colleagues. In such a world questions like “should we continue testing if smoke tests fail?” or “what is the best programmer-to-tester ratio?” or “what kind of metrics should be used to compare manual and automated software testing methods?” would be superfluous. In order to get beyond questions like “who is responsible for which kind of testing?”, let’s seek an understanding of the testing that we can perform as a team, jointly with programmers and testers. I am certain that we already have everything we need in order to achieve this. Let this wish become a truth.

July 30, 2012
The Testing Quadrants – We got it wrong!
The Testing Quadrants continue to be a source of confusion. I heard that Brian Marick was the first to write them down after long conversations with Cem Kaner. Lisa Crispin and Janet Gregory refined them in their book on Agile Testing.
A while back I wrote about my experiences with the testing quadrants in training situations. One pattern that keeps re-occurring when I run this particular exercise is that teams new to agile software development – especially the testers – don’t know which testing techniques to put in the quadrant with the business-facing tests that shall support the team. All there is for them, it seems, is critique of the product. This kept confusing me, since I think we testers can bring great value there. Recently I had an insight motivated by Elisabeth Hendrickson’s keynote at CAST 2012.
Two weeks ago, I attended CAST 2012, where Elisabeth Hendrickson held a keynote. If you haven’t already, you probably want to watch the recording.
In her talk Elisabeth provided a new interpretation of the testing quadrants that looks like a promising experiment for overcoming some of the drawbacks I had with the second quadrant, the business-facing tests that shall support the team. Elisabeth proposed a re-labeling of the two columns to confirmation vs. investigation.
“The Thinking Tester, Evolved”, from Elisabeth Hendrickson
With the new labels the quadrants are a bit different from the ones that Brian Marick originally wrote down. On the left-hand side, the confirmatory tests that are technology-facing are of course the unit tests and the class-level integration tests that developers write. These help drive the development of the product forward.
In the second quadrant now are the expectations of the business. Usually I try to express most of them in the form of executable specifications that I can automate. I can also imagine other business-facing expectations that I cannot automate falling in here, like the reliability of the software product.
In the third quadrant, on the top right, are tests that help investigate risks concerning the external quality of the product. For me, Exploratory Tests fall into this category, but also usability concerns and end-to-end functionality.
The fourth quadrant then probably consists of internal qualities for the code like changeability (static code analysis comes to mind), maintainability (how high is your CRAP metric these days?), and design flaws like code smells.
If you now wonder where quality attributes like performance and load testing are in this new model, I consider them to be either part of the business-facing expectations or part of the investigative process for external quality attributes. I think this depends on whether they have been made explicit early enough, or whether we find out during our investigations that load times are too long, for example.
I think there are more things to discover in this new model. Like with any models, they can help you think in a particular direction, but please don’t try to use this as your only guide. Use this model wisely, and apply critical thinking to your results.
P.S.: If you noticed the typo in Elisabeth’s slide deck, you are not the first to do so. If you watch the video, one of the attendees of the keynote points it out, too.

July 29, 2012
Actually, there are best practices
A while back I ranted about best practices. Among the things I found for that particular blog entry is that there are quite a few definitions of the term “best practice” out there. Nowadays, if it’s not on Google, it doesn’t exist; for best practices, it turns out, Google is quite capable of delivering a definition. Although I resonate with the principles of context-driven testing, I recently found the second principle unhelpful. The second principle states:
There are good practices in context, but there are no best practices.
Like many other people I respect, I used to start ranting about best practices whenever people asked for them. In training situations in particular, though, this does not help much. J.B. Rainsberger’s introduction to the Satir communication model helped me understand why that is.
The Satir Interaction model
The Satir communication model describes four phases of communication. First, there is intake, where the words from my conversation partner enter my brain. Then my brain forms meaning out of them; based on my previous impressions I will derive different conclusions. Based on my interpretation of the meaning I derived from the intake, I assign significance to the message, and finally decide whether and how to respond.
In any of these four phases miscommunication can happen. If my hearing is impaired, I will take in a different message. If I derive a different meaning from the words as spoken, I will come up with a different interpretation, and probably assign a different significance to them. Early on I found that there was some miscommunication happening between people looking for best practices and my understanding of the term. So, let’s take a look at what happens when miscommunication occurs.
Debugging a conversation
Assuming I have correctly heard the message from the person talking to me, I can still derive a different meaning than they do, based on our different experiences. We will then have a misunderstanding of the words as spoken. In order to debug our conversation, we will have to work towards a shared understanding.
When I react in an emotional way to the message, it might be that I assigned a different significance to the words as spoken. Then we will have a misinterpretation between us. In order to debug our conversation, I will need to understand the interpretation of the other person.
With these two concepts in mind, let’s take a look at what happens when someone says “best practice”.
Debugging best practices
Google cites Wikipedia on best practice:
A best practice is a technique, method, process, activity, incentive, or reward which conventional wisdom regards as more effective at delivering a particular outcome than any other technique, method, process, etc. when applied to a particular condition or circumstance. …
Note that the definition above includes particular conditions or circumstances under which practices deliver more effectively according to conventional wisdom. Usually folks refer to folklore wisdom in particular contexts like Waterfall, Scrum, or XP.
Usually I thought about best practices as practices for which no better way exists. No better way now, and no better way in the future. In the end it’s the best way to do something, right?
Well, it seems there is a mismatch between the Wikipedia definition and my understanding of the term “best practice”. Over the past one or two years I found out that I cannot resolve this miscommunication by responding emotionally, harshly, or dismissively. Talking someone down does not help in this situation.
Instead, I started to ask what the other person is looking for in best practices. Then I can find out whether he is really looking for a practice that will always work. More often than not, though, I have found that the other person is actually looking for some hints on the practical application of the thing I am speaking about. And reacting to that expectation helpfully shows far more respect for the other person.

July 19, 2012
What I learned at Test Coach Camp 2012
A few days before CAST 2012 I attended Test Coach Camp in San Jose, CA. There were thirty passionate people spending time exchanging coaching tricks and ideas. Here is what I learned in these two days.
The most important thing that I learned started in the morning. I approached Sigurdur Birgisson, and he could finally explain to me how to tie my shoe laces. All weekend my laces didn’t come undone by themselves. I think I have lived happier since then. At Test Coach Camp there were other sessions on concrete skills, like juggling, and how coaching in Kendo and swimming relates to coaching testing skills.
Besides that, I attended a session by Michael Larsen on the method he uses for coaching boy scouts, called EDGE. The acronym stands for Explain, Demonstrate, Guide, and Enable. We later suggested treating the second E more as Empower. The idea is to start with a connection to the students first. Explain to them why you are the person who should teach them this new thing. Get their attention, and raise their curiosity. Then demonstrate the particular thing you want them to learn. Michael showed us how to make a longer rope manageable with a monkey thing (I don’t remember the term, nor can I find it). The idea is to knot the rope together in multiple loops, while still being able to extend them if you have to. After having shown it to us, he helped us do it on our own. That was the guiding part. He tied a head start, and handed it over to one of us, who continued two to three loops, and then passed it further on. In the end, the last person did the final few loops and the finishing part. In order to empower us, he extended the rope again, and asked us to do it on our own.
I see lots of parallels between the EDGE method as Michael showed it to us and the 4Cs from Training from the Back of the Room. There, you start with the Connection of learners to the material, just as Michael did when he raised our attention and explained the problem we would like to solve. The second C stands for Concept; by demonstrating it, Michael introduced the concept of the knots to us. Then we entered Concrete Practice (the third C) by actually doing it, and Concluded (the fourth C) with a try from scratch completely on our own.
I had other, smaller take-aways, mostly from the conversations in between sessions – although we did an Open Space at Test Coach Camp. One of them is the idea to teach Exploratory Testing to Product Owners, thereby helping them with accepting user stories. This idea from Matt Barcomb is completely in line with a talk on Exploratory Testing for programmers that Sigurdur Birgisson will hold at Agile 2012. I think both ideas can bring great support for testers, and extend their reach. If testers are taking up more and more programming skills on a Whole Team, it seems reasonable to me that programmers and product owners also learn more about the particular skills it takes to test an application.
There were other interesting sessions that I attended, but the learnings from them will probably take more time for me to digest. Overall it was an awesome two-day Open Space, and I will be working hard to make it to next year’s Test Coach Camp again.

July 16, 2012
CAST 2012: Helping Thinking Testers Think
At CAST 2012 Geordie Keitt held an introduction to the work of Elliott Jaques in organizational psychology.
Jaques developed the field of organizational psychology, as well as the concept of the midlife crisis, and noticed worldwide patterns of fair pay after World War II. Keitt introduced declarative, cumulative, serial, and parallel processing patterns. These four patterns refer back to the four logical operators OR (declarative), AND (cumulative), IF-THEN (serial), and IF-AND-ONLY-IF (parallel).
As an example of parallel processing he offered wolves in a pack hunt, or elephants defending their young. They coordinate their efforts for a common goal, and they communicate by performing certain actions.
For serial processing Keitt offered schooling fish, or migrating birds and beasts. The honeybees’ waggle dance is an example of cumulative processing. For declarative processing – one thing at a time – you can find examples in amoebae, unicellular organisms, and zombies.
We humans use recursion over higher orders of abstraction, and we are able to combine these different patterns of processing. For example, while building a house with concrete blocks, we can use parallel processing in two groups, even though the individual efforts are rather serial.
Our brains categorize information at different levels of abstraction. At higher levels of abstraction we use, for example, intangibles such as tests, features, requirements, oracles, product areas, or risks. We also have higher abstractions at hand, like a business domain or the testing industry as a whole. Getting more abstract still, we have ethics, justice, society, and culture. These are categories of categories of categories of intangible objects within systems.
Keitt introduced Jaques’ concept of the Applied level. He claimed that we testers face the challenge of working as close as we can to our maximum level.
It clicked for me when Keitt provided a picture of the seven levels, or work role strata, as Jaques called them: Procedural, Diagnostic, Managerial, Orchestral, Definitive, Integrative, and Strategic. The typical timespan of discretion varies from 1 day to 3 months on Level I (Procedural) up to 20+ years on Level VII (Strategic).
Keitt described the main challenge as aligning the stratum of a role with the right level to tackle it. In order to create a dream job, you eventually have to do that: assign testers to roles where the stratum equals the tester’s maximum Applied level. Similarly, make sure the tester’s boss is one level higher, for example working on strategic decisions while you work on the integrative level.
Keitt explained the difference between the context and the stratum being at the right level, too low, or too high. This forms a problem space, and in essence draws out the difference between a dream job and a dissatisfying job.
In the Open Season, Keitt challenged the participants to provide him with other examples to work through.

CAST 2012: The Testing Dead
At CAST 2012 Ben Kelly spoke about the Testing Dead. I think it was a reference to some folks claiming that testing is dead, while Ben rather pointed out that testing is not dead but undead, with zombies carrying their undead bodies in front of a computer to do a 9-to-5 job.
Ben defined zombie testing in the beginning. He spoke about Frederick Taylor, and how he introduced scientific management. Taylor broke down the activities of craftsmen into smaller steps, and worked with the craftsmen to make these steps more efficient. Basically, Kelly claimed, he introduced process in order to turn craftsmen into zombies following it.
This relates to zombie testing with all of its templates, test plans, and the biasing of our work as testers by a particular context. Of course this does not help if the context changes over time. Kelly claimed that any thinking tester should be able to be dropped into any context and adapt to it within seconds. I think we need a large tool belt of practices and approaches in order to do that.
Kelly described several classes of zombies, and especially testing zombies. The Passenger is riding the testing bus on the way to a better destination. He typically does enough to avoid reprimand, but little more. He also doesn’t see a point in improving his testing skills.
The Confused believes that he is in ‘Quality Assurance’. He enjoys the prestige of the title – like Quality Assurance Engineer – despite having none of the authority that should accompany it. He doesn’t seem to recognize the blame for failure that accompanies this position in the form of ‘you’re doing it wrong’.
Kelly raised the question whether this really is a problem. Why wouldn’t we just live and let them (un)live beside us? Are they hurting anyone, anyway? Kelly pointed out that a culture of zombie testers does hurt us: the segregation of programmers and testers, and backlash against testers in general. He pointed to a discussion in which project managers spent half an hour debating whether this or that bug would be priority 1 or 2. Everyone with children knows there is no point in discussing who is first or second; you eventually have to deal with both. The main problem with these zombies is not so much that they exist, but that they are seen as the norm by non-testers like programmers and managers.
What should you do with zombie testers in your office? How do we deal with them? Blog about thinking testers, testers who are not dead. To keep them from influencing the culture of your company, you should probably keep them out of your building. This reminded me of the No Asshole Rule by Bob Sutton – a great book that you should read, if you haven’t already. You can keep undead testers out of your building by looking properly at their resumes when they apply for a job. Do they have six years of experience, or rather two years of experience three times over? Can they define testing in a way that you agree with?
Kelly described the real issue as other people we work with believing zombie testing is the norm. The challenge lies in presenting this to them in a way that is both meaningful and compelling. This is easier said than done, especially for management level people.
Thinking testers are often faced with responses such as “how do you measure your testing?” or “how do you know whether people are doing their job?” Kelly explained that depending on the level of organizational dysfunction – I would say, how infected your company is by the testing undead – you will likely need to do a degree of re-education of your non-testing peers. A tester’s code of conduct sets expectations and lets them know what you will and will not do. From my experience this is the proper expectation management that we need anyway.
Help other people get the message. We should go out to non-testing conferences. Present at conferences for programmers, project managers, and recruiters. He also picked up a topic I am currently working on: offering guest lectures at universities for students of computer science and information systems. Get together in user groups and gatherings for programmers. Talk to them about testing, and help them improve their work with testers. Also make sure to invite them to your own testing events.

July 13, 2012
How would you test this: Passenger Airline System
This is an experience report that falls in many categories at the same time. I think the most remarkable one is the personal fail category (hooray, I learned something!).
As a consultant I do a fair amount of traveling. Most of the time I stay on the ground, though on my most recent trip to San Jose, CA for Test Coach Camp and CAST that was not an option. So, while lying jetlagged in the hotel room, I decided to blog about my trip here: why I ended up testing a passenger airline system, which bugs I found, and which follow-up tests I can imagine running from here.
A few months ago I booked my plane tickets to San Jose for CAST and Test Coach Camp. At that time, I was not totally sure whether I would start from home or from some place else. I decided that it was most likely that I would travel from home, and looked for flights from Düsseldorf – the most comfortable nearby airport for me – to San Francisco. I found several trips, among them one going through Munich, and decided to take that flight. In case I ended up with a client gig in Munich, it would be easier for me to re-schedule, leaving out the flight from Düsseldorf to Munich and starting from Munich instead.
When heading back, it seemed more comfortable to go through Frankfurt, and head home from there. So I ended up with the following flights:
Thursday, 12:50 from Düsseldorf to Munich
Thursday, 16:05 from Munich to San Francisco
Next week, 14:55 San Francisco to Frankfurt
I booked them in segments.
At the end of last week it turned out that I would be in Munich on Tuesday and Wednesday for a client gig. Going home from Munich is a bit tedious for me: it takes six hours by train, and close to as long by plane considering the traveling overhead. So, I decided to fly directly from Munich rather than going home from Munich only to travel to Düsseldorf and fly back through Munich the next day.
As a user of the airline system, I thought it would be okay if I headed to Munich airport by noon on Thursday, checked in my bags, and got some food before leaving. That’s what I did. I checked my bags at 12:20, and headed for some food at the airport. My plane was going to leave at 16:05. So, once I had checked in, I assumed everything was quite fine for the time being.
When boarding finally started, I figured that was not the case. I was asked to have a chat with a clerk from the airline. It seems that since I hadn’t checked in in Düsseldorf, the airline system automatically took me off the connecting flight from Munich to San Francisco. And this even though I had already checked in my baggage for that flight.
Now, while it was not a problem to get me a seat on that plane, my bags didn’t make it onto it. From a technical viewpoint it seems a bit weird to me to have a partial transaction (my bag) in the airline system while everything else got cancelled. Something like a plausibility check could have prevented remaining portions of my transaction from ending up in that system, and could have served as a heuristic for the check-in clerk who took my bag at noon.
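The plausibility check I have in mind could be as small as refusing an automatic cancellation while checked bags are still attached to the booking. A purely hypothetical sketch; Booking and cancel_if_consistent are invented names, not the airline’s actual system:

```python
class Booking:
    """Invented stand-in for a passenger booking on a connecting flight."""
    def __init__(self, passenger):
        self.passenger = passenger
        self.checked_bags = 0
        self.cancelled = False

    def check_bag(self):
        self.checked_bags += 1

def cancel_if_consistent(booking):
    """Automatically cancel only when no checked bag would be orphaned.

    Returns True when the cancellation went through, False when a human
    clerk should look at the booking instead.
    """
    if booking.checked_bags > 0:
        # Partial transaction detected: flag for a clerk instead of
        # silently dropping the passenger from the flight.
        return False
    booking.cancelled = True
    return True

booking = Booking("passenger who skipped the first leg")
booking.check_bag()            # bag already checked in Munich at noon
cancel_if_consistent(booking)  # returns False: keep the booking, alert a clerk
```

The point is not the three lines of logic but where the guard sits: before the automatic cancellation commits, not after the passenger shows up at the gate.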
I also asked the clerk what I could have done differently. He told me that I should have phoned them. Fair enough.
So, while for some obvious reasons I do not recommend testing an airline system in production, here are a few questions that keep puzzling me now, and that should trigger some follow-up tests:
Where did my bag go once the system in Düsseldorf cancelled my connecting flight? It was already checked in in Munich at that time, I assume.
The airline clerk who checked in my bag in Munich didn't seem aware that I was supposed to check in my bags in Düsseldorf. Did she ignore that thought, or was she simply not aware?
Would there have been no problem at all if I had shown up an hour or two later at the Munich airport?

July 9, 2012
Dear Management,
Dear Management,
I owe you an apology. I have misled you by listening to someone who misled me, instead of forming my own opinion based on the sources that person laid out. That said, I have been wrong in repeating rants and inventing my own based on a false premise. As a math teacher of mine once explained to me: based on a false premise, you can prove anything.
I found people willing to challenge my belief system, and I thank them for that. They made me walk the hard path of building my own opinion on the topic, and on the sources that the person who initially misled me pointed to. I was amazed that I came to a different understanding: similar in some ways, yet different in the most basic claims of that other person. I am thankful for having had the opportunity to learn something from this.
And now, beware, alpha-animal-like betrayer of my thoughts. You not only lost a follower of your rants when I reached my own insights, you also lost my respect. From now on I will tell others about my perspective on things, and where I think you have gone astray. I will provide my own picture and criticize your ideas wherever I feel it's appropriate. I will warn others about you if you cannot back up your claims with references. I can.
Sincerely yours,
a learning employee, and your harshest critic.
An Afternote
I deliberately didn't relate this to anything real in the text, in the hope that you will notice how and where you have been misled in the past. Instead of being depressed about it, recognize your failure, learn from it, and confront the other person with your new insights. The worst thing that can happen is that you find out this particular person is not interested in debating. In that case, make sure to turn your Bozo filter on. Things could be worse.

July 8, 2012
My biggest take-away from Bug Advocacy
At times I find quite interesting things in topics that I don't seem particularly interested in. A recent example, again, comes from Let's Test in May 2012. While at the conference, I read through the program and thought that I didn't need to learn anything new about recent trends in bug reporting. Preferring to work on Agile projects, I don't think I will use a bug tracker much in the future.
On the other hand, I knew that I had signed up for BBST Bug Advocacy in June. So I kept wondering what I would learn there, and whether it would be as work-intensive as Foundations was. I was amazed by some things. This blog entry deals with my biggest learning: building blocks for follow-up testing, something I think a good tester needs to learn regardless of their particular background.
When I find a bug in a program, I usually start with some follow-up testing activities. But how do you test around a problem? Well, easy: vary the variables. What's a variable? Elisabeth Hendrickson once taught me: everything that can vary. Cem Kaner explains four categories of variables:
Vary my behavior
Vary data
Vary settings
Vary configuration
Let's take a look at each of these in more detail.
Vary my behavior
After having replicated a problem, I am interested in exploiting it. What's the worst-case scenario in which I can make this bug occur? A first approach is to vary my own behavior. For example, in a typical desktop application, instead of hitting Ctrl-V I can use the right mouse button's context menu, or hit the corresponding entry in the Edit menu.
I can also try to type very slowly into a text editor, or I can use the mouse to navigate between entry fields instead of the tabulator key – or maybe use the tabulator key instead of the mouse.
With my variations I can trigger different circumstances. Think about asynchronous transfers in today's AJAX websites: what is the fastest pace at which I can trigger events so that I run into a race condition? This could easily expose a problem in the security system or with transactions. Everything that makes the bug even worse in daily use is interesting to me in follow-up testing mode.
Vary data
Similarly, I can vary all the data that my program is using: the input files, the data I use to replicate the bug. Can I trigger the same behavior when I use a different language? What about Unicode characters? What about sentences that do not make sense?
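These data variations lend themselves to a small table-driven sketch. Everything here is hypothetical and made up purely to illustrate the heuristic: normalize_name stands in for whatever function shows the bug, while the variations (surrounding whitespace, a name from another language, the same name in decomposed Unicode, nonsense input) come straight from the questions above:

```python
import unicodedata


def normalize_name(name: str) -> str:
    """Hypothetical function under test: trims and NFC-normalizes a name."""
    return unicodedata.normalize("NFC", name.strip())


# Data variations for follow-up testing.
cases = [
    ("Alice", "Alice"),                    # the happy path
    ("  Alice  ", "Alice"),                # surrounding whitespace
    ("Gärtner", "Gärtner"),                # non-ASCII, another language
    ("Ga\u0308rtner", "Gärtner"),          # same name, decomposed Unicode ('a' + combining diaeresis)
    ("asdf qwer zxcv", "asdf qwer zxcv"),  # a "sentence" that makes no sense
]

# Collect every variation whose actual output differs from the expectation.
failures = [(raw, normalize_name(raw))
            for raw, want in cases
            if normalize_name(raw) != want]
```

Each new data idea becomes one more row in the table, so follow-up tests stay cheap to add.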
Vary settings
I may also vary settings like configuration files, registry entries, cookies in the browser, or files under /etc on Unix machines. Even on mobile phones I can sometimes vary behavior by uninstalling the Dropbox application, switching off some of the security settings, or turning cellular data or wifi on and off.
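The same table-driven shape works for settings. In the sketch below, build_greeting and its two flags are invented stand-ins for whatever cookies, registry entries, or configuration files the real application reads; the point is simply re-running one check under each settings variation:

```python
def build_greeting(name: str, settings: dict) -> str:
    """Hypothetical function whose behavior depends on settings."""
    greeting = f"Hello, {name}"
    if settings.get("uppercase"):
        greeting = greeting.upper()
    if settings.get("exclaim"):
        greeting += "!"
    return greeting


# Settings variations: defaults, each flag alone, and the flags combined.
setting_variations = [
    {},
    {"uppercase": True},
    {"exclaim": True},
    {"uppercase": True, "exclaim": True},
]

# Exercise the same scenario once per settings variation.
results = [build_greeting("Alice", s) for s in setting_variations]
```

Combinations of settings are often where the interesting failures hide, which is why the list includes the combined case and not just each flag in isolation.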
Vary configuration
When it comes to varying the configuration, I may want to use a different machine with slower or faster bandwidth to the Internet, a slower or faster CPU, more or less memory, or a different operating system, like Windows 7 vs. Windows Vista or XP. But also think about iOS 3.5 vs. iOS 5 and iOS 6. I might also change the particular driver I use for a printer, use the default mouse driver instead of the Logitech one, or use the trackpad.
I think these categories help in coming up with more follow-up test ideas. If we complement them with Elisabeth Hendrickson's Test Heuristics Cheat Sheet, we may come up with many more concrete variations.
Happy follow-up testing.
