Markus Gärtner's Blog, page 22
March 23, 2012
TestBash: An 8-layer model for Exploratory Testing
At the TestBash in Cambridge,
Steve Green introduced an 8-layer model for Exploratory Testing.
Steve started with some quotes about Exploratory Testing that he hears from other people:
Ad-hoc and random
Unstructured and unplanned
just trying to break the system
don't know what you've done
don't know what you haven't done
not repeatable
I encounter a number of these during my own work as well.
If you take a look into other professions, exploration is a highly valued activity, Steve explained. Sir Francis Drake for example certainly had a plan and knew where he had been, still people took him very seriously. How come this is different in testing?
In Steve's earlier days he tested a system by following a set of steps, which eventually broke the whole system. By identifying the steps to reproduce the bugs they could find the root cause and understand why it failed, but that took a lot of time. Over the past ten years our community has come to an understanding of Exploratory Testing that is highly structured in nature. That structure includes six building blocks:
Inventory – what is there to test?
Oracles – how do we know if it's right?
Test plan – a flexible outline of our work
8-layer testing model – a structured approach to exploration
Reporting – minimal but sufficient test reporting is crucial in a management report, and we can zoom into more detailed information if necessary
Management – ideally session-based (but I don't dare to call it a best practice)
The 8-layer model is a framework, not a process. It helps us plan and control our testing. It also helps us to find bugs with the simplest sequence of events and the most "vanilla" data, making diagnosis and bug advocacy easier. The 8-layer model also provides a vocabulary for reporting test coverage.
Steve explained that there is an underlying paradigm: we do things only to the extent that it is useful to do so. That might mean documentation that is minimal with regard to test coverage and results.
The 8-layer testing model consists of
Input constraint and data validation tests
Input combination tests
Control flow tests
Data flow tests
Stress tests
Basic scenario tests
Extended scenario tests
Freestyle exploratory tests
(ICICCFDSBSESFE doesn't form a mnemonic, unfortunately.)
You can leave out some of the layers. For example, if your developers have earned credibility by doing decent unit testing, you can focus more on the later layers of the model.
In the first layer, we focus on input constraints and data validation tests. The TestObsessed heuristic cheat sheet provides some examples for these. Mandatory fields, maximum field lengths, as well as domain constraints like permitted characters and formatting rules are some examples that Steve mentioned. If there is a functional specification or data dictionary, we can compare the actual behavior with the intended behavior. If there isn't, we progressively build our own data dictionary. Steve explained that they once used voice recognition software to feed data into the system. That way they found out what happens when there are no keyboard or mouse events at all – the software should still work.
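A minimal sketch of what such layer-one checks might look like in code. The validator and its rules are invented for illustration, not part of Steve's talk:

```python
import pytest

# Hypothetical validator standing in for the system under test.
def validate_username(value: str) -> bool:
    return 1 <= len(value) <= 20 and value.isascii() and value.isalnum()

@pytest.mark.parametrize("value, expected", [
    ("", False),           # mandatory field: empty input must be rejected
    ("a", True),           # minimum length boundary
    ("a" * 20, True),      # maximum length boundary
    ("a" * 21, False),     # one character past the maximum
    ("user name", False),  # forbidden character (space)
    ("üser", False),       # character-set constraint (non-ASCII)
])
def test_username_constraints(value, expected):
    assert validate_username(value) == expected
```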
On layer two we start looking at combinations and how things interact with each other. We test relevant combinations if we know that inputs interact with each other, and we also look for undocumented interactions. Steve pointed to the pairwise testing work from James Bach and Justin Hunter, which can help come up with a minimal set of combinations for a given set of inputs.
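To illustrate why pairwise combination testing pays off, here is a toy greedy sketch (not Bach's or Hunter's actual tools) that covers every pair of parameter values with fewer tests than the full cartesian product. The parameters are invented for the example:

```python
from itertools import combinations, product

# Invented parameters; real inputs would come from your test inventory.
params = {
    "browser": ["Firefox", "Chrome"],
    "os": ["Windows", "Linux", "macOS"],
    "account": ["guest", "registered"],
}
names = list(params)

def pairs(case):
    """All parameter-value pairs a single test case covers."""
    return {((a, case[a]), (b, case[b])) for a, b in combinations(names, 2)}

candidates = [dict(zip(names, row)) for row in product(*params.values())]
required = set().union(*(pairs(c) for c in candidates))

tests, covered = [], set()
while covered != required:
    # Greedily pick the candidate covering the most still-uncovered pairs.
    best = max(candidates, key=lambda c: len(pairs(c) - covered))
    tests.append(best)
    covered |= pairs(best)

print(f"{len(tests)} pairwise tests instead of {len(candidates)} exhaustive")
```

For these three parameters the greedy pick gets down to six tests instead of twelve, and the gap widens quickly as parameters are added.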
In layer three we look at flows through the system. These are aimed at the business logic in a structured manner. We identify all logical paths through the system, and the data required to force the system through those paths. Generally we use very "vanilla" data to avoid triggering bugs that are not related to the logic. We use unique data in every field where possible, constructed such that we can easily tell if it is corrupted, missing or in the wrong place.
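One way to picture the "unique data in every field" tactic (my own sketch, not Steve's tooling): tag every value with its field name and a serial number, so a value that later shows up truncated, missing or in the wrong column identifies itself.

```python
import itertools

_serial = itertools.count(1)

def tagged(field: str) -> str:
    # A value that names its own field: if "city-0003" ever appears in a
    # surname column downstream, the misrouting is immediately visible.
    return f"{field}-{next(_serial):04d}"

record = {f: tagged(f) for f in ("first_name", "last_name", "city")}
print(record)
# {'first_name': 'first_name-0001', 'last_name': 'last_name-0002',
#  'city': 'city-0003'}
```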
Layer four is closely related to this. Here we test data flow through the system, similar to the control flows in layer three. We push data in through the front-end and identify where it goes as all the logical paths are exercised.
In layer five we stress the system. Once we have identified where all the data goes, we push the maximum possible amount through each field and look for truncation or other forms of corruption.
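A sketch of the layer-five idea, assuming a documented field limit: fill the field exactly to its maximum with distinctive end markers, then compare what comes back out. The limit and the fake truncation are invented for the example:

```python
def max_length_payload(limit: int) -> str:
    """Fill a field to its documented limit; the distinct first and last
    characters make truncation at either end easy to spot."""
    assert limit >= 2
    return "<" + "X" * (limit - 2) + ">"

def diagnose(sent: str, stored: str) -> str:
    if stored == sent:
        return "round-trip intact"
    if len(stored) < len(sent):
        return f"truncated from {len(sent)} to {len(stored)} characters"
    return "corrupted in some other way"

payload = max_length_payload(255)
# `stored` would come from reading the value back out of the system;
# here we fake a silent database truncation at 250 characters.
stored = payload[:250]
print(diagnose(payload, stored))   # truncated from 255 to 250 characters
```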
Steve's sixth layer refers to basic scenario tests. Individual functions are executed in sequences that replicate basic happy-path user behavior. In comparison, the extended scenarios in layer seven combine multiple basic scenarios and execute them in large numbers to simulate real user behavior over a longer period. That might mean that we repeat the same test many times, or run the same set of scenarios in different sequences. He referred to James Whittaker's book on Exploratory Software Testing, whose tours expand on this concept.
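A hedged sketch of how layers six and seven relate: hypothetical basic scenarios (the three functions below are made up) get chained into longer sequences, varied in order and repeated, to approximate real usage over time.

```python
import itertools

# Invented basic scenarios from layer six, each operating on shared state.
def register(state): state["user"] = "alice"
def deposit(state):  state["balance"] = state.get("balance", 0) + 100
def withdraw(state): state["balance"] = state.get("balance", 0) - 40

basic_scenarios = [register, deposit, withdraw]

# Layer seven: run the same basic scenarios in every order, several times
# each, checking an invariant after each extended scenario.
for sequence in itertools.permutations(basic_scenarios):
    state = {}
    for _ in range(3):                 # repeat to simulate prolonged use
        for scenario in sequence:
            scenario(state)
    assert "user" in state, "registration lost during the session"
```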
The eighth and final layer of the model uses "What if…" tests to investigate what a user of the system can do. We use our full knowledge of the system to do things like looking for race conditions, trying multiple concurrent logins on the same account, or editing URLs. He referred to "How to Break Web Software" from James Whittaker for examples of this.
Overall I think that the proposed model can help more traditional testers see the bridge between &§$%&-certification lessons and Exploratory approaches to software testing. I expect thinking testers to adapt it soon, and to find more creative ways to test. If you don't do that as a tester, you're probably sticking to the dogma that Alan Richardson referred to earlier today. In the end I wondered how the layers could map onto different mission types in session-based Exploratory Testing – but I leave that to the ambitious readers of my blog. :)









TestBash: The Tale of a Startup
At the TestBash in Cambridge, Ben Wirtz talked about a tale of a startup, or when the chance of passing a test successfully is less than 50%.
Ben explained that he doesn't know a lot about testing, but a lot about startups, and he wanted to talk about the value that testers can bring to them. Ben developed an Android application called "Handy Elephant" (please don't test it, he said). He bootstrapped it for a year and eventually received some investment. They changed their business model in the past – a lot. Despite the changes in business plans, they tried to reduce the technical risk – at least that was the plan. It turned out that changing business plans repeatedly leads to a sinking ship with too much technical debt incurred.
The chance of anything passing a test successfully is less than 50% – ever, Wirtz explained. From day one startups are at risk of becoming a sinking ship. Lean Startup therefore focuses on learning: experiment (and deploy) continuously, and pivot your strategy if you need to. This refers to the Build-Measure-Learn loop that I also found in the Lean Startup book from Eric Ries. Through testing hypotheses you accelerate learning. By automating a lot of tests you can make changes in your business model while still keeping your tech working, through practices like Continuous Integration and Continuous Deployment. The main objective for a Lean Startup (or any startup for that matter) is to find the value proposition and your business model through accelerated learning.
Ben explained that they hadn't done much manual testing in their environment. Instead they work from a highly automated testing basis at the code level, so that the startup can pivot quickly. The advantage of being a startup is that users – early adopters – are often forgiving, and often enthusiastic about the product.
Traditional testing and quality assurance is usually mostly about not screwing up your product. In a startup this kind of manual testing is not that helpful; the situation is shifted. You are not too sure about your business model, and will find out more and more about it along the way. Building the perfect app is therefore not a goal of a startup. That's why more traditional approaches to testing likely fail in startups, I want to add.
As a tester, raise constructive criticism with questions like "what does the user actually want to do?" or by suggesting alternative implementations. He also added that if nobody listens to you, focus on learning yourself, and start your own company. Ben reminded us that we should assume that the app will still fail even after passing all of the automated tests.









TestBash: The Evil Tester's Guide to Eeevil
At the TestBash in Cambridge, Alan Richardson gave an introduction to Eeevil (with capital E) testing.
Alan started from beliefs and attitudes. Testers have to think and act differently in projects. We also have a mandate for humor in our work. We need to take responsibility for the product and our work, and for how we communicate, since words impact the thinking of the people around us. We should not start from dogma, but rather from our knowledge.
Alan introduced the Evil Tester avatar under which he spreads his work. The Evil Tester mocks Alan during his work. He explained that he has studied evil in all its forms, so he was the right one to tell us about it:
I've studied evil in all its forms, I've read a lot of comics.
When he was little, he learned about the left-hand path – the forbidden path towards knowledge. Reaching for the forbidden rather than the certain helps us reach for knowledge. Most testers take the certain route to the product. This is based on dogma, following a clear path: you don't have to think too hard to reach your goal, and you don't have to justify what you do since it's all laid out for you. The harder route is to go for the forbidden.
When Alan was a baby tester, testing was a necessary evil. In his early years on a Waterfall project there were a bunch of good things to do. He discovered that these good things didn't work for him, and that they blocked a lot of people on the project.
Alan referred to the psycholinguistics of good and evil. "Are you a good little tester?" could have been a question asked in my earlier years, too. When you become Eeevil you can start fighting those words, or use them to your advantage. Eeevil gives Alan's testing approach great power.
Citing Stan Lee, "with great power comes great responsibility", Alan explained that Eeevil does not know limits. Instead we impose our own limits when we start taking responsibility for our own lives. Alan explained that no one is born Eeevil; it takes hard work to become Eeevil. Do you take testing seriously enough to mock it? If you do, you can find all the things that are wrong with it. If you can point out the ridiculousness of your work, you can take completely new approaches to testing.
Reminding me of Cockburn's self-imposed rules, Alan continued: beliefs are made to be broken. Six Sigma's Five Whys, for example, aims at the beliefs that underlie our thinking. Start asking yourself why you are doing certain things. If you do, you will drive out much of the ridiculousness of your work.
Eeevil is seductive, Richardson explained. Taking a bite from the apple in the Garden of Eden was seductive. Richardson explained that Eeevil is simple: sin is the easy path, while doing the right things takes a lot of discipline. Alan explained that he is not Eeevil enough to go into a project and drop a bunch of templates on the testing team, saying "Here is what you need to do".
Alan referred to an experiment with children. A child was left in a room with some sweets. The teacher went out, stating that the child would be good if the sweets were still there when he re-entered the room, and bad otherwise. The teacher would hand the child some sweets if it hadn't touched them upon his return. The clever kid figures out that people sometimes lie, and takes the sweets anyway – accepting being a bad kid, but still getting the sweets.
Alan quoted the last assassin: nothing is true, everything is permitted. Referring to religion, there is one dogmatic road to heaven, but there are a thousand roads to hell. For testers this means that we should take the paths to hell before anyone else does. That way we can not only help others avoid these paths, but also see how bad a place hell really is. Exploring the paths to hell helps us gather information about the product.
Eeevil is wrong. Alan explained that we are paid to be Eeevil and be wrong. 1 + 1 might equal "two" instead of 2. Be prepared to be told that you are doing the wrong things in the wrong way. If you are not on a dogma path, you have to find your own path.
At times, others might look to you for certainty. Alan proposed to foster doubt in such cases; this will help others leave the dogmatic path. Testers need to have a bad attitude. We are still team players, but we see and say things that other people don't. We help others – foremost managers – see the information they need in order to fix the situation.
Eeevil is a means to an end for Alan. The safety net of dogma can be defeated by your own knowledge: the net is still there, but you shouldn't rely on it while balancing above it.









TestBash: Visualizing Quality – A Random Walk of Ideas
At the TestBash in Cambridge, Dave Evans held the first session on Visualizing Quality with a walkthrough of some ideas about it.
Dave opened with a hint at systems. He introduced his family as a system and made some references to the quality of the relationships he had recognized recently. Dave reminded us that quality is a relation between people; too often we forget this when measuring presumably objective figures from systems. He referred to Jerry Weinberg's definition of quality as value to some person (at some time, I might add).
Dave proposed some ideas on visualizing quality. With a comparison of military budgets, he raised the point of putting data in context. Compared to China ($61 billion), the UK ($60 billion), or Japan ($47 billion), the US budget of $607 billion seems roughly ten times bigger. A different context is the total debt of Africa, which sums to $254 billion. Looking at the percentage of GDP, the US scores at 4%, while Myanmar spends a higher percentage of its GDP on its military. Evans reminded us that figures and numbers have a context, and I think we should be aware of that context when providing these figures.
Another idea that Evans provided was based on heat maps. He showed a heat map of Manchester, where the biggest heat was not coming from the center of the city; instead the suburbs produced more heat than the center. He also had a heat map of one player in a football game. What if we had a heat map of the locations where a developer spent the most time in the code? I actually came across such an approach a few years ago, based on a paper called "Don't program on Fridays", which correlated data from open source bug trackers and the version control system to indicate change-risky areas in an Eclipse plugin. I haven't come across such an approach in a live project. I am pretty sure that every programmer in a project actually knows where the red areas in such a heat map would be. What if we went over and asked them to show us "THE class"?
On scale, Evans showed the Penfield homunculus, a scaled representation of the sensory cortex in which fingers, for example, occupy a much larger area of neurons. He bridged the gap to quality with this representation, referring to different ways of scaling the visualization of our sensory information about our lines of code, but also about our customers.
Evans showed some examples of visualizing software systems and quality. He started with an infographic of Napoleon's Russian invasion and retreat in 1812 and 1813, which showed the number of soldiers by the width of the line; like a burn-up chart, the retreat was drawn as a black line. He also showed a map from CodeCity, which visualizes your code base as a city map.
Dave also referred to James Bach's low-tech testing dashboard as a way to visualize software quality. The emphasis there is always on subjective qualities of the testing project: How much effort are you putting in? How much coverage do you have? What is your subjective assessment of the quality in that area? You cannot get this kind of information out of most test management systems around.
Dave also showed an example from a project they created at SQS. In its nature it reminded me of Bach's low-tech testing dashboard, but it was based more on traditional test planning measures like the number of tests prepared, and so on.
Evans reminded us of a few things we might be interested in tracking:
Profitability & Success
Customer goals and stakeholder values
customer needs of our systems
patterns of satisfaction and dissatisfaction
product qualities and their current levels
measures of qualities during development
progress of the act of measuring qualities
Evans distinguished between people and systems. He referred to Weinberg's definition (see above), and to the one from Phil Crosby: "Quality is conformance to requirements". The first is subjective; the latter claims to be objective.
He introduced the user story clause "As a <role> I want <feature> so that <value>". Evans explained that he sees a lot of teams in the Agile context who don't give enough care to these different stakeholder values in their user stories. I agree with that. On product quality attributes he used the user story format to refer to the quality of the feature in the "so that" clause. Referring to Tom Gilb, the "how well" is based on quantified values on a defined scale.
Evans showed a prehistoric windshield wiper – a manual, flexible one. It does the job, but how well does it do it? Asking how it could be improved, you can come up with things like automating it and getting better at keeping rain off the windshield.
Combining stakeholder values and product qualities led Evans to value maps. He worked through an example on car parking, introducing different stakeholders and their particular why – the business value for the stakeholders, like different driver types with their goals. He showed different features for the different stakeholders. The feature itself is not particularly interesting to them; what matters is how well we implement it. This consideration finally leads to different acceptance criteria.
Dave closed with some things to remember:
remember the humanity of quality
communicate with stakeholders
find the right level at the right time
turn data into information
engage more of the brain









March 18, 2012
Let's Test prequel with Huib Schoots
As a prequel to the Let's Test conference in May, I interviewed some of the European context-driven testers. Today we have Huib Schoots who is a board member of the Dutch testing network association TestNet.
Markus Gärtner: Hi Huib, could you please introduce yourself? Who are you? What do you do for work?
Huib Schoots: Hi Markus, my name is Huib Schoots. I live in Den Bosch in the Netherlands. I am a software tester. Why? Because testing is fun! I have 15 years of experience in IT and software testing. After my study in Business Informatics I started in IT as a developer. Soon I found out that developing wasn't my cup of tea. After a year of being a test engineer automating tests, I found out that software testing was fun. I have experience in various roles within testing. At several companies I provided (international) testing consultancy in various roles: tester, test coordinator, test manager, trainer, coach but also in project management.
Currently I am team manager for test, release and implementation management at Rabobank International. I like combining people (management), test management and agile testing, trying to improve testing and make it more fun. I try to share my passion for testing through coaching, training and giving presentations on several testing topics.
In my free time I am a board member of TestNet, the Dutch testing network association. TestNet offers its 1500+ members the opportunity to meet with other testers and share knowledge and experience. Within TestNet I am responsible for the special interest groups and I participate in the event & program committee. I also co-founded and participate in DEWT, the Dutch Exploratory Workshop on Testing. I am a member of AST and a student in the Miagi-Do school of software testing.
Besides testing I love to play trombone in my brass band, read, travel, take photos, scuba dive and play golf. To stay fit I recently started running. I also love to play strategic board games, but unfortunately I do not have the time to do that on a regular basis.
Markus Gärtner: How have you crossed the path of context-driven testing?
Huib Schoots: Without knowing it, I practiced the basic principles of the context-driven school in the past, and Lessons Learned in Software Testing has been my favorite testing book for years now. Quite some time ago I was not very happy with how the majority in the Netherlands approached their testing. I saw too much process, favoring certificates over skills. In some projects I tried exploratory testing and I liked it.
In 2010 I saw some tweets from people I knew from TestNet speaking about exploratory testing. We met at my kitchen table and we founded DEWT. It felt good to have a group of like-minded people to discuss testing with. The Rapid Software Testing course I did in 2011 made it perfectly clear to me that I am quite context-driven.
Markus Gärtner: Connecting with like-minded people is one aspect of context-driven testing. Workshops like DEWT help, but also conferences. But what do you do to hone your testing skills?
Huib Schoots: I hone my testing skills in different ways: conferencing, writing and practicing. As you mentioned, I am in DEWT, where we discuss testing and learn from each other. I like visiting conferences and, even better, giving talks at them. I have a blog and write articles for a test news website and Agile Record. I am also writing a book for TestNet on the future of software testing. Preparing talks and writing helps me structure my thoughts, but also forces me to investigate subjects in more depth. The feedback I receive helps me gain insights from different angles. This is also why I am very active within TestNet. As a board member I am responsible for special interest groups and I participate in the event committee. This gives me the opportunity to facilitate groups and organize events where testers meet and learn.
Last year I became a student in the Miagi-Do school of software testing. I consider this school a community where enthusiastic testers learn and help each other become better testers. The instructors give the students challenges to practice their testing skills, and when you show your skills you can earn belts. I like the challenge I received from you last week, where I have to study the advice given in a 90-minute video of a presentation. My assignment is to develop my own approach to testing. This is great stuff and I can't wait to start working on it.
Recently I organized a testing dojo at work. It was fun and we learned a lot. Some others became enthusiastic about it as well, and they will organize the next dojo. I wish I had more time, because Weekend Testers sounds like fun and a great opportunity to learn, but unfortunately I can't find the time to participate.
In April I will do the BBST foundation course. What I like about this course is that you learn more about testing, but also improve online study skills, such as learning more from video lectures and associated readings, and improve online discussion and working together online in groups. I am looking forward to meeting new testers online and working with them in this course.
Markus Gärtner: How do you apply context-driven testing at your workplace?
Huib Schoots: In the past I applied some context-driven elements in several projects without knowing it was context-driven. For example: trying exploratory testing, considering the context of my project more important than the process, and organizing intervision sessions to learn from others.
In 2010 I joined Rabobank International as a team manager. I also became a member of the lateral meeting, a formal body consisting of testers and team managers who decide about test policy, test approaches and training for our testers. There I told the others about Rapid Software Testing. In June 2011 Michael Bolton came to Rabobank to teach Rapid Software Testing to 50 testers. This kicked off an interesting change: since then we have been trying new things like mind maps, low-tech dashboards and heuristics, and we focus more on skills and less on process.
Markus Gärtner: Considering my experiences with the banking sector, do you face struggles with the context-driven way at your workplace? What do you do to change your organization to a perceived less structured way of testing?
Huib Schoots: Not really (yet). We don't do exploratory testing in full swing. Change within our organisation is difficult: it takes quite some time to change the way we work. The most resistance to a different way of working comes from an unexpected side. Some time ago I had a meeting with our audit department to discuss exploratory testing. I expected a lot of resistance, but to my great surprise, they didn't see any problem in exploratory testing as long as we were in control. Session-based test management (SBTM) will give us enough control, they expect.
The resistance to change in our organisation lies within the testers themselves. They are used to structuring their testing with processes and test cases upfront. Exploratory testing is perceived as less structured and is different from what they are used to.
Markus Gärtner: I face similar struggles with testers in Germany as well. How do you help testers overcome their resistance then?
Huib Schoots: Most important is to focus on "what is in it for them". I always try to find the added value for an individual – a reason why they should change – and that is usually different for every tester I work with. Let me give you an example. We are implementing an agile way of working and we use OpenUp. I see a lot of resistance against implementing this method. I hear things like: "We tried this 3 years ago and it didn't work" or "It's OpenUp now, but it will be different again in 5 years, so why change?". If we focus more on solving current problems, improving and adding value by implementing some practices, resistance decreases. That is why I think we need more coaching in our organisation. Coaches have the time and the skills to work with people on an individual basis and help them improve. If you want to change a whole organization, you can only do it one person at a time.
Markus Gärtner: Your talk will be on tester and programmer collaboration, and what each can learn from the other. Without spoiling it for possible participants, what is your experience with programmer and tester collaboration?
Huib Schoots: My experience is that they do not really collaborate. Of course they work together and discuss things like planning, bugs and requirements. But what I would like to see is testers helping developers with unit testing. Testers are able to improve unit testing just by showing interest and asking simple questions like: what are you trying to test here, and why? Spending time pairing with the developer to go through the unit tests gives insight and better understanding. My experience is that developers are very capable and willing to extend their unit tests when you ask them.
Developers can help testers speed up their testing. Developers are able to build "tools" or scripts in just a couple of hours which can save hours of work sorting out reports and log files. Testers should focus more on testability, and developers are there to make our lives easier. They are more than willing to do that, but we need to help them by telling them what we need.
I have had a nice talk with some developers, explaining what I could do for them and how I need their help in my work. At first they were very skeptical and showed a lot of resistance. In the end they were more than willing to collaborate. You just need to take away the perception that testers do not care about what developers do and are only there to show that they are doing a lousy job by focusing on bugs.
Markus Gärtner: Programmers and testers working together, this sounds dangerous to some folks. What do you say to them to overcome their fears?
Huib Schoots: Programmers are great to work with. They can do amazing things for testers, like writing scripts, adding logging or creating special tooling that can speed up testing and make my job easier and faster. The trick is to ask the right questions, show interest in their work and collaborate. Let me generalize a little here. Programmers think they can do a decent job testing. They also think testers can't code. So a lot of programmers I have met need to be shown what a tester can do for them. But also the other way around!
I was in a programmers' meeting a couple of months ago to discuss testing within OpenUp. The first thing that happened was that one of the programmers told me he could test and testers couldn't code. So I challenged his remark by asking some questions to find out what he was trying to say. It turned out that in his project the tester couldn't keep up with the 3 programmers, so the programmers had to do a lot of testing. I smiled at him and asked him what he did to help the tester do a better (or in this case a faster) job. I don't know in detail what happened after that, but they started to collaborate more and the programmers started to help the testers. When I spoke to the programmer a couple of weeks later he was very happy, because collaborating with the tester had sped up his programming. Examples like this help to overcome these fears. What also helps is to show your skills as a tester and teach your team mates to do better testing.
Markus Gärtner: What do you hope to learn at Let's Test?
Huib Schoots: My main reason to go there is to meet other context-driven testers. I also hope to gain new insights into testing, find arguments or ideas for introducing new stuff, and see the latest developments and opinions. I like to immerse myself in a place where all people do is talk about testing. Conferring with my peers will refresh and sharpen my ideas and mindset.
By enlarging my network I know I will be able to learn from them long after the conference is over. It helps me to read their blogs and articles: if you know somebody and have had discussions with them, you hear them talk when you read their stuff (in a manner of speaking, of course). It also makes it easier to contact people directly if you have questions or need help.
Markus Gärtner: Speaking about the future of testing, imagine time skipped forward over night, and we're now in 2030. What has changed? What's still the same?
Huib Schoots: Good question! We are writing a book on the future of software testing with a group of TestNet colleagues in the Netherlands, but we only look five years into the future. It is very hard to predict the future, and I think there is not one future for all. Technologies like tablets and smartphones are on the rise, and apps are getting everywhere. The cloud will have a great influence in areas like security, performance and privacy. The social changes in organisations and people working from home might also have an impact on our craft; one thing I think of is crowd-sourced testing to cope with all the different devices.
We will do more and more projects in an agile way of working. In agile projects everyone is responsible for the quality of the delivered products. A tester is not alone in this; after all, it's the whole team that tests. Testers will take on a coaching role and coach their teammates. They will teach and distribute their expertise across the team.
To cope with the ever-growing complexity and ever-changing world around us, the requirements on a tester are getting higher. It requires a higher level of the skills that make a good tester: critical thinking and questioning, but also mastering testing skills, for example applying testing techniques or doing (risk) analysis. In the past testers could get away with not having good testing skills. Because of the mainly confirmative way of testing, coverage of requirements was what mattered: if the report was fine and all requirements were covered, testing was okay. Finding the major bugs in a short time, however, is a different story. In the preparation for test execution, writing the test cases, the tester formerly had plenty of time and the opportunity to hide. In the future, expectations will go up: with limited preparation he must show his testing skills, applying testing techniques while he tests (exploratorily). In projects, testers collaborate increasingly closely with their team mates. The tester will quickly fail and not be taken seriously if he can't show his skills.
Markus Gärtner: Thanks for your answers and your time. Looking forward to meeting you in May in Sweden.









March 11, 2012
Let's Test prequel with Jean-Paul Varwijk
As a prequel to the Let's Test conference in May, I interviewed some of the European context-driven testers. Today we have Jean-Paul Varwijk, who recently became a black belt in the Miagi-Do school.
Markus Gärtner: Hi Jean-Paul, could you please introduce yourself? Who are you? What do you do for work?
Jean-Paul Varwijk: I live in Houten, which is near Utrecht, in the Netherlands. I am married and father of two boys.
I started testing as a business acceptance tester at Rabobank Nederland back in 1998. It was the time of the Euro introduction and the Millennium bug. As time passed I found myself shifting from my regular work to testing more and more. Eventually the opportunity arose to become a full-time software tester at the Group ICT department in 2004. Including a switch to Rabobank International in 2007, I have since been testing a myriad of internal and external banking software products.
The last couple of years my work has changed from strictly software testing and coordinating to more management oriented tasks. I participate in a number of interdepartmental workgroups and meetings. I am now also engaged in thinking about testing and in writing test policy, test methodology and defining testing skills and needs. The most rewarding change however is that testers and non-testers regularly ask me for my opinion, help or coaching with regard to testing.
Markus Gärtner: On coaching testers, which testing essentials do you see testers lacking most? How do you help them learn these?
Jean-Paul Varwijk: I see testers missing two things the most. First, a lot of testers do not exercise critical thinking in their approach to testing. For some this could be due to the fact that they see testing as just another job, and as long as there is money in the bank it's a nine-to-five thing. But what I think is a possibly more worrying reason is the fact that they were taught not to be critical. Much initial test training is based on the premise that software testing is something you can learn in a couple of weeks, get a certificate, and presto, you are an excellent tester if only you follow the recipe. A lot of managers buy into this and judge testing as matching some standard execution based on the requirements. Many a tester sees that this is how it is supposed to go and gets away with the appearance of having done it. The second thing I see testers missing the most is practical skills: stuff like getting to know the product and the context, and then applying appropriate test techniques. With the first I try to help them by setting challenges, making critical comments and being available as an oracle. For the second part I think they need to practice and take their trade seriously. Here I sometimes give advice or review their work by asking them to explain their choices. Since I am involved in projects myself, however, I do not do both as often or as widely as I would like to.
Markus Gärtner: How have you crossed the path of context-driven testing?
Jean-Paul Varwijk: When I started at Rabobank Group ICT in 2004 they had a TMap-oriented test methodology in place called START (Structured Testing At Rabobank based on TMap). This methodology, at the time, was heavily process-oriented with a lot of mandatory deliverables and templates. This always made me a bit rebellious, since I did not see the point of many of these artefacts and ceremonies within the context of my projects, other than that they were mandatory. After my change to Rabobank International I was able to better adjust my work and the content of documents to suit the stakeholders' needs. Until, I think, 2010 I had no idea that there was a school of thought called context-driven testing that held similar ideas about testing. I happened to come across the term at EuroSTAR in Copenhagen. Once I had googled it and started reading blogs, I wholeheartedly embraced the idea.
Markus Gärtner: How do you apply context-driven testing at your workplace?
Jean-Paul Varwijk: First let me point out that I am in the lucky position that there are more context-driven testers within my company. I have a great compatriot in fellow DEWT member Huib Schoots. As we are together in a lateral test managers group, we are able to slowly but steadily change the mindset of other managers and testers towards a context-driven testing approach.
This has expressed itself in less focus on templates and procedures, and more focus on the content of test plans, test reports and test activities to suit the project's and stakeholders' needs. This concept is now being incorporated into the test policy and methodology. The organization has a higher focus on teaching the right skills. Testers have an educational budget and the possibility to visit conferences like the Dutch Testing Conference, EuroSTAR or the Agile Testing Days. Last year we arranged for Michael Bolton to come over for a week and give his Rapid Software Testing class to all our internal and external testers. We organize two test events per year.
Personally I have started to organize quarterly intervision sessions (on testing) for which we invite expert speakers and discuss with them. Last but not least, I am involved in hiring a large part of the new testers in our business line. My selection criteria include the tester's mindset, her adaptability and her interest in testing outside of work, and I have introduced a small challenge during the intake to see how they approach testing.
Markus Gärtner: How do you spot a context-driven tester based on their CV? Do you think we need some sort of certification program? Why?
Jean-Paul Varwijk: I have no idea. Seriously, most CVs I see are written to please the HR department, or else they would not pass the first selection. What I look for, though, are things like being a member of TestNet, visiting conferences, reading books, and other things indicating that you are interested in testing as such. If people have BBST on their CV, that would really catch my attention. But I do not think we need specific certification. A certificate to me is just a piece of paper. What I want to hear is why you did a course, what you learned, and how this has helped you. Unfortunately this is something I hardly ever see on an application.
Markus Gärtner: Would you share some of the challenges you give to testers in order to learn about the skills they will need? An example would be helpful right now.
Jean-Paul Varwijk: One of the challenges I have used in the last year while interviewing new applicants is something I copied from a James M. Bach lecture. It is a diagram portraying a box with the word Input in it. From there a line goes to a diamond with the text X>3, with two lines out of it: one towards another box with the text Print, whose exit joins the second exit of the diamond at an end point. I show them the diagram and ask: "How would you test this? And please think aloud." The point of the challenge is that there is no right or wrong solution; I want to listen to their reasoning. I think I did that challenge around 25 times last year, and sometimes I get a plain answer like "There are 2 test cases" and sometimes I get vivid discussions on what all the single components mean. So far I have gotten only one response out of 25 that showed real interest and critical thinking, and about 5 or 6 where they were going in the right direction.
Markus Gärtner: I assume here that you see context-driven testing as a working alternative to more traditional testing training programmes. What can we do to make the benefits of context-driven testing transparent to others? What are the benefits that you think others will most easily take as a starting point for change?
Jean-Paul Varwijk: That is a tough question to which I unfortunately do not have a ready-made answer. Sometimes I get the feeling that people have to flip a switch in their brains to see it. The difficulty is that context-driven testing is not easy: there is no recipe to follow, you need to be a critical thinker and have a feeling for your context. Traditional training programmes ignore these difficulties and use simplistic examples that seem obvious and are easy to follow. To teach context-driven testing I think it is good to step into practical problems and show people that there are different ways to approach a problem, and that these provide different results that are likely to be more informative. So I opt for a hands-on practical approach and interactive learning.
Markus Gärtner: Imagine a time machine was invented. How would you test it? Would your first test send something to the past, or rather to the future? What would you like to use it for?
Jean-Paul Varwijk: If a time machine was invented I would test it as follows. I would send it to myself in the near future, say two weeks from now. The time machine would contain an explanation of where it came from and what to do next: provide some proof that it was in the future, like getting a newspaper, and then send it to a date one week from now. I would then hope to find it one week from now, with proof of it having been one week further ahead as well. If I did not find it, I would look for evidence in the past to see if it had miscalculated and arrived at an earlier date, and meanwhile see if it turns up the next week. If on both occasions the time machine did not reappear, in the present or in history, I could only conclude that it had done something to disappear, but sadly I would have no information, at that point, on whether it travelled ahead or back in time.
Personally I would not use it. It is very tempting to go for some beneficial travel, to know for instance the outcome of a lottery, but I have made do without it and that has turned out pretty well, both with its good and bad parts.
Markus Gärtner: What should we do to spread the context-driven word out there even more? What is working for you to spread the word? And what isn't?
Jean-Paul Varwijk: I think that in the Netherlands, with DEWT, we are on the right track to spreading the word. We are voicing our opinions on blogs and Twitter, and by getting support from people like Michael Bolton we are being noticed. Other people in the testing scene are contacting us, and just recently we had a context-driven theme night at TestNet to which 150+ people came. For myself, I think that being enthusiastic about it and always being available to react to questions works. What I see is not working is ISTQB and TMap bashing. Even if their ideas do not match the context-driven idea, this is no reason to be negative about them. I do not believe that people change because somebody shouts that what they are doing now is bad. People change because they see an alternative and realize that what they are doing is the lesser option.
Markus Gärtner: What do you expect to happen after Let's Test? What's the best thing, and what's the worst thing for you?
Jean-Paul Varwijk: The best thing to happen after Let's Test, I think, will be that it provides a good foundation to build context-driven testing on in Europe. I hope it will prove to be a stepping stone for testers to connect to context-driven testing and context-driven testers.
The worst thing for me will be that, as far as I can tell at the moment, I will not have been there….
Markus Gärtner: Thanks Jean-Paul for your time. Looking forward to meeting you in Stockholm for Let's Test.









March 4, 2012
Let's Test prequel with Henrik Andersson
As a prequel to the Let's Test conference in May, I interviewed some of the European context-driven testers. Today we have Henrik Andersson from House of Test, who also co-organizes the conference.
Markus Gärtner: Hi Henrik, could you please introduce yourself? Who are you? What do you do for work?
Henrik Andersson: I'm Henrik, one of the many great testers we have here in Sweden. I have been involved in testing since the late nineties and, over these years, in a wide variety of businesses. I'm a student of Jerry Weinberg's work, have taken his Problem Solving Leadership (PSL) training, and am a returning participant at the Amplifying Your Effectiveness (AYE) conference.
In 2008 I co-founded House of Test, a consultancy and outsourcing company based in Sweden, Denmark and Shanghai, China. We are a company driven by the context-driven testing principles. Today House of Test consists of ten sharp testers, and I'm acting as the CEO but still do lots of testing in parallel. I mostly provide coaching in Exploratory Testing, Session-Based Test Management (SBTM) and Agile transitioning.
Besides this I'm one of the guys setting up Europe's first context-driven test conference, Let's Test, which will take place in Stockholm in May. This will be a really awesome event, gathering the European context-driven testing community to get to know each other better and to learn from our peers.
Markus Gärtner: How have you crossed the path of context-driven testing?
Henrik Andersson: This was a while back. It started when I read something about ET at the beginning of 2000, but back then I did not know better, so I just dismissed it. Somewhere around 2006/2007 I invited James Bach to Sweden to do his Rapid Software Testing course, and this was the first time I met James. I had the opportunity to spend some time with him, and I think we had some really nice and colorful discussions. During our time together I found that we shared lots of views on testing. James invited me over to CAST in 2007 and also to participate in the WHET3 peer workshop in Seattle. It was after this week of meeting all these great people and sharing experiences that I got involved in the context-driven testing community: I had found my playground. Since then I have been an active member of this community, where I get challenged and can challenge others, which for me is also a great way to learn. Today I have many close friends in this community whom I highly value, and we always have lots of fun when we meet. Throughout these years James has been a great mentor to me.
Markus Gärtner: How do you apply context-driven testing at your workplace?
Henrik Andersson: This is my foundation and my preferred way of working. As a consultant it is important to be able to quickly understand my client's context and problems, and not to push a simple "turnkey best practice" solution on them. I use the context-driven principles to find great ways to contribute value to my clients. In practice it helps me go in with an open mind and not be afraid to ask questions. It brings value to the ones I work with, since I seek the root cause of a problem and find a suitable solution, not just grab the oversimplified, easy-to-like one. I have previously worked at a company that pushed their "best practice" on every client they met. I have seen, and felt very ashamed of, the damage that does, and I'm done with that crap. Context-driven testing to me is not only about the seven principles; it also includes honesty, integrity and openness towards the work I do – to be able to stand tall when the boat starts to rock.
Markus Gärtner: I could draw the conclusion that you consider context-driven testing to be a "best practice" in itself. I am quite sure that you disagree with me, but where do you see the difference?
Henrik Andersson: As you expected, I strongly disagree with that statement. To me a best practice is context-independent: it is the one best way to perform something under any circumstances, and it will always work.
Context-driven testing is not a specific procedure or technique. It is an approach that consists of a set of principles, and there are a million different ways to apply them depending on your context.
In the first two principles we recognize that there are good and appropriate practices in specific contexts. In contrast to a best practice that claims to be the best way to do testing, we seek to do the best testing possible under the conditions we have. To me that is a huge difference.
Markus Gärtner: Honesty, integrity and openness towards the work you do – how do you help testers (maybe even programmers?) to stand on their own? Can you give us a brief insight into your coaching work?
Henrik Andersson: Many testers have an "I just do what they told me" attitude. I do not believe this reflects especially flatteringly on them. I help testers, developers, managers and teams take responsibility for the work they do. Again, much of it is about awareness: I help them find their preferred ways of doing their work. I help people be proud of the work they do and strive towards mastery. When someone asks them why they did the work the way they did, they shall be aware of it and be able to tell a good story of why this is an appropriate way of doing it. My coaching focuses on the people in the team and the collaboration between them. It is important for an effective team to become greater than the sum of its individuals. I work a lot by asking them questions to help them reflect on why they work the way they do, and whether there is another way they would rather work. I also point them to different resources that they can access for free to get inspiration and learning, so each of them can put themselves in the driver's seat of their own personal development and not be dependent on what the company decides to provide them. My goal is that the teams I work with will have fun doing great and valuable work.
Markus Gärtner: What inspired you to set up Let's Test in 2012? I heard some rumors about a EuroCAST, but that never made it. What drove you to organize this event?
Henrik Andersson: The idea of setting up a conference was born during a SWET peer conference about a year ago. I think it was Tobbe Ryber who brought it up late one evening, and many of us had had the same thoughts. Our drive is that most test conferences in Europe have the same format: if you have been to a few, you have been to them all. They also do not facilitate the conferring part of the word conference. We wanted to set up a conference that would thrill us to attend and that involves both speakers and attendees in the sharing of experiences.
Several initiatives and smaller groups of context-driven testers are forming in different places in Europe, and we want to set up a conference that helps build relations and a community in Europe. This is why Let's Test is set up like a camp where conference, lodging and meals are included in the conference fee. We will stay at the same place and be together for the whole three days. We will also have lots of fun arrangements during the evenings.
At CAST in Seattle last summer we announced that there was going to be a context-driven conference in Europe. This took off faster than we expected, and lots of people were excited. Soon the name EuroCAST started to spread; however, it was neither we nor AST (the Association for Software Testing) who started that. Let's Test is not arranged by AST (who arrange CAST), but we are supported and sponsored by AST, they for sure are our friends, and Let's Test supports the AST mission.
Markus Gärtner: There are more and more voices coming up about the bad shape of software testing in general. Ben Kelly for example has a talk on "The Testing Dead" in the programme. What can we do to help the "Testing Dead" become alive again from your perspective?
Henrik Andersson: I'm glad you mention Ben, he is putting up a good fight against the testing dead.
I think we need to work on two fronts here.
We need to strangle the demand for zombie testers. We can do this by getting more involved and helping HR and the lower-to-middle management that hire testers. We need to wake up the people who are in a position to hire testers. Way too few actually test the tester during an interview. Firstly, they need help to become aware of what skills they are looking for, and secondly, they need help to be better prepared on how to evaluate the skills of a tester.
The other front we need to work on is to inspire the one next to us instead of dying with him. Sadly, I still think we who are "alive" are the minority, and I have seen too many inspired testers come into a new environment and slowly be turned into testing dead by their surroundings. We who are alive need to find ways and connections to inspire each other so our lights don't go out. That way we will have the energy and drive to keep up the match against the testing dead. This is a very energy-draining activity, because we have to repeat ourselves so many times to so many people. And it is not only the never-ending repetition that is draining; it is also that we always have to start from the basics of testing. It is a bit sad that we are not past this yet.
One thing is to fight the big battle by making your voice heard by many people, but just as important is to do the everyday fighting: being an example and role model at your workplace by doing great testing, and helping and inspiring others to do the same.
But also be realistic: you can't wake all of the dead, so spend your energy wisely on those there is hope for.
Markus Gärtner: What are your prospects for the European testing community after Let's Test? What do you expect to happen? What do you hope for?
Henrik Andersson: My hope is that after the conference our context-driven community in Europe becomes a bit more united and that we get to know each other better on a personal level. I hope that we see more exchange between groups in different countries, and that Let's Test is only the first of many initiatives to get us all together and become tighter as a community.
Markus Gärtner: If you wrote a letter to yourself 15 years ago, what would you write in it? What would you tell yourself, if you could go back?
Henrik Andersson: Sorry mate, this will be a boring answer. I would not write much. I'm quite happy with the life I have lived and the decisions I made. I guess I would have given myself some recommendations on which stocks to buy and when to sell them. :)
Markus Gärtner: Consider that time travel is now possible. You can pick whether to go to the future and see what (testing) life will look like in 20, 100 or 1000 years, or to go back 20, 30, maybe 40 years in time and change anything you like. What would you do? Take a sneak peek, or change the world?
Henrik Andersson: Hehe, I now realize that I would be able to give a much better answer to this if I were a sci-fi lover and spent my time fantasizing about these things.
I think I would go to the future, but not more than 20 years – just before retirement – to see if the testing dead have won the game. If so, I would bail out of this profession right away.
Markus Gärtner: Oh, then I'm glad that we don't seem to get a sneak peek at the testing dead so soon, so maybe you will stay another 20 years with us. Thanks for your answers and your time. Looking forward to meeting you at Let's Test in May.









March 3, 2012
Lean Startup Testing
A while ago I started reading The Lean Startup – How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries. Some of my colleagues had already propagated some of the insights from the Lean Startup, and I heard about it for the first time back in November 2010 while attending a workshop with Kent Beck (see a write-up here).
I wasn't aware that I had already read about some of the book's content from a different perspective. Back in 2009 Michael Bolton and James Bach reported on testing an internet application which was, from their perspective, more than buggy – even continuous deployment didn't help much there. That company, IMVU, is mentioned throughout the book by Eric. So I was curious about the connection between Lean Startups and testing. Despite a chapter named "Test", I wasn't surprised that I had to make that connection myself.
Since Phil Kirkham asked for my perspective when he saw I was reading the book, I want to share the insights I got from it, and my vision for testing in a Lean Startup.
Build, Measure, Learn
The Lean Startup method centers on the Build, Measure, Learn loop. The main approach to building your product is based on the concept of validated learning, which also drives the measuring and learning parts. Once the first version is released, we can decide whether or not to change our strategy, based on the hypotheses we formulated before writing a single line of code. If the actual numbers don't match our projections, we have to find a new way to go with our technology; otherwise we may continue our approach.
Validated Learning
At the core of Ries' approach is validated learning. Instead of hoping that you hit the right market, he challenges the reader to formulate the right hypotheses: the growth hypothesis for your future market, and the value hypothesis. The value hypothesis puts numbers on what you expect to happen once you release the first version to the market. How much are customers going to pay? How much value can we generate with this product within, say, the first month? The growth hypothesis puts numbers on the growth expectations within our market. Are we going to get a thousand new sign-ups with our next set of features? Or are we going for a million new downloads?
Of course, writing down our hypotheses before we build a single line of code is just one part of the mix. We also need to come up with actionable metrics, so that we can track what actually happens and whether or not the market we picked first is the right one to grow in over the long term.
By actionable metrics, Ries means metrics that lead you directly to the actions you need to take. Instead of tracking the raw number of downloads of your product, or the number of accounts on your system, you should track percentages. Dividing your customer base into so-called cohorts helps you take action once you compare your expectations with the actual numbers from your tracking data.
At a client, I am currently involved in a startup product, a mobile phone application. We came up with actionable metrics for our customer base. The things we want to track are how many downloads the app gets, how many users install it and start it for the first time, how many people actually create an account in the service, and how many people connect to a third-party network with our application. This customer segmentation will tell us whether or not our growth hypothesis holds. If just 1% of the people sign up for the service, we are clearly on the wrong path.
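To make that concrete, here is a minimal sketch in Python of what such a cohort funnel report could look like. All event names and counts are made-up assumptions for illustration; they are not the actual numbers from the client's product.

# A hypothetical acquisition funnel for one cohort of users.
# Event names and counts are illustrative assumptions only.
funnel = [
    ("app downloads", 20000),
    ("installed and started once", 14000),
    ("created an account", 2600),
    ("connected a third-party network", 900),
]

def report(funnel):
    """Print each funnel step as a percentage of the previous step
    and of the top of the funnel -- the percentages, not the raw
    counts, are what makes the metric actionable."""
    top = funnel[0][1]
    previous = top
    for name, count in funnel:
        print(f"{name:32s} {count:6d}  "
              f"{100 * count / previous:5.1f}% of previous  "
              f"{100 * count / top:5.1f}% of downloads")
        previous = count

report(funnel)

With numbers like these, the 13% account-creation rate from installs would be the line that triggers a discussion, regardless of how impressive the raw download count looks.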
Pivot or persevere
Both hypotheses build the foundation for validated learning. By noting down our expectations before releasing or building anything, and by tracking the numbers after the release, we can decide whether or not we are on the right track. Once our expected numbers and our actual numbers match, we should raise the next question and keep our current strategy. If the numbers disagree with our previous expectation, we know that we are at a dead end and need to change direction. The first decision is called persevering, the second pivoting.
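As a toy illustration (my own sketch, not from the book), the decision boils down to a comparison like this, assuming a single written-down hypothesis and a made-up tolerance:

def pivot_or_persevere(expected, actual, tolerance=0.2):
    """Persevere if the measured number is within a relative
    tolerance of the expectation we wrote down up front,
    otherwise pivot. The 20% tolerance is an assumption."""
    if abs(actual - expected) <= tolerance * expected:
        return "persevere"
    return "pivot"

# Growth hypothesis: 1000 new sign-ups; we actually measured 180.
print(pivot_or_persevere(expected=1000, actual=180))  # -> pivot

In reality the decision involves judgment across several metrics, but writing the expectation down first is what turns the comparison into validated learning.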
For a pivot we want to preserve, to some degree, what we have built so far, but we also want to change our course. Ries describes several such changes in strategy for the product you are building. You might want to switch to a different customer base, or you might want to concentrate on one particular segment. You may want to change direction based on the feedback you have received.
Ries cites several products that initially focused on a different market segment, but made several such pivots in order to become really awesome. Once you have really found out what your customers need, you can follow that path to conquer your market.
Testers in a Lean Startup
Now, what would it mean to work as a tester in a Lean Startup? What would I pursue in order to serve the company?
First of all, I am convinced that a checking tester would not help the situation much. There are no test scripts that can help find out more about the market or about the business hypotheses. The same goes for fake-testing, but then I don't think that fake-testing serves any purpose at all.
A real tester would help to identify the underlying assumptions. These assumptions guide your validated learning. A good tester is trained to challenge assumptions, and he or she knows the right questions to ask in order to validate them. Based on these questions, you can drive forward the motivation for the guiding metrics, as well as build the growth and value hypotheses.
An example would be helpful right now. At the same client as before, we sat together in a meeting to discuss the course of our product backlog and started to challenge different assumptions. One thing we challenged was the need for a login function. So far we had followed the path of building a login and registration function for the service. Challenging the underlying assumption exposed a trade-off: without a login we might lose the necessary feedback from our customer cohorts, but with a mandatory login many customers would probably never sign up for the service at all. The function could thus leave us with biased metrics, from which we could draw the wrong conclusions, leading to a wrong decision to persevere or to the wrong pivots in the future.
Another point where testers can bring great value to Lean Startups lies in our testing abilities. We need to take into account the current target of our product, and be aware of the degree to which we need to test it. It may actually be OK to test less in a release of the minimum viable product, since we have several questions to answer before our next persevere-or-pivot decision point. We should also be aware of functions that do not yet exist, but are on the road map for future versions.
Conclusions
I think the rise of methods like Kanban and the Lean Startup shows once again that the time of fake-testing and checking testers will come to an end. I am convinced that more and more actual testers will spread the word about challenging assumptions, and ask the right questions about the future of our markets.
Still, we will have to be patient while testing in a Lean Startup. We need to know the degree of completeness of this version of our product, and what the right questions to ask are. I think we are on a good path here, though we probably have to convince the startups out there of the value that we can bring them. James Bach's and Michael Bolton's critique of IMVU went in the direction of a useless product. I think that proper exploration of early prototypes also helps you make the right decisions for the future. Startups need to be aware of the liabilities of the product in the long run. Test automation does not reveal much information about this, but exploration of the product can guide its future versions as well.

February 29, 2012
Lessons Learned from Context-driven testing
There was some fluff and rumor around context-driven testing yesterday. Some folks even talked about the death of context-driven testing. Most of it was triggered by the about page from Cem Kaner. If you haven't read it yet, go ahead, read it now; I will wait here.
Back? Alright. Now I would like to take a look at what context-driven testing means to me, and why I think the whole schools concept can help us shape something. These are the rough ideas I had around a proposal for CAST 2012 which was not accepted. It is based on combining the schools concept with complexity thinking and the CDE model. Oh, you don't know that one? I will introduce it.
Here is the abstract that I submitted:
Title: Significant Differences and Transforming Exchanges
In this workshop, participants will apply three concepts from complexity thinking to the schools of software testing model. The three concepts – containers, differences, and transformational exchanges – will be explained in the workshop. We will directly apply complexity thinking to the schools of testing, and discuss where we see the schools helping to shape different containers, what the significant differences between the schools are, how transformational exchanges between the different schools could happen, and maybe where they will even fail.
Armed with these tools, we will discuss how to evolve our craft of software testing, eventually extending the concept of the different schools of thought, and find platforms for transforming software testing for the 21st century.
On Containers, Differences, and Exchanges
In the human-systems dynamics model and the CDE model from complexity thinking there are three main concepts: containers, significant differences, and transformational exchanges. Self-organization emerges from these three. First of all, you need containers that shape the focus of the group or team. Departments are one form of container in an organization, a project could be another, and the community of the testing profession also shapes a container.
Within a container there are usually differences between the people in it. If the differences are significant enough, they provide a basis for self-organization. In most teams there is a significant difference between programmers and testers, and in the surrounding organization there may be a difference between teams that do Scrum and teams that do waterfall. There might also be insignificant differences: the project team on floor A2 might have much in common with the project team on floor B1 of your building. It's the significance that is – forgive me the joke – significant.
If we have a container with significant differences, then we also need exchanges to discuss these differences. If we can align these exchanges in a way that transforms our organization, we have successfully planted the seeds for self-organization. One such exchange could be the weekly project status meeting, or the monthly open space in a consulting company. (I am not referring to any life experiences I had – well, maybe I am.)
The trick is to keep these three pillars in balance in order to have self-organization kick in. At times you might want to re-shape the container to result in more significant differences, or you might want to introduce a new way of exchange in order to have the transformation kick in.
What are schools of testing?
Referring back to the schools of testing concept, I think it introduced to the worldwide container of software testers a model for thinking about the significant differences between different groups of testers. Now you have a concept that describes something about the different thoughts people have when they speak about testing. There were the early Agilists who claimed that unit testing would be enough. Then there were the testers who called for standards in testing. Yet another way to think about testing was perhaps the factory school, where testing can be outsourced to a large testing hub somewhere in a timezone that is hostile for you. (Maybe I'm exaggerating.)
From the inception of the book Lessons Learned in Software Testing (this book is dangerous, as one reviewer said, so I won't link to it), in which the basis for context-driven testing was founded, until today, we might have expected some self-organization to kick in. Not necessarily so. Eleven years of context-driven thinking and the schools concept have not helped us find the transformational exchanges needed for self-organization to kick in.
I could ask why, but I refuse to do that. Maybe there hasn't been much innovation in the testing sphere because of this; maybe there has been more because of it. Either way, it wouldn't change much about the situation we are now in. But what is the situation?
Fight the testing zombies
Take a closer look at the testing sphere. Some have claimed there are undead testing zombies out there who have not yet realized that they are dead, but keep on following different cargo cults. I think that in order to bring the testing sphere back to a more vital life, we have to find ways to organize transformational exchanges. Peer workshops provide one way to have such exchanges; conferences provide another, though I have not found them that transformational – rather evangelizing in one way or the other. The real exchanges at conferences are mostly missing.
As I read Cem Kaner's words on the about page of context-driven-testing.com, I read them as saying that we have failed to organize such transformational exchanges to bring the testing space forward. We have failed to discuss our differences, and to find self-organized ways to test software and to innovate. Maybe that is also why quality is dead.
I don't think I can change this on my own. But if you ask me whether I would try to find transformational exchanges for testers: count me in. I think that we can change the world of testing and reanimate the undead corpses out there. I'm eager to find out how to do that. Whether or not this means using the context-driven principles as a basis is something we will find out while doing so.

February 26, 2012
Let's Test prequel with Zeger van Hese
As a prequel to the Let's Test conference in May, I interviewed some of the European context-driven testers. Today we have Zeger van Hese who is – besides many other things – the program chair for this year's EuroSTAR conference in Amsterdam.
Markus Gärtner: Hi Zeger, could you please introduce yourself? Who are you? What do you do for work?
Zeger van Hese: Well, I'm a Belgian tester who rolled into IT after a short stint in the movie distribution business. I work for CTG, an IT service provider and Belgian market leader in testing. I started out as a developer, but was once asked to help out on a testing assignment. I said yes, and I've been testing ever since. For the last few years I've been a test manager at client sites, mostly in agile development environments. This year I'm also quite busy with my programme chair tasks for EuroSTAR.
Markus Gärtner: The theme you picked for the EuroSTAR conference – Innovate & Renovate: Evolving Testing – has a touch of solution-focused coaching in it. What influenced you to pick this theme?
Zeger van Hese: From my first moments as a programme chair I knew I wanted a theme related to learning and creativity in testing. But at the same time it had to reflect the need to evolve as well. Innovation and renovation embody creativity and learning; it's about looking forward and looking back.
Markus Gärtner: How did you cross paths with context-driven testing?
Zeger van Hese: I realized early on that what I liked in testing was not necessarily what other people liked in it. I grew fond of exploratory testing because that seemed to fit my natural testing style. But it wasn't until 2007, when I participated in a Rapid Testing tutorial by Michael Bolton, that all pieces fell into place. That was an important moment for me; I hadn't heard of the context-driven school before, but what I learned about its principles fit me like a glove. Later on, I discovered that apart from the CDT-paradigm, there is also a vibrant community, a group of people passionate about testing and advancing the craft. They are all about sharing, coaching, learning, challenging each other. My kind of thing.
Markus Gärtner: Speaking about advancing the craft of software testing, which major contributions to the craft do you see in Europe?
Zeger van Hese: I'm seeing peer workshops emerging and doing interesting stuff: the UK one (LEWT) is a pioneer in Europe; in Sweden there's SWET, in Denmark DWET, in Germany GATE, and in The Netherlands there is DEWT, which I'm involved in. These meet-ups really invigorate me and spawn some great discussions and insights. What helps us, of course, is that people like Michael Bolton and James Bach are really supportive and help where they can. I'm seeing Skype coaching taking off, and testers working on their skills through the European Weekend Testing chapters (although things have been a bit slow there lately). A lot of people from the community are avid bloggers and put out some really innovative stuff. Conferences are getting more practical and hands-on as well, with test labs, dojos, roundtables, and rebel or other alliances.
Markus Gärtner: Where do you see the biggest struggles for the context-driven testing community in Europe? How can we overcome them?
Zeger van Hese: I feel that the context-driven principles are still relatively unknown outside the community, so I could argue that there is still some evangelizing and awareness-creation to be done. On the other hand, I think the principles and the set of ethics that comes with them are pretty personal – either they resonate with someone or they don't, so we should be careful not to be too pushy. Another challenge I see is how we can educate recruiters away from using certifications as their main means of filtering CVs, which I think causes them to miss valuable and experienced candidates. This means influencing and changing the selection process and coming up with good alternatives, a daunting task.
Markus Gärtner: How do you apply context-driven testing at your workplace?
Zeger van Hese: I apply the principles in the sense that I am a strong advocate for testing as an intellectual process wherever I go, and I try to constantly be aware of the context I find myself in, and let that drive my actions. That can be quite challenging, e.g. in environments where there are a lot of templates to be used and processes to be followed, but even there you can try your best to appropriately inform those who are "driving your context" that there are other options that may be of more value to the project.
Markus Gärtner: What will be your biggest take-away from Let's Test?
Zeger van Hese: My biggest take-aways? Or my audience's biggest take-aways? Mine will hopefully be meeting new people, re-connecting with old ones, and getting loads of new insights from people who are way brighter than me. My audience's take-aways would be 1) that thoughtfully looking at a piece of software (testing, anyone?) has lots in common with thoughtfully looking at art, 2) that testers can learn from the tools art critics use to become software critics, and 3) that artists can inspire us to come up with fresh ways of looking at the world.
Markus Gärtner: Besides programme chairing for EuroSTAR 2012, which plans do you have for testing after Let's Test?
Zeger van Hese: It promises to be a pretty busy year all throughout. In March I'll be presenting at the Belgium Testing Days, in April at STAREast, and in May at Let's Test. In between those gigs, there is indeed a EuroSTAR programme to be assembled, which will take some time as well. And all that jazz has to be combined with my regular day job, of course, which makes for long days. I haven't planned too many things for the second half of the year yet. Of course there will still be some room for the occasional peer workshop or meet-up or something unplanned – it's always good to leave some room for serendipity in the schedule.
Markus Gärtner: In the past you have worked a lot on the connection between art and testing. Unsurprisingly, your talk at Let's Test will also be on this topic. Without spoiling too much for participants: why is software testing an art, and not – say – an engineering discipline?
Zeger van Hese: As you know, I like metaphors and analogies and test side stories. I think they are a great way to link the new to the old, they have the potential of generating innovative ideas as well. Art and testing – and more specifically the ways in which testers can benefit and learn from art – are keeping me quite busy and intrigued. Now, I am not going as far as stating that software testing is an art form – like the tenth art or so – but it is a craft in its own right, and I do think it is creative work more than it is engineering work.
Markus Gärtner: Imagine that time travel is now possible. You travel to the year 2030. How has the world of software testing changed?
Zeger van Hese: That's a hard one – my crystal ball has been letting me down lately. It would be nice if there weren't any different testing schools anymore: everyone talking the same testing language, honoring the same values, and sharing a vision of how to perform good testing. I'm being overly optimistic here, I guess.
Software development-wise, I think the development process will move further away from the classical approaches and towards a more collaborative way of developing. Of course, that trend has been going on for a while already and will continue in the future, I think. Even in 2030, sapient manual testing will still be a necessity; I'm not a firm believer that automation is the answer to everything. But hopefully, by that time, our manual testing toolkit will include some very nifty tools that broaden our reach and effectiveness as manual testers.
Markus Gärtner: Thanks for your time. I look forward to meeting you at Let's Test.
