Markus Gärtner's Blog, page 12

February 12, 2014

Don’t try to run before you walk

I have to admit: I am German. We Germans are well known around the world for delivering good results. That said, we are very dedicated to delivering efficient results. There is one thing that troubles me about that. Having our main business in Germany, I often face questions about efficiency while introducing agile methodologies. You can efficiently drive your car right into a wall. That would be very efficient, but unless you are trying to kill yourself, not very effective. The English saying ‘don’t try to run before you walk’ expresses this elegantly, from my point of view. Let’s explore that thought.



Effectiveness is about doing the right thing. While you can jump on any train to get somewhere, it is usually more helpful to get on the train that is heading in the direction of your destination.


Where do we find ineffectiveness in our workplaces?


At a client a few years ago, highly political games were at play. In order to drive home a decision in a meeting, you needed to schedule several 1-on-1 meetings with the attendees beforehand to prepare the decision-making process. If you assumed that you could simply get the relevant people into one room to decide something, you could find out that your approach was not very effective, with no decision at all after the meeting.


I saw several other coaches in that company who were aware of this. They used 1-on-1s to prepare the meetings and the underlying decisions. That approach did not feel very efficient to us coaches, though it was way more effective.


The same holds for software development. Some practices like test-driven development or Exploratory Testing seem to be inefficient. I think this has to do with our engineering tendency to do one thing well. Unfortunately, we humans are not like machines. Our brains are highly associative. That means that the direct line between two points does not necessarily work best for us. Instead, we sometimes reach our goals by wandering around, getting lost, and finding our way back on track.


For TDD this means that the effort of writing a failing unit test first might look inefficient. And it is surely less efficient than simply writing down the production code alone. However, in the long run, when the complexity of the growing function, class, and application overwhelms our minds, we are able to rely on this double bookkeeping in our code. We are more flexible, and we can rely on the safety net in our code base by executing the tests more often (if they tell us what we need to know). So, in the long run we are way more efficient if we stick with less efficiency in the beginning.
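To make that double bookkeeping a bit more concrete, here is a minimal sketch of the TDD rhythm in Python. The leap-year rule and the is_leap_year function are hypothetical stand-ins I made up for illustration, not taken from any particular project:

import unittest

# Step 1 (red): write a failing test that states the expected behaviour
# before any production code exists.
class LeapYearTest(unittest.TestCase):
    def test_year_divisible_by_four_is_a_leap_year(self):
        self.assertTrue(is_leap_year(2016))

    def test_full_century_is_not_a_leap_year(self):
        self.assertFalse(is_leap_year(1900))

    def test_every_fourth_century_is_a_leap_year(self):
        self.assertTrue(is_leap_year(2000))

# Step 2 (green): write just enough production code to make the tests pass.
def is_leap_year(year):
    if year % 400 == 0:
        return True
    if year % 100 == 0:
        return False
    return year % 4 == 0

if __name__ == "__main__":
    unittest.main()

For a rule this small the tests feel like overhead. Once the code grows, though, this second set of books is what lets you change the production code and know within seconds whether you broke an earlier decision.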


Counter-intuitive, isn’t it?


Now, consider the overhead of meetings that you introduce when introducing Scrum. There is a planning meeting every two weeks. There is a daily meeting to coordinate the day-to-day work. And there are review and retrospective meetings every two weeks as well. With all these meetings, when do we get any work done?


It’s the same as with TDD here. The problem with software development is that we have a hard time seeing queues in our systems. Unlike in a physical engineering project, all the decisions we make are put into a seemingly endless storage system – the codebases on our computers. That’s why it’s hard to see the queues with large batches piling up.


Unfortunately, large batches have the tendency to lead to sub-optimization. With sub-optimization we make more assumptions. With more assumptions we end up with more “wrong guesses”. More wrong guesses mean that we will have to do more rework later. So, in the short term we see better efficiency. But since we cannot see the number of assumptions that flowed into that efficiency, we can’t see how wrong we were.


It’s always easier to get the software right the first time. Unfortunately, most systems nowadays need to be coordinated among several people. That also means that we need to talk to other people more often. Without that talking, we would be more likely to pile more wrong assumptions on top of our current wrong assumptions. If you take a closer look at Winston Royce’s paper on waterfall development, you’ll notice that he states that this model can probably only work in short cycles – short meaning two weeks.


Royce understood that creating software requires frequent coordination among several people. Scrum addresses this misconception in the waterfall model by introducing more meetings to the workplace. So workers who are familiar with traditional-style development will find that introduction less efficient. In the long run, though, those meetings are way more effective, since they lead to less rework – and therefore to more efficiency.


I think this is why we should focus more on outcomes than on outputs. The outcomes in the long run can tell us whether we are doing an efficient job. Don’t just focus on the quick wins, as they will lead to sub-optimization. Instead, watch out for the results in the long run, and re-evaluate your underlying models. Eventually you will find out that some things need to appear less efficient in order to be effective – and therefore more efficient in the long run.



February 11, 2014

What running taught me about Exploratory Testing

Before I joined it-agile in 2010, I was exercising up to six times per week. When I joined it-agile, I knew I would be traveling more. It took me a bit more than four months to notice that I lacked exercise. So, in January 2011 I started to go running, as it seemed to be the only exercise compatible with a travel-heavy job. Last year, I completed a 31.1 km run close to my hometown. While learning to run, I noticed lots of parallels to Exploratory Testing. Here are the things that stuck.



You can go faster on familiar terrain

This is a lesson I had to learn early on. I remember my first exercises close to my regular stay in Hamburg, Germany in early 2011. There was a park close by. I often ran through that park. I had a sort of stable route of around 5-8 km. Of course, over time I became better and better at running. That was when the old routes stopped working for me.


I tried some variations in the beginning. The most conservative choice was to run the same route twice. That was safe, as I already knew how to get back. But it was also boring.


That’s when I noticed that I needed some exploration. But exploratory running routes were harder for me. Why? First of all, I didn’t know the terrain. That also meant that I needed to concentrate more, and look for landmarks to remember so that I could find the route back “home”. Besides the physical exhaustion, that also exercised my brain cells.


Don’t get me wrong. I think that navigation systems were invented for people like me. For example, we visited the same swimming competition for ten years straight. It was always the same driving route. All those years I failed to find that route again, even though one year I drove it three times.


The same applies when I try to find new running routes. I am anxious that I may get lost.


When I ran a new route for the second or even third time, I could always go faster than the first time. I felt more familiar with the terrain, and didn’t need to be as anxious about getting lost.


When you are exploring, you are working “slower”, because you should be paying more attention to what’s happening. That also holds for new terrain in your applications. Because you are looking for all the stuff that could be wrong, you are slower, but you are also able to catch more bugs. This is one of the main differences from more scripted testing, where you focus on one thing to check.


When running, exploring takes more effort from your brain, and is usually slower. That does not help the training effect very much. When you are exploring while testing, the slower progress is intentional – it is what produces the outcome.


You notice little details each time you run the same route again

When running, I noticed that over time I became more and more familiar with a particular route. That was also when I could start to enjoy the environment more and more. I was able to notice the flowers on the side of the track. I was able to notice different people. I was able to notice finer and finer details of the world.


That is also helpful when testing. Think about a session charter focused on exploring a particular bug fix. Since you already know the terrain around the bug very well, you are more open to noticing new details that you were not aware of when you visited that piece of the application for the first time.


That also holds true when you dive deeper into a particular bug in order to write a good bug report. When you try to reproduce the bug, you start to notice more and more nifty details along the way. In doing so, you refine the model of the application in your head.


Since your mind can let go of some of the overwhelming details that you encounter when you run the track for the first time, it starts to work with the environment on a coarser, more abstract level, and has some capacity free for dealing with all those nice little details. You can leverage that effect when testing, too.


Avoid running uphill too early in your course

Last year I participated in a longer run that went uphill over three major mountains in our area. That was quite exhausting. What was even more exhausting was that I hadn’t prepared well enough for them (since I didn’t know the full track) – resulting in me walking the final 4 km of the race.


When running, exhaustion happens when you go uphill too early in your route while still facing a long remaining track. Uphill alone is usually not a problem. You just need to take it at a pace that works for you in the long run.


That also means that you can train for running uphill, thereby improving your pace.


The same happens when you test exhaustively too early in a project. You will have less attention left for the more troubling bugs later in the project. The agile community often speaks about the concept of a sustainable pace. Be aware that it’s possible to burn yourself out too early in a project. And remind yourself from time to time to gently stress your current capabilities in order to improve at what you are doing.


Three lessons

These are the three main lessons I learned while running. When you pay more attention, you make slower progress. When you visit a place a second time, you notice different details. Stick to, and train for, a sustainable pace. These lessons also apply to software testing, and especially Exploratory Testing. Be aware of them, and you will do fine.



February 10, 2014

To project or not to project

Over the years, I have become more and more suspicious of the concept of a project. I have worked in several companies, at times working with products, at times working with projects. I have seen waterfall projects, I have seen waterfall products, and I have seen both agile projects and agile products. What strikes me most is the amount of ignorance most companies have regarding the effects that projects have in the longer run. Let’s look at some stories.



The missed Valentine’s

A few years back I was involved in a large migration project. We worked on replacing an IT system, and needed to transfer accounts from the old system to our new system. The project lasted a whole year, and in that period we managed to transfer about a tenth of the accounts.


I recall working 60-hour weeks in the final phase, which lasted roughly two months. I remember that all the decisions we had made early in the project needed to be revisited later. There was functionality that we developed early on that didn’t make sense, since we didn’t understand the requirements – no, not the requirements documents, but the requirements – to start with.


We finally managed to deliver the system on the birthday of the customer’s CEO, as a birthday gift, so to speak. We celebrated that date. It was back in late November, right before the Christmas period.


There was a follow-up project coming up in which we were supposed to transfer the remaining 10 million accounts into our system the following year.


It took us until June to seriously get started with that one.


Why?


In hindsight, I would say that we were pretty much burnt out after that project, which had kept a whole department of 40 people busy for a year. It held back further product development for a serious amount of time. There were some minor projects in between. We simply were not able to deliver them. I remember that it took us from late December until late January to analyze a project due on Valentine’s Day. We missed that delivery.


The split project member

Another project I remember kept one of my colleagues busy. He was involved in custom development for a customer. Overall, several colleagues were involved. The project delivered within two months to our branch office in Italy.


We didn’t hear anything from that project for another half year. Of course, life went on, projects went on. When our colleagues in Italy finally started to use the software, they found lots of problems that we needed to deal with.


Of course, the colleague who had initially been involved in that project was already working on another, more important project. Suddenly he found himself in the position of having to serve two projects: one that was important for the overall company, and another that was important for our branch office and was already suffering from cost of delay.


The sixth UAT

Another project was an internal one. A colleague and I were asked to join the project team. They had already been working on the solution since August. We joined in early February. There had already been five appointments for an internal user acceptance test (UAT). They all failed because the solution wasn’t working – at all.


We joined three weeks before the sixth UAT. We eventually made that date, and received additional budget to continue the project.


For the project, people from three different locations were put together. These people had no other project at the time, and needed to learn the product behind the solution. The project was set up to create a demonstration platform for potential future customers.


The bottom line

What do these three stories tell you about projects in general? From the first story, I learned that projects tend to dramatically sub-optimize. That is where working overtime suddenly seems ok. In German I like to refer to that as the “Schweinezyklus eines Projektes”, the cycle of a project. In the end, work gets crammed in and needs to be done before the deadline that was set months ago. After these acts of heroism, people need to cut back for a while. They need to recover from the stress, see their families, and so on. Nothing gets done after the deadline. At least nothing serious.


From the second story, I learned that people will be assigned to projects, and those assignments might strike back later. I learned that it will be hard to deal with long-term quality problems once the team members from the earlier project have moved on. Usually that leads to people being thrown into multiple projects at the same time, with allocations of 20% and 80% or something similarly rubbish. Why rubbish? Whenever I needed to solve a technical problem, I didn’t care about “is my 20% time slot for this project now used up?” If I really needed to solve that problem, I did it, and then moved on.


From the final story, I learned that projects make terrible decisions in the beginning, when everything is vague. These problems need to be dealt with later. It might feel great to be the hero who rescues the project – but it shouldn’t be necessary to be that hero in the first place. All three of these situations convinced me that projects make short-term decisions that will hurt you in the long run. These include partial allocations caused by earlier sub-optimizations.


Personally, I have never seen such things happen in product development, where a stable product team has to deal with today’s shortcomings in a direct manner. The folks involved in product organizations are fully accountable for the quality they create today, and they are totally aware of that. In most organizations that got rid of projects, I have seen people be more aware of bad decisions.


That’s why I usually don’t recommend projects to most of my clients. That said, I have also seen projects work. I didn’t try to understand the dynamics at play when they worked; it seemed to have a lot to do with the responsible project manager. On average, though, projects failed more often than product development. Your mileage may vary.



February 9, 2014

Getting external help

Over the course of the past week, I realized the importance of external help – and noticed some of my cultural biases against admitting that I might need it. Here is my story, in the hope that it might help others seeking help.



I think deeply rooted in our culture lies the notion that getting help from outside carries a double-edged message. On the good side, you can understand that message as a self-revealing one: you noticed that something is wrong with you, and you realized that you need help. That is why you reached out to someone who can actually help you. Good. On the bad side, you are sending the message that you need help, that something is just not “right” with you – whatever that means.


For a couple of years now, I have been working with a psychologist. As part of my job, I take in a lot over the course of a day. At times people get mad at me during training classes. At times people get mad at me during a coaching situation. At times I see a lot of people suffering from a missing work-life balance. Although I try to take nothing personally before 6pm, I sometimes fail at that – and I am aware that I need help to find options for altering my behavior.


A while back, I was visiting a client. We were in the process of setting up several Scrum teams there, and we were starting to kick the first one off. As part of that, we invited the developers from the team that was supposed to start the next day for a round of estimation. To make a long story short, we faced some resistance – up to the point where I was offended by being called a racist.


I knew that this offense had nothing to do with me personally, since the offender couldn’t have known me at all.


Now, if I had taken that personally, after 6pm, I would have told that guy that my school certificate from first grade said that I was good at making friends, especially foreign ones, and at helping them to integrate. I have had long-term friendships with several folks from various countries. And so on. If you know me even a tiny bit, you will have various stories to tell along similar lines.


The bottom line? Without personal coaching months before that situation, I might have reacted differently. I might have reacted personally. I might have made things worse.


Being a coach myself, I am aware of this.


A few months earlier, a colleague of mine reported that his first gig at a client was overshadowed by a strangely acting developer. Basically, that developer was fighting all outside coaching, everything new. He even left the workplace early without saying a word to my colleague.


When people act strangely, that usually triggers some awareness in me. Usually it has nothing to do with whatever I am doing at the client. Uncovering the background of such situations is crucial – and at times it is hard not to take them personally.


Later, I learned that that developer was undergoing some personal stress. If I recall correctly, his wife was dying.


Last year, I heard a story from Johanna Rothman. She said she had a 1-on-1 with someone who was working for her. She said he was acting strangely, and she wanted to help him act differently. In that 1-on-1 he told Johanna that his wife was dying.


He would never have told her that story without the privacy of a 1-on-1.


Johanna was able to fix the situation, and get that guy some relief. Personally, I would have cried out after that meeting.


Last week, I had a discussion with someone from my family. He told me that he had never gotten over his father’s death. Only three years later, he was suffering from burnout because of it. He needed to see a psychologist for fifteen months.


That story struck me. What struck me more was the fact that he didn’t dare to tell that story at his workplace. Personally, I have been in the position of a team leader. It was my job to engage with my colleagues and find out what troubled them, besides leading them technically – two hard duties, at times each warranting a person of its own.


If one of my colleagues back then had told me that he needed some psychological support, I would have had a hard time – and I would have granted him whatever support I could give, maybe even with the support of my superiors.


What I told my family member was that he shouldn’t be ashamed to talk about that with his superior. He shouldn’t be ashamed to admit that something was influencing his working life. He should have said at his workplace that he needed support, and that that support would help him contribute by having fewer distractions.


I think we shouldn’t be ashamed of some of the psychological stress that we undergo. My father died about a month before I started my first job. That shaped me. Still, I know that I need professional support to deal with all the situations at work, at home, and maybe in my past. We shouldn’t be ashamed to talk about that at our workplace. We should be proud that we are able to admit it, and seek help where it’s due.


Don’t be ashamed.



February 6, 2014

What’s wrong with Software Development?

So far, I think I have undervalued the importance of some practices when it comes to working in a large-scale development shop with lots of teams. One of the major problems with software development in the large is that we as an industry of software developers are terrible. We have bad development practices in place, and it’s strikingly easy to hide your bad software development skills in larger corporations. I also think that the Craftsmanship movement could only come about because we have been educating software developers badly for decades.



The problem

Last year, I did a training class with a development company. They had three teams. I ran a training on TDD with JavaScript for three developers. I had never programmed anything in JavaScript, but I prepared well enough for the class to be able to help out with it. The most junior person on the team had been working with the company for half a year. We worked together on a code kata during the class. I noticed a pattern. That guy was basically using Google to find snippets to copy, paste, and try out. Half a year into a programming language, that guy was still using the same pattern that I had used a few days earlier to get familiar with the language.


He hadn’t advanced in the language for half a year.


At another client, a colleague and I ran a training class on TDD and helped with some technical coaching for the transition. We paired with programmers over the course of the day. One guy explained to me that he had just left university (half a year earlier), and asked me for a book on unit testing, mocking frameworks, and so on. At the time, I didn’t know of one concise book on all that, so I pointed him to a couple of books instead.


Even after attending university and working for half a year, that guy had not advanced with unit tests.


Do you see a pattern here? I think it goes back to the 90s and early 2000s, when people proclaimed that programming was the future and that we needed more programmers. People entering university were also told that you could earn a whole bunch of money if you joined the programming business. Skip forward a few years, and I see lots of programmers in the IT departments of this world for whom programming is merely a job, not a passion.


What Bob Sutton taught me in “The No Asshole Rule” is that these folks infect your company over time. Motivation drops because of all these Wallys out there. People also stop advancing, and put more energy into fighting with each other in order to justify their salaries. It’s bad, and I think it’s a problem we constructed years ago by demanding more programmers even though we could have been aware of Brooks’ Law – and its generalizations.


That said, notice also that the problem is not the problem; the problem is how we tried to cope with it years ago.


I strongly believe that this coping never worked, does not work now, and will never work in the future.


How to fix it

While reading through Larman and Vodde’s Practices for Scaling Lean and Agile, I noticed their strong belief that Continuous Integration and Acceptance Test-Driven Development help there. A notion that I had never really grasped – until now.


Picture two companies. Company A has component teams. Company B has feature teams. Company A needs the teams to coordinate among each other. They need architects and requirements analysts to split customer requests among the different teams. They also need integration testing at the end, when all the teams have delivered their piece of the mix, and they need managers to keep track of all that.


Company B delivers functionality from each of its teams. They don’t need architects and requirements analysts who break down requirements and architecture onto various components – they need these folks as part of their feature teams. They don’t need integration testing at the end, since everything that needs to be integrated can be dealt with within the team’s delivery cadence. They also don’t need to manage work between the teams, since the teams self-organize among each other.


I know, I probably drew a false dichotomy here. These might be ideal pictures, and I have seen some companies that are more like company B. Most companies I have seen, though, are more like company A.


But what enables folks in company B to deliver working software more quickly? Continuous Integration with a high degree of coverage for their unit tests, and a combination of Acceptance Test Driven Development with Exploratory Testing.


What?


Really. Instead of relying on a manager to coordinate these folks, with these practices in place they can coordinate themselves more easily. ATDD will help them understand the customer requirements better, thereby getting the functionality right the first time more often. They will also uncover hidden risks and assumptions with Exploratory Testing. They will also be able to work in the same code base, since the unit tests guide them once they introduce a problem. Oh, and the CI process makes them aware whenever they overlook something.
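To sketch what such an executable customer expectation might look like: teams often express these in tools like FitNesse or Cucumber, but the idea works in plain Python as well. The free-shipping rule and the shipping_cost function below are hypothetical examples I made up, not taken from the book or any client project.

import unittest

def shipping_cost(order_total):
    """Hypothetical business rule: orders of 50.00 or more ship for free;
    smaller orders pay a flat 4.95 fee."""
    return 0.0 if order_total >= 50.0 else 4.95

class FreeShippingAcceptanceTest(unittest.TestCase):
    def test_orders_of_fifty_or_more_ship_for_free(self):
        # Given a customer order worth exactly 50.00
        order_total = 50.00
        # When the shipping cost is calculated
        cost = shipping_cost(order_total)
        # Then the customer pays no shipping
        self.assertEqual(cost, 0.0)

    def test_smaller_orders_pay_the_flat_fee(self):
        # Given an order just below the threshold,
        # when the shipping cost is calculated,
        # then the customer pays the flat fee.
        self.assertEqual(shipping_cost(49.99), 4.95)

if __name__ == "__main__":
    unittest.main()

Run as part of the CI build, examples like this tell every team within minutes when a change contradicts a rule the customer already agreed to – coordination that would otherwise fall to managers and late integration testing.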


Yeah, right, so what?

Now comes the hard part: these practices are hard to learn. Maybe that is why universities skip them in their courses. My mentor during my diploma thesis urged me to use a version control system. I was pretty surprised when, years after my diploma thesis, I worked with a team that still didn’t have any SCM in place.


We as an industry are terrible at some practices. That is why we are suffering so much. We are suffering so much because in times of urgency and pressure it seems ok to skip all this hard stuff. But like a drug addiction, that will eventually hurt us dramatically and slow us down in the long run.


I think we can improve as a profession. I think we have to improve as a profession. I think we have to show that we can do better than we currently do. So, next time someone asks you to skip that particular practice – probably because they don’t understand it or don’t see the value in it – refuse. You will be better off in the long run – and you won’t hurt your profession. That would be the morally right thing to do.



The blueprint fallacy

When it comes to consulting, I notice three kinds of solutions out in the field: the blueprinters, the emergenters, and the ones that float between these two worlds. Dealing in blueprint solutions is a fallacy from my point of view. Let’s take a closer look at each of these three patterns I have seen, and see what’s going on.



The Blueprinters

Blueprint solutions provide a common set of practices. There are many blueprint solutions to various problems out there. Methodologies are one example. Best Practice collections in their various forms are another. I avoid naming any, hoping you will recognize one when you see it in the field on your own.


The bottom line is this: do what we recommend, and you will be fine. In my experience, companies buy blueprint solutions for a reason. That reason is safety. Safety that a set of practices will help them. Also, since the consultants who come with the blueprints are familiar with the solution, they know how to implement it in the company.


Here is what I consider wrong with blueprint solutions. The safety that companies buy with a blueprint solution is a false safety. It’s actually a case of shallow agreement when it comes to safety. In this context, safety really means that the person buying the solution does not want to take on any risk, especially not the risk of losing his or her position.


Second, if the implementation fails, what do you think will be the reason? Of course: “X will not work in our company.” “We can’t do X.” Years as a swimming trainer have taught me that “X doesn’t work” really means “I don’t know how X works in my context.” That is also true for various testing practices, development practices, and agile practices.


For the third point, here is a story loosely related to the same effect. In a city, traffic measurements were taken. On a particular road, there were many accidents caused by speeding. The mayor ordered a set of speed controls. Skip forward two years: another traffic measurement is taken, and the number of accidents on that road has gone up. But by now the city’s budget relies on the fines from the speed controls. So, guess what will happen? Will the speed controls be removed?


I think this is the worst thing about blueprints. Soon many contractors and consultants become more and more reliant on the solution, since they sell it. Their whole business model relies on that blueprint. Guess how open these folks will be to critique of their solution?


The blueprinters – or solution-problemers, as students of Jerry Weinberg call them – are the worst tribe that ever happened to our industry. They come in many forms, and I will not name any of them, to protect the guilty. Often these folks are narrow-minded, unable to unlearn things that don’t work, and have a hard time dealing with change. That’s bad, and I think it has been holding our industry back for decades. But the brain of an engineer seems to look for clear, binary solutions, and that’s why folks will continue to give in to that demand.


The Emergenters

How to solve this? Then there are the “it depends” and “what works at a web startup won’t work on the International Space Station” kind of folks. These folks are hard to deal with. They critique a lot of things when you hire them, but you will have a hard time getting a clear answer that addresses your safety concerns.


Most of the time, these folks come in, try one thing, then another, then maybe a third, and eventually you throw them out because they don’t implement anything that helps you. Or it doesn’t help enough. Or you don’t like the guy.


I think that’s a bit unfortunate – for two reasons. First, you didn’t get what you wanted for your money. Your problem is not solved, and you wasted a fair amount of money on it. That’s bad. Second, the consultant in question probably didn’t do a good job of establishing the psychological contract with you. That’s why you came to mistrust his solution, and why he didn’t negotiate the steps with you. Of course, these folks are usually a bit reluctant to learn from that experience, since they didn’t receive the feedback necessary to learn.


The emergenters come up with emergent solutions. They are at times less directive. Everything depends on something, and eventually nothing gets done. That’s tragic. Emergent practices need a starting point. If you never get to that starting point, you can’t learn, and you can’t emerge from it.


The Hybrids

Then there are the hybrids. These folks float between the two worlds. They take an idea from the blueprint world and try to apply it. Maybe they realize that the approach fails and that they need emergent practices.


In my experience these folks have high potential – for good and for bad. The bad version is to take only those habits from the blueprint that result in the least change in your organization. That is usually a recipe for failure. The reason other methodologies produce better results usually comes from challenging underlying assumptions. If you cherry-pick only the pieces and practices from these methodologies that are in line with your current approach, you are not getting the benefits you potentially could.


On the other hand, innovation comes from combining two great ideas. If you can combine things from the blueprint world with things from the emergent world, and make them succeed even more than your current approach, you are way better off. This is the good potential in the hybrids. This approach needs some risk-taking to find a good mix.


What’s worse?

To get anywhere in your organization, I think you need three different consulting styles. You need the clear direction of the blueprinters to get anything done. You also need the learning cycle of the emergenters to see what’s working for you and what is not. Finally, you need hybrids to cherry-pick things and to challenge the assumptions of the blueprinters.


I have seen adoptions with clear direction that were a failure while the consultant was in the company. Usually these improved dramatically once the narrow-mindedness left and minds opened up to emergent practices.


That doesn’t mean that I would recommend blueprinters.


If you can’t hire three consultants in the different styles mentioned, I think you should look for hybrids with a track record. Good hybrids will give you practical solutions, clear direction when you need it, and emergence when something is not working in your context.


But how do you find these? That’s the harder question, and it takes some effort to filter the good ones from the bad ones. Unfortunately there is no blueprint solution for that. Look for recommendations. And inspect and adapt as you go.



February 4, 2014

On being helpful

I am guilty. Over time I have served in several online communities. I have tried to offer some of my limited knowledge to help people overcome some of the struggles they were facing. I have a confession to make: I am addicted. I am addicted to offering help. I am addicted to helping others. I am addicted to keeping them in a symbiotic relationship. Here is what I have learned over the years by trying to be more helpful.



Let Others Work

In my earlier days, back when I was hanging around on IRC, there was a time when I was passionate about becoming an IRC op. In the networks I was on, that involved helping out with IRC issues on the #help channel. I remember a particular week between Christmas and New Year’s, right before heading back to university classes starting in the second week of the new year. It was the same period of my life in which I joined a channel dedicated to learning IRC client scripting, called #help.script. I saw similar patterns occurring there.


People would join and ask a question; sometimes they even waited long enough for someone to pay attention to them. At peak times in a game-related channel after a new release, a lot of people would join. We are talking 300-2,000 people here. New people would join, ask a question, become desperate as no one answered, and leave again. I have a quote somewhere from those days:



* guy joins channel

[guy] I will ask a question

* guy left channel


Oftentimes there were folks who joined and asked whether it was ok to ask a question, instead of just asking it. There were folks who asked who could help them with a particular problem they were facing: “Who can help me with X?”


When I look at my mailing lists today, I still see the same pattern occurring. We had an acronym for these types of people, and I think we trained them to stick with that pattern of behavior. We called the behavior LOW – short for Let Others Work.


Oftentimes, it’s easier to raise a question in a forum where you can find answers. Oftentimes it’s easier to consult someone who claims to be an expert in their field. Oftentimes it’s easier to rely on the expertise of the experts.


LOW taught me that you don’t need that. You don’t need confirmation. You only need that kind of feedback if you find yourself in an environment that favors blaming for failure rather than encouraging learning from it. LOW occurs in environments that are not safe to fail. LOW occurs in environments where the demands of a task are at least two levels above the current ability of the person the work is delegated to. In either case, there is a problem behind the problem. If you only solve the problem in front of you, you are giving someone a fish rather than teaching them how to fish.


Next time someone asks you a question, teach them to answer that question on their own.


Terrible ideas stick

Back then, there were a lot of people trying to achieve something. Oftentimes someone would join the #help.script channel and ask how to implement an away script. Being a bit of a joker, I came up with ideas like a here-script in response.


Seriously, there were a lot of away scripts around already. Of course everyone needed their own special style that fit their purpose. I – and others around me – found the motivation for these things terrible. First of all, an IRC chat is a replacement for face-to-face communication. Face-to-face communication comes with the drawback that, if you are not available, you won’t be responding. An away script will not solve that.


I think it’s similar with today’s vacation notifications in email. An auto-reply telling people that you hate receiving yet another email will not solve the underlying problem. The underlying problem is that some folks are good at online communication and others are terrible at it. A few weeks ago I considered myself good at online communication. Today, I turned on my vacation responder to signal that people should take care with their messages to me, since I might not respond for another month or so.


That said, I try to be helpful by putting up a sign. But that might or might not fix the underlying problem. The problem is that – as a species – we have trained ourselves to think it’s ok to respond to email quickly. We shouldn’t do that. If someone wants an urgent answer from you, they should pick up the phone, not write you an email. In busy environments, I figured the problem is mostly that you are so busy attending (probably unimportant) meetings that people cannot get you on the phone, so they have to write you emails, which leaves you even less time between the meetings. An epic downward spiral.


The whole point? Terrible ideas like away scripts, dependent automated tests, and emails stick in people’s minds. Learn to unlearn stuff, and for some of your habits fall back to the ones you used when you didn’t have mobile phones and internet all day. Those channels are way more efficient.


Some people won’t understand

Of course, there are some people who will not understand what you’re saying if you paraphrase me. Of course, there are certain people who will stick with their behavior, as it has served them well so far. Help them find the appropriate things to read. Help them open their minds.


Some of these minds will remain douchebags. Some of these minds will never understand, even if their lives depended on it. Don’t focus your energy on these folks. Instead, focus your helpful energy on the ones that are open to whatever you have to offer. Of course, there is an element of expectation management here. And be aware that there are lost minds. Give those some more time; they will need it to unlearn their current behavior. If you don’t, you will overload yourself by forming a symbiotic relationship with the folks seeking that help.



February 3, 2014

Shallow Disagreements

Stick around context-driven testing long enough, and you will hear the term “shallow agreement” at one time or another. A shallow agreement happens when we forget to confirm our understanding of a user story before starting to work on it, and find out during the Sprint Review – or worse: later – that the functionality did not meet the expectations of our ProductOwner or end user. Shallow agreement happens when we find out too late that we seemed to agree on something but really didn’t. We didn’t check our assumptions, and usually both parties end up disappointed in each other.


Last year, I realized there is also something like shallow disagreements – and I am not sure whether these are worse than shallow agreements.



What’s a shallow disagreement? Given the description of a shallow agreement in the introduction, a shallow disagreement happens when we seem to disagree but really don’t. Shallow disagreements happen when we argue for hours about a particular design without any of us using code to clarify it, and then find out after implementing the first approach that we had the same picture in mind but could not get along with each other – for whatever reason.


Shallow disagreements happen when we fight with each other over a matter where each of us theorizes about the other party’s objectives and never checks our assumptions. Shallow disagreements happen when we fight over thought leaders’ approaches without diving into their teachings or visiting one of their courses. Shallow disagreements happen when we engage in a Twitter fight and don’t realize that the brevity of 140 characters does not give us enough communication bandwidth to get certain points across.


Personally, I think that shallow disagreements drain our energy, since we put so much of it into a fight that would not need to be a fight if only we could stop for a minute and actively listen to the other person’s thoughts. If only we would stop, open our minds to their point, and try to understand their position. Shallow disagreements happen when we shut down our empathy for the sake of fighting with each other.


But what causes a shallow disagreement? I think it’s not a lack of empathy to start with. Virginia Satir’s communication model taught me that it’s a lack of understanding of the other party. Schulz von Thun taught me that it’s caused by a difference in argumentation levels. If I argue on the content level and you only listen on the relationship level, then we are more likely to disagree with each other. Or as one of my colleagues put it: if your thinking is no longer constrained by logic, you can probably only be reached by emotional argumentation.


What causes us to argue on different levels? What causes us to listen on different levels? Personally, I think a lack of trust can result in over-listening on the relationship level. Low self-esteem can also result in being picky about the relationship the other person is trying to impose on you. If you perceive a threat in the content of the message, you are also likely to react on a different level than the one on which the speaker is trying to connect with you.


How can we dissolve shallow disagreements? We have to ensure that we reach the same level of communication. Non-violent communication achieves that by making sure to address the crucial pieces of a conversation: the self, the other, and the context. The same applies to congruent communication based on Virginia Satir’s model. Some empathy is also necessary for the speaker to notice that he is currently not reaching the level the other person is listening on. With that awareness, I find that I can more easily change my communication style to make sure I reach the right level in the other person.


What’s worse? Shallow agreement or shallow disagreement? I am probably biased here, but I find that unnecessary fighting drains a lot of energy from me that I could otherwise use to bring value to others. Of course, shallow agreements are terrible too. And I think that discussion and agreement are not binary; they are more of a fluid scale. That said, there is a thin line between checking assumptions to avoid shallow agreements, and over-checking assumptions and creating a shallow disagreement. If we fail to reach out to the other person on his or her level, then we are most likely to engage in an energy-draining shallow disagreement that does not lead to new conclusions.


So, next time you find yourself in a fight, check whether you have a shallow disagreement by varying your message to reach the other person’s different levels of listening. Only if you find a disagreement that is not shallow should you engage in the fight – to avoid a shallow agreement.



February 2, 2014

Scaling Agile – A Meta Framework

Scaling Agile appears to be a theme. At times I wonder how many organizations there actually are that would adopt large-scale Agile to start with. Currently I have the impression there are as many scaling frameworks out there as there are participants in a casting show. But I am exaggerating a bit here. With all these frameworks out there, how do you pick the right one for you? Or should it be a mixture of all the frameworks available? While diving deeper into scaling frameworks, I found some considerations for picking the right one fruitful. Here they are.



Retrospectives

Usually when I start evaluating a framework, I begin with the most important thing in any agile adoption: the retrospective. The retrospective is a core practice. Indeed, all other practices and principles can be derived from a working retrospective.


That said, when I evaluate a framework, I jump to the retrospective part first. What’s described there? How many different variations of retrospectives are recommended? Is there a pointer to literature on the topic? What particular phases are mentioned? What do the authors recommend for a large-scale distributed retrospective?


In my experience, a simple retrospective format that focuses on what went well and what didn’t work so well does not work in the long run. Over time, participants will leave the retrospective with the feeling that nothing is changing, and that everything is becoming more and more terrible to work with. I am guilty of having facilitated a couple of such retrospectives myself. If the reader could be left with the impression that no one needs to drive forward the actions identified in the retrospective, the retrospective format is likely to miss the point. If the reader is left with the impression that the ScrumMaster or Agile Coach takes all the action items with her, then the retrospective will not lead to a self-organizing team, but rather to one that the ScrumMaster or Agile Coach is organizing. This is a sure way to a symbiotic relationship between the team and the ScrumMaster or Agile Coach. And it’s a sure way to fail at self-organization.


Retrospectives are the place for the team to inspect and adapt their working process. They own the process, so they are the ones in charge of changing it. That means that team members should take the action items with them into the next planning meeting and iteration.


On a larger scale, the inter-team process also needs an inspect-and-adapt mechanism. Does your framework mention anything about that? Who participates in this larger retrospective meeting? Who will be in charge of moving the current process forward? These questions should give you some clues about the level of empowerment behind the particular framework you are evaluating.


The basis

The next thing I look for is the basis of the framework, and which Agile methodologies are described. The agile umbrella covers a couple of methodologies. Some of them are still in use 13 years after the original authoring of the Agile Manifesto. Others have almost disappeared.


For sure, no scaling framework needs to address all the methodologies that have been around since 2001. A good scaling framework would recognize the methodologies that are widespread together with those that are helpful. To date, and in my experience, that would include the project management framework of Scrum, the technical practices of eXtreme Programming, and – maybe – the portfolio and support approaches of Kanban.


But there is more. Underlying successful implementations of Agile are principles. If these principles are not understood, then single- and multi-team adoptions of Agile are certainly going to fail. Alistair Cockburn has written about a couple of these principles in his Agile Software Development: The Cooperative Game. Other principles include systems thinking, queueing theory, Beyond Budgeting, and Lean thinking.


There are probably more that I am not aware of. When evaluating a framework, right after the retrospectives, go to the underlying basics. If these are not mentioned, you are likely to do a poor job of applying the framework and its intended principles. Even if it comes with “best practices”, you are likely to miss the intention behind those practices, and you might drop some that are crucial to success.


Scaling of key roles

Once the retrospectives and the basis for scaling make sense to me, I take a closer look at how key roles are scaled. How many ProductOwners do we need? How many coaches and ScrumMasters? How are legacy structures like architects and testers dealt with?


These questions are crucial when it comes to size. Some might claim that you shouldn’t scale any product development. Some might say that you need more managers. How do you know which one is true?


When it comes to agile software development, I have seen large organizations with manager ratios ranging from one manager for 50 people up to three managers per developer. If you need three managers to coordinate the work of one developer, then I would claim there is something wrong. Picture a road worker with three people telling him what to do. I don’t see how this is going to work effectively – and surely not efficiently.


Self-organization is a key concept in agile. That said, ProductOwners, coaches, and the architecture and testing specialists should work in self-organized groups. They should coordinate to evolve a common vision for their speciality. And they should work with their teams to improve that vision.


What’s the source?

Finally, I look for the source of the scaling framework. How often has it actually been implemented? How successful was the scaling approach in the long run? What about the market that the originating organization falls into?


I doubt that a “this one worked once here” methodology is something you want to try with your large organization. If I were in charge of a large organization, maybe responsible for 500 employees, I would also look for success stories of the framework in question. How long is the reported period of time? When investing money to transition 500 people to a new methodology, I want to be sure enough that it worked over a timeframe of at least seven years. I don’t want to have to tell half of my staff in five years that I made a terrible mistake with that transition, just because I did a sloppy job of research today.


Also, take a closer look at the companies the framework originally comes from. How successful are they in their market? Did they go out of business? Have they missed important market windows lately? The answers to these questions could indicate that these companies are not as agile as they claim to be.


A Simple Framework Evaluation Framework

Retrospectives, underlying principles and values, scaling of key roles, and the origins – that’s all I need to dive into to come up with an opinion about a scaling framework. Thus far, it has not failed me.


A final word of caution: though Jerry Weinberg points out in Becoming a Technical Leader that one path to innovation lies in copulation – that is, the combination of two successful ideas – cherry-picking some practices from one framework and some from another does not result in copulation. That would be more like taking an arm from one body of knowledge and a leg from another. If you fail to take the head and the heart, you will probably end up with Frankenstein’s monster.

