Markus Gärtner's Blog

April 21, 2025

Inline-Method refactoring considered under-valued

If you are like me and fell into programming without a proper introduction to the tools of the trade, you may have always wondered what those fancy options in the refactoring menu of your IDE are. One of the refactorings I oftentimes under-value and under-appreciate is the Inline refactoring. A recent video from Arjan Egges on his YouTube channel reminded me of the power that this refactoring sometimes has. Let’s explore.

In his video, Arjan shows a series of refactoring steps to simplify a simple code base. Watch his video to get an introduction and some good tips about how to tackle problems in your code base.

At a certain point, Arjan decides to combine several methods in the Company class of that Python code base. There is a find_managers() method, a find_vice_presidents() method, and a find_support_staff() method, which he combines into a more general find_by_role(role) method. Here is the relevant code before his steps:

class Company:
    """Represents a company with employees."""

    def __init__(self) -> None:
        self.employees: list[Employee] = []

    ...

    def find_managers(self) -> list[Employee]:
        return [e for e in self.employees if e.role == "manager"]

    def find_vice_presidents(self) -> list[Employee]:
        return [e for e in self.employees if e.role == "vice-president"]

    def find_support_staff(self) -> list[Employee]:
        return [e for e in self.employees if e.role == "support"]

    ...

Those look quite similar. Arjan introduces the new method:

    ...

    def find_by_role(self, role: str) -> list[Employee]:
        return [e for e in self.employees if e.role == role]

    ...

Then he deletes the old methods, but has to manually adapt the callers in his main() function in the script. When I first saw that, I started to wonder why he’s not making use of the inline refactoring instead. Here’s how I ended up doing the same without the need to manually adapt the calls in the main() function.

First of all, I replaced the original methods with calls to the new find_by_role() method. Here is the result:

    ...

    def find_managers(self) -> list[Employee]:
        return self.find_by_role("manager")

    def find_vice_presidents(self) -> list[Employee]:
        return self.find_by_role("vice-president")

    def find_support_staff(self) -> list[Employee]:
        return self.find_by_role("support")

    ...

After that, I went to each of the methods I wanted to get rid of and chose the inline refactoring from my IDE. (I use PyCharm in this instance, while Arjan relies on Visual Studio Code. I’m not sure whether VS Code has the automated refactoring available to him.)

After inlining I ended up with basically the same code as Arjan, but felt a little less anxious than I would have with the manual approach – especially if this had been a large code base with lots of uses of the original functions.
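For illustration, here is a sketch of what the inline refactoring does to the call sites. The Employee class and the contents of main() below are my assumptions for illustration, not Arjan’s actual code:

```python
# Sketch of the inline refactoring's effect on call sites.
# The Employee class and main() are assumptions for illustration,
# not Arjan's actual code.

class Employee:
    def __init__(self, name: str, role: str) -> None:
        self.name = name
        self.role = role


class Company:
    def __init__(self) -> None:
        self.employees: list[Employee] = []

    def find_by_role(self, role: str) -> list[Employee]:
        return [e for e in self.employees if e.role == role]

    # Delegating method, the one about to be inlined.
    def find_managers(self) -> list[Employee]:
        return self.find_by_role("manager")


def main() -> None:
    company = Company()
    company.employees.append(Employee("Alice", "manager"))

    # Before the inline refactoring, the caller reads:
    managers = company.find_managers()

    # After inlining find_managers(), the IDE rewrites every
    # call site to the delegated call automatically:
    managers = company.find_by_role("manager")

    print([e.name for e in managers])  # prints ['Alice']


main()
```

The point is that the IDE, not you, performs the mechanical rewrite of every call site, which is exactly what makes the two-step "delegate, then inline" route safer than editing main() by hand.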

I think I learned this trick from Joshua Kerievsky in Refactoring to Patterns. I certainly used similar steps when I attended a Coding Dojo several years back in Düsseldorf, Germany, and had the self-identified architect in that dojo surprised by my refactoring steps when getting rid of a circular dependency between two classes.

Does it always work? Frankly, no. Later in the video, Arjan refactors the notification functionality. I tried to apply similar thinking there, but couldn’t pull it off by relying on the inline refactoring alone.

Sometimes IDEs decide to do weird stuff with the inline as well. I recall instances during my Rust journey where I tried the same trick, but the inline refactoring decided to create a new local variable for every parameter, even though the values already resided in variables. I’m not sure whether that’s a Rust problem or a matter of how the refactoring is implemented, but I found it annoying to the point where I avoided inline refactorings in that language altogether.

Why do I find the automated refactorings safer than the manual approach? IDEs with their automated refactorings go the extra mile to make sure what you are trying to refactor is safe. If that’s not the case, the IDE will tell you so right away. I know, I’m human, and I can make mistakes. Sure, automated unit tests help me identify where I screwed up. But a carpenter also knows their toolbelt well, and knows how to use the tools they carry with them. I think we should do the same as software crafters.

Published on April 21, 2025 12:17

February 17, 2025

Cross-Team Coordination Patterns: Ordered

Effective cross-team collaboration is crucial for any organization, especially as teams grow in size and complexity. Without a structured approach, communication breakdowns, misaligned goals, and inefficiencies can arise. One way to ensure seamless collaboration is through Ordered Coordination, a pattern in which teams follow a mandated structure dictated by leadership.

In this blog post, we’ll dive deep into the Ordered Coordination pattern, how it works, its benefits and challenges, and best practices for implementation.

What is Ordered Coordination?

Ordered Coordination is a hierarchically mandated approach where an authority—such as leadership, project management, or governance teams—dictates how teams should collaborate. Unlike more decentralized coordination patterns, Ordered Coordination establishes strict processes and workflows to ensure consistency across teams.

Key Characteristics of Ordered Coordination

- Top-Down Control: Leadership determines how teams interact, rather than teams deciding organically.
- Standardized Processes: Teams follow predefined steps and rules to ensure smooth coordination.
- Clear Escalation Paths: When conflicts arise, there is a structured process for resolution.
- Predictable Workflows: Tasks and responsibilities are executed in a set order to minimize confusion.

Example of Ordered Coordination

Let’s consider a large enterprise IT department handling cybersecurity incidents:

1. The Security Operations Team detects a potential breach.
2. They escalate it to the Incident Response Team, which investigates the issue.
3. If the breach is confirmed, the Engineering Team is notified to implement security patches.
4. The Compliance Team ensures that legal and regulatory reporting requirements are met.

Each team has a defined role and follows a set process dictated by company policy. No team operates independently; instead, they must adhere to structured workflows mandated from the top.
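As a sketch, the mandated order of such a chain can be made explicit in code. The team names and actions come from the example above, but the data structure and function are illustrative assumptions, not part of any real incident tooling:

```python
# Illustrative sketch of an ordered, mandated escalation chain.
# Team names and actions are taken from the example above; the code
# structure itself is an assumption, not real incident tooling.

ESCALATION_CHAIN = [
    ("Security Operations", "detect potential breach"),
    ("Incident Response", "investigate the issue"),
    ("Engineering", "implement security patches"),
    ("Compliance", "meet legal and regulatory reporting requirements"),
]


def run_incident(confirmed_breach: bool) -> list[str]:
    """Walk the chain in the mandated order; later steps only run
    if the Incident Response team confirms the breach."""
    log = []
    for team, action in ESCALATION_CHAIN:
        if team in ("Engineering", "Compliance") and not confirmed_breach:
            break
        log.append(f"{team}: {action}")
    return log


print(run_incident(confirmed_breach=True))
```

The ordered list is the whole point of the pattern: no team can act out of turn, because the sequence itself is policy.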

Why Use Ordered Coordination?

Ordered Coordination is particularly useful in highly regulated industries, large organizations, and environments where compliance and risk management are top priorities.

Benefits of Ordered Coordination

✔ Consistency Across Teams

- When teams follow a standardized approach, there is less room for misalignment or misinterpretation.
- This is especially valuable in industries like finance, healthcare, and government, where strict regulations apply.

✔ Predictability and Efficiency

- Because teams know exactly how to operate, projects can be executed in a more organized and timely manner.
- This is particularly useful for large-scale enterprises with multiple teams working on interdependent tasks.

✔ Improved Accountability

- Each team has a clearly defined role, making it easy to track progress and identify bottlenecks.
- If a project is delayed, leadership can pinpoint where the issue occurred and address it accordingly.

✔ Better Risk Management

- Ordered Coordination ensures that critical processes (e.g., security, compliance) are followed precisely, reducing the risk of costly errors.
- This is essential for companies handling sensitive data or regulated products.

Challenges of Ordered Coordination

❌ Lack of Flexibility

- Teams may feel constrained by rigid processes, which can stifle creativity and innovation.
- If unforeseen challenges arise, teams must wait for approvals rather than acting independently.

❌ Bureaucratic Slowdowns

- Because Ordered Coordination often involves hierarchical approvals, decision-making can be slower.
- This can be frustrating in fast-paced environments like startups, where agility is crucial.

❌ Resistance from Teams

- Teams used to autonomous decision-making may resist Ordered Coordination.
- If not implemented carefully, it can lead to frustration and decreased morale.

How to Implement Ordered Coordination Effectively

Step 1: Define the Chain of Command

- Clearly outline who is responsible for what at each level.
- This should be documented in organizational policies and communicated to all teams.

Example:

- Project Manager → Coordinates overall strategy
- Team Leads → Ensure team members follow assigned tasks
- Individual Contributors → Execute assigned tasks according to the ordered workflow

Step 2: Establish Standardized Workflows

- Develop process maps that clearly illustrate how teams should collaborate.
- Include detailed steps for approvals, escalation, and handoffs between teams.

Example: A structured product development workflow in an enterprise setting:

1. Product team gathers requirements and seeks approval from executives.
2. Design team creates mockups and gets approval from stakeholders.
3. Development team builds the product based on approved designs.
4. Quality Assurance team tests the product before launch.
5. Deployment team releases the product according to the planned rollout schedule.

Step 3: Use Communication Tools for Coordination

- To avoid bureaucratic delays, leverage project management tools like Jira, Asana, or Trello to track workflows.
- Establish clear documentation and shared dashboards for visibility across teams.

Example: A cybersecurity team using incident tracking tools (e.g., ServiceNow) to ensure compliance with security policies.

Step 4: Regularly Review and Optimize Processes

- Conduct retrospectives to analyze whether the Ordered Coordination approach is working effectively.
- Adjust workflows to address inefficiencies and improve team satisfaction.

Example: If a software release process is slowed down by excessive approvals, leadership can adjust policies to streamline decision-making.

When Should You Use Ordered Coordination?

Ordered Coordination is ideal when:

✅ Regulatory Compliance is Essential

- If your industry requires strict adherence to legal guidelines, Ordered Coordination ensures compliance.
- Example: Healthcare companies following HIPAA regulations.

✅ Multiple Teams Work on a Single Deliverable

- If several teams contribute to the same product or service, a structured workflow prevents misalignment.
- Example: A large-scale IT migration project involving security, infrastructure, and application teams.

✅ Risk Management is a Priority

- If mistakes could lead to financial, legal, or reputational damage, Ordered Coordination provides checks and balances.
- Example: Financial institutions managing fraud detection processes.

Ordered Coordination vs. Other Coordination Patterns

| Feature | Ordered Coordination | Named Coordination | Oblivious Coordination |
| --- | --- | --- | --- |
| Control | Centralized (top-down) | Decentralized | No formal control |
| Decision-Making | Structured & hierarchical | Team-based | Independent |
| Flexibility | Low | High | Very high |
| Ideal Use Case | Regulated industries, enterprise workflows | Agile teams, product development | Startups, innovation-driven teams |
| Challenges | Slow decision-making, rigid processes | Risk of silos, misalignment | Lack of coordination, inefficiencies |

Conclusion

Ordered Coordination is a powerful pattern for organizations that require structure, compliance, and risk mitigation. By implementing clear workflows, predefined roles, and standardized communication methods, teams can work together effectively while ensuring consistency across departments.

However, organizations must be mindful of its potential downsides, such as reduced flexibility and bureaucratic slowdowns. The key to success is balancing structure with adaptability, ensuring that processes remain efficient without stifling innovation.

Key Takeaways:

✔ Ordered Coordination works best for large, complex, and compliance-driven organizations.
✔ It ensures clear roles, predictable workflows, and accountability.
✔ Challenges include bureaucracy, slow decision-making, and resistance to top-down control.
✔ Implementing efficient workflows and communication tools can optimize Ordered Coordination for better performance.

By carefully structuring Ordered Coordination, organizations can create a more disciplined, efficient, and collaborative environment.

And if you made it up to this point and wondered “what the heck is he talking about?”, this pattern description was generated by a large language model. My dear readers probably know that I might have a different take on things. Let’s see when I will do a write-up of my own. It probably won’t be as positive as the generated text.

Published on February 17, 2025 11:22

February 12, 2025

My job went to Nvidia

… and all I got was this lousy blog entry.

For the title of this blog entry, I was reminded of a book by Chad Fowler titled “My Job Went to India”. I read its successor, The Passionate Programmer, as it was discussed during the initial days of the software craft movement on the mailing list. Why did so many jobs move to Nvidia, the graphics processor company? Or did they? Let’s find out.

At the moment, it seems that every company out there is innovating hard on AI approaches. Not only are the executive suites playing with it, they also ask the technology side to build it into their products. New job titles like “prompt engineer” appear on the job markets, and one of the biggest beneficiaries behind all the hype is the company providing the underlying hardware used by so many LLM providers: Nvidia, with its CUDA core technology, first used on graphics processors to create 3D game worlds, and the computing power behind it.

Don’t get me wrong, I’m not trying to run an advertisement here.

Back in my university days, I first worked on an approach to detect faces in a stream of pictures. Real-time processing was our goal, meaning we could detect all faces at a rate of at least 30 frames per second. We did some research and decided to implement an offline learning algorithm called AdaBoost that had been published by some folks at Intel. We collected face sample images and non-face sample images, and fed them into our training algorithm. We were allowed to use some faculty machines, so with training distributed across 8 or so computers, training the resulting detector took about a week. After that, we could evaluate the detector’s performance on an evaluation set of images that was not part of the original training set, to get a feel for the false positive and false negative rates of the detector, and even try it out on some live camera images.

That was back in 2002-2003 on a Pentium III 800 processor.
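For readers curious about the mechanics, here is a stdlib-only toy version of that workflow: offline AdaBoost training with decision stumps on synthetic one-dimensional data, followed by evaluation on points outside the training set. Real face detection used Haar-like image features and detector cascades; this sketch only shows the boosting idea:

```python
# Minimal AdaBoost with decision stumps, stdlib only. A toy sketch of
# the offline train/evaluate workflow described above; the 1-D data
# stands in for face/non-face feature vectors.
import math


def train_adaboost(xs, ys, rounds=10):
    """ys in {-1, +1}. Returns a list of weighted stumps (thr, sign, alpha)."""
    n = len(xs)
    w = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        best = None
        # Exhaustively pick the stump (threshold, polarity) with the
        # lowest weighted error on the current sample weights.
        for thr in xs:
            for sign in (+1, -1):
                err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                          if (sign if xi >= thr else -sign) != yi)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((thr, sign, alpha))
        # Re-weight: boost the samples this stump got wrong.
        w = [wi * math.exp(-alpha * yi * (sign if xi >= thr else -sign))
             for xi, yi, wi in zip(xs, ys, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps


def predict(stumps, x):
    score = sum(alpha * (sign if x >= thr else -sign)
                for thr, sign, alpha in stumps)
    return 1 if score >= 0 else -1


# Offline training: "faces" (+1) above 5.0, "non-faces" (-1) below.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = train_adaboost(xs, ys)

# Held-out evaluation on points not in the training set.
assert predict(model, 0.5) == -1
assert predict(model, 8.5) == 1
```

What took about a week on 8 machines in 2002 runs in milliseconds here, which is of course the point of the Moore's Law observation below.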

Later on, we used the same training and detection algorithm as part of the “Visual Active Memory Processing and Interactive Retrieval” (yes, V.A.M.P.I.Re.) project to detect usual office supplies like a keyboard, a mouse, a TESA roller, and so on. We had to deal with a camera mounted on a helmet and the respective views on the office supplies.

Even later, I wrote my diploma thesis on hand gesture detection in a robot-human interaction scenario.

I recall wondering in 2010 that – at that point in time – my telephone was able to do face detection in real time, and it fit into my pocket.

That’s when I realized a certain glimpse of Moore’s Law in effect.

Skip forward to today, and our computing power has increased to the degree that large language models can now be trained in a data center or, as DeepSeek showed us, even at home on your very own GPU.

The downside?

Everyone plays around with a use case inside the larger field that is now called AI. Heck, it wasn’t even called AI back in my day. It was just a means to detect certain aspects in an image.

So… did my job/your job/anyone’s job move to Nvidia?

I don’t think so. The problem with the underlying learning algorithms lies, to some extent, in a detector or a generator becoming too specialized. With so many of the texts out there currently being generated by those LLMs, new training data will make it difficult to generate good texts that are not overly specific – and I also find them boring. There are several parameters at play that a human needs to watch during the training process. For example, in the face detection scenario there was a problem with reliably detecting white faces and black faces with the same quality. Usually you could only have reliability for one kind, but not the other at the same time. In the years to come, the resulting LLMs will be trained on too many LLM-generated texts to produce output that’s still worthwhile to read in the end. Probably something similar will happen with image generation.

Personally, I like Taiichi Ohno’s “automation with a human touch” when I think about tools like these. It was the same with test automation, now it’s the same with LLMs. They may have a use to make tedious tasks easier, but I still want to double-check their results. For that, we still need our skills from the old days, but can get to some results quicker. Not sure whether these results will always be better, though.

Take-away?

Hone your skills, train them, get better at what we humans do best: learning. With a one-week feedback cycle, we had quite a long feedback loop for our face detectors. Nowadays you might get a much shorter one. That means you can learn faster, though you will still need to rely on your skills to double-check the results rather than taking good quality for granted. And that will stay.

What are your thoughts?

This text was human-generated.

Published on February 12, 2025 10:58

January 27, 2025

Cross-team coordination patterns – Oblivious

With the advent of what I call the second generation of scaling approaches, maybe it’s time to dive deeper into the coordination patterns that underlie larger organizations.

I have been cooking this blog entry series for a while now in my head. Time to get some structure to it. This first entry deals with organizations that are oblivious to their need for cross-team coordination practices.

It’s easy to fall into the oblivious category. Actually, I think it’s a trap that some companies set out for themselves.

These companies do not realize that their teams have a need to coordinate across teams. Sometimes they set themselves up to do so.

The problem usually is this: The company had to deal in the past with an ever growing product. At a certain point, divide-and-conquer strategies were applied to the underlying organizational problem, cutting down the product into multiple parts. Unfortunately, someone with a technical background was in charge of doing so, so the company ended up with different technical teams.

Stemming from statistical process control and the underlying Tayloristic views, the company ends up with a front-end department and a back-end department. Or a user interface group, a middleware group, and a database group. Or… I think you get the point.

These companies are usually led on the assumption that if all the parts are optimized, the overall product will be optimized. Unfortunately, that thought could not be further from the truth. A friend of mine used to say

If everyone thinks about themselves first, everyone is thought about.

(I’m not sure whether that translates that well.)

Problems arise from ignoring the interconnectedness of all the parts that form a solution for people out there. These “people out there” might be customers in the market, or they might be internal users. If one part of an overall solution is optimized in isolation, the interconnectedness of the different parts may eventually yield a less-than-optimal overall solution for one or more of the other parts.

Companies that are oblivious to the need for cross-team coordination also don’t recognize the interconnectedness of the different teams. Thus they fall short of delivering a good product: they might have optimized their AI efforts, but the user interface lacks clarity. Or they might have invested in a high-performing database solution, but their middleware was written by the last five generations of interns.

Of course, I’m over-dramatizing.

I’ll be looking at other patterns of cross-team coordination in up-coming posts to maybe offer some different perspectives.

Published on January 27, 2025 12:09

January 15, 2025

Dead, undead, or alive – which is it now?

Back last year, I started a reflection on whether Agile might be dead, undead, or still alive and thriving. Ever since I saw my blog entry series popping up in Jurgen Appelo’s collection of blog entries on the topic, I was reminded that I still lack a wrap-up of my thoughts, after having gone through them. Let’s start this year with a reflection on my reflections.

Other entries in this series:

- “Agile is dead” – and I thought, I had that conversation…
- Agile is dead
- Agile is undead
- Agile is alive

P.S.: On a personal note, 10 out of 45 colleagues at it-agile were informed last week that their employment contracts will end this year. That includes myself as well. You can see the more general announcement (in German) on LinkedIn. That said, if you’re looking for someone with my skills and talents, or know anyone I might be a good fit for, I’d appreciate it if you reached out to me or gave me a hint in another fashion. Thank you in advance.

Different things to different people

I basically started off this series with a re-utterance of Jerry Weinberg’s famous “Quality is value to some person” statement, and the generalizations from James Bach and Michael Bolton.

Looking back, the context-driven testing community names the underlying problem a shallow agreement. We think we agree on what certain things mean, but that ain’t so. That might have been the problem with the Agile goldrush, as Scott Ambler coined it a while ago in a similar entry.

Agile is dead

Companies that tried to jump onto the Agile hype train sooner or later found out that it’s not just a thing you do in the IT department of your larger enterprise. That’s when crisis hit the internal agile coaches and Scrum Masters that focused solely on their own team without keeping the underlying organizational system in their view.

When Agile is understood merely as a tool to bring shareholders value by re-organizing the IT department, things are bound to go awry. Or as Virginia Satir put it:

The beginning of a crisis is always the end of an illusion.

The beginning of the crisis in the Agile community, which appears to spread not only nationally in Germany but world-wide, is merely the end of the illusion that – 20 or so years in – the goldrush will continue.

Agile is undead

That sort of effect comes bundled with a burnt landscape (hope that wording is not too soon after the California fires). Companies applied their understanding of Agile with their understanding of corporate structures and ceremonies.

That kind of mix-and-match approach unfortunately did not work, does not work, and will probably never work. Usually my articles are colored and inspired by recent happenings and the things I’m reading at that moment in time. John Roberts refers to the pitfalls and failures of the mix-and-match approach in his book The Modern Firm. Even after decades of Organizational Development flowing into the enterprises of the world, they still appear to lack that particular insight, thereby creating Agile zombies with their version of mix-and-match.

Agile is alive

Yet, there are a few more skillful Organizational Developers that seem to succeed. In those places, the discussion around Agile or not has vanished. Instead, these companies excel – even in economically stressful times – at what they’re doing.

After all, the whole conversation around Agile once started with adaptability. The question remains, though, to what extent the Agile undead companies will infect their thriving competitors with their version of the zombie virus.

Call it Moral Mazes, Mad Business, or the Knowing-Doing Gap, maybe sticking to survivor’s bias will not work in the long run, even for an approach that claims to be adaptive.

So what? Now what?

From my point of view, Agile is not dead or undead. The problems stem from the lack of adaptability in the Agile methods and frameworks on their own. Some companies succeed, yet, still too few to have an impact.

Along comes the aging of the original proponents. The last update to the Scrum Guide took place nearly five years ago. Those responsible for keeping the Guide current also face the struggle that there is a thriving industry behind it that earns money by sticking to the old ways, neglecting considerations like non-static, more dynamic teams. Meanwhile, the rise of second- or even third-generation scaling frameworks like UnFIX or FAST Agile diversifies a landscape that was once focused on Scrum vs. Kanban.

I think this is a natural development that had to take place. Don’t ask me where it will lead, though. There might be a post-Agile hype goldrush on the brink right now, there might not. I suspect only time will tell, so there’s value in staying patient and continuing to educate and coach what we have learned works in our workplaces.

No deep insight, you may suspect, but remember, there is no silver bullet.

Published on January 15, 2025 02:57

November 1, 2024

Agile is alive

After I went over the various reasons why Agile may be perceived as dead, or undead, it’s time to take a look at the reasons why Agile is alive – and I think it will remain so for some time. Maybe the goldrush times of Agile are over, but there are certainly some things that will stick. Let’s explore again.

Other entries in this series:

- “Agile is dead” – and I thought, I had that conversation…
- Agile is dead
- Agile is undead

Maybe the various methodologies will go away, maybe they will be replaced with the next thing. Still, I think, there are many vital lessons that we will take with us into a potential post-Agile world.

One of the things that Agile methodologies emphasized the most is probably to work more effectively rather than more efficiently. Effectiveness in this context means doing the right things, while efficiency means doing things the right way. Rather than sticking to doing the wrong things righter, we learned how to do the right things in the first place, eventually getting to a state where we can do them efficiently as well.

On this topic, a Scrum Master has many skills that can help us do the right things: offering an outside perspective, creating the right atmosphere to come to a decision as a team, or just challenging us to think outside the box.

Even if we abandon many more things in the post-Agile world, getting people to do the right things is certainly a topic that not only applies to an Agile world, but beyond that.

Of course, that raises the meta question of how do we know what the right things are? The reflective mindset of a Scrum Master that challenges us to get in touch with real users before relying on too many assumptions will come in handy there as well.

But what are the things that groups of people or teams will be working on in the companies in the post-Agile world? Certainly, there will be companies that will work on providing technically feasible products that users want and generate a continuous revenue stream for them.

I recall I once researched the origins of the famous comic on what the customer wanted, what the architect came up with, what the designer ended up with, and so on. To my surprise the original comic was published in a newspaper in the year 1973. That’s when I realized how old this problem actually is.

All that said, we will most likely need people that not only can deliver a product in a short amount of time, but also people with the tools at hand to know what is valuable to users out there, and how to build that in iterative-incremental cycles. In other words, we will need the whole skillset of good product owners that act as entrepreneurs in organizations, and invite users, stakeholders, and the team to co-create products that people are willing to pay for.

And we will certainly also need people that can deliver the implementations of those products. Those people need the skillset of programmers, but also other skills like risk-exploration, user experience design, documentation, and the like. In other words, we will need people contributing to developing a product.

Especially in larger organizations, we will also need people that help people with all those different skillsets to thrive. Sometimes the servant leadership of a Scrum Master alone is not enough. Then we might need people with lateral leadership skills. Those people can make all the others aware of the different constraints within which they are expected to organize themselves. You may call them managers, line managers, or even Scrum Masters.

All that said, it seems to me that all those skills in the different roles of the agile methodologies are still needed, even if we stop calling it all by the term Agile. But if we need all those skills, how can Agile be dead? I don’t think it is, and that’s why I claim that Agile is well alive – we just have not found the words to bring that message across in a concise, digestible way for all the people out there to hear it.

Published on November 01, 2024 08:00

October 31, 2024

Agile is undead

Continuing our journey in the “Agile is dead” space, I thought Halloween was the perfect date to explore the ways in which Agile is actually not just dead, but undead in many companies. If you read my earlier entry, you might have come across the thought that this was where we were heading towards. Let’s investigate further.

Other entries in this series:

- “Agile is dead” – and I thought, I had that conversation…
- Agile is dead

For quite some time companies have jumped on the bandwagon of “agile” – without trying to understand the paradigm shift associated with it, asking for measurements, metrics, asking for higher velocity, roadmaps for the next ten years, and whatever horror stories you may have come across.

At its core, Agile development methods are a bet: to deliver the most valuable product that we can in the available time.

Of course, at its core, there is a trade-off decision. We could deliver more gold-plated features if we had more time. But current market-pressure, opportunities against our competitors, and legal deadlines don’t always allow us to take the time to gold-plate everything.

But then again, people in companies with more traditional processes are used to getting more reliable data on where things stand. If you dig deeper, though, you will find out that what they are getting is an illusion of safety. More often than not, deadlines pass by, and features are scrapped or delivered in lower quality, since more traditional processes do not allow for counter-measures on the various trade-off decisions in developing a product. “Follow the plan” and “The deadline and scope are fixed” are phrases you may have come across.

Given that situation, whenever Agile methodologies tried to cross the chasm to the Early and Late Majority, the core of the message got washed out. Different approaches started to maintain the illusion of safety and control for upper management. Tool vendors jumped on that wagon and incorporated metrics for all those false idols that are dominant in the corporations out there.

And don’t get me started on the things people began to do when they picked the easy parts of a certain scaling framework while neglecting the harder ones (or on the framework even confusing well-established terms and tools, contributing to the whole situation we find ourselves in now).

“I focus on my team, since I don’t know how to change the whole organization” is the scariest phrase I have heard over the years from a Scrum Master. It is part of your job as a Scrum Master to develop the organization so that your team can bring better outcomes to your customers and to the company your team works in. If you fail to do that by “just focussing on your team” and facilitating various meetings, you are missing the point.

True, there is a time for good facilitation. Over time, though, your team should know on its own how to facilitate itself and solve its own problems. In most companies that I have seen, highly educated people work on Scrum Teams, and those adults are capable of solving problems, even complex social ones. Let them shine at that instead of turning them into zombies.

It does not surprise me at all that so many companies stop seeing value in their Scrum Masters and Coaches if the only thing those do is turn developers into zombies. Unfortunately, that means the companies that would benefit most from good coaches are turning their heads away from them. A vicious cycle.

Is all hope lost? Let’s see in the next entry.

Published on October 31, 2024 08:00

October 30, 2024

Agile is dead

This entry is a continuation of the discussion I started yesterday. Stick around long enough in any community, and over time you will hear claims about the movement being dead. For some communities, there is some truth to the claim – at a certain point in time. For others, there is not. Let’s tackle some of the background on why I think that Agile might be dead.

Other entries in this series:

“Agile is dead” – and I thought, I had that conversation…

Some of my long-time readers may know that I was involved back in the days when the Software Craft movement started to form. I recall the vivid discussions we had around the new left-side value statements, taking on the Agile manifesto by formulating stronger points that we were striving for in the Craft manifesto. After that we thought hard about our version of the 12 principles of the Agile manifesto. The whole discussion among a list of 13 people ended up in these statements:

We care
We practice
We learn
We share

To this day, I still love the brevity of those statements that Doug Bradbury eventually found in our discussion.

That was 15 years ago, though.

We didn’t change much with the manifesto. True, there is the next Global Day of Code Retreat coming up, there are many local meet-ups around the globe, various German communities in different cities, and of course the many conferences that sprang from the Software Craft and Testing (SoCraTes) (un)conferences, mostly around Europe.

I never thought that we would change the world through the publication of a manifesto, though. 15 years in, many people still struggle with TDD, taking baby steps, heck, even I do most of the time. A week after the manifesto got published, someone reported on the mailing list that so-and-so-many people had signed it in the meantime. When it came to interpreting the numbers, someone summed it up as “so-and-so-many people fight crappy code.” That felt good in the moment.

Taking a turn back to the Agile community, we’re at the point in time, 23 years after the manifesto got published, that things sort of feel different, too.

Originally, there were many methods under the Agile umbrella: Crystal, Scrum, Extreme Programming, Pragmatic Programming, Adaptive Software Development, DSDM, FDD, Context-driven testing, you name it.

Today? In most organizations that explore which Agile flavor fits them best, Scrum and Kanban are probably the only ones explored – and nothing in between. Heck, Kanban does not even want to be an Agile method.

23 years, and all we learned was Scrum or Kanban.

True, there is more. From my standpoint, we’re currently experiencing a third wave of scaling approaches with unFIX and FAST being the new entries that join more classical scaling frameworks like LeSS or Nexus from the first wave – or was it the second? I’m not sure.

Personally, I share Mike Beedle’s viewpoint that S_Fe never was an Agile scaling approach. Yet I think it changed many things for the worse. If you ask around in some of the S_Fe companies, one phrase keeps repeating: “Oh, we’ve picked the things we liked from the framework. We’re not doing all of it.” To no surprise, those are also the companies that try to distance themselves from Agile.

But I digress.

I think Agile is dead in those companies that saw Agile as a new vehicle, without trying to understand the underlying paradigm. To make matters worse, the same companies took from different paradigms things they liked before understanding multi-method, multi-paradigm design decisions, and how they could be properly joined together to yield the benefits they tried to gain. Unfortunately, S_Fe with its marketing fluff that appeals to “you don’t need to change much” managers put the downfall of Agile on steroids, from my perception.

Going back to my personal story of how I got to know Agile: in the shops that gave up on Agile, people never experienced the benefits the way I did – team work, fun, alongside meaningful business benefits. I can totally understand that people distance themselves from it right now. I even fear I might have contributed to that downfall.

I think it’s a good thing that Agile is dead in those companies. I hope they might discover something new over time. The world of work has been shifting for some time now. Maybe it’s time to discover new ways of working in a post-Agile world. Who knows what’s good or bad?

Published on October 30, 2024 08:00

October 29, 2024

“Agile is dead” – and I thought I had that conversation…

Stick around long enough in the agile world, and sooner or later you come across the age-old discussion about whether agile is dead – as is the case at the time of me writing these lines. Just after I had that discussion again at two recent user groups, I recalled on my way back home that I had had that kind of discussion about a decade ago. So, when I came home, I had to dig through my old blog entries. To my surprise, I found something similar, yet seemingly different, in my blogging past.

“Agile is dead” vs. “Quality is dead”

The current discussions about agile being dead or not felt quite familiar to me. So, when I looked for older posts of mine, I expected the whole thing to fall into the timeframe when we worked on the Craft manifesto. To my surprise, that wasn’t the case. The most relevant discussions I found were Quality is not dead and a rant a few years later on Quality /is/ dead.

The first entry is mainly a reaction to a blog entry from James Bach back in the days. One point I raised in there – of course – came from Jerry Weinberg:

Quality is value to some person.

James and Michael Bolton generalized Jerry’s original point over the years:

For any abstract X, X is X to some person at some time.

If we transfer that back to Agile, we get:

Agile is Agile to some person at some time.

Of course, in general, Agile is an abstract term that might mean different things to different people. Portions of the latest “Agile is dead” discussion stem from different understandings of what Agile may or may not mean. Given the situation in Germany right now, it seems that most larger companies have stopped their investments in any agile transition, transformation, or however they ended up calling the organizational change endeavour.

Of course, that means that the C-suite in all of these companies stopped seeing the benefits they had hoped for. So, senior leadership fell back to their cost-cutting cookie-cutter approaches and threw out folks they did not see contributing to product development – like Scrum Masters, Agile Coaches, and the like.

A couple of years back, as we came out of the pandemic, many Agile Coaches started to leave companies that were underinvesting in being Agile – companies only paying lip-service to those efforts, relabelling older roles, and hoping for different results than before. Against that background, the move now from senior management feels like a form of retribution – at least to me from the outside. I find the situation ironic to some extent – and it would be funny if it were not as serious as I think it is.

The discussions I was part of in the past 2+ years on whether Agile is dead, alive, or maybe even undead seem strikingly similar to the discussions I was part of when Quality was declared dead, undead – and maybe even alive. I’m going to explore the different perspectives over the next few days. Let’s see where I end up.

Published on October 29, 2024 08:00

September 30, 2024

My view on the current state of Agile

Stick around long enough in software development, and you may have come along for the ride of Agile software development practices, methodologies, and/or frameworks. Stick around long enough in that community – ok, that might be a reach too far, given that the community has only existed for about 30 years or so. All of the above said, I have noticed some developments lately around the term Agile, and need to get my thoughts down. Not that I think I have a particularly relevant perspective to start with. But maybe I can offer some perspective to one or another reader.

If you know me, you probably know that I have a tendency to look back on where we came from to better understand things happening in the present. If that is not your way of making sense of things, this blog entry might not be for you. So, be warned before moving on.

Approaches and Frameworks

In the early days, Scrum, eXtreme Programming, Feature-driven Development, Pragmatic Programming, DSDM, Crystal, and Adaptive Software Development, joined by the Context-driven testing school of thought, joined forces to discover a commonality among them. I once heard one of the co-authors say that he thought the particular meeting in Snowbird, UT in February 2001 was probably the only point in time when the 17 people attending the event were able to agree on something.

Some years passed, and eventually I also became aware of this particular group of people and their writings. It was the time when things still felt engaging – at least to me. I also started to hear concerns from the originators of the Agile Manifesto. Take for example Brian Marick’s AR⊗TA keynote, or the missing fifth value pair. It was also the time close to the emergence of the Kanban community.

In hindsight, I consider the approaches that came out of all those efforts, like the Craft community and the Lean Kanban movement, as sort of a 2nd generation of approaches. Some never really took off, though they shared the sentiment I experienced back in the day, and they had some really great ideas floating around. Some of them wouldn’t stick, but the whole Agile movement continued.

Value systems and Methodologies

Something similar happened with value systems alongside guiding principles, in my view. Let me try to explain. Early on, the principles behind the Context-driven testing school were a clear driver for my work as a tester – back in the days. After experiencing first-hand how the ideas behind XP, Scrum, and so on came to fruition and actually worked, though they seemed counter-intuitive on first read-through, I was convinced there is a better way to work than the models I had experienced up until then. I became addicted – and still am.

Skip forward a few years, and the initial methodologists started to publish new models of their then-current understandings of the market situation. It was the time when Joshua Kerievsky started to share his Modern Agile thoughts and Alistair Cockburn wrote about his Heart of Agile, and I’m sure there are many that I missed. In fact, I recall reading from Alistair Cockburn back in the day, on a social media platform then still known as Twitter, that lots of original methodologists were publishing new things, and he appeared curious about the things to come.

I think this uprising was the 2nd generation of methodologies that was coming along. A second wave for the folks in the early and late majority of the technology adoption curve.

Scaling – or what about more than one team?

Another re-appearing topic over the years has been the question: “But Agile favors development of a product in one team. What do I do when I have larger products?”

In the beginning there was one initial publication on the topic: the Crystal family of approaches (I hope Alistair bears with me on calling the Crystal family approaches). You had Crystal Clear for smaller developments up to Crystal Red for larger product development groups.

In my eyes, the downfall started whenever some large business consultancy coined the phrase “Spotify Model”, deriving it from the 2013 presentations and write-ups on how Spotify worked then, combined with the uproar around what Mike Beedle always called S_Fe, since it never had a grounding in the missing letter. That was also the time when Large-scale Scrum got a name (even though its approaches had been floating around way longer than that), Scrum@Scale started to appear, and I got involved in the ScALeD principles and the Agile (De-)Scaling Cycle. Those appeared to me as a 2nd generation of scaling approaches.

Skip forward a few years, past the two neglected years of a global pandemic, and approaches like unFIX and FAST, among others, started to appear. By my counting, those would be the 3rd generation of scaling approaches.

Given we appear to be in the 3rd generation of the scaling approaches, while still being on the 2nd generation of methodologies and frameworks, I dare to raise the point that folks in the late majority are now trying to think Agile from the perspective of “many teams first”. And I assume they will soon start to rediscover the forgotten lessons of the things that came earlier, like test-driven development, software teaming, and internal open source.

So, where are we heading to?

Honestly, I really can’t tell. More and more I hear from companies that got disappointed by the gap between early Agile adoption promises and the results they see now. Maybe due to a lack of understanding of the necessary investments they needed to make (e.g. as described in the Agile Fluency model). Maybe the companies paid lip-service to these efforts anyway, to tick a box on things they tried. Maybe because they tried, and for reasons yet unknown they couldn’t get it working for themselves.

I still see things from the Agile movement that we will need in a potential post-Agile world, and there are some things that will be more likely to stick, while others may or may not go away completely.

No matter how I look at things, though, I have a strong belief in the value of the things I learned over the years, and the things I passed on, stemming from my experiences working in an agile way. I really don’t care too much whether we will continue to call it Agile in the future or not. But given how many companies have a negative association with that term right now, we might be on the brink of discovering a new name – alongside some more generations of ways of working that I am sure will be helpful to the generations of product developers yet to come.

I look forward to that. And maybe it will be the time when we stop fighting over a particular item in the Definition of Done, or whether to use Scrum or Kanban, and just know how to work in an effective way.

Published on September 30, 2024 13:01