Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)
From an economic perspective, there’s a difference between risk and ambiguity.
Artificial consistency means restricting design patterns and solutions to a small pool that can be standardized and repeated throughout the entire architecture in a way that does not provide technical value.
Figuring out when consistency adds technical value and when it is artificial is one of the hardest decisions an engineering team must make. Human beings are pattern-matching machines. The flip side of finding familiar things easier is that we tend to over-optimize…
A big red flag is raised for me when people talk about the phases of their modernization plans in terms of which technologies they are going to use rather than what value they will add.
Teams tend to move in the direction they are looking. If we talk about what we’re doing in terms of technical choices, users’ needs get lost.
Three principles when developing a strategy around a new legacy system:
1. Modernizations should be based on adding value, not chasing new technology.
2. Familiar interfaces help speed up adoption.
3. People gain awareness of interfaces and technology through their networks, not necessarily by popularity.
The terms legacy and technical debt are frequently conflated.
A system cannot have performance issues unless the organization that owns it has defined expectations.
If your goal is to reduce failures or minimize security risks, your best bet is to start by evaluating your system on those two characteristics: Where are things tightly coupled, and where are things complex?
Loosening the coupling between two components usually means creating additional abstraction layers, which raises the complexity of the system. Minimizing the complexity of a system tends to mean more reuse of common components, which tightens coupling.
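A minimal sketch of that tradeoff in Python (my illustration; none of these class or function names come from the book). The tightly coupled version has fewer moving parts but is welded to one data source; the decoupled version adds an abstraction layer that creates a seam for testing and substitution, at the cost of one more concept in the system:

```python
from abc import ABC, abstractmethod

# Tightly coupled: simple, but the report can only ever use this one
# hardcoded data source and is hard to exercise in isolation.
class HardcodedOrderReport:
    def run(self):
        orders = [("o-1", 120), ("o-2", 80)]  # stands in for a direct DB call
        return sum(amount for _, amount in orders)

# Loosely coupled: the OrderStore abstraction is an extra layer that
# must now be understood and maintained -- that's the added complexity.
class OrderStore(ABC):
    @abstractmethod
    def fetch_orders(self):
        ...

class InMemoryOrderStore(OrderStore):
    """A fake store that can stand in for the real one in tests."""
    def __init__(self, orders):
        self._orders = orders

    def fetch_orders(self):
        return self._orders

class OrderReport:
    def __init__(self, store):
        self.store = store  # injected dependency, swappable per environment

    def run(self):
        return sum(amount for _, amount in self.store.fetch_orders())

if __name__ == "__main__":
    report = OrderReport(InMemoryOrderStore([("o-1", 120), ("o-2", 80)]))
    print(report.run())  # 200
```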
A helpful way to think about this is to classify the types of failures you’ve seen so far. Problems that are caused by human beings failing to read something, understand something, or check something are usually improved by minimizing complexity. Problems that are caused by failures in monitoring or testing are usually improved by loosening the coupling (and thereby creating places for automated testing).
When you first take on a legacy system, you can’t possibly understand it well enough to make big changes right away.
When both observability and testing are lacking on your legacy system, observability comes first. Tests tell you only what won’t fail; monitoring tells you what is failing.
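As one concrete illustration of putting observability first (my sketch, not the book's; apply_discount is a hypothetical legacy entry point), wrapping an untested code path in structured logging starts telling you what is actually failing in production before any test suite exists:

```python
import logging
import time

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("legacy.billing")

def observed(fn):
    """Wrap a legacy function so every call reports duration and failures."""
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            log.info("%s ok duration_ms=%.1f", fn.__name__,
                     (time.monotonic() - start) * 1000)
            return result
        except Exception:
            # Failures are logged with a traceback, then re-raised unchanged.
            log.exception("%s failed duration_ms=%.1f", fn.__name__,
                          (time.monotonic() - start) * 1000)
            raise
    return wrapper

@observed
def apply_discount(total, code):  # hypothetical legacy entry point
    rates = {"VIP10": 0.10}
    return total * (1 - rates[code])  # raises KeyError on unknown codes

if __name__ == "__main__":
    print(apply_discount(100.0, "VIP10"))  # 90.0, with a log line per call
```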
Every large-scale legacy system has at least one square peg to contend with. It's impossible to finish the job if all you know how to do is solve for round holes.
If the old system is written in an obsolete technology relevant only to that particular system, the team maintaining the old system is essentially sitting around waiting to be fired. And don’t kid yourself, they know it.
The most relevant guide for legacy modernizations is Michael Feathers’ Working Effectively with Legacy Code.
Although it might seem risky, consider iteration in place to be the default approach. It is most likely to produce successful results in the greatest number of situations.
Good planning is less about controlling every detail and more about setting expectations across the organization.
Your plan should focus on answering the following questions: What problem are we trying to solve by modernizing? What small pragmatic changes will help us learn more about the system? What can we iterate on? How will we spot problems after we deploy changes?
I tell my engineers that the biggest problems we have to solve are not technical problems, but people problems.
The habit of confusing the quality of the outcome with the quality of the decision. In psychology, people call it a self-serving bias.
Success and quality are not necessarily connected.
Scale always involves some luck. You can plan for a certain number of transactions or users, but you can’t really control those factors, especially if you’re building anything that involves the public internet.
I don’t know anyone who can predict how multiple technologies will behave in every potential scale condition, especially not when they are combined.
Whether a service works at its initial scale and then continues to work as it grows is always a mix of skill and luck.
In Moravec’s own words, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Hardware and software interfaces haven't gotten simpler in the last two decades; we've just abstracted away a lot of the annoying differences that once made issues like x86 versus x64 or downloading drivers a normal part of working even casually with computers.
The older a system is, the more likely the platform on which it runs is itself a dependency.
Organizations that think the tools are the solution typically end up with longer, more painful, and more expensive modernizations.
Software renovation is intended to be semi-automatic: the analysis is automatic, but software engineers do the actual work of restructuring the code.
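A toy version of that division of labor (my sketch; the flagged function names are hypothetical): the analysis half is automated with Python's standard ast module, while the restructuring of each flagged call site is left to an engineer:

```python
import ast

# Hypothetical names the migration must retire; a real tool would load
# this list from the excavation and documentation phases.
LEGACY_CALLS = {"fetch_all_rows", "legacy_auth"}

SOURCE = '''
def report():
    rows = fetch_all_rows("orders")
    return len(rows)
'''

def find_legacy_calls(source: str):
    """Automatic analysis: locate call sites that need human attention."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in LEGACY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    # The tool only reports; a person decides how to restructure each site.
    for lineno, name in find_legacy_calls(SOURCE):
        print(f"line {lineno}: replace call to {name}()")
```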
The methodology is what drives the bulk of the impact. The tools themselves are not as important as the phases of excavating, understanding, documenting, and ultimately rewriting and replacing legacy systems. Tools will come and go.
Guidelines:
- Keep it simple. Don't add new problems to solve just because the old system was successful. Success does not mean the old system completely solved its problem; some of those technical decisions were wrong but never caused any problems.
- Spend some time trying to recover context.
- Treat the platform as a dependency and look for coupling that won't transfer easily to a modern platform.
- Tools and automation should supplement human effort, not replace it.
Particularly when the organization is big, there is significant pressure to run projects the same way everyone else does, so that they look correct even at the expense of being successful.
The funny thing about big legacy modernization projects is that technologists suddenly seem drawn to strategies that they know do not work in other contexts.
Few modern software engineers would forgo Agile development to spend months planning exactly what an architecture should look like and try to build a complete product all at once. And yet, when asked to modernize an old system, suddenly everyone is breaking things down into sequential phases that are completely dependent on one another.
What works when rebuilding a system is not all that different from what worked to build it in the first place. You need to keep the scope small, and you need to iterate on your successes.
Assuming you fully understand the requirements because an existing system is operational is a critical mistake.
Existing systems can be a distraction. The software team treats a full-featured reimplementation of the existing system as the MVP, no matter how large or how complex that system actually is. It's simply too much information to manage. People become overwhelmed, and they get discouraged and demoralized.
If all the work is structured around one critical problem that you can measure and monitor, these conversations become much easier.
Legacy modernization projects go better when the individuals contributing to them feel comfortable being autonomous and when they can adapt to challenges and surprises as they present themselves because they understand what the priorities are. The more decisions need to go up to a senior group—be that VPs, enterprise architects, or a CEO—the more delays and bottlenecks appear.
Having a goal means you can define what kind of value you expect the project to add and whom that value will benefit most. Will modernization make things faster for customers? Will it improve scaling so you can sign bigger clients? Will it save people’s lives? Or, will it just mean that someone gets to give a conference talk or write an article about switching from technology A to technology B?
Good modernization work needs to suppress that impulse to create elegant comprehensive architectures up front. You can have your neat and orderly system, but you won’t get it from designing it that way in the beginning. Instead, you’ll build it through iteration.
If you’re thinking about rearchitecting a system and cannot tie the effort back to some kind of business goal, you probably shouldn’t be doing it at all.
The number-one killer of big efforts is not technical failure. It's loss of momentum.
To be successful at those long-term rearchitecting challenges, the team needs to establish a feedback loop that continuously builds on and promotes their track record of success.
Good measurable problems have to be focused on problems that your engineers give a shit about.
Have you ever found yourself in a meeting that felt like it was running around in circles? Meetings where people seemed to be competing to see who could predict the most obscure potential failure? Meetings where past decisions were relitigated and everyone walked away less certain as to what the next steps were?
Facilitating technical conversations is more important than being the decision-maker because unproductive and frustrating meetings demoralize teams.
I can typically sort meeting information into three buckets: things that are true, things that are false, and things that are true but irrelevant. Irrelevant is just a punchier way of saying out of scope.