Kindle Notes & Highlights
Read between February 27 - March 17, 2022
From an economic perspective, there’s a difference between risk and ambiguity.
Artificial consistency means restricting design patterns and solutions to a small pool that can be standardized and repeated throughout the entire architecture in a way that does not provide technical value.
Figuring out when consistency adds technical value and when it is artificial is one of the hardest decisions an engineering team must make. Human beings are pattern-matching machines. The flip side of finding familiar things easier is that we tend to over-optimize for the familiar.
A big red flag is raised for me when people talk about the phases of their modernization plans in terms of which technologies they are going to use rather than what value they will add.
Teams tend to move in the direction they are looking. If we talk about what we’re doing in terms of technical choices, users’ needs get lost.
The terms legacy and technical debt are frequently conflated.
A system cannot have performance issues unless the organization that owns it has defined expectations.
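One way to read that highlight: "slow" only means something once the expectation is written down. A minimal sketch in Python of encoding expectations as service-level objectives (the metric names and thresholds here are hypothetical, invented for illustration):

```python
# Hypothetical SLOs: explicit, agreed-upon performance expectations.
# Until numbers like these exist, "slow" has no defined meaning.
SLOS = {
    "checkout_latency_ms_p99": 500,   # 99th percentile latency budget
    "checkout_error_rate": 0.001,     # at most 0.1% of requests may fail
}

def violates_slo(metric: str, observed: float) -> bool:
    """Return True when an observed measurement breaks the agreed expectation."""
    return observed > SLOS[metric]

# A measured p99 of 720 ms is only a "performance issue" because
# the organization wrote down 500 ms as its expectation.
print(violates_slo("checkout_latency_ms_p99", 720))  # True
```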
If your goal is to reduce failures or minimize security risks, your best bet is to start by evaluating your system on those two characteristics: Where are things tightly coupled, and where are things complex?
Loosening the coupling of two components usually ends with the creation of additional abstraction layers, which raises the complexity of the system. Minimizing the complexity of a system tends to mean more reuse of common components, which tightens coupling.
A helpful way to think about this is to classify the types of failures you’ve seen so far. Problems that are caused by human beings failing to read something, understand something, or check something are usually improved by minimizing complexity. Problems that are caused by failures in monitoring or testing are usually improved by loosening the coupling (and thereby creating places for automated testing).
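To make that trade-off concrete, here is a minimal Python sketch (the names are invented for illustration, not from the book): introducing an abstraction layer loosens the coupling between a report function and its database, creating exactly the kind of seam where automated testing can attach, at the cost of one more layer to understand.

```python
from typing import Protocol

# Tightly coupled version (for contrast): the report code reaches straight
# into the database, so testing it means standing up a real database.
#
# def monthly_report(db):
#     rows = db.execute("SELECT * FROM orders")
#     ...

# Loosened version: an abstraction layer (one more thing to understand,
# i.e. added complexity) that creates a place for automated testing.
class OrderSource(Protocol):
    def fetch_orders(self) -> list[dict]: ...

def monthly_report(source: OrderSource) -> int:
    """Count orders; any OrderSource will do, real or fake."""
    return len(source.fetch_orders())

class FakeOrderSource:
    """Test double made possible by the new layer."""
    def fetch_orders(self) -> list[dict]:
        return [{"id": 1}, {"id": 2}]

assert monthly_report(FakeOrderSource()) == 2
```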
When you first take on a legacy system, you can’t possibly understand it well enough to make big changes right away.
When your legacy system lacks both observability and testing, observability comes first. Tests tell you only what won’t fail; monitoring tells you what is failing.
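In practice, "observability first" can be as modest as wrapping the code paths you don’t yet understand with logging and timing before attempting any refactoring. A hypothetical sketch (the decorator and function names are mine, not the book’s):

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("legacy")

def observed(fn):
    """Log calls, latency, and failures of a legacy function we don't
    yet understand well enough to write meaningful tests for."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        except Exception:
            log.exception("%s failed", fn.__name__)   # what IS failing
            raise
        finally:
            log.info("%s took %.3fs", fn.__name__, time.monotonic() - start)
    return wrapper

@observed
def reconcile_accounts():
    ...  # opaque legacy logic stays untouched for now

reconcile_accounts()
```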
Every large-scale legacy system has at least one square peg to contend with. It’s impossible to finish the job if all you know how to do is solve for round holes.
If the old system is written in an obsolete technology relevant only to that particular system, the team maintaining the old system is essentially sitting around waiting to be fired. And don’t kid yourself, they know it.
The most relevant guide for legacy modernizations is Michael Feathers’ Working Effectively with Legacy Code.
Although it might seem risky, consider iteration in place to be the default approach. It is most likely to produce successful results in the greatest number of situations.
Good planning is less about controlling every detail and more about setting expectations across the organization.
Your plan should focus on answering the following questions: What problem are we trying to solve by modernizing? What small, pragmatic changes will help us learn more about the system? What can we iterate on? How will we spot problems after we deploy changes?
I tell my engineers that the biggest problems we have to solve are not technical problems, but people problems.
There is a habit of confusing the quality of the outcome with the quality of the decision; in psychology, people call it a self-serving bias.
Success and quality are not necessarily connected.
Scale always involves some luck. You can plan for a certain number of transactions or users, but you can’t really control those factors, especially if you’re building anything that involves the public internet.
I don’t know anyone who can predict how multiple technologies will behave in every potential scale condition, especially not when they are combined.
Whether a service works at its initial scale and then continues to work as it grows is always a mix of skill and luck.
In Moravec’s own words, “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Hardware and software interfaces haven’t gotten simpler in the last two decades; we’ve just abstracted away a lot of the annoying differences that once made issues like x86 versus x64 or downloading drivers a normal part of working even casually with computers.
The older a system is, the more likely the platform on which it runs is itself a dependency.
Organizations that think the tools are the solution typically end up with longer, more painful, and more expensive modernizations.
Software renovation is intended to be semi-automatic: the analysis is automatic, but software engineers do the actual work of restructuring the code.
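A toy version of that division of labor, using Python’s standard ast module (hypothetical, meant only to illustrate the split; not a tool the book names): the analysis half automatically flags functions that nothing in the file calls, while deciding whether they are truly dead and restructuring around them remains a human job.

```python
import ast

SOURCE = """
def used(): return 1
def maybe_dead(): return 2
print(used())
"""

tree = ast.parse(SOURCE)
defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
called = {
    n.func.id
    for n in ast.walk(tree)
    if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)
}

# The automatic half: analysis produces candidates.
# The human half: judging and restructuring.
for name in sorted(defined - called):
    print(f"candidate for removal (human judgment required): {name}")
```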
The methodology is what drives the bulk of the impact. The tools themselves are not as important as the phases of excavating, understanding, documenting, and ultimately rewriting and replacing legacy systems. Tools will come and go.
Guidelines:
- Keep it simple. Don’t add new problems to solve just because the old system was successful.
- Success does not mean the old system completely solved its problem. Some of those technical decisions were wrong but never caused any problems.
- Spend some time trying to recover context.
- Treat the platform as a dependency and look for coupling that won’t transfer easily to a modern platform.
- Tools and automation should supplement human effort, not replace it.
Particularly when the organization is big, the pressure to run projects the same way everyone else does, so that they look correct even at the expense of being successful, is significant.
The funny thing about big legacy modernization projects is that technologists suddenly seem drawn to strategies that they know do not work in other contexts.
Few modern software engineers would forgo Agile development to spend months planning exactly what an architecture should look like and try to build a complete product all at once. And yet, when asked to modernize an old system, suddenly everyone is breaking things down into sequential phases that are completely dependent on one another.
What works when rebuilding a system is not all that different from what worked to build it in the first place. You need to keep the scope small, and you need to iterate on your successes.
Assuming you fully understand the requirements because an existing system is operational is a critical mistake.
Existing systems can be a distraction. The software team treats a full-featured reimplementation of the existing system as the MVP, no matter how large or complex that system actually is. It’s simply too much information to manage. People become overwhelmed, discouraged, and demoralized.
If all the work is structured around one critical problem that you can measure and monitor, these conversations become much easier.
Legacy modernization projects go better when the individuals contributing to them feel comfortable being autonomous and when they can adapt to challenges and surprises as they present themselves because they understand what the priorities are. The more decisions need to go up to a senior group—be that VPs, enterprise architects, or a CEO—the more delays and bottlenecks appear.
Having a goal means you can define what kind of value you expect the project to add and whom that value will benefit most. Will modernization make things faster for customers? Will it improve scaling so you can sign bigger clients? Will it save people’s lives? Or, will it just mean that someone gets to give a conference talk or write an article about switching from technology A to technology B?
Good modernization work needs to suppress that impulse to create elegant, comprehensive architectures up front. You can have your neat and orderly system, but you won’t get it by designing it that way in the beginning. Instead, you’ll build it through iteration.
If you’re thinking about rearchitecting a system and cannot tie the effort back to some kind of business goal, you probably shouldn’t be doing it at all.
The number-one killer of big efforts is not technical failure. It’s loss of momentum.
To be successful at those long-term rearchitecting challenges, the team needs to establish a feedback loop that continuously builds on and promotes their track record of success.
Good, measurable problems have to be ones that your engineers give a shit about.
Have you ever found yourself in a meeting that felt like it was running around in circles? Meetings where people seemed to be competing to see who could predict the most obscure potential failure? Meetings where past decisions were relitigated and everyone walked away less certain as to what the next steps were?
Facilitating technical conversations is more important than being the decision-maker because unproductive and frustrating meetings demoralize teams.
I can typically sort meeting information into three buckets: things that are true, things that are false, and things that are true but irrelevant. Irrelevant is just a punchier way of saying out of scope.