Kindle Notes & Highlights
Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.
Restoring legacy systems to operational excellence is ultimately about resuscitating an iterative development process, so that the systems are maintained and continue to evolve as time goes on.
Like pottery sherds, old computer programs are artifacts of human thought. There’s so much you can tell about an organization’s past by looking at its code.
To understand legacy systems, you have to be able to define how the original requirements were determined. You have to excavate an entire thought process and figure out what the trade-offs look like now that the options are different.
It is easy to build things, but it is difficult to rethink them once they are in place.
Legacy modernizations are hard not because they are technically hard; the problems and the solutions are usually well understood. It is the people side of the modernization effort that is hard.
In technology, “good enough” reigns supreme.
The first mistake software engineers make with legacy modernization is assuming technical advancement is linear.
Alignable differences are those for which the consumer has a reference point. For example, this car is faster than that car, or this phone has a better camera than that phone. Nonalignable differences are characteristics that are wholly unique and innovative; there are no reference points with which to compare. You might assume that nonalignable differences are more appealing to potential consumers. After all, there’s no competition! You’re doing things differently from everyone else.
Adopting new practices doesn’t necessarily make technology better, but doing so almost always makes technology more complicated, and more complicated technology is hard to maintain and ultimately more prone to failure.
So, the takeaway from understanding that technology advances in cycles isn’t that upgrades are easier the longer you wait; it’s that you should avoid upgrading to new technology simply because it’s new.
Amazingly, an 80-column width is the size of the old mainframe punch cards that were used to input both data and programs into the room-sized computers built during the 1950s and 1960s. So right now, solidly in the 21st century, programmers are enforcing a standard developed for machines most of them have never even seen, let alone programmed.
Computers wouldn’t be developed to work with monitors until 1964, when Bell Labs incorporated the first primitive visual interface into the Multics time-sharing system. We had no way of seeing the input the computer was receiving, so we borrowed an interface from the telegraph, which, in turn, was borrowing one from 18th-century French weavers.
Truly new systems often cannibalize the interfaces of older systems to create alignable differences.
This is why maintaining technology long term is so difficult. Although blindly jumping onto new things for the sake of their newness is dangerous, not keeping up to date is also dangerous. As technology advances, it collects more and more interfaces and patterns. It absorbs them from other fields, and it holds on to historic elements that no longer make sense. It builds assumptions around the most deeply buried characteristics. Keep your systems the way they are for too long, and you get caught trying to migrate decades of assumptions.
In the 1960s, psychologist Robert Zajonc conducted a series of experiments documenting how even a single exposure to something increased positive feelings about it in later encounters. He found this effect with languages, individual words, and images. Later researchers have observed similar preferences in how financial professionals invest,3 how academic researchers evaluate journals,4 and what flavors we enjoy when we eat.5 In psychology, the term for this is the mere-exposure effect. Simply being exposed to a concept makes it easier for the brain to process that concept and, therefore, feels …
The lesson to learn here is that systems that feel familiar to people always provide more value than systems that have structural elegance but run contrary to expectations.
Engineers tend to overestimate the value of order and neatness. The only thing that really matters with a computer system is its effectiveness at performing its practical application.
There’s a point where familiarity breeds contempt.
Risks are known and estimable threats; ambiguities are places where outcomes both positive and negative are unknown. The traditional school of thought tells us that human beings are averse to ambiguity and will avoid it as much as possible. However, ambiguity aversion is one of those decision-making models that test well in laboratories but break down when brought into the real world, where decisions are more complex and probabilities less clearly defined. Specifically, when the decision involves multiple attributes, a positive framing of the problem can flip people’s behavior from …
We know that past the upper bound of mere exposure, once people find a characteristic they do not like, they tend to judge every characteristic discovered after that more negatively.17 So programmers prefer full rewrites over iterating on legacy systems because rewrites maintain an attractive level of ambiguity, while the existing systems are well known and, therefore, boring. It’s no accident that proposals for full rewrites tend to include introducing some language, design pattern, or technology that is new to the engineering team. Very few rewrite plans take the form of redesigning the system …
A big red flag is raised for me when people talk about the phases of their modernization plans in terms of which technologies they are going to use rather than what value they will add.
A system cannot have performance issues unless the organization that owns it has defined expectations.
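A practical way to read that line: a performance expectation only exists once it is written down as a concrete, checkable threshold. Below is a minimal sketch of that idea (my illustration, not the book’s); the service name and numbers are hypothetical.

```python
# Hypothetical sketch: an expectation becomes real when it is a
# concrete, checkable threshold rather than a vague feeling.
from dataclasses import dataclass

@dataclass(frozen=True)
class LatencyExpectation:
    service: str
    percentile: float   # e.g., 0.95 for p95
    max_millis: float   # the agreed-upon ceiling

    def is_violated(self, observed_millis: float) -> bool:
        return observed_millis > self.max_millis

# Until a line like this exists somewhere, "the system is slow" is
# an opinion, not a performance issue.
CHECKOUT_P95 = LatencyExpectation(service="checkout", percentile=0.95, max_millis=250)

print(CHECKOUT_P95.is_violated(310.0))  # True: now it's a defect, not a feeling
```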
No changes made to existing systems are free. Changes that improve one characteristic of a system often make something else harder.
We had a system where multiple services needed access to a giant unstructured data store. The data had grown to a size where deleting some of it from the data store was such a resource-intensive process that it affected the performance of normal reads and writes.
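The highlight doesn’t say how the team resolved it, but a common shape for this kind of mitigation is to spread the expensive operation out so it competes less with live traffic. A minimal sketch, assuming a store client with a bulk-delete method; the `delete_many` interface is invented for illustration.

```python
# Hypothetical sketch: delete in small, throttled batches so cleanup
# competes less with normal reads and writes.
import time

def throttled_delete(store, keys, batch_size=500, pause_seconds=0.1):
    """Delete `keys` from `store` in batches, pausing between batches."""
    for start in range(0, len(keys), batch_size):
        batch = keys[start:start + batch_size]
        store.delete_many(batch)      # assumed bulk-delete method on the client
        time.sleep(pause_seconds)     # yield capacity back to live traffic
```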
Large problems are always tackled by breaking them down into smaller problems. Solve enough small problems, and eventually the large problem collapses and can be resolved.
In 1984, Charles Perrow coined the term normal accidents to describe systems so prone to failure that no amount of safety procedures could eliminate accidents entirely. According to Perrow, normal accidents are not the product of bad technology or incompetent staff.
When both observability and testing are lacking on your legacy system, observability comes first. Tests tell you only what won’t fail; monitoring tells you what is failing.
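One lightweight way to act on that ordering is to instrument before you test. A minimal sketch (my illustration, not the book’s), using a decorator, with the standard logging module standing in for a real metrics client:

```python
# Hypothetical sketch: before writing tests for an opaque legacy function,
# wrap it with minimal observability so you can see what it actually does.
import functools
import logging
import time

log = logging.getLogger("legacy")

def observed(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            log.info("%s ok in %.3fs", fn.__name__, time.monotonic() - start)
            return result
        except Exception:
            log.exception("%s failed after %.3fs", fn.__name__, time.monotonic() - start)
            raise
    return wrapper
```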
The only real rule of modernizing legacy systems is that there are no silver bullets.
The older a system is, the more likely the platform on which it runs is itself a dependency. Most modernization projects do not think about the platform this way and, therefore, leave the issue as an unpleasant surprise to be discovered later.
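One way to keep the platform from becoming an unpleasant surprise is to declare it as an explicit dependency and fail fast on drift. A minimal sketch; the pinned version is illustrative only.

```python
# Hypothetical sketch: treat the platform itself as a declared dependency
# by failing fast when the runtime drifts from what the system was
# validated against.
import sys

SUPPORTED = (3, 10)  # the runtime this system is known to work on (illustrative)

if sys.version_info[:2] != SUPPORTED:
    raise RuntimeError(
        f"Validated on Python {SUPPORTED[0]}.{SUPPORTED[1]}, "
        f"running on {sys.version_info[0]}.{sys.version_info[1]}; "
        "treat the platform as a dependency and test before upgrading."
    )
```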