Kindle Notes & Highlights
When we study a complex system, it’s beneficial to consider how its behavior differs at different scales. Looking at the micro level may mislead us about the macro, and vice versa. In general, systems become more complex as they scale up: greater size means more connections and interdependencies between parts. It’s therefore useful to consider scale alongside a system’s bottlenecks.
If you do not look at things on a large scale, it will be difficult to master strategy. » Miyamoto Musashi
Things will always be different as a system scales: a collection of teams within a company will never be able to communicate the way a small company does. The larger the company grows, the more work it takes to ensure information flows to the right places.
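The nonlinear growth in coordination work can be made concrete with a back-of-the-envelope calculation (an illustration, not an example from the book): the number of possible one-to-one communication channels among n people is n(n-1)/2, so it grows roughly with the square of headcount. A minimal sketch in Python:

```python
# Pairwise communication channels among n people: n * (n - 1) / 2.
# Doubling headcount roughly quadruples the channels to maintain.

def channels(n: int) -> int:
    """Number of one-to-one channels in a group of n people."""
    return n * (n - 1) // 2

for team_size in (5, 50, 500):
    print(f"{team_size:>3} people -> {channels(team_size):>7,} channels")
# 5 people   ->      10 channels
# 50 people  ->   1,225 channels
# 500 people -> 124,750 channels
```

Ten times the people means over a hundred times the possible channels, which is why information flow in a large company must be engineered rather than left to happen on its own.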
As changes to the system are implemented in response to growth, the question always is: How will this system fare in the next year? Ten years? A hundred years? In other words, how well will it age?
As growth occurs, resilience can be increased by keeping a measure of independence between parts of a system. Dependencies tend to age poorly because they rely on every one of their dependencies aging well.
Scaling up from the small to the large is often accompanied by an evolution from simplicity to complexity, while the basic elements or building blocks of the system remain unchanged.
Understanding that systems can scale nonlinearly is useful because it helps us appreciate how much a system can change as it grows.
Jane Brox writes in Brilliant that “gaslight divided light—and life—from its singular, self-reliant past. All was now interconnected, contingent, and intricate.”19 People’s homes became part of a larger system.
Artificial light increased the scale of what we could see at night and thus opened up new businesses and new ways of conducting one’s day. Festivities and holiday celebrations began to move later and later into the evening.
Artificial light changed the scale at which human activities can happen. In many ways, the limits of our lights are the limits of our world. There are still places where we lack the means to eradicate darkness, such as outer space and the deepest parts of the oceans.
When you scale up a system, the problems you solved at the smaller scale often need solving again at a larger scale. In addition, you end up with unanticipated possibilities and outcomes. As the scale increases, so does its impact on other systems.
There are often new impacts and requirements as the system develops…
A more interconnected, larger system may be able to handle variations better, but it may also be vulnerable to widespread failures. Increasing the scale of a system might mean using new materials or incorporating methods like the ones that worked on a smaller scale. It might mean rethinking your whole approach.
Systems change as they scale up or down, and neither is intrinsically better or worse. The right scale depends on your goals and the context. If you want to scale something up, you need to anticipate that new problems will keep arising—problems that didn’t exist at a smaller scale. Or you might need to keep solving the same problems in different ways.
When we interact with complex systems, we need to expect the unexpected. Systems do not always function as anticipated. They are subject to variable conditions and can respond to inputs in nonlinear ways. A margin of safety is often necessary to ensure systems can handle stressors and unpredictable circumstances. This means there is a meaningful gap between what a system is capable of handling and what it is required to handle. A margin of safety is a buffer between safety and danger, order and chaos, success and failure. It ensures a system does not swing from one to the other too easily…
This world of ours appears to be separated by a slight and precarious margin of safety from a most singular and unexpected danger. » Arthur Conan Doyle1
Engineers know to design for extremes, not averages. In engineering, it’s necessary to consider the most something might need to…
…many more than 5,000 cars cross it in a day. A large margin of safety doesn’t eliminate the possibility of failure, but it reduces it.
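That buffer can be expressed as a simple ratio of what a design can take to what it is expected to face. A minimal sketch, with illustrative numbers that are not from the book:

```python
# Engineering safety factor: design capacity divided by the worst
# load you expect. All figures below are hypothetical.

def safety_factor(design_capacity: float, worst_expected_load: float) -> float:
    """How many times the worst expected load the design can absorb."""
    return design_capacity / worst_expected_load

# A bridge expected to see at most 5,000 cars a day, built for 15,000:
print(safety_factor(15_000, 5_000))  # 3.0, i.e. a 3x buffer
```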
For investors, a margin of safety is the gap between an investment vehicle’s intrinsic value and its price. The higher the margin of safety, the safer the investment and the greater the potential profit. Since intrinsic value is subjective, it’s best this buffer be as large as possible to account for uncertainty.
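One common way to quantify that gap is as the discount of price to estimated intrinsic value; the figures below are hypothetical:

```python
# Value-investing margin of safety: the fraction of estimated intrinsic
# value you are not paying for. Intrinsic value is itself an estimate,
# which is why a larger buffer is safer.

def margin_of_safety(intrinsic_value: float, price: float) -> float:
    return (intrinsic_value - price) / intrinsic_value

# A stock you estimate is worth $100, trading at $60:
print(f"{margin_of_safety(100.0, 60.0):.0%}")  # 40%
```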
When calculating the ideal margin of safety, we always need to consider how high the stakes are. The greater the cost of failure, the bigger the buffer should be.
A system can’t keep working indefinitely without anything breaking down. A system without backups is unlikely to function for long.
If you’re going hiking in the wilderness alone, you might want more than one communication method. You’re safer in an airplane than in a car, in part because airplanes have so much backup; after all, the cost of failure is higher.
Margins of safety sometimes create perverse incentives. If we change our behavior in response to the knowledge that we have a margin of safety in place, we may end up reducing or negating its benefits.
There is a difference between what’s uncomfortable and what ruins you. Most systems can be down for an hour. Our bodies can go without food or water for days. Most businesses can do without revenue for a little while. But too much margin of safety can be a waste of resources and sow the seeds of becoming uncompetitive.
The more we learn, the fewer blind spots we have. And blind spots are the source of all mistakes. While learning more than we need to get the job done can appear inefficient, the corresponding reduction in blind spots offers a margin of safety. Knowledge allows us to adapt to changing situations.
“over time, I learned how to anticipate problems in order to prevent them, and how to respond effectively in critical situations.”
the ability to parse and solve complex problems rapidly, with incomplete information in a hostile environment—was not something any of us had been born with. But by this point we all had it. We’d developed it on the job.”
Our ego gets in the way of capitalizing on the margin of safety produced by knowing more than we need to. Often we learn enough to solve today’s problem but not enough to solve tomorrow’s. Knowing only the minimum leaves no margin of safety in what we know.
Life will throw challenges at you that require capabilities outside your natural strengths. The only way to be ready is, first, to build as vast a repertoire of knowledge as you can in anticipation of the possibilities you might face, and second, to cultivate the ability to know what is relevant and useful.
“truly being ready means understanding what could go wrong and having a plan to deal with it.”
The professionals plan for “mild randomness” and misunderstand “wild randomness.” They learn from the averages and overlook the outliers. Thus they consistently, predictably, underestimate catastrophic risk.
We cannot have a backup plan for everything. We do too much in a day or a year to devote the resources necessary to plan for dealing with disaster in all of our endeavors. However, when the stakes are high, it is worth investing in a comprehensive margin of safety. Extreme events require extreme preparation.
“To lead is to anticipate” was the motto of Jacques Jaujard*, director of the French National Museums during World War II.
Jaujard’s experiences taught him it was best to move Paris’s treasures away if there was any risk whatsoever of attack.15 That way, no matter what, France could hold on to a piece of its pride knowing part of its culture was safe.
Jaujard’s removal of artwork from Paris during the war teaches us the importance of building in a significant margin of safety when the risk of failure is high. The future is seldom predictable, so the greater the threat, the more important it is to plan for the worst.
Broad competence seems very costly compared to specialization, but it is more likely to save us in the outlier situations of life. Efficiency is good for small tasks where failure has little consequence, but life is not exclusively filled with minor challenges and minimal consequences. We are all going to face extreme events where failure is disastrous.
A margin of safety can be an excellent buffer against the unexpected, giving us time to effectively adapt.
Since churn of some sort is inevitable in all systems, it’s useful to ask how we can use it to our benefit. Is it worth going through contortions to keep every customer, or should we let a certain percentage go and focus on the core customers who keep our business going?
Understanding your situation through the lens of churn can help you figure out how to harness the dynamics that drive it.
When we have a customer retention rate of 90%, we may think we’re doing great. But if a competitor retains 95% of their customers, that 5% difference compounds over time: they grow faster, and we have to work a lot harder to keep up.
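The compounding is easy to underestimate. A minimal sketch with hypothetical numbers: a fixed base of 10,000 customers, no new acquisition, and retention applied annually:

```python
# How a small retention gap compounds year over year.

def remaining(customers: int, retention: float, years: int) -> float:
    """Customers left after `years` of compounding churn."""
    return customers * retention ** years

start = 10_000
for years in (1, 5, 10):
    us = remaining(start, 0.90, years)
    them = remaining(start, 0.95, years)
    print(f"year {years:>2}: us {us:>7,.0f} vs competitor {them:>7,.0f}")
# year 10: us ~3,487 vs competitor ~5,987. The five-point gap leaves
# the competitor with roughly 72% more of the original base.
```

Every lost customer must be replaced just to stand still, which is why it matters which customers you fight to keep.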
Algorithms turn inputs into outputs. One reason they are worth understanding is that many systems adjust and respond based on the information algorithms provide.
Another reason is that they can help systems scale. Once you identify a set of steps that solve a particular problem, you don’t need to start from scratch every time.
“Algorithm” is arguably the single most important concept in our world. If we want to understand our life and our future, we should make every effort to understand what an algorithm is, and how algorithms are connected with emotions. An algorithm is a methodical set of steps that can be used to make calculations, resolve problems, and reach decisions. An algorithm isn’t a particular calculation, but the method followed when making the calculation. » Yuval Noah Harari1
Algorithms are useful partly because of the inherent predictability of their process. That’s why we like them. We can think of algorithms as a series of if–then statements that are completely unambiguous.
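To make that concrete, here is a classic example (my illustration; the book does not use it): Euclid’s algorithm for the greatest common divisor. It is a methodical set of steps in Harari’s sense, and every step is an unambiguous if–then:

```python
# Euclid's algorithm: a method, not a particular calculation.

def gcd(a: int, b: int) -> int:
    while b != 0:        # if b is not zero, then keep reducing
        a, b = b, a % b  # replace (a, b) with (b, a mod b)
    return a

print(gcd(1071, 462))  # 21; the same steps work for any pair of integers
```

Once written down, the procedure never needs to be reinvented, which is also why algorithms help systems scale.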
In Intuition Pumps and Other Tools for Thinking, Daniel Dennett* defines an algorithm as “a certain sort of formal process that can be counted on—logically—to yield a certain sort…
Dennett identifies three defining characteristics of algorithms:

Substrate neutrality: “The power of the procedure is due to its logical structure, not the causal powers of the materials used in the instantiation.”3 It doesn’t matter whether you read your recipe on a phone or in a book; neither has any impact on the logic of the algorithm.

Underlying mindlessness: “Each constituent step, and the transition between steps, is utterly simple.”4 For a recipe to be an algorithm, it must tell you the amounts of each ingredient you need as well as walk you through the process in steps so clear that there is no room for…
We can think of human learning as the product of biological algorithms.
Moving beyond computers, all systems need algorithms to function: sets of instructions for adapting to and solving problems. Increasingly, algorithms are designed to be directionally correct rather than perfect.
They often evolve—or are designed—to get useful and relevant enough outputs to keep the system functioning properly.
When groups of people work together with a shared goal, they need coherent algorithms for turning their inputs into their desired outputs in a repeatable fashion. For many people to move toward the same aim, they must know how to act, how to resolve problems, and how to make decisions in a consistent and reliable manner.