Kindle Notes & Highlights
Read between October 14, 2019 and January 27, 2020
One of Watts’s projects has been to verify Milgram’s six degrees of separation result using huge amounts of data on e-mail messages sent between people so as to determine how many links are needed to connect any arbitrary two of them. This was important because Milgram’s work, which was based on traditional letters sent via the regular postal service, had been heavily criticized for its relative sparseness of data and lack of systematic control.
origins and providing possibly new insights and answers. The provocative work of Milgram and Zimbardo strongly suggests that the conundrum as to why good people can do very bad things originates in peer pressure situations, fear of rejection, and a desire to be part of a group where power and control are conferred on individuals by authority.
we are continually bombarded with so many sights, so many sounds, so many “happenings,” and so many other people at such a high rate that we are simply unable to process the entire barrage of sensory information. If we tried to respond to every stimulus, our cognitive and psychological circuitry would break down and, in a word, we would blow a fuse just like an overloaded electrical circuit. And sadly, some of us do. Milgram suggested that the kinds of “antisocial” behaviors we perceive and experience in large cities are in fact adaptive responses for coping with the sensory onslaught of city
…
You will notice that the numbers quantifying the magnitude of these successive levels of the group hierarchy—5, 15, 50, 150—are sequentially related to each other by a roughly constant scaling factor of about three.
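A quick arithmetic check of that claim, sketched in Python (the group sizes are the ones quoted above; everything else is illustration only):

group_sizes = [5, 15, 50, 150]   # successive levels of the group hierarchy quoted above
ratios = [b / a for a, b in zip(group_sizes, group_sizes[1:])]
print(ratios)   # [3.0, 3.33..., 3.0]: each level is roughly three times the one below it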
This presumed connection between brain size and the ability to form social groups is called the social brain hypothesis.
This is reminiscent of the ubiquitous generality of the “bell-curve” distribution used for describing statistical variations around some average value. Technically this is called a Gaussian or normal distribution and arises mathematically whenever the events or entities in question, whatever they are, are randomly distributed, uncorrelated, and independent of one another. So, for example, the average height of men in the United States is about five feet ten inches (1.77 meters) and the frequency distribution of their heights around this mean value—that is, how many men there are of a given
…
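To make the bell curve concrete, here is a minimal sketch of a normal distribution centered on the quoted mean of 1.77 meters; the spread of about 7 centimeters is an assumption for illustration, not a figure from the text:

import math

def normal_pdf(x, mu, sigma):
    # Gaussian (normal) density describing variation around the mean value mu
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 1.77, 0.07   # mean height in meters (from the text) and an assumed spread
for height in (1.63, 1.70, 1.77, 1.84, 1.91):
    print(f"{height:.2f} m -> relative frequency {normal_pdf(height, mu, sigma):.2f}")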
composite risk index, which is defined as the impact of the risk event multiplied by the probability of its occurrence.
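As stated, the definition is a single multiplication; a tiny sketch with made-up numbers:

impact = 8.0        # hypothetical severity of the risk event, on whatever scale is agreed
probability = 0.25  # hypothetical probability that the event occurs
composite_risk_index = impact * probability
print(composite_risk_index)   # 2.0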
Employment, wealth creation, innovation and ideas, the spread of infectious diseases, health care, crime, policing, education, entertainment, and indeed, all of the pursuits that characterize modern Homo sapiens and are emblematic of urban life are sustained and generated by the continual exchange of information, goods, and money between people.
The job of the city is to facilitate and enhance this process by providing the appropriate infrastructure such as parks, restaurants, cafés, sports stadiums, cinemas, theaters, public squares, plazas, office buildings, and meeting halls to encourage and increase social connectivity.
The systematic increase in social interaction is the essential driver of socioeconomic activity in cities: wealth creation, innovation, violent crime, and a greater sense of buzz and opportunity are all propagated and enhanced through social networks and greater interpersonal interaction.
Just as raising the temperature of a gas or liquid increases the rate and number of collisions between molecules, so increasing the size of a city increases the rate and number of interactions between its citizens.
the sublinearity of infrastructure and energy use is the exact inverse of the superlinearity of socioeconomic activity. Consequently, to the same 15 percent degree, the bigger the city the more each person earns, creates, innovates, and interacts—and the more each person experiences crime, disease, entertainment, and opportunity—and all of this at a cost that requires less infrastructure and energy for each of them. This is the genius of the city. No wonder so many people are drawn to them.
Knowing the inverse linkage between these two different kinds of networks, it should come as no great surprise that precisely the opposite behavior arises in social networks. Rather than the pace of life systematically decreasing with size, the superlinear dynamics of social networks leads to a systematic increase in the pace of life: diseases spread faster, businesses are born and die more often, commerce is transacted more rapidly, and people even walk faster, all following the 15 percent rule. This is the underlying scientific reason why we all sense that life is faster in a New York City
…
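The scaling behavior described in the last few highlights can be written as a power law Y = Y0 * N**beta, with beta near 1.15 for socioeconomic quantities and near 0.85 for infrastructure, which is where the "15 percent" figure comes from. A minimal sketch (the prefactor and city sizes are placeholders) of what doubling a city's population then implies:

def scaled_value(y0, population, beta):
    # Generic urban power law: Y = Y0 * N**beta
    return y0 * population ** beta

small, large = 1_000_000, 2_000_000   # placeholder city sizes
for beta, label in [(1.15, "socioeconomic (superlinear)"),
                    (0.85, "infrastructure (sublinear)")]:
    total_ratio = scaled_value(1.0, large, beta) / scaled_value(1.0, small, beta)
    per_capita = total_ratio / 2
    print(f"{label}: total x{total_ratio:.2f}, per capita x{per_capita:.2f}")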
Zahavi discovered the surprising result that the total amount of time an average individual spends on travel each day is approximately the same regardless of the city size or the mode of transportation. Apparently, we tend to spend about an hour each day traveling, whoever and wherever we are. Roughly speaking, the average commute time from home to work is about half an hour each way independent of the city or means of transportation.
The conclusion is clear: the size of cities has to some degree been determined by the efficiency of their transportation systems for delivering people to their workplaces in not much more than half an hour’s time.
This surprising observation, that communal human beings have spent an approximately invariant hour traveling each day, whether they lived in ancient Rome, a medieval town, a Greek village, or twentieth-century New York, has become known as Marchetti’s constant, even though it was originally discovered by Zahavi. As a rough guide it clearly has important implications for the design and structure of cities. As planners begin to design green carless communities and as more cities ban automobiles from their centers, understanding and implementing the implied constraints of Marchetti’s
…
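A back-of-the-envelope sketch of the constraint implied by Marchetti’s constant: if the one-way travel budget is about half an hour, a city’s effective radius is roughly the distance its dominant transport mode covers in that time. The speeds below are illustrative assumptions, not figures from the text:

commute_hours = 0.5   # roughly half an hour each way, as noted above
modes_kmh = {"walking": 5, "bicycle": 15, "bus or streetcar": 20, "car or metro": 40}  # assumed speeds
for mode, speed in modes_kmh.items():
    print(f"{mode}: effective city radius of roughly {speed * commute_hours:.0f} km")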
Like geology and for that matter the social sciences, astronomy is a historical science in that we can test our theories only by making postdictions for what should have happened according to the equations and narratives of our theories, and then searching in the appropriate place for their verification.
The size of an average individual’s modular cluster of acquaintances who interact with one another is an approximate invariant—it doesn’t change with city size.
In assessing the performance of a particular city, we therefore need to determine how well it performs relative to what it has accomplished just because of its population size. By analogy with the discussion on determining the strongest champion weight lifter by measuring how much each deviated from his expected performance relative to the idealized scaling of body strength, one can quantify an individual city’s performance by how much its various metrics deviate from their expected values relative to the idealized scaling laws.
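A hedged sketch of that procedure: fit the idealized scaling law across many cities as a straight line in log-log space, then score each city by how far it sits above or below the fitted line. The numbers below are invented purely to show the mechanics:

import numpy as np

population = np.array([1e5, 5e5, 1e6, 5e6, 1e7])            # hypothetical city sizes
metric     = np.array([2.1e3, 1.3e4, 2.9e4, 1.8e5, 4.5e5])  # hypothetical urban metric

# Fit Y = Y0 * N**beta as a straight line in log-log space
beta, log_y0 = np.polyfit(np.log(population), np.log(metric), 1)

# Performance is the deviation (residual) from the value expected for each city's size
expected = np.exp(log_y0) * population ** beta
performance = np.log(metric / expected)
print(f"fitted exponent beta = {beta:.2f}")
print("scale-adjusted performance:", np.round(performance, 3))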
Roughly speaking, all cities rise and fall together, or to put it bluntly: if a city was doing well in 1960 it’s likely to be doing well now, and if it was crappy then, it’s likely to be crappy still. Once a city has gained an advantage, or disadvantage, relative to its scaling expectation, this tends to be preserved over decades. In this sense, either for good or for bad, cities are remarkably robust and resilient—they are hard to change and almost impossible to kill.
For instance, the total number of establishments in each city regardless of what business they conduct turns out to be linearly proportional to its population size. Double the size of a city and on average you’ll find twice as many businesses.
For instance, at the coarsest level of the NAICS classification scheme traditional sectors such as agriculture, mining, and utilities scale sublinearly; the theory predicts that the rankings and relative abundances of these industries decrease as cities get larger. On the other hand, informational and service businesses such as professional, scientific, and technical services, and management of companies and enterprises, scale superlinearly and are consequently predicted to increase disproportionally with city size, as observed. As a concrete example, consider the number of lawyers’ offices.
…
A major theme running throughout this book is that nothing grows without the input and transformation of energy and resources.
On the supply side, metabolic rate in organisms scales sublinearly with the number of cells (following the generic ¾ power exponent derived from network constraints) while the demand increases approximately linearly. So as the organism increases in size, demand eventually outstrips supply because linear scaling grows faster than sublinear, with the consequence that the amount of energy available for growth continuously decreases, eventually going to zero resulting in the cessation of growth. In other words, growth stops because of the mismatch between the way maintenance and supply scale as
…
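A numerical sketch of the growth logic described above, with supply scaling as the ¾ power of mass and maintenance scaling linearly; the coefficients and time step are arbitrary illustrative choices:

a, b = 1.0, 0.1     # illustrative supply and maintenance coefficients
m, dt = 1.0, 0.01   # starting mass and integration time step
for _ in range(20000):                       # integrate dm/dt = a*m**0.75 - b*m
    m += (a * m ** 0.75 - b * m) * dt
print(f"mass after integrating: {m:.0f}")
print(f"predicted asymptotic mass (a/b)**4 = {(a / b) ** 4:.0f}")
# growth tails off as the linear maintenance term catches up with the sublinear supply term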
Historically, companies have been viewed as the necessary agents that organize people to work collaboratively to take advantage of economies of scale, thereby reducing the transaction costs of production or services between the manufacturer or provider and the consumer. The drive to minimize costs so as to maximize profits and gain greater market share has been extraordinarily successful in creating the modern market economy by providing goods and services at affordable prices to vast numbers of people.
Although binning is not a rigorous mathematical procedure, the stability of obtaining approximately the same straight-line fit using different resolutions lends strong support to the hypothesis that on average companies are self-similar and satisfy power law scaling.
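A sketch of the binning idea, using synthetic data generated with a known exponent so the recovered slope can be checked; none of these numbers come from the Compustat data:

import numpy as np

rng = np.random.default_rng(0)
# Synthetic company data: a metric that scales as size**0.9 with multiplicative noise
size = 10 ** rng.uniform(1, 6, 5000)                        # employees, say
metric = 3.0 * size ** 0.9 * rng.lognormal(0, 0.3, 5000)    # sales, say

# Logarithmic bins over company size; average the metric inside each bin
bins = np.logspace(1, 6, 11)
idx = np.digitize(size, bins)
centers, means = [], []
for i in range(1, len(bins)):
    mask = idx == i
    if mask.any():
        centers.append(np.sqrt(bins[i - 1] * bins[i]))
        means.append(metric[mask].mean())

slope, intercept = np.polyfit(np.log(centers), np.log(means), 1)
print(f"fitted scaling exponent ~ {slope:.2f}")   # close to the 0.9 used to generate the data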
A crucial aspect of the scaling of companies is that many of their key metrics scale sublinearly like organisms rather than superlinearly like cities.
As we’ve seen, growth in both organisms and cities is fueled by the difference between metabolism and maintenance.
Of the 28,853 companies that have traded on U.S. markets since 1950, 22,469 (78 percent) had died by 2009. Of these, 45 percent were acquired by or merged with other companies, while only about 9 percent went bankrupt or were liquidated; 3 percent were privatized, 0.5 percent underwent leveraged buyouts, 0.5 percent went through reverse acquisitions, and the remainder disappeared for “other reasons.”
no matter which sector or what the stated cause is, only about half of the companies survive for more than ten years.
the risk of a company’s dying does not depend on its age or size.
The half-life of U.S. publicly traded companies was found to be close to 10.5 years, meaning that half of all companies that began trading in any given year have disappeared in 10.5 years.
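Taken together with the preceding highlight, an age-independent risk of death means a constant hazard rate, and the 10.5-year half-life pins down its value. A short sketch (the half-life is the figure quoted above; the survival curve is the standard exponential implied by a constant hazard):

import math

half_life = 10.5                   # years, from the text
hazard = math.log(2) / half_life   # constant yearly risk of death implied by age independence
for years in (5, 10.5, 21, 30, 50):
    print(f"after {years:>4} years: {math.exp(-hazard * years):.0%} still trading")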
Nevertheless, we can learn something instructive about the aging of companies from the general characteristics of these very long-lived outliers. Most of them are of relatively modest size, operating in highly specialized niche markets, such as ancient inns, wineries, breweries, confectioners, restaurants, and the like.
Compustat data set and the S&P and Fortune 500 lists. In contrast to most of these, these outliers have survived not by diversifying or innovating but by continuing to produce a perceived high-quality product for a small, dedicated clientele. Many have maintained their viability through reputation and consistency and have barely grown. Interestingly, most of them are Japanese. According to the Bank of Korea, of the 5,586 companies that were more than two hundred years old in 2008, over half (3,146 to be precise) were Japanese, 837 German, 222 Dutch, and 196 French. Furthermore, 90 percent of
…
Nevertheless, from analyzing the Compustat data set we found that the relative amount allocated to R&D systematically decreases as company size increases, suggesting that support for innovation does not keep up with bureaucratic and administrative expenses as companies expand.
finite time singularity, which is a signal of inevitable change, and possibly of potential trouble ahead. A finite time singularity simply means that the mathematical solution to the growth equation governing whatever is being considered—the population, the GDP, the number of patents, et cetera—becomes infinitely large at some finite time, as illustrated in Figure 76. This is obviously impossible, and that’s why something has to change.
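A sketch of why “infinitely large at some finite time” follows from superlinear growth: if a quantity obeys dN/dt = a * N**beta with beta greater than 1, the solution blows up at t_c = N0**(1 - beta) / (a * (beta - 1)). The rate, exponent, and starting value below are illustrative only:

a, beta, n0 = 0.001, 1.15, 1_000_000        # illustrative growth rate, exponent, starting size
t_c = n0 ** (1 - beta) / (a * (beta - 1))   # time at which N(t) becomes infinite
print(f"finite-time singularity reached after roughly {t_c:.0f} time units")
# with beta <= 1 there is no blow-up: the solution stays finite for all finite times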
Thus, to avoid collapse a new innovation must be initiated that resets the clock, allowing growth to continue and the impending singularity to be avoided.
This can be restated as a sort of “theorem”: to sustain open-ended growth in light of resource limitation requires continuous cycles of paradigm-shifting innovations,
The general concept of a singularity plays an important role in mathematics and theoretical physics. A singularity is a point at which a mathematical function is no longer “well behaved” in some specific way, such as becoming infinite in the manner I have been discussing. Defining how to tame such singularities stimulated enormous progress in nineteenth-century mathematics, and this subsequently had a huge impact on theoretical physics. Its most famous popular consequence was the concept of black holes, which arose from trying to understand the singularity structure of Einstein’s theory of
…
Maxwell’s unification of electricity and magnetism, which brought the ephemeral ether into our lives and gave us electromagnetic waves;
Data are good and more data are even better—this is the creed that most of us take for granted, especially those of us who are scientists. But this belief is implicitly based on the idea that more data lead to a deeper understanding of underlying mechanisms and principles so that credible predictions and further progress in constructing models and theories can be built upon a firm foundation subject to continual testing and refinement.
If we are not to “drown in a sea of data” we need a “theoretical framework with which to understand it . . . and a firm grasp on the nature of the objects we study to predict the rest.” One final point: The IT revolution is