Kindle Notes & Highlights
by Ray Kurzweil
Read between June 29 and December 24, 2024
At the start of the twentieth century, electrification in the United States was limited mainly to large urban areas.[72] The pace of electrification slowed around the start of the Great Depression, but during the 1930s and 1940s, President Franklin D. Roosevelt championed massive rural electrification programs, which aimed to bring the efficiency of electric machinery to America’s agricultural heartland.[73] By 1951 more than 95 percent of American homes had electricity, and by 1956 the national electrification effort was regarded as essentially complete.[74] In other parts of the world,
...more
The first transformative communication technology enabled by electricity was radio. Commercial radio broadcasting in the United States began in 1920 and by the 1930s had become the nation’s primary form of mass media.[79] Unlike newspapers, which were mainly limited to a single metropolitan area, radio broadcasts could reach audiences across the country. This spurred development of a truly national media culture, as people from California to Maine heard many of the same political addresses, news reports, and entertainment programs. By 1950 more than nine out of ten American households had a
...more
The adoption of television followed a pattern similar to radio’s, but with the nation already much more developed, its exponential growth was even faster. Scientists and engineers began theorizing in the late nineteenth century about the advances that would lead to television, and by the late 1920s the first primitive television systems were being developed and demonstrated.[84] The technology had reached commercial viability in the United States by 1939, but the outbreak of World War II brought worldwide television production to a virtual halt.[85] As soon as the war ended, though, Americans
...more
Personal computers started entering American homes during the 1970s, with machines such as the Kenbak-1, and in 1975 the wildly popular Altair 8800, which was sold in build-it-yourself kits.[92] By the end of the decade companies like Apple and Microsoft were transforming the market with user-friendly personal computers that ordinary people could learn to operate in an afternoon.[94] Apple’s famous 1984 Super Bowl commercial got the whole country talking about computers, and the proportion of US households with a computer nearly doubled within five years of its airing.[95] During that period
...more
By 2017–2021 about 93.1 percent of US households had computers, and the percentage continues to rise as the Greatest Generation declines and millennials start families of their own.[99] Meanwhile, there has been a steady rise in worldwide computer ownership. Computers embedded in smartphones have rapidly expanded market penetration in the developing world, and as of 2022 around two-thirds of the world’s population has at least one.[100]
A thousand years ago, European life expectancy at birth was just in the twenties, since so many people died in infancy or youth from diseases like cholera and dysentery, which are now easily preventable.[103] By the middle of the nineteenth century, life expectancy in the United Kingdom and the United States had increased to the forties.[104] As of 2023 it has risen to over eighty in much of the developed world.[105] So we have nearly tripled life expectancy in the past thousand years, and doubled it in the past two centuries.
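A quick arithmetic check of the tripling and doubling claims above. The specific point values are my own illustrative picks within the ranges the passage gives ("the twenties," "the forties," "over eighty"), not figures from the book:

```python
# Rough check of the life-expectancy multiples described above.
medieval = 27      # assumed: life expectancy at birth ~1,000 years ago ("the twenties")
mid_1800s = 42     # assumed: mid-nineteenth-century UK/US ("the forties")
today = 81         # assumed: much of the developed world as of 2023 ("over eighty")

print(f"Gain over ~1,000 years: {today / medieval:.1f}x")   # roughly 3x ("nearly tripled")
print(f"Gain over ~200 years:   {today / mid_1800s:.1f}x")  # roughly 2x ("doubled")
```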
Today, though, most of this low-hanging fruit has been picked. The remaining sources of disease and disability spring mostly from deep within our own bodies. As cells malfunction and tissues break down, we get conditions like cancer, atherosclerosis, diabetes, and Alzheimer’s. To an extent we can reduce these risks through lifestyle, diet, and supplementation—what I call the first bridge to radical life extension.[106] But those can only delay the inevitable. This is why life expectancy gains in developed countries have slowed since roughly the middle of the twentieth century. For example,
...more
Since the completion of the Human Genome Project in 2003, the cost of genome sequencing has followed a sustained exponential trend, falling on average by around half each year. Despite a brief plateau in sequencing costs from 2016 to 2018 and slowed progress amid the disruptions of the COVID-19 pandemic, costs continue to fall—and this will likely accelerate again as sophisticated AI plays a greater role in sequencing. Costs have plunged from about $50 million per genome in 2003 to as low as $399 in early 2023, with one company promising to have $100 tests available by the time you read
...more
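The average rate of decline can be back-calculated from the two endpoints quoted above (about $50 million per genome in 2003, as low as $399 in early 2023). A minimal sketch in Python; the result lines up with the "around half each year" characterization:

```python
import math

# Endpoints from the highlight: ~$50 million per genome in 2003,
# as low as $399 in early 2023 (a ~20-year span).
cost_2003, cost_2023, years = 50e6, 399, 20

total_drop = cost_2003 / cost_2023                 # ~125,000-fold cheaper
annual_factor = total_drop ** (1 / years)          # average yearly improvement factor
halving_time_months = 12 * math.log(2) / math.log(annual_factor)

print(f"Total improvement: {total_drop:,.0f}x")
print(f"Average yearly cost ratio: 1/{annual_factor:.2f} "
      f"(about {1 - 1/annual_factor:.0%} cheaper each year)")
print(f"Implied halving time: ~{halving_time_months:.0f} months")
```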
As AI transforms more and more areas of medicine, it will give rise to many similar trends. It is already starting to have a clinical impact,[109] but we are still in the early part of this particular exponential curve. The current trickle of applications will become a flood by the end of the 2020s. We will then be able to start directly addressing the biological factors that now limit maximum life span to about 120 years, including mitochondrial genetic mutations, reduced telomere length, and the uncontrolled cell division that causes cancer.[110] In the 2030s we will reach the third bridge
...more
This highlight has been truncated due to consecutive passage length restrictions.
In addition, today’s poor are much better off in absolute terms due to the wide accessibility of free information and services via the internet—such as the ability to take MIT open courses or video-chat with family continents away.[129] Similarly, they benefit from the radically improved price-performance of computers and mobile phones in recent decades, but these are not properly reflected in economic statistics. (The failure of economic statistics to sufficiently capture the exponentially improving price-performance of products and services influenced by information technology is discussed
...more
Many of us have a deeply ingrained tendency to view the struggle for scarce resources as an unavoidable cause of violence and as an inherent part of human nature. But while this has been the story of much of human history, I don’t think this will be permanent. The digital revolution has already rolled back scarcity conditions for many of the things we can easily represent digitally, from web searches to social media connections. Fighting over a copy of a physical book may be petty, but on a certain level we can understand it. Two children may tussle over a favorite printed comic because only
...more
The spread of democracy from its roots in medieval England parallels, and likely results largely from, the rise of mass communication technologies. The Magna Carta, which famously articulated the rights of ordinary people not to be unjustly imprisoned, was penned in 1215 and signed by King John.[184] Yet for most of the Middle Ages, commoners’ rights were often ignored and political participation was minimal. This changed with Gutenberg’s invention of the movable-type printing press around 1440, and with its rapid adoption the educated classes were able to spread both news and ideas with far
...more
The spread of knowledge brought wealth and political empowerment, and legislative bodies like England’s House of Commons became much more outspoken. While most power was still held by the king, Parliament was able to deliver tax protests to the monarch and impeach those of his ministers it did not like.[186] The 1642–1651 English Civil War eliminated the monarchy altogether, after which it was reinstalled in a form subservient to Parliament; later the government adopted a Bill of Rights that clearly established the principle that the king could rule only by consent of the people.[187] Prior to
...more
This highlight has been truncated due to consecutive passage length restrictions.
Two centuries ago in the United States, most people still did not enjoy full rights of political participation. In the early nineteenth century, voting rights were limited mostly to adult white males with at least modest property or wealth. These economic requirements allowed a majority of white men to vote but almost entirely excluded women, African Americans (millions of whom were held in chattel slavery), and Native Americans.[191] Historians disagree on precisely what percentage of the population was eligible to vote, but it’s most commonly considered to have been between 10 and 25
...more
Despite the high aspirations of its advocates, democracy gained ground only slowly over the course of the nineteenth century. For example, the 1848 liberal revolutions in Europe mostly failed, and many of the reforms of Tsar Alexander II in Russia were undone by his successors.[193] In 1900 just 3 percent of the world’s population lived in what we would currently consider democracies, as even the United States still denied women the right to vote and enforced segregation against African Americans. By 1922, in the aftermath of World War I, that had climbed to 19 percent.[194]
The postwar years saw a rapid spike in the proportion of the world’s people living under democracy, largely driven by the independence won by India and Britain’s other colonies in South Asia. For most of the Cold War, the reach of democracy stayed roughly steady, with just over one in three people in the world living in democratic societies.[195] Yet the proliferation of communication technology outside the Iron Curtain, from Beatles LPs to color TVs, stirred discontent against the governments that suppressed it. With the breakup of the Soviet Union, democracy again expanded rapidly, reaching
...more
The most fundamental trend at work here is the exponentially improving price-performance of computation—that is, how many computations per second can be performed for one inflation-adjusted dollar. When Konrad Zuse built the first working programmable computer, the Z2, in 1939, it could perform around 0.0000065 computations per second per 2023 dollar.[201] In 1965, the PDP-8 managed around 1.8 computations per second per dollar. When my book The Age of Intelligent Machines was published in 1990, the MT 486DX could achieve about 1,700. When The Age of Spiritual Machines appeared nine years
...more
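A sketch of the doubling times implied by the price-performance data points quoted above (computations per second per 2023 dollar); this only interpolates between the cited numbers:

```python
import math

# Price-performance data points quoted in the highlight.
points = [
    (1939, 0.0000065),  # Zuse Z2
    (1965, 1.8),        # PDP-8
    (1990, 1700),       # machine cited for 1990
]

for (y0, v0), (y1, v1) in zip(points, points[1:]):
    doublings = math.log2(v1 / v0)
    print(f"{y0}-{y1}: {v1 / v0:,.0f}x improvement, "
          f"~{doublings:.0f} doublings, "
          f"doubling time ~{(y1 - y0) / doublings:.1f} years")
```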
As a personal example, when I attended MIT in 1965, the school was so advanced that it actually had computers. The most notable of them, an IBM 7094, had 150,000 bytes of “core” storage and a quarter of a MIPS (million instructions per second) of computing speed. It cost $3.1 million (in 1963 dollars, which is $30 million in 2023 dollars) and was shared by thousands of students and professors.[205] By comparison, the iPhone 14 Pro, released while this book was being written, cost $999 and could achieve up to 17 trillion operations per second for AI-related applications.[206] This is not a
...more
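An order-of-magnitude comparison of the two machines described above. As the passage itself hints, general-purpose MIPS and AI-oriented operations per second are not the same unit, so this is only a loose illustration of the price-performance gap:

```python
# Rough price-performance comparison, using the figures quoted above.
ibm_ops_per_sec = 0.25e6          # a quarter of a MIPS
ibm_cost_2023 = 30e6              # $3.1M in 1963 dollars, ~$30M in 2023 dollars

iphone_ops_per_sec = 17e12        # up to 17 trillion ops/sec for AI workloads
iphone_cost = 999

ibm_ratio = ibm_ops_per_sec / ibm_cost_2023
iphone_ratio = iphone_ops_per_sec / iphone_cost

print(f"IBM 7094:      {ibm_ratio:.4f} ops/sec per dollar")
print(f"iPhone 14 Pro: {iphone_ratio:,.0f} ops/sec per dollar")
print(f"Improvement:   ~{iphone_ratio / ibm_ratio:.1e}x")   # on the order of a trillion-fold
```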
During the late 2020s we will start to be able to print out clothing and other common goods with 3D printers, ultimately for pennies per pound. One of the key trends in 3D printing is miniaturization: designing machines that can create ever smaller details on objects. At some point, the traditional 3D-printing paradigms, like extrusion (similar to an ink-jet), will be replaced by new approaches for manufacturing at even tinier scales. Probably sometime in the 2030s this will cross into nanotechnology, where objects can be created with atomic precision. Eric Drexler’s estimate in his 2013 book
...more
Lagarde’s last challenge to me was that land is not going to become an information technology, and that we are already very crowded. I replied that we are crowded because we chose to crowd together in dense groups. Cities came about to make possible our working and playing together. But try taking a train trip anywhere in the world and you will see that almost all of the habitable land remains unoccupied—only 1 percent of it is built up for human living.[215] Only about half of the habitable land is directly used by humans at all, almost all of it dedicated to agriculture—and among
...more
This transition is already underway, accelerated by the social changes necessitated by COVID-19. At the peak during the pandemic, up to 42 percent of Americans were working from home.[217] This experience will likely have a long-term impact on how both employees and employers think about work. In many cases the old model of nine-to-five sitting at a desk in a company office has been obsolete for years, but inertia and familiarity made it hard for society to change until the pandemic forced us to. As the LOAR takes information technologies into the steep parts of these exponential curves and AI
...more
One of the most important transitions arising from the exponential progress of the 2020s is in energy, because energy powers everything else. Solar photovoltaics are already cheaper than fossil fuels in many cases, with costs rapidly declining. But we need advances in materials science to achieve further improvements in cost-efficiency. AI-assisted breakthroughs in nanotechnology will increase cell efficiency by enabling photovoltaic cells to capture energy from more of the electromagnetic spectrum. Exciting developments are underway in this area. Putting tiny structures called nanotubes and
...more
This highlight has been truncated due to consecutive passage length restrictions.
In the years ahead, nano-based technology will also reduce manufacturing costs by facilitating 3D printing of solar cells, which will make decentralized production possible so photovoltaics can be created when and where they are needed. And unlike the big, clumsy, rigid panels used today, cells built with nanotech can take many convenient forms: rolls, films, coatings, and more. This will reduce installation costs and give more communities around the world access to cheap, abundant solar power.
In 2000 renewables (mainly solar, wind, geothermal, tidal, and biomass, but not hydroelectric) accounted for about 1.4 percent of global electricity generation.[224] By 2021 that share had risen to 12.85 percent, for an average doubling time of about 6.5 years during that span.[225] The doubling is even faster in absolute terms, since the total amount of power generation is itself growing—from the equivalent of about 2...
This highlight has been truncated due to consecutive passage length restrictions.
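A quick check of the doubling time implied by the two shares quoted above (a sketch; it uses only the 1.4 percent and 12.85 percent figures):

```python
import math

# Share of global electricity generation from non-hydro renewables.
share_2000, share_2021 = 1.4, 12.85
years = 2021 - 2000

doublings = math.log2(share_2021 / share_2000)
print(f"Doublings of the renewable share: {doublings:.1f}")
print(f"Average doubling time: {years / doublings:.1f} years")  # close to the cited ~6.5 years
```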
Costs of solar electricity generation are falling quite a bit faster than those of any other major renewable, and solar has the most headroom to grow. The closest competition to solar in price declines is wind, but solar has been falling roughly twice as fast as wind over the past five years.[227] Further, solar has a lower potential floor because materials science advances directly translate to cheaper and more efficient panels, and current technology captures only a fraction of the theoretical maximum—typically somewhere around 20 percent of incoming energy out of a theoretical limit around
...more
As investment pours into renewables and their costs fall, resources and innovation efforts are being pulled into storage as well, because energy storage is essential if renewables are to compete with fossil fuels for the lion's share of electricity generation. Continuing exponential gains will also be enabled by convergent advances in materials science, robotic manufacturing, efficient shipping, and energy transmission. The implication is that solar will dominate sometime during the 2030s.
And while most batteries aren’t suitable for utility-scale storage, advanced batteries using lithium ions and several other chemistries are now rapidly increasing in cost-effectiveness. For example, between 2012 and 2020, lithium-ion storage costs per megawatt-hour fell roughly 80 percent, and they are projected to continue declining.[237]
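Annualizing the roughly 80 percent decline cited above over its eight-year span (a sketch using only those two numbers):

```python
import math

# ~80 percent drop in lithium-ion storage cost per MWh between 2012 and 2020.
remaining = 0.20          # 20% of the 2012 cost remained by 2020
years = 8

yearly_factor = remaining ** (1 / years)
print(f"Average yearly change: {1 - yearly_factor:.1%} cheaper per year")
print(f"Implied cost-halving time: "
      f"{math.log(0.5) / math.log(yearly_factor):.1f} years")
```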
In 1990, about 24 percent of the world’s people did not have regular access to relatively safe sources of drinking water.[241] Thanks to development efforts and advancing technology, the figure is now down to somewhere around 1 in 10.[242] That is still a large problem, however. According to the Institute for Health Metrics and Evaluation, around 1.5 million people around the world, including 500,000 young children, died in 2019 from diarrheal disease—mostly via drinking water contaminated by the bacteria in feces.[243] These diseases include cholera, dysentery, and typhoid fever and are
...more
Most archaeologists estimate that the birth of human agriculture took place around 12,000 years ago, but there is some evidence that the earliest agriculture may date as far back as 23,000 years.[250] It is possible that future archaeological discoveries will revise this understanding even further. Whenever agriculture began, the amount of food that could then be grown from a given area of land was quite low. The first farmers sprinkled seeds into the natural soil and let the rain water them. The result of this inefficient process was that the vast majority of the population needed to work in
...more
A useful way of quantifying this progress is crop density: how much food can be grown in a given area of land. For example, corn production in the United States uses land more than seven times as efficiently as a century and a half ago. In 1866, US corn farmers averaged an estimated 24.3 bushels per acre, and by 2021 this had reached 176.7 bushels per acre.[252] Worldwide, land efficiency improvement has been roughly exponential, and today we need, on average, less than 30 percent of the land that we needed in 1961 to grow a given quantity of crops.[253] This trend has been essential to
...more
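Checking the crop-density figures quoted above (a sketch; it simply takes the ratios of the cited numbers):

```python
# US corn: bushels per acre in 1866 versus 2021.
bushels_1866, bushels_2021 = 24.3, 176.7
print(f"US corn land-efficiency gain: {bushels_2021 / bushels_1866:.1f}x")  # "more than seven times"

# Worldwide: needing "less than 30 percent of the land" used in 1961 for the
# same quantity of crops implies a better-than-3x efficiency gain.
land_fraction = 0.30
print(f"Implied worldwide efficiency gain since 1961: >{1 / land_fraction:.1f}x")
```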
In addition, 3D printing allows manufacturing to be decentralized, empowering consumers and local communities. This contrasts with the paradigm that developed during the twentieth century, in which manufacturing is largely concentrated in giant corporate factories in major cities. Under this model, small towns and developing countries must buy their products from far away, and shipping is expensive and time-consuming. Decentralized manufacturing will also have significant environmental benefits. Shipping products from factories to consumers hundreds or thousands of miles away generates
...more
In addition to manufacturing of everyday goods like shoes and tools, new research is applying 3D printing to biology. Scientists are currently testing techniques that will make possible the printing of human body tissues and, ultimately, whole organs.[273] The general principle involves a biologically inactive material, such as synthetic polymer or ceramic, printed into a three-dimensional “scaffold” in the shape of the desired body structure. Fluid rich with reprogrammed stem cells is then deposited over the scaffold, where the cells multiply and fill in the appropriate shape, thereby
...more
We have now had about two decades of exponential progress in genome sequencing (approximately doubling price-performance each year) from the completion of the Human Genome Project in 2003—and in terms of base pairs, this doubling has occurred on average roughly every fourteen months, spanning multiple technologies and dating all the way back to the first nucleotide sequencing from DNA in 1971.[284] We are finally getting to the steep part of a fifty-year-old exponential trend in biotechnology.
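To illustrate how much a roughly fourteen-month doubling time compounds over the half century described above. The 2023 endpoint is my assumption (roughly when the book appeared), used only to show the scale of the trend:

```python
# Cumulative effect of doubling base-pair sequencing price-performance
# about every 14 months since 1971.
months = (2023 - 1971) * 12   # endpoint year is an assumption
doublings = months / 14
print(f"~{doublings:.0f} doublings since 1971")
print(f"Cumulative improvement: ~{2 ** doublings:.1e}x")
```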
Our natural immune system, which includes T cells that can intelligently destroy hostile microorganisms, is very effective for many types of pathogens—so much so that we would not live long without it. However, it evolved in an era when food and resources were very limited and most humans had short life spans. If early humans reproduced when young and then died in their twenties, evolution had no reason to favor mutations that could have strengthened the immune system against threats that mainly appear later in life, like cancer and neurodegenerative diseases (often caused by misfolded
...more
This highlight has been truncated due to consecutive passage length restrictions.
The ultimate goal is to put our destiny in our own hands, not in the metaphorical hands of fate—to live as long as we wish. But why would anyone ever choose to die? Research shows that those who take their own lives are typically in unbearable pain, whether physical or emotional.[290] While advances in medicine and neuroscience cannot prevent all of those cases, they will likely make them much rarer.
Finally, some have an ethical concern about equity and inequality. A common challenge to these predictions about longevity is that only the wealthy will be able to afford the technologies of radical life extension. My response is to point out the history of the cell phone. You indeed had to be wealthy to have a mobile phone as recently as thirty years ago, and that device did not work very well. Today there are billions of phones, and they do a lot more than just make phone calls. They are now memory extenders that let us access almost all of human knowledge. Such technologies start out being
...more
Exponentially improving information technology is a rising tide that lifts all the boats of the human condition. And we are now about to enter the period when this tide surges upward as never before. The key to this is artificial intelligence, which is now allowing us to turn many kinds of linearly advancing technology into exponential information technology—from agriculture and medicine to manufacturing and land use. This force is what will make life itself exponentially better in the time ahead.
But autonomous vehicles won’t just disrupt the jobs of people who physically drive behind the wheel. As truck drivers lose their jobs to automation, there will be less need for people to do truckers’ payroll and for retail workers in roadside convenience stores and motels. There’ll be less need for people to clean truck stop bathrooms, and lower demand for sex workers in the places truckers frequent today.
The latest estimates, such as a 2023 report by McKinsey, found that 63 percent of all working time in today’s developed economies is spent on tasks that could already be automated with today’s technology.[23] If adoption proceeds quickly, half of this work could be automated by 2030, while McKinsey’s midpoint scenarios forecast 2045—assuming no future AI breakthroughs. But we know AI is going to continue to progress—exponentially—until we have superhuman-level AI and fully automated, atomically precise manufacturing (controlled by AI) sometime in the 2030s.
It’s not clear whether Ned Ludd actually existed, but legend has it that he accidentally broke textile factory machinery, and any equipment damaged thereafter—either mistakenly or in protest of automation—would be blamed on Ludd.[25] When the desperate weavers formed an urban guerrilla army in 1811, they declared General Ludd their leader.[26] These Luddites, as they were known, revolted against factory owners—they first directed their violence primarily at the machines, but bloodshed soon ensued. The movement ended with the imprisonment and hanging of prominent Luddite leaders by the British
...more
At the beginning of the nineteenth century, the United States was an overwhelmingly agricultural society. As more settlers poured into the young nation and moved west of the Appalachians, the percentage of Americans employed in farming actually rose, peaking at over 80 percent.[40] But in the 1820s this proportion began a rapid decline as improved agricultural technology made it possible for fewer farmers to feed more people. Initially this was the result of a combination of improved scientific methods of plant breeding and better crop rotation systems.[41] As the Industrial Revolution
...more
During the twentieth century the advent of improved pesticides, chemical fertilizers, and genetic modification led to an explosion in crop yields. For example, in 1850 wheat yields in the United Kingdom were 0.44 ton per acre.[44] As of 2022 they had risen to 3.84 tons per acre.[45] During roughly that same span, the United Kingdom’s population rose from about 27 million to 67 million, so food production was able to not just accommodate a growing number of citizens but make food much more abundant for each person.[46] As people got access to better nutrition, they grew taller and healthier and
...more
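The point that yields outran population growth, quantified from the figures quoted above (a sketch; the spans for yield and population are only roughly the same, as the passage notes):

```python
# UK wheat yields and population, as cited in the highlight.
yield_1850, yield_2022 = 0.44, 3.84      # tons per acre
pop_1850, pop_2022 = 27e6, 67e6          # people

yield_gain = yield_2022 / yield_1850     # ~8.7x more wheat per acre
pop_gain = pop_2022 / pop_1850           # ~2.5x more people

print(f"Yield gain:      {yield_gain:.1f}x")
print(f"Population gain: {pop_gain:.1f}x")
print(f"Per-person gain from the same land area: {yield_gain / pop_gain:.1f}x")
```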
In the first decade of the nineteenth century, about 1 out of 35 American workers were employed in manufacturing.[50] The Industrial Revolution soon transformed major cities, though, as steam-powered factories sprang up and demanded millions of low-skilled laborers. By 1870 almost 1 in 5 workers were in manufacturing, mainly in the rapidly industrializing North.[51] The second wave of the Industrial Revolution brought a new mass of workers—largely immigrants—into manufacturing around the start of the twentieth century. The development of the assembly line greatly increased efficiency, and as
...more
Then two technology-related shifts began to erode US factory employment. First, innovations in logistics and transportation, most notably containerized shipping, made it cheaper for companies to outsource manufacturing to countries with less expensive labor and import finished products to the United States.[55] Containerization is not a flashy technology like factory robotics or AI, but it has had one of the most profound impacts on modern society of any innovation. By drastically reducing the cost of worldwide shipping, containerization made it possible for the economy to become truly global.
...more
In February 2001, just before the post-dot-com recession, 17 million Americans had manufacturing jobs.[57] This dropped sharply during the recession and never recovered—jobs stayed flat at around 14 million all through the mid-2000s boom despite a substantial increase in output.[58] In December 2007, at the start of the Great Recession, about 13.7 million Americans were working in manufacturing, and this had fallen to 11.4 million by February 2010.[59] Manufacturing output quickly rebounded and by 2018 was back near all-time highs—but many of the lost jobs never came back.[60] Even in November
...more
Since the start of the twenty-first century, the labor force has slightly shrunk as a proportion of the total population, but a major reason for this is that a higher percentage of Americans are now of retirement age.[67] In 1950, 8.0 percent of the US population was sixty-five or older;[68] by 2018 that had doubled to 16.0 percent, leaving relatively fewer working-age people in the economy.[69] The US Census Bureau projects—independent of any new medical breakthroughs that may be achieved in the coming decades—that over-sixty-fives will constitute 22 percent of the population by 2050.[70] If
...more
In the long term, the economic incentives for automation will push AI to take over an ever-expanding set of tasks. All else being equal, it is less expensive to buy machines or AI software than to pay ongoing labor costs.[86] When business owners are designing their operations, they often have some flexibility over the balance between capital and labor. In places where wages are relatively low, it makes more sense to use labor-intensive processes. Where wages are high, there’s more of an incentive to innovate and design machines that require less labor. This is likely one reason why Great
...more
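A stylized break-even comparison of the capital-versus-labor choice described above. All numbers are hypothetical, chosen only to illustrate why higher wages strengthen the incentive to automate:

```python
# Hypothetical machine cost and upkeep versus an annual wage: higher wages
# shorten the payback period on the machine, all else being equal.
machine_cost = 120_000      # one-time purchase (hypothetical)
machine_upkeep = 5_000      # per year (hypothetical)

for annual_wage in (20_000, 40_000, 80_000):
    payback_years = machine_cost / (annual_wage - machine_upkeep)
    print(f"Wage ${annual_wage:,}/yr -> machine pays for itself in {payback_years:.1f} years")
```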
From the first quarter of 1950 to the first quarter of 1990, real output per hour in the United States increased by an average of 0.55 percent per quarter.[87] As personal computers and the internet became widespread in the nineties, productivity gains accelerated. From the first quarter of 1990 to the first quarter of 2003, quarterly increases averaged 0.68 percent.[88] It appeared that the World Wide Web had unleashed a new age of rapid growth, and as late as 2003 there was a widespread expectation that this pace would continue.[89] Yet starting in 2004, productivity growth began to
...more
This highlight has been truncated due to consecutive passage length restrictions.
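Converting the quarterly productivity growth rates quoted above into annualized rates makes the two periods easier to compare (a sketch using only the cited figures):

```python
# Annualize average quarterly growth in real output per hour.
for label, quarterly_pct in [("1950-1990", 0.55), ("1990-2003", 0.68)]:
    annual = (1 + quarterly_pct / 100) ** 4 - 1
    print(f"{label}: {quarterly_pct}%/quarter ~ {annual:.2%}/year")
```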
Classical economic theory says that prices will tend toward goods’ average marginal cost—because businesses can’t afford to sell at a loss, but competitive pressure forces them to sell as cheaply as they can. Further, since more useful and powerful products have traditionally cost more to produce, there was historically a strong relationship between a product’s quality and its price as reflected in GDP. Yet many information technologies have become vastly more useful while prices have stayed more or less constant. A roughly $900 (2023 inflation-adjusted) computer chip in 1999 could perform more
...more
That said, these changes haven’t affected all areas of the economy evenly. For example, despite dramatic deflation in computing prices, health care has been getting more expensive faster than overall inflation—so someone who needs a lot of medical treatments may not be comforted much by how much cheaper GPU cycles are getting.[104] The good news, though, is that artificial intelligence and technological convergence will turn more and more kinds of goods and services into information technologies during the 2020s and 2030s—allowing them to benefit from the kinds of exponential trends that have
...more

