Kindle Notes & Highlights
by Kevin Kelly
Read between January 22 and January 28, 2021
The internet could have been commercial rather than nonprofit, or a national system instead of international, or it could have been secret instead of public. Telephony—long-distance electrically transmitted voice messages—was inevitable, but the iPhone was not. The generic form of a four-wheeled vehicle was inevitable, but SUVs were not. Instant messaging was inevitable, but tweeting every five minutes was not.
We are morphing so fast that our ability to invent new things outpaces the rate we can civilize them.
The true leaders in digital money, for example, are in Africa and Afghanistan, where e-money is sometimes the only functioning currency. China is way ahead of everyone else in developing sharing applications on mobile. But while culture can advance or retard the expression, the underlying forces are universal.
Banning the inevitable usually backfires. Prohibition is at best temporary, and in the long run counterproductive.
Our greatest invention in the past 200 years was not a particular gadget or tool but the invention of the scientific process itself.
Get the ongoing process right and it will keep generating ongoing benefits. In our new era, processes trump products.
The more complex the gear, the more (not less) attention it will require.
I used to upgrade my gear begrudgingly (why upgrade if it still works?) and at the last possible moment. You know how it goes: Upgrade this and suddenly you need to upgrade that, which triggers upgrades everywhere. I would put it off for years because I had the experience of one “tiny” upgrade of a minor part disrupting my entire working life. But as our personal technology is becoming more complex, more codependent upon peripherals, more like a living ecosystem, delaying upgrading is even more disruptive.
Technological life in the future will be a series of endless upgrades. And the rate of graduations is accelerating. Features shift, defaults disappear, menus morph. I’ll open up a software package I don’t use every day expecting certain choices, and whole menus will have disappeared.
A world without discomfort is utopia. But it is also stagnant. A world perfectly fair in some dimensions would be horribly unfair in others. A utopia has no problems to solve, but therefore no opportunities either.
None of us have to worry about these utopia paradoxes, because utopias never work. Every utopian scenario contains self-corrupting flaws. My aversion to utopias goes even deeper. I have not met a speculative utopia I would want to live in. I’d be bored in utopia. Dystopias, their dark opposites, are a lot more entertaining. They are also much easier to envision.
The flaw in most dystopian narratives is that they are not sustainable. Shutting down civilization is actually hard.
However, neither dystopia nor utopia is our destination. Rather, technology is taking us to protopia. More accurately, we have already arrived in protopia. Protopia is a state of becoming, rather than a destination. It is a process. In the protopian mode, things are better today than they were yesterday, although only a little better.
What we all failed to see was how much of this brave new online world would be manufactured by users, not big institutions. The entirety of the content offered by Facebook, YouTube, Instagram, and Twitter is not created by their staff, but by their audience. Amazon’s rise was a surprise not because it became an “everything store” (not hard to imagine), but because Amazon’s customers (me and you) rushed to write the reviews that made the site’s long-tail selection usable. Today, most major software producers have minimal help desks; their most enthusiastic customers advise and assist other
…
Today’s web is remarkably ignorant of the past. It may supply you with a live webcam stream of Tahrir Square in Egypt, but accessing how that square looked a year ago is nearly impossible. Viewing an earlier version of a typical website is not easy, but in 30 years we’ll have time sliders enabling us to see any past version. Just as your phone’s navigation directions through a city are improved by including previous days, weeks, and months of traffic patterns, so the web of 2050 will be informed by the context of the past. And the web will slide into the future as well.
But, but . . . here is the thing. In terms of the internet, nothing has happened yet! The internet is still at the beginning of its beginning. It is only becoming.
A similar transformation is about to happen for every other X. Take chemistry, another physical endeavor requiring laboratories of glassware and bottles brimming with solutions. Moving atoms—what could be more physical? By adding AI to chemistry, scientists can perform virtual chemical experiments. They can smartly search through astronomical numbers of chemical combinations to reduce them to a few promising compounds worth examining in a lab.
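The screening idea in that passage can be sketched in a few lines. This is only a rough illustration, not anything from the book: the candidate list and the predicted_promise scoring function below are hypothetical stand-ins for a real compound library and a trained property-prediction model.

```python
import heapq
import random

random.seed(0)

# Hypothetical candidate library; a real pipeline would hold molecular
# structures (e.g., SMILES strings) rather than opaque identifiers.
candidates = [f"compound_{i}" for i in range(1_000_000)]

def predicted_promise(compound: str) -> float:
    """Stand-in for a learned model that scores how promising a compound
    looks (binding affinity, solubility, toxicity, and so on)."""
    return random.random()

# Score the whole library virtually, then keep only the handful of
# compounds worth examining in a physical lab.
shortlist = heapq.nlargest(10, candidates, key=predicted_promise)
print(shortlist)
```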
Here are other unlikely realms waiting to be cognitively enhanced: Cognified music—Music can be created in real time from algorithms, employed as the soundtrack for a video game or a virtual world. Depending on your actions, the music changes. Hundreds of hours of new personal music can be written by the AI for every player. Cognified laundry—Clothes that tell the washing machines how they want to be washed. The wash cycle would adjust itself to the contents of each load as directed by the smart clothes. Cognified marketing—The amount of attention an individual reader or watcher spends on an
…
I’ve thought a lot about that conversation over the past few years as Google has bought 13 other AI and robotics companies in addition to DeepMind. At first glance, you might think that Google is beefing up its AI portfolio to improve its search capabilities, since search constitutes 80 percent of its revenue. But I think that’s backward. Rather than use AI to make its search better, Google is using search to make its AI better. Every time you type a query, click on a search-generated link, or create a link on the web, you are training the Google AI. When you type “Easter Bunny” into the image
…
In a quarterly earnings conference call in the fall of 2015, Google CEO Sundar Pichai stated that AI was going to be “a core transformative way by which we are rethinking everything we are doing. . . . We are applying it across all our products, be it search, be it YouTube and Play, etc.” My prediction: By 2026, Google’s main product will not be search but AI.
Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples as a child before it can distinguish between cats and dogs. That’s even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results,
…
This perfect storm of cheap parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI.
As it does, this cloud-based AI will become an increasingly ingrained part of our everyday life. But it will come at a price. Cloud computing empowers the law of increasing returns, sometimes called the network effect, which holds that the value of a network increases much faster as it grows bigger. The bigger the network, the more attractive it is to new users, which makes it even bigger and thus more attractive, and so on. A cloud that serves AI will obey the same law. The more people who use an AI, the smarter it gets. The smarter it gets, the more people who use it. The more people who use
…
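One common way to put a number on "the value of a network increases much faster as it grows bigger" is Metcalfe's law, which counts the possible pairwise connections. The book does not name it, so treat this as an illustrative sketch rather than the author's own formula.

```python
def network_value(users: int) -> int:
    """Metcalfe-style proxy for network value: the number of possible
    pairwise connections among the users."""
    return users * (users - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} users -> {network_value(n):>12,} possible connections")
```

Doubling the number of users roughly quadruples the number of connections, which is the feedback loop the passage describes: a bigger network is more attractive, which makes it bigger still.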
If AI can help humans become better chess players, it stands to reason that it can help us become better pilots, better doctors, better judges, better teachers.
Nonhuman intelligence is not a bug; it’s a feature. The most important thing to know about thinking machines is that they will think different. Because of a quirk in our evolutionary history, we are cruising as the only self-conscious species on our planet, leaving us with the incorrect idea that human intelligence is singular. It is not. Our intelligence is a society of intelligences, and this suite occupies only a small corner of the many types of intelligences and consciousnesses that are possible in the universe. We like to call our human intelligence “general purpose,” because compared
…
Facebook has the ability to ramp up an AI that can view a photo portrait of any person on earth and correctly identify them out of some 3 billion people online. Human brains cannot scale to this degree, which makes this artificial ability very unhuman. We are notoriously bad at statistical thinking, so we are making intelligences with very good statistical skills, in order that they don’t think like us. One of the advantages of having AIs drive our cars is that they won’t drive like humans, with our easily distracted minds.
In a superconnected world, thinking different is the source of innovation and wealth.
A few really smart people, like physicist Stephen Hawking and genius inventor Elon Musk, worry that making supersmart AIs could be our last invention before they replace us (though I don’t believe this), so exploring possible types is prudent.
In the real world—even in the space of powerful minds—trade-offs rule. One mind cannot do all mindful things perfectly well. A particular species of mind will be better in certain dimensions, but at a cost of lesser abilities in other dimensions. The smartness that guides a self-driving truck will be a different species than the one that evaluates mortgages. The AI that will diagnose your illness will be significantly different from the artificial smartness that oversees your house. The superbrain that predicts the weather accurately will be in a completely different kingdom of mind from the
…
The taxonomy of minds must reflect the different ways in which minds are engineered with these trade-offs. In the short list below I include only those kinds of minds that we might consider superior to us; I’ve omitted the thousands of species of mild machine smartness—like the brains in a calculator—that will cognify the bulk of the internet of things. Some possible new minds: A mind like a human mind, just faster in answering (the easiest AI mind to imagine). A very slow mind, composed primarily of vast storage and memory. A global supermind composed of millions of individual dumb minds in
…
The types of artificial minds we are making now and will make in the coming century will be designed to perform specialized tasks, and usually tasks that are beyond what we can do.
Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.
To really solve the current grand mysteries of quantum gravity, dark energy, and dark matter, we’ll probably need other intelligences besides human. And the even harder questions that come after those may require still more distant and complex intelligences. Indeed, we may need to invent intermediate intelligences that can help us design...
What are humans for? I believe our first answer will be: Humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences. We should really call AIs “AAs,” for “artificial aliens.”
In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.
Two hundred years ago, 70 percent of American workers lived on the farm. Today automation has eliminated all but 1 percent of their jobs, replacing them (and their work animals) with machines. But the displaced workers did not sit idle. Instead, automation created hundreds of millions of jobs in entirely new fields. Those who once farmed were now manning the legions of factories that churned out farm equipment, cars, and other industrial products. Since then, wave upon wave of new occupations have arrived—appliance repair person, offset printer, food chemist, photographer, web designer—each
…
To understand how robot replacement will happen, it’s useful to break down our relationship with robots into four categories.
We aren’t giving “good jobs” to robots. Most of the time we are giving them jobs we could never do. Without them, these jobs would remain undone.
This is the greatest genius of the robot takeover: With the assistance of robots and computerized intelligence, we already can do things we never imagined doing 150 years ago. We can today remove a tumor in our gut through our navel, make a talking-picture video of our wedding, drive a cart on Mars, print a pattern on fabric that a friend mailed to us as a message through the air. We are doing, and are sometimes paid for doing, a million new activities that would have dazzled and shocked the farmers of 1800.
Before we invented automobiles, air-conditioning, flat-screen video displays, and animated cartoons, no one living in ancient Rome wished they could watch pictures move while riding to Athens in climate-controlled comfort. I did that recently. One hundred years ago not a single citizen of China would have told you that they would rather buy a tiny glassy slab that allowed them to talk to faraway friends before they would buy indoor plumbing. But every day peasant farmers in China without plumbing purchase smartphones. Crafty AIs embedded in first-person shooter games have given millions of
…
It is a safe bet that the highest-earning professions in the year 2050 will depend on automations and machines that have not been invented yet. That is, we can’t see these jobs from here, because we can’t yet see the machines and technologies that will make them possible. Robots create jobs that we did not even know we wanted done.
Everyone will have access to a personal robot, but simply owning one will not guarantee success. Rather, success will go to those who best optimize the process of working with bots and machines. Geographical clusters of production will matter, not for any differential in labor costs but because of the differential in human expertise. It’s human-robot symbiosis. Our human assignment will be to keep making jobs for robots—and that is a task that will never be finished. So we will always have at least that one “job.”
This is not a race against the machines. If we race against them, we lose. This is a race with the machines. You’ll be paid in the future based on how well you work with robots. Ninety percent of your coworkers will be unseen machines. Most of what you do will not be possible without them. And there will be a blurry line between what you do and what they do. You might no longer think of it as a job, at least at first, because anything that resembles drudgery will be handed over to robots by the accountants.
Real-time books, ditto. In predigital days I bought printed books long before I intended to read them. If I spied an enticing book in a bookstore, I bought it. At first, the internet deepened my hefty backlog because I encountered more and more recommendations online. When the Kindle came along, I switched to primarily purchasing only digital books, but I kept the old habit of purchasing ebooks whenever I encountered a great recommendation. It was so easy! Click, got it. Then I had an epiphany that I am sure others have had as well. If I purchase a book ahead of time, it just sits in the same
…
The cloud is the reservoir that songs escape from. The cloud is the seat where the intelligence of Siri sits, even as she speaks to you. The cloud is the new organizing metaphor for computers. The foundational units of this third digital regime, then, are flows, tags, and clouds.
Here are eight generatives that are “better than free.”
IMMEDIACY Sooner or later you can find a free copy of whatever you want, but getting a copy delivered to your inbox the moment it is released—or even better, produced—by its creators is a generative asset. Many people go to movie theaters to see films on the opening night, where they will pay a hefty price to see a film that later will be available for free, or almost free, via rental or download. In a very real sense, they are not paying for the movie (which is otherwise “free”); they are paying for the immediacy.
PERSONALIZATION A generic version of a concert recording may be free, but if you want a copy that has been tweaked to sound acoustically perfect in your particular living room—as if it were being performed in your room—you may be willing to pay a lot.
INTERPRETATION As the old joke goes: “Software, free. User manual, $10,000.” But it’s no joke. A few high-profile companies, like Red Hat and Apache, make their living selling instruction and paid support for free software. The copy of code, being mere bits, is free. The lines of free code become valuable to you only through support and guidance. A lot of medical and genetic information will go this route in the coming decades. Right now getting a full copy of all your DNA is very expensive ($10,000), but soon it won’t be. The price is dropping so fast, it will be $100 soon, and
…
AUTHENTICITY You might be able to grab a popular software application for free on the dark net, but even if you don’t need a manual, you might want to be sure it comes without bugs, malware, or spam. In that case you’ll be happy to pay for an authentic copy.