Kindle Notes & Highlights
by Max Tegmark
Read between September 8, 2017 and March 24, 2025
about half of the AI experts at our Puerto Rico conference guessed that it would happen by 2055. At a follow-up conference we organized two years later, this had dropped to 2047.
My guess is that we will have something resembling AGI by the late 2030s. What I mean by this is that it probably won't meet all the criteria for being AGI, but it will appear like AGI to many people and perhaps pass new "Turing tests." Put me in the group expecting full AGI, and then super AI, by 2050.
hand, we humans have great influence over the outcome—influence that we exerted when we created the AI. So we should instead ask: “What should happen? What future do we want?”
1. Do you want there to be superintelligence?
2. Do you want humans to still exist, be replaced, cyborgized and/or uploaded/simulated?
3. Do you want humans or machines in control?
4. Do you want AIs to be conscious or not?
5. Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?
6. Do you want life spreading into the cosmos?
7. Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?
In a sense, the central entities of life aren’t minds, but experiences: exceptionally amazing experiences live on because they get continually copied and re-enjoyed by other minds,
today: their economy is rather decoupled from that of the machines, so the presence of the machines elsewhere has little effect on them except for the occasional useful technologies that they can understand and reproduce for themselves—much as the Amish and various technology-relinquishing native tribes today have standards of living at least as good as they had in old times. It doesn’t matter that the humans have nothing to sell that the machines need, since the machines need nothing in return.
the prospect of getting uploaded in the future has motivated over a hundred people to have their brains posthumously frozen by the Arizona-based company Alcor.
To the vastly more intelligent entities that would exist at that time, an uploaded human may seem about as interesting as a simulated mouse or snail would seem to us. Although we currently have the technical capability to reanimate old spreadsheet programs from the 1980s in a DOS emulator, most of us don’t find this interesting enough to actually do it.
each person receives a basic monthly income from the government, which they can spend as they wish on products and renting places to live. There’s essentially no incentive for anyone to try to earn more money, because the basic income is high enough to meet any reasonable needs. It would also be rather hopeless to try, because they’d be competing with people giving away intellectual products for free and robots producing material goods essentially for free.
It’s interesting to compare this with the so-called theodicy problem of why a good god would allow suffering. Some religious scholars have argued for the explanation that God wants to leave people with some freedom.
that like human slaves, non-human animals are subjected to branding, restraints, beatings, auctions, the separation of offspring from their parents, and forced voyages.
Of all traits that our human form of intelligence has, I feel that consciousness is by far the most remarkable, and as far as I’m concerned, it’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them.
Although political opposition has thus far prevented the full-scale implementation of such a system, we humans are well on our way to building the required infrastructure for the ultimate dictatorship—so in the future, when sufficiently powerful forces decide to enact this global 1984 scenario, they'll find that they don't need to do much more than flip the on switch.
We’ve had to rely on luck to weather an embarrassingly long list of near misses caused by all sorts of things: computer malfunction, power failure, faulty intelligence, navigation error, bomber crash, satellite explosion and so on.
Maybe it hasn't been luck. Maybe we are already living in a world with a superintelligence that guides us past these near misses.
To me, the most inspiring scientific discovery ever is that we’ve dramatically underestimated life’s future potential. Our dreams and aspirations need not be limited to century-long life spans marred by disease, poverty and confusion. Rather, aided by technology, life has the potential to flourish for billions of years,
O’Neill cylinders can provide comfortable Earth-like human habitats if they orbit the Sun in such a way that they always point straight at it.
nature appears to have a built-in goal of producing self-organizing systems that are increasingly complex and lifelike, and this goal is hardwired into the very laws of physics.
In practice, these agents have what Nobel laureate and AI pioneer Herbert Simon termed “bounded rationality” because they have limited resources: the rationality of their decisions is limited by their available information, their available time to think and their available hardware with which to think.
This means that when Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself.
it implements a hodgepodge of heuristic hacks: rules of thumb that usually work well. For most animals, these include sex drive, drinking when thirsty, eating when hungry and avoiding things that taste bad or hurt.
Since today’s human society is very different from the environment evolution optimized our rules of thumb for, we shouldn’t be surprised to find that our behavior often fails to maximize baby making.
In summary, a living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid.
Why do we sometimes choose to rebel against our genes and their replication goal? We rebel because by design, as agents of bounded rationality, we’re loyal only to our feelings. Although our brains evolved merely to help copy our genes, our brains couldn’t care less about this goal since we have no feelings related to genes—indeed,
since our feelings implement merely rules of thumb that aren’t appropriate in all situations, human behavior strictly speaking doesn’t have a single well-defined goal at all.
Teleology is the explanation of things in terms of their purposes rather than their causes, so we can summarize the first part of this chapter by saying that our Universe keeps getting more teleological.
words, even without an intelligence explosion, most matter on Earth that exhibits goal-oriented properties may soon be designed rather than evolved.
All machines are agents with bounded rationality, and even today’s most sophisticated machines have a poorer understanding of the world than we do, so the rules they use to figure out what to do are often too simplistic.
the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
In a nutshell. The problem is that people can't just hear this and get worried. It doesn't sound ominous because we assume we can create goals that align with our own. People don't really take the genie problem seriously. Terminator still scares more people.
1. Making AI learn our goals
2. Making AI adopt our goals
3. Making AI retain our goals
the time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you.
If you imbue a superintelligent AI with the sole goal to self-destruct, it will of course happily do so. However, the point is that it will resist being shut down if you give it any goal that it needs to remain operational to accomplish—and this covers almost all goals!
if we create a superintelligence whose only goal is to play the game Go as well as possible, the rational thing for it to do is to rearrange our Solar System into a gigantic computer without regard for its previous inhabitants and then start settling our cosmos on a quest for more computational power.
With increasing intelligence may come not merely a quantitative improvement in the ability to attain the same old goals, but a qualitatively different understanding of the nature of reality that reveals the old goals to be misguided, meaningless or even undefined.
Perhaps there’s a way of designing a self-improving AI that’s guaranteed to retain human-friendly goals forever, but I think it’s fair to say that we don’t yet know how to build one—or even whether it’s possible. In conclusion, the AI goal-alignment problem has three parts, none of which is solved and all of which are now the subject of active research. Since they’re so hard, it’s safest to start devoting our best efforts to them now, long before any superintelligence is developed, to ensure that we’ll have the answers when we need them.
beauty, goodness and truth
the quest for a better world model
into four principles:
• Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
• Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
• Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
• Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans…
This one is interesting. How could we possibly talk about not allowing super AIs to have autonomy if it is one of our four ethical principles? I would rather trust them to make wiser decisions than us and live (or die) with that, than purposefully impose our will on yet another swath of history.