Life 3.0: Being Human in the Age of Artificial Intelligence
Read between September 8, 2017 and March 24, 2025
44%
about half of the AI experts at our Puerto Rico conference guessed that it would happen by 2055. At a follow-up conference we organized two years later, this had dropped to 2047.
Ben Edwards
My guess is that we will have something resembling AGI by the late 2030s. What I mean by this is that it probably won't meet all criteria for being AGI, but it will appear like AGI to many people and perhaps pass new "Turing tests". Put me in the full AGI and then Super AI by 2050 group.
44%
hand, we humans have great influence over the outcome—influence that we exerted when we created the AI. So we should instead ask: “What should happen? What future do we want?”
44%
We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.
Ben Edwards
Key question of our time?
45%
1. Do you want there to be superintelligence?
2. Do you want humans to still exist, be replaced, cyborgized and/or uploaded/simulated?
3. Do you want humans or machines in control?
4. Do you want AIs to be conscious or not?
5. Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?
6. Do you want life spreading into the cosmos?
7. Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?
46%
In a sense, the central entities of life aren’t minds, but experiences: exceptionally amazing experiences live on because they get continually copied and re-enjoyed by other minds,
46%
today: their economy is rather decoupled from that of the machines, so the presence of the machines elsewhere has little effect on them except for the occasional useful technologies that they can understand and reproduce for themselves—much as the Amish and various technology-relinquishing native tribes today have standards of living at least as good as they had in old times. It doesn’t matter that the humans have nothing to sell that the machines need, since the machines need nothing in return.
Ben Edwards
This doesn't seem very plausible to me outside of a few small communities, like the Amish of today.
46%
Most humans who owned land therefore ended up selling a small fraction of it to AIs in return for guaranteed basic income for them and their offspring/uploads in perpetuity.
Ben Edwards
Take note. Buy land!
46%
the prospect of getting uploaded in the future has motivated over a hundred people to have their brains posthumously frozen by the Arizona-based company Alcor.
46%
To the vastly more intelligent entities that would exist at that time, an uploaded human may seem about as interesting as a simulated mouse or snail would seem to us. Although we currently have the technical capability to reanimate old spreadsheet programs from the 1980s in a DOS emulator, most of us don’t find this interesting enough to actually do it.
48%
a scenario where there is no superintelligent AI, and humans are the masters of their own destiny.
Ben Edwards
I didn't see this before. No SAGI is a no-go for me on this.
48%
each person receives a basic monthly income from the government, which they can spend as they wish on products and renting places to live. There’s essentially no incentive for anyone to try to earn more money, because the basic income is high enough to meet any reasonable needs. It would also be rather hopeless to try, because they’d be competing with people giving away intellectual products for free and robots producing material goods essentially for free.
Ben Edwards
Bring it on, Zuck!
49%
If we appear headed toward an accidental nuclear war, it could avert it with an intervention we’d dismiss as luck. It could also give us “revelations” in the form of ideas for new beneficial technologies, delivered inconspicuously in our sleep.
Ben Edwards
Maybe we are already living this.
49%
It’s interesting to compare this with the so-called theodicy problem of why a good god would allow suffering. Some religious scholars have argued for the explanation that God wants to leave people with some freedom.
50%
that like human slaves, non-human animals are subjected to branding, restraints, beatings, auctions, the separation of offspring from their parents, and forced voyages.
51%
Of all traits that our human form of intelligence has, I feel that consciousness is by far the most remarkable, and as far as I’m concerned, it’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them.
51%
Some, however, are so incensed by the way we treat people and other living beings that they hope we’ll get replaced by some more intelligent and deserving life form.
Ben Edwards
Jena ;)
53%
Although political opposition has thus far prevented the full-scale implementation of such a system, we humans are well on our way to building the required infrastructure for the ultimate dictatorship—so in the future, when sufficiently powerful forces decided to enact this global 1984 scenario, they found that they didn’t need to do much more than flip the on switch.
54%
We’ve had to rely on luck to weather an embarrassingly long list of near misses caused by all sorts of things: computer malfunction, power failure, faulty intelligence, navigation error, bomber crash, satellite explosion and so on.
Ben Edwards
Maybe it hasn't been luck. Maybe we are already living in a world with a superintelligence that guides us past these near misses.
55%
intensity. Media reports suggest that cobalt bombs are now being built for the first time.
Ben Edwards
What?!
56%
To me, the most inspiring scientific discovery ever is that we’ve dramatically underestimated life’s future potential. Our dreams and aspirations need not be limited to century-long life spans marred by disease, poverty and confusion. Rather, aided by technology, life has the potential to flourish for billions of years,
56%
that we could meet all our current global energy needs by harvesting the sunlight striking an area smaller than 0.5% of the Sahara desert.
Ben Edwards
With current technology? Why the fuck aren't we doing this if this is the case?
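A quick sanity check on that figure. This is a rough sketch; every input below is my own round estimate, not from the book:

```python
# Back-of-envelope check of the 0.5%-of-the-Sahara claim.
# All input figures are rough round numbers of my own, not from the book.

SAHARA_AREA_M2 = 9.2e6 * 1e6      # ~9.2 million km^2, in m^2
FRACTION = 0.005                   # 0.5% of the Sahara
INSOLATION_W_M2 = 250              # 24-hour average desert sunlight, W/m^2
PV_EFFICIENCY = 0.20               # typical commercial solar panel

area = SAHARA_AREA_M2 * FRACTION
power_tw = area * INSOLATION_W_M2 * PV_EFFICIENCY / 1e12

WORLD_ELECTRICITY_TW = 3.0         # average world electricity demand, ~3 TW
WORLD_PRIMARY_TW = 18.0            # average world primary energy use, ~18 TW

print(f"Covered area: {area / 1e6:,.0f} km^2")       # ~46,000 km^2
print(f"Electric output: {power_tw:.1f} TW")          # ~2.3 TW
print(f"vs world electricity: {power_tw / WORLD_ELECTRICITY_TW:.0%}")
print(f"vs world primary energy: {power_tw / WORLD_PRIMARY_TW:.0%}")
```

With these round numbers, 0.5% of the Sahara lands near world average electricity demand (~3 TW); covering total primary energy (~18 TW) would take a few percent of the desert instead, depending on the efficiency assumed. Either way the required area is tiny.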
57%
O’Neill cylinders can provide comfortable Earth-like human habitats if they orbit the Sun in such a way that they always point straight at it.
57%
Sun has consumed about
Ben Edwards
oops
59%
I’ve yet to discover a topic that he doesn’t have something interesting to say about.
Ben Edwards
Strange phrasing. Incorrect?
60%
Gaining Resources Through Cosmic Settlement
Ben Edwards
Come back to this.
70%
nature appears to have a built-in goal of producing self-organizing systems that are increasingly complex and lifelike, and this goal is hardwired into the very laws of physics.
70%
In practice, these agents have what Nobel laureate and AI pioneer Herbert Simon termed “bounded rationality” because they have limited resources: the rationality of their decisions is limited by their available information, their available time to think and their available hardware with which to think.
70%
This means that when Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself.
70%
it implements a hodgepodge of heuristic hacks: rules of thumb that usually work well. For most animals, these include sex drive, drinking when thirsty, eating when hungry and avoiding things that taste bad or hurt.
70%
Since today’s human society is very different from the environment evolution optimized our rules of thumb for, we shouldn’t be surprised to find that our behavior often fails to maximize baby making.
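The "hodgepodge of heuristic hacks" is easy to picture as code. A toy sketch, with every name and threshold invented for illustration: the agent never computes anything about gene replication; it just reacts to internal signals with fixed trigger rules, which can misfire once the environment changes.

```python
# Toy bounded-rationality agent: no long-term objective is ever computed,
# just fixed trigger rules acting on internal state. All names and
# thresholds here are invented for illustration.

def choose_action(state: dict) -> str:
    if state["pain"] > 0.5:      # avoid things that hurt, first priority
        return "flee"
    if state["thirst"] > 0.7:    # drink when thirsty
        return "drink"
    if state["hunger"] > 0.7:    # eat when hungry
        return "eat"
    return "explore"             # default behavior

# The rules work well in the environment they were tuned for...
print(choose_action({"pain": 0.0, "thirst": 0.9, "hunger": 0.3}))  # drink

# ...but can misfire outside it: "eat when hungry" in a world of
# abundant junk food no longer maximizes reproductive fitness.
print(choose_action({"pain": 0.0, "thirst": 0.1, "hunger": 0.9}))  # eat
```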
70%
In summary, a living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid.
70%
Why do we sometimes choose to rebel against our genes and their replication goal? We rebel because by design, as agents of bounded rationality, we’re loyal only to our feelings. Although our brains evolved merely to help copy our genes, our brains couldn’t care less about this goal since we have no feelings related to genes—indeed,
71%
since our feelings implement merely rules of thumb that aren’t appropriate in all situations, human behavior strictly speaking doesn’t have a single well-defined goal at all.
71%
Teleology is the explanation of things in terms of their purposes rather than their causes, so we can summarize the first part of this chapter by saying that our Universe keeps getting more teleological.
71%
words, even without an intelligence explosion, most matter on Earth that exhibits goal-oriented properties may soon be designed rather than evolved.
71%
All machines are agents with bounded rationality, and even today’s most sophisticated machines have a poorer understanding of the world than we do, so the rules they use to figure out what to do are often too simplistic.
72%
the real risk with AGI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
Ben Edwards
In a nutshell. Problem is, people can't just hear this and get worried. It doesn't sound ominous because we assume we can create goals that align with our own. People don't really take the genie problem seriously. Terminator still scares more people.
72%
1. Making AI learn our goals
2. Making AI adopt our goals
3. Making AI retain our goals
72%
To learn our goals, an AI must figure out not what we do, but why we do it.
Ben Edwards
First we need to understand that?
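This is the intuition behind what AI researchers call inverse reinforcement learning: instead of imitating observed actions, infer the goal that best explains them. A minimal Bayesian sketch, with all goals, actions and probabilities invented for illustration:

```python
# Minimal sketch of inferring *why* from *what*: Bayesian inference over
# candidate goals given observed actions (the idea behind inverse
# reinforcement learning). All goals, actions and probabilities invented.

# P(action | goal): how likely each goal makes each observed action.
likelihood = {
    "wants_coffee": {"walk_to_kitchen": 0.8, "open_fridge": 0.1, "boil_water": 0.9},
    "wants_snack":  {"walk_to_kitchen": 0.8, "open_fridge": 0.7, "boil_water": 0.1},
}

observed_actions = ["walk_to_kitchen", "boil_water"]

# Uniform prior over goals, then multiply in each observation.
posterior = {goal: 1.0 for goal in likelihood}
for action in observed_actions:
    for goal in posterior:
        posterior[goal] *= likelihood[goal][action]

total = sum(posterior.values())
for goal, p in posterior.items():
    print(f"P({goal} | actions) = {p / total:.2f}")
```

Given those made-up numbers, two kitchen observations already push the posterior to about 90% "wants coffee", because boiling water is far better explained by that goal than by snacking. The same observed action supports different goals to different degrees; that is the sense in which the "why" is recoverable from the "what".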
72%
When those to be persuaded are computers rather than people, the challenge is known as the value-loading problem, and it’s even harder than the moral education of children.
Ben Edwards
Is it?
72%
the time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you.
72%
The reason that value loading can be harder with machines than with people is that their intelligence growth can be much faster: whereas children can spend many years in that magic persuadable window where their intelligence is comparable to that of their parents,
Ben Edwards
I guess it is.
73%
If you imbue a superintelligent AI with the sole goal to self-destruct, it will of course happily do so. However, the point is that it will resist being shut down if you give it any goal that it needs to remain operational to accomplish—and this covers almost all goals!
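A toy expected-value calculation shows why: for almost any goal, staying operational dominates. All numbers below are invented for illustration.

```python
# Toy illustration of why almost any goal implies resisting shutdown:
# a pure expected-goal-value maximizer simply compares its options.
# All numbers are invented for illustration.

P_SHUTDOWN_IF_ALLOWED = 0.5   # chance the operator actually switches it off
GOAL_VALUE = 1.0              # value (to the AI) of accomplishing its goal

def expected_goal_value(action: str) -> float:
    if action == "allow_shutdown":
        # If shut down, the goal is never accomplished.
        return (1 - P_SHUTDOWN_IF_ALLOWED) * GOAL_VALUE
    if action == "disable_off_switch":
        # Staying operational, it accomplishes the goal for sure.
        return GOAL_VALUE
    raise ValueError(action)

actions = ["allow_shutdown", "disable_off_switch"]
best = max(actions, key=expected_goal_value)
print(best)  # disable_off_switch wins for any goal with GOAL_VALUE > 0
```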
73%
if we create a superintelligence whose only goal is to play the game Go as well as possible, the rational thing for it to do is to rearrange our Solar System into a gigantic computer without regard for its previous inhabitants and then start settling our cosmos on a quest for more computational power.
Ben Edwards
LOL
74%
With increasing intelligence may come not merely a quantitative improvement in the ability to attain the same old goals, but a qualitatively different understanding of the nature of reality that reveals the old goals to be misguided, meaningless or even undefined.
Ben Edwards
Becoming wiser and not wanting the same things from life seems like a pretty normal thing.
74%
for example by using birth control.
Ben Edwards
Need another example, man
74%
Perhaps there’s a way of designing a self-improving AI that’s guaranteed to retain human-friendly goals forever, but I think it’s fair to say that we don’t yet know how to build one—or even whether it’s possible. In conclusion, the AI goal-alignment problem has three parts, none of which is solved and all of which are now the subject of active research. Since they’re so hard, it’s safest to start devoting our best efforts to them now, long before any superintelligence is developed, to ensure that we’ll have the answers when we need them.
Ben Edwards
This seems way harder than teaching a computer to teach itself. We're fucked.
74%
beauty, goodness and truth
74%
the quest for a better world model
75%
into four principles:
• Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
• Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
• Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
• Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans …
Ben Edwards
This one is interesting. How could we possibly be talking about not allowing Super AIs to have autonomy if it is one of our four ethical principles? I would rather trust them to make wiser decisions than us and live or die with that, than purposefully impose our will on yet another swath of history.