Life 3.0: Being Human in the Age of Artificial Intelligence
Kindle Notes & Highlights
Read between December 4, 2024 - August 18, 2025
72%
inverse reinforcement learning,
72%
imbuing
72%
value-loading problem,
72%
corrigibility.
72%
Vernor Vinge
72%
“singularity”—the
72%
Nick Bostrom’s book Superintelligence.
73%
there’s an inherent tension between goal retention and improving its world model, which casts doubts on whether it will actually retain its original goal as it gets smarter.
73%
If you give a superintelligence the sole goal of minimizing harm to humanity, for example, it will defend itself against shutdown attempts because it knows we’ll harm one another much more in its absence through future wars and other follies.
73%
almost all goals can be better accomplished with more resources, so we should expect a superintelligence to want resources almost regardless of what ultimate goal it has. Giving a superintelligence a single open-ended goal with no constraints can therefore be dangerous:
73%
There’s tension between world-modeling and goal retention (see
73%
With increasing intelligence may come not merely a quantitative improvement in the ability to attain the same old goals, but a qualitatively different understanding of the nature of reality that reveals the old goals to be misguided, meaningless or even undefined.
74%
formicine
74%
vapid
74%
we now have an excellent framework for our truth quest: the scientific method. But how can we determine what’s beautiful or good?
75%
if there’s no experience (as in a dead universe or one populated by zombie-like unconscious machines), there can be no meaning or anything else that’s ethically relevant. If we buy into this utilitarian ethical principle, then it’s crucial that we figure out which intelligent systems are conscious (in the sense of having a subjective experience) and which aren’t;
75%
“Pareto-optimality”
75%
nobody can get better off without someone else getting worse off.
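A minimal sketch of that definition in code (my own illustration, not from the book), assuming each outcome is a tuple of utilities, one entry per person: an outcome is Pareto-optimal when no alternative makes someone better off without making anyone worse off.

```python
# Minimal sketch of Pareto-optimality over a finite set of outcomes.
# Each outcome is a tuple of utilities, one entry per person.

def dominates(a, b):
    """True if `a` makes at least one person better off than `b`
    without making anyone worse off."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_optimal(outcomes):
    """Outcomes that no other outcome dominates."""
    return [o for o in outcomes if not any(dominates(p, o) for p in outcomes)]

outcomes = [(3, 3), (4, 2), (2, 4), (2, 2)]
print(pareto_optimal(outcomes))  # [(3, 3), (4, 2), (2, 4)]; (2, 2) is dominated by (3, 3)
```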
75%
“Three Laws of Robotics” devised by sci-fi legend Isaac Asimov:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection doesn’t conflict with the First or Second Laws.
75%
digital life forms,
75%
would we really want people from 1,500 years ago to have a lot of influence over how today’s world is run? If not, why should we try to impose our ethics on future beings that may be dramatically smarter than us?
75%
suicidal pilot Andreas Lubitz flew Germanwings Flight 9525 into a mountain on March 24, 2015—by setting the autopilot to an altitude of 100 feet (30 meters) above sea level and letting the flight computer do the rest of the work.
76%
The Better Angels of Our Nature, Steven Pinker
76%
Nick Bostrom
76%
Superintelligence,
76%
the orthogonality...
76%
that the ultimate goals of a system can be independent of...
76%
Peter Singer
76%
most humans behave unethically for evolutionary reasons,
76%
how can an “ultimate goal” (or “final goal,” as Bostrom calls it) even be defined for a superintelligence?
76%
we can’t have confidence in the friendly-AI vision unless we can answer this crucial question.
76%
a googolplex is 1 followed by 10^100 zeroes—more
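As a quick sanity check (my own illustration): Python's arbitrary-precision integers hold a googol (10^100) exactly, but a googolplex (10^googol) would need a googol-plus-one digits, far more than the commonly cited ~10^80 atoms in the observable universe, so it can be described but never written out.

```python
# A googol is 10**100; a googolplex is 10**googol, i.e. 1 followed by
# 10**100 zeroes.

googol = 10**100
print(len(str(googol)) - 1)           # 100 zeroes after the leading 1

digits_in_googolplex = googol + 1     # one leading 1, then a googol zeroes
print(digits_in_googolplex > 10**80)  # True: more digits than atoms in the observable universe
```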
76%
many systems evolve to maximize their entropy, which in the absence of gravity eventually leads to heat death, where everything is boringly uniform and unchanging.
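A toy check of that "boringly uniform" endpoint (my sketch, using Shannon entropy as a stand-in for thermodynamic entropy): among distributions over the same states, the uniform one maximizes entropy.

```python
import math

# Toy illustration: among distributions over the same four states, the
# uniform one has maximal Shannon entropy -- the "uniform and unchanging"
# heat-death limit the passage describes.

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

clumped = [0.7, 0.1, 0.1, 0.1]  # structured, far from equilibrium
uniform = [0.25] * 4            # maximum-entropy state
print(entropy_bits(clumped))    # ~1.36 bits
print(entropy_bits(uniform))    # 2.0 bits, the maximum for 4 states
```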
77%
Marcus Hutter
77%
Alex Wissner-Gross
77%
Cameron Freer
77%
causal e...
77%
it appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us.
77%
obdurate
77%
To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident.
77%
Intelligence is the ability to accomplish complex goals.
77%
Aligning machine goals with our own involves three unsolved problems: making machines learn them, adopt them and retain them.
77%
A rule of thumb that many insects use for flying in a straight line is to assume that a bright light is the Sun and fly at a fixed angle relative to it. If the light turns out to be a nearby flame, this hack can unfortunately trick the bug into an inward death spiral.
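A toy simulation of that heuristic (my sketch, with made-up parameters): the bug holds a fixed angle between its heading and the bug-to-light direction. With the distant Sun the rays are effectively parallel, so the path is straight; with a nearby flame, any fixed angle under 90 degrees leaves an inward radial component, producing the death spiral.

```python
import math

# Toy model of the "fly at a fixed angle to the light" heuristic.
# With a nearby point source, holding a fixed angle to the bug-to-light
# direction traces a logarithmic spiral into the flame.

def fly(light, start, angle_deg, step=0.05, n_steps=400):
    """Return the bug's distance to the light at each step while it
    holds `angle_deg` between its heading and the bug-to-light direction."""
    x, y = start
    a = math.radians(angle_deg)
    distances = []
    for _ in range(n_steps):
        dx, dy = light[0] - x, light[1] - y
        r = math.hypot(dx, dy)
        if r < step:                      # reached the flame
            break
        distances.append(r)
        ux, uy = dx / r, dy / r           # unit vector toward the light
        hx = ux * math.cos(a) - uy * math.sin(a)   # heading: toward-light
        hy = ux * math.sin(a) + uy * math.cos(a)   # vector rotated by a
        x, y = x + step * hx, y + step * hy
    return distances

d = fly(light=(0.0, 0.0), start=(1.0, 0.0), angle_deg=60)
print(f"start r = {d[0]:.2f}, end r = {d[-1]:.2f}")  # distance shrinks steadily
```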
77%
Christof Koch,
78%
Erwin Schrödinger, “a play before empty benches, not existing for anybody, thus quite properly speaking not existing”?
78%
consciousness = subjective experience
In other words, if it feels like something to be you right now, then you’re conscious.
78%
by this definition, you’re conscious also when you’re dreaming, even though you lack wakefulness or access to sensory input
78%
Similarly, any system that experiences pain is conscious
78%
Our definition leaves open the possibility that some future AI systems...
78%
David Chalmers,