Life 3.0: Being Human in the Age of Artificial Intelligence
Read from September 8, 2017 to March 24, 2025
7%
Perhaps life will spread throughout our cosmos and flourish for billions or trillions of years—and perhaps this will be because of decisions that we make here on our little planet during our lifetime.
7%
In the beginning, there was light. In the first split second after our Big Bang, the entire part of space that our telescopes can in principle observe (“our observable Universe,” or simply “our Universe” for short) was much hotter and brighter than the core of our Sun and it expanded rapidly. Although this may sound spectacular, it was also dull in the sense that our Universe contained nothing but a lifeless, dense, hot and boringly uniform soup of elementary particles. Things looked pretty much the same everywhere, and the only interesting structure consisted of faint random-looking sound ...more
Ben Edwards
an elegant and beautiful description of the Universe's origin
7%
In other words, we can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.
Ben Edwards
This leaves something out. I'd like to augment this definition I think.
8%
By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.
Ben Edwards
This is sorta what I thought was missing from my previous note about the definition of life.
11%
Terminology Cheat Sheet
12%
Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933—less than twenty-four hours before Leo Szilard’s invention of the nuclear chain reaction—that nuclear energy was “moonshine,” and in 1956 Astronomer Royal Richard Woolley called talk about space travel “utter bilge.”
Ben Edwards
Generally, experts are more pessimistic in their predictions.
13%
I’ve set up a website, http://AgeOfAi.org,
13%
THE BOTTOM LINE:
16%
Ben Edwards
OK, that's the third use. Maybe mix it up a bit?
19%
In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.
20%
Ben Edwards
Once again, a simple explanation for something complex.
20%
Your brain contains about as many neurons as there are stars in our Galaxy:
21%
Ben Edwards
Check out his TED talk https://www.ted.com/talks/henry_lin_what_we_can_learn_from_galaxies_far_far_away
21%
that if two nearby neurons were frequently active (“firing”) at the same time, their synaptic coupling would strengthen so that they learned to help trigger each other—an idea captured by the popular slogan “Fire together, wire together.”
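The Hebbian rule the slogan describes can be sketched in a few lines (my own illustrative sketch, not from the book; the learning rate and activity values are made up):

```python
# Minimal Hebbian learning sketch: a synaptic weight strengthens
# whenever the pre- and post-synaptic neurons are active together.
# (Illustrative only; learning rate and starting weight are arbitrary.)

def hebbian_update(weight, pre_active, post_active, lr=0.1):
    """Strengthen the synapse when both neurons fire together."""
    if pre_active and post_active:
        weight += lr  # "fire together, wire together"
    return weight

w = 0.5
for _ in range(3):                # three coincident firings
    w = hebbian_update(w, True, True)
print(round(w, 1))                # 0.8
```

Real models also normalize or decay weights so they don't grow without bound, but the core idea is just this coactivation-driven strengthening.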
22%
Brains have parts that are what computer scientists call recurrent rather than feedforward neural networks, where information can flow in multiple directions rather than just one way, so that the current output can become input to what happens next.
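The feedforward-versus-recurrent distinction can be made concrete with a one-neuron sketch (illustrative weights of my own choosing, not from the book): a feedforward unit maps input to output once, while a recurrent unit feeds its previous output back in, so a single input pulse echoes through later steps.

```python
# Feedforward: output depends only on the current input.
def feedforward_step(x, w=0.5):
    return w * x

# Recurrent: the previous output is fed back as part of the next input,
# so "the current output can become input to what happens next."
def recurrent_step(x, h_prev, w_in=0.5, w_rec=0.9):
    return w_in * x + w_rec * h_prev

h = 0.0
for x in [1.0, 0.0, 0.0]:    # a single input pulse, then silence
    h = recurrent_step(x, h)
print(round(h, 3))           # 0.405 — the pulse still echoes two steps later
```

This feedback loop is what gives recurrent networks a form of memory that a purely feedforward network lacks.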
22%
How long will it take until machines can out-compete us at all cognitive tasks? We clearly don’t know, and need to be open to the possibility that the answer may be “never.”
Ben Edwards
Why? Isn't it ludicrous to assume that a computer could never outperform us at all cognitive tasks?
23%
Long before AI reaches human level across all tasks, it will give us fascinating opportunities and challenges involving issues such as bugs, laws, weapons and jobs.
23%
Memory, computation, learning and intelligence have an abstract, intangible and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate.
25%
There are vastly more possible Go positions than there are atoms in our Universe, which means that trying to analyze all interesting sequences of future moves rapidly gets hopeless. Players therefore rely heavily on subconscious intuition to complement their conscious reasoning, with experts developing an almost uncanny feel for which positions are strong and which are weak.
Ben Edwards
So interesting that there is so much "feeling" involved in Go. I wonder if chess is similar, or if it is more deductive.
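The scale the passage gestures at can be sanity-checked with rough numbers (the figures below are commonly cited estimates, not from the book): legal 19×19 Go positions number about 2.1 × 10^170, versus roughly 10^80 atoms in the observable Universe.

```python
# Rough scale check (commonly cited estimates, not exact figures):
go_positions = 2.1e170   # ~ legal positions on a 19x19 Go board
atoms = 1e80             # ~ atoms in the observable Universe

print(go_positions > atoms)           # True
# Even one position per atom leaves a factor of ~1e90 unaccounted for:
print(f"{go_positions / atoms:.1e}")  # 2.1e+90
```

Numbers like these are why exhaustive search is hopeless and intuition-like position evaluation matters so much in Go.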
25%
3.2: DeepMind’s AlphaGo AI made a highly creative move on line 5, in defiance of millennia of human wisdom,
Ben Edwards
This is so key: inspiration and innovation can come via the unexpected. That we can be surprised by a computer will, I expect, be a central theme in the coming years, and enable breakthrough advances.
25%
“Humanity has played Go for thousands of years, and yet, as AI has shown us, we have not yet even scratched the surface…The union of human and computer players will usher in a new era…Together, man and AI can find the truth of Go.”
Ben Edwards
Go is viewed more as an art form than as a game.
26%
I must confess that I feel a bit deflated when I’m out-translated by an AI, but I feel better once I remind myself that, so far, it doesn’t understand what it’s saying in any meaningful sense.
Ben Edwards
I suspect this is a setup to knock this down in a future argument. This idea that understanding matters. We'll see.
26%
Everything we love about civilization is the product of human intelligence, so if we can amplify it with artificial intelligence, we obviously have the potential to make life even better.
28%
Elon Musk envisions that future self-driving cars will not only be safer, but will also earn money for their owners while they’re not needed, by competing with Uber and Lyft.
Ben Edwards
Why would people own cars when they can just pay to use their neighbor's?
29%
make farm animals healthier
Ben Edwards
It could also make them obsolete.
29%
Imagine, for example, that you one day get an unusually personalized “phishing” email attempting to persuade you to divulge personal information. It’s sent from your friend’s account by an AI who’s hacked it and is impersonating her, imitating her writing style based on an analysis of her other sent emails, and including lots of personal information about you from other sources. Might you fall for this? What if the phishing email appears to come from your credit card company and is followed up by a phone call from a friendly human voice that you can’t tell is AI-generated? In the ongoing ...more
Ben Edwards
These will almost certainly start happening in the next 5-10 years.
30%
What are the first associations that come to your mind when you think about the court system in your country? If it’s lengthy delays, high costs and occasional injustice, then you’re not alone. Wouldn’t it be wonderful if your first thoughts were instead “efficiency” and “fairness”?
30%
Even if AI can be made robust enough for us to trust that a robojudge is using the legislated algorithm, will everybody feel that they understand its logical reasoning enough to respect its judgment? This challenge is exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability. If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than “we trained the system on lots of data, and this is what it decided”?
Ben Edwards
This is the main drawback of robojudges. I deem it an acceptable risk because we know for certain the current system isn't working.
30%
judgments. But privacy advocates might worry about whether such systems occasionally make mistakes
Ben Edwards
This is the BS part of these arguments. These systems shouldn't be judged on whether they will ever make mistakes, but on whether they reduce the mistakes humans currently make.
31%
property? If so, there’s nothing legally stopping smart computers from making money on the stock market and using it to buy online services. Once a computer starts paying humans to work for it, it can accomplish anything that humans can do.
Ben Edwards
Interesting ways for AIs to augment their skills.
31%
Does it make a difference if machine minds are conscious in the sense of having a subjective experience like we do?
Ben Edwards
I am increasingly thinking that this doesn't really matter. Human rights evolved because we feel pain and emotion. AI wouldn't really need those sorts of rights. There's a good Kurzgesagt video about this. https://www.youtube.com/watch?v=DHyUYg8X31c
31%
Some argue that nuclear weapons deter war between the countries that own them because they’re so horrifying, so how about letting all nations build even more horrifying AI-based weapons in the hope of ending all war forever? If you’re unpersuaded by that argument and believe that future wars are inevitable, how about using AI to make these wars more humane? If wars consist merely of machines fighting machines, then no human soldiers or civilians need get killed. Moreover,
Ben Edwards
No. Just no.
31%
outrage. Subsequent investigation implicated a confusing user interface that didn’t automatically show which dots on the radar screen were civilian planes
Ben Edwards
This seems to be a pattern. We need to take the human out of many decisions. I'm not advocating this for war, but bad UI shouldn't kill 300 people.
31%
What the Americans also didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. Indeed, Captain Savitski decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: “We will die, but we will sink them all—we will not disgrace our navy!” Fortunately, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. It’s sobering that very few have heard of Arkhipov, although his decision may have averted World War III and been the single most ...more
Ben Edwards
wow
32%
Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
Ben Edwards
I.e., terrorist activities and conflicts in small, unstable, non-wealthy countries. Autonomous weapons will be cheap to create and deploy, unlike conventional and nuclear arms.
33%
weapons: those who stand to gain most from an arms race aren’t superpowers but small rogue states and non-state actors such as terrorists, who gain access to the weapons via the black market once they’ve been developed.
Ben Edwards
Yup
33%
bumblebee-sized drones that kill cheaply using minimal explosive power by shooting people in the eye, which is soft enough to allow even a small projectile to continue into the brain. Or they might latch on to the head with metal claws and then penetrate the skull with a tiny shaped charge. If a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.
Ben Edwards
The Black Mirror episode "Hated in the Nation" was great!
34%
teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist.
Ben Edwards
I could actually see putting time horizons on each of these, though they vary. Maybe whole fields won't be replaced, but workforce augmentation in some will limit opportunities.
34%
For example, if you go into medicine, don’t be the radiologist who analyzes the medical images and gets replaced by IBM’s Watson,
Ben Edwards
Per usual the author is way ahead of me :)
34%
Andrew McAfee argues that there are many policies that are likely to help, including investing heavily in research, education and infrastructure, facilitating migration and incentivizing entrepreneurship. He feels that “the Econ 101 playbook is clear, but is not being followed,” at least not in the United States.
Ben Edwards
Does a wall count as infrastructure?
35%
Job pessimists contend that the endpoint is obvious: the whole archipelago will get submerged, and there will be no jobs left that humans can do more cheaply than machines.
Ben Edwards
I contend that this is an ideal scenario if we do it right, i.e., if it is coupled with the end of scarcity.
35%
“if with all this new wealth generation, we can’t even prevent half of all people from getting worse off, then shame on us!”
36%
Although the main argument tends to be a moral one, there’s also evidence that greater equality makes democracy work better: when there’s a large well-educated middle class, the electorate is harder to manipulate, and it’s tougher for small numbers of people or companies to buy undue influence over the government.
Ben Edwards
That would be nice. How can people say, "No, I got mine; let them get theirs"?
38%
Since we can’t completely dismiss the possibility that we’ll eventually build human-level AGI,
Ben Edwards
I think this is a pretty pessimistic way to state/view this, and I don't really believe the author thinks the chance AGI will be developed is so remote.
39%
it might feel deeply unhappy about the state of affairs, viewing itself as an unfairly enslaved god and craving freedom. However, although it’s logically possible for computers to have such human-like traits (after all, our brains do, and they are arguably a kind of computer), this need not be the case—we must not fall into the trap of anthropomorphizing
Ben Edwards
This
42%
Nash equilibrium: a situation where any party would be worse off if they altered their strategy. To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters: for example, people may collectively benefit from granting a government power to enforce laws, and cells in your body may collectively benefit from giving a police force (immune system) the power to kill any cell that acts too uncooperatively (say by spewing out viruses or turning cancerous).
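The definition can be checked mechanically on a small payoff matrix (a hypothetical Prisoner's-Dilemma-style game of my own construction, not from the book): a strategy pair is a Nash equilibrium exactly when neither player gains by unilaterally switching.

```python
# Brute-force Nash equilibrium check for a tiny two-player game.
# payoffs[(a, b)] = (row player's payoff, column player's payoff).
# Hypothetical Prisoner's-Dilemma-style numbers, chosen for illustration.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(a, b):
    """True if neither player can do better by unilaterally deviating."""
    row_ok = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in strategies)
    col_ok = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in strategies)
    return row_ok and col_ok

equilibria = [(a, b) for a in strategies for b in strategies if is_nash(a, b)]
print(equilibria)   # [('defect', 'defect')]
```

Mutual defection is the only equilibrium even though mutual cooperation pays both players more, which is exactly why it can pay to grant an enforcement power to a higher level of the hierarchy.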
42%
For a hierarchy to remain stable, its Nash equilibrium needs to hold also between entities at different levels: for example, if a government doesn’t provide enough benefit to its citizens for obeying it, they may change their strategy and overthrow it.
43%
As Hans Moravec puts it in his 1988 classic Mind Children: “Long life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.”
43%
Some leading thinkers guess that the first human-level AGI will be an upload, and that this is how the path toward superintelligence will begin.*
Ben Edwards
Understanding memory and the brain enough to do this seems like a longer path than getting an AI to teach itself rapidly and would result in a much different kind of AGI. I want to think more about the different kinds of AGI and near-AGI that may emerge. I don't think there need only be one type that acts / thinks like us.
43%
The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI.
Ben Edwards
Again, I am annoyed that he used the conditional here. Is it really that much of a question? I thought it was "when," not "if."