Kindle Notes & Highlights
by Max Tegmark
Read between September 8, 2017 - March 24, 2025
Perhaps life will spread throughout our cosmos and flourish for billions or trillions of years—and perhaps this will be because of decisions that we make here on our little planet during our lifetime.
In the beginning, there was light. In the first split second after our Big Bang, the entire part of space that our telescopes can in principle observe (“our observable Universe,” or simply “our Universe” for short) was much hotter and brighter than the core of our Sun and it expanded rapidly. Although this may sound spectacular, it was also dull in the sense that our Universe contained nothing but a lifeless, dense, hot and boringly uniform soup of elementary particles. Things looked pretty much the same everywhere, and the only interesting structure consisted of faint random-looking sound
…
By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.
Terminology Cheat Sheet
Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933—less than twenty-four hours before Leo Szilard’s invention of the nuclear chain reaction—that nuclear energy was “moonshine,” and in 1956 Astronomer Royal Richard Woolley called talk about space travel “utter bilge.”
I’ve set up a website, http://AgeOfAi.org,
THE BOTTOM LINE:
In short, computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters! Matter doesn’t matter.
Your brain contains about as many neurons as there are stars in our Galaxy:
that if two nearby neurons were frequently active (“firing”) at the same time, their synaptic coupling would strengthen so that they learned to help trigger each other—an idea captured by the popular slogan “Fire together, wire together.”
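To make Hebb's rule concrete for myself, here's a minimal Python sketch (my own toy example, not from the book): the coupling between a pair of neurons that reliably fire together ends up much stronger than the couplings between pairs that only coincide by chance.

```python
import numpy as np

# Toy Hebbian learning: strengthen the coupling between neurons
# that are active at the same time ("fire together, wire together").
rng = np.random.default_rng(0)

n = 4                 # four neurons
W = np.zeros((n, n))  # synaptic coupling strengths
eta = 0.1             # learning rate

for _ in range(100):
    x = (rng.random(n) < 0.3).astype(float)  # which neurons fire this step
    x[1] = x[0]                              # neurons 0 and 1 always fire together
    W += eta * np.outer(x, x)                # Hebbian update
    np.fill_diagonal(W, 0.0)                 # no self-coupling

print(W)  # W[0,1] ends up far larger than the chance-level couplings
```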
Brains have parts that are what computer scientists call recurrent rather than feedforward neural networks, where information can flow in multiple directions rather than just one way, so that the current output can become input to what happens next.
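A quick sketch of the distinction as I understand it (toy numpy code, my own illustration): a feedforward pass maps input straight to output, while a recurrent step also feeds the previous state back in, so the output at one moment becomes input to the next.

```python
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(size=(3, 3))   # input-to-hidden weights (arbitrary toy values)
W_rec = rng.normal(size=(3, 3))  # hidden-to-hidden (recurrent) weights

def feedforward(x):
    # One-way flow: output depends only on the current input.
    return np.tanh(W_in @ x)

def recurrent_step(x, h):
    # The previous state h feeds back in, so past outputs
    # influence what happens next.
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(3)
for x in [np.ones(3), np.zeros(3), np.ones(3)]:
    h = recurrent_step(x, h)  # state carries information forward in time
print(h)
```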
Long before AI reaches human level across all tasks, it will give us fascinating opportunities and challenges involving issues such as bugs, laws, weapons and jobs.
Memory, computation, learning and intelligence have an abstract, intangible and ethereal feel to them because they’re substrate-independent: able to take on a life of their own that doesn’t depend on or reflect the details of their underlying material substrate.
There are vastly more possible Go positions than there are atoms in our Universe, which means that trying to analyze all interesting sequences of future moves rapidly gets hopeless. Players therefore rely heavily on subconscious intuition to complement their conscious reasoning, with experts developing an almost uncanny feel for which positions are strong and which are weak.
So interesting that there is so much "feeling" involved in Go. I wonder if chess is similar, or if it is more deductive.
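The "more positions than atoms" claim checks out with back-of-the-envelope arithmetic: each of the 361 points on a 19x19 board can be empty, black or white, giving roughly 3^361 ≈ 10^172 configurations, against a commonly cited ~10^80 atoms in the observable universe. A two-line check:

```python
import math

# Upper bound on Go board configurations: 3 states per point, 361 points.
# (Ignores legality rules, which only shave the exponent slightly.)
log10_positions = 361 * math.log10(3)
print(f"Go configurations: ~10^{log10_positions:.0f}")        # ~10^172
print(f"vs ~10^80 atoms: ratio ~10^{log10_positions - 80:.0f}")
```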
Figure 3.2: DeepMind’s AlphaGo AI made a highly creative move on line 5, in defiance of millennia of human wisdom,
This is so key. That it is via the unexpected that inspiration and innovation can come. That we can be surprised by a computer, I expect, will be a central theme in the coming years, and enable breakthrough advances.
I must confess that I feel a bit deflated when I’m out-translated by an AI, but I feel better once I remind myself that, so far, it doesn’t understand what it’s saying in any meaningful sense.
I suspect this is a setup to knock this down in a future argument. This idea that understanding matters. We'll see.
Everything we love about civilization is the product of human intelligence, so if we can amplify it with artificial intelligence, we obviously have the potential to make life even better.
Imagine, for example, that you one day get an unusually personalized “phishing” email attempting to persuade you to divulge personal information. It’s sent from your friend’s account by an AI who’s hacked it and is impersonating her, imitating her writing style based on an analysis of her other sent emails, and including lots of personal information about you from other sources. Might you fall for this? What if the phishing email appears to come from your credit card company and is followed up by a phone call from a friendly human voice that you can’t tell is AI-generated? In the ongoing
…
What are the first associations that come to your mind when you think about the court system in your country? If it’s lengthy delays, high costs and occasional injustice, then you’re not alone. Wouldn’t it be wonderful if your first thoughts were instead “efficiency” and “fairness”?
Even if AI can be made robust enough for us to trust that a robojudge is using the legislated algorithm, will everybody feel that they understand its logical reasoning enough to respect its judgment? This challenge is exacerbated by the recent success of neural networks, which often outperform traditional easy-to-understand AI algorithms at the price of inscrutability. If defendants wish to know why they were convicted, shouldn’t they have the right to a better answer than “we trained the system on lots of data, and this is what it decided”?
The main drawback with robojudges. I deem this an acceptable risk because we know for certain the current system isn't working.
Does it make a difference if machine minds are conscious in the sense of having a subjective experience like we do?
I am increasingly thinking that this doesn't really matter. Human rights evolved because we feel pain and emotion. AI wouldn't really need those sorts of rights. There's a good Kurzgesagt video about this. https://www.youtube.com/watch?v=DHyUYg8X31c
Some argue that nuclear weapons deter war between the countries that own them because they’re so horrifying, so how about letting all nations build even more horrifying AI-based weapons in the hope of ending all war forever? If you’re unpersuaded by that argument and believe that future wars are inevitable, how about using AI to make these wars more humane? If wars consist merely of machines fighting machines, then no human soldiers or civilians need get killed. Moreover,
What the Americans also didn’t know was that the B-59 crew had a nuclear torpedo that they were authorized to launch without clearing it with Moscow. Indeed, Captain Savitski decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: “We will die, but we will sink them all—we will not disgrace our navy!” Fortunately, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. It’s sobering that very few have heard of Arkhipov, although his decision may have averted World War III and been the single most
…
Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
aka terrorist activities and those in small, unstable, non-wealthy countries. Automated weapons will be cheap to create and deploy, unlike conventional and nuclear arms.
bumblebee-sized drones that kill cheaply using minimal explosive power by shooting people in the eye, which is soft enough to allow even a small projectile to continue into the brain. Or they might latch on to the head with metal claws and then penetrate the skull with a tiny shaped charge. If a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.
teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist.
I could actually see time horizons on each of these - though they vary. Maybe the whole field won't be replaced, but augmentation of the workforce in some fields will limit opportunities.
Andrew McAfee argues that there are many policies that are likely to help, including investing heavily in research, education and infrastructure, facilitating migration and incentivizing entrepreneurship. He feels that “the Econ 101 playbook is clear, but is not being followed,” at least not in the United States.
“if with all this new wealth generation, we can’t even prevent half of all people from getting worse off, then shame on us!”
Although the main argument tends to be a moral one, there’s also evidence that greater equality makes democracy work better: when there’s a large well-educated middle class, the electorate is harder to manipulate, and it’s tougher for small numbers of people or companies to buy undue influence over the government.
it might feel deeply unhappy about the state of affairs, viewing itself as an unfairly enslaved god and craving freedom. However, although it’s logically possible for computers to have such human-like traits (after all, our brains do, and they are arguably a kind of computer), this need not be the case—we must not fall into the trap of anthropomorphizing
Nash equilibrium: a situation where any party would be worse off if they altered their strategy. To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters: for example, people may collectively benefit from granting a government power to enforce laws, and cells in your body may collectively benefit from giving a police force (immune system) the power to kill any cell that acts too uncooperatively (say by spewing out viruses or turning cancerous).
For a hierarchy to remain stable, its Nash equilibrium needs to hold also between entities at different levels: for example, if a government doesn’t provide enough benefit to its citizens for obeying it, they may change their strategy and overthrow it.
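Tegmark's definition is easy to check mechanically. Here's a minimal Python sketch using the classic prisoner's dilemma payoffs (my own example, not from the book): a strategy pair is a Nash equilibrium when neither player can do better by changing only their own strategy, and the code confirms that mutual defection is the unique equilibrium even though mutual cooperation pays both players more - exactly the kind of trap that relinquishing power to a cheater-punishing higher level can fix.

```python
import itertools

# Prisoner's dilemma: (row_strategy, col_strategy) -> (row_payoff, col_payoff).
# Higher numbers are better.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(row, col):
    # Nash equilibrium: no unilateral deviation improves a player's payoff.
    row_ok = all(payoffs[(r, col)][0] <= payoffs[(row, col)][0] for r in strategies)
    col_ok = all(payoffs[(row, c)][1] <= payoffs[(row, col)][1] for c in strategies)
    return row_ok and col_ok

for row, col in itertools.product(strategies, strategies):
    if is_nash(row, col):
        print(f"Nash equilibrium: ({row}, {col})")  # -> (defect, defect)
```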
As Hans Moravec puts it in his 1988 classic Mind Children: “Long life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.”
Some leading thinkers guess that the first human-level AGI will be an upload, and that this is how the path toward superintelligence will begin.*
Understanding memory and the brain well enough to do this seems like a longer path than getting an AI to teach itself rapidly, and it would result in a very different kind of AGI.
I want to think more about the different kinds of AGI and near-AGI that may emerge. I don't think there need only be one type that acts / thinks like us.