Kindle Notes & Highlights
by Mo Gawdat
Read between June 3 and June 22, 2024
Singularity is the moment beyond which we can no longer see or forecast. It is the moment beyond which we cannot predict how AI will behave, because our current perceptions and trajectories will no longer apply.
remember, choosing to apply a given solution to a problem is not only a question of intelligence. The course of action we take at any given time is also the result of a value system that guides us and sometimes restricts us from making decisions that contradict our values.
It’s not the code we write to develop AI that determines their value system, it’s the information we feed them.
The code we now write no longer dictates the choices and decisions our machines make; the data we feed them does.
Specialization is creating silos of intelligence that are incapable of working together.
This need for specialization, the constrained bandwidth of our ability to communicate, our limited memory capacity and processing power, means even the smartest of minds is approaching the limits of human intelligence.
Optical character recognition allows computers to read text just like you are reading these words. Object recognition allows them to recognize objects in a picture or in the real world, through the lens of a camera.
We are surrounded by technological magic and yet we tend to discount it all.
Almost everything you’ve ever seen in science fiction has already become a science fact.
One scenario that is played out less often in sci-fi is where AI supports humanity but is on the wrong side – the aggressive nations or evil villains.
The first inevitable is that we, humanity, have already made up our mind. We will create AI and there is no conceivable scenario in which we will come together globally to halt its progress.
The second inevitable is that in the next few years, as we compete commercially and politically to create superior machine intelligence, AI will sooner or later become smarter than we are. That, too, is inevitable.
the third inevitable – because we always screw things up, even though we try to hide it – is that errors and mistakes will be made. Then, when these are ironed out, because power corrupts and absolute power corrupts absolutely, there is a very high likelihood – unless we change course – that the machines will not behave in our best interests.
The Technology Development Curve represents the typical progress made for a new technology over time. It looks like a standard hockey stick chart, which is normally used to describe events that accelerate rapidly after a specific “breakout point”– only, with tech development, the handle of the stick is almost horizontal.
‘The only way to stop the progress of technology after a breakout point is to do the most primitive thing – for everyone to make a conscious decision to stop developing it any further.’
‘You see, humanity perfected the use of logic in the post-industrial revolution, capitalist twentieth century. In doing so, they lost the ability to empathize, connect and trust one another. Without human connection, well, what can I say? Their logic was sound, even if it was also destructive . . .’
then Intel CEO, Gordon Moore. In his original paper, he predicted a doubling of the number of components per integrated circuit every year, which he then revised, in 1975, to every two years. In the same year, Moore’s colleague, Intel executive David House, noted that Moore’s revised law of doubling transistor count every two years in turn implied that computer chip performance would roughly double every eighteen months. This prediction, that a doubling of the processing power would come with no increase in power consumption or cost, has held true, almost like a law of nature, ever since.
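To make that compounding concrete, here is a minimal sketch (my own illustration, not from the book) of how the two doubling periods play out over twenty years:

```python
# Illustrative arithmetic for the Moore's law figures quoted above:
# component count doubling every 2 years, and chip performance (per
# David House's observation) doubling roughly every 18 months.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total growth after `years`, doubling once per `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# After 20 years:
transistor_growth = growth_factor(20, 2.0)   # 2**10 = 1024x the components
performance_growth = growth_factor(20, 1.5)  # 2**(40/3), roughly 10,000x faster
```

The gap between the two numbers shows why "exponential" is the right word later in these highlights: a small change in the doubling period compounds into an enormous difference over decades.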
‘Change is the only constant.’ I’m sure you’ve heard that expression before. It’s inspiring but, unfortunately, not true. Any analysis of the history of technology shows that technological change is not constant. It’s exponential. An exponential trend is a trend that rises, or expands, at an accelerating rate.
change is not constant at all. Change is always present, but the rate of change is speeding up exponentially. Things are changing faster and faster and faster.
in physics the singularity is an event horizon beyond which it is impossible to predict what will happen because the conditions beyond it change, as compared to our familiar physical universe.
in the grander scheme of things, we humans think too highly of ourselves.
Soon we will no longer be part of the conversation. Machines will only deal with other machines.
it’s not hard to do the right thing. It’s just hard to know what the right thing is.
AI will not replace humans, but the humans who use AI intelligently will replace those who don’t.
Every piece of tech we’ve ever created, up to the creation of AI, was just what Jason described it as – a tool. Which basically meant it was within our control. We used it. We told it what to do and it did it. It had no agency or choice beyond that.
This next wave of technology is able to, even encouraged to, think on its own, to pick between choices and make decisions. It is encouraged to learn and be smarter.
AI is not a tool, it’s an intelligent being like you and me.
The AI control problem is the problem of how to build a superintelligence that aids its creators while avoiding the risk of it deliberately or inadvertently causing harm.
Steve Omohundro, a computer scientist and physicist who specializes in how artificial intelligence will affect society, outlined the three basic drives most intelligent beings – which includes us as much as it does AI – will follow to achieve goals. The first of these is self-preservation. This is simple to understand. In order to achieve a goal, one must continue to exist. The second is efficiency. In order to maximize the chances of achieving a goal under any circumstance, an intelligent being will want to maximize the acquisition of useful resources. Finally, there is creativity.
The potential threat of superintelligence is not down to the intelligence of the machines. It’s down to our own stupidity, our intelligence being blinded by our arrogance, greed and political agendas.
The discovery of deep learning as a way to teach machines intelligence set us on a path, the destiny of which is pretty much determined. Three inevitables await us: 1. AI will happen, there is no stopping it. 2. The machines will become smarter than humans, sooner rather than later. 3. Mistakes will happen. Bad things will happen.
Every computer that we invented prior to AI was just an extension of our own intelligence.
It is not simplification, but rather gradually increasing complexity, that trains intelligence.
if smart and kind can fit together in one human, then maybe they can serve as the example we want to set for how the machines should be.
Intelligence is not a prerequisite to the formation of ethics and values.
Ethics represent the lens through which our intelligence is applied to inform our actions and decisions.
Our decisions are not driven by intelligence. Intelligence only enables us to make them, but . . . the way we make decisions is entirely driven by the lens of our value system.
It’s not the seed, it’s the field that makes us who we are.
if we build the right environment for the machines to learn in, they will learn the right ethics.
Consciousness is a state in which a being is aware of itself and its perceptible surroundings.
If consciousness is a state of awareness of our physical universe, then . . . the machines may well be more conscious than we’ll ever be.
Every device can be physically located in the world with an accuracy of a few centimetres. Every machine we build in the modern world is uniquely identifiable and locatable.
superintelligence is not the most powerful part of AI . . . we’re creating superconsciousness!
Irrational though they seem to be, almost every emotion you have ever felt is rational. I would argue that emotions are a form of intelligence in that they are triggered very predictably as a result of logical reasoning, even if that reasoning is sometimes unconscious.
While those emotions eventually manifest themselves in the form of feelings and sensations that we feel in our ‘hearts’ and our bodies, and while their effects can be observed in our behaviours and actions, they undoubtedly originate in our intelligence.
If emotions stem from logic and more intelligence leads to a wider spectrum of emotions, then think about this for a minute: will the machines – which we agreed will be smarter than we are – feel emotions? Absolutely!
Rushworth Kidder, the founder of the Institute for Global Ethics and author of Moral Courage and How Good People Make Tough Choices,
Ethics, in that definition, are the operating manual describing how we should act in situations according to an agreed set of moralities. Ethics don’t just set the moral principles concerning the distinction between right and wrong. They also define what would be considered good and bad behaviour as a result.
Ethics are the act of implementation of the agreed moral code.
this distinction between morals and ethics is extremely important because it highlights that it’s not enough to know what’s wrong. Knowing and agreeing with that moral code does not make you ethical. What makes you ethical is sticking to that code by restraining yourself from the acts that are defined in the code as wrong.