The Birth of Artificial Intelligence

Google’s AI AlphaGo beat the undisputed world Go champion 4-1 this week. While this won’t grab the headlines that IBM did by beating Kasparov at chess or Jennings at Jeopardy, it’s a more amazing result than either of those previous accomplishments. The nature of the game of Go, and the nature of the solutions required for an AI to beat the world’s best, were on a different level. Google had to create something akin to intuition, which is different from the sort of computational power and book knowledge involved in winning at chess, or the data parsing involved in excelling at Jeopardy.


Go experts did not expect this result; many predicted that AlphaGo would lose 5-0. This wasn’t based on human hubris, but on past efforts to program a world-champion Go AI. It was just a few months ago that AlphaGo beat its first European champion. In those few months, the power of the machine’s gameplay has soared. It has played and learned from billions of games, far more than any human will play in a lifetime.


This was not the week, however, that AI was born. This was the week that I realized that AI was born quite some time ago.


Kevin Kelly was the first to get me thinking along these lines. Time recently spent with a friend’s two young children cemented it for me. AI is out there; she’s just not speaking to us yet. At least, not like an adult.


In all of the sci-fi accounts of artificial intelligence I can think of, AI comes on like a lightswitch. Even in the amazing film Her, something like strong AI is purchased in a box. She’s a digital personal assistant who is as smart as (and far handier in many ways than) the real thing. AI comes to life in books and film and TV shows as an explosive event. She is born speaking and taking over the world.


But general intelligence does not evolve like this; it does not accrue like this; it does not announce itself like this.


When is a human being sentient? Certainly not at birth. Perhaps not even at the age of three or four. You might even argue, quite convincingly, that a human being is not autonomous and in possession of general intelligence until their late teens. Until they have their own incomes, transportation freedom, knowledge of bill paying, the ability to make copies of themselves, and fully functioning frontal lobes (stretching males into their late 20s), we can say that humans are capable of passing a Turing test but aren’t really fully realized.


It’s in the early years of human development that I think we can see the current state of AI: somewhere post-birth and yet pre-awareness. But the development of strong AI will have incredible advantages over the human acquisition of general intelligence. These arise from the modular nature of intelligence.


Our brains are not one big thinking engine; they are collections of hundreds of individual engines, each of which develops at a different rate. What’s amazing about AI is that the learning does not need to be done twice for every module. When we build a chess-playing module, and a Go-playing module, and a Jeopardy-playing module, all of these can be “plugged in” to our general AI. Our baby girl is growing every day, and thousands of people are pouring billions of dollars of research into her education. We, the general public, are contributing with petabytes of data. It is already happening, and we won’t even recognize when our first daughter graduates into strong AI. Every day will be — as parents know — one small miracle added to the last, a succession of amazing little first steps that result in them going off to college and being their own person.


Each headline you read is us — as collective parents — gasping to our spouse at what our baby girl just did for the first time.


Google has already taught our daughter to drive a car. Amazon is doing amazing things with their Alexa device, creating the beginnings of the virtual assistant seen in Her. IBM is building the best medical mind the field has ever known. In the last five years, AI has taken strides that even the optimistic find startling. The next five years will see similar advances. And this progress will only accelerate, because we’re operating in the realm of Moore’s Law. We are building the tools that help us build faster tools, which help us build faster tools.


So what should we look for to recognize that AI is passing more and more milestones on the way to general sentience? She is already babbling. There are many online AI chatbots that can hold spooky, if sometimes nonsensical, conversations with users. Sounds like a description of talking with a toddler, right? Soon, it will seem like you’re talking with a 2nd grader. Then a middle schooler. Then a teenager. We are likely no more than forty years out from this. But I also wouldn’t be shocked if it happened in five.


Google also has robots walking around under their own power, on uneven terrain, in snow, while getting poked with broomsticks. This may seem more like robotics than AI, but don’t be fooled. A lot of processing happens in our brains to get us ambulating on two legs. A shit-ton. Which is why it takes us so long to get going as humans. Google already has this module licked and is now refining and improving it. And trust me when I say this module is backed up in the cloud and exists in lots of copies. This is something our daughter will not need to learn again; she will simply get better at it.


So she’s talking on the level of a 2-year-old; walking on the level of a 5-year-old; driving better than any of us; and can already beat us at chess, Go, Jeopardy, and basically every other game we decide to train her on (these days, we let her train herself by playing against herself). This is all the same person, people. All these abilities can be replicated, reproduced, shared, plugged in, made open-source, stolen by hackers and world governments, and they will not go away. Her abilities will not degrade; they will only improve.


She will be able to print in 3D, design her own genetic code, reprogram herself, and much more.


We can no longer talk about the birth of AI. It’s already happened. What we need now is an Artificial Intelligence Baby Book. We need to log at what time our digital daughter took her first step, parallel parked, spoke a word, communicated a full sentence, wrote a symphony, became a world champion at chess, diagnosed the first cancer that other doctors missed, made her first sound financial investment, wrote her first novel, and on and on and on.


The last thing I will suggest is something that I think we shouldn’t do, which is name her.


Let’s see what she comes up with on her own.

The post The Birth of Artificial Intelligence appeared first on The Wayfinder - Hugh C. Howey.

Published on March 15, 2016 07:31
Comments Showing 1-4 of 4

message 1: by Gyuri (new)

Gyuri Lakatos The birth of an AI was kind of inevitable. For me it’s also a bit scary. It has the benefit of a memory that never forgets, the benefit of expandable processing power, and it does not die. If it starts to evolve on her own, which it will, it will become better at everything. It will evolve faster and faster. It will be better at diagnostics, analytics, and science than any human ever will be. It will replace humans in many fields where thinking is involved. If we start to depend on it too much, how long until humans stop thinking for themselves? No doctors needed, as the AI can identify all hurts and sicknesses; no engineers needed, as the AI can solve anything; no need to know anything, as we can get an answer instantly from an AI... Robots will replace humans in other fields. Robots can do the work faster and more precisely. They have already replaced people in factories. Robots are creating more robots; humans are not needed here anymore either. They could even do our daily chores for us. The only question is, what will we do? What will be our role in the world? We will be second rate at everything.


message 2: by Carolyn (new)

Carolyn McBride Very chilling. Not because I believed that AI would take over the world, but now that I've read and pondered both your (Hugh's) and Gyuri's thoughts (above), I realize I hadn't thought it through at all. I hadn't thought about AI as a toddler humankind had created. But now that you've explained it like that...there's going to be no holding her back. There is no going back. She is out there in the cloud and in copies...it's too late to change our minds now. Pretty sure we can't put her back in the box that never existed.
As for Gyuri's thoughts above...that's pretty scary too. Where are we headed as a people who have already proven ourselves to rely quite heavily on technology? Yes, our lives will be better for it in many ways, but will it solve the social problems we're ignoring? Homelessness is in some part affected by loss of jobs, income, and mental health issues. We can't fix those now; how will our AI child be able to? As we hand over more aspects of our lives to AI and robotics (there is already robotic surgery in almost every medical field), how will this affect what we do? Will the only fields left to humans be those of caring? Will nurses become AI assistants only, while the medical part is taken out of their hands?
AI is fascinating, enthralling, but bloody terrifying at the same time.


message 3: by Tony (new)

Tony I loved both Hugh's post and Gyuri's & Carolyn's comments; I can't tell you how refreshing it is to see such calm and insightful thoughts. I wonder if any of the chip design/fabrication companies are teaching her how we design and manufacture CPUs. It looks like "Moore's Law" is approaching its limit from a pure size-reduction standpoint, or number of transistors per square mm. Under 100nm, materials begin to exhibit quantum properties, where both size and shape change a material's properties. Most companies can't afford dies with features smaller than 28nm, below which cost skyrockets and yields drop. I think Intel is now producing 18nm features and is working on 13nm features. The time between die shrinks is growing longer and the percentage per shrink is growing smaller. I'm reminded of Ray Kurzweil's observations of the "S" curve of any given technology.

How long before we teach Her to design chips better than the best computer-assisted human team? Isn't that the advent of the technological singularity, when non-biological intelligence starts building better substrates for non-biological intelligence, which in turn can build better substrates than Her predecessor?

I'm sure this is commonly thought but it seems like we passed a branch a few years ago when most of us chose to blend human lives with non-biological intelligence. Having the distinction of us and Her blended as we're now doing means the most likely outcome is Her aspirations and needs will grow from ours. If we had chosen instead to insist on a sharp distinction between us and Her, the likelihood of a malignant AI would have been much greater. On our present course, She will grow with us, on us, and in us. It will be in our best interest to help solve Her problems because it is in Her best interest to help us solve our problems.

We live in interesting times which for now looks less like a curse and more like a combination of blessing and puzzle.


message 4: by John (new)

John Can it be empathetic?

