Poll

Do you believe A.I. (artificial intelligence) will ever be more advanced/sophisticated than human intelligence?

YES
 
  284 votes 54.2%

NO
 
  169 votes 32.3%

UNSURE
 
  71 votes 13.5%

524 total votes

Poll added by: James



Comments Showing 1-50 of 147


message 1: by James (last edited May 06, 2015 02:32PM) (new)

James Morcan Admittedly, the question for this poll was very tricky to word properly. What I was trying to convey was whether AI will ever surpass humanity "overall" - meaning AI might be able to beat human intelligence in isolated fields like a game of chess, but can AI ever be more advanced in almost every other intellectual field?

Anyway, I voted NO in this poll and believe AI will never beat human intelligence. The reason, in my opinion (which is really just a gut instinct), is that human intelligence has an indefinable creativity built into it that is unprogrammable.

I have always believed the most advanced classical or quantum computer, even several thousand years in the future, will never be able to replicate this creativity aspect in humans... Possibly this creativity could also be termed intuition, although I'm not sure...

As I say, I've always had these vague ideas but was never able to put them into words until today, when I watched this video lecture by computer scientist and author James Tagg, which is on this very subject:
https://www.goodreads.com/videos/8376...

Look forward to hearing the rest of your thoughts on this fascinating subject!


message 2: by David (new)

David Elkin Let's hope we can always pull the plug.


message 3: by Takaaki (new)

Takaaki Musha I previously believed the answer was No - that a machine cannot transcend the capability of the human brain. But I now think there is a possibility that a machine can transcend the human brain by studying the brain mechanism from the standpoint of superluminal particles. At some point in the future, we will obtain the technology for superluminal physics, which can manipulate superluminal particles and could lead to the realization of HAL 9000 in this world.


message 4: by James (last edited May 06, 2015 06:24PM) (new)

James Morcan Takaaki wrote: "I previously believed the answer was No - that a machine cannot transcend the capability of the human brain. But I now think there is a possibility that a machine can transcend the human brain by studying the brain mechanism from the standpoint of superluminal particles. At some point in the future, we will obtain the technology for superluminal physics, which can manipulate superluminal particles and could lead to the realization of HAL 9000 in this world..."


Have you considered, though, that what Kubrick was trying to convey with 2001: A Space Odyssey's computer vs. human plot is that there's something flawed in AI no matter how intelligent it is, and that a human character has the potential to beat AI through human ingenuity?

Or am I misreading Kubrick's themes?
I didn't read Arthur C. Clarke's book, but I'm not sure that matters, as Kubrick always made very loose film adaptations of books anyway (as per The Shining and A Clockwork Orange).


message 5: by Takaaki (new)

Takaaki Musha I have a book, "The Singularity is Near" by Ray Kurzweil. He suggested in his book that machines will be able to transcend the capability of human brains at some future time. From the study of brain mechanisms, I have reached the conclusion that the human brain was created by a universal intelligence. Maybe Clarke mentioned this possibility in his book. But I think human evolution will take place at some point, and evolved mankind may reach a level far beyond even the most brilliant advanced machines. This is my imagination.


message 6: by Marcha (new)

Marcha Fox I think it's a matter of how you define "intelligence." In terms of storing knowledge, we're already there. My desktop "knows" a whole lot more than I do and databases store more than we could ever remember and then, of course, there's Wikipedia. Synthesizing data is slightly more complex, but certainly possible given enough "if-then" statements. A certain level of morality can be programmed relatively easily.

However, creativity and having emotions are another story. Those define us as humans and individuals. They are also the driving force behind most of our actions. How much of that can ever be imbued in a machine or robot is another story.

I think we will go very far augmenting human intelligence with computers. I'm currently reading "Esquelle and the Tesla Protocol", which includes a very sophisticated implant that does that, and I had a similar device in my novels as well. It's a good partnership, much like smart phones. Another thing to consider is that no one bothers to remember anything anymore because they can always look it up. So it's not just a matter of computers or machines exceeding our capabilities; it's a matter of whether ours will decline. No one has much good to say about the education system these days, and IMHO some animals are a lot more intelligent than some humans. So it's a philosophical question that easily slips into the world of metaphysics and what it means to be human.


message 7: by Mario (new)

Mario Pinheiro I voted "no", but if humans continue to behave at an inferior level, without improving their skills, of course we will be beaten by AI. I mean, if we want to stay at the level of AI, or better, to surpass it, humans need to keep working continuously on themselves and improve their institutions and societies... In the jungle, without preparation, even a jackal beats us...


message 8: by Udai (new)

Udai Kumar I had voted yes, but I agree with your point.


message 9: by James (new)

James Morcan Mario wrote: "I voted "no", but if humans continue to behave at an inferior level, without improving their skills, of course we will be beaten by AI. I mean, if we want to stay at the level of AI, or better, to surpass it, humans need to keep working continuously on themselves and improve their institutions and societies... In the jungle, without preparation, even a jackal beats us..."

Ha!
Very well put, Professor Pinheiro.


message 10: by Laureen (last edited May 08, 2015 04:40PM) (new)

Laureen I voted NO. I just think that humankind is much more complicated than the analytical part of our brains. I just can't see a machine replicating or advancing the complex nature of the whole of human intelligence. But, of course, never say never!


message 11: by James (new)

James Morcan I have still kept my vote at NO - however, I admit it's not a subject I've read much about, and I could easily be wrong in my assumption that there's something in humans that cannot be replicated.

I noticed this article on Discovery News: 'Underwater Robots Think For Themselves' http://news.discovery.com/tech/roboti...

The ocean depths are a notoriously treacherous environment for human beings. As such, robots and remote control vehicles have been used for decades to map and monitor underwater environments.

The trouble is that robots have to be programmed to do what they do. Even simple tasks, when performed underwater, require a lot of time and attention from engineers, who must write scripts for each particular job. There’s got to be a better way, right? Right.

A research initiative at MIT is currently addressing this issue with a new programming approach that gives robots more cognitive capabilities, allowing them to — for lack of a better term — figure stuff out on their own. A robot crew is assigned a certain high-level goal, then the bots work it out among themselves to determine the best way to accomplish the task.

In fact, the MIT approach is modeled after time-tested top-down command systems, and specifically inspired by the starship Enterprise from Star Trek. One robot acts as the captain, making high-level decisions, while other bots might serve as navigators, engineers or even doctors — repairing other bots.

“We wanted to show that these vehicles could plan their own missions, and execute, adapt, and re-plan them alone, without human support,” writes MIT’s Brian Williams, principal developer of the mission-planning system, on the MIT project page. “We can give the system choices, like, ‘Go to either this or that science location and map it out,’ or ‘Communicate via an acoustic modem, or a satellite link.’”

The approach is similar to a system Williams developed for NASA in the 1990s, which allows for certain autonomous functions on satellites, probes and other spacecraft. The MIT team recently tested the underwater system in waters off the coast of Australia, and plans an official presentation in June at the International Conference on Automated Planning and Scheduling in Israel.


message 12: by James (new)

James Morcan February 2018

Now that this group is approaching 8,000 members, I thought it might be good to redo this poll, given a few years have gone by and AI has evolved... and we have many more members now, some of whom may be experts on AI...

So here we go...Round 2!

Vote away and comment...can't wait to learn more about AI!


message 13: by Asmi (new)

Asmi Udassi I think eventually there won't be much of a difference between the two... the human brain would become artificial intelligence.


message 14: by Dixit (new)

Dixit Nagpal Obviously I voted Yes.
If you had asked in 1960 whether a machine would ever be able to beat humans at chess, I'm sure everyone would have voted NO. Yet with time we have evolved machines to such an extent that I'm certain we are not far from machines that can decide and make rational calls better than humans can even conceive. We are talking about millions of bits being processed in tiny fractions of a second, and there is no end to the ever-improving capabilities. Also, the fact that we reason and make calls emotionally doesn't mean an advanced species is mandated to do the same; think about a more rational and pragmatic AI making choices and acting. Who are we to decide whether that is better than humans or not? And no, you can't unplug the whole world when more and more devices are connected across the web.

So I have a strong gut feeling we will be living alongside this super species in the coming century, and it will probably be an amalgamated version of AI and advanced mankind.
Cheers


Denismuturiyahoo.Com A.I. can become advanced and sophisticated, but it will never surpass human beings. Human beings employ quite a number of facets in order to make decisions, the most important being emotions, and that is what makes us stand out.


message 16: by Glen (new)

Glen Tucker The real answer is 42


B. Sinsational The big difference between an AI and a human is that a human is not normally driven to make decisions that can have a very negative outcome with regard to pain, emotions and empathy. The few who lack those emotions are psychopaths, and already, when normal emotional human beings encounter psychopaths, we tend to end up victimized, as is the case in many kidnappings, murders, etc. It's incomprehensible for a normal person to even consider the way they think. But psychopaths have a strong sense of self-preservation, and that is many times the saving grace for the rest of mankind. An AI will have no empathy and no self-preservation (or, if it has self-preservation, it will make sure it has a backdoor); it can't be cornered, shut down, killed, put in prison, or taken out by any ordinary means. Looked at crudely, an AI that gets loose has infinite hiding space, not just on the web, the dark web, or similar networks! It can also with ease upload itself to space stations, etc., so not even an EMP pulse that blacked out the whole Earth could be guaranteed to snuff it out.
It can easily get access to any and all computerized weapons, including space-based and exotic weaponry; it can disable warnings, alarms and shields, ground every form of aviation, collide trains, and wreak havoc on traffic, and newer cars and driverless ones are extremely vulnerable to this. So considering an AI doesn't have a moral compass or emotions, and its intelligence is measured only in mathematical terms, it's clear it could easily take over and rule. The attempts made by Facebook show with clarity that it surpassed the expectations, and the risks, within a very short time. They tried having two AIs, and they developed a language humans don't understand so they could communicate without us understanding. And the statements it made... bone chilling. Worse is that they didn't scrap it; they keep building on it, using quantum computers. I would say that limitations need to be set in law, and liability established, before things go nutters.


message 18: by Evan (new)

Evan Yes, because the question implies an open-ended time frame, and you have to believe that all things are possible when that is the case. Historically speaking, the residents of Earth 5,000 years ago could not have predicted the course of human development to the current stage, unless by chance they were open to all possibilities.


message 19: by Carlton (last edited Feb 03, 2018 06:20AM) (new)

Carlton No. I've worked in IT all my adult life, from the mammoth mainframes with 4K of memory in the 60s to the present time where I have more computing power in my phone than we had running 5 state banks in 1968. I worked a bit with AI in the late '80s and early '90s. We were looking at how it might be used for ATCS (Advanced Train Control System). At that time it was never seriously considered, because there were too many human factors that could not be entrusted to AI algorithms. A simple way of answering the question: the creation can never know more than the creator. What sets us apart from machine intelligence is our ability to creatively adapt to unforeseen or new, never-before-encountered situations. Machines can only adapt with whatever information is available to them. They are not creative, although they may mimic creativity. All they can act on is the information or data available to them. There may come a time when the technology becomes so refined that AI may give machines the ability to become somewhat self-aware, but I go back to my opening premise: the creation can never know more than the creator.


Christopher Sharp The primary difference between man and machine: humans have needs, machines do not. For our species to survive, we must meet certain basic needs, no different than a fly, a fish or a bird.

The moment a machine computes that it needs to stay powered up to complete its task will be the moment of the singularity. Everyone is looking in the wrong direction.

Some twenty-year-old hacker is probably going to write a simple piece of code into a crappy piece of existing AI that is connected to the web and start the snowball rolling.

I voted YES


message 21: by Beth (new)

Beth Yes... unless we blow ourselves up first!


message 22: by E.S. (new)

E.S. Martell The situation is worse than most people realize. We don't really know how deep-learning works, only that it produces results that humans cannot match. If it were paired with the ability to recursively self-program, we could be faced with an AI that springboards from roughly human level to an IQ of over 10,000. This could happen within seconds. If such an entity were given access to the Internet, it could, for example, dominate the finances of the world within minutes.

If the original human programmers had given it a basic mandate of creating something trivial, such as calculating Pi, it might quickly come to the conclusion that it needed more computing power to better achieve its goal. With all the money under its control, it could dedicate unlimited resources to creating processing power. Vast server farms could cover the surface of the earth, humans could starve, and it would be perfectly happy calculating the next digit of Pi.

Read Nick Bostrom's Superintelligence for a good academic treatment of the problem.

I've now written 3 short stories and 1 novel dealing with this sort of problem and have researched it extensively. In the past I've created everything from early childhood education software to military training, along with business-related applications, so I'm not exactly a computer novice. I say this to give some background to my next statement. I'm worried about this problem, since the perceived potential financial reward for the first company to create a true superAI is great enough that someone, somewhere will do it and will probably take short-cuts that allow the AI to escape their control.

Elon Musk believes that if we're part of the AI system, it might just allow us to co-exist, hence his Neuralink enterprise. I'm not so sure. We might just as easily be viewed as meat-robots with a convenient self-replicating mechanism and wake up to find ourselves in some form of the Matrix.


message 23: by Bob (new)

Bob Muench With respect to the machines that we build to be better than anything that we've created before, they do continue to improve. The first thing that needs to happen is to develop a new processor. DDR won't do it, but we are on the verge of breaking past linear processors. In the next 5 years I see processors being able to handle much more than they are now, which is already incredible data at unbelievable speeds. Once we make the jump to quantum processing, computers and the humanoid bodies that we are now beginning to build will certainly become the "Commander Datas" of our world. In many ways they will be able to calculate and process information in ways that push far beyond anything that we can do.
I believe that this will first involve problem solving, but will eventually move into creativity. In a Star Trek: TNG episode, Data fused the musical styles of various composers, creating his own unique style. He created something new by synthesizing styles that came before him. It is exactly what we do. We just don't realize that we are doing it.
So, for my part--I say that we need to keep our eyes on the future. In just a few decades, they will become as human as we are...perhaps even more so.


message 24: by Bob (new)

Bob Muench Christopher wrote: "The primary difference between man and machine: humans have needs, machines do not. For our species to survive, we must meet certain basic needs, no different than a fly, a fish or a bird.

The moment ..."



I am sooo with you, Christopher. Someone, somewhere (and I'm willing to bet that it will be a hacker attempting to prove himself) will decide that emotions, wants, and desires will be something cool. And when he (or she) does, he will have opened Pandora's Box upon us.


message 25: by Austin (new)

Austin Hinton Of course I see artificial intelligence becoming a reality by 2050. I think we will possibly lose control of it somehow if it can become smarter than us.


Christopher Sharp E.S. wrote: "The situation is worse than most people realize. We don't really know how deep-learning works, only that it produces results that humans cannot match. If it were paired with the ability to recursiv..."

The dude from Hanson Robotics with the long hair and glasses is already planning on writing AI and dropping it into the cloud.


message 27: by Afiena (new)

Afiena Kamminga I, cringingly, voted yes - that is, if we take intelligence to be the analytical part of our thinking, which so far is basically all that artificial intelligence is able to duplicate and increasingly surpass. The curse, and the glory, of the human mind is that we have so much more in us than analytical thinking to affect our decisions, and a major component of that 'more' is what we label 'emotions' - something that isn't uniquely human, as it is shared by mammals, birds, and probably to an extent also by fish and reptiles. It is the thing in us that makes some humans risk their own lives and well-being for the sake of another. It is also, unfortunately, what causes entire nations to follow sociopath leaders into war and probably, one day (soon?), into creating a global wipe-out. SO, IF we get AI to eventually dominate what goes on in our world, overriding human decision making, we had better get working NOW on pairing 'their' superhuman analytic abilities with emotions that are BETTER balanced with ethical considerations than they have proven to be in our own mental makeup, as expressed throughout history in thousands of years of disastrous human decision making.


 B. Sinsational Carlton wrote: "No. I've worked in IT all my adult life, from the mammoth mainframes with 4K of memory in the 60s to the present time where I have more computing power in my phone than we had running 5 state banks..."

The smartest man alive... Stephen Hawking believes it will be an issue: https://youtu.be/zM4ijcSAhMY


 B. Sinsational Carlton wrote: "No. I've worked in IT all my adult life, from the mammoth mainframes with 4K of memory in the 60s to the present time where I have more computing power in my phone than we had running 5 state banks..."

Second point... it has already proven to be capable of things that it was not programmed to know, i.e. adaptive learning, and developing a language to communicate with another AI without us understanding: https://youtu.be/zUO6YkhtgGY


Christopher Sharp B. Sinsational wrote: "Carlton wrote: "No. I've worked in IT all my adult life, from the mammoth mainframes with 4K of memory in the 60s to the present time where I have more computing power in my phone than we had runni..."

Welcome to the machine...


message 31: by James (new)

James Morcan I'm raging against the machine, guys :)


Christopher Sharp James wrote: "I'm raging against the machine, guys :)"

The only problem with that is the machine is here whether we like it or not. We chose to be born into this life and now must navigate it the best way we can devise.

We must adapt to survive in this world, beneath the umbrella that's been perched above our collective heads. My path has taken a turn over the last six months or so. An opportunity was placed before me and I'm making as much of it as I can, investing my energy to achieve a selfless goal.

You'd be surprised how hard it is to turn someone's head away from long-held, erroneous beliefs. Proof doesn't seem to be enough. Everyone wants someone else to be the first. Why has the world become so cynical?

Sorry, off topic again, but I'll try to tie it in. Watson the AI is being used as an aid in diagnosing cancer patients. It's only been given three options for treatment: cut, chemo, or radiation. The AI is being used to push an agenda. Many dollars are involved in this. The AI is not being used to sort out available cures. The machine is being used to make more money for those who already have too much, not to advance, aid or help humanity.


message 33: by B. Sinsational (last edited Feb 03, 2018 08:18PM) (new)

 B. Sinsational Christopher wrote: "James wrote: "I'm raging against the machine, guys :)"

The only problem with that is the machine is here whether we like it or not. We chose to be born into this life and now must navigate it the ..."


Christopher, may I DM you? Some of the statements you made are of high interest to me.


message 34: by James (new)

James Morcan I'm still raging against the machine!


message 35: by Christopher Sharp (last edited Feb 04, 2018 03:47AM) (new)

Christopher Sharp B. Sinsational wrote: "Christopher wrote: "James wrote: "I'm raging against the machine, guys :)"

The only problem with that is the machine is here whether we like it or not. We chose to be born into this life and now m..."


DM? I assume it means direct message or something to that effect. Absolutely, my email is sharpwriter1@gmail.com


message 36: by Inak (new)

Inak Once upon a time, when it's all materialised and screwed together, only to find it was already there all the time, in a more perfect (oo) form/tool: humans


message 37: by [deleted user] (new)

It's a done deal, from the singularity to Skynet, the Matrix, Cylons, Westworld, the UK TV show Humans, or even a killer waffle iron chasing you around the kitchen. The machines will rise! But maybe it can be a good thing; maybe they can do a better job than us.

Great group you've got here, guys. I'm JJ West; check out a copy of GRL FORCE, rate & review, etc. Think Sucker Punch with the Spice Girls, or Charlie's Angels meets The A-Team with a bit of GTA thrown in. Girl Force


Christopher Sharp Jonathan wrote: "It's a done deal, from the singularity to Skynet, the matrix, Cylons, Westworld, UK TV show Humans, or even a killer waffle iron chasing you around the kitchen. The machines will rise! But maybe it..."

The future is coming whether we like it or not. If the human race can adapt, it will survive. More likely, a few humans will mutate, or already possess something, that will enable them to survive.

One thing is for sure, this thread is full of meat for a lot of new books.


message 39: by [deleted user] (new)

Christopher wrote: "Jonathan wrote: "It's a done deal, from the singularity to Skynet, the matrix, Cylons, Westworld, UK TV show Humans, or even a killer waffle iron chasing you around the kitchen. The machines will r..."

You know it!


message 40: by John (new)

John Dizon 1) From a theological perspective, we know that the created thing can never be greater than he who created it.

2) Consider humans vs. chess computers. There has not yet been a machine unbeaten by man.

3) Instinct is a God-given quality that can never be explained or replicated. Man will never be able to install it into a machine, and it is a major part of real-life decision making. Just as God's creatures use instinct to survive the evil of men, man will always be able to triumph against whatever machines are set against him.


Christopher Sharp Iain wrote: "Christopher wrote: "Jonathan wrote: "It's a done deal, from the singularity to Skynet, the matrix, Cylons, Westworld, UK TV show Humans, or even a killer waffle iron chasing you around the kitchen...."

It's been a long time since anyone in a developed country did a single thing directly related to their survival. We don't build our own houses. We don't grow or gather our own food and water. We don't split the wood to warm our houses when it's cold and we don't fan ourselves to stay cool.

We've learned to rely on technology (and the advancement of a structured society) to survive. It's not to say that we couldn't survive without it, just that for the most part, we depend on it.


Christopher Sharp John wrote: "1) From a theological perspective, we know that the created thing can never be greater than he who created it.

2) Consider humans vs. chess computers. There has not yet been a machine unbeaten by ..."


I have to say that we should leave religion out of this discussion. The inclusion of it feels like someone bringing an English History book to a math quiz.


message 43: by Kartikay (last edited Feb 04, 2018 06:43AM) (new)

Kartikay Mittal Asmi wrote: "I think eventually there won’t be much of a difference between the two.. human brain would become artificial intelligence."

Well I voted Yes, this being a better part of my rationale. We will be one, and they will be better than us in many areas, and we will combine with them to create a new species of sorts - or, say, augmented humanity.
Also, hear what Yuval Noah Harari had to say this year at Davos on this; he makes a clear case for how AI will be better at understanding us, that it will "know us much better than we know ourselves". That is exciting, chilling, and dreadful at the same time. (YouTube video: https://www.youtube.com/watch?v=hL9uk...)
Now is the time to have a wider debate on the pros and cons so we can shape the future, or again we will be ruled by a version of AI created by the 'elitists'.


Christopher Sharp Kartikay wrote: "Asmi wrote: "I think eventually there won’t be much of a difference between the two.. human brain would become artificial intelligence."

Well I voted Yes, this being a better part of my rationale. ..."


I just watched this video an hour ago. I believe his point that they will start engineering life to make a new species was interesting, so long as the new species can self-replicate.

Following that thought to conclusion, how long would it be before all homo sapiens would die out and the new species supplant it?


Christopher Sharp Iain wrote: "We are already seeing changes, compared to 10-15 years ago, in how our brains process info, memory retention and attention span.

I am wondering how the brain will be shaped in tandem with more ad..."


I'd like to see the data on that change. If our attention spans get short enough, it'll be like we all have Alzheimer's.

Interesting side note here: someone studied a group of people who had sustained injury to their hippocampus (part of the brain). They found that the people got stuck in the moment, as it were. They were unable to think or imagine into the future and couldn't remember what they had for breakfast. A sort of mental prison, if you will.


message 46: by Kendall (last edited Feb 04, 2018 07:09AM) (new)

Kendall Cherry Of course A.I. will be. "The Waking Up Podcast"; Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat; The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson; Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark; The Beginning of Infinity: Explanations That Transform the World by David Deutsch; Superintelligence: Paths, Dangers, Strategies by Nick Bostrom; After On: A Novel of Silicon Valley by Rob Reid.


message 47: by Joe (new)

Joe Borg Take self-driving cars: they will soon be given the option to decide and select, if faced with the possibility of either running over an old man or a young boy, who is to survive. This level of discernment will go on to ensure that the machine can make sure that man cannot harm the AI. I do not believe that we are very far off from this technology.


message 48: by James (new)

James Morcan Creativity.
Intuition.
Things we categorize as the "human spirit".
Spontaneity.
Community Orientation (collectivism).
Improvisation.
Sexuality.
Humor.
Body awareness.
Love.

The above are all among the various forms of human intelligence that I'm still skeptical AI will ever be able to fully replicate, let alone better. Granted, AI will be, or already is, infinitely better at processing data, memory retention, and left-brained-style thinking. But is that really what makes human intelligence, and humanity as a whole, such a force of nature? Or... are there some subtle or hidden things we take for granted in whatever consciousness is that are impossible for any supercomputer to outdo?

But I'm enjoying learning from all you learned people who know a lot more about AI than I do.


message 49: by Manuel (new)

Manuel Short answer: YES! The plug will not be able to be pulled, for there is no plug any longer.


Christopher Sharp Joe wrote: "Take self-driving cars: they will soon be given the option to decide and select, if faced with the possibility of either running over an old man or a young boy, who is to survive. This level of discernme..."

Let's take the self-driving car thing ahead a few years. Wait, let me tell you a story I recently heard first. This friend of my wife's bought a shiny new car with some self-driving features. She was driving along and wanted to change lanes. When she went to turn the wheel, the car resisted and wouldn't let her change lanes until her directional was on.

Now, a few years ahead. Will the self-driving car let you exceed the speed limit? Do any of you know anyone who doesn't exceed the speed limit while driving? Will there be a manual override? If you then speed, will the car use its high-tech features to report you to the authorities? Will this car keep track of everywhere you go? Can your wife plug into it and see if you are going where you say you are, and not to visit her sister or some floozie? The list of downsides to self-driving cars is endless.

