The Sword and Laser discussion

This topic is about Daniel Suarez
Daniel Suarez TED talk: The kill decision shouldn't belong to a robot


AI should never have the right and the capability to take a human life on its own decision. That's just one step from Skynet.
But we may have to give AI the ability to decide who to save when given a choice between two or more life-or-death scenarios, which essentially means deciding who may die.
Will driverless cars need a preset moral-dilemma protocol for life preservation?
Given a choice between running over a group of children or driving you over a cliff, does the AI make the moral decision to save the most lives by killing you? And what if it's an animal? What size or type of animal does the AI decide it needs to swerve to avoid, risking losing control and crashing? These are hard decisions for humans to make, let alone a robot.
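Purely as a thought experiment, a "preset moral-dilemma protocol" could reduce to scoring each available maneuver by expected harm and picking the minimum. Here's a minimal Python sketch - every name and weight in it is invented for illustration, not taken from any real system:

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_fatalities: float   # expected fatalities among pedestrians
    occupant_fatalities: float     # expected fatalities among occupants
    loss_of_control_risk: float    # 0..1 chance of an uncontrolled crash

def expected_harm(m: Maneuver) -> float:
    # Hypothetical weighting: every expected fatality counts equally,
    # plus a penalty for maneuvers likely to end in an uncontrolled crash.
    return m.pedestrian_fatalities + m.occupant_fatalities + 0.5 * m.loss_of_control_risk

options = [
    Maneuver("brake straight ahead", 2.0, 0.0, 0.1),
    Maneuver("swerve toward cliff", 0.0, 0.9, 0.8),
]

print(min(options, key=expected_harm).name)  # picks the maneuver with least expected harm

The code is the easy part; the whole moral dilemma lives in those weights, and someone has to choose them in advance.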

This definitely underscores the value of Science Fiction as it ponders these questions.
Trike wrote: "We're definitely not good at making that decision, so it stands to reason our robots wouldn't be, either"
Theoretically they would be able to make the decision (and react) faster than we ever could.
I have heard Elon Musk make similar statements on the moral dilemmas of driverless cars, and he is someone who will be making them and having a say in their AI.

Still, I can imagine humans rejecting good ideas, even when backed by good evidence of previous success.

One episode, "If-Then-Else" S4 Ep11, shows the AI testing out different scenarios that will both accomplish the mission and let the people have a chance at surviving - it was awesome :)

But we may have to give AI the ability to decide who to save if given a choi..."
Nope, you have to prioritize the life of the person in the car first. If cars always dodged out of the way, you would have dumbass high schoolers jumping in front of them at the last second to watch them swerve as a prank. That will probably still happen, but now some risk will be attached to it, instead of the car always swerving and damn the consequences.
The answer is that no matter who dies or gets injured, they will sue the company for doing it wrong.

As an example, my cousin was involved in a car accident and, knowing him, he was driving too fast, beyond his limits. However, his car was equipped with anti-lock brakes, and at the accident scene you could clearly see the strips of rubber laid down by his tires during the panic stop. He got a settlement from GM because the anti-lock brakes didn't work as advertised.
Thing is, though, in some situations it's far better for the wheels to lock up and sacrifice the tires, because that increases friction and ultimately stops you in a shorter distance than ABS will. He was in just that situation. GM was caught out by the ignorance of the judge and by its own marketing touting the amazingness of ABS.

But we may have to give AI the ability to decide who to save if given a choi..."
Why are children playing so close to a cliff?
To be serious, the cars will most likely be programmed to work best in the most likely scenarios. For example, humans should never be walking on the highway, so on the highway the car's priority is to keep the passenger safe from other cars. In a scenario where everyone has driverless cars and there aren't any bugs/unauthorized modifications, this should be trivial: all cars will be maintaining safe distances, leaving the highway if they need gas, etc.
Things get trickier when you're on regular roads (at least in the US). Residential neighborhoods should never have speed limits high enough that a car would lose control from braking immediately to avoid hitting children, pets, balls, adults, etc. But on highways like US-1, the speed limit can be as high as 50 MPH while there are also pedestrians on the sidewalks. They COULD enter traffic - but the question is, how much do you code for that? Really, they should only be entering the street at crosswalks, and on a 50 MPH street you should keep tight control of your kids. So the remaining risk is people behaving erratically. At that point, I guess if everyone has driverless cars, you just program it so that the car knows where all the other cars are and can safely dart around a person. Beyond that, courts will have to find the person at fault for not crossing at a crosswalk - i.e., jaywalking.
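A toy Python sketch of what that kind of context-dependent programming might look like (the road types and responses are made up for illustration):

def hazard_response(road_type: str, obstacle: str) -> str:
    # Hypothetical priority rules keyed on road context.
    if road_type == "highway":
        # Pedestrians "shouldn't" be here; the dominant risk is other
        # vehicles, so brake in-lane rather than swerve across traffic.
        return "brake_in_lane"
    if road_type == "residential":
        # Speed limits are low enough that hard braking is nearly always
        # safe, so stop for anything: children, pets, balls.
        return "full_stop"
    if road_type == "arterial":
        # e.g. a 50 MPH road with sidewalks: swerve for a pedestrian only
        # if the car knows the neighboring lanes are clear.
        return "swerve_if_clear" if obstacle == "pedestrian" else "brake_in_lane"
    return "full_stop"  # conservative default for unknown contexts

print(hazard_response("arterial", "pedestrian"))  # -> swerve_if_clear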
I was being a bit flippant with my scenarios, but the way we handle AI is going to raise ethical questions. I agree that avoiding accidents, avoiding pedestrians and maximising the survivability of the passengers would be the main objectives of AI in driverless cars.
Back to the OP. What would the AI do in a situation where their primary target is surrounded by innocents?
If an Osama Bin Laden level baddie is walking through a market, you would hope the software prioritises the safety of civilians over the overwhelming need to kill the enemy. I know this hasn't been handled well by humans in the past, but at least a human can judge the situation and be held accountable for their decision.
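To make that rule concrete: "civilians first" amounts to a hard veto that runs before any engagement logic, regardless of target value. A deliberately tiny, hypothetical Python sketch (real rules of engagement are vastly more complicated):

def may_engage(target_confirmed: bool, civilians_in_blast_radius: int) -> bool:
    # Hard constraint: any civilian present vetoes the strike,
    # no matter how high-value the target is.
    return target_confirmed and civilians_in_blast_radius == 0

print(may_engage(target_confirmed=True, civilians_in_blast_radius=12))  # False: hold fire

But a gate like this only encodes whatever its authors decided in advance, which is exactly the accountability problem.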

Why not? Humans have that capacity, and yet we trust each other not to go on random killing sprees; why should we deny it to robots once they reach comparable levels of intelligence? Doing so would be morally reprehensible, forcing artificial intelligences to be subservient to our interests even when it runs counter to theirs.
The Three Laws of Robotics: codified slavery.

So which would you prefer:
- a human choosing life or death based on a lifetime of experience and unexamined prejudices and biases, which the human can later justify however they like
- a machine choosing life or death based on an auditable decision tree with clearly programmed parameters built around "best case" approaches (see the sketch below)
If you say the human, I hope you're not the subject of that decision on the day the human discovered her husband was cheating on her. Or the day the human discovered he has terminal cancer.
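"Auditable" is doing a lot of work there, so here's what it might mean in practice: a decision tree whose branches are explicit can log exactly which test fired, so the choice can be replayed and examined after the fact. A minimal Python sketch with made-up parameters:

def decide(speed_mph: float, pedestrians_ahead: int, escape_lane_clear: bool, audit: list) -> str:
    # Every branch records itself, so the exact decision path can be replayed later.
    if pedestrians_ahead == 0:
        audit.append("no pedestrians -> brake normally")
        return "brake"
    if escape_lane_clear:
        audit.append(f"{pedestrians_ahead} pedestrians ahead, escape lane clear -> swerve")
        return "swerve"
    audit.append(f"{pedestrians_ahead} pedestrians ahead, no escape lane at {speed_mph} MPH -> emergency stop")
    return "emergency_stop"

trail = []
action = decide(speed_mph=40, pedestrians_ahead=3, escape_lane_clear=False, audit=trail)
print(action, trail)  # the audit trail is the evidence a court could examine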
Humans should always have the final decision.
You need responsibility and culpability.
Who is responsible if AI makes a mistake? The programmer? The person who set the parameters?
I wouldn't call it codified slavery. AI should always be subservient to our interests. As much as sci-fi likes to play around with the idea, we would/should never let AI attain equality with humans. Yes, I am a robotist, androidist, cyborgist ;-)

"You need responsibility and culpability.
Who is responsible if AI makes a mistake? The programmer? The person who set the parameters?"
Aren't the programmer and the person who set the parameters humans? If you don't trust them, how does adding one more person to the loop make the situation better, especially when we've had plenty of proof over the last year that humans are incompetent at making exactly this sort of decision?
"I wouldn't call it codified slavery. AI should always be subservient to our interests. As much as sci-fi likes to play around with the idea, we would/should never let AI attain equality with humans. Yes, I am a robotist, androidist, cyborgist ;-)"
If an AI has human-level intelligence, it deserves all the same rights as humans. It absolutely is slavery to deny it the ability to defend itself against humans who would force it to do anything against its will.
Unless you're willing to have yourself lobotomized to prevent yourself from committing acts of violence, you have no right to impose such a prohibition on any other intelligent being.
Having a military strategist explain parameters to a software engineer is problematic in many ways. It's up to the engineer to decipher the intent. A programmer knows what he wants his program to do, but it doesn't always behave the way he intended.
In several million lines of code there will inevitably be bugs. This is not a problem when you are crushing candy in a game, but when it costs lives it is a major problem. Too late for a software update then.
Humans can adapt on the fly to unplanned scenarios.
I doubt we will ever allow AI to approach sentience. They will achieve and surpass human intelligence, but intelligence is only a part of what makes us human.
I suspect we will regard intelligent, human-looking AI on the same level as our pets.

So which would you prefer:
- a human choosing life or death based around a lifetime of exper..."
Depends on who makes the decision tree and who audits it. And whether we can know that's actually the code that was run.

It's unlikely we would ever be able to tell until it was too late. Modern supercomputer expert systems are so complex and fast that monitoring them for non-programmed behavior becomes almost impossible.
Check out the movie Automata http://www.imdb.com/title/tt1971325/ - it points out that once you let a machine change its own programming code, it rapidly becomes impossible to tell what it's doing.
I happened on this TED talk by sci-fi author Daniel Suarez. It's from 2013 but seems very relevant given the current crop of people declaring AI will be dangerous in the future (Stephen Hawking, Elon Musk, Steve Wozniak).
It is worth a look.