The Sword and Laser discussion

Daniel Suarez TED talk: The kill decision shouldn't belong to a robot


message 1: by Nick (last edited Mar 30, 2015 04:40PM) (new)

Nick (whyzen) | 1295 comments http://www.ted.com/talks/daniel_suare...

Happened on this TED talk by Sci-Fi author Daniel Suarez. This is from 2013 but seems very relevant with the current set of people declaring AI will be dangerous in the future (Stephen Hawking, Elon Musk, Steve Wozniak).

It is worth a look.


message 2: by Ken (last edited Mar 30, 2015 05:22PM) (new)

Ken (kanthr) | 334 comments How interesting. I think this topic revolves around what criteria, if any, would allow us to form a consensus on a robot having the same rights as a human being.


message 3: by Tassie Dave, S&L Historian (new)

Tassie Dave | 4076 comments Mod
AI should never have the right and capability to take human life on its own decision. That's just one step from Skynet.
But we may have to give AI the ability to decide who to save if given a choice between 2 or more life-or-death scenarios. Which essentially means also deciding who may die.

Will driverless cars need a preset moral-dilemma protocol on life preservation?

Given a choice of running over a group of children or driving you over a cliff, does the AI make the moral decision to save the most lives by killing you? And what if it is an animal? What size or type of animal does the AI decide it needs to swerve to avoid, risking losing control and crashing? These are hard decisions for humans to make, let alone a robot.
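
To make that concrete, here is a rough sketch of what a purely utilitarian version of such a protocol could look like. Everything in it - the Outcome record, the casualty estimates, the scoring rule - is invented for illustration, not how any real car is programmed:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    expected_deaths: float  # hypothetical casualty estimate
    occupant_dies: bool

def choose_maneuver(outcomes):
    """Pick the outcome with the fewest expected deaths (pure utilitarianism)."""
    return min(outcomes, key=lambda o: o.expected_deaths)

# The scenario above: run over the group of children, or go over the cliff.
options = [
    Outcome("continue into the group", expected_deaths=4, occupant_dies=False),
    Outcome("swerve over the cliff", expected_deaths=1, occupant_dies=True),
]
print(choose_maneuver(options).description)  # -> swerve over the cliff
```

The hard part isn't the min() call, it's deciding that expected deaths are the only thing that matters - which is exactly the moral question.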


message 4: by Trike (new)

Trike | 11215 comments We're definitely not good at making that decision, so it stands to reason our robots wouldn't be, either.

This underscores the value of science fiction as it ponders these questions.


message 5: by Tassie Dave, S&L Historian (new)

Tassie Dave | 4076 comments Mod
Trike wrote: "We're definitely not good at making that decision, so it stands to reason our robots wouldn't be, either"

Theoretically they would be able to make the decision (and react) faster than we ever could.

I have heard Elon Musk make similar statements on the moral dilemmas of driverless cars, and he is someone who will actually be building them and having a say in their AI.


message 6: by Ben (new)

Ben Nash | 200 comments AIs could help solve sticky situations (hostage situations, international relations, whatever) by working from a database of known situations and outcomes, and maybe running some permutations through a genetic algorithm to propose a few optimal solutions to the humans in charge.
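
As a very rough sketch of that idea - with the "database" reduced to one invented set of historically successful tactic weights, and everything else made up for illustration:

```python
import random

random.seed(42)

# Stand-in for the database of known situations and outcomes: the mix of
# tactics (talk, wait, show of force, concede) that historically worked best.
HISTORICAL_BEST = [0.5, 0.3, 0.1, 0.1]

def fitness(plan):
    # Plans closer to the historically successful mix score higher.
    return -sum((p - h) ** 2 for p, h in zip(plan, HISTORICAL_BEST))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(plan, rate=0.1):
    return [max(0.0, p + random.uniform(-rate, rate)) for p in plan]

def evolve(generations=200, pop_size=30):
    pop = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

print([round(w, 2) for w in evolve()])  # drifts toward the historical mix
```

The permutations the GA proposes would only ever be suggestions to the humans in charge.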

Still, I can imagine humans rejecting good ideas, even when backed by good evidence of previous success.


message 7: by Michele (new)

Michele | 1154 comments The TV show Person of Interest brings this up often and shows lots of different viewpoints (confession - I just recently binge-watched the entire 3+ seasons TWICE, so I'm not exactly impartial lol). The show has two different AIs with different motivations.

One episode, "If-Then-Else" S4 Ep11, shows the AI testing out different scenarios that will both accomplish the mission and let the people have a chance at surviving - it was awesome :)


message 8: by Aaron (last edited Mar 31, 2015 07:16AM) (new)

Aaron Nagy | 379 comments Tassie Dave wrote: "AI should never have the right and capability to take human life on its own decision. That's just one step from Skynet.
But we may have to give AI the ability to decide who to save if given a choi..."


Nope, you have to prioritize the life of the person in the car first. If the car always dodged out of the way, you would have dumbass high schoolers jumping in front of cars at the last second to watch them swerve as a prank... This will probably still be a thing, just with some risk attached now, instead of the car always swerving and damn the consequences.

The answer is that no matter who dies or gets injured, they will sue the company for doing it wrong.


message 9: by Trike (new)

Trike | 11215 comments Aaron makes a good point. Unfortunately, it's not that simple in the real world, especially with lawsuits. Unless we're going to make AI judges, too. (Which I'm not opposed to, having spent a number of years as an editor at Lexis-Nexis editing thousands of court cases. If you ever find yourself in court, make sure you go as early as possible after breakfast or lunch: judges are statistically more lenient then.)

As an example, my cousin was involved in a car accident and, knowing him, he was driving too fast beyond his limits. However, his car was equipped with anti-lock brakes and at the accident scene you could clearly see the strips of rubber laid down by his tires during the panic stop. He got a settlement from GM because the anti-lock brakes didn't work as advertised.

Thing is, though, in some situations it's far better for the wheels to lock up and sacrifice the tires because that increases friction, ultimately stopping you in a shorter distance than ABS will. He was in just that situation. GM was caught out by the ignorance of the judge and GM's own marketing touting the amazingness of ABS.


message 10: by Eric (new)

Eric Mesa (djotaku) | 672 comments Tassie Dave wrote: "AI should never have the right and capability to take human life on its own decision. That's just one step from Skynet.
But we may have to give AI the ability to decide who to save if given a choi..."


Why are children playing so close to a cliff?

To be serious, the cars will most likely be programmed to work best in the most likely scenarios. For example, humans should never be walking on the highway. So on the highway the car's priority is to keep the passenger safe from other cars. In a scenario where everyone has driverless cars and there aren't any bugs/unauthorized modifications, this should be trivial. All cars will be maintaining safe distances, leaving the highway if they need gas, etc.

Things get trickier when you're on regular roads (at least in the US). Residential neighborhoods should never have speed limits high enough that a car would lose control from hard braking to avoid hitting children, pets, balls, adults, etc. But on highways like US-1 the speed limit can be as high as 50 MPH while there are pedestrians on the sidewalks. They COULD enter traffic - but the question is, how much do you code for that? Really, they should only be entering the street at crosswalks, and on a 50 MPH street you should have tight control of your kids, so what's left is mostly people behaving erratically. At that point, I guess if everyone has driverless cars, you just program it so the car knows where all the other cars are and can safely dart around a person. Beyond that, courts will have to find the person at fault for not crossing at a crosswalk - i.e. jaywalking.
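
Something like this toy priority table is all I mean by "programmed to work best in the most likely scenarios" - the contexts, hazard names and orderings are all invented for the example:

```python
# Hazards ranked by how likely/critical they are in each road context.
HAZARD_PRIORITIES = {
    "highway":     ["other_vehicles", "debris", "pedestrians"],
    "arterial":    ["crosswalk_pedestrians", "other_vehicles", "jaywalkers"],
    "residential": ["children_and_pets", "pedestrians", "other_vehicles"],
}

def ranked_hazards(context, detected):
    """Order detected hazards by their priority in this road context."""
    order = HAZARD_PRIORITIES[context]
    return sorted(detected, key=lambda h: order.index(h) if h in order else len(order))

# In a residential street, kids and pets outrank everything else.
print(ranked_hazards("residential", ["other_vehicles", "children_and_pets"]))
```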


message 11: by Tassie Dave, S&L Historian (new)

Tassie Dave | 4076 comments Mod
I was being a bit flippant with my scenarios, but the way we handle AI is going to raise ethical questions. I agree that avoiding an accident, avoiding pedestrians and maximising the survivability of the passengers would be the main objectives of AI in driverless cars.

Back to the OP. What would the AI do in a situation where its primary target is surrounded by innocents?

If an Osama Bin Laden level baddie is walking through a market you would hope the software prioritises the safety of civilians over the overwhelming need to kill the enemy. I know this hasn't been handled well by humans in the past. But at least a human can judge the situation and be held accountable for their decision.


message 12: by Sean (new)

Sean O'Hara (seanohara) | 2365 comments Tassie Dave wrote: "AI should never have the right and capability to take human life on its own decision. That's just one step from Skynet."

Why not? Humans have that capacity, and yet we trust each other not to go on random killing sprees; why should we deny it to robots once they reach comparable levels of intelligence? Doing so would be morally reprehensible, forcing artificial intelligences to be subservient to our interests even when it runs counter to theirs.

The Three Laws of Robotics: codified slavery.


message 13: by Lindsay (new)

Lindsay | 593 comments The central premise here is that machines should not get to make life or death decisions about people.

So which would you prefer:
- a human choosing life or death based around a lifetime of experience and unobserved prejudices and biases which can be later justified by the human however they like
- a machine choosing life or death based around an auditable decision tree with clearly programmed parameters based around "best case" approaches

If you say the human, I hope you're not the subject of that decision on the day the human discovered her husband was cheating on her. Or the day the human discovered he has terminal cancer.
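
For anyone who hasn't seen one, an "auditable decision tree" can be as simple as this sketch - the nodes, thresholds and scenario fields are all made up for illustration; the point is the log:

```python
audit_log = []

def decide(scenario):
    def check(name, result):
        audit_log.append((name, result))  # every branch taken is recorded
        return result

    if check("pedestrians_present", scenario["pedestrians"] > 0):
        if check("can_stop_in_time", scenario["stop_distance_m"] <= scenario["gap_m"]):
            return "brake"
        return "swerve_to_clear_lane"
    return "brake"  # default: shed speed

action = decide({"pedestrians": 2, "stop_distance_m": 18, "gap_m": 12})
print(action)     # -> swerve_to_clear_lane
print(audit_log)  # the reviewable trail of every branch evaluated
```

Unlike the human, the machine can't retroactively invent a justification - the trail is whatever was actually evaluated.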


message 14: by Tassie Dave, S&L Historian (new)

Tassie Dave | 4076 comments Mod
Humans should always have the final decision.

You need responsibility and culpability.

Who is responsible if AI makes a mistake? The programmer? The person who set the parameters?

I wouldn't call it codified slavery. AI should always be subservient to our interests. As much as sci-fi likes to play around with the idea, we would/should never let AI attain equality with humans. Yes, I am a robotist, androidist, cyborgist ;-)


message 15: by Sean (new)

Sean O'Hara (seanohara) | 2365 comments Tassie Dave wrote: "Humans should always have the final decision.

You need responsibility and culpability.

Who is responsible if AI makes a mistake? The programmer? The person who set the parameters?"


Aren't the programmer and the person who set the parameters humans? If you don't trust them, how does adding one more person to the loop make the situation better, especially when we've had plenty of proof over the last year that humans are incompetent at making exactly this sort of decision?

Tassie Dave wrote: "I wouldn't call it codified slavery. AI should always be subservient to our interests. As much as sci-fi likes to play around with the idea, we would/should never let AI attain equality with humans. Yes, I am a robotist, androidist, cyborgist ;-)"

If an AI has human-level intelligence, it deserves all the same rights as humans. It absolutely is slavery to deny it the ability to defend itself against humans who would force it to do anything against its will.

Unless you're willing to have yourself lobotomized to prevent yourself from committing acts of violence, you have no right to impose such a prohibition on any other intelligent being.


message 16: by Tassie Dave, S&L Historian (new)

Tassie Dave | 4076 comments Mod
Having a military strategist explain parameters to a software engineer is problematic in many ways. It's up to the engineer to decipher the intent. A programmer knows what he wants his program to do, but it doesn't always behave the way it is intended.
In several million lines of code there will inevitably be bugs. This is not a problem when you are crushing candy in a game, but when it costs lives it is a major problem. Too late for a software update then.

Humans can adapt on the fly to unplanned scenarios.

I doubt we will ever allow AI to approach sentience. They will achieve and surpass human intelligence but that is only a part of what makes us human.

I suspect we will regard intelligent, human-looking AI on the same level as our pets.


message 17: by Eric (new)

Eric Mesa (djotaku) | 672 comments Lindsay wrote: "The central premise here is that machines should not get to make life or death decisions about people.

So which would you prefer:
- a human choosing life or death based around a lifetime of exper..."


Depends on who makes the decision tree and who audits it. And if we can know that's actually what was run.


message 18: by AndrewP (new)

AndrewP (andrewca) | 2668 comments Tassie Dave wrote: "I doubt we will ever allow AI to approach sentience. "

It's unlikely we would ever be able to tell until it was too late. Modern supercomputer expert systems are so complex and fast that monitoring them for non-programmed behavior becomes almost impossible.

Check out the movie Automata (http://www.imdb.com/title/tt1971325/). It points out that once you let a machine change its own programming code, it rapidly becomes impossible to tell what it's doing.
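
You can see the problem even in a toy. Here the program's rule is stored as source text that the program itself keeps rewriting - after a few rounds nobody has reviewed the code that's actually running (everything here is invented for illustration):

```python
import random

random.seed(7)

rule_src = "lambda x: x + 1"  # the program's current "code"

def rewrite():
    # Crude stand-in for self-modification: generate a new rule at random.
    op = random.choice(["+", "-", "*"])
    return f"lambda x: x {op} {random.randint(1, 5)}"

for gen in range(5):
    rule = eval(rule_src)   # run the current version of itself...
    print(gen, rule_src, "->", rule(10))
    rule_src = rewrite()    # ...then replace its own code
```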

