Steven Pinker

“The danger, sometimes called the Value Alignment Problem, is that we might give an AI a goal and then helplessly stand by as it relentlessly and literal-mindedly implemented its interpretation of that goal, the rest of our interests be damned.”

Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress

This Quote Is From

Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker
31,946 ratings, 3,494 reviews