Artificial intelligence: Machines do not see objects as wholes

Mistaking a teapot shape for a golf ball because of its surface features is one striking example from a recent open-access paper:

The networks did “a poor job of identifying such items as a butterfly, an airplane and a banana,” according to the researchers. The explanation they propose is that “Humans see the entire object, while the artificial intelligence networks identify fragments of the object.” News, “Researchers: Deep Learning vision is very different from human vision” at Mind Matters
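A reader can get a feel for the behavior being described with any off-the-shelf image classifier. Below is a minimal sketch, not the paper's code: it feeds a single image to a pretrained ResNet-50 from torchvision and prints the top five guesses. The file name is a placeholder, and ResNet-50 is an assumed stand-in for the deep networks the researchers actually tested.

```python
# Minimal sketch (not the paper's code): ask an off-the-shelf ImageNet
# classifier what it sees in one image and print its top-5 guesses.
# The image path is a placeholder; ResNet-50 is an assumed stand-in for
# the networks the researchers tested.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize for this model

img = Image.open("dimpled_teapot.jpg").convert("RGB")  # placeholder image
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

for p, idx in zip(*torch.topk(probs, 5)):
    print(f"{weights.meta['categories'][int(idx)]}: {p.item():.1%}")
```

If the label printed first reflects the object's surface pattern rather than its overall shape, that is the fragment-over-whole behavior the researchers describe.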

“To see life steadily and see it whole”* doesn’t seem to be popular among machines.

*(Zen via Matthew Arnold)

See also: Can an algorithm be racist?
