
Mistaking a teapot shape for a golf ball because of its surface features is one striking example from a recent open-access paper:
The networks did “a poor job of identifying such items as a butterfly, an airplane and a banana,” according to the researchers. The explanation they propose is that “Humans see the entire object, while the artificial intelligence networks identify fragments of the object.” — News, “Researchers: Deep Learning vision is very different from human vision” at Mind Matters
“To see life steadily and see it whole”* doesn’t seem to be popular among machines.
*(Zen via Matthew Arnold)
See also: Can an algorithm be racist?
Copyright © 2019 Uncommon Descent.
Published on January 08, 2019 15:23