No Need for Alarm About How Neural Nets Work

Albert Wenger writes that concerns about “black box” algorithms are overwrought. (See here and here for more about these concerns.) It’s okay, Wenger says, if we can’t follow or audit the logic of the machines, even in life-and-death contexts like healthcare or policing. We often have that same lack of insight into the way humans make decisions, he says, and so perhaps we can adapt our existing error-prevention techniques to the machines:

It all comes down to understanding failure modes and guarding against them.

For instance, human doctors make wrong diagnoses. One way we guard against that is by getting a second opinion. Turns out we have used the same technique in complex software systems. Get multiple systems to compute something and act only if their outputs agree. This approach is immediately and easily applicable to neural networks.

Other failure modes include hidden biases and malicious attacks (manipulation). Again these are no different than for humans and for existing software systems. And we have developed mechanisms for avoiding and/or detecting these issues, such as statistical analysis across systems.
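The “second opinion” safeguard in the quote above can be sketched in a few lines. This is a minimal illustration, not Wenger’s implementation: the two “models” here are hypothetical stand-ins for independently trained networks, and the deferral rule (return nothing and escalate to a human on any disagreement) is one reasonable reading of “act only if their outputs agree.”

```python
# Sketch of the "act only if outputs agree" safeguard. The models and
# thresholds below are hypothetical stand-ins for real classifiers.

def agree_or_defer(predictions):
    """Return the shared prediction if all systems agree, else None
    (meaning: defer to a human instead of acting automatically)."""
    first = predictions[0]
    if all(p == first for p in predictions[1:]):
        return first
    return None  # disagreement -> escalate for human review

# Two hypothetical diagnostic models scoring the same input:
model_a = lambda x: "benign" if x < 0.5 else "malignant"
model_b = lambda x: "benign" if x < 0.6 else "malignant"

print(agree_or_defer([model_a(0.3), model_b(0.3)]))    # both say "benign"
print(agree_or_defer([model_a(0.55), model_b(0.55)]))  # disagree -> None
```

The value of the pattern comes from the systems being independent (different architectures, training data, or vendors), so that their failure modes are unlikely to coincide.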
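The “statistical analysis across systems” idea for catching hidden biases can likewise be sketched. Again a hedged illustration under assumed details: the 10% divergence threshold and the 0/1 decision encoding are illustrative choices, not anything the quote specifies.

```python
# Sketch of a cross-system statistical check: compare how often two
# systems output a positive decision on the same batch of cases and
# flag a large divergence as a possible hidden bias.
# The threshold is illustrative, not prescribed by the source.

def positive_rate(decisions):
    """Fraction of 1s (positive decisions) in a batch."""
    return sum(decisions) / len(decisions)

def flag_divergence(dec_a, dec_b, threshold=0.10):
    """Flag if the two systems' positive-decision rates differ by
    more than `threshold` on the same batch of cases."""
    gap = abs(positive_rate(dec_a) - positive_rate(dec_b))
    return gap > threshold

# Hypothetical batch: 1 = "approve", 0 = "deny" on ten identical cases.
system_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 70% approve
system_b = [1, 0, 0, 0, 0, 1, 0, 0, 1, 0]  # 30% approve
print(flag_divergence(system_a, system_b))  # True: rates differ by 0.40
```

In practice one would run a proper significance test rather than a fixed threshold, and would also break the rates down by subgroup, but the shape of the check is the same: disagreement in aggregate behavior is a signal to investigate, just as it would be between two human decision-makers.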





Published on May 01, 2017 09:35