Mark Gerstein

To recap what we know so far, in the late 1950s and early ’60s, Frank Rosenblatt and Bernard Widrow devised single-layer neural networks and the algorithms to train them, making these networks the focus of machine learning for almost a decade. Then, in 1969, Minsky and Papert published their book, Perceptrons, in which they elegantly proved that single-layer neural networks had limitations, while insinuating (without proof) that multi-layer neural networks would likely be similarly useless, effectively killing that field of research and bringing about the first AI winter.
Why Machines Learn: The Elegant Math Behind Modern AI
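To make the highlighted limitation concrete, here is a minimal sketch (my own illustration, not from the book) of the classic Rosenblatt perceptron learning rule. It converges on the linearly separable AND function but can never converge on XOR, which is the kind of single-layer limitation Minsky and Papert formalized. The function names and training loop below are assumptions for illustration only.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Rosenblatt-style perceptron rule; returns weights, bias, and
    whether a perfect (zero-error) separation was found."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if (w @ xi + b) > 0 else 0
            if pred != target:
                update = lr * (target - pred)
                w += update * xi      # nudge weights toward the target
                b += update
                errors += 1
        if errors == 0:               # all points classified correctly
            return w, b, True
    return w, b, False

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])        # linearly separable: converges
y_xor = np.array([0, 1, 1, 0])        # not linearly separable: never converges

for name, y in [("AND", y_and), ("XOR", y_xor)]:
    w, b, ok = train_perceptron(X, y)
    print(f"{name}: converged={ok}, weights={w}, bias={b}")
```

Running this prints converged=True for AND and converged=False for XOR, a small-scale version of the result that helped stall single-layer perceptron research.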