The next clever idea is to stack sparse autoencoders on top of each other like a club sandwich. The hidden layer of the first autoencoder becomes the input/output layer of the second one, and so on. Because the neurons are nonlinear, each hidden layer learns a more sophisticated representation of the input, building on the previous one.
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
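
As a rough sketch of what that stacking looks like in practice, here is a minimal greedy layer-wise pipeline in PyTorch. The book does not specify an implementation; the L1 sparsity penalty, sigmoid nonlinearity, layer sizes, and training loop below are illustrative assumptions, not Domingos's method.

```python
# A minimal sketch of stacking sparse autoencoders, assuming PyTorch and
# an L1 penalty on hidden activations as the sparsity mechanism. All
# hyperparameters are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)        # nonlinear hidden representation
        return self.decoder(h), h  # reconstruction and hidden code

def train_layer(ae, data, epochs=50, sparsity_weight=1e-3, lr=1e-3):
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, h = ae(data)
        # Reconstruction error plus an L1 term that pushes most hidden
        # units toward zero, i.e. a sparse code.
        loss = nn.functional.mse_loss(recon, data) + sparsity_weight * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Greedy stacking: the hidden layer of each trained autoencoder becomes
# the input (and reconstruction target) of the next, so each level
# re-encodes the code learned by the level below it.
torch.manual_seed(0)
x = torch.rand(256, 64)   # toy input batch (hypothetical data)
sizes = [64, 32, 16]      # input dim, then successive hidden dims
codes = x
stack = []
for n_in, n_hidden in zip(sizes, sizes[1:]):
    ae = SparseAutoencoder(n_in, n_hidden)
    train_layer(ae, codes)
    with torch.no_grad():
        codes = ae.encoder(codes)  # feed the learned code upward
    stack.append(ae)
```

Each pass of the loop trains one autoencoder in isolation, then freezes it and hands its hidden code to the next, which is the layer-by-layer construction the passage describes.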