Mark Gerstein

Training eventually comes down to this: Provide the network with some set of inputs, figure out what the expected output should be (either because we humans have annotated the data and know what the output should be or because, in types of learning called self-supervised, the expected output is some known variation of the input itself), calculate the loss, calculate the gradient of the loss, update the weights/biases, rinse and repeat.
Why Machines Learn: The Elegant Math Behind Modern AI
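The loop the quoted passage describes — forward pass, compare against the expected output, compute the loss and its gradient, update the weights and biases, repeat — can be sketched in a few lines. This is a minimal illustrative example, not anything from the book itself: a single linear neuron (one weight, one bias) fit by hand-derived gradient descent on a squared-error loss, with toy data standing in for the annotated inputs and expected outputs.

```python
# Toy supervised data: the "expected output" is y = 2x + 1,
# playing the role of human-annotated targets in the quote.
inputs = [0.0, 1.0, 2.0, 3.0, 4.0]
targets = [2 * x + 1 for x in inputs]

w, b = 0.0, 0.0   # weight and bias, initialized before training
lr = 0.05         # learning rate for the update step

for epoch in range(500):
    # 1. Provide the network with the inputs (forward pass).
    preds = [w * x + b for x in inputs]
    # 2. Calculate the loss against the expected outputs
    #    (mean squared error, chosen here for illustration).
    loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(inputs)
    # 3. Calculate the gradient of the loss with respect to w and b
    #    (derived by hand for this one-neuron case; real frameworks
    #    do this automatically via backpropagation).
    grad_w = sum(2 * (p - t) * x
                 for p, t, x in zip(preds, targets, inputs)) / len(inputs)
    grad_b = sum(2 * (p - t)
                 for p, t in zip(preds, targets)) / len(inputs)
    # 4. Update the weights/biases against the gradient; rinse and repeat.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should approach the true 2 and 1
```

After a few hundred passes the parameters settle near the values that generated the data, which is the whole point of the loop: each gradient step nudges the weights toward outputs that better match the expected ones.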