They called it “backpropagation,” named for its defining feature: with each instance of training, a measure of how far the network’s response to a given stimulus falls from the correct answer ripples backward through the network, from the output layer toward the input, layer by layer.
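
The mechanism is easy to see in miniature. Below is a minimal sketch in Python (with NumPy), assuming a tiny two-layer network trained on the XOR task; the names here (W1, W2, sigmoid, lr) are illustrative, not drawn from any particular implementation. The forward pass carries the stimulus through to an answer; the backward pass carries the error back through the layers, nudging each layer’s weights along the way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic task a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden, hidden -> output.
# (Sizes and initialization are illustrative choices.)
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate, chosen for this toy example
for step in range(10000):
    # Forward pass: the stimulus flows input -> hidden -> output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error at the output: how wrong the response was,
    # scaled by the sigmoid's slope at that point.
    err_out = (out - y) * out * (1 - out)

    # Backward pass: the error ripples back, layer by layer.
    err_h = (err_out @ W2.T) * h * (1 - h)

    # Each layer's weights are nudged by its share of the blame.
    W2 -= lr * h.T @ err_out
    W1 -= lr * X.T @ err_h

print(out.round(2))  # typically approaches [[0], [1], [1], [0]]
```

The single backward line, computing err_h from err_out, is the “cascade” the name refers to: in a deeper network that line repeats once per layer, passing the blame signal steadily back toward the input.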

