Boosting repeatedly applies the same classifier to the data, using each new model to correct the previous ones’ mistakes. It does this by assigning weights to the training examples; the weight of each misclassified example is increased after each round of learning, causing later rounds to focus more on it. The name boosting comes from the notion that this process can boost a classifier that’s only slightly better than random guessing, but consistently so, into one that’s almost perfect. Metalearning is remarkably successful, but it’s not a very deep way to combine models. It’s also expensive, …
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
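The reweighting loop the passage describes is the heart of AdaBoost. Below is a minimal sketch, assuming one-dimensional data, labels in {-1, +1}, and simple threshold "stumps" as the weak classifier; all function names and parameters here are illustrative, not the book's:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """Toy AdaBoost on 1-D data with labels in {-1, +1}.
    Misclassified examples get larger weights after each round,
    so later stumps focus on the hard cases."""
    n = len(X)
    w = np.full(n, 1.0 / n)              # start with uniform example weights
    ensemble = []                         # list of (alpha, threshold, sign)
    for _ in range(rounds):
        best = None
        # pick the stump minimizing *weighted* error: sign * (+1 if x > t else -1)
        for t in X:
            for sign in (1, -1):
                pred = sign * np.where(X > t, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)       # stump's vote weight
        pred = sign * np.where(X > t, 1, -1)
        w *= np.exp(-alpha * y * pred)              # boost weight of mistakes
        w /= w.sum()                                # renormalize to a distribution
        ensemble.append((alpha, t, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of all the stumps."""
    score = sum(a * s * np.where(X > t, 1, -1) for a, t, s in ensemble)
    return np.sign(score)
```

A weak learner only slightly better than chance keeps `err` below 0.5, so `alpha` stays positive and each round's vote pushes the ensemble's training error down, which is the "boosting" effect the quote refers to.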