Ensemble methods have been called the most influential development in Data Mining and Machine Learning in the past decade. They combine multiple models into one that is usually more accurate than the best of its components. Ensembles can provide a critical boost to industrial challenges -- from investment timing to drug discovery, and from fraud detection to recommendation systems -- where predictive accuracy is more vital than model interpretability.
Ensembles are useful with all modeling algorithms, but this book focuses on decision trees to explain them most clearly. After describing trees and their strengths and weaknesses, the authors provide an overview of regularization -- today understood to be a key reason for the superior performance of modern ensembling algorithms. The book continues with a clear description of two recent techniques: Importance Sampling (IS) and Rule Ensembles (RE). IS reveals classic ensemble methods -- bagging, random forests, and boosting -- to be special cases of a single algorithm, thereby showing how to improve their accuracy and speed. REs are linear rule models derived from decision tree ensembles. They are the most interpretable version of ensembles, which is essential to applications such as credit scoring and fault diagnosis. Lastly, the authors explain the paradox of how ensembles achieve greater accuracy on new data despite their (apparently much greater) complexity.
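To make the "special cases of a single algorithm" idea concrete: in the importance-sampling view, bagging is the case where each base learner is fit to a uniform bootstrap sample of the training data and the ensemble is an equally weighted average. The book's examples are in R; the sketch below is my own minimal Python illustration (a hand-rolled regression stump plays the role of the decision tree), not code from the book.

```python
import random

def fit_stump(data):
    """Fit a one-split regression stump on (x, y) pairs, choosing the
    threshold that minimizes squared error -- a stand-in for a tree."""
    best = None
    for t in sorted({x for x, _ in data}):
        left = [y for x, y in data if x <= t]
        right = [y for x, y in data if x > t]
        if not left or not right:
            continue  # no valid split at this threshold
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def bagged_ensemble(data, n_learners=25, seed=0):
    """Bagging as an ISLE special case: uniform sampling with
    replacement, then an equally weighted average of the base models."""
    rng = random.Random(seed)
    stumps = [fit_stump([rng.choice(data) for _ in data])
              for _ in range(n_learners)]
    return lambda x: sum(s(x) for s in stumps) / len(stumps)

# Toy step-function data: y jumps from 0 to 1 at x = 5.
data = [(x, 0.0 if x < 5 else 1.0) for x in range(10)]
model = bagged_ensemble(data)
print(model(2), model(8))  # averaged predictions near 0 and near 1
```

Random forests and boosting differ from this sketch only in how the samples are drawn and the base models weighted, which is exactly the unification the ISLE framework makes.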
This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques.
Ensemble Methods in Data Mining comprises six chapters:

1. Ensembles Discovered
2. Predictive Learning and Decision Trees
3. Model Complexity, Model Selection and Regularization
4. Importance Sampling and the Classic Ensemble Methods
5. Rule Ensembles and Interpretation Statistics
6. Ensemble Complexity
The book also includes two technical Appendices and a solid bibliography. While there is plenty of valuable information here, a good portion of the book doesn't explicitly deal with its title subject (ensemble methods). To that point, one of the two Forewords notes the following: "The development of ensemble methods is by no means complete. Among the most interesting open challenges are a more thorough understanding of the mathematical structures, mapping of the detailed conditions of applicability, finding scalable and interpretable implementations, dealing with incomplete or imbalanced training samples, and evolving models to adapt to environmental changes" (xvii). I would have liked to see a more thorough examination of these theoretical areas. The issue isn't that the information is not relevant; rather, it could have been framed better (with additional editing). It is still a valuable reference, and I look forward to digging further into the bibliography.
Some of the R code doesn't work, but the content is good. I had never realized that all the ensemble methods and algorithms could be classified as special cases of one overall algorithm, which the authors call the Importance Sampling Learning Ensembles (ISLE) framework. It seems like everything I read that has John Elder's name on it turns out to be good.