Fundamental topics in machine learning are presented along with the theoretical and conceptual tools needed for the discussion and justification of algorithms.
This graduate-level textbook introduces fundamental concepts and methods in machine learning. It describes several important modern algorithms, provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application. The authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics.
Foundations of Machine Learning fills the need for a general textbook that also offers theoretical details and an emphasis on proofs. Certain topics that are often treated with insufficient attention are discussed in more detail here; for example, entire chapters are devoted to regression, multi-class classification, and ranking. The first three chapters lay the theoretical foundation for what follows, but each remaining chapter is mostly self-contained. The appendix offers a concise probability review, a short introduction to convex optimization, tools for concentration bounds, and several basic properties of matrices and norms used in the book.
The book is intended for graduate students and researchers in machine learning, statistics, and related areas; it can be used either as a textbook or as a reference text for a research seminar.
Mehryar Mohri is Professor of Computer Science at New York University's Courant Institute of Mathematical Sciences and a Research Consultant at Google Research.
It's a good book, provided you already understand a bit of machine learning. If you're starting from zero and looking for a book to learn from, this one will be hard for you. I suggest beginning with Shai Ben-David's Understanding Machine Learning and, while reading it, also watching the author's own lecture videos on YouTube. This book is not at all suitable for starting out; it is written in difficult English. 😜😁 Once you've finished that one and gotten the hang of things, come back to this book. Also watch Jadi's videos on Maktabkhooneh alongside it.
I did not like the texture of the hardcover version's paper. Reading Mohri was overall very difficult and painful: while the concepts were explained well, the paper stock was too glossy and thick for this book to be a real page-turner.
-1 for explaining Rademacher complexity before the VC dimension, and for not motivating the VC dimension with the "No-Free-Lunch" theorem. I had to read Shai Shalev-Shwartz's book to understand the VC dimension.
-1 for the "feels bad" paper stock. Shai Shalev-Shwartz's book not only motivates Rademacher complexity well, but also has GREAT paper stock.
On balance, this is a clear, thorough and comprehensive introduction to the foundations of machine learning. It is an excellent textbook.
Structurally, the book is clear, beginning with PAC learning and other research into learnability, proceeding to SVMs and kernels, and thence to other, more complex topics: multi-class classification, Bayesian statistics, Markov models.
Ultimately, though, this book is only a textbook. It is a reference, not an instructor. The proofs are clearly presented and easily consulted, but, like most textbooks, this work is a supplement to a lecture series, not a replacement.