An advanced book for researchers and graduate students working in machine learning and statistics who want to learn about deep learning, Bayesian inference, generative models, and decision making under uncertainty.
An advanced counterpart to Probabilistic Machine Learning: An Introduction, this high-level textbook provides researchers and graduate students with detailed coverage of cutting-edge topics in machine learning, including deep generative modeling, graphical models, Bayesian inference, reinforcement learning, and causality. This volume puts deep learning into a larger statistical context and unifies approaches based on deep learning with ones based on probabilistic modeling and inference. With contributions from top scientists and domain experts from places such as Google, DeepMind, Amazon, Purdue University, NYU, and the University of Washington, this rigorous book is essential to understanding the vital issues in machine learning.
Kevin P. Murphy is a Research Scientist at Google. Previously, he was Associate Professor of Computer Science and Statistics at the University of British Columbia.
This is the 2nd book of the Probabilistic Machine Learning series, and it covers more advanced and state-of-the-art topics. However, this is not a book for everyone. Here are some of my thoughts after reading the whole series.
1. Who is the series written for? I am not sure who the ideal target readers are, but I am clear about who would suffer (like myself):
1.1 Readers without a solid grounding in linear algebra, calculus, probability, and statistical inference. (Though the book does include a few chapters on mathematical foundations, I would not recommend learning all of this from scratch there.)
1.2 Readers without intermediate knowledge of, or hands-on experience with, ML. (A reader who has never run a simple linear regression, neural network, or tree model before will not understand 90% of the material.)
1.3 Readers without a broad view across statistics (inference, Bayesian methods, time series, causal inference) and basic ML/DL. You don't have to be an expert in all areas (few can be), but you are expected to know the basic concepts of most of them.
I strongly recommend that readers start with introductory texts such as ESL and other deep learning books/tutorials first. This series is not for beginners; it's for ML/DL veterans.
2. How to best use the series? I view the series as a user guide for ML practitioners. It covers almost all topics in the ML/DL area (prediction, inference, generation, discovery), and the material is up to date (as of early 2024). The best way to use the book is:
a. You are trying to solve a particular problem.
b. You suspect it may be related to some topic XXX and want to learn more details or its connections to other approaches.
c. Look up XXX in the book; it will give you a relatively full picture of XXX, with references and pointers in the right directions.
d. Check out the references, or search for the topics you are interested in, and dig deeper into the area by yourself.
So in general, the book works best as a quick introduction to a particular topic; its greatest value is showing you the connections between that topic and others, so you can learn the basics in 1-2 hours.
3. What to expect from reading the series? The series is designed for repeated reading, and the best way to read it is with a mission. For example, suppose I just want to learn what a VAE is and how it works. Reading with that goal, you solve your particular problem and also get a view of how it fits into the broader landscape of ML. Then check out other resources (GitHub repos, blogs) to code your own solutions from there.