With the reinvigoration of neural networks in the 2000s, deep learning has become an extremely active area of research, one that’s paving the way for modern machine learning. In this practical book, author Nikhil Buduma provides examples and clear explanations to guide you through major concepts of this complicated field.
Companies such as Google, Microsoft, and Facebook are actively growing in-house deep-learning teams. For the rest of us, however, deep learning is still a pretty complex and difficult subject to grasp. If you’re familiar with Python, and have a background in calculus, along with a basic understanding of machine learning, this book will get you started.
- Examine the foundations of machine learning and neural networks
- Learn how to train feed-forward neural networks
- Use TensorFlow to implement your first neural network
- Manage problems that arise as you begin to make networks deeper
- Build neural networks that analyze complex images
- Perform effective dimensionality reduction using autoencoders
- Dive deep into sequence analysis to examine language
- Learn the fundamentals of reinforcement learning
It's one of the few books that combines practical and theoretical information in a very balanced way. The first half of the book was very easy for me to follow. But I should add that before this book, I had finished Andrew Ng's 16-week Machine Learning course, read a couple of other books on data science, and done some basic math and coding in various ML/AI areas.
Up to Convolutional Neural Networks (roughly 50% of the book), there is a very good overview of what gradient descent is and how to implement and use it. After CNNs, things get more serious, and the book moves on to relatively recently developed, production-level, state-of-the-art models (like the basic model powering Google Translate). The last chapter is about deep reinforcement learning (DeepMind's astonishing model for Atari games) and ends with very recent topics like Asynchronous Advantage Actor-Critic (A3C) agents and UNREAL. I would have been happier to see more computer-vision-related models and problems instead of sentiment and sequence analysis, but that's purely a personal preference. I strongly recommend this book if you have an interest in deep learning.
If you expect code examples, you will be disappointed. This book is very good at covering fundamentals, which I like. I suggest reading it as a supplement to other deep learning books.
Strengths:
- Gives a really good overview of computer vision history and why traditional machine learning methods don't perform as well as convolutional networks
- The section on gradient descent is really well explained and dispels some myths around it (even though there is no math)
- Gives a clear and intuitive idea of how convolutional layers can capture patterns in images
- Includes attention methods for NLP
Weaknesses:
- Lacks math and precise definitions (though that is "ok" if the book is aimed at beginners)
- Uses TensorFlow for all examples, which becomes hard and cumbersome for beginners
- Doesn't discuss other frameworks (some of the examples could have been written on top of TensorFlow using Keras or TFLearn, or in PyTorch)
- Code snippets are long, hard to follow, and sometimes contain errors
- Some images have such a small font size that they are impossible to read
Good overview of a wide variety of deep learning approaches to ML/AI. The book goes over several recent research papers, but it would have been helpful to include a more in-depth overview of transformers given their central role in LLMs. Lots of details on the math behind various approaches, but it may be difficult to use this to get a sense of the "big picture" unless you already have some knowledge in this area.
This book was insightful and overall a fun read. The principles of deep learning are fundamental to AI. And while these techniques are still used and taught today, reading this book feels like reading a piece of history. Shortly after publication, focus shifted to concepts like transformers and attention, which led to the development of the LLMs and GPTs that are now viewed as the state of the art.
I have not read chapter 8 yet, but it looks like a good starting point for learning OpenAI Gym. This book does not provide much detail about each algorithm; it basically just mentions what it is. Therefore, reading multiple books at the same time is a great help in understanding how deep learning works. Some of the code syntax is old and should be corrected. However, it is definitely worth the time to read the example code.
This book strikes a good balance between the DL textbooks, which are quite dense, and the many practitioners' guides, which have code examples but are light on theory and math. There are equations here as well as code. I've been checking this one out from the library, but I'm going to go ahead and order my own copy.
For me, it's slightly complicated. The math basics are explained in a rather poor and boring manner. Another disadvantage is the lack of real-world examples; it's a challenge to connect pure formulas with high-level ML algorithms. I agree the book might be useful, but I don't like such an academic style. As a result, this is only two stars. I can't give more.
Chapters are of varying quality, in particular the last one on deep reinforcement learning (written by a contributing author) doesn't jibe well with the rest of the book.
I have finished all the chapters released so far; there have been three in total. The material is a little rough, but it is an early release. One should have a basic understanding of statistics and probability before attempting to digest the material. Looking forward to the additional chapters.