This was going to be the second edition of my original book, “Getting Started with Deep Learning”. However, so much has changed about the book and the field since its initial publication that I decided a new name was more appropriate.

It is now 2020 and deep learning is still going strong. In fact, I believe it is accelerating in its evolution. The techniques are now widely used by companies, and the algorithms are starting to do things that are truly amazing. As is inevitable with progress, the algorithms have also become more complicated, with deeper and more resource-intensive networks. This is best exemplified by one of the newest deep learning architectures: the Transformer. Transformers are, for me, the first algorithm I was not able to run on a laptop. They truly require a machine learning “war machine”, with lots of GPU power, memory, and so on.

The code has grown more complicated too; a little too much, in fact, and the libraries are starting to abstract away too much of it. That is something I am not crazy about, as I like writing code from scratch, and I never use a deep learning algorithm until I understand every detail about it. My quest for understanding always makes me gravitate away from abstracting libraries and oversimplifications. As such, I have great admiration for the static computational graph and the TensorFlow low-level API. I feel that I can only truly understand a deep learning algorithm when I implement it in the low-level API with a static graph.

Therefore, the goal of this book is to help you learn, and better understand, how to write deep learning algorithms from scratch using the TensorFlow low-level API and the static graph. This is a book for everyone, from those just starting out in deep learning to those with more advanced knowledge. The book starts with basic linear regression and builds with every chapter toward more advanced algorithms such as CNNs for RGB images, RNNs, encoders, GANs, Q-learning, and Transformers, to name a few.

I hope you enjoy the book.
I sure have enjoyed writing it.