Dramatically accelerate the building of complex models using PyTorch to extract the best performance from any computing environment
Key Features
- Reduce the model-building time by applying optimization techniques and approaches
- Harness the computing power of multiple devices and machines to boost the training process
- Focus on model quality by quickly evaluating different model configurations
- Purchase of the print or Kindle book includes a free PDF eBook

Book Description
Penned by an expert in High-Performance Computing (HPC) with over 25 years of experience, this book is your guide to enhancing the performance of model training using PyTorch, one of the most widely adopted machine learning frameworks.
You’ll start by understanding how model complexity impacts training time before discovering distinct levels of performance tuning to expedite the training process. You’ll also learn how to use a new PyTorch feature to compile the model and train it faster, as well as how to benefit from specialized libraries that optimize the training process on the CPU. As you progress, you’ll gain insights into building an efficient data pipeline that keeps accelerators busy during the entire training execution, and explore strategies for reducing model complexity and adopting mixed precision to minimize computing time and memory consumption. The book will get you acquainted with distributed training and show you how to use PyTorch to harness the computing power of multicore systems and multi-GPU environments available on single or multiple machines.
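The compilation feature mentioned above refers to torch.compile, introduced in PyTorch 2.0. Below is a minimal sketch of how it is typically invoked; the toy model and tensor shapes are illustrative assumptions, not examples taken from the book.

```python
import torch
import torch.nn as nn

# Toy model and batch size; these are illustrative, not code from the book.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# torch.compile (PyTorch 2.x) returns an optimized version of the model.
# The first call pays a one-time compilation cost; later calls typically run faster.
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
output = compiled_model(x)  # compiled forward pass
print(output.shape)         # torch.Size([32, 10])
```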
By the end of this book, you’ll be equipped with a suite of techniques, approaches, and strategies to speed up training, so you can focus on what really matters—building stunning models!
What you will learn
- Compile the model to train it faster
- Use specialized libraries to optimize the training on the CPU
- Build a data pipeline to boost GPU execution
- Simplify the model through pruning and compression techniques
- Adopt automatic mixed precision without penalizing the model's accuracy (sketched below)
- Distribute the training step across multiple machines and devices

Who this book is for
This book is for intermediate-level data scientists who want to learn how to leverage PyTorch to speed up the training process of their machine learning models by employing a set of optimization strategies and techniques. To make the most of this book, familiarity with the basic concepts of machine learning, PyTorch, and Python is essential. However, no prior understanding of distributed computing, accelerators, or multicore processors is required.
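As a taste of the automatic mixed precision topic listed above (covered in the Adopting Mixed Precision chapter), here is a minimal sketch using PyTorch's torch.autocast and gradient scaling; the model, optimizer, and random data are assumptions made for illustration, not code from the book.

```python
import torch
import torch.nn as nn

# Illustrative model, optimizer, and random data; shapes and hyperparameters
# are assumptions for this sketch, not values from the book.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()

# Run the forward pass and loss computation in reduced precision where safe.
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = loss_fn(model(inputs), targets)

# Scale the loss to avoid gradient underflow in float16, then step and update.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```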
Table of Contents
1. Deconstructing the Training Process
2. Training Models Faster
3. Compiling the Model
4. Using Specialized Libraries
5. Building an Efficient Data Pipeline
6. Simplifying the Model
7. Adopting Mixed Precision
8. Distributed Training at a Glance
9. Training with Multiple CPUs
10. Training with Multiple GPUs
11. Training with Multiple Machines
I think this is a great book that will be useful to anyone who wants to improve their understanding of training and optimizing PyTorch models.
The author starts with an overview of the training process and rapidly goes deeper, explaining the technical details and intricacies of tuning everything, starting with the model itself and ending with the environment it runs in.
After that, the focus moves to the training itself: we learn about compiling models, pruning them, processing data faster, and other approaches.
Finally, the author shares approaches to distributed training, whether on a single machine or on a cluster.
I liked the book's style: the interwoven switching between explanations in simpler terms and technical details, code examples, notes and links to external resources, and quizzes that checked my understanding of the text. Multiple times, after reading a section, I thought, "Hey, but what about %approach name%?" and then read about that exact approach in the next section! And I appreciate that all of the code is shared on GitHub.
This book was captivating, and I recommend it. It will be useful even for experienced PyTorch practitioners.