This work covers everything necessary to build a competitive, advanced compiler for parallel or high-performance computers. It opens with a review of basic concepts and algorithms such as graphs, trees, and matrix algebra. The methods focus on analysis and synthesis, where analysis extracts information from the source program. The book also surveys the restrictions and problems posed by the languages commonly used on such machines.
Pretty heavy in the errata department, and doesn't provide much beyond the original papers in terms of depth or interpretation. Read the original papers on loop-dependence testing if you need the detail. Reach for Kennedy/Allen or Muchnick if you need the hand-holding. Not a bad book, but not much was lost when it went out of print.
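If you do chase the dependence-testing papers, the GCD test is a good first stop: one of the simplest tests the book surveys. Here's a minimal sketch in C, assuming subscripts of the form a*i + b; all names and the example values are illustrative, not the book's code.

```c
#include <stdio.h>

/* GCD dependence test (sketch). For a write to A[a*i + b] and a read
   of A[c*i + d] in the same loop, a dependence requires integer
   solutions to a*i1 - c*i2 = d - b, which exist only if
   gcd(a, c) divides (d - b). Loop bounds are ignored, so the test
   can only disprove a dependence, never confirm one. */

static int gcd(int x, int y) {
    while (y != 0) { int t = y; y = x % y; x = t; }
    return x;
}

/* Returns 1 if a dependence is possible, 0 if provably absent. */
int gcd_test(int a, int b, int c, int d) {
    int g = gcd(a < 0 ? -a : a, c < 0 ? -c : c);
    if (g == 0)               /* both coefficients zero: constant subscripts */
        return b == d;
    return (d - b) % g == 0;
}

int main(void) {
    /* for (i = ...) { A[2*i] = ...; ... = A[2*i + 1]; }
       gcd(2, 2) = 2 does not divide 1, so the accesses never overlap. */
    printf("%d\n", gcd_test(2, 0, 2, 1));  /* prints 0: no dependence */
    return 0;
}
```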
(earlier) Recommended to us by Santosh Pande in CS6241, Compiler Design. A bit off the beaten path of the Rice-Cornell-UIUC Compiler Mafia: Wolfe's at UKansas, although his pedigree includes time at UOregon, another pillar of the seemingly tight-knit compiler community. UOregon was also the prior home of a fine, fine man and GT graduate, Yannis Smaragdakis (it turns out he's at UMass now), who helped out on the k-rad FC++ and LC++ projects along with Brian McNamara, another Yellow Jacket (now at Microsoft, and on Goodreads!). And what exactly is up with that F-1 racer on the front cover? Still, likely to have some good info.
Michael Wolfe's loop optimizations are foundational for high-performance computing. You'll find references to 'the Racecar book' in the LLVM source code, for example. Absolutely worth a read. In particular, consider the section on HPF (High Performance Fortran) for language-design decisions that put the parallelizability of a computation at the forefront.
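For a taste of what "foundational" means here, a minimal sketch of loop interchange in C, one of the transformations the book treats at length. The array names and sizes below are illustrative assumptions, not taken from the book.

```c
#define N 1024
double a[N][N], b[N][N];

/* Before interchange: j outermost means each inner iteration strides
   across rows of a row-major C array (N doubles apart): poor locality. */
void before(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            a[i][j] = 2.0 * b[i][j];
}

/* After interchange: the inner loop now walks contiguous memory,
   which is cache-friendly and straightforward to vectorize. */
void after(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = 2.0 * b[i][j];
}
```

Interchange is legal here because no dependence crosses the two loops; the dependence framework the book builds up is exactly what lets a compiler prove that before reordering.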