Many parallel computer architectures are especially suited to particular classes of applications, but only a few are equally well suited to general-purpose programs. Much research effort is therefore invested in compiler techniques that make parallel machines easier to program. This book presents methods for automatic parallelization, so that programs need not be tailored to specific architectures; the focus is on the fine-grain parallelism offered by most new microprocessor architectures. The book addresses compiler writers, computer architects, and students by demonstrating the manifold complex relationships between architecture and compiler technology.
Amazon 2008-11-24. For anyone interested in computer architecture, this is an excellent coverage of various processor esoterica spread across fifteen short chapters. I do mean short -- as the Springer Lecture Notes tend to do (and as the series title suggests), this reads like the conference proceedings of a particularly good supercomputing retrospective, and would be difficult to tackle without a prefatory study of Patterson and Hennessy's hyperclassic Computer Architecture: A Quantitative Approach. I recognized any number of things I'd known for years but had lacked a workable citation for -- my personal favorite: the systolic array as an example of the elusive MISD machine in Flynn's taxonomy (see also the SHIFT machine of Nakamura, though I have no clue whatsoever where I picked that one up). There's a wealth of historical data, something I treasure and rarely find in techcore.
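For readers who haven't met the systolic-array-as-MISD argument before, a toy simulation may help. The sketch below (my own illustration, not from the book) models a 1D systolic FIR filter: each processing element holds one fixed weight -- effectively its own fixed "instruction" -- while a single data stream marches through the array, which is one common reading of why systolic arrays get cited as MISD machines.

```python
# Toy 1D systolic array computing a convolution y[n] = sum_k w[k] * x[n-k].
# Each processing element (PE) holds one fixed weight and one data register;
# the single input stream shifts through all PEs in lockstep, and each PE
# applies its own fixed operation to it -- the MISD reading.

def systolic_fir(weights, xs):
    taps = [0] * len(weights)        # one data register per PE
    ys = []
    # Pad the input so the pipeline drains completely.
    for x in xs + [0] * (len(weights) - 1):
        taps = [x] + taps[:-1]       # data marches one PE to the right per cycle
        # Conceptually, every PE multiplies in parallel; the products are summed.
        ys.append(sum(w * t for w, t in zip(weights, taps)))
    return ys

# Full convolution of [1, 2, 3] with [1, 1]:
print(systolic_fir([1, 1], [1, 2, 3]))   # → [1, 3, 5, 3]
```

Whether this counts as "true" MISD is itself debated in the literature, which is part of why the citation is elusive.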