This is a concise, insightful introduction to the field of numerical linear algebra. The clarity and eloquence of the presentation make it popular with teachers and students alike. The text aims to expand the reader's view of the field and to present standard material in a novel way. All of the most important topics in the field are covered with a fresh perspective, including iterative methods for systems of equations and eigenvalue problems and the underlying principles of conditioning and stability. The presentation is in the form of 40 lectures, each of which focuses on one or two central ideas. The unity between topics is emphasized throughout, with no risk of getting lost in details and technicalities. The book breaks with tradition by beginning with the QR factorization - an important and fresh idea for students, and the thread that connects most of the algorithms of numerical linear algebra.
TL;DR: It is very good, with simple, direct proofs, but it doesn't go deep on algorithms and edge cases for solving practical systems of equations and other problems. Almost all arguments are proved in their most elementary form, which makes them easier to understand without costing the authors their conciseness. Compared with the numerical linear algebra part of Numerical Linear Algebra and Optimization, this book has more theoretical depth, yet it lacks that book's broad examination of algorithms. For example, when solving different types of linear systems of equations, Gill et al. provide an in-depth explanation of edge cases and of matrices of different sizes and ranks, with a separate treatment of the full-rank case, whereas Trefethen et al. simply assume there are more rows than columns and ignore the other cases; the same goes for least squares problems. On the other hand, this book has a detailed examination of the condition number of the least squares problem, which Gill et al.'s book only skims over. It also has a big section on iterative methods and another dedicated to the eigenvalue problem, which is great.
This is an excellent 2nd book on linear algebra, after a traditional book like Strang (or whatever your professor used in college). It presents an unusually intuitive and geometric interpretation of important operations like SVD and QR decomposition, which is just the thing for thinking about models based on these algorithms.
The second half of the book discusses conditioning and which algorithms are more susceptible to instability; data science focused readers who are not writing their own models may want to skim through that part.
Disclosure: One of the authors is a friend of mine, but he didn't get me a discount on the book. ;)
Pretty much all of Nick Trefethen's writing is superb, and this book is no exception. In particular, there are many insights to be gained through the exercises. The book is written in an informal but precise mathematical style. All theorems are stated clearly and their proofs are succinct. The textbook formed the backbone of Nick's graduate numerical linear algebra course at Oxford. It was one of the best courses I've had the pleasure of taking, and the textbook provided a solid pedagogical structure for it.
This book changed my view of numerical methods. Period. Prof. Trefethen has a gift for explaining difficult things. He is also a very nice person, who responds to e-mails from complete strangers ;-) Read it for the explanation of matrix multiplication and the SVD alone; you will stay for the rest.
While I am not really interested in numerical linear algebra (I still recognize its significance), I found this book to be an enjoyable read. I have also read a similar book by Carl D. Meyer, and both books are excellent, covering mostly similar topics (albeit with some differences). I would say Trefethen focuses more on analyzing the presented algorithms, their complexity, flop counts, and stability, while Meyer is more theoretical and prepares the reader better for more advanced courses such as functional analysis. The book focuses mostly on eigenvalue/QR algorithms, but the reader learns how to analyze general algorithms as well. Therefore, I recommend the book to anyone who is interested not only in numerical linear algebra, but in numerical mathematics in general.
Honestly, a pretty interesting textbook. I learned a lot of fun facts about linear algebra and algorithms that I will probably never have the opportunity to share. Favorite line from the book: “Though the flavors are related, however, a new spice appears in the dish when it comes to computing eigenvalues.”
This book is one of my all-time favorite math/optimization books. Excellent coverage of several fundamental linear algebra topics that do not get enough attention elsewhere.
I thoroughly enjoyed this book, as it has nice simple proofs and also outlines the numerical algorithms used in modern research.
I can cite several examples where I have found this book helpful, but here I will just state one of them. I am a graduate student researching the theoretical physics of black holes, but one day a friend asked me if I knew of any algorithm that could compute the eigenvalues of large matrices, as they were trying to use numerical methods to describe planetary insolation.
I asked them questions like “What is the dimensionality of your matrix?”, “Is the matrix Hermitian?”, and “Does it have a Jordan canonical form?”. In the end, I referred them to Chapter 5, stating: “If you only care about the smallest eigenvalues, go with Rayleigh-Ritz. If you want all eigenvalues, your best bet would be the QR algorithm with Givens rotations.”
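For the "all eigenvalues" route mentioned above, the core idea can be sketched in a few lines of NumPy. This is my own toy illustration, not code from the book: the unshifted QR iteration repeatedly factors a symmetric matrix and reverses the factors, driving it toward a diagonal matrix whose entries are the eigenvalues. (Practical implementations first reduce to Hessenberg/tridiagonal form with Householder or Givens rotations and add shifts for speed.)

```python
import numpy as np

def qr_eigenvalues(A, iters=500):
    """Toy unshifted QR iteration for a real symmetric matrix.

    Each step A <- R @ Q is a similarity transform (same eigenvalues),
    and for symmetric A with distinct eigenvalues the iterates converge
    to a nearly diagonal matrix whose diagonal holds the eigenvalues.
    """
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)   # orthogonal-triangular factorization
        A = R @ Q                # = Q.T @ A @ Q, a similarity transform
    return np.sort(np.diag(A))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(qr_eigenvalues(A))              # toy QR iteration
print(np.sort(np.linalg.eigvalsh(A))) # library reference values
```

The two printed vectors should agree to many digits; the library routine is of course what one would use in practice.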
This book was introduced to me by a really smart professor who taught me numerical methods (he used to work on stellar astrophysics, but now works on quantum information!).
Started a bit badly, thrown in at the deep end. Got better as I progressed through the book. Nice illustration of big O complexity. Nice summaries of iterative methods and their applications. Short chapters, fairly easy to digest.
Some aspects are left too generic, and personally I would have put the appendix at the beginning, as it helps to explain much of the motivation behind the methods in the chapters.
Incredible text on linear algebra specifically for computing applications. It blew my mind to realize that the way we have generally learned linear algebra is, when multiplying a matrix by a vector, to think of the matrix as an operator applied to the vector, but that in computing it often makes more sense to think of the matrix as the object and the vector as the operator. Insights like these make this a great text for learning and refreshing linear algebra for those of us who are interested in applying it for computing (e.g. machine learning).
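The column-oriented view described here can be made concrete in a few lines of NumPy (my own sketch, not from the book): the product A·x is exactly the linear combination of A's columns with the entries of x as coefficients.

```python
import numpy as np

# "Matrix as object, vector as operator": A @ x is not A acting on x,
# but x selecting a linear combination of A's columns.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
x = np.array([10.0, -1.0])

operator_view = A @ x                              # A applied to x
column_view = x[0] * A[:, 0] + x[1] * A[:, 1]      # x combining A's columns

assert np.allclose(operator_view, column_view)
print(column_view)   # [ 8. 26. 44.]
```

The same shift in perspective is what makes algorithms like Gram-Schmidt natural: they operate column by column on the matrix, treating it as the data.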
This is a great introduction to numerical linear algebra, and would even be a decent text to cherry pick from, for those not interested in the numerical side specifically. The presentation assumes some familiarity with basic linear algebra concepts, like matrix operations and decompositions, and it would be best to start this text already being comfortable with matrix manipulations, column/row operations, etc. The presentation generally gives a chapter on theory before discussing the numerical algorithm. For example, QR factorization and eigenvalue decomposition are discussed from a purely mathematical perspective before discussing Gram-Schmidt and the QR algorithm.
Some topics that would be nice to see are left out (e.g., Givens rotations, which only appear as a problem), but overall this gives a broad overview of numerical linear algebra that is complete enough for most engineers and scientists. The algorithms provided are all serial, leaving parallel algorithms for another course. If taking a class where Golub's text is the only required text, I highly recommend using this as a supplement.
As the title suggests, this book is about Numerical Linear Algebra. Aka. "how to make computers do that annoying stuff with matrices, because you will pretty much always forget a negative sign if you do it yourself, and will go insane if you attempt a problem greater than 5x5".
This book starts off strong, and gives good coverage of the basics: forward and backward stability, LU decomposition, QR factorization, etc. However, it seems to become more vague as the topics become more complex; eigenvalue methods in particular could have been covered in more detail.
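The stability material mentioned above lends itself to a quick demonstration. As a hedged sketch (my own construction, not the book's code): classical Gram-Schmidt loses orthogonality badly on an ill-conditioned matrix, while the Householder-based QR behind `np.linalg.qr` stays accurate.

```python
import numpy as np

def classical_gram_schmidt(A):
    """Classical Gram-Schmidt QR. Numerically fragile: the computed Q's
    loss of orthogonality grows with the condition number of A."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # project against original column
            v -= R[i, j] * Q[:, i]
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

rng = np.random.default_rng(0)
# Build an 80x40 matrix with singular values decaying from 1 to 1e-15.
U, _ = np.linalg.qr(rng.standard_normal((80, 80)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = U[:, :40] @ np.diag(np.logspace(0.0, -15.0, 40)) @ V.T

Qc, _ = classical_gram_schmidt(A)
Qh, _ = np.linalg.qr(A)   # Householder-based, backward stable
loss_cgs = np.linalg.norm(Qc.T @ Qc - np.eye(40))
loss_house = np.linalg.norm(Qh.T @ Qh - np.eye(40))
print(loss_cgs, loss_house)   # CGS loss is many orders of magnitude larger
```

Running this shows the classical Gram-Schmidt Q drifting far from orthogonality while the Householder Q's deviation stays near machine precision, which is exactly the kind of contrast the book's stability lectures are about.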
It's not too expensive though, and might be worth picking up if you need a basic introduction to numerical linear algebra. It does contain a brief review of some linear algebra topics, but I'd recommend you be familiar with them before attempting this book.
This was a tough book to read. I would recommend it in addition to Matrix Computations by Golub & Van Loan only if your work is going to exclusively involve NLA. The Matrix Computations book was far better at being thorough yet accessible enough that I was able to implement a lot of what I read. My review of that book is much more detailed.
The strength of this book is in the conceptual discussions. This isn't the book to use to learn the mechanics of the methods described. It's one of my three favorite numerical linear algebra books.