The tables of problem types are nice. They're also known mathematical results; you don't have to pay money to get them. (That may not have been true when this book was first published.)
NP-Completeness restates some properties of elementary functions, in a way that makes them seem mysterious. It turns out that
1) A polynomial = {the same polynomial wearing a giant, obtrusive disguise}, and can be transformed into another polynomial.
2) If I put an obtrusive disguise on an exponential function, it won't magically turn into a fixed-degree polynomial.
3) But let's say it did. You know, just once. Like, one time I have e^x or something worse, and it just *broop* turns into a fixed-order polynomial.
Then I could convert ANY exponential (or worse) function into a fixed-order polynomial. Wouldn't that be amazing?
It would. But an exponential function by definition is not reducible to a finite number of algebraic terms. Passed through a Turing machine and into abstract notation, such properties of functions become difficult to track.
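A minimal sketch of the arithmetic behind points 1) and 2), in my own notation rather than the book's: let f be the "disguise", a polynomial-bounded transformation that does not shrink the input. Composed with a polynomial cost T it yields another polynomial; wrapped around an exponential it leaves an exponential.
\[
T(n) = O(n^{a}),\quad f(n) = O(n^{b}) \;\Longrightarrow\; T(f(n)) = O\!\left(n^{ab}\right),
\]
\[
f(n) \ge n \;\Longrightarrow\; 2^{f(n)} \ge 2^{n} \neq O\!\left(n^{k}\right)\ \text{for any fixed } k.
\]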
Instances, Particular Solutions, and Logical Complement:
Any way of "asking" for an answer is considered an individual search.
The theory treats the following indistinguishably:
- individual searches,
- composed problems,
- the total number of searches required to find a solution,
- the set of all possible solutions.
Each is spoken of as a single "instance". The authors also do not distinguish nested loops. The resulting confusions appear to be fundamental to the theory of NP-Completeness itself.
Example:
You have a Traveling Salesman problem, and want to know whether there is a path of less than 1000 miles that is a solution. You try one possible answer, and
- a positive result ends the search (This is a solution! AND the length is less than 1000!)
- a negative result fails to end the search, or to reduce the number of remaining solutions which must be checked.
The authors then state that such a problem fails to obey the law of logical complement. This is absurd. The complement of a particular solution is {all other possible solutions}.
They expected the negative to read, "It is not true that there is a solution". They did not ask this question. The question was: "Is it true that there is (at least one) solution to this problem, AND the present case is (one such) solution?"
The logical complement is then: "It is not true that [there is a solution to this problem AND this particular case is one such solution]," which is the result they got. We must negate the entire statement, or we have made a mistake.
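In propositional form (my symbols, not the authors'): let A stand for "there is a solution to this problem" and B for "the present case is one such solution". The question asked is A AND B, and its complement is
\[
\neg(A \wedge B) \;=\; \neg A \vee \neg B .
\]
A single failed check establishes only not-B, which settles not-(A AND B) for that case but says nothing about not-A on its own.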
If I want to write this question in code, how am I to determine whether or not there is a solution?
I will read from the memory location where "this particular case is a solution" is stored. There is only one question here, "is this particular case a solution?". Whenever that answer is "yes", then "there is a solution" also becomes YES.
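Here is a minimal sketch of that in Python (my own toy code, nothing from the book): the only test the program ever performs is "is this particular case a solution?", and the general answer is simply the first YES that test returns.

    from itertools import permutations

    def tour_length(tour, dist):
        # Total length of the closed tour, using a distance table dist[a][b].
        return sum(dist[a][b] for a, b in zip(tour, tour[1:] + (tour[0],)))

    def has_short_tour(cities, dist, bound=1000):
        # Brute-force decision: is there ANY tour shorter than `bound` miles?
        there_is_a_solution = False              # the general answer we care about
        for tour in permutations(cities):        # each candidate is one "particular case"
            if tour_length(tour, dist) < bound:  # the only question ever asked
                there_is_a_solution = True       # one YES here settles the whole problem
                break                            # a positive result ends the search
            # a negative result only tells us to keep looking
        return there_is_a_solution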
This is the problem which we set out to solve. If I make YOU go into a room and check every file in a filing cabinet for my watch, and in addition require you to come out and say, "it's not in the first one," "it's not in the second...," etc., it is no mystery that only one of us is searching, and we will finish whenever you happen upon the watch and no sooner.
With the relationship between particular solution and general solution ignored, the authors carry on confusing the two and stumbling over the consequences.
If the terms I have chosen to tackle a problem make it hard for me to identify the properties of that system, they are poorly chosen. NP-Completeness is an agglomeration of poor choices.
The New Tools:
A good abstraction provides an easier way to manipulate information. A seemingly intractable problem might prove solvable by rearranging the algebraic terms to better match the problem, or by introducing artificial constraints. What does NP-Completeness provide?
The book is plain: NP-Completeness provides no new rules for determining the computational time of a given problem. Their results are identical to those of the algebra. You will still have to either
a) work out the problem yourself, and determine the resulting order of computation, or
b) check whether someone else has solved this same (class of) problem.
Exponentials remain exponentials, polynomials remain polynomials, and exponentials stubbornly refuse to reduce to them according to any a priori principle.
Stop and consider carefully. If the theory had given us any other result, it would have violated the rules of arithmetic and been invalid. If this theory succeeded in making x^2 the same order as 2^x, or gave some rule for converting one to the other independent of the structure of the problem in question, it would not be math.
The section of solved problems is interesting, and if you know graph theory, very useful. Still, for me the diagrams in Cormen's Algorithms are just all-around better.
After spending months wrangling with this text, I believe the authors are simply wrong. The categories do not provide an intuitive way of understanding different problem types. The system does not improve on the expression of the results in algebraic, geometric, or graphical terms. Its formulations are clunky, it offers limited analytic scope, and its methods are trivial; some are invalid. This circuitous and error-cluttered route is sufficient to construct a magical set with indistinct boundaries, one which appears, in theory, able to violate the algebra, but does not in fact. This result ought not to be a surprise; it is so by construction.
The authors repeatedly acknowledge their lack of rules for detecting problem structure. Within their framework, they declare, most problems look alike and are difficult to group.
It is not difficult to get a good idea of the time complexity of a given problem: Get out a sheet of paper, and try to solve the damn problem. Algebraic terms which can be collected are distinguishable from those which cannot.
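For instance (my illustration, not the book's): on paper, comparing every pair among n cities takes n(n-1)/2 checks, terms that collect into a fixed-degree polynomial, while enumerating every distinct tour takes (n-1)!/2 checks, which no amount of collecting turns into one:
\[
\binom{n}{2} = \frac{n(n-1)}{2} = O\!\left(n^{2}\right), \qquad \frac{(n-1)!}{2} \neq O\!\left(n^{k}\right)\ \text{for any fixed } k.
\]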
Given a real problem, the knowledge that an abstracted form of that problem is intractable is a bad answer. Efficient models make simplifying assumptions. This book provides no such tools for the practical programmer faced with a seemingly intractable problem, while assuring her the book is designed for just this situation.
Graph Theory:
I think the book provides a good exposition of the elements of graph theory, as they relate to search problems. Examining the mappings from one problem into another is instructive.
However, there are easier-to-read books, which present the graphs and transforms rather than porting them to a new theoretical structure.
In Brief:
The runtime of any algorithm will be a countable number of instructions; generally, it will be a function of the input size. You may express that function any way you wish. If you are comfortable manipulating mathematical terms in logic symbols, then the format of this book is accessible. But be forewarned that the authors sometimes get lost in their own logical syntax. A single nested loop throws them.
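Counting a single nested loop, for instance, is not hard (a toy sketch of my own, not the authors'):

    def count_inner_steps(n):
        # How many times does the innermost statement of a nested loop run?
        steps = 0
        for i in range(n):        # outer loop: n passes
            for j in range(i):    # inner loop: i passes on the i-th outer pass
                steps += 1        # the instruction we are counting
        return steps              # 0 + 1 + ... + (n-1) = n*(n-1)/2, a fixed-degree polynomial

    assert count_inner_steps(10) == 10 * 9 // 2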
To be of lasting use, the system requires correction and clarification.
This book helped inspire me to make my own algebra. Procedure: Construct an artifice that masks the terms of the problem, and then bask in the mystery of the solutions I have hidden thereby.