After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of statistical learning theory (SLT) and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning, describing fundamental results about the power and limits of those methods in terms of the VC dimension of the hypotheses being considered. They argue that the VC dimension is superior to a related measure proposed by Karl Popper, and show that it does not correspond exactly to ordinary notions of simplicity. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and suggest possible new models of human reasoning inspired by developments in SLT.
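To make the VC dimension concrete (a minimal illustrative sketch, not drawn from the book): a hypothesis class is said to shatter a set of points if it can realize every possible labeling of them, and the VC dimension is the size of the largest set it can shatter. The Python sketch below, using a hypothetical helper `shatters` and a small grid of threshold classifiers on the real line, shows that thresholds shatter any single point but no pair of points, so their VC dimension is 1.

```python
def shatters(points, hypotheses):
    """A hypothesis class shatters a point set if, for each of the
    2^n possible labelings, some hypothesis produces that labeling."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

# Threshold classifiers on the real line: h_a(x) = 1 iff x >= a,
# sampled on a small grid (enough to separate the test points below).
thresholds = [lambda x, a=a: int(x >= a) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]

print(shatters([0.4], thresholds))       # True  -> VC dimension >= 1
print(shatters([0.3, 0.7], thresholds))  # False -> labeling (1, 0) is unreachable
# No pair of points can be shattered by thresholds, so their VC dimension is 1.
```

In SLT this quantity does real work: a finite VC dimension is what guarantees that a method's performance on sampled data converges uniformly to its true performance, which is the sense in which the book ties it to the reliability of inductive reasoning.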
108 pages, Hardcover
First published March 30, 2007