
Reliable Reasoning: Induction and Statistical Learning Theory

In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni—a philosopher and an engineer—argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors—a central topic in SLT.
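
For concreteness (this is the standard SLT formulation, stated here as background rather than quoted from the book), the "expected percentage of errors" of a rule $h$ under the unknown data distribution $P$ is its risk:

$$
R(h) = \Pr_{(x,y)\sim P}\bigl[\,h(x) \neq y\,\bigr],
$$

and a method is reliable to the extent that it tends to select rules with low risk.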

After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an admirably clear account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning, describing fundamental results about the power and limits of those methods in terms of the VC dimension of the hypotheses being considered. The VC dimension is found to be superior to a related measure proposed by Karl Popper, and is shown not to correspond exactly to ordinary notions of simplicity. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and consider possible new models of human reasoning suggested by developments in SLT.
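
As a concrete illustration of shattering, the notion behind the VC dimension, here is a minimal sketch (mine, not from the book; the function names are chosen for illustration). It checks whether 1-D threshold classifiers can realize every possible labeling of a finite point set. One point can be shattered but two cannot, so the VC dimension of threshold classifiers is 1.

```python
# Minimal sketch: checking shattering for the class of 1-D threshold
# classifiers h_t(x) = 1 if x >= t, else 0. A set of points is "shattered"
# if every +/- labeling of it is realized by some classifier in the class;
# the VC dimension is the size of the largest shatterable set.
from itertools import product

def threshold_labels(points, t):
    """Labels assigned to `points` by the threshold classifier h_t."""
    return tuple(1 if x >= t else 0 for x in points)

def shattered_by_thresholds(points):
    """True if every labeling of `points` is realized by some threshold t."""
    xs = sorted(points)
    # It suffices to try one threshold below, between, and above the points.
    candidates = ([xs[0] - 1.0]
                  + [(a + b) / 2 for a, b in zip(xs, xs[1:])]
                  + [xs[-1] + 1.0])
    realizable = {threshold_labels(points, t) for t in candidates}
    return all(labels in realizable
               for labels in product([0, 1], repeat=len(points)))

print(shattered_by_thresholds([0.0]))       # True: one point can be shattered
print(shattered_by_thresholds([0.0, 1.0]))  # False: no t gives labels (1, 0),
                                            # so the VC dimension is 1
```

Richer hypothesis classes can be checked the same way by swapping in their own candidate parameters; intervals 1[a <= x <= b], for instance, can shatter two points but not three, giving VC dimension 2.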

108 pages, Hardcover

First published March 30, 2007


About the author

Gilbert Harman

38 books · 8 followers

Ratings & Reviews

Community Reviews

5 stars: 2 (11%)
4 stars: 4 (22%)
3 stars: 7 (38%)
2 stars: 4 (22%)
1 star: 1 (5%)

Laura · 37 reviews · August 10, 2008
Two stars isn't really fair to this book, since it is trying to do something that no other book tries to do. It is written by a computer scientist and a philosopher of mind-- and they try to bring the ideas of both areas together into one small book that they use for a class they co-teach at Princeton.

My main complaint with the book is how repetitive it is. They say things over and over, but not really in different ways, and the rewordings never clarify the point. Chapter 1 was especially bad (though others got just as bad at times). I suppose they are trying to correct long-standing misconceptions in their fields-- and that cannot be easy. Still, that makes it all the more crucial to be clear.

I will credit them for their description of VC dimension. I understood it pretty clearly at one point-- not with a first reading but after I had concentrated for a bit on what they were saying. Apparently VC dimension and shattering are often very difficult concepts to grasp, but I felt that I could understand the explanation in the book fairly well.

David MacIver · 13 reviews · 50 followers · April 3, 2016
This book was not at all the book I thought it was. I was expecting it to be much more mathematically heavy. I'm glad I made that mistake, though, because it was a really nice read.

It's more or less "Statistical Learning Theory for Epistemologists". It provides a concise overview of the basics of statistical learning theory that is very light on details but high on insight, with a discussion of its relevance to epistemology and psychology.
