One of the biggest surprises in Robert J. Ackermann’s Belief and Knowledge, a 1972 treatise that combines symbolic logic and epistemological reasoning to examine how and why people believe, occurs on the second page. Long before the current experiments in self-driving cars and trucks and the current use of artificial intelligence and robotics in industry, Ackermann envisaged a mechanical rat equipped with a learning mechanism (p. 2). After introducing this concept, Ackermann immediately touches on the a priori premise that belief must be associated with consciousness (p. 2).
Naturally, any serious work is going to distinguish between types of a given subject in a taxonomy. Ackermann’s taxonomy of beliefs includes: 1) behavioral beliefs (those which do not seem to require conscious effort – p. 5), 2) unconscious beliefs (including habits – p. 6), 3) conscious beliefs (those an agent has explicitly formulated – p. 8), and 4) rational beliefs (those with a belief structure which can be reasonably demonstrated – p. 8).
In discussing the logical problems of understanding belief, Ackermann notes that the descriptive statement that x believes p (p. 14) may adequately express what x believes (p. 14), but if we say something like “John believes Tom is the best golfer in town,” we don’t know for certain whether we are to be concerned with John or Tom (p. 15). Which belief do we test, the possible “fact” that Tom might be the best golfer in town or merely John’s belief that Tom is such a golfer? Behavioralists would treat the portion of the statement about Tom as one large hyphenated predicate of John (pp. 15, 18), but that doesn’t really help one check the factual certainty of John’s belief. As a result, this classic behavioralist approach fails because of human variability (p. 19).
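The contrast can be made explicit with a minimal formalization (my own illustration, using a generic belief operator rather than Ackermann’s notation). On one reading, John stands in a belief relation to a proposition about Tom, B_john(Tom is the best golfer in town), and we can separately ask whether that embedded proposition is true. On the behavioralist reading, the embedded clause is absorbed into a single hyphenated predicate of John alone, believes-Tom-is-the-best-golfer-in-town(John), which reports only a disposition of John and leaves no separable claim about Tom for us to verify.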
Next, Ackermann demonstrates how symbolic logic can use: a tilde (~) to represent negation, an upward-pointing caret (∧) to represent the conjunction of ideas, a downward-pointing caret (∨) to represent the disjunction of ideas, and a sideways U (⊃) to represent the conditional (pp. 23-24). After defining the symbols, he runs the reader through sets of conditions such as “Belief Augmentation” which might be helpful in determining whether one’s belief can be verified, only to come up against the lottery paradox (pp. 39-40). Simply put, one may believe that all numbers in an honest lottery have an equal chance of winning and an equal chance of losing. If one uses augmentation (conjoining one’s individually rational beliefs), one encounters inconsistency. Earlier, behavioralists suggested that rational belief would not contain inconsistency, but the lottery paradox demonstrates the flaw in their methodology. While Ackermann admits that one can avoid the paradox by allowing some beliefs to be partial (held with less than full confidence), this wouldn’t satisfy the empirical dogmatists (p. 41). In addition to the “lottery paradox,” Ackermann later addresses the impact of discovery on both knowing and belief: “Although an actual agent cannot have an infinite number of beliefs, we call him rational if he adjusts his beliefs on discovery of an inconsistency.” (p. 78)
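To make the inconsistency concrete, here is a worked sketch of the paradox (my own illustration, using a hypothetical 1,000-ticket lottery rather than Ackermann’s own example) in the connectives just defined, letting W_i stand for “ticket i wins.” For each ticket, it is rational to believe ~W_i, since each ticket is overwhelmingly likely to lose. Augmentation then licenses believing the conjunction ~W_1 ∧ ~W_2 ∧ … ∧ ~W_1000, i.e., that no ticket wins. Yet one also rationally believes the disjunction W_1 ∨ W_2 ∨ … ∨ W_1000, since an honest lottery guarantees a winner. The two beliefs together are contradictory, even though every step along the way looked rational.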
Similar problems exist in a dogmatic empiricist epistemology. The dogmatists presume the superiority of science as a means of knowing and assume that knowing can be ascribed to sensory input and/or sensory perception (p. 57). Yet one cannot account for a child’s ability to learn a language solely in terms of mimicking sounds and deducing a grammatical structure (again – p. 57). So, without some room for innate structure (genetically encoded?) or intuitive awareness, the empiricist paradigm falls apart (p. 58). As a result, Ackermann proclaims: “…philosophy is in no position currently to provide a satisfactory account of the origins of knowledge…” (p. 59)
In addition, I was intrigued by Ackermann’s observation that we cannot (as a rule) equate knowing that something is true with knowing how something works, especially with regard to belief (p. 61, noting that German uses two words, kennen and wissen, to make this distinction). He asserts that some of our “knowing that” statements may be so trivial (or obvious) that we don’t need to say, “I know that the sun is shining.” To say, “The sun is shining” would be just as valid. Yet he addresses circumstances where seemingly trivial knowledge is assailed by possible doubt. He calls such doubt “metaphysical doubt”: “A metaphysical doubt can be worked up, by contrast to evidence in special cases, for any conceivable knowledge claim.” (p. 65) Hence, “knowing that” indicates one is prepared to answer non-trivial objections to one’s knowledge claims and becomes equivalent to believing that x is true (although Ackermann uses p for such a case – p. 67).
One clever example used in Belief and Knowledge to demonstrate the difference between knowing and believing is that of a newspaper reporter who knew that five people died in a fire but reported that between five and twenty died. Since it is true that five people died, the report was “technically” true, but it was misleading because twenty people dying sounded more dramatic (p. 72). If we test the truth claim of the statement, we must admit that the reporter was correct, but if her readers “believe” that up to twenty died, the veracity of their “belief claim” cannot be certified.
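Put in bare logical terms (my own gloss, not Ackermann’s wording): letting n be the number of deaths, the report asserts only that 5 ≤ n ≤ 20, which is true when n = 5; but a reader who comes away believing that n is at or near 20 holds a belief the evidence does not support, so a true report can still generate unfounded beliefs.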
Ackermann defines the “Ideal Analysis” of a knowledge claim as: “One can meet all possible non-metaphysical objections to the claim.” (p. 75) However, since he admits that one’s “knowing” can change over time, he addresses a “Pragmatic Analysis” for contexts where the relevant objections only show up over time (as a matter of discovery or experience – p. 77). He then observes that, because of “Fallibilism” (the tendency for counterexamples to appear over time), “Pragmatic Analysis” can work in actual life while “Ideal Analysis” will have trouble dealing with change (p. 79). Indeed, if (according to “Ideal Analysis”) one believes x and subsequent evidence demonstrates x to be false, the new evidence causes the agent to wonder why he ever believed x in the first place (p. 86). “Pragmatic Analysis” accounts for this (demonstrated again on p. 107). Although he points out that the two types of analysis are barely distinguishable in mathematics (p. 110) and in most scientific experiments, such as the statistical considerations of classical physics (p. 114), this cannot be true of everyday considerations. “In matters of everyday fact, and perhaps in many areas of science, the Pragmatic Analysis seems to fit the facts while the Ideal Analysis apparently entails a form of skepticism.” (pp. 115-116)
Toward the end of the book, Ackermann makes a distinction between “Basic” knowledge and “non-basic” knowledge. “Basic knowledge is defined as knowledge not obtained by inference; that is, knowledge that is in some sense directly evident.” (p. 92) “Non-basic knowledge” would then be knowledge obtained by inference. He attempts to build a case for “Ideal Analysis” based on non-accidental inferential steps whose truth is guaranteed by direct evidence (p. 95) and observes that such inference requires a causal connection between the steps. But Ackermann admits that this will not necessarily work because one step in the causal chain may mask another step, so that there is a breakdown in the logic (p. 96). “In practice, then, the scientist does what he can to see that his samples aren’t biased, although he cannot know that they are not biased at the time he makes an inductive inference.” (p. 99)
Despite the hard work and thoroughness demonstrated in Belief and Knowledge, Ackermann concludes with a predilection toward the “Pragmatic Analysis” but offers no definitive conclusions. Reading this book was good for me, but it was not as relevant to my own studies as I had anticipated.