R. Scott Bakker's Blog
March 23, 2013
Metaphilosophical Reflections III: The Skeptical Dialectic
“Human reason is a two-edged and dangerous sword.”
– Montaigne, “Of Presumption”
—————————————————–
This is the third in a series of guest-blogger posts by me, Roger Eichorn. The first two posts can be found here and here.
I’m also a would-be fantasy author. The first three chapters of my novel, The House of Yesteryear, can be found here. I’ve also recently uploaded the first of what will be two ‘Bonus Scenes’ from later in the book. You can find it here, if you’re into that sort of thing.
—————————————————–
In my previous post, I argued that skepticism and philosophy are inextricably entwined. Following Hegel, Michael Forster has made a similar argument, and I’ve benefited a great deal (and cribbed) from his discussion. But whereas Forster stops with the claim that an engagement (direct or indirect) with skepticism is a defining feature of philosophy, I’ve gone farther and tried to develop a conceptual framework for understanding why this is the case. My explanation turns on the notion of presuppositions. The view, in short, is this:
1. Intellectual inquiry can make determinate progress only against a background of unquestioned fundamental premises, propositions, or assumptions (what I call ‘presuppositions’).
2. These fundamental presuppositions provide contexts for inquiry; they are like boundary-markers or the rules of a game, in that overstepping or questioning them entails ceasing to play the ‘discursive game’ they enclose or constitute.
3. Calling into question context-constitutive presuppositions involves a kind of skepticism.
4. Stepping outside of a presupposition context entails ‘going meta,’ i.e., it entails transitioning into a more abstract domain of inquiry.
5. Given (3) and (4), it is skepticism that pushes us to ever-greater levels of discursive–epistemological abstraction.
6. In ‘going meta,’ we end up—either immediately or after some intermediary steps—within the domain of philosophy.
7. Given (5) and (6), it is skepticism that leads us to philosophy, i.e., philosophy begins in skepticism.
8. There is no uncontroversial rationale that is both global and principled for forestalling the possibility of ‘going meta,’ i.e., of calling into question any presupposition. (Principled rationales are always context-specific or ‘local.’ The claim I’m making here, then, is that there are no principled meta-contextual, i.e., global, rationales for forestalling the questioning of a presupposition or set of presuppositions.)
9. Given (8), according to which any presupposition can be called into question, and (6), according to which philosophy is the domain of inquiry one occupies (sooner or later) in calling presuppositions into question, it follows that philosophy as such possesses no definitive presupposition-set of its own.
10. Given (1) and (9), philosophy can make no determinate progress.
11. Given (10), philosophy ends in skepticism.
This argument can, of course, be challenged on any number of fronts. I have not, for instance, made a sufficient case for (1). I touched on it in my previous post (where I mentioned Stalnaker and Wittgenstein), but I did not attempt to defend the view in any detail. Nor, in the interests of space, am I going to do so here. It should be enough for now to note (1)’s extreme plausibility. If we visualize intellectual progress as involving forward movement, and the act of questioning presuppositions as involving backward movement, then it’s easy to see that we can make progress only if we’re not calling presuppositions into question: we have to stop moving backward before we can move forward. Given (8)—which is itself a plausible view, though with its own complications—these presuppositions-of-inquiry must remain unquestioned, either in the sense of (a) never having been thematized or (b) being set aside, “apart from the route travelled by enquiry” (Wittgenstein, On Certainty, §88), whether (i) they are recognized as questionable though necessarily unquestioned (just as the rules of a game are questionable, but cannot be questioned from within the game itself) or (ii) they are (mis)taken as lying beyond all question (as in the form of indubitable first principles, the supposedly self-evident, etc.).
In this post, I want to elaborate—and with any luck buttress—my case for (3), (4), and (6). I want, in other words, to get clearer on the dialectical relations among presuppositions, skepticism, and philosophy.
—————————————————–
In earlier posts, I introduced the idea of ‘common life,’ which I’m conceptualizing here as the general, usually invisible presupposition context that frames our everyday sayings and doings. Common life is our twofold inheritance as beings who are both embodied in nature and embedded in a society; it is our natural medium, the subcognitive water for us cognitive fishes. When we are, as Hubert Dreyfus or Richard Rorty (influenced by Heidegger and pragmatism) would put it, smoothly and effortlessly ‘coping with the world,’ the fact of common life’s inherent questionability—its possible contingency—never presents itself. At such times, common life is (to borrow some Heideggerian terminology) ‘inconspicuous’ (see: Being and Time, §§15–16). Common life becomes ‘conspicuous’ only as a result of disruptions in the orderly flow of our everyday lives. Such disruptions can be relatively minor (what Heidegger called the mode of ‘obtrusiveness’). But they can also be more significant (what Heidegger called the mode of ‘obstinacy’). The deeper the disruption, the more the presuppositional structure of common life comes into view. The more the presuppositional structure of common life comes into view, the higher its ‘index of questionability’ climbs (cf. Luciano Floridi, Scepticism and the Foundation of Epistemology, Ch. 4).
Initially, then, we occupy the standpoint of common life as what I call ‘everyday dogmatists.’ This means that we acquiesce, usually unconsciously, in everyday dogmatisms: we (mis)take (again, usually only implicitly) the presuppositions of common life for known truths.
Michel de Montaigne wrote that “[p]resumption is our natural and original malady” (Apology for Raymond Sebond). Everyday dogmatism is, in his terms, ‘everyday presumption.’ In her book on Montaigne, Ann Hartle characterizes everyday presumption as “the unreflective milieu of prephilosophical certitude, the sea of opinion in which we are immersed” (Montaigne: Accidental Philosopher, p. 106). Human beings are, as I like to put it, natural-born dogmatists.
Common life provides us not only with first-order beliefs, but also with more or less established means of adjudicating many, even most, sorts of dispute. For instance, authoritative scriptures belong to the presupposition-framework of the common life into which many people are born. For such people, appeal to scripture is capable of settling certain kinds of dispute: in these cases, common life itself provides the resources that allow for the resolution of conflicts that arise within common life.
An initial challenge to an everyday dogmatism is issued. Here we encounter the most rudimentary form of skepticism. The skeptical challenge gives rise to a state of dissatisfaction: there is a felt need to resolve the conflict, to ‘refute’ the skeptic and restore our earlier confidence in the dogmatisms of common life. In many cases of such skeptical challenges, the dissatisfaction in question can be resolved simply by drawing more water from the well of everyday dogmatisms. In more extreme cases, the skeptical challenges can be resolved only by appealing to the context-constitutive presuppositions of common life. Either way, what we have is a kind of circular dialectic of skepticism and dogmatism.
In time, though, the skeptical challenges grow more sophisticated. They reach their apogee when they call into question not just intracontextual everyday dogmatisms, nor just one or another context-constitutive presupposition of common life, but rather common life as a whole. When that happens, it becomes clear that no appeal to everyday dogmatisms can satisfactorily answer the skeptical challenge, for the skeptical challenge now calls into question the entire domain of everyday dogmatisms.
Consider a simple case of perceptual skepticism. You see a tree. You think you know it’s a tree, precisely because you can see it (and you know what trees are, what they look like, etc.). This is an entirely acceptable everyday judgment, accompanied by an entirely acceptable everyday justification. Then a skeptic comes along and asks you how you know that what you think you see is actually a tree. At this point, no dissatisfaction arises, since you have to hand your everyday justification. But the skeptic presses the point: “How do you know it’s not an extraordinarily lifelike papier-mâché tree?” This might be enough to give rise to dissatisfaction; if not, then imagine that the skeptic has some further story to tell about how the city in which you both live has funded an art project that involves the creation of amazingly lifelike papier-mâché trees. Now you’re prepared to call into question your belief that it’s a tree (along with the sufficiency of your everyday justification). What do you do now? Obviously, you walk up to the tree and inspect it. The skeptic has hardly deprived you of all your everyday means of settling disputes. You poke the tree, peel back its bark, pluck off a leaf, and conclude that, clearly, this is not a papier-mâché tree. But what do you do when the skeptic smiles and asks, “Fair enough. But how do you know you’re not dreaming?”
Now, most of us would, most of the time, simply dismiss this question as nonsense. We’d say, “‘O, rubbish!’ to someone who wanted to make objections to the propositions that are beyond doubt. That is, not reply to him but admonish him” (Wittgenstein, On Certainty, §495). But the problem of justification remains. Most of us are going to believe that we’re justified in claiming to know that we’re not dreaming (even more so that we’re not dreaming all the time) and that we therefore know all sorts of things about the world as a result of our present and past experiences. Nothing is easier, in the course of our everyday lives, than to dismiss this sort of worry. But if it nags at us—if it persists as a source of dissatisfaction—then we’re going to want to find an answer to the skeptic. But, ex hypothesi, we’ve accepted the fact that we cannot answer the skeptical challenge by appealing to our experience (in the broader case: to common life or its presuppositions), since the skeptical challenge has called into question the veridicality of our experience in toto (in the broader case: the veridicality of common life and its presuppositions in toto). What do we do?
Bearing in mind that this whole process is animated by a commitment to truth and rationality (by what Nietzsche called our ‘intellectual conscience’), without which our capacity for epistemico-existential crises would be severely limited, there seems only one path open to us: that is, to repudiate the inherent authority of common life in favor of what I call autonomous reason.
I borrow the phrase ‘autonomous reason’ from Donald Livingston’s book on Hume (Hume’s Philosophy of Common Life). Livingston claims that, for Hume, philosophy is committed to autonomous reason, according to which “it is philosophically irrational to accept any standard, principle, custom, or tradition of common life unless it has withstood the fires of critical philosophical reflection” (23). We can quibble about whether or not this applies to every philosopher or even every philosophical tradition; but that’s beside the point if the claim is correct in the main—and I think it is. Moreover, I think it’s not just superficially correct (‘in the main’), but that it illuminates a deep and important feature of philosophy that goes back to its very earliest manifestations.
Philosophy is, at least initially, predicated on skepticism regarding common life. Thus, it seeks autonomy. The philosophy–common life distinction can be understood in terms of the familiar dichotomy between reason and tradition. Reason’s autonomy from tradition is often taken to be a necessary feature of any properly critical enterprise. As Kenneth Westphal has noted in referring to a “dichotomy, pervasive since the Enlightenment, that reason and tradition are distinct and independent resources”: “because tradition is a social phenomenon, reason must be an independent, individualistic phenomenon. Otherwise it could not assess or critique tradition, because criticizing tradition requires an independent, ‘external’ standpoint and standards” (Hegel’s Epistemology, p. 77). Westphal rejects this view, but it is common enough. Nicholas Wolterstorff, for example, gives voice to it when he writes, “Traditions are still a source of benightedness, chicanery, hostility, and oppression… In this situation, examining our traditions remains for many of us a deep obligation, and for all of us together, a desperate need” (John Locke and the Ethics of Belief, p. 246). Enlightened reason, in other words, must be able to rise above the soup of prejudices that is common life; otherwise, it will be unable to establish the distance needed to criticize those traditions.
These metatheoretical concerns are usually articulated without any reference to skepticism. Even when it is separated from the Kantian project, however, critique is best understood as a response to skepticism, an attempt to forge a middle way between skepticism and dogmatism. The repudiation of the inherent authority of common life and the subsequent commitment to autonomous reason is predicated on a kind of skepticism. And this is not, as is commonly claimed or implied, unique (whether as a whole or just in character) to the modern period. Rather, this kind of skepticism was a precondition of the emergence of philosophical thought itself, 2,500 years ago. The motto for this transition is vom Mythos zum Logos—from myth to reason.
—————————————————–
In his fascinating book The Discovery of the Mind—a study of conceptions of the self in archaic and ancient Greece—Bruno Snell refers to the emergence of a “social scepticism” that opened up a space within which individuals could call into question the epistemic and practical authority of the traditions into which they’d been born. Given this sort of social skepticism, according to Snell, “[r]eality is no longer something that is simply given. The meaningful no longer impresses itself as an incontrovertible fact, and appearances have ceased to reveal their significance directly to man. All this really means that myth has come to an end” (p. 24). The repudiation of myth was, on my picture, a repudiation by philosophers of common life, of the world of their fathers. Malcolm Schofield has written that “[t]he transition from myths to philosophy… entails, and is the product of, a change that is political, social and religious rather than sheerly intellectual, away from the closed traditional society… and toward an open society in which the values of the past become relatively unimportant and radically fresh opinions can be formed both of the community itself and of its expanding environment… It is this kind of change that took place in Greece between the ninth and sixth centuries B.C.” (The Presocratic Philosophers, pp. 73–4).
Going beyond the Eurocentrism of Snell and Schofield, Karl Jaspers developed the idea of what he calls ‘the Axial Age,’ a period of sudden social, political, and philosophical enlightenment that, he claimed, occurred nearly simultaneously and yet independently in Greece (with the Presocratics), India (with the Buddha), and China (with Confucianism and Daoism). In this period, Jaspers writes, “hitherto unconsciously accepted ideas, customs and conditions were subjected to examination, questioned and liquidated. Everything was swept into the vortex. In so far as the traditional substance still possessed vitality and reality, its manifestations were clarified and thereby transmuted” (The Origin and Goal of History, p. 2). As though to confirm Jaspers’s theory—though writing decades earlier—S. Radhakrishnan tells us that
[t]he age of the Buddha represents the great springtide of philosophical spirit in India. The progress of philosophy is generally due to a powerful attack on a historical tradition when men feel themselves compelled to go back on their steps and raise once more the fundamental questions which their fathers had disposed of by the older schemes. The revolt of Buddhism and Jainism… finally exploded the method of dogmatism and helped to bring about a critical point of view… Buddhism served as a cathartic in clearing the mind of the cramping effects of ancient obstructions. Scepticism, when it is honest, helps to reorganise belief. (Indian Philosophy, Vol. 2, p. 18)
The notion of a clear-cut transition ‘from myth to reason’ is deeply entrenched in our cultural narrative, yet it is clearly problematic if understood in an overly simplistic way. Just as Aristotle was not the first person to use logic, so the Presocratic philosophers were not the first Greeks to use reason or to think reasonably. Still, I think it is clear that something important occurred during the Axial Age. The transition itself may not have been unprecedented, as some commentators want to claim, but its effects were, for (it seems to me) we are still feeling them today. The fundamental transition, I want to argue, is best understood not as being from myth to reason, but as being from common life to autonomous reason.
The ability of reasoning to call into question—to radically disrupt—common life was recognized very early. Plato worries about it in the Republic:
We all have strongly held beliefs, I take it, going back to our childhood [i.e., our pretheoretical certainties], about things which are just and things which are fine and beautiful… When someone… encounters the question ‘What is the beautiful?’, and gives the answer he used to hear from the lawgiver [i.e., from tradition], and argument shows it to be incorrect, what happens to him? He may have many of his answers refuted, in many different ways, and be reduced to thinking that the beautiful is no more beautiful or fine than it is ugly or shameful. The same with ‘just’, ‘good’, and the things he used to have more respect for. At the end of this, what do you think his attitude to these strongly held beliefs will be, when it comes to respect for them and obedience to their authority?… I imagine he’ll be thought to have changed from a law-abiding citizen into a criminal. (538c–539a)
We find the same recognition of the cultural–existential (as opposed to merely epistemological) threat of skepticism in Hegel.
The need to understand logic in a deeper sense than that of the science of mere formal thinking is prompted by the interest we take in religion, the state, the law and ethical life. In earlier times, people had no misgivings about thought… But while engaging in thinking… it turned out that the highest relationships of life are thereby compromised. Through thinking, the positive state of affairs was deprived of its power… Thus, for example, the Greek philosophers opposed the old religion and destroyed representations of it… In this way, thinking made its mark on actuality and had the most awe-inspiring effect. People thus became aware of the power of thinking and started to examine more closely its pretensions. They professed to finding out that it claimed too much and could not achieve what it undertook. Instead of coming to understand the essence of God, nature and spirit and in general the truth, thinking had overthrown the state and religion. (Encyclopedia Logic, §20)
The transition to autonomous reason, then, is in many respects a desperate gamble, an attempt to salvage by way of reason what reason itself has taken away from us, namely, the certainty and stability of common life.
—————————————————–
Thus, the move to autonomous reason gives rise to a new kind of dogmatism, not the simple, inchoate or prereflective dogmatisms of common life, but sophisticated philosophical dogmatisms. The hope of most developers of philosophical dogmatisms is to refute the skeptical challenges that led to the repudiation of common life, to restore common life on a more solid foundation. Unfortunately for philosophical dogmatists, skepticism does not obediently remain at the level of common life, waiting to be overthrown; rather, it follows them up to the level of autonomous reason, continuing to attack them where they live.
As at the level of common life, the initial response to skeptical challenges to philosophical dogmas will involve a circular return to those same philosophical dogmas, hoping to marshal more resources with which to overthrow the skeptic. But, again as at the level of common life, eventually the skeptical challenges will become sophisticated enough to call into question the entire epistemological project. The result is metaepistemological skepticism. Its most conceptually powerful, and historically influential, expression is found in the Agrippan Trilemma, which I briefly discussed in the previous post. The fundamental challenge of the Trilemma at the epistemological level is this: How do you justify that which makes justification possible? Just as the skeptical challenges at the level of common life ended up calling into question the presupposition context of common life as a whole, likewise skeptical challenges at the level of autonomous reason end up calling into question the presupposition context of autonomous reason as a whole. The question, of course, is where this leaves us.
I’ll take up that question, among others, in my next post.


March 20, 2013
Metaphilosophical Reflections II: The Entwinement of Skepticism and Philosophy
“… skepticism itself is in its inmost heart at one with every true philosophy.”
– Hegel, On the Relationship of Skepticism to Philosophy
“Whoever is believed in his presuppositions, he is our master and our God; he will plant his foundations so broad and easy that by them he will be able to raise us, if he wants, up to the clouds.”
– Montaigne, Apology for Raymond Sebond
—————————————————–
This is the second in a series of guest-blogger posts by me, Roger Eichorn. The first post can be found here.
Now on to business…
—————————————————–
What is philosophy?
In asking this question, it is misguided—and probably hopeless—to insist upon a strict definition (i.e., a definition that specifies necessary and sufficient conditions for something to qualify as ‘philosophy’). Chances are good that no such definition is possible. Rather, it is likely that philosophy is what Wittgenstein called a ‘family resemblance’ concept, that is, a concept that picks out a number of importantly distinct things that are more or less loosely bound together by a resemblance-relation. Wittgenstein’s most famous example is the concept game: it seems impossible to provide necessary and sufficient conditions for something to qualify as a game, yet it also seems that all the various things we refer to as ‘games’ bear some sort of resemblance to one another.
What I’m after, then, is not a strict definition, but a sort of physiognomy of philosophy. What is/are the most salient or common feature(s) of the family resemblance? The explanatory desideratum is to understand what makes philosophy distinct from other intellectual domains. What distinguishes philosophy from, say, theology or the sciences? In most cases, it does seem that, as with porn, we ‘know it when we see it.’ But I think that, in addressing the question “What is philosophy?”, we can do better than simply pointing to examples. Indeed, I believe that there is a single feature of philosophy that both (a) stands out more prominently than any other and (b) provides the groundwork for a systematic explanation both of philosophy’s relation to other intellectual domains and of the apparent interminability of philosophical inquiries. That feature is skepticism.
Philosophy and skepticism are, I want to argue, inextricably entwined.
—————————————————–
Now, what exactly I mean by ‘the entwinement of skepticism and philosophy’ will be the topic of this and the two posts that will follow. Thus, my claim should not be prejudged. In particular, it should not be dismissed out of hand. Given what I’ve said so far, there are numerous ways of understanding the claim as meaning things I do not intend.
I began by asking “What is philosophy?” Now, it seems, I’m forced to address first another nebulous question, namely, “What is skepticism?” In fact, my answers to both questions will unfold together, over the course of this and subsequent posts. The questions will be approached by way of a discussion of presuppositions, specifically the idea of freedom from presuppositions, or ‘presuppositionlessness.’
What do I mean by ‘presuppositions’? It is important that we not over-intellectualize the concept, for doing so would obscure the sort of presupposition I’m most interested in. I imagine that when many people think of presuppositions, they think first of something like (i) consciously developed and articulated hypotheses, such as those posited by scientists. But there is also a deeper sense of presupposition, according to which presuppositions are (ii) the unreflective (or prereflective) commitments that frame or underlie our sayings and doings, our ‘situation’ as human-beings-in-the-world. Presuppositions of this sort lie so far in the background of—or, alternatively, saturate so completely—our cognitive lives as to be effectively invisible. Such presuppositions can, at least in principle, be made visible; but such a process of explication involves thematizing commitments that were already there, rather than (as in the case of scientific hypotheses) developing new commitments. A third sort of presupposition lies somewhere between the two: (iii) they are not hypotheses, but neither are they entirely unreflective. In most cases, this third kind of presupposition will be taken, by those who hold them, as obviously true, perhaps as ‘self-evident.’ Thus, they will not be seen as presuppositions by those who hold them, but as something like fundamental, immovable, or indubitable beliefs/truths.
I shall refer in what follows to presupposition contexts. A presupposition context is a ‘situation,’ with regard to our sayings and doings, that is framed and defined by either the second or the third sort of presupposition introduced in the previous paragraph. Presupposition(ii) contexts define what I call ‘common life,’ i.e., the context into which we’re ‘thrown’ (as Heidegger would say), both as natural beings and as products of a particular culture. Such contexts are the ‘background’ of our ‘everydayness’; their constitutive presuppositions determine to a large extent how the world shows up for us, in the sense of how things strike us, how they appear to us to be. These presuppositions are expressed affectively as well as—indeed, perhaps more fundamentally than they are expressed—cognitively.
For instance, I happen to think that incest is wrong. The proposition is one I find that I cannot fail to assent to. Why do I believe that incest is wrong? I could, of course, marshal any number of reasons to support the belief, but (a) the belief, in its cognitive guise, is incapable of withstanding devastating counterarguments, and (b) even if I were brought around, intellectually, to rejecting the belief (which happens when I stop and really think about it), the belief qua affective-disposition remains. In other words, even if I ‘officially’ reject the proposition that incest is wrong, I continue to find incest repulsive. (Regarding this example: see the study referenced and discussed by Jesse Prinz in The Emotional Construction of Morals, p. 30.) This repulsion is, on my view, an expression of the sort of deep underlying commitment that constitutes the context of common life. Common life is, as Wittgenstein put it, an inherited background: “I did not get my picture of the world by satisfying myself of its correctness; nor do I have it because I am satisfied of its correctness. No: it is the inherited background against which I distinguish between true and false” (On Certainty, §94). Presupposition(ii) contexts, then, are similar to what Wittgenstein refers to as ‘world-pictures’: “The propositions describing this world-picture [= in my terms, context-constitutive presuppositions] might be part of a kind of mythology. And their role is like that of rules of a game; and the game can be learned purely practically, without learning any explicit rules” (On Certainty, §95).
Presupposition(iii) contexts are specialized domains of inquiry. Their constitutive presuppositions are more or less reflective on a case-by-case basis. Often, their constitutive presuppositions are going to match, and arise from, presuppositions framing the more general context of common life, with which specialized domains of inquiry are (at least) going to overlap. So, for instance, historians presuppose that the past existed (i.e., that the world didn’t pop into existence five minutes ago), that the past is unchanging, that certain kinds of presently existing artifacts are capable of informing us about what happened in the past, etc. It may be that a given historian has never actually formulated the belief that the past existed, in which case it looks more like an unreflective Type-2 presupposition. The important point, however, is that the claim that the world has existed for x number of years is constitutive of the very practice of historical inquiry. The historical-inquiry domain is specialized for precisely this reason: it has more or less definite boundaries, the crossing of which constitutes something like a foul. If a nosy ‘subversive epistemologist’ (to borrow a helpful phrase from Michael Forster)—or perhaps a moon-eyed metaphysician—butts into an historical debate to ask, “But how do you know the world didn’t pop into existence five minutes ago?”, the historians have to hand a principled rationale for rejecting the question, for it lies outside the limits of the game they’re playing. The historical-inquiry game can only proceed on the basis of such presuppositions. Calling these context-constitutive presuppositions into question would entail the cessation of historical inquiry. One would begin, instead, to philosophize.
As I suggested above, it can be misleading to refer to Type-2 and Type-3 presuppositions as presuppositions. Type-2 presuppositions can seem to run ‘deeper’ than any mere presupposition. As for Type-3 presuppositions, they are taken to be true (and so not merely presupposed) by those who hold them. In the first case, ‘presupposition’ can seem too intellectual a notion; in the second case, it can seem inappropriate insofar as ‘presupposing’ seems to imply a degree of doubt or tentativeness. All of that is true enough. The rationale for nevertheless referring to ‘presuppositions’ in these cases is that that is how they appear from a philosophical standpoint.
As I’ll argue in more detail in my next post, the practice of philosophy is both historically and conceptually predicated on an initial skepticism regarding the inherent epistemic and practical authority of common life. It strives to provide, now on a purely rational basis, the explanations and justifications that it itself took away from common life. Stripping common life of epistemic and practical authority crucially involves thematizing, and subsequently calling into question, its presuppositions. (This does not mean that philosophers are necessarily hostile to everyday presuppositions. On the contrary, I find that they are generally apologists. But qua philosophers, they seek—usually without outright admitting as much—simply to transplant everyday presuppositions into richer, more solid, and, above all, more rational ground. We can engage in combat in order to strengthen as well as to overthrow.) Philosophy adopts the same sort of attitude toward the more reflective presuppositions of specialized contexts: what the historian takes to be self-evident or indubitable, the philosopher reduces to the status of a mere presupposition.
—————————————————–
It’s hardly surprising, then, that philosophy has traditionally striven to free itself from presuppositions. We simply accept, without reasons, all sorts of things in common life as well as in other, less ‘radical’ domains of inquiry. Moreover, as context-constitutive, such presuppositions form the ground of our presupposition-contextual epistemic–doxastic practices. Given this picture, it can seem that, barring the establishment of presuppositionless knowledge, we’re doomed to irrationality—to playing mere games in the upper stories of the citadel of reason while failing, or even refusing, to investigate its foundations, to see whether the building is sound, whether it rests upon the ground of truth.
In the Republic, Plato argues that genuine knowledge must be presuppositionless: it must descend from the top of the Divided Line down. If we try to make progress bottom-up, we’re “compelled to work from assumptions, proceeding to an end-point, rather than back to an origin or first principle” (510b). He considers the example of geometry and arithmetic: “[T]here are some things they take for granted in their respective disciplines. Odd and even, figures and the three types of angle. That sort of thing. Taking these as known, they make them into assumptions. They see no need to justify them either to themselves or to anyone else. They regard them as plain to anyone. Starting from these, they then go through the rest of the argument, and finally reach, by agreed steps, that which they set out to investigate” (510c–d). Plato associates this sort of inquiry with what he simply calls “thinking” (534a). ‘Thinking’ deals with objects of knowledge, but cannot arrive at genuine knowledge itself, precisely because it cannot dispose of its presuppositions. “As for the subjects which we said did grasp some part of what really is [i.e., geometry and arithmetic]… we can now see that as long as they leave the assumptions they use untouched, without being able to give any justification for them, they are only dreaming about what is. They cannot possibly have any waking awareness of it. After all, if the first principles of a subject are something you don’t know, and the endpoint and intermediate steps are interwoven out of what you don’t know, what possible mechanism can there ever be for turning a coherence between elements of this kind into knowledge?” (533b–c). Knowledge, on the other hand, is acquired only when one achieves freedom from presuppositions: the soul “goes from an assumption to an origin or first principle which is free from assumptions” (510b). 
Reason “uses assumptions not as first principles, but as true ‘bases’—points to take off from, entry-points—until it gets to what is free from assumptions, and arrives at the origin or first principle of everything. This it seizes hold of, then turns round and follows the things which follow from this first principle, and so makes its way down to an end-point” (511b–c). The method of achieving presuppositionlessness Plato calls ‘dialectic’: “The dialectical method is the only one which in its determination to make itself secure proceeds by this route—doing away with its assumptions until it reaches the first principle itself” (537d).
The same commitment to presuppositionlessness can be found in Kant. As in Plato, this commitment pushes Kant to reject experience as capable of providing rational satisfaction. “[E]xperience never fully satisfies reason; it [i.e., reason] directs us ever further back in answering questions and leaves us unsatisfied as regards their full elucidation” (Prolegomena). “[R]eason does not find its satisfaction in experience, it asks about the ‘why,’ and can find a ‘because’ for a while, but not always. Therefore it ventures a step out of the field of experience and comes to ideas.” Unfortunately, the move to ‘ideas’ doesn’t help; even here, “one cannot satisfy reason,” for the ‘whys?’ never let up (Metaphysik Mrongovius). As he puts it in the first introduction to the Critique of Pure Reason, “Reason falls into this perplexity through no fault of its own. It begins from principles whose use is unavoidable in the course of experience and at the same time sufficiently warranted by it. With these principles it rises (as its nature also requires) ever higher, to more remote conditions. But since it becomes aware in this way that its business must always remain incomplete because the questions never cease, reason sees itself necessitated to take refuge in principles that overstep all possible use in experience, and yet seem so unsuspicious that even ordinary common sense agrees with them. But it thereby falls into obscurity and contradictions” (Avii–viii). In other words, the common understanding makes use of principles that, although they are taken to be unproblematic in the course of everyday life, reason (i.e., philosophy) unmasks as objectively unjustified presuppositions (cf., Critique of Pure Reason, A473/B501). Reason, which is not held in check by experience or by the contingencies of common life, strives after, and is satisfied by nothing less than, presuppositionlessness or, in Kant’s terms, the unconditioned. 
“[R]eason in its logical use seeks the universal condition of its judgment… [T]he proper principle of reason in general (in its logical use) is to find the unconditioned for conditioned cognitions of the understanding” (Critique of Pure Reason, A307/B364). “[R]eason demands to know the unconditioned, and therewith the totality of all conditions, for otherwise it does not cease to question, just as if nothing had yet been answered” (“What Real Progress Has Metaphysics Made…?”).
Unlike Plato, however, Kant rejects the possibility of arriving at any sort of transcendent ground of truth. Instead, he argues that we can only have knowledge within the sphere of experience. Still, experience is structured in such a way, he argues, that we can have certain knowledge of what must be the case for experience to be possible at all. (Kant calls this approach transcendental, which refers to conditions of possibility, not to ‘transcendence.’) For Kant, the quest for presuppositionless knowledge ends not in transcendence, but in the uncovering of the determinate limits of knowledge. As he puts it, reason will only be satisfied with “complete certainty”—which entails presuppositionlessness, since any lingering presuppositions could be doubted—“whether it be one of the cognition of the objects themselves or of the boundaries within which all of our cognitions of objects is enclosed” (Critique of Pure Reason, A761/B789).
There is a quite different tradition in Western philosophy, going back at least to Aristotle, that can be seen as furnishing a counterexample to my claim that philosophy strives for presuppositionlessness. It is often thought that Aristotle was not concerned with skeptical problems, that he did not consider them worthy of response or refutation. He is often taken to preempt skeptical philosophers by claiming that some of what they call ‘presuppositions’ are known to be true even though their truth cannot be demonstrated. There’s clearly something right about the latter claim at least: as Aristotle says in the Posterior Analytics, “We contend that not all knowledge is demonstrative: knowledge of the immediate premises is indemonstrable” (72b). The ‘immediate premises’ are what Aristotle calls ‘first principles.’ His argument, then, is that the truth of first principles cannot be demonstrated, yet nevertheless we can know them.
First off, I think it is clear that Aristotle’s philosophy is indeed entwined with skepticism, broadly construed (i.e., ‘subversive epistemologies’). As we’ve just seen, he presents in the Posterior Analytics an anti-skeptical argument. A similar anti-skeptical intent can be found elsewhere in the Aristotelian corpus, such as in the defense of logical laws in Metaphysics Book Gamma. And while he has far more regard than Plato does for common, prephilosophical opinions (endoxa)—often using them as starting-points for the development of his own positions—he is ultimately skeptical of endoxa, for he displays both a willingness to reject them (when they happen to be wrong) and a desire to provide them with a more rational foundation (when they happen to be right). If this is right, and if I’m right to conceptualize the entwinement of skepticism and philosophy as I’ve been doing so far, then we should find in Aristotle a commitment to the epistemic ideal of presuppositionlessness. But just as it has seemed to many that Aristotle is unconcerned with skepticism, so it may seem that he lacks a commitment to the epistemic ideal of presuppositionlessness. Addressing this issue in anything approaching a thorough way is impossible here. All I’m going to do is focus on the anti-skeptical position we’ve looked at from the Posterior Analytics, according to which first principles are known immediately and indemonstrably. Does this mean that Aristotle contents himself with presuppositional knowledge?
Aristotle’s argument in the Posterior Analytics anticipates—and may well have been the source of—the most powerful of all skeptical arguments, namely, the Agrippan Trilemma, according to which any attempt to justify a claim will end either in vicious circularity, infinite regress, or brute hypothesis. Aristotle rejects outright the possibility of an infinite chain of justifications. He also rejects circularity, for on his view, demonstrative knowledge relies on premises that are both prior to and better known than the conclusions derived from them. In the case of circular justifications, though, the same propositions would have to be alternately prior and subsequent to each other, alternately better and worse known than each other. Finally, he denies that immediately known first principles are mere hypotheses; if they were, then the most that could be concluded from them is that “if the primary things [the first principles] obtain, then so too do the things derived from them.” His way of avoiding the Trilemma is to reject the assumption that all knowledge must be demonstrable: there is a type of indemonstrable knowledge, namely, knowledge of first principles. But how do we know first principles? On this, Aristotle’s remarks are cryptic, to say the least. Such knowledge is not innate, but is said to “come to rest in the soul” as a result of “induction” from various instances of “perception” (100a–b). Are these first principles merely presupposed, or are they known? The skeptic—as well as many a dogmatist, such as Plato—will claim that they’re merely presupposed. Aristotle, however, is going to deny this. As we’ve seen, he holds that the first principles can be known, not merely hypothesized. In fact, he holds that all demonstrative knowledge rests on prior knowledge: “All teaching and all learning of an intellectual kind proceed from pre-existent knowledge” (71a). Aristotle, then, is not content with presuppositional knowledge.
We can disagree over the effectiveness of his strategy, but that his strategy evinces a commitment to presuppositionlessness should be clear.
Aristotle’s brand of anti-skeptical foundationalism can be found not only in later Aristotelians, but also, I would argue, in such philosophically distant groups as the so-called commonsense philosophers. Like Aristotle, commonsense philosophers, from Thomas Reid to G.E. Moore to Jim Pryor, maintain that some things (indeed, a great many things) are simply and irrefutably known and so cannot be genuinely called into question. These privileged bits of knowledge are indubitable, immovable, self-evident.
The problem—as Ambrose Bierce underlines in the entry on “Self-Evident” in The Devil’s Dictionary—is that, when scrutinized, ‘self-evident’ seems to mean merely that which is “[e]vident to one’s self and to nobody else.”
—————————————————–
More recently, many philosophers have questioned the viability or necessity of attaining freedom from presuppositions. It has been argued, for instance by Robert Stalnaker, that ‘pragmatic presuppositions’ are a necessary condition for discourse (see his Context and Content, p. 49). In On Certainty, Wittgenstein seems to make a similar argument: “[T]he questions that we raise and our doubts depend on the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn” (§341). But, Wittgenstein adds, “[I]t isn’t that the situation is like this: We just can’t investigate everything, and for that reason we are forced to rest content with assumption. If I want the door to turn, the hinges must stay put” (§343). “It may be that all enquiry on our part is set so as to exempt certain propositions from doubt, if they are ever formulated. They lie apart from the route travelled by enquiry” (§88).
I’ll return to some of these ideas in subsequent posts. For now, I want merely to point out that, on the picture I’m presenting, all domains of inquiry are presupposition-contextual from a philosophical standpoint. It may be that determinate intellectual or dialogic progress can only be made against a fixed background of unquestioned commitments. If this is so, and if I’m right that philosophy is traditionally committed to the ideal of presuppositionlessness, then we would have the beginnings of an explanation of the apparent interminability of philosophical inquiries. Philosophy, even when explicitly committed to presuppositionlessness, often proceeds presupposition-contextually, such as when it mistakes its presuppositions for self-evident first principles. If progress cannot be made presuppositionlessly, then the only way for philosophy to make progress would be somehow to forestall the possibility of calling into question the presuppositions structuring a given philosophical discourse. The problem with this is that philosophy does not appear to have any determinate boundaries, such as those that structure historical inquiries. Philosophy, in short, lacks a principled means of calling “Foul!” Philosophers are free, qua philosophers, to call into question any presupposition whatsoever. It seems, in fact, that the task of securing a determinate set of presuppositions for philosophy—a presupposition-set that would allow philosophy to make determinate progress—is actually incoherent, for it seems that the only rational way to forestall the possibility of calling into question context-constitutive presuppositions is to ground or justify those presuppositions; yet doing so is tantamount to stripping those presuppositions of their status as presuppositions.
In the Apology for Raymond Sebond, Michel de Montaigne wrote that “[i]t is very easy, upon accepted foundations, to build what you please… Whoever is believed in his presuppositions, he is our master and our God; he will plant his foundations so broad and easy that by them he will be able to raise us, if he wants, up to the clouds… If you happen to crash this barrier in which lies the principal error, immediately [philosophical dogmatists] have this maxim in their mouth, that there is no arguing against people who deny first principles.” In Montaigne’s view, “there cannot be first principles for men,” given the limits of our reason. “To those who fight by presupposition, we must presuppose the opposite of the same axiom we are disputing about. For every human presupposition and every enunciation has as much authority as another, unless reason shows the difference between them. Thus they must all be put in the scales, and first of all the general ones, and those which tyrannize over us.” For as Kant wrote, “[R]eason has no dictatorial authority; its verdict is always simply the agreement of free citizens, of whom each one must be permitted to express, without holding back, his objections and even his veto” (Critique of Pure Reason, A738–9/B766–7).


March 18, 2013
Three Roses: The Sack of Nevegas
Hello all! Roger Eichorn here again.
Sorry for the delay in posting the second of my “Metaphilosophical Reflections.” I hoped to have it ready today. Instead I’ve blundered my way into doing a bunch of reading to fill a gap in the story I want to tell. I should have the second post up tomorrow or the next day. Currently, the plan has me writing five posts in total. I expect to have all of them up by the end of the month.
As for my fantasy writing, I’ve decided to keep it simple: I’m going to post two ‘Bonus Scenes’ from later sections of the book (that is, later than the first three chapters, which can be read here). The scenes are continuous and relatively self-contained: they center on a character (Davyd Carverus) who was introduced earlier but was never focused on. Thus, the scenes represent the beginning of Carverus’s ‘viewpoint arc,’ and should be relatively easy to follow even pulled from the rest of the story. You can find the first Bonus Scene here. I’ve included the first few paragraphs below:
—————————————
1546, Moon of Dathiel (Late Spring), Nevegas
Davyd Carverus would later swear that he had seen the arrow before it struck—its arc somehow plucked from swirling chaos, as though his gaze were pinned to the fault line of fate.
Under the late spring sun, the arrow, fired from the walls of the Holy City of Nevegas, bloomed in the neck of the Duke of Iseldas. The duke teetered, his arms pinwheeling, and fell to the mud at his horse’s hooves. The damnable fool was wearing his famous white cloak, which marked him as commander of the imperial army. It was useful for one’s troops to be able to identify their leader from a distance, but such information was equally exploitable by one’s enemies, a truth the duke learned in most dire fashion, once and for all, in the final choking moments of his life.
A reverent, almost superstitious hush fell within the ring of men nearest the duke—but only for a moment. Nothing would prevent the imperial force from storming the walls, not stone nor steel nor the roar of the defenders’ cannon batteries. And now, with Iseldas dead, nothing would prevent the sacking—the utter despoiling—of the city that lay beyond…


March 15, 2013
Metaphilosophical Reflections I: Preliminaries
“Men have, as it were, a calling to use their reason socially… From this it follows naturally that everyone who has the principium of conceit, that the judgments of others are for him utterly dispensable in the use of his own reason and for the cognition of truth, thinks in a very bad and blameworthy way.”
– Immanuel Kant, Blomberg Logic
—————————————
Hello all! This is Roger Eichorn. I’ll be guest-blogging here for the remainder of the month, while Scott and his family are on vacation.
Like Scott, I’m here to peddle two sorts of product: philosophy and fantasy fiction (though, also like Scott, I find myself increasingly unable to tell them apart!). On the philosophy side of things, I intend to present a series of posts that will introduce my metaphilosophy, that is, my philosophy of philosophy. My metaphilosophical speculations bridge the systematic and the historical sides of my philosophical interests. Thus, I’ll have occasion both to discuss the history of philosophy and to indulge in a bit of first-order philosophizing of my own.
As for my fantasy fiction, I hope that my front-page posts will drum up some renewed interest in the chapters that are already posted here. If time and inspiration strike, I may devote a front-page post or two to my fantasy work. I’m not sure what form such posts would take. I’d be interested to hear people’s thoughts on what they’d be most interested in reading. Three options have occurred to me as likely possibilities: (i) I could post selections from later parts of the book, that is, later than the three chapters already posted here; (ii) I could try to write short, standalone-ish companion pieces, like Scott’s Atrocity Tales; or (iii) I could write ‘historical’ or ‘metaphysical’ posts about the world in which the story takes place, like the sort of material one might find in an Appendix.
Obviously, (i) would be the easiest. In a perfect world, I would love to do (ii)—but it would require the greatest expenditure of time and energy. Moreover, I’ve never been good at short fiction. My ‘short story’ ideas are invariably novel-sized ideas—and my ‘book’ ideas are invariably ‘series-of-books’ ideas! It would certainly be an interesting experiment, but I would run a serious risk of falling on my compositional face. As for (iii), it would fall somewhere between (i) and (ii) on the ‘difficulty’ / ‘risk-of-creative-failure’ axis.
Now, in the remainder of this post, I’d like to raise and discuss some of the questions that motivate my metaphilosophical reflections. Most generally, there are fundamental questions such as “What is philosophy?” and “How is philosophy related to other domains of intellectual inquiry?” In conversation, I often get at these issues by asking, “Just what exactly do philosophers think they’re doing when they philosophize?” If I’m allowed to go on, I often elaborate thusly: “I mean, what do philosophers hope to achieve? And why do they suppose that their methods—whatever those happen to be—are apt for achieving those ends? Why those methods and not others?”
It’s interesting that there seem to be no uncontroversial answers to questions of this sort. The same cannot be said of most, if not all, other established domains of intellectual inquiry. I mean, sure, historians or sociologists or physicists might give different answers to these sorts of questions, but there is likely to be a more or less easily achieved equilibrium between their differing answers. Not so in philosophy. As for methodology, there might be (and undoubtedly is) real disagreement among, say, historians about how best to pursue historical investigations; but on closer inspection, those methodological disagreements are likely to be based on a broad foundation of agreement such that their disagreements are relatively superficial. Not so in philosophy.
This should not be taken to mean that there is no metaphilosophical harmony among philosophers. There is. But it is local—across both time and space—to a degree that far exceeds that of other domains of intellectual inquiry. Moreover, what harmony does exist seems accidental (as opposed to ‘essential’), in the sense that it doesn’t appear to arise from any intrinsic feature of philosophy itself. In most cases, it doesn’t even arise from a shared explicit commitment to some sort of metaphilosophical ‘self-understanding.’ In most cases, it seems rather to be a function of where and with whom one first studied philosophy, or to be the residue of a ‘politics of exclusion’ perpetuated by philosophers either by (a) reading (or assigning) only certain sorts of texts, or (b) actively looking down upon certain other sorts of texts (and those who read or assign them).
In short, compared to other intellectual disciplines, philosophy-as-such seems untethered, curiously free of any definitive theoretical or conceptual commitments. (I say ‘philosophy-as-such’ to emphasize my unwillingness to play the inclusion/exclusion game. That is, at the level of abstraction from which I’m beginning, there is no basis for claiming that person x, who calls herself a philosopher, really is a philosopher, whereas person y, who also calls herself a philosopher, isn’t really a philosopher. One finds such accusations being made, for instance, across the notorious—and notoriously unhelpful, from an explanatory standpoint—Analytic–Continental divide.)
Another question that motivates my metaphilosophical reflections concerns the apparent interminability of philosophical disputes. It is often claimed, especially by those unsympathetic to philosophy, that philosophy hasn’t made any progress in 2,500 years. This is frequently contrasted with the startling successes of mathematics and the hard sciences in the modern era. Often, pointing to this contrast is considered sufficient to prove philosophy’s intellectual bankruptcy. As will become apparent over the course of this series of posts—and as longtime TPB’ers already know—I’m an unlikely candidate for Champion of Philosophy, given that I’m a card-carrying Skeptic. Even so, I think that the common picture of ‘futile philosophy’ alongside ‘all-conquering science’ is deeply naive.
To begin with, there’s the historical fact that all the sciences—indeed, virtually every branch of intellectual inquiry—were once part of philosophy proper. Far from having made no progress in 2,500 years, philosophy has in fact succeeded in spawning every branch of the modern academic tree. (It’s telling that all Ph.D.s are doctors of philosophy.) Furthermore, as I’m going to argue in subsequent posts, at the level of abstraction at which philosophical disputes are interminable, all disputes are interminable, regardless of whether the disputes’ subject matter is thought of as belonging to ‘philosophy.’ In other words, the interminability of philosophical disputes points up a general fact about human cognition, not a fact peculiar to some specialized domain of inquiry called ‘philosophy.’ Indeed, as I’ve suggested above, there is a sense in which no such domain of inquiry exists. There are no clear boundaries, no clear definitions, of ‘philosophy.’ Ultimately, I want to argue that ‘philosophical reflection’ is distinguished from other forms of intellectual inquiry neither by its subject matter nor by its methodology, but rather by its radicality (which should be understood literally, as pertaining to roots, an etymological link that gives us the word ‘radish’). In subsequent posts, I’ll connect the ‘radicality’ of philosophy to the idea of presuppositionlessness, which I take to be the concept by means of which philosophy can be distinguished from, and related to, other domains of intellectual inquiry.
The metaphilosophical problem of interminability connects up with another question I’m interested in, one that seems especially pertinent given the dust-up in the discussion thread of Scott’s latest post on the Blind Brain Theory: namely, the philosophical significance of disagreement, specifically disagreement among epistemic peers. I may or may not take up this issue to the extent it deserves, as it’s secondary to the main points I want to make. That’s why I want to flag it here as an issue that should be kept in mind as we proceed.
There’s a sense in which the interminability of philosophical inquiries seems to be a function of—or at least to be correlated to—the interminability of philosophical disagreements. On the other hand, unless we subscribe to a consensus theory of truth (which should be kept separate from a consensus criterion of truth), it seems that, in and of itself, disagreement is epistemically unproblematic. After all, if person x is right about p and person y is wrong about p, then the fact that persons x and y continue to disagree about p has no bearing on the truth or falsity of p. Yet even if this is right (which—again, barring a consensus theory of truth—it seems to be), it strikes me as wrongheaded in the extreme to deny that persistent, irresolvable disagreement among epistemic peers is epistemically problematic (in some sense, at least). In my view, while disagreement may be unproblematic with respect to theories of truth (i.e., with regard to truth as such), it is deeply problematic with respect to criteria of truth. In other words, even if disagreement does not stand in the way of us being right, it does (at least among epistemic peers) stand in the way of us knowing we’re right.
Kant saw this clearly. “[R]eason,” he wrote, “has no dictatorial authority; its verdict is always simply the agreement of free citizens, of whom each one must be permitted to express, without holding back, his objections and even his veto” (Critique of Pure Reason, A738–9/B766–7). He refers to “the comparison of our judgments with those of others” as a “touchstone of truth,” while “[t]he incompatibility of the judgments of others with our own is… an external mark of error” (Jäsche Logic). And in the Blomberg Logic, he claims that “[a]s long as there is controversy concerning a thing… as long as disputes are exchanged by this side or the other, the thing is not yet settled at all.” Underlying these claims is a commitment to the view that human beings share in one and the same common humanity. There is no principled way, at least at the outset of a dispute, to privilege one person’s opinion over that of another, for we are all human. If we genuinely know that we know that p—that is, if we have genuine reflective knowledge that p and not simply an unverified (though possibly true) belief that p—then we should, it seems, be able to demonstrate to others that we know p such that they will come to recognize the truth of p and come to believe—and know—p as well.
In many domains of inquiry—including that vast, amorphous domain I call ‘common life,’ which simply refers to our everyday world, in which many things are routinely inquired into, etc.—there are more or less established means of arriving at the sort of rational consensus Kant has in mind. (A prime generator of consensus in today’s world is Google, as when someone interrupts a dispute by saying, “Just Google it!”) An example can be found in Plato’s Meno, in which Socrates teaches (‘demonstrates’ the truth of) geometric axioms to a slave-boy. Now, looked at more closely, available mechanisms for generating rational consensus are all questionable with respect to whether or not they are productive of genuine knowledge. (Google certainly is.) But even so, it is peculiar that philosophy is a domain of inquiry that, as a whole, has no generally agreed upon methods for generating consensus. Again, as I suggested above, I think this points up not a shortcoming of philosophy as such, but rather a shortcoming of human cognition as such. Hence, no matter how well-established a given ‘regime of truth’ may be, intellectual history suggests that none is immune to revision, reconceptualization, and rejection. Even those geometric proofs that Plato taught the slave-boy can be called into question by non-Euclidean geometries.
So what is going on when a number of people, all possessing at least the minimum intellectual capabilities necessary to grasp the matter in hand, cannot agree? My answer, in short, is that these people are working on the basis of differing sets of underlying presuppositions, meaning that their disagreement is rooted in a deeper disagreement about which they are not actively arguing. Hence, they are unable to make progress toward consensus, for the roots of their disagreement go deeper than their debate does.
Depending on how much conceptual baggage one loads onto this initial characterization, the view will likely seem either obviously (and so uninterestingly) true or else overly (and hence uninterestingly) simplistic. There is a sense in which I agree with the ‘obviously-true’ charge—though I think that the consequences of the view, once thought out, are far from obvious. As for the ‘overly-simplistic’ charge: while I agree that the view is literally neat, I think it will become clear, once it’s looked at more closely, that the apparent simplicity of the view’s initial statement masks all sorts of hidden complexities.
One thing my view does not do is provide a means of escaping dialogic impasses, if ‘escape’ means generating consensus. The most I hope for is to point toward the possibility of reorientation, the possibility of coming to view the epistemic–doxastic state both of ourselves and of others—and hence the nature of our disagreements—differently such that we don’t give in to the tempting move Wittgenstein noted when he wrote, “Where two principles really do meet which cannot be reconciled with one another, then each man declares the other a fool and a heretic” (On Certainty, §611).
With respect to the charge of foolishness, we would do better to recognize that we are all fools. As Michel de Montaigne wrote, in the voice of the Delphic Oracle, “There is not a single thing as empty and needy as you [i.e., Man], who embrace the universe: you are the investigator without knowledge, the magistrate without jurisdiction, and all in all, the fool of the farce” (Of Vanity).
With respect to the charge of heresy, we would do better to question our own judgment at least as strongly as we question that of the person with whom we disagree. Again quoting Montaigne: “… it is putting a very high price on one’s conjectures to have a man roasted alive because of them” (Of Experience).
I look forward to working through some of these ideas with all of you over the next couple weeks. I’ll do my best to keep up with the comments. Thanks for reading!


March 11, 2013
The Ptolemaic Restoration: Object Oriented Whatevery and Kant’s Copernican Revolution
“And now, after all methods, so it is believed, have been tried and found wanting, the prevailing mood is that of weariness and complete indifferentism” –Immanuel Kant, The Critique of Pure Reason
So, continuing my whirlwind interrogation of the new Continental materialisms, I want to turn to Object-Oriented Whatevery via the lens of Levi Bryant’s “The Ontic Principle: Outline of an Object Oriented Ontology.” As always, I need to impress upon you that I’m a tourist and not a native of these philosophical climes, so I sincerely encourage anyone who comes across what seems to be an obvious misreading on my part to expose the offending claims in the comments. My goals, once again, are both critical and constructive: in the course of showing you why I think it’s obvious that Bryant cannot deliver the goods as advertised, I want to demonstrate the explanatory reach and power of BBT, not as any kind of theoretical panacea, but as a system of empirically tractable claims that, in the tradition of scientific theory more generally, are quite indifferent to what we want to be the case. Like I’ve said before, the conclusions suggested by BBT are so radical as to almost qualify as a reductio, were it not for the fact that a reductio is precisely the way it would appear were it true. And besides, as I hope some of you are at least beginning to see, there is something genuinely uncanny about its explanatory power.
Essentially I want to argue that BBT may actually deliver on what Bryant advertises–a way out of the philosophical impasses of the tradition, even a ‘flat ontology’ rationalized via difference!–though its consequences are nowhere near so kind. I’ve corresponded with Levi in the past, and he strikes me as a good egg. It’s his position I find baffling. With any luck he’ll do what Hagglund proved incapable of doing: acknowledge, expose, and contradict–inject some much-needed larva into Three Pound Brain!
Bryant begins, not by rehearsing the primary motive of critical philosophy–namely, how the failure of dogmatic philosophy to produce theoretical knowledge convinced philosophers to examine knowing–but rather by rehearsing the claim of critical philosophy, the notion “that prior to any claims about the nature of reality, prior to any speculation about objects or being, we must first secure a foundation for knowledge and our access to beings” (262). This allows him, quite without irony, to rehearse what he takes to be the primary motive of Object Oriented Ontology: the failure of critical philosophy to produce theoretical knowledge. “Faced with such a bewildering philosophical situation,” he writes, “what if we were to imagine ourselves as proceeding naively and pre-critically as first philosophers, pretending that the last three hundred years of philosophy had not taken place or that the proper point of entry into philosophical speculation was not the question of access?” In other words, given the failure of three centuries of critical philosophy to produce theoretical knowledge, perhaps the time has come to embrace, as best we can, the two millennia of dogmatic failure that preceded it.
Thus he motivates a turn away from the subject of knowledge to the object of knowledge, from the epistemological to the ontological–as we should, apparently, given that the object comes first. After all, as Heidegger made ‘clear,’ “questions of knowledge are already premised on a pre-ontological comprehension of being” (263). Unlike Heidegger, however, who saw in this pre-ontological comprehension an interpretative basis for theorizing a collapse of subject and object (which quickly came to resemble a conceptually retooled subject), Bryant sees a call to theorize, in tentative fashion, the ‘ultimate generalities’ that objectively organize the world. Premier among these tentative ultimate generalities, he asserts, is difference. This leads Bryant to pose what he calls the ‘Ontic Principle,’ the claim “that ‘to be’ is to make or produce a difference” (263).
Why should difference be our ‘fundamental principle’? Well, because all epistemology presupposes it. As he writes:
Paradoxically it therefore follows that epistemology cannot be first philosophy. Insofar as the question of knowledge presupposes a pre-epistemological comprehension of difference, the question of knowledge always comes second in relation to the metaphysical or ontological priority of difference. As such, there can be no question of securing the grounds of knowledge in advance or prior to an actual engagement with difference. 265
To which the reader might be tempted to ask, How do you know?
This is one of those junctures that makes me (if only momentarily) appreciate Derrida and his tireless attempts to show philosophers the inextricable co-implication of dokein and krinein. The easiest way to illustrate it here is to simply wonder aloud what is ‘presupposed’ by difference. If difference comes before epistemology because epistemology ‘presupposes’ difference as its ‘condition,’ and if the ultimate ‘first first,’ no matter how ‘tentative,’ is what we are after, then we should inquire into the presuppositions of our alleged presupposition. Since there can be no difference without the negation of some prior identity, for instance, perhaps we should choose identity–snub Heraclitus and do a few rails with Parmenides.
Can counterarguments be adduced against the ontological primacy of identity? Of course they can (and Bryant helps himself to a few), just as counterarguments can be adduced against those counterarguments, and so on and so on. In other words, if critical philosophy is motivated by the failure of dogmatic philosophy to produce theoretical knowledge, and if Bryant’s neo-dogmatic philosophy is motivated by the failure of critical philosophy to produce theoretical knowledge, then perhaps we should skip the ‘and centuries passed’ part, assume the failure of neo-dogmatism to produce theoretical knowledge and, crossing our fingers, simply leap straight into neo-critical philosophy.
Far from ‘escaping’ or ‘solving’ anything, this strategy–quite obviously in my opinion–perpetuates the very process it sets out to redress. Let’s call this state of oscillating institutional emphasis on the subject and the object of knowledge, ‘correlativity.’ And let’s call ‘correlativism’ the idea according to which philosophy can only ever prioritize either subject or object and never any term other than these two.
Why has correlativism so dominated philosophy since its Modern inception? I actually think I can give a naturalistic answer to this question. The dichotomy of subject and object, of course, possesses a myriad of conceptual attenuations, binaries such as thought and being, mind and body, spirit and matter, ideal and real, epistemology and ontology, to name but a few of the oppositions that have constrained the possibilities of coherent, speculative thought for centuries now. There are other binaries, certainly, categorical conceptual oppositions (such as that between difference and identity) that a number of philosophers (like Heidegger) have recruited in various attempts to think beyond subjectivity and objectivity, only to find themselves, inexorably it seems, re-inscribed within the logic of ‘correlativism.’ In this sense, I will be following a very well-trodden path, though one quite different from the one proposed by Bryant above–or so I like to think.
The primary problem I see with Bryant’s approach is that it takes the failure of critical philosophy to produce theoretical knowledge to obviate the need to answer the primary question that it sought to answer, which is, namely, the question of securing speculative truth despite the limitations of our nature. We are afflicted with numerous ‘cognitive scandals,’ basic questions it seems we should be able to answer but for whatever reason cannot. What is the good? Does the external world exist? What is beauty? Does the past exist? What is justice? Do other minds exist? What is consciousness? No matter how many answers we throw at these and other questions, the skeptic always seems to carry the day–and handily.
For whatever reason, we lack the capacity to decisively answer these questions. When it comes to the problems of critical philosophy, Bryant would have you focus on the ‘critical’ and to overlook the ‘philosophy.’ What precisely failed when it came to critical philosophy? Given the manner in which it seeks to redress the failure of dogmatic philosophy, the more obvious answer (by far, one would think) is philosophy. And indeed, the more cognitive psychology learns about human reasoning, the more understandable the generational failure of philosophy to produce theoretical knowledge becomes. Human beings are theoretically incompetent, plain and simple. Doubtless we have the capacity to theorize, but it is a capacity that evolved long before our theories could exhibit any accuracy. Whatever fitness it conferred on our ancestors had precious little to do with theoretical ‘discovery.’ Science would not represent the signature institutional achievement of our times were it otherwise.
In all likelihood, the critical impulse, the call for reason to critique reason, had no special part to play in critical philosophy’s failure to secure theoretical knowledge. So why then did it fail to improve the lot of philosophy? Well, who’s to say it hasn’t? Perhaps it improved the cognitive prospects of philosophy in a manner that philosophy has yet to discern. It’s worth recalling that for Kant, the project of critique was in an important sense continuous with the greater enterprise of Enlightenment. Noting the power of mathematics and natural science, he writes:
Their success should incline us, at least by way of experiment, to imitate their procedure, so far as the analogy which, as species of rational knowledge, they bear to metaphysics may permit. Hitherto it has been assumed that all our knowledge must conform to objects. But all attempts to extend our knowledge of objects by establishing something in regard to them a priori, by means of concepts, have ended in failure. We must therefore make trial whether we may not have more success in the tasks of metaphysics, if we suppose that objects must conform to our knowledge. This would agree better with what is desired, namely, that it should be possible to have knowledge of objects a priori, determining something in regard to them prior to their being given. We should then be proceeding precisely on the lines of Copernicus’ primary hypothesis. Failing of satisfactory progress in explaining the movements of the heavenly bodies on the supposition that they all revolved round the spectator, he tried whether he might not have better success if he made the spectator to revolve and the stars to remain at rest. A similar experiment can be tried in metaphysics, as regards the intuition of objects. (Critique of Pure Reason, 22)
If it is the case that the sciences more or less monopolize theoretical cognition, then the most reasonable way for reason to critique reason is via the sciences. The problem confronting Kant, however, was nothing less than the problem confronting all inquiries into cognition until very recently: the technical and theoretical intractability of the brain. So Kant was forced to rely on theoretical reason absent the methodologies of natural science. In other words, he was forced to conceive critique as more philosophy, and this, presumably, is why his project ultimately failed.
The best Kant could do was draw some kind of moral from the sciences, a ‘procedural analogy’ as he puts it. Taking Copernicus as his example, he thus proposes ‘to put the spectator into motion.’ Kant scholars have debated the appropriateness of this analogy for centuries. As Russell notoriously points out, Kant does not so much put the subject into motion about the object as he puts the object into motion about the subject and so “would have been more accurate if he had spoken of a ‘Ptolemaic counter-revolution’ since he put Man back at the centre from which Copernicus had dethroned him” (Human Knowledge: Its Scope and Limits, 1). Where the Cartesian subject anchored the possibility of knowledge, the Kantian subject anchors the possibility of experience. As the invariant frame of every possible experience, transcendental subjectivity would seem to be ‘motionless’ if anything. So if one takes the ‘spectator’ in Kant’s analogy to be the subject, it becomes hard to understand what he means.
In a famous note to the Second Preface a few pages subsequent, however, Kant suggests he’s after an ‘analogous change in point of view,’ one allowing us to see truths that are otherwise “contradictory of the senses” (25). After all, for thousands of years the prevailing assumption was that the subject had no constitutive role to play, that objects could thus be known without consideration of the knower. And in this sense, his analogy functions quite well. Consider, for instance, the elaborate theoretical machinery once required to make sense of the retrograde motion of Mars across the night sky, and how simply putting the spectator-earth into motion allows us to resolve this otherwise perplexing experience. Our problematic experience of Mars is literally an illusion pertaining to our ignorance of earth. Kant is claiming the ‘retrograde motions’ of metaphysics are likewise an illusion pertaining to our ignorance of cognition.
The parallel, as he sees it, lies in the attribution of activity to the ‘spectator.’ In early 1772, Kant wrote to Marcus Herz regarding the question of “how a representation that refers to an object without being in any way affected by it can be possible,” a letter that clearly signals the decisive break in his thought leading to the so-called ‘silent decade’ separating his dogmatic Inaugural Dissertation from the Critique. “If such intellectual representations depend on our inner activity,” he asks, “whence comes the agreement that they are supposed to have with objects–objects that are nevertheless not possibly produced thereby?” All critical philosophy, you could say, is struck from the hip of this question–one that could just as easily be posed to Bryant and his fellow Speculative Realists today…
So where Copernicus resolved the manifest problems of astronomy by attributing planetary motion to the earth, Kant thinks he has resolved the manifest problems of metaphysics by attributing representational activity to the subject. Expressed thus, the analogy is quite clear. So then why does it also seem to constitute an egregious disanalogy as Russell and others insist? Call this Kant’s Copernican paradox: the way his attribution of activity to the subject, though analogous to Copernicus’ attribution of motion to the earth, somehow commits him to a Ptolemaic conception of subjectivity. As preposterous as it sounds, I think the resolution to this paradox could entail nothing less than the end of philosophy as we know it…
Like everything else, these strange fucking days.
First I want to point out a couple of strange features that no one, to my knowledge anyway, has called attention to before. The first regards the curious assumption of spectatorial immobility or inactivity. Why is it that both the astronomical and the metaphysical tradition initially assumed the immobility of the earth and the inactivity of the subject respectively? Why should, in other words, immobility or inactivity be the default, the intuition to be overcome?
The second regards Kant’s hubristic cognitive presumption, the fact that he quite literally believed he had solved all the problems of metaphysics. For all its notorious technicality, the Critique possesses a bombast that would make a laughingstock of any philosopher writing today, and yet, not only do we find Kant’s proclamations forgivable, we somehow find them–implicitly at least–understandable as well. Somehow we intuitively understand how Kant, given the unprecedented nature of his approach, could be duped into thinking his way was the only way. Why does ignorance of alternatives generate the illusion of univocality? Or conversely, why does the piling on of interpretations tend to undermine the plausibility of novel interpretations?
This latter, of course, turns on the invisibility of ignorance–or as the Blind Brain Theory terms it, sufficiency. Our brains are mechanistic systems, astronomically complex symphonies of stochastically interrelated activities. Sufficiency simply follows from our mechanistic nature: central nervous systems operate according to information activated. This is the basic reason why insufficiency is parasitic upon sufficiency (and ultimately why falsehood is parasitic upon truth). The cognition of insufficient information as insufficient always requires more information. And so Kant, lacking information regarding the insufficiency of his interpretations, information that only became available as the array of viable alternatives became ever more florid, assumed sufficiency, that is, the apodictic status of his ‘transcendental deductions.’
The former also turns on sufficiency, albeit in a different respect. Cognizing the mobility of the earth requires information to that effect. In the absence of such information, we quite simply lack the ability to differentiate the position of the earth one moment to the next. Thus the manifest experience of the heavens moving about a motionless earth. The same goes for the subject: cognizing the activity of the subject requires information regarding differences made. In the absence of that information quite simply no difference is made. Thus the dogmatic metaphysical stance, where the philosopher, possessing only information regarding the objects of knowledge, attributes all activity (differentiation) to those objects and assumes cognition is a passive register.
So what does any of this have to do with the Copernican paradox described above? As we noted, the analogy works insofar as it attributes what is manifest to the activity of the subject. The analogy fails, on the other hand, because of the way it seems to render the subject the motionless centre about which objects now revolve. The solution to this paradox, not surprisingly, turns on the question of where the information runs out. Kant himself refers, on occasion, to finding the ‘data sufficient to determine the transcendental,’ assuming (given sufficiency, once again) that the information he had available was all that he required. But, as the subsequent profusion of variant transcendental interpretations has made plain, the information at his disposal does not even come close to possessing apodictic sufficiency.

Given the pervasive, not to mention persuasive, nature of sufficiency, it is worth rehearsing how the accumulation of scientific information has transformed our traditional metacognitive understanding of memory. Our traditional metacognitive assumption was that memory was a kind of storehouse, like the aviary Plato immortalized in the Theaetetus. With Ebbinghaus in the 19th century, memory at last became an object of scientific inquiry. The story then becomes one of accumulating distinctions between different kinds of memory, as well as a drastic reappraisal of its veridical and systematic role. The picture that has emerged is so complicated, in fact, so different from our initial metacognitive assumptions, that some researchers now advocate dispensing with the traditional notion of memory altogether.
Our metacognitive sense of memory, what makes Plato’s analogy so convincing, is quite simply an artifact of informatic neglect, our inability not only to cognize the complexities of our capacity to remember, but to cognize that inability to cognize. BBT maintains that metacognitive blindness or neglect is a wholesale affair. Thus the ‘introspection illusion.’ Thus the troubling nature of dissociations such as that found in ‘pain asymbolia.’ Thus the ‘peculiar fate’ of reason, how, as Kant notes at the beginning of his first Preface to the original Critique, “it is burdened by questions which, as prescribed by the very nature of reason itself, it is not able to ignore, but which, as transcending all its powers, it is also not able to answer” (7). Thus, in other words, the blindness of reason to itself.
And, most importantly here, thus the transcendental. The idea is this: the problems besetting dogmatic philosophy provided Kant the information required to attribute activity to various aspects of subjective cognition and nothing more. The reason Kant’s Copernican analogy takes the peculiar, Ptolemaic form it does has to do with the way metacognitive neglect combined with the illusion of sufficiency forces him to locate the activities he attributes beyond the circuit of nature–to characterize them as ‘transcendental.’ Thus, since he lacks the information required to differentially situate these activities, they seem to reside nowhere. The conceptual activity of the subject finds itself nested within the empirically occluded and therefore apparently ‘motionless’ frame of transcendental subjectivity. And this is how Kant, in the act of prosecuting his Copernican revolution, simultaneously achieves a Ptolemaic restoration. Where in dogmatic philosophy the known invariably moves the knowing, in critical philosophy the knowing becomes the unmoved mover of everything that can possibly be known.
The cognition of difference requires information. Absent that information, identity is the default, be it the ‘positional’ self-identity of a motionless earth or a transcendental subject. It’s worth noting that this diagnosis applies whether one opts for an ontological or formal interpretation of Kant. Interpret Kant’s concepts any way you will, if they are to be active in any meaningful sense they have to be natural, which is to say, situated. The Blind Brain Theory maintains that the information integrated into consciousness and made available for conscious deliberation does not magically cut our ‘inner world’ at the joints. It is a brute fact that astronomical information asymmetries characterize the actual operations of our brain and our metacognitive sense of ‘mind.’ BBT provides a way of interpreting the metacognitive conundrums of intentionality and consciousness as artifacts of this asymmetry, the result of various forms of ‘information blindness,’ anosognosias that in some cases generate profound illusions. Consciousness is remarkably low-dimensional, not in the information-conserving sense of compression, but in the ‘lossy’ sense of depletions, distortions, and occlusions. Given that the information available to consciousness is the only information available for conscious cognition, we should not be surprised that this empirical fact possesses profound consequences across the whole range of human cognition. The Copernican paradox is one of these consequences, a striking example of the way information privation generates what might be called the ‘out-of-play’ illusion, the sense that the earth is the motionless centre of the universe on the one hand, and the sense that transcendental activity stands outside the circuit of nature, on the other. 
When combined with sufficiency, or what might be called the ‘only-game-in-town’ illusion, it becomes easy to understand why both geocentrism and transcendental idealism commanded the heights of cognition as long as they did.
(It’s worth noting in passing that both of these illusions are amenable to empirical verification. Any number of experiments can be imagined. Once again, unlike the speculative positions critiqued here, the Blind Brain Theory is continuous with the natural sciences.)
So to return to our question above: Why did ‘critical philosophy’ fail to provide the kind of theoretical knowledge that dogmatic philosophy could not? Because, simply enough, Kant and his successors not only lacked the information they required to naturalize the activity of the subject, they lacked the information required to realize they suffered this lack in the first place! Identifying activity, which is to say, identifying the difference the subject makes, will go down in history as Kant’s signature achievement, his gift to human civilization. But his insight was premature: only now, given the theoretical and technical resources belonging to the sciences of the brain, are we in a position to situate this activity within the greater arena of the natural world.
And this is what makes Bryant’s critique of critical philosophy so retrograde–even atavistic. Here the sciences of the brain are actually making good on the goal of critical philosophy, laying bare the mechanistic activities that underwrite experience and knowledge, and Bryant is calling for a wholesale repudiation, not simply of critical philosophy, but of this very goal. So for instance, we already have a pretty good empirical understanding of why dogmatic philosophy was doomed to failure: humans are theoretically incompetent absent the institutional, conceptual, and procedural prosthetics of the sciences. We also have a good empirical understanding of the heuristic nature, not simply of human cognition, but of all animal cognition. The same way memory research has progressively complicated our traditional monolithic, metacognitive sense of memory, the sciences of the brain are doing the same with regard to cognition more generally. The more we learn, the clearer it becomes that cognition is fractionate, a concatenation of specialized tools, heuristics that conserve computational resources via the systematic neglect of information.
On the Blind Brain Theory, the subject-object paradigm is another one of these heuristics, which is to say, a way to effectively comport our organism to its environments absent certain kinds of information. Recapitulating distal (environmental) information exhausts the resources of the mechanisms involved. Recapitulating proximal (neural) information thus requires supplementary mechanisms, which, given the sheer complexity of the neural mechanisms required to recapitulate distal information, either need to be far more powerful than those mechanisms, or to settle for far less fidelity. More brain, in other words, is required for the brain to track itself the way it tracks its environments. Given the exorbitant metabolic expense (not to mention the absence of direct evolutionary pressures) of such secondary tracking systems, it should come as no surprise that the brain suffers medial neglect, a wholesale inability to track its own functions. This is why the neurofunctional context of any information integrated into conscious cognition (the way it is actually utilized) escapes conscious cognition–why, in other words, experience is ‘transparent.’ This is why we perceive objects while remaining almost utterly blind to the machinery of perception. And this is why our sense of subjectivity is so granular, ineffable, and mysterious. The usurious expense of proximal cognition imposes drastic constraints on our metacognitive capacities, constraints that themselves utterly escape metacognition.
The subject-object paradigm is a heuristic solution according to BBT, a way for the brain to maximize cognitive effectiveness while minimizing metabolic costs. So long as the medial mechanisms involved in the recapitulation of environmental information do not impact the environment tracked, then medial neglect possesses no immediate liabilities and leverages tremendous gains in efficiency. Our brains can track various causal systems in their environments without having to account for any interference generated by the systems doing the tracking. But as soon as those tracking systems do impact their targets–as soon as observation finds itself functionally entangled with its targets–cognition quickly becomes difficult if not impossible. In such instances it must track effects that it cannot, given the occlusion of its own causal activities (medial neglect), situate within the causal nexus of any natural environment. As a heuristic, the subject-object paradigm is not a universal problem solver, though the only-game-in-town illusion (sufficiency) means metacognition is bound to intuit it as such. This explains, not only why we continue to find experience mysterious even as our environmental cognition presses to the asymptotic limits of particle physics and cosmology, but also why those perplexities take the shape they do.
Subject-object cognition, thanks to medial neglect, is utterly incapable of producing genuine theoretical metacognition. Given the subject-object paradigm, the brain remains a necessary blind-spot, something that it can only cognize otherwise. Thus the invisibility of activity, and the epochal nature of Kant’s critical insight. Thus the default nature of dogmatic philosophy, why millennia of errant groping were required before realizing that we were not, as far as cognition was concerned, out of play.
It’s hard to overstate the eerie elegance of this account–damn hard. Whatever the case, BBT is an exhaustive interpreter. Not only does it seem to resolve a number of notorious, hitherto unresolvable conundrums pertaining to consciousness using one basic insight, it claims to offer understanding, in impressionistic outline at least, of why philosophical inquiry has followed the trajectory it has.
In the present context, however, the thing to remember is simply this: To speak of subjects and/or objects as metaphysically fundamental is to immediately commit oneself to the universality of a certain kind of low-dimensional cartoon, which is to say, a heuristic that organizes information in a manner that enables or impedes cognition depending on the particular ecology it finds itself deployed in. The cartoonishness of this cartoon, the way it betrays as opposed to facilitates cognition, is something numerous critics in numerous contexts have called attention to (perhaps illuminating portions of BBT from less comprehensive perspectives). For proponents of so-called embodied cognition, for instance, the subject-object paradigm constitutively neglects what might be called the brain-environment, the greater mechanism that explains the profound continuity of our organism with its environments. For Heidegger, on the other hand, it’s a paradigmatic expression of the ‘metaphysics of presence,’ the wilderness through which the tribes of thought wander awaiting the promise of ‘being.’ For other thinkers in the phenomenological and post-structural traditions, it distorts and conceals essential relations, generating structurally inescapable impasses, social alienation, as well as facilitating myriad abuses of authority and capital.
And for ‘speculative realists’ such as Bryant, Harman, and Meillassoux, it confounds the possibility of genuine theoretical knowledge. Thus the curious canard of ‘correlation,’ and the even more curious conceit that simply naming the subject-object paradigm as a problem provides theoretical egress, rather than simply more of the same–something even the most rabid enthusiast must recognize as a storm-cloud on the horizon. Gone are the early days of novelty, and with them the only-game-in-town illusion of genuine philosophical progress. Speculative realism is now mired in the same ‘bewildering philosophical situation’ it takes as its motive, making claims to theoretical knowledge on inferential grounds every bit as interpretative as those it seeks to supplant, pinning skyhook to skyhook, in the effort to conceal the fact that everything is left hanging…
No different than before.
So many ironies and problems bedevil this approach I simply don’t know where to begin. I’ve already mentioned the unfortunate timing involved in denying activity to cognition just as the bona fide sciences of those activities are in bloom. If theoretical knowledge is what Bryant is after, as he claims, then he need only embrace these sciences, embrace naturalism and forswear his metaphysical fundamentalism. It’s a good rule-of-thumb, I think most will be inclined to agree, to be incredulous of any systematic set of claims that argues against incredulity. But this is precisely what Bryant does in arguing that, even though all his claims are in fact conditioned by his cognitive capacities, personal history, social context, and so on, one should pretend all these potential confounds are out of play. There is no question more honest than, “How do you know?” yet he would have us set it aside on the basis of speculation that, coincidentally enough, has no way of answering this very question.
And it is for this reason, more than any other, that so much Speculative Realism strikes me as desperate philosophy, as the work of weary, thoroughly captive souls that nonetheless refuse to remain indifferent. “There must be some way out!” This has been the cry, naming a need that for many has become so urgent they are willing to suspend disbelief to attain the appearance or approximation of ‘escape.’ This wilful credulity, this opportunistic refusal to critique, is what raises the irony of Bryant’s approach to its most debilitating pitch. After all, questions are what make ignorance visible, what reveal the insufficiencies of our thought–the information missing. Questions, in other words, bring to light differences not made. Thus Bryant, by eschewing Kant’s critical question regarding the differences cognition makes, is in effect occluding the very differences he claims are fundamental. He is not, in fact, interested in ‘doing justice to the plural swarm of differences’ so much as he is interested in differences of the right sort–namely, those that conserve the identity of his Object Oriented Ontology.
The final irony is that BBT, like Bryant’s approach, is decisively concerned with differences–only understood as information, systematic differences making systematic differences. But unlike Object Oriented Ontology, my approach takes information as an unexplained explainer that is warranted by the theoretical work it enables, and not as a metaphysical primitive that warrants all that follows. Theorizing the kinds of informatic constraints (the crucial differences not made) faced by human cognition, BBT provides a powerful diagnosis of the subject-object paradigm, one that not only explains myriad traditional philosophical difficulties, but also allows, on an empirical basis, a means to think beyond the perennial, oscillating tyranny of subject and object, thought and being–and, here’s the important thing, when required. It begins with theoretical knowledge, the sciences of the brain, offering speculative claims that will find decisive, empirical arbitration in the due course of time. Object Oriented Ontology, however, is yet another metaphysical fundamentalism–and an anachronistic one at that. It wades into the swamp of metaphysical argumentation claiming to discover firm ground. Unable to conceive a way beyond the subject-object paradigm, it seizes upon the unfashionable partner, the object, buys it a new dress and dancing shoes, then takes it to the philosophical ball proclaiming discovery. And so, with difference upon its lips, it gets down to the business of perpetuating the same, magically offering rationales for what its practitioners cherish, and critiques of what they despise.
The situation is quite the reverse with BBT. In promising to overthrow noocentrism in a manner consistent with the overthrow of geocentrism and biocentrism centuries previous, it offers far more heartbreak than otherwise…
An escape from all that matters.
This is where the trail of clear inferences comes to an end. I’ve been mulling over ways to characterize a ‘big picture’ that might follow from this crazy attempt to explore post-intentional philosophy. Does it argue a kind of Wittgensteinian quietism, an admission that it lies beyond the ken of our motley tools, or does it suggest some species of informatic pluralism, where you acknowledge the shortcomings of the kinds of understanding you can come to in terms of a universe parsed into possibilities of informatic interaction? Arguing what it is not seems far easier. It is neither a materialism nor an idealism. It is not rationalist or contextualist or instrumentalist or interpretationist. It is, for whatever it’s worth, an extension of the explanatory paradigm of the life and other sciences into the traditional domain of the intentional. Since the intentional domain has no claim it recognizes as cognitive, no traditional philosophical characterization applies. It refuses projection across any one heuristic plane because it recognizes that all such planes are just that, heuristic. There is no subject or object on BBT, no ‘correlativity,’ no fundamental ‘inside/outside,’ only a series of heuristic lenses (to opt for a visual heuristic) allowing various kinds of grasp (to opt for a kinesthetic heuristic). Given that the prostheses of science do allow for counter-to-heuristic knowledge (as with particle physics, most famously), it accords precedence to scientific discovery and the operationalizations that make them possible. To the question of whether we are a global workspace or a brain or a brain-environment (where the latter is understood in any one of many senses (social, historical, biological, cosmological, and so on)) it seems to answer, Yes.
And there is I suppose a certain kind of peace to be found in such a picture.
I keep looking.
.


March 6, 2013
Wake-up Call
Aphorism of the Day: If I have smelled farther than others, it is because I have shoved my head up the asses of giants.
.
Take it for what it’s worth. I’ve been camped on the outskirts of Golgotterath for a while now, and it gets hard, sometimes, keeping things distinct, sorting the theoretical moods from the narrative, deciding what’s besieging what, and who’s storming whom. Besides, I find it plumb exhausting not pissing people off.
So apparently someone posted a link to my previous post on Hagglund’s Facebook page, where it lingered a bit before mysteriously disappearing. I certainly understand the impulse, but for whatever reason explicit acts of hypocrisy rot my soul. I just finished reading an entire book by the guy extolling exposure, so I gotta call it. Just what kind of exposure was he extolling? The flattering kind? The self-promotional kind? Or (what amounts to the same) just the kind that keeps the ugly, dishevelled, and uncredentialed at the door?
I know the fears, I suppose. Academic politics, as ‘Sayre’s Law’ has it, are so vicious because the stakes are so low. Rumour and reputation are the coin of the realm when you profess for a living–aside, that is (ahem), from a steady paycheck, summers off, and the obedience of gullible undergrads. The circles are small enough that you always need to consider who might be listening–especially if you’re fool enough to entertain ambition. Everyone is careful to be careful, urgent to be urbane. Ask yourself, is anything more insane than the ‘academic tone’? You pour your thoughts into a sieve, and you shake and shake and shake, not to gather the kernels of genuine individuality, but the chaff, the maximally processed flour, whatever your ingroup peers can use to bake their maximally tasteless bread. Panning for dirt, the way it has to be when you make any bureaucratized institution your yardstick of value and success. Extolling originality only when it’s dead.
Everything alive is safer that way. Dead.
And agreement is so much more agreeable. A degree. A library. An attitude. A skin. A religion. Want to know how much you really ‘appreciate difference’? Just look at the vocabularies of your friends.
I ain’t no different.
But it’s worth marvelling all the same. The hypocrisy, enough to make a fundamentalist Christian blush. Who would have imagined that the academic humanities, in the course of ceaselessly generating more graduates than jobs, would succeed in casting a para-academic shadow more substantial than themselves? Because this, my friends, seems to be precisely what’s happening. I know there’s people out there who feel this way. Plenty. More importantly, I know there’s people out there with organizational skills who feel this way. A strategic handful. I ain’t that person. I’m just a fucking windbag, but I will assist you if I can. They may have the paychecks, but we have the pots and pans…
Okay, I’m not sure what that means, exactly, except that we now have the capacity to be loud in ways they no longer dare. Too much training. Too much droning before audiences both living and legal. And certainly too much striving, toiling, labouring to secure what our disenfranchised numbers have transformed into a rare earth metal. Too much market share to risk risk. You shrink once you attain what you covet. Worse yet, you set out to make good on all that you have sacrificed. All those norms you had to imbibe, they replace you sip by odourless sip, until you begin sweating colour, inhaling white oblivion–until meticulous grooming becomes second nature. You walk across your campus faerie-land, and you walk and you walk until the day comes when you feel more entitled than astounded. You pull your edges into defensive circles. And you talk and you talk, until your voice feels like an ancient and indestructible boot. Your erudition fades into a pastime. Your relevance escapes you. You fuck anything that lets you. Your faculty photo becomes another orthopedic insert. Laziness becomes indistinguishable from insight, so you begin to promise relief, like any other over-the-counter medication. You peer at your eyebrows in the mirror, thinking, Hmmmm…
Of course we’re more ‘real.’ Our failure (your success) keeps us hungry. Our hunger (your fat) keeps us distinct, mindful of what once mattered.
Eager for overthrow… or at the very least some bell to signal morning.
Because the frontdesk has forgotten.


February 8, 2013
Reengineering Dennett: Intentionality and the ‘Curse of Dimensionality’
Aphorism of the Day: A headache is one of those rare and precious things that is both in your head and in your head.
.
In a few weeks time, Three Pound Brain will be featuring an interview with Alex Rosenberg, who has become one of the world’s foremost advocates of Eliminativism. If you’re so inclined, now would be a good time to pick up his Atheist’s Guide to Reality, which will be the focus of much of the interview.
The primary reason I’m mentioning this has to do with a comment of Alex’s regarding Dennett’s project in our back and forth, how he “has long sought an account of intentionality that constructs it out of nonintentional resources in the brain.” This made me think of a paper of Dennett’s entitled “A Route to Intelligence: Oversimplify and Self-Monitor” that is only available on his website, and which he has cryptically labelled, ‘NEVER-TO-APPEAR PAPERS BY DANIEL DENNETT.’ Now maybe it’s simply a conceit on my part, given that pretty much everything I’ve written falls under the category of ‘never-to-appear,’ but this quixotic piece has been my favourite Dennett article ever since I first stumbled upon it. In the note that Dennett appends to the beginning, he explains the provenance of the paper, how it was written for a volume that never coalesced, but he leaves its ‘never-to-be-published’ fate to the reader’s imagination. (If I had to guess, I would say it has to do with the way the piece converges on what is now a dated consideration of the frame problem).
Now in this paper, Dennett does what he often does (most recently, in this talk), which is to tell a ‘design process’ story that begins with the natural/subpersonal and ends with the intentional/personal. The thing I find so fascinating about this particular design process narrative is the way it outlines, albeit in a murky form, what I think actually is an account of how intentionality arises ‘out of the nonintentional resources of the brain,’ or the Blind Brain Theory. What I want to do is simply provide a close reading of the piece (the first of its kind, given that no one I know of has referenced this piece apart from Dennett himself), suggesting, once again, that Dennett was very nearly on the right track, but that he simply failed to grasp the explanatory opportunities his account affords in the proper way. “A Route to Intelligence” fairly bowled me over when I first read it a few months ago, given the striking way it touches on so many of the themes I’ve been developing here. So what follows, then, begins with a consideration of the way BBT itself follows from certain, staple observations and arguments belonging to Dennett’s incredible oeuvre. More indirectly, it will provide a glimpse of how the mere act of conceptualizing a given dynamic can enable theoretical innovation.
Dennett begins with the theme of avoidance. He asks us to imagine that scientists discover an asteroid on a collision course with earth. We’re helpless to stop it, so the most we can do is prepare for our doom. Then, out of nowhere, a second asteroid appears, striking the first in the most felicitous way possible, saving the entire world. It seems like a miracle, but of course the second asteroid was always out there, always hurtling on its auspicious course. What Dennett wants us to consider is the way ‘averting’ or ‘preventing’ is actually a kind of perspectival artifact. We only assumed the initial asteroid was going to destroy earth because of our ignorance of the subsequent: “It seems appropriate to speak of an averted or prevented catastrophe because we compare an anticipated history with the way things turned out and we locate an event which was the “pivotal” event relative to the divergence between that anticipation and the actual course of events, and we call this the “act” of preventing or avoiding” (“A Route to Intelligence,” 3).
In BBT terms, the upshot of this fable is quite clear: Ignorance–or better, the absence of information–has a profound, positive role to play in the way we conceive events. Now coming out of the ‘Continental’ tradition this is no great shakes: one only need think of Derrida’s ‘trace structure’ or Adorno’s ‘constellations.’ But as Dennett has found, this mindset is thoroughly foreign to most ‘Analytic’ thinkers. In a sense, Dennett is providing a peculiar kind of explanation by subtraction, bidding us to understand avoidance as the product of informatic inaccessibility. Here it’s worth calling attention to what I’ve been calling the ‘only game in town effect,’ or sufficiency. Avoidance may be the artifact of information scarcity, but we never perceive it as such. Avoidance, rather, is simply avoidance. It’s not as if we catch ourselves after the fact and say, ‘Well, it only seemed like a close call.’
Academics spend so much time attempting to overcome the freshman catechism, ‘It-is-what-it-is!’ that they almost universally fail to consider how out-and-out peculiar it is, even as it remains the ‘most natural thing in the world.’ How could ignorance, of all things, generate such a profound and ubiquitous illusion of epistemic sufficiency? Why does the appreciation of contextual relativity, the myriad ways our interpretations are informatically constrained, count as a kind of intellectual achievement?
Sufficiency can be seen as a generalization of what Daniel Kahneman refers to as WYSIATI (‘What You See Is All There Is’), the way we’re prone to confuse the information we have for all the information required. Lacking information regarding the insufficiency of the information we have, such as the existence of a second ‘saviour’ asteroid, we assume sufficiency, that we are doomed. Sufficiency is the assumptive default, which is why undergrads, who have yet to be exposed to information regarding the insufficiency of the information they have, assume things like ‘It-is-what-it-is.’
The concept of sufficiency (and its flip-side, asymptosis) is of paramount importance. It explains why, for instance, experience is something that can be explained via subtraction. Dennett’s asteroid fable is a perfect case in point: catastrophe was ‘averted’ because we had no information regarding the second asteroid. If you think about it, we regularly explain one another’s experiences, actions, and beliefs by reference to missing information, anytime we say something of the form, So-and-so didn’t x (realize, see, etc.) such-and-such, in fact. Implicit in all this talk is the presumption of sufficiency, the ‘It-is-what-it-is! assumption,’ as well as the understanding that missing information can make no difference–precisely what we should expect of a biomechanical brain. I’ll come back to all this in due course, but the important thing to note, at this juncture at least, is that Dennett is arguing (though he would likely dispute this) that avoidance is a kind of perspectival illusion.
Dennett’s point is that the avoidance world-view is the world-view of the rational deliberator, one where prediction, the ability to anticipate environmental changes, is king. Given this, he asks:
Suppose then that one wants to design a robot that will live in the real world and be capable of making decisions so that it can further its interests–whatever interests we artificially endow it with. We want in other words to design a foresightful planner. How must one structure the capacities–the representational and inferential or computational capacities–of such a being? 4
The first design problem that confronts us, he suggests, involves the relationship between response-time, reliability, and environmental complexity.
No matter how much information one has about an issue, there is always more that one could have, and one can often know that there is more that one could have if only one were to take the time to gather it. There is always more deliberation possible, so the trick is to design the creature so that it makes reliable but not foolproof decisions within the deadlines naturally imposed by the events in its world that matter to it. 4
Our design has to perform a computational balancing act: Since the well of information has no bottom, and the time constraints are exacting, our robot has to be able to cherry-pick only the information it needs to make rough and reliable determinations: “one must be designed from the outset to economize, to pass over most of the available information” (5). This is the problem now motivating work in the field of rational ecology, which looks at human cognition as a ‘toolbox’ filled with a variety of heuristics, devices adapted to solve specific problems in specific circumstances–‘ecologies’–via the strategic neglect of various kinds of information. On the BBT account, the brain itself is such a heuristic device, a mechanism structurally adapted to walk the computational high-wire between behavioural efficiency and environmental complexity.
And this indeed is what Dennett supposes:
How then does one partition the task of the robot so that it is apt to make reliable real time decisions? One thing one can do is declare that some things in the world of the creature are to be considered fixed; no effort will be expended trying to track them, to gather more information on them. The state of these features is going to be set down in axioms, in effect, but these are built into the system at no representational cost. One simply designs the system in such a way that it works well provided the world is as one supposes it always will be, and makes no provision for the system to work well (“properly”) under other conditions. The system as a whole operates as if the world were always going to be one way, so that whether the world really is that way is not an issue that can come up for determination. 5
So, for instance, the structural fact that the brain is a predictive system simply reflects the fundamental fact that our environments not only change in predictable ways, but allow for systematic interventions given prediction. The most fundamental environmental facts, in other words, will be structurally implicit in our robot, and so will not require modelling. Others, meanwhile, will “be declared as beneath notice even though they might in principle be noticeable were there any payoff to be gained thereby” (5). As he explains:
The “grain” of our own perception could be different; the resolution of detail is a function of our own calculus of wellbeing, given our needs and other capacities. In our design, as in the design of other creatures, there is a trade-off in the expenditure of cognitive effort and the development of effectors of various sorts. Thus the insectivorous bird has a trade-off between flicker fusion rate and the size of its bill. If it has a wider bill it can harvest from a larger volume in a single pass, and hence has a greater tolerance for error in calculating the location of its individual prey. 6
Since I’ve been arguing for quite some time that we need to understand the appearance of consciousness as a kind of ‘flicker fusion writ large,’ I can tell you my eyebrows fairly popped off my forehead reading this particular passage. Dennett is isolating two classes of information that our robot will have no cause to model: environmental information so basic that it’s written into the structural blueprint or ‘fixed’, and environmental information so irrelevant that it is ignored outright or ‘beneath notice.’ What remains is to consider the information our robot will have cause to model:
If then some of the things in the world are considered fixed, and others are considered beneath notice, and hence are just averaged over, this leaves the things that are changing and worth caring about. These things fall roughly into two divisions: the trackable and the chaotic. The chaotic things are those things that we cannot routinely track, and for our deliberative purposes we must treat them as random, not in the quantum mechanical sense, and not even in the mathematical sense (e.g., as informationally incompressible), but just in the sense of pseudo-random. These are features of the world which, given the expenditure of cognitive effort the creature is prepared to make, are untrackable; their future state is unpredictable. 6-7
Signal and noise. If we were to design our robot along, say, the lines of a predictive processing account of the brain, its primary problem would be one of deriving the causal structure of its environment on the basis of sensory effects. As it turns out, this problem (the ‘inverse problem’) is no easy one to solve. We evolved sets of specialized cognitive tools, heuristics with finite applications, for precisely this reason. The ‘signal to noise ratio’ for any given feature of the world will depend on the utility of the signal versus the computational expense of isolating it.
So far so good. Dennett has provided four explicitly informatic categories–fixed, beneath notice, trackable, and chaotic–‘design decisions’ that will enable our robot to successfully cope with the complexities confronting it. This is where Dennett advances a far more controversial claim: that the ‘manifest image’ belonging to any species is itself an artifact of these decisions.
Now in a certain sense this claim is unworkable (and Dennett realizes as much) given the conceptual interdependence of the manifest image and the mental. The task, recall, was to build a robot that could tackle environmental complexity, not become self-aware. But his insight here stands tantalizingly close to BBT, which explains our blinkered metacognitive sense of ‘consciousness’ and ‘intentionality’ in the self-same terms of informatic access.
And things get even more interesting, first with his consideration of how the scientific image might be related to the manifest image thus construed:
The principles of design that create a manifest image in the first place also create the loose ends that can lead to its unraveling. Some of the engineering shortcuts that are dictated if we are to avoid combinatorial explosion take the form of ignoring – treating as if non-existent – small changes in the world. They are analogous to “round off error” in computer number-crunching. And like round-off error, their locally harmless oversimplifications can accumulate under certain conditions to create large errors. Then if the system can notice the large error, and diagnose it (at least roughly), it can begin to construct the scientific image. 8
And then with his consideration of the constraints facing our robot’s ability to track and predict itself:
One of the pre-eminent varieties of epistemically possible events is the category of the agent’s own actions. These are systematically unpredictable by it. It can attempt to track and thereby render predictions about the decisions and actions of other agents, but (for fairly obvious and well-known logical reasons, familiar in the Halting Problem in computer science, for instance) it cannot make fine-grained predictions of its own actions, since it is threatened by infinite regress of self-monitoring and analysis. Notice that this does not mean that our creature cannot make some boundary-condition predictions of its own decisions and actions. 9
Because our robot possesses finite computational resources in an informatically bottomless environment, it must neglect information, and so must be heuristic through and through. Given that heuristics possess limited applicability in addition to limited computational power, it will perforce continually bump into problems it cannot solve. This will be especially the case when it comes to the problem of itself–for the very reasons that Dennett adduces in the above quote. Some of these insoluble problems, we might imagine, it will be unable to see as problems, at least initially. Once it becomes aware of its informatic and cognitive limitations, however, it could begin seeking supplementary information and techniques, ways around its limits, allowing the creation of a more ‘scientific’ image.
Now Dennett is simply brainstorming here–a fact that likely played some role in his failure to pursue its publication. But “A Route to Intelligence” stuck with him as well, enough for him to reference it on a number of occasions, and to ultimately give it a small internet venue all of its own. I would like to think this is because he senses (or at least once sensed) the potential of this general line of thinking.
What makes this paper so extraordinary, for me, is the way he explicitly begins the work of systematically thinking through the informatic and cognitive constraints facing the human brain, both with respect to its attempts to cognize its environment and itself. For his part, Dennett never pursues this line of speculative inquiry in anything other than a piecemeal and desultory way. He never thinks through the specifics of the informatic privation he discusses, and so, despite many near encounters, never finds his way to BBT. And it is this failure that makes his pragmatic recovery of intentionality, the ‘intentional stance,’ seem feasible–or so I want to argue.
As it so happens, the import and feasibility of Dennett’s ‘intentional stance’ has taken a twist of late, thanks to some of his more recent claims. In “The Normal Well-tempered Mind,” for instance, he claims that he was (somewhat) mistaken in thinking that “the way to understand the mind is to take it apart into simpler minds and then take those apart into still simpler minds until you get down to minds that can be replaced by a machine,” the problem being that “each neuron, far from being a simple switch, is a little agent with an agenda, and they are much more autonomous and much more interesting than any switch.” For all his critiques of original intentionality in the heyday of computationalism, Dennett’s intentional apologetics have become increasingly strident and far-reaching. In what follows I will argue that his account of the intentional stance, and the ever-expanding range of interpretative applicability he accords it, actually depends on his failure to think through the informatic straits of the human brain. If he had, I want to suggest, he would have seen that intentionality, like avoidance, is best explained in terms of missing information, which is to say, as a kind of perspectival illusion.
Now of course all this betrays more than a little theoretical vanity on my part, the assumption that Dennett has to be peering, stumped, at some fragmentary apparition of my particular inferential architecture. But this presumption stands high among my motives for writing this post. Why? Because for the life of me I can’t see any way around those inferences–and I distrust this ‘only game in town’ feeling I have.
But I’ll be damned if I can find a way out. As I hope to show, as soon as you begin asking what cognitive systems are accessing what information, any number of dismal conclusions seem to directly follow. We literally have no bloody clue what we’re talking about when we begin theorizing ‘mind.’
To see this, it serves to diagram the different levels of information privation Dennett considers:
The evolutionary engineering problem, recall, is one of finding some kind of ‘golden informatic mean,’ extracting only the information required to maximize fitness given the material and structural resources available and nothing else. This structurally constrained select-and-neglect strategy is what governs the uptake of information from the sum of all information available for cognition and thence to the information available for metacognition. The Blind Brain Theory is simply an attempt to think this privation through in a principled and exhaustive way, to theorize what information is available to what cognitive systems, and the kinds of losses and distortions that might result.
Information is missing. No one I know of disputes this. Each of these ‘pools’ is the result of drastic reductions in dimensionality (number of variables). Neuroscientists commonly refer to something called the ‘Curse of Dimensionality,’ the way the difficulty of finding statistical patterns in data increases exponentially as the data’s dimensionality increases. Imagine searching for a ring on a 100m length of string, which is to say, in one dimension. No problem. Now imagine searching for that ring in two dimensions, a 100m by 100m square. More difficult, but doable. Now imagine trying to find that ring in three dimensions, in a 100m by 100m by 100m cube. The greater the dimensionality, the greater the volume, the more difficult it becomes to extract statistical relationships, whether you happen to be a neuroscientist trying to decipher relations between high-dimensional patterns of stimuli and neural activation, or a brain attempting to forge adaptive environmental relations.
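The string/square/cube example can be made quantitative with a few lines of Python (a sketch of my own, not from the original): hold the ‘search tolerance’ fixed at one metre, and the fraction of the space within reach of the ring shrinks geometrically with every added dimension.

```python
def near_fraction(dim, size=100.0, tolerance=1.0):
    """Fraction of a `size`-metre search space lying within `tolerance`
    of the ring, approximated as a cube of side 2*tolerance inside a
    cube of side `size`. Shrinks geometrically with dimension."""
    return (2.0 * tolerance / size) ** dim

# String, square, cube: roughly 0.02, then 4e-4, then 8e-6 of the
# space is 'near' the ring, with no extra search effort expended.
for d in (1, 2, 3, 10):
    print(d, near_fraction(d))
```

By ten dimensions the searchable fraction is around 10^-17, which is the exponential blow-up the ‘Curse’ names.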
For example, ‘semantic pointers,’ Eliasmith’s primary innovation in creating SPAUN (the recent artificial brain simulation that made headlines around the world) are devices that maximize computational efficiency by collapsing or inflating dimensionality according to the needs of the system. As he and his team write:
Compression is functionally important because low-dimensional representations can be more efficiently manipulated for a variety of neural computations. Consequently, learning or defining different compression/decompression operations provides a means of generating neural representations that are well suited to a variety of neural computations. “A Large-Scale Model of the Functioning Brain,” 1202
The human brain is rife with bottlenecks, which is why Eliasmith’s semantic pointers represent the signature contribution they do, a model for how the brain potentially balances its computational resources against the computational demands facing it. You could say that the brain is an evolutionary product of the Curse, since it is in the business of deriving behaviourally effective ‘representations’ from the near bottomless dimensionality of its environment.
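The compression/decompression trade Eliasmith describes can be caricatured in a few lines. This is an illustrative sketch of the general idea only, not Eliasmith’s actual semantic-pointer machinery: a fixed linear map squeezes a high-dimensional pattern into a cheap low-dimensional code, and an approximate inverse inflates it back when detail is needed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: a 512-dimensional pattern compressed to 64.
D_HIGH, D_LOW = 512, 64
compress = rng.normal(size=(D_LOW, D_HIGH)) / np.sqrt(D_LOW)
decompress = compress.T  # transpose as a crude approximate inverse

pattern = rng.normal(size=D_HIGH)
code = compress @ pattern        # 8x smaller; cheap to store and manipulate
recovered = decompress @ code    # lossy reconstruction on demand

# The reconstruction is correlated with, but not identical to, the
# original: dimensionality is traded away for computational economy.
similarity = float(np.dot(pattern, recovered) /
                   (np.linalg.norm(pattern) * np.linalg.norm(recovered)))
print(code.shape, round(similarity, 2))
```

The point of the sketch is the asymmetry: everything downstream operates on the 64-dimensional code, and the full pattern is only (approximately) regenerated when a computation demands it.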
Although Dennett doesn’t reference the Curse explicitly, it’s implicit in his combinatoric characterization of our engineering problem, the way our robot has to suss out adaptive patterns in the “combinatorial explosion,” as he puts it, of environmental variables. Each of the information pools he touches on, in other words, can be construed as solutions to the Curse of Dimensionality. So when Dennett famously writes:
I claim that the intentional stance provides a vantage point for discerning similarly useful patterns. These patterns are objective–they are there to be detected–but from our point-of-view they are not out there entirely independent of us, since they are patterns composed partly of our own “subjective” reactions to what is out there; they are the patterns made to order for our narcissistic concerns. The Intentional Stance, “Real Patterns, Deeper Facts, and Empty Questions,” 39
Dennett is discussing a problem solved. He recognizes that the solution is parochial, or ‘narcissistic,’ but it remains, he will want to insist, a solution all the same, a powerful way for us (or our robot) to predict, explain, and manipulate our natural and social environments as well as ourselves. Given this efficacy, and given that the patterns themselves are real, even if geared to our concerns, he sees no reason to give up on intentionality.
On BBT, however, the appeal of this argument is largely an artifact of its granularity. Though Dennett is careful to reference the parochialism of intentionality, he does not do it justice. In “The Last Magic Show,” I turned to the metaphor of shadows at several turns trying to capture something of the information loss involved in consciousness, unaware that researchers, trying to understand how systems preserve functionality despite massive reductions of dimensionality, had devised mathematical tools, ‘random projections,’ that take the metaphor quite seriously:
To understand the central concept of a random projection (RP), it is useful to think of the shadow of a wire-frame object in three-dimensional space projected onto a two dimensional screen by shining a light beam on the object. For poorly chosen angles of light, the shadow may lose important information about the wire-frame object. For example, if the axis of light is aligned with any segment of wire, that entire length of wire will have a single point as its shadow. However, if the axis of light is chosen randomly, it is highly unlikely that the same degenerate situation will occur; instead, every length of wire will have a corresponding nonzero length of shadow. Thus the shadow, obtained by this RP, generically retains much information about the wire-frame object. (Ganguli and Sompolinsky, “Sparsity and Dimensionality,” 487)
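Ganguli and Sompolinsky’s wire-frame image can be run directly. In this sketch (the particular vectors and the use of an orthogonal ‘shadow’ are my own choices for illustration), a wire segment aligned with the light axis casts a zero-length shadow, while a randomly chosen axis almost surely preserves a nonzero length:

```python
import math
import random

random.seed(0)

def length(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def shadow(points, axis):
    """Project 3-D points onto the plane perpendicular to the light `axis`."""
    norm = math.sqrt(sum(c * c for c in axis))
    u = [c / norm for c in axis]
    out = []
    for p in points:
        d = sum(pc * uc for pc, uc in zip(p, u))   # component along the light
        out.append(tuple(pc - d * uc for pc, uc in zip(p, u)))  # remove it
    return out

wire = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]  # a unit wire segment along z

# Degenerate angle: light aligned with the wire collapses it to a point.
bad = shadow(wire, (0.0, 0.0, 1.0))
# Random angle: the shadow generically keeps a nonzero length.
good = shadow(wire, [random.gauss(0, 1) for _ in range(3)])

print(length(*bad))   # 0.0
print(length(*good))  # nonzero (with probability 1)
```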
On the BBT account, consciousness and intentionality, as they appear to metacognition, can be understood as concatenations of idiosyncratic low-dimensional ‘projections.’ Why idiosyncratic? Because when it comes to ‘compression,’ evolution isn’t so much interested in the veridical conservation of information as in the scavenging of effective information. And what counts as ‘effective information’? Whatever facilitates genetic replication–period. In terms of the wire-frame analogy, the angle may be poorly chosen, the projection partial, the light exceedingly dim, etc., and none of this would matter so long as the information projected discharged some function that increased fitness. One might suppose that only veridical compression will serve in some instances, but to assume that it must serve in all instances is simply to misunderstand evolution. Think of ‘lust’ and the biological need to reproduce, or ‘love’ and the biological need to pair-bond. Evolution is opportunistic: all things being equal, the solutions it hits upon will be ‘quick and dirty,’ and utterly indifferent to what we intuitively assume (let alone want) to be the case.
Take memory research as a case in point. In the Theaetetus, Plato famously characterized memory as an aviary, a general store from which different birds, memories, could be correctly or incorrectly retrieved. It wasn’t until the late 19th century, when Hermann Ebbinghaus began tracking his own recall over time in various conditions, that memory became the object of scientific investigation. From there the story is one of greater and greater complication. William James, of course, distinguished between short-term and long-term memory. Skill memory was distinguished from long-term memory, which Endel Tulving famously decomposed into episodic and semantic memory. Skill memory, meanwhile, was recognized as one of several forms of nondeclarative or implicit memory, including classical conditioning, non-associative learning, and priming, which would itself be decomposed into perceptual and conceptual forms. As Plato’s grand aviary found itself progressively more subdivided, researchers began to question whether memory was actually a discrete system or rather part and parcel of some larger cognitive network, and thus not the distinct mental activity assumed by the tradition. Other researchers, meanwhile, took aim at the ‘retrieval assumption,’ the notion that memory is primarily veridical, adducing evidence that declarative memory is often constructive, more an attempt to convincingly answer a memory query than to reconstruct ‘what actually happened.’
The moral of this story is as simple as it should be sobering: the ‘memory’ arising out of casual introspection (monolithic and veridical) and the memory arising out of scientific research (fractionate and confabulatory) are at drastic odds, to the point where some researchers suggest the term ‘memory’ is itself deceptive. Memory, like so many other cognitive capacities, seems to be a complex of specialized capacities arising out of non-epistemic and epistemic evolutionary pressures. But if this is the case, one might reasonably wonder how Plato could have gotten things so wrong. Well, obviously the information available to metacognition (in its ancient Greek incarnation) falls far short of the information required to accurately model memory. But why would this be? Well, apparently forming accurate metacognitive models of memory was not something our ancestors needed to survive and reproduce.
We have enough metacognitive access to isolate memory as a vague capacity belonging to our brains and nothing more. The patterns accessed, in other words, are real patterns, but it seems more than a little hinky to take the next step and say they are “made to order for our narcissistic concerns.” For one, whatever those ‘concerns’ happen to be, they certainly don’t seem to involve any concern with self-knowledge, particularly when the ‘concerns’ at issue are almost certainly not the conscious sort–which is to say, concerns that could be said to be ‘ours’ in any straightforward way. The concerns, in fact, are evolutionary: Metacognition, for reasons Dennett touched on above and that I have considered at length elsewhere, is a computational nightmare, more than enough to necessitate the drastic informatic compromises that underwrite Plato’s Aviary.
And as memory goes, I want to suggest, so goes intentionality. The fact is, intentional patterns are not “made to order for our narcissistic concerns.” This is a claim that, while appearing modest, characterizes intentionality as an instrument of our agency, and so ‘narcissistic’ in a personal sense. Intentional patterns, rather, are ad hoc evolutionary solutions to various social or natural environmental problems, some perhaps obvious, others obscure. And this simply refers to the ‘patterns’ accessed by the brain. There is the further question of metacognitive access, and the degree to which the intentionality we all seem to think we have might not be better explained as a kind of metacognitive illusion pertaining to neglect.
Asymptotic. Bottomless. Rules hanging with their interpretations.
All the low-dimensional projections bridging pool to pool are evolutionary artifacts of various functional requirements, ‘fixes,’ multitudes of them, to some obscure network of ancestral, environmental problems. They are parochial, not to our ‘concerns’ as ‘persons,’ but to the circumstances that saw them selected to the exclusion of other possible fixes. To return to Dennett’s categories, the information ‘beneath notice,’ or neglected, may be out-and-out crucial for understanding a given capacity, such as ‘memory’ or ‘agency’ or what have you, even though metacognitive access to this information was irrelevant to our ancestors’ survival. Likewise, what is ‘trackable’ may be idiosyncratic, information suited to some specific, practical cognitive function, and therefore entirely incompatible with and so refractory to theoretical cognition–philosophy as the skeptics have known it.
Why do we find the notion of a fractionate, non-veridical memory surprising? Because we assume otherwise, namely, that memory is whole and veridical. Why do we assume otherwise? Because informatic neglect leads us to mistake the complex for the simple, the special purpose for the general purpose, and the tertiary for the primary. Our metacognitive intuitions are not reliable; what we think we do or undergo and what the sciences of the brain reveal need only be loosely connected. Why does it seem so natural to assume that intentional patterns are “made to order for our narcissistic concerns”? Well, for the same reason it seems so natural to assume that memory is monolithic and veridical: in the absence of information to the contrary, our metacognitive intuitions carry the day. Intentionality becomes a personal tool, as opposed to a low-dimensional projection accessed via metacognitive deliberation (for metacognition), or a heuristic device possessing a definite evolutionary history and a limited range of applications (for cognition more generally).
So to return to our diagram of ‘information pools’:
we can clearly see how the ‘Curse of Dimensionality’ is compounded when it comes to theoretical metacognition. Thus the ‘blind brain’ moniker. BBT argues that the apparent perplexities of consciousness and intentionality that have bedevilled philosophy for millennia are artifacts of cognitive and metacognitive neglect. It agrees with Dennett that the relationship between all these levels is an adaptive one, that low-dimensional projections must earn their keep, but it blocks the assumption that we are the keepers, seeing this intuition as the result of metacognitive neglect (sufficiency, to be precise). It’s no coincidence, it argues, that all intentional concepts and phenomena seem ‘acausal,’ both in the sense of seeming causeless, and in the sense of resisting causal explanation. Metacognition has no access whatsoever to the neurofunctional context of any information broadcast or integrated in consciousness, and so finds itself ‘encapsulated,’ stranded with a profusion of low-dimensional projections that it cannot cognize as such, since doing so would require metacognitive access to the very neurofunctional contexts that are occluded. Our metacognitive sense of intentionality, in other words, depends upon making a number of clear mistakes–much as in the case of memory.
The relations between ‘pools,’ it should be noted, are not ‘vehicles’ in the sense of carrying ‘information about.’ All the functioning components in the system would have to count as ‘vehicles’ if that were the case, insofar as the whole is required for the information that does find itself broadcast or integrated. The ‘information about’ part is simply an artifact of what BBT calls medial neglect, the aggregate blindness of the system to its ongoing operations. Since metacognition can only neglect the neural functions that make a given conscious experience possible–since it is itself invisible to itself–it confuses an astronomically complex systematic effect for a property belonging to that experience.
The very reason theorists like Dretske or Fodor insist on semantic interpretations of information is the same reason those interpretations will perpetually resist naturalistic explanation: they are attempting to explain a kind of ‘perspectival illusion,’ the way the information broadcast or integrated exhausts the information available for deliberative cognition, so generating the ‘only-game-in-town-effect’ (or sufficiency). ‘Thoughts’ (or the low-dimensional projections we confuse for them) must refer to (rather than reliably covary with) something in the world because metacognition neglects all the neurofunctional and environmental machinery of that covariance, leaving only Brentano’s famous posit, intentionality, as the ‘obvious’ explanandum–one rendered all the more ‘obvious’ by thousands of largely fruitless years of intentional conceptual toil.
Aboutness is magic, in the sense that it requires the neglect of information to be ‘seen.’ It is an illusion of introspection, a kind of neural camera obscura effect, ‘obvious’ only because metacognition is a captive of the information it receives. This is why our information pool diagram can be so easily retooled to depict the prevailing paradigm in the cognitive sciences today:
The vertical arrows represent medial functions (sound, light, neural activity) that are occluded and so are construed acausally. The ‘mind’ (or the network of low-dimensional projections we confuse as such) is thought to be ‘emergent from’ or ‘functionally irreducible to’ the brain, which possesses both conscious and nonconscious ‘representations of’ or ‘intentional relations to’ the world. No one ever pauses to ask what kind of cognitive resources the brain could bring to bear upon itself, what it would take to reliably model the most complicated machinery known from within that machinery using only cognitive systems adapted to modelling external environments. The truth of the brain, they blithely assume, is available to the brain in the form of the mind.
Or thought.
But this is little more than wishful ‘thinking,’ as the opaque, even occult, nature of the intentional concepts used might suggest. Whatever emergence the brain affords, why should metacognition possess the capacity to model it, let alone be it? Whatever function the broadcasting or integration of a given low-dimensional projection provides, why should metacognition, which is out-and-out blind to neurofunctionality, possess the capacity to reliably model it, as opposed to doing what cognition always does when confronted with insufficient information it cannot flag as insufficient, leap to erroneous conclusions?
All of this is to say that the picture is both more clear and yet less sunny than Dennett’s ultimately abortive interrogation of information privation would lead us to believe. Certainly in an everyday sense it’s obvious that we take perspectives, views, angles, standpoints, and stances vis-à-vis various things. Likewise, it seems obvious that we have two broad ways in which to explain things, either by reference to what causes an event, or by virtue of what rationalizes an event. As a result, it seems natural to talk of two basic explanatory perspectives or stances, one pertaining to the causes of things, the other pertaining to the reasons for things.
The question is one of how far we can trust our speculations regarding the latter beyond this platitudinous observation. One might ask, for instance, if intentionality is a heuristic, which is to say, a specialized problem solver, then what are its conditions of applicability? The mere fact that this is an open question means that things like the philosophical question of knowledge, to give just one example, should be divided into intentional and mechanical incarnations–at the very least. Otherwise, given the ‘narcissistic idiosyncrasy’ of the former, we need to consider whether the kinds of conundrums that have plagued epistemology across the ages are precisely what we should expect. Chained to the informatic bottleneck of metacognition, epistemology has been trading in low-dimensional projections all along, attempting time and again to wring universality out of what amount to metacognitive glimpses of parochial cognitive heuristics. There’s a very real chance the whole endeavour has been little more than a fool’s errand.
The real question is one of why, as philosophers, we should bother entertaining the intentional stance. If the aim of philosophy really is, as Sellars has it, “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term,” if explanatory scope is our goal, then understanding intentionality amounts to understanding it in functional terms, which is to say, as something that can only be understood in terms of the information it neglects. What is the adaptive explanatory ecology of any given intentional concept? What was it selected for? And if it is ‘specialized,’ would that not suggest incompatibility with different (i.e., theoretical) cognitive contexts? Given what little information we have, what arbitrates our various metacognitive glimpses, our perpetually underdetermined interpretations, allowing us to discriminate between any stage on the continuum of the reliable and the farcical?
Short of answers to these questions, we cannot even claim to be engaging in educated as opposed to mere guesswork. So to return to “The Normal Well-tempered Mind,” what does Dennett mean when he says that neurons are best seen as agents? Does he mean that cellular machinery is complicated machinery, and so ill-served when conceptualized as a ‘mere switch’? Or does he mean they really are like little people, organized in little tribes, battling over little hopes and little crimes? I take it as obvious that he means the former, and that his insistence on the latter is more the ersatz product of a commitment he made long ago, one he has invested far too much effort in to relinquish.
‘Feral neurons’ are a metaphoric conceit, an interesting way to provoke original thought, perhaps, a convenient façon de parler in certain explanatory contexts, but more an attempt to make good on an old and questionable argument than anything else, one that would have made a younger Dennett, the one who wrote “Mechanism and Responsibility,” smile and scowl as he paused to conjure some canny and critical witticism. Intentionality, as the history of philosophy should make clear, is an invitation to second-order controversy and confusion. Perhaps what we have here is a potential empirical basis for the infamous Wittgensteinian injunction against philosophical language games. Attributing intentionality in first-order contexts is not only fine, it’s unavoidable. But as soon as we make second-order claims on the basis of metacognitive deliberation, say things like, ‘Knowledge is justified, true belief,’ we might as well be playing Monopoly using the pieces of Risk, ‘deriving’ theoretical syntaxes constrained–at that point–by nothing ‘out there.’
On BBT, ‘knowledge’ simply is what it has to be if we agree that the life science paradigm cuts reality as close to the joints as anything we have ever known: a system of mechanical bets, a swarm of secondary asteroids following algorithmic trajectories, ‘miraculously’ averting disaster time and again.
Breathtakingly complex.
Alien.


January 30, 2013
Homelessness and the Transhuman: Some Existential Implications of Cognitive Science (by Benjamin Cain)
If science and commonsense about human nature are in conflict, and cognitive science and R. Scott Bakker’s Blind Brain Theory are swiftly bringing this conflict to a head, what are the social implications? After explaining the conflict and putting it in the broader contexts of homelessness and alienation, I contrast the potential dystopian and utopian outcomes for society, focusing on the transhuman utopia in which, quite ironically, science and technology make the fantasy of the manifest image a reality, by turning people into gods. I use the sociopathic oligarch and the savvy politician as models to try to understand the transhuman’s sophisticated self-conception.
.
Our Self-Destructing Home
Richard Dawkins called the genetically-determined, artificial transformation of the environment–for example, the spider’s web, beaver’s dam, or human-made shelter–an organism’s extended body. So to see why alienation is part of our destiny, compare a person’s situation with that of a web-spinning spider. Remove the spider entirely from its web, deprive it of its ability to weave a new one, and the spider would be discombobulated from its homelessness. The spider that spins webs can’t function without them. This creature’s body evolved to walk on silk threads, to eat the prey that can be caught in that net, and to sense threats through vibrations in the web. To the extent that a spider thinks of the world, its viewpoint is web-centric. The spider surely feels most at home in its web where it’s lord of the land; from its perspective, the world beyond is webless and out of its control. So a spider has external and internal means of reorganizing the world, although its internal means are indirect. Its body crafts a tool, the web, for transmuting part of the world into a form that’s compatible with the spider’s way of life, and its brain states lump the world into categories so that the spider can deal with threats and opportunities.
A typical person likewise has a home in the world, although a person’s home is much more flexible. When someone takes a broom to a spider’s web, the spider must weave a new one and once woven, the spider is committed to that location. The web isn’t portable, although it can withstand minor disturbances. By contrast, a person adapts her outer home to suit the environment, and so in a snowy climate a person builds an igloo, while in a rainy place she adds a roof that causes the rain to roll harmlessly down the roof’s slope. And we add a wide variety of buildings to achieve our many purposes, building not just houses but towns, cities, and civilizations. The relevant difference between a spider and a person is that the spider’s body is highly specialized whereas a person’s physiological capacities are more open-ended. All of the web-spinning spider’s physical traits are put to optimal use in the web which the spider must build for itself, whereas a person’s main outer advantage is her opposable thumb which gives her a capacity for infinite manipulations of the environment. Thus, we’re not so committed to just one kind of artificial home, but can adapt our extended body to suit the natural circumstances. To do this, we must understand those circumstances, and so the main web we weave, as it were, is inside rather than outside us. We weave this with our mind or more specifically with our brain. This web is made not of silk threads but of electrical currents which pass between neurons. The web of our thoughts allows us to make many subtle distinctions and so to exploit much more of the environment. Whereas a spider requires an outer web to feel at home and even to live as a spider, a person requires a mind made up of an inner web of memories, imaginings, feelings, categories, speculations, and inferences.
But there’s a paradox. A person’s mind accesses the world through the five senses and processes the information received. That task is what the mind is mainly for in evolutionary terms. But those senses don’t similarly access the mind itself or the brain. The senses are all pointed outward. They could conceivably be extended by technology and then directed inward to observe the brain as it processes the information generated by its activities. In fact, this is what dreams or psychedelic drugs may do; the hallucinations you perceive when sleeping or stoned may reflect deeper mental processes than those with which ordinary consciousness is familiar. In any case, observation doesn’t suffice for understanding, so the impressions of what the brain does while it’s thinking would have to be interpreted, and we don’t yet have as much experience of the brain’s intricacies as we do, say, of elements in the outer world like earth, water, and fire.
The paradox, then, is that our primary shelter and source of comfort is internal and yet this shelter dissolves itself.
We belong not so much to the brick and concrete homes we build–those are not the worlds we truly live in–but to the cherished beliefs of our religious, political, and other ideologies. The degree to which we live in our heads is the degree to which we live as persons, as mammals that are highly curious and reflective not just about the physical environment but about our capacities for understanding it. Self-awareness is a necessary condition of personhood. But the more we look at ourselves, the more we shrink from our withering glare until the self we imagine we are is lost. We’re most at home in the world when we feel free to fill the unobserved void of our inner self with speculations and fantasies. They form the so-called manifest image, the naive, intuitive picture of the self that we dream up because we’re extremely curious and won’t settle for such a blind spot. We replace ignorance about the brain and the mind with fanciful, flattering notions such as those you find in religious myths and in other social conventions. But the more we think about our inner nature, the more rigorous and scientific our self-reflections become until we discover that the manifest image is largely or perhaps even entirely a fiction; certainly, that image is a work of art rather than a self-empowering scientific theory.
We learn that there is no inner self in the ordinary, comforting sense, but we’re not adapted to identify with our body because our body is pitifully weak. Again, our main physiological advantage is our opposable thumb, and it’s our brainpower that permits us to reinforce our body, to engineer an airplane because we have no wings, a saw because we have no claws, clothes because we have no fur, and so on. In effect, we’re most proud of our brain–except when we learn what the brain actually is and does. As cognitive science and BBT in particular show (and as the philosopher Immanuel Kant maintained), the mind prefers delusion to a humble admission of ignorance. As those who attempt to still their thoughts in meditation will testify, the mind loves to think and won’t shut up unless the thinker exerts herself in ignoring its spontaneous ramblings. We fill our head with chitchat, with rumours and all manner of mental associations, often on the basis of scarce input. We take that input and run with it and we’re drawn especially to those speculations that flatter us. Like a hermit crab, we climb inside the shell of those speculations and we live there, meaning that we identify our self with them. Most of us don’t know exactly what the inner self is, but we surmise that the self is rational, conscious, free, unified, and even immaterial and immortal. Then we take a closer look, with science, and we find that we can look past the illusion. Of course we’re not as we naively picture we are: look at the brain, see what it does, and notice that there’s no ghost inside! If we were hermit crabs, we’d learn that our shell isn’t so sturdy after all, that it dissolves on contact. The difference is that whereas the crab needs that shell to protect it from others, we need the manifest image to protect us from ourselves, or rather from our capacity to discover that we have no self.
Mind you, we erase not just the naive image of the self, but that of the outer world as well. The senses and the brain present a colourful, three-dimensional world that’s relative to each viewer’s perspective, thus effectively flattering the ego. Moreover, we perceive all events as having a past, a future, and a present moment in consciousness. Einsteinian physics teaches, though, that space and time are not as we so intuit them. Again, we think of causes and effects as mechanisms, as though the cosmos were a machine, but that’s a naive, deistic conception. We think of the universe as governed by laws even though the scientist no longer assumes there’s an intelligent designer to issue them or to ensure that the universe follows them. We perceive the environment as made up of whole, solid things even though matter at the quantum level isn’t solid or neatly divided. Modern science thus undermines all intuitive conceptions, both those of the self and of everything else. This is just to say that the brain’s spontaneous chatter about this or that which happens to mesmerize us isn’t likely to be the brain’s last word on the subject.
.
The Horror of Alienation
The paradox of reason, which makes reason an evolutionary curse rather than just a gift, is that we live mainly in the ideational home we make in our heads, but those ideas eventually lead us to recognize that our heads are empty of anything with which we’d prefer to identify ourselves. Reason thus evicts us from our homes, kicking us to the curb, whereupon we may wander the cultural byways as outsiders, unable to lose the selves we cease to believe in in the cultural products that cater to the mass delusions. At least, that’s one path for the evicted to travel. Another is for them to sneak back into their homes, to forget that they don’t belong there and to pretend that they’re full-fledged home owners even though they know they’re dressed in rags and smell like urine. That’s an illustration of the difference between existential authenticity and inauthenticity.
To understand what I mean by that distinction, we need to consider the idea of alienation. The way I like to approach this is through the melancholic philosophy that Lovecraft dramatized in his cosmicist short stories. And it seems to me that this philosophy is analogous to the philosophical upshot of BBT. So what BBT contends is that scientific truth is opposed to personal truth, that what a self actually is is very different from what is naively presumed. This opposition raises the likelihood of cultural apocalypse and of the intriguing possibility of transhumanity to which I’ll turn in the next section. But what Lovecraft realized is that there’s a more general opposition, between the potential science of a superhuman species and even our supreme rational output. Just as the manifest image is inadequate to our scientific image, so too our scientific image may be inadequate to the superhuman conception of the world. To get an idea of the relevant sort of superhuman, picture Superman, the fictional hero whose superpowers are confined to his physiological and perhaps moral capacities, and now add superhuman intelligence plus the important levels of reality that may be exposed only to someone of that mental caliber. Of course, Lovecraft stressed that this more general scenario of what philosophers call mysterianism, which is a plausible result of atheistic naturalism, makes for psychological horror. Whereas BBT and cognitive science kick us to the curb, Lovecraft removes the curb, the street, and the whole planet and leaves us floating in a void that only a hideously indifferent alien could comprehend and use to its inhuman advantage.
What, then, is alienation? It’s just the futile feeling of homesickness, of not belonging somewhere you’d like to be or indeed of not belonging anywhere at all. Science alienates us from our preference to see ourselves in terms of the manifest image. We’d prefer to identify with that naive conception of the ego or of the immortal spirit, but informed people with intellectual integrity, or perhaps with the foolishness to take human knowledge so seriously as to upset their chance for a happy life, are estranged from that conception. Married people who get divorced may feel terribly awkward when they’re then forced to be together, say, in some legal hearing. Likewise, science and especially cognitive science seem to push us towards a reckoning with the naive self-image, so that even if we’re forced to project that image onto the brain, we’re sickened by or bored with that particular painting. In this context, alienation is the fear that that reckoning leaves us nowhere, or at least unsure of where to go next. And an existentially authentic, self-evicted mammal stays true to that homelessness, whereas an inauthentic one settles for a delusion rather than the reality.
.
Home for the Transhuman
I want to consider some possible refuges for those who are existentially homeless. The most likely scenario, I fear, is the dark one that RSB speaks of and that is in fact a staple of dark science fiction. In this scenario, most people are reduced to the inauthentic state. What may happen, then, is that the majority either aren’t permitted to understand the natural facts of human identity or prefer not to understand them, in which case they become subhuman: slaves to the technocrats who perfect technoscientific means of engineering cultural and mental spaces to suit the twisted purposes of the sociopathic oligarchs who tend to rule; automatons trained to consume material goods like cattle, whose manifest image functions as a blinder to keep them on the straight and narrow path; or hypocrites who have the opportunity and intelligence to recognize the sad truth but prefer what the philosopher Robert Nozick calls the Experience Machine (the capitalistic monoculture) and so suffer from severe cognitive dissonance and a kind of Stockholm Syndrome. These aren’t dubious predictions; they’re descriptions of what most people, to some extent, are already like in modern societies. The prediction is only that these dynamics will be intensified and perhaps perfected, so that we’d have on our hands the technoscientific dystopia described by Orwell, Huxley, and others. I should add that on a Lovecraftian view, it’s possible that human scientific control of our nature will never be absolute, because part of our nature may fall within the ambit of reality that transcends our comprehension.
Is there a more favourable outcome? Many transhumanists speak optimistically about a merger of our biological body and our extended, technological one. If we aren’t immaterial spirits who pass on to a supernatural realm after our physical death, we can still approximate that dualistic dream with technoscience. We can build heaven on earth and deify ourselves with superhuman knowledge and power; cast off our genetic leash/noose through genetic engineering; overcome all natural obstacles through the internet’s dissemination of knowledge and nanoengineering; and even live forever by downloading our mental patterns into machines. In short, even though the manifest image of a conscious, rational, free, and immortal self is currently only an illusion that conceals the biological reality, the hope is that technoscience can actually make us more rational, conscious, free, and immortal than we’ve ever imagined. Of course, there are many empirical questions as to the feasibility of various technologies, and there’s also the dystopian, or perhaps just realistic, scenario in which such godlike power benefits the minority at the majority’s expense. But there’s also the preliminary question of the existential significance of optimistic transhumanism, granting at least the possibility of that future. How should we understand the evolutionary stage in which we set aside our dualistic myths and merge fully with our technology to become more efficient natural machines? Indeed, how would such transhumans think of themselves, given that they’d no longer entertain the manifest image?
I think we should conceive of this in terms of a natural process. Atoms bond to become molecules, molecules join to form macroscopic things like rocks, animals, and planets, and some animals incorporate their handiwork to become creatures that can interact more fully with the rest of nature. There’s the mereological process of complexification and the temporal process of evolution, and these may come together to produce transhumans. Lacking the manifest image and the vanity but also the moral limitations which that image subserves, a transhuman would have to conceive of itself as strictly part of some such natural process. The universe changes itself, and the transhuman can bring about many more of those changes than can a deluded, self-limited mammal. Currently, we transform much of our planet, whereas a transhuman who accepts only the scientific image of human nature may acquire the power to transform star systems, galaxies, or untold dimensions. A transhuman wouldn’t think in normative or teleological terms; such a natural god would have no goals or individualistic hallucinations, and would take to heart the Joker’s lines in the movie, The Dark Knight, “Do I really look like a guy with a plan?…You know, I just…do things.”
We have a model of such a transhuman god and that’s the oligarch. An oligarch is a very powerful person who’s reached the top of a national pecking order and is either sufficiently sociopathic to have reached that position with finesse or is naturally corrupted by the power he thereby acquires, in which case he conditions himself to be sociopathic. What I mean by “sociopathy” in this context is that power corrupts in the specific sense that the very powerful person tends to lose not just a sense of morality but the capacity for empathy. A transhuman would share that incapacity, since morality is part of the illusion of the manifest image. However, a transhuman and a corrupted ruler would differ significantly in that the latter would still act egoistically; indeed, such a person is a megalomaniac who believes he’s entitled to so much wealth and power because of his personal magnificence. By contrast, the transhuman would have no illusion of personhood: a transhuman would be only an instrument that ushers in galactic transformations; these wouldn’t be intended or preferred, but would be understood as just meaningless, natural evolutions of the cosmic landscape.
Another model that can help us get a sense of what transhuman life would be like is the democratic politician. I may be slightly more cynical than the average person living in a democracy, but I just take it for granted that a politician never speaks the truth in public. More precisely, the politician never tells the people at large exactly what she’s thinking. This is because when a politician speaks publicly, she’s on the job and so must carry out the functions of her office. As is said in the business, the politician–and the lobbyist, political handler, public relations expert, spin doctor, partisan, and so forth–speak publicly only in “talking points,” never leveling with the public or having anything as pedestrian as a conversation or a dialogue with a presumed equal. This is to say, then, that the politician eliminates semantics in her side of the public discourse: the meaning of her statements is irrelevant to their function, and the politician is interested only in that function, which is to say in the statements’ shaping of public opinion to the politician’s advantage. In other words, a politician’s public statements are guided only by what we might call their political syntax, which is the set of social scientific laws that make plausible various Machiavellian strategies for manipulating people, for exploiting their weaknesses and biases as a means to some end. The ends of the politician’s purely instrumental use of language are usually the limited ones of maintaining the politician’s privileged position and of stroking her ego, but may rarely include the purpose of benefitting the country at large according to the politician’s principles.
Again, there are interesting differences between this politician and the transhuman. A politician has goals whereas the transhuman has none. We might prefer to say that the transhuman has “implicit purposes,” but this would be sheer personification, since anything in the universe can be interpreted as acting towards some end point that isn’t mentally represented by that which is so acting. This would just amount to reading intelligent design into everything and positing some transcendent designer that does so represent the goals which that designer’s creations would be built to achieve. No, a transhuman who has fully embraced the scientific image and so abandoned the crude conception of personhood wouldn’t conceive of herself as mentally representing anything, which is to say that she would understand her mental states to be meaningless pseudo-instruments, as elements of a natural process. She would have neither beliefs nor desires in the ordinary sense and so she wouldn’t seek her enrichment or even the continuation of her life (although her vast technoscientific knowledge and power would render her invulnerable, in any case). The transhuman would be a new force of nature, as blind, deaf, and dumb as the wind or as sunshine. By contrast, a politician’s instrumentalism is petty, the scheme of a child playing at being a god. A politician may flatter herself that in her political role she acts as a savvy machine that sees past the delusions of the herd and can manipulate the masses at will by pushing their proverbial buttons, uttering a code word or two to initiate the news cycle, and so forth. But as long as the politician labours under the quaint delusion that she personally plans or desires anything, she’s better thought of as a wannabe god, as a child who hasn’t yet grown into her shoes. At best, the cynical politician would be the harbinger of the god to come, the Silver Surfer to the future Galactus.
Where, then, would the transhuman call home? The universe would be the transhuman’s playground, just as a force of nature works wherever it’s naturally able. A transhuman identifies not with a figment of its imagination, with a particular mind or consciousness, but with all of nature, since the transhuman’s knowledge and power would encompass that whole domain, or at least enough of the universe that the transhuman would effectively be divine. The transhuman’s reach would extend very far in space and time, and her body would be the extended one of technology that only morally-neutral science could unleash. And the transhuman would understand natural processes at a highly technical level; she’d be immortal, fearless, and enmeshed in the universe’s course of self-creation, as opposed to being limited, alienated, and homeless. Perhaps technoscience is the means of building gods, of ironically turning the manifest image, which is currently a fantasy, into a reality, and we are mere strands in the cocoon that will birth that new form of life. This transhumanism seems to me the most uplifting way of imagining the outcome of the clash between science and commonsense, but of course this doesn’t mean the scenario is plausible or likely. At any rate, if BBT is correct, we are primarily not individual persons with private agendas, but are stages of some natural process that we can’t yet see clearly, because our vision is obscured by smoke and mirrors.


January 28, 2013
Neither Separate, Nor Equal
Aphorism of the Day: Some argue against yesterday. Some argue against tomorrow. But everyone kisses ass when it comes to today.
.
‘Continuity bias’ is a term I coined years back to explain how so many people could remain so unaware of the kinds of fundamental upheaval that are about to engulf human civilization. I sit with my three-year-old daughter watching little robots riding bicycles, walking tightropes, doing dance routines and so on, thinking how when I was her age the world was electrified by the first handheld calculators. So I ask myself, with more than a little apprehension, I assure you, What can my daughter expect?
The only remotely plausible answer to this question is almost entirely empty, and yet all the more consequential for it: What can my daughter expect? Something radically different than this…
Something fundamentally discontinuous.
To crib concepts used by Reinhart Koselleck to characterize Neuzeit, or modernity, we are living in an age where our ‘horizon of expectation’ has all but collapsed into our ‘space of experience.’ My daughter will live through an age when the traditional verities of human experience will likely be entirely discredited by neuroscientific fact, and when the complexities and capacities of our machines will almost certainly outrun our own. And this, as much as anything else, is why I find any kind of principled defence of traditionalism at once poignant and alarming: poignant because I too belonged to that tradition and I too mourn its imminent passing, and alarming because it does not bode well when the change at issue is so fundamental that the very institutions charged with critiquing the tradition are now scrambling to rationalize its defence.
So it was I found myself shaking my head while reading Jason Bartulis’s recent defence of nooconservatism on Nonsite.org. I decided to write on it because of the way it exemplifies what I’ve been calling the ‘separate-but-equal strategy’ and how that strategy tends to devolve into question-begging and special pleading. But since head-shaking whilst reading is never a good sign, I encourage people to challenge my interpretation, particularly if you find Bartulis’s position appealing. Maybe I am overlooking something. Against all reason, thousands of people are now reading these posts, more than enough for me to become sensitive to the consequences of any oversights on my part.
Bartulis summarizes his position thus:
I’ve been arguing …. that engineering questions can only be answered in engineering terms. Conversely, I’ve tracked the infelicities attending the importation of the explanatory vocabulary of the natural sciences into human sciences to demonstrate why engineering explanations can’t work as explanations to normative questions. Thinking they can is one way of committing, not the Intentional, but the Naturalistic Fallacy in (literary) epistemology and in the philosophy of mind that subtends most attempts to make cognition a category for literary and cultural analysis.
Now since I once defended a position similar to this, I understand the straightforward (if opportunistic) nature of its appeal: ‘Your cognition has its yardsticks, my cognition has mine, therefore keep your yardstick away from my cognition.’ But it really is a peculiar argument, if you think about it. For instance, it’s a given that functional explanations and intentional explanations are conceptually incommensurable. This has been part of the problem all along. And yet Bartulis (like Zizek, only less dramatically) has convinced himself that this problem is itself the solution.
Bartulis is arguing that because the functional and the intentional are incommensurable, the traditional intentional discursive domain is secure. Why? Because once you acknowledge the cognitive autonomy of intentional discourse, you can label any functional explanatory incursion into that discourse’s domain as ‘fallacious,’ a version of G. E. Moore’s ‘Naturalistic Fallacy,’ to be precise. A kind of ‘category mistake.’ And why should we acknowledge the cognitive autonomy of intentional discourse? Well, because only it can cognize its domain. As he puts it:
My point, of course, is an anti-reductionist one. No amount of mapping of which synaptic vectors alight when can explain why I think that I should interpret a passage (or character, or author) one way rather than another. Nor can visual mapping, in and of itself, explain what I mean to do by interpreting a passage one way rather than another. And that’s because neither normative significance nor meaning is something that synapses, simply, have, and so normative significance and meaning aren’t things that we can, simply, see. Stating the position a bit more carefully: at least in the case of human perception—say, listening to a work of art or, more ordinarily, conversing with a familiar foe—there certainly are cases when normative significance and meaning can be seen and heard straightaway. Moreover, there are interpretive contexts when would-be explainers immediately perceive, and so can intelligibly claim to know, that a given subject is herself immediately perceiving the meaning of some object. But our best account of those instances proceeds…by placing those instances in the space of reasons.
Here we can clearly see how the separate-but-equal strategy requires that the nooconservative make a virtue out of ignorance and the failure of imagination. I could pick this passage apart phrase by phrase, fault Bartulis for cherry-picking neurofunctional elements that rhetorically jar with traditional conceits (as opposed to ‘tracking infelicities’), or I could take him at his word and devise the very interpretations that he finds unimaginable, arguing–along lines at least as plausible as his own–that ‘normative significance’ is something that only neurofunctional accounts will allow us to cognize. Why, for instance, should the subpersonal prove any less appropriate than the psychoanalytic?
But all I really need to do is invoke what I’ve called the Big Fat Pessimistic Induction: Given that, throughout its historical metastasis, science (and functional explanation) has utterly revolutionized every discursive domain it has colonized, why should we presume the soul will prove to be any different? What plucks us from the sum of natural explanation, and so guarantees the cognitive autonomy of your tradition?
The fact that Bartulis needs to recognize is that these are questions that only science can decisively answer. The only way we have of knowing whether the brain sciences will revolutionize the humanities is to wait and see whether the brain sciences will revolutionize the humanities. He and innumerable other traditionalists will float claim after territorial claim only to watch them vanish over the cataract of academic fashion, while the sciences of the brain will continue their grim and inexorable march, leveraging techniques and technologies that will command public and commercial investment, not to mention utterly remake the ‘human.’ Once again, it’s a given that functional explanations and intentional explanations are conceptually incommensurable. This is a big part of the problem. The other part lies in the power of functional explanations, the fact that they, unlike the ‘dramatic idiom’ of intentionality, actually allow us to radically remake the natural world–of which we happen to be a part. The sad fact is that Bartulis and his ilk are institutionally overmatched, that the contest was never equal, but only appeared so, simply because the complexities of the brain afforded their particular prescientific discourse a prolonged reprieve from the consequences of scientific inquiry.
“How uncanny,” Bartulis writes of those bemoaning scientific literacy in the humanities, “to find the language of change, force, and progress surfacing in an intellectual domain whose defining critical gesture, for better or worse, have involved critiques of those very terms as they operate in liberal discourse and other Enlightenment ideologies.” But this is simply a canard. He thinks he’s rapping critical knuckles–‘You should know better!’–when in point of fact he’s underscoring his own ignorance. Personally, I think science will cut our collective throats (using, of course, enough anaesthesia to confound the event with bliss and transcendence). Science builds, complicates, empowers, no matter what one thinks of Old Enlightenment ideologies. And the fact that it does so blindly does more to impugn his nooconservative stance than support it.
So, to return to the quote above: Yes, it is the case that we often, in those instances, enjoy the ‘feeling of knowing.’ But we now know the feeling itself is an indicator of nothing (fools, after all, have their convictions). We also now know that deliberative metacognition is severely limited: veridical auto-theorization is clearly not something our brains evolved to do. And we have no clue whatsoever whether ‘our best account of those instances proceeds by placing those instances in the space of reasons.’
‘But you’re arguing in the space of reasons now!’ Bartulis would almost certainly cry, assuming that I necessarily mean what he means when I use concepts like ‘use’ (even though I do not).
To which, I need only shrug and say, ‘It’s a long shot, but you could be right.’
I wanna believe, but traditions and their centrisms generally don’t fare that well once science jams its psychopathic foot in the door.


January 23, 2013
Zizek, Hollywood, and the Disenchantment of Continental Philosophy
Aphorism of the Day: At least a flamingo has a leg to stand on.
.
Back in the 1990s, whenever I mentioned Dennett and the significance of neuroscience to my Continental buddies, I would usually get some version of ‘Why do you bother reading that shite?’ I would be told something about the ontological priority of the lifeworld or the practical priority of the normative: more than once I was referred to Hegel’s critique of phrenology in the Phenomenology.
The upshot was that the intentional has to be irreducible. Of course this ‘has to be’ ostensibly turned on some longwinded argument (picked out of the great mountain of longwinded arguments), but I couldn’t shake the suspicion that the intentional had to be irreducible because the intentional had to come first, and the intentional had to come first because ‘intentional cognition’ was the philosopher’s stock-in-trade–and oh my, how we adore coming first.
Back then I chalked up this resistance to a strategic failure of imagination. A stupendous amount of work goes into building an academic philosophy career; given our predisposition to rationalize even our most petty acts, the chances of seeing our way past our life’s work are pretty damn slim! One of the things that makes science so powerful is the way it takes that particular task out of the institutional participant’s hands–enough to revolutionize the world at least. Not so in philosophy, as any gas station attendant can tell you.
I certainly understood the sheer intuitive force of what I was arguing against. I quite regularly find the things I argue here almost impossible to believe. I don’t so much believe as fear that the Blind Brain Theory is true. What I do believe is that some kind of radical overturning of noocentrism is not only possible, but probable, and that the 99% of philosophers who have closed ranks against this possibility will likely find themselves in the ignominious position of those philosophers who once defended geocentrism and biocentrism.
What I’ve recently come to appreciate, however, is that I am literally, as opposed to figuratively, arguing against a form of anosognosia, that I’m pushing brains places they cannot go–short of imagination. Visual illusions are one thing. Spike a signal this way or that, trip up the predictive processing, and you have a little visual aporia, an isolated area of optic nonsense in an otherwise visually ‘rational’ world. The kinds of neglect-driven illusions I’m referring to, however, outrun us, as they have to, insofar as we are them in some strange sense.
So here we are in 2013, and there’s more than enough neuroscientific writing on the wall to have captured even the most insensate Continental philosopher’s attention. People are picking through the great mountain of longwinded arguments once again, tinkering, retooling, now that the extent of the threat has become clear. Things are getting serious; the akratic social consequences I depicted in Neuropath are everywhere becoming more evident. The interval between knowledge and experience is beginning to gape. Ignoring the problem now smacks more of negligence than insouciant conviction. The soul, many are now convinced, must be philosophically defended. Thought, whatever it is, must be mobilized against its dissolution.
The question is how.
My own position might be summarized as a kind of ‘Good-Luck-Chuck’ argument. Either you posit an occult brand of reality special to you and go join the Christians in their churches, or you own up to the inevitable. The fate of the transcendental lies in empirical hands now. There is no way, short of begging the question against science, of securing the transcendental against the empirical. Imagine you come up with, say, Argument A, which concludes on non-empirical Ground X that intentionality cannot be a ‘cognitive illusion.’ The problem, obviously, is that Argument A can only take it on faith that no future neuroscience will revise or eliminate its interpretation of Ground X. And that faith, like most faith, only comes easy in the absence of alternatives–of imagination.
The notion of using transcendental speculation to foreclose on possible empirical findings is hopeless. Speculation is too unreliable and nature is too fraught with surprises. One of the things that makes the Blind Brain Theory so important, I think, is the way its mere existence reveals this new thetic landscape. By deriving the signature characteristics of the first-personal out of the mechanical, it provides a kind of ‘proof of concept,’ a demonstration that post-intentional theory is not only possible, but potentially powerful. As a viable alternative to intentional thought (of which transcendental philosophy is a subset), it has the effect of dispelling the ‘only game in town illusion,’ the sense of necessity that accompanies every failure of philosophical imagination. It forces ‘has to be’ down to the level of ‘might be’…
You could say the mere possibility that the Blind Brain Theory might be empirically verified drags the whole of Continental philosophy into the purview of science. The most the Continental philosopher can do is match their intentional hopes against my mechanistic fears. Put simply, the grand old philosophical question of what we are no longer belongs to them: It has fallen to science.
.
For better and for worse, Metzinger’s Being No One has become the textual locus of the ‘neuroscientific threat’ in Continental circles. His thesis alone would have brought him to attention, I’m sure. That aside, the care, scholarship, and insight he brings to the topic provide the Continental reader with a quite extraordinary (and perhaps too flattering) introduction to cognitive science and Anglo-American philosophy of mind as it stood a decade or so ago.
The problem with Being No One, however, is precisely what renders it so attractive to Continentalists, particularly those invested in the so-called ‘materialist turn’: rather than consider the problem of meaning tout court, it considers the far more topical problem of the self or subject. In this sense, it is thematically continuous with the concerns of much Continental philosophy, particularly in its post-structuralist and psychoanalytic incarnations. It allows the Continentalist, in other words, to handle the ‘neuroscientific threat’ in a diminished and domesticated form, which is to say, as the hoary old problem of the subject. Several people have told me now that the questions raised by the sciences of the brain are ‘nothing new,’ that they simply bear out what this or that philosophical/psychoanalytic figure has said long ago–that the radicality of neuroscience is not all that ‘radical’ at all. Typically, I take the opportunity to ask questions they cannot answer.
Zizek’s reading of Metzinger in The Parallax View, for instance, clearly demonstrates the way some Continentalists regard the sciences of the brain as an empirical mirror wherein they can admire their transcendental hair. For someone like Zizek, who has made a career out of avoiding combs and brushes, Being No One proves to be one of the few texts able to focus and hold his rampant attention, the one point where his concern seems to outrun his often brutish zest for ironic and paradoxical formulations. In his reading, Zizek immediately homes in on those aspects of Metzinger’s theory that most closely parallel my view (the very passages that inspired me to contact Thomas years ago, in fact) where Metzinger discusses the relationship between the transparency of the Phenomenal Self-Model (PSM) and the occlusion of the neurofunctionality that renders it. The self, on Metzinger’s account, is a model that cannot conceive itself as a model; it suffers from what he calls ‘autoepistemic closure,’ a constitutive lack of information access (BNO, 338). And its apparent transparency accordingly becomes “a special form of darkness” (BNO, 169).
This is where Metzinger’s account almost completely dovetails with Zizek’s own notion of the subject, and so holds the most glister for him. But he defers pressing this argument and turns to the conclusion of Being No One, where Metzinger, in an attempt to redeem the Enlightenment ethos, characterizes the loss of self as a gain in autonomy, insofar as scientific knowledge allows us to “grow up,” and escape the ‘tutelary nature’ of our own brain. Zizek only returns to the lessons he finds in Metzinger after a reading of Damasio’s rather hamfisted treatment of consciousness in Descartes’ Error, as well as a desultory and idiosyncratic (which, as my daughter would put it, is a fancy way of saying ‘mistaken’) reading of Dennett’s critique of the Cartesian Theater. Part of the problem he faces is that Metzinger’s PSM, as structurally amenable as it is to his thesis, remains too topical for his argument. The self simply does not exhaust consciousness (even though Metzinger himself often conflates the two in Being No One). Saying there is no such thing as selves is not the same as saying there is no such thing as consciousness. And as his preoccupation with the explanatory gap and cognitive closure makes clear, nothing less than the ontological redefinition of consciousness itself is Zizek’s primary target. Damasio and Dennett provide the material (as well as the textual distance) he requires to expand the structure he isolates in Metzinger. As he writes:
Are we free only insofar as we misrecognize the causes which determine us? The mistake of the identification of (self-)consciousness with misrecognition, with an epistemological obstacle, is that it stealthily (re)introduces the standard, premodern, “cosmological” notion of reality as a positive order of being: in such a fully constituted positive “chain of being” there is, of course, no place for the subject, so the dimension of subjectivity can be conceived of only as something which is strictly co-dependent with the epistemological misrecognition of the positive order of being. Consequently, the only way effectively to account for the status of (self-)consciousness is to assert the ontological incompleteness of “reality” itself: there is “reality” only insofar as there is an ontological gap, a crack, in its very heart, that is to say, a traumatic excess, a foreign body which cannot be integrated into it. This brings us back to the notion of the “Night of the World”: in this momentary suspension of the positive order of reality, we confront the ontological gap on account of which “reality” is never a complete, self-enclosed, positive order of being. It is only this experience of psychotic withdrawal from reality, of absolute self-contraction, which accounts for the mysterious “fact” of transcendental freedom: for a (self-)consciousness which is in effect “spontaneous,” whose spontaneity is not an effect of misrecognition of some “objective” process. 241-242
For those with a background in Continental philosophy, this ‘aporetic’ discursive mode is more than familiar. What I find so interesting about this particular passage is the way it actually attempts to distill the magic of autonomy, to identify where and how the impossibility of freedom becomes its necessity. To identify consciousness as an illusion, he claims, is to presuppose that the real is positive, hierarchical, and whole. Since the mental does not ‘fit’ with this whole, and the whole, by definition, is all there is, it must then be some kind of misrecognition of that whole–‘mind’ becomes the brain’s misrecognition of itself as a brain. Brain blindness. The alternative, Zizek argues, is to assume that the whole has a hole, that reality is radically incomplete, and so transform what was epistemological misrecognition into ontological incompleteness. Consciousness can then be seen as a kind of void (as opposed to blindness), thus allowing for the reflexive spontaneity so crucial to the normative.
In keeping with his loose usage of concepts from the philosophy of mind, Zizek wants to relocate the explanatory gap between mind and brain into the former, to argue that the epistemological problem of understanding consciousness is in fact ontologically constitutive of consciousness. What is consciousness? The subjective hole in the material whole.
[T]here is, of course, no substantial signified content which guarantees the unity of the I; at this level, the subject is multiple, dispersed, and so forth—its unity is guaranteed only by the self-referential symbolic act, that is, "I" is a purely performative entity, it is the one who says "I." This is the mystery of the subject's "self-positing," explored by Fichte: of course, when I say "I," I do not create any new content, I merely designate myself, the person who is uttering the phrase. This self-designation nonetheless gives rise to ("posits") an X which is not the "real" flesh-and-blood person uttering it, but, precisely and merely, the pure Void of self-referential designation (the Lacanian "subject of the enunciation"): "I" am not directly my body, or even the content of my mind; "I" am, rather, that X which has all these features as its properties. 244-245
Now I’m no Zizek scholar, and I welcome corrections on this interpretation from those better read than I. At the same time I shudder to think what a stolid, hotdog-eating philosopher-of-mind would make of this ontologization of the explanatory gap. Personally, I lack Zizek’s faith in theory: the fact of human theoretical incompetence inclines me to bet on the epistemological over the ontological most every time. Zizek can’t have it both ways. He can’t say consciousness is ‘the inexplicable’ without explaining it as such.
Either way, this clearly amounts to yet another attempt to espouse a kind of naturalism without transcendental tears. Like Brassier in "The View from Nowhere," Zizek is offering an account of subjectivity without self. Unlike Brassier, however, he seems to be oblivious to what I have previously called the Intentional Dissociation Problem: he never considers how the very issues that lead Metzinger to label the self hallucinatory also pertain to intentionality more generally. Certainly, the whole of The Parallax View is putatively given over to the problem of meaning as the problem of the relationship between thought/meaning and being/truth, or the problem of the 'gap' as Zizek puts it. And yet, throughout the text, the efficacy (and therefore the reality) of meaning–or thought–is never once doubted, nor is the possibility of the post-intentional considered. Much of his discussion of Dennett, for instance, turns on Dennett's intentional apologetics, his attempt to avoid, among other things, the propositional-attitudinal eliminativism of Paul Churchland (to whom Zizek mistakenly attributes Dennett's qualia eliminativism (PV, 177)). But where Dennett clearly sees the peril, the threat of nihilism, Zizek only sees an intellectual challenge. For Zizek, the question, Is meaning real? is ultimately a rhetorical one, and the dire challenge emerging out of the sciences of the brain amounts to little more than a theoretical occasion.
So in the passage quoted above, the person (subject) is plucked from the subpersonal legion via “the self-referential symbolic act.” The problems and questions that threaten to explode this formulation are numerous, to say the least. The attraction, however, is obvious: It apparently allows Zizek, much like Kant, to isolate a moment within mechanism that nevertheless stands outside of mechanism short of entailing some secondary order of being–an untenable dualism. In this way it provides ‘freedom’ without any incipient supernaturalism, and thus grounds the possibility of meaning.
But like other forms of deflationary transcendentalism, this picture simply begs the question. The cognitive scientist need only ask, What is this 'self-referential symbolic act'? and the circular penury of Zizek's position is revealed: How can an act of meaning ground the possibility of meaningful acts? The vicious circularity is so obvious that one might wonder how a thinker as subtle as Zizek could run afoul of it. But then, you must first realize (as, say, Dennett realizes) the way intentionality as a whole, and not simply the 'person,' is threatened by the mechanistic paradigm of the life sciences. So for instance, Zizek repeatedly invokes the old Derridean trope of bricolage. But there's 'bricolage' and then there's bricolage: there are fragments that form happy fragmentary wholes that readily lend themselves to the formation of new functional assemblages, 'deconstructive ethics,' say, and then there are fragments that are irredeemably fragmentary, whose dimensions of fragmentation are such that they can only be misconceived as wholes. Zizek seizes on Metzinger's account of the self in Being No One precisely because it lends itself to the former, 'happy' bricolage, one where we need only fear for the self and not the intentionality that constitutes it.
The Blind Brain Theory, however, paints a far different portrait of ‘selfhood’ than Metzinger’s PSM, one that not only makes hash of Zizek’s thesis, but actually explains the cognitive errors that motivate it. On Metzinger’s account, ‘auto-epistemic closure’ (or the ‘darkness of transparency’) is the primary structural principle that undermines the ‘reality’ of the PSM and the PSM only. The Blind Brain Theory, on the other hand, casts the net wider. Constraints on the information broadcast or integrated are crucial, to be sure, but BBT also considers the way these constraints impact the fractionate cognitive systems that ‘solve’ them. On my view, there is no ‘phenomenal self-model,’ only congeries of heuristic cognitive systems primarily adapted to environmental cognition (including social environmental cognition) cobbling together what they can given what little information they receive. For Metzinger, who remains bound to the ‘Accomplishment Assumption’ that characterizes the sciences of the brain more generally, the cognitive error is one of mistaking a low-dimensional simulation for a reality. The phenomenal self-model, for him, really is something like ‘a flight-simulator that contains its own exits.’
On BBT, however, there is no one error, nor even one coherent system of errors; instead there are any number of information shortfalls and cognitive misapplications leading to this or that reflective, acculturated form of 'selfness,' be it ancient Greek, Cartesian, post-structural, or what have you. Selfness, in other words, is the product of compound misapprehensions, both at the assumptive and the theoretical levels (or better put, across the spectrum of deliberative metacognition, from the cursory/pragmatic to the systematic/theoretical).
BBT uses these misconstruals, myopias, and blindnesses to explain the ways intentionality and phenomenality confound the ‘third-person’ mechanistic paradigm of the life sciences. It can explain, in other words, many of the ‘structural’ peculiarities that make the first-person so refractory to naturalization. It does this by interpreting those peculiarities as artifacts of ‘lost dimensions’ of information, particularly with reference to medial neglect. So for instance, our intuition of aboutness derives from the brain’s inability to model its modelling, neglecting, as it must, the neurofunctionality responsible for modelling its distal environments. Thus the peculiar ‘bottomlessness’ of conscious cognition and experience, the way each subsequent moment somehow becomes ground of the moment previous (and all the foundational paradoxes that have arisen from this structure). Thus the metacognitive transformation of asymptotic covariance into ‘aboutness,’ a relation absent the relation.
And so it continues: Our intuition of conscious unity arises from the way cognition confuses aggregates for individuals in the absence of differentiating information. Our intuition of personal identity (and nowness more generally) arises from metacognitive neglect of second-order temporalization, our brain's blindness to the self-differentiating time of timing. For whatever reason, consciousness is integrative: oscillating sounds and lights 'fuse' or appear continuous beyond certain frequency thresholds because information that doesn't reach consciousness makes no conscious difference. Thus the eerie first-person that neglect hacks from a much higher dimensional third can be said to be inevitable. One need only apply the logic of flicker-fusion to consciousness as a whole, asking why, for instance, facets of conscious experience such as unity or presence should require specialized 'unification devices' or 'now mechanisms' when they can be explained as perceptual/cognitive errors in conditions of informatic privation. Certainly it isn't merely a coincidence that all the concepts and phenomena incompatible with mechanism involve drastic reductions in dimensionality.
In explaining away intentionality, personal identity, and presence, BBT inadvertently explains why we intuit the subject we think we do. It sets the basic neurofunctional ‘boundary conditions’ within which Sellars’ manifest image is culturally elaborated–the boundary conditions of intentional philosophy, in effect. In doing so, it provides a means of doing what the Continental tradition, even in its most recent, quasi-materialist incarnations, has regarded as impossible: naturalizing the transcendental, whether in its florid, traditional forms or in its contemporary deflationary guises–including Zizek’s supposedly ineliminable remainder, his subject as ‘gap.’
And this is just to say that BBT, in explaining away the first-person, also explains away Continental philosophy.
Few would deny that many of the 'conditions of possibility' that comprise the 'thick transcendental' account of Kant, for instance, amount to speculative interpretations of occluded brain functions insofar as they amount to interpretations of anything at all. After all, this is a primary motive for the retreat into 'materialism' (a position, as we shall see, that BBT endorses no more than 'idealism'). But what remains difficult, even apparently impossible, to square with the natural is the question of the transcendental simpliciter. Sure, one might argue, Kant may have been wrong about the transcendental, but surely his great insight was to glimpse the transcendental as such. But this is precisely what BBT and medial neglect allow us to explain: the way the informatic and heuristic constraints on metacognition produce the asymptotic–acausal or 'bottomless'–structure of conscious experience. The 'transcendental' on this view is a kind of 'perspectival illusion,' a hallucinatory artifact of the way information pertaining to the limits of any momentary conscious experience can only be integrated in subsequent moments of conscious experience.
Kant’s genius, his discovery, or at least what enabled his account to appeal to the metacognitive intuitions of so many across the ages, lay in making-explicit the occluded medial axis of consciousness, the fact that some kind of orthogonal functionality (neural, we now know) haunts empirical experience. Of course Hume had already guessed as much, but lacking the systematic, dogmatic impulse of his Prussian successor, he had glimpsed only murk and confusion, and a self that could only be chased into the oblivion of the ‘merely verbal’ by honest self-reflection.
Brassier, as we have seen, opts for the epistemic humility of the Humean route, and seeks to retrieve the rational via the ‘merely verbal.’ Zizek, though he makes gestures in this direction, ultimately seizes on a radical deflation of the Kantian route. Where Hume declines the temptation of hanging his ‘merely verbal’ across any ontological guesses, Zizek positions his ‘self-referential symbolic act’ within the ‘Void of pure designation,’ which is to say, the ‘void’ of itself, thus literally construing the subject as some kind of ‘self-interpreting rule’–or better, ‘self-constituting form’–the point where spontaneity and freedom become at least possible.
But again, there's 'void,' the one that somehow magically anchors meaning, and then there's, well, void. According to BBT, Zizek's formulation is but one of many ways deliberative metacognition, relying on woefully depleted and truncated information and (mis)applying cognitive tools adapted to distal social and natural environments, can make sense of its own asymptotic limits: by transforming itself into the condition of itself. As should be apparent, the genius of Zizek's account is entirely strategic. The bootstrapping conceit of subjectivity is preserved in a manner that allows Zizek to affirm the tyranny of the material (being, truth) without apparent contradiction. The minimization of overt ontological commitments, meanwhile, lends a kind of theoretical immunity to traditional critique.
There is no ‘void of pure designation’ because there is no ‘void’ any more than there is ‘pure designation.’ The information broadcast or integrated in conscious experience is finite, thus generating the plurality of asymptotic horizons that carve the hallucinatory architecture of the first-person from the astronomical complexities of our brain-environment. These broadcast or integration limits are a real empirical phenomenon that simply follow from the finite nature of conscious experience. Of BBT’s many empirical claims, these ‘information horizons’ are almost certain to be scientifically vindicated. Given these limits, the question of how they are expressed in conscious experience becomes unavoidable. The interpretations I’ve so far offered are no doubt little more than an initial assay into what will prove a massive undertaking. Once they are taken into account, however, it becomes difficult not to see Zizek’s ‘deflationary transcendental’ as simply one way for a fractionate metacognition to make sense of these limits: unitary because the absence of information is the absence of differentiation, reflexive because the lack of medial temporal information generates the metacognitive illusion of medial timelessness, and referential because the lack of medial functional information generates the metacognitive illusion of afunctional relationality, or intentional ‘aboutness.’
Thus we might speak of the ‘Zizek Fallacy,’ the faux affirmation of a materialism that nevertheless spares just enough of the transcendental to anchor the intentional…
A thread from which to dangle the prescientific tradition.
So does this mean that BBT offers the only 'true' route from intentionality to materialism? Not at all.
BBT takes the third-person brain as the ‘rule’ of the first-person mind simply because, thus far at least, science provides the only reliable form of theoretical cognition we know. Thus it would seem to be ‘materialist,’ insofar as it makes the body the measure of the soul. But what BBT shows–or better, hypothesizes–is that this dualism between mind and brain, ideal and real, is itself a heuristic artifact. Given medial neglect, the brain can only model its relation to its environment absent any informatic access to that relation. In other words, the ‘problem’ of its relation to distal environments is one that it can only solve absent tremendous amounts of information. The very structure of the brain, in other words, the fact that the machinery of predictive modelling cannot itself be modelled, prevents it, at a certain level at least, from being a universal problem solver. The brain is itself a heuristic cognitive tool, a system adapted to the solution of particular ‘problems.’ Given neglect, however, it has no way of cognizing its limits, and so regularly takes itself to be omni-applicable.
The heuristic structure of the brain and the cognitive limits this entails are nowhere more evident than in its attempts to cognize itself. So long as the medial mechanisms that underwrite the predictive modelling of distal environments in no way interfere with the environmental systems modelled–or put differently, so long as the systems modelled remain functionally independent of the modelling functions–then medial neglect need not generate problems. When the systems modelled are functionally entangled with medial modelling functions, however, one should expect any number of ‘interference effects’ culminating in the abject inability to predictively model those systems. We find this problem of functional entanglement distally where the systems to be modelled are so delicate that our instrumentation causes ‘observation effects’ that render predictive modelling impossible, and proximally where the systems to be modelled belong to the brain that is modelling. And indeed, as I’ve argued in a number of previous posts, many of the problems confronting the philosophy of mind can be diagnosed in terms of this fundamental misapplication of the ‘Aboutness Heuristic.’
This is where post-intentionalism reveals an entirely new dimension of radicality, one that allows us to expose the metaphysical categories of the 'material' and the 'formal' (yes, I said formal) for the heuristic cartoons they are. BBT allows us to finally see what we 'see' as subreptive artifacts of our inability to see, as low-dimensional shreds of abyssal complexities. It provides a view where not only can the tradition be diagnosed and explained away, but where the fundamental dichotomies and categories, hitherto assumed inescapable, dissolve into the higher dimensional models that only brains collectively organized into superordinate heuristic mechanisms via the institutional practices of science can realize. Mind? Matter? These are simply waystations on an informatic continuum, 'concepts' according to the low-dimensional distortions of the first-person and mechanisms according to the third: concrete, irreflexive, high-dimensional processes that integrate our organism–and therefore us–as componential moments of the incomprehensibly vast mechanism of the universe. Where the tradition attempts, in vain, to explain our perplexing role in this natural picture via a series of extraordinary additions, everything from the immortal soul to happy emergence to Zizek's fortuitous 'void,' BBT merely proposes a network of mundane privations, arguing that the self-congratulatory consciousness we have tasked science with explaining simply does not exist…
That the ‘Hard Problem’ is really one of preserving our last and most cherished set of self-aggrandizing conceits.
It is against this greater canvas that we can clearly see the parochialism of Zizek's approach, how he remains (despite his 'merely verbal' commitment to 'materialism') firmly trapped within the hallucinatory 'parallax' of intentionality, and so essentializes the (apparently not so) 'blind spot' that plays such an important role in the system of conceptual fetishes he sets in motion. It has become fashionable in certain circles to impugn 'correlation' in an attempt to think being in a manner that surpasses the relation between thought and being. This gives voice to an old hankering in Continental philosophy, the genuinely shrewd suspicion that something is wrong with the traditional understanding of human cognition. But rather than answer the skepticism that falls out of Hume's account of human nature or Wittgenstein's consideration of human normativity, the absurd assumption has been that one can simply think one's way beyond the constraints of thought, simply reach out and somehow snatch 'knowledge at a spooky distance.' The poverty of this assumption lies in the most honest of all questions: 'How do you know?' given that (as Hume taught us) you are a human and so cursed with human cognitive frailties, given that (as Wittgenstein taught us) you are a language-user and so belong to normative communities.
'Correlation' is little more than a gimmick, the residue of a magical thinking that assumes naming a thing gives one power over it. It is meant to obscure far more than enlighten, to covertly conserve the Continental tradition of placing the subject on the altar of career-friendly critique, lest the actual problem–intentionality–stir from its slumber and devour twenty-five centuries of prescientific conceit and myopia. The call to think being precritically, which is to say, without thinking the relation of thought and being, amounts to little more than a conceptually atavistic stunt so long as Hume and Wittgenstein's questions remain unanswered.
The post-intentional philosophy that follows from BBT, however, belongs to the self-same skeptical tradition of disclosing the contextual contingencies that constrain thought's attempt to cognize being. As opposed to the brute desperation of simply ignoring subjectivity or normativity, it seizes upon them. Intentional concepts and phenomena, it argues, exhibit precisely the acausal 'bottomlessness' that medial neglect, a structural inevitability given a mechanistic understanding of the brain, forces on metacognition. A great number of powerful and profound illusions result, illusions that you confuse for yourself. You think you are more a system of levers than a tangle of wiretaps. You think that understanding is yours. The low-dimensional cartoon of you standing within and apart from an object world is just that, a low-dimensional cartoon, a surrogate, facile and deceptive, for the high-dimensional facts of the brain-environment.
Thus is the problem of so-called ‘correlation’ solved, not by naming, shaming, and ersatz declaration, but rather by passing through the problematic, by understanding that the ‘subjective’ and the ‘normative’ are themselves natural and therefore amenable to scientific investigation. BBT explains the artifactual nature of the apparently inescapable correlation of thought and being, how medial neglect strands metacognition with an inexplicable covariance that it must conceive otherwise–in supra-natural terms. And it allows one to set aside the intentional conundrums of philosophy for what they are: arguments regarding interpretations of cognitive illusions.
Why assume the ‘design stance,’ given that it turns on informatic neglect? Why not regularly regard others in subpersonal terms, as mechanisms, when it strikes ‘you’ as advantageous? Or, more troubling still, is this simply coming to terms with what you have been doing all along? The ‘pragmatism’ once monopolized by ‘taking the intentional stance’ no longer obtains. For all we know, we could be more a confabulatory interface than anything, an informatic symbiont or parasite–our ‘consciousness’ a kind of tapeworm in the gut of the holy neural host. It could be this bad–worse. Corporate advertisers are beginning to think as much. And as I mentioned above, this is where the full inferential virulence of BBT stands revealed: it merely has to be plausible to demonstrate that anything could be the case.
And the happy possibilities are drastically outnumbered.
As for the question, ‘How do you know?’ BBT cheerfully admits that it does not, that it is every bit as speculative as any of its competitors. It holds forth its parsimonious explanatory reach, the way it can systematically resolve numerous ancient perplexities using only a handful of insights, as evidence of its advantage, as well as the fact that it is ultimately empirical, and so awaits scientific arbitration. BBT, unlike ‘OOO’ for instance, will stand or fall on the findings of cognitive science, rather than fade as all such transcendental positions fade on the tide of academic fashion.
And, perhaps most importantly, it is timely. As the brain becomes ever more tractable to science, prescientific discourses of the soul will come to seem ever more antiquated and absurd. It is folly to think that one's own discourse is 'special,' that it will be the first prescientific discourse in history to be redeemed rather than relegated or replaced by the findings of science. What cognitive science discovers over the next century will almost certainly ruin or revolutionize nearly everything that has been assumed regarding the soul. BBT is mere speculation, yes, but mere speculation that turns on the most recent science and remains answerable to the science that will come. And given that science is the transformative engine of what is without any doubt the most transformative epoch in human history, BBT provides a means to diagnose and to prognosticate what is happening to us now–even going so far as to warn that intentionality will not constrain the posthuman.
What it does not provide is any redeeming means to assess or to guide. The post-intentional holds no consolation. When rules become regularities, nothing pretty can come of life. It is an ugly, even horrifying, conclusion, suggesting, as it does, that what we hold the most sacred and profound is little more than a subreptive by-product of evolutionary indifference. And even in this, the relentless manner in which it explodes and eviscerates our conceptual conceits, it distinguishes itself from its soft-bellied competitors. It simply follows the track of its machinations, the algorithmic grub of ‘reason.’ It has no truck with flattering assumptions.
And this is simply to say that the Blind Brain Theory offers us a genuine way out, out of the old dichotomies, the old problems. It bids us to moult, to slough off transcendental philosophy like a dead serpentine skin. It could very well achieve the dream of all philosophy–only at the cost of everything that matters.
And really. What else did you fucking expect? A happy ending? That life really would turn out to be ‘what we make it’?
Whatever the conclusion is, it ain’t going to be Hollywood.

