R. Scott Bakker's Blog
January 23, 2013
Zizek, Hollywood, and the Disenchantment of Continental Philosophy
Aphorism of the Day: At least a flamingo has a leg to stand on.
.
Back in the 1990s, whenever I mentioned Dennett and the significance of neuroscience to my Continental buddies I would usually get some version of ‘Why do you bother reading that shite?’ I would be told something about the ontological priority of the lifeworld or the practical priority of the normative: more than once I was referred to Hegel’s critique of phrenology in the Phenomenology.
The upshot was that the intentional has to be irreducible. Of course this ‘has to be’ ostensibly turned on some longwinded argument (picked out of the great mountain of longwinded arguments), but I couldn’t shake the suspicion that the intentional had to be irreducible because the intentional had to come first, and the intentional had to come first because ‘intentional cognition’ was the philosopher’s stock-in-trade–and oh my, how we adore coming first.
Back then I chalked up this resistance to a strategic failure of imagination. A stupendous amount of work goes into building an academic philosophy career; given our predisposition to rationalize even our most petty acts, the chances of seeing our way past our life’s work are pretty damn slim! One of the things that makes science so powerful is the way it takes that particular task out of the institutional participant’s hands–enough to revolutionize the world at least. Not so in philosophy, as any gas station attendant can tell you.
I certainly understood the sheer intuitive force of what I was arguing against. I quite regularly find the things I argue here almost impossible to believe. I don’t so much believe as fear that the Blind Brain Theory is true. What I do believe is that some kind of radical overturning of noocentrism is not only possible, but probable, and that the 99% of philosophers who have closed ranks against this possibility will likely find themselves in the ignominious position of those philosophers who once defended geocentrism and biocentrism.
What I’ve recently come to appreciate, however, is that I am literally, as opposed to figuratively, arguing against a form of anosognosia, that I’m pushing brains places they cannot go–short of imagination. Visual illusions are one thing. Spike a signal this way or that, trip up the predictive processing, and you have a little visual aporia, an isolated area of optic nonsense in an otherwise visually ‘rational’ world. The kinds of neglect-driven illusions I’m referring to, however, outrun us, as they have to, insofar as we are them in some strange sense.
So here we are in 2013, and there’s more than enough neuroscientific writing on the wall to have captured even the most insensate Continental philosopher’s attention. People are picking through the great mountain of longwinded arguments once again, tinkering, retooling, now that the extent of the threat has become clear. Things are getting serious; the akratic social consequences I depicted in Neuropath are everywhere becoming more evident. The interval between knowledge and experience is beginning to gape. Ignoring the problem now smacks more of negligence than insouciant conviction. The soul, many are now convinced, must be philosophically defended. Thought, whatever it is, must be mobilized against its dissolution.
The question is how.
My own position might be summarized as a kind of ‘Good-Luck-Chuck’ argument. Either you posit an occult brand of reality special to you and go join the Christians in their churches, or you own up to the inevitable. The fate of the transcendental lies in empirical hands now. There is no way, short of begging the question against science, of securing the transcendental against the empirical. Imagine you come up with, say, Argument A, which concludes on non-empirical Ground X that intentionality cannot be a ‘cognitive illusion.’ The problem, obviously, is that Argument A can only take it on faith that no future neuroscience will revise or eliminate its interpretation of Ground X. And that faith, like most faith, only comes easy in the absence of alternatives–of imagination.
The notion of using transcendental speculation to foreclose on possible empirical findings is hopeless. Speculation is too unreliable and nature is too fraught with surprises. One of the things that makes the Blind Brain Theory so important, I think, is the way its mere existence reveals this new thetic landscape. By deriving the signature characteristics of the first-personal out of the mechanical, it provides a kind of ‘proof of concept,’ a demonstration that post-intentional theory is not only possible, but potentially powerful. As a viable alternative to intentional thought (of which transcendental philosophy is a subset), it has the effect of dispelling the ‘only game in town illusion,’ the sense of necessity that accompanies every failure of philosophical imagination. It forces ‘has to be’ down to the level of ‘might be’…
You could say the mere possibility that the Blind Brain Theory might be empirically verified drags the whole of Continental philosophy into the purview of science. The most the Continental philosopher can do is match their intentional hopes against my mechanistic fears. Put simply, the grand old philosophical question of what we are no longer belongs to them: It has fallen to science.
.
For better and for worse, Metzinger’s Being No One has become the textual locus of the ‘neuroscientific threat’ in Continental circles. His thesis alone would have brought him attention, I’m sure. That aside, the care, scholarship, and insight he brings to the topic provide the Continental reader with a quite extraordinary (and perhaps too flattering) introduction to cognitive science and Anglo-American philosophy of mind as it stood a decade or so ago.
The problem with Being No One, however, is precisely what renders it so attractive to Continentalists, particularly those invested in the so-called ‘materialist turn’: rather than consider the problem of meaning tout court, it considers the far more topical problem of the self or subject. In this sense, it is thematically continuous with the concerns of much Continental philosophy, particularly in its post-structuralist and psychoanalytic incarnations. It allows the Continentalist, in other words, to handle the ‘neuroscientific threat’ in a diminished and domesticated form, which is to say, as the hoary old problem of the subject. Several people have told me now that the questions raised by the sciences of the brain are ‘nothing new,’ that they simply bear out what this or that philosophical/psychoanalytic figure has said long ago–that the radicality of neuroscience is not all that ‘radical’ at all. Typically, I take the opportunity to ask questions they cannot answer.
Zizek’s reading of Metzinger in The Parallax View, for instance, clearly demonstrates the way some Continentalists regard the sciences of the brain as an empirical mirror wherein they can admire their transcendental hair. For someone like Zizek, who has made a career out of avoiding combs and brushes, Being No One proves to be one of the few texts able to focus and hold his rampant attention, the one point where his concern seems to outrun his often brutish zest for ironic and paradoxical formulations. In his reading, Zizek immediately homes in on those aspects of Metzinger’s theory that most closely parallel my view (the very passages that inspired me to contact Thomas years ago, in fact) where Metzinger discusses the relationship between the transparency of the Phenomenal Self-Model (PSM) and the occlusion of the neurofunctionality that renders it. The self, on Metzinger’s account, is a model that cannot conceive itself as a model; it suffers from what he calls ‘autoepistemic closure,’ a constitutive lack of information access (BNO, 338). And its apparent transparency accordingly becomes “a special form of darkness” (BNO, 169).
This is where Metzinger’s account almost completely dovetails with Zizek’s own notion of the subject, and so holds the most glister for him. But he defers pressing this argument and turns to the conclusion of Being No One, where Metzinger, in an attempt to redeem the Enlightenment ethos, characterizes the loss of self as a gain in autonomy, insofar as scientific knowledge allows us to “grow up,” and escape the ‘tutelary nature’ of our own brain. Zizek only returns to the lessons he finds in Metzinger after a reading of Damasio’s rather hamfisted treatment of consciousness in Descartes’ Error, as well as a desultory and idiosyncratic (which, as my daughter would put it, is a fancy way of saying ‘mistaken’) reading of Dennett’s critique of the Cartesian Theater. Part of the problem he faces is that Metzinger’s PSM, as structurally amenable as it is to his thesis, remains too topical for his argument. The self simply does not exhaust consciousness (even though Metzinger himself often conflates the two in Being No One). Saying there is no such thing as selves is not the same as saying there is no such thing as consciousness. And as his preoccupation with the explanatory gap and cognitive closure makes clear, nothing less than the ontological redefinition of consciousness itself is Zizek’s primary target. Damasio and Dennett provide the material (as well as the textual distance) he requires to expand the structure he isolates in Metzinger. As he writes:
Are we free only insofar as we misrecognize the causes which determine us? The mistake of the identification of (self-)consciousness with misrecognition, with an epistemological obstacle, is that it stealthily (re)introduces the standard, premodern, “cosmological” notion of reality as a positive order of being: in such a fully constituted positive “chain of being” there is, of course, no place for the subject, so the dimension of subjectivity can be conceived of only as something which is strictly co-dependent with the epistemological misrecognition of the positive order of being. Consequently, the only way effectively to account for the status of (self-)consciousness is to assert the ontological incompleteness of “reality” itself: there is “reality” only insofar as there is an ontological gap, a crack, in its very heart, that is to say, a traumatic excess, a foreign body which cannot be integrated into it. This brings us back to the notion of the “Night of the World”: in this momentary suspension of the positive order of reality, we confront the ontological gap on account of which “reality” is never a complete, self-enclosed, positive order of being. It is only this experience of psychotic withdrawal from reality, of absolute self-contraction, which accounts for the mysterious “fact” of transcendental freedom: for a (self-)consciousness which is in effect “spontaneous,” whose spontaneity is not an effect of misrecognition of some “objective” process. 241-242
For those with a background in Continental philosophy, this ‘aporetic’ discursive mode is more than familiar. What I find so interesting about this particular passage is the way it actually attempts to distill the magic of autonomy, to identify where and how the impossibility of freedom becomes its necessity. To identify consciousness as an illusion, he claims, is to presuppose that the real is positive, hierarchical, and whole. Since the mental does not ‘fit’ with this whole, and the whole, by definition, is all there is, it must then be some kind of misrecognition of that whole–‘mind’ becomes the brain’s misrecognition of itself as a brain. Brain blindness. The alternative, Zizek argues, is to assume that the whole has a hole, that reality is radically incomplete, and so transform what was epistemological misrecognition into ontological incompleteness. Consciousness can then be seen as a kind of void (as opposed to blindness), thus allowing for the reflexive spontaneity so crucial to the normative.
In keeping with his loose usage of concepts from the philosophy of mind, Zizek wants to relocate the explanatory gap between mind and brain into the former, to argue that the epistemological problem of understanding consciousness is in fact ontologically constitutive of consciousness. What is consciousness? The subjective hole in the material whole.
[T]here is, of course, no substantial signified content which guarantees the unity of the I; at this level, the subject is multiple, dispersed, and so forth—its unity is guaranteed only by the self-referential symbolic act, that is, “I” is a purely performative entity, it is the one who says “I.” This is the mystery of the subject’s “self-positing,” explored by Fichte: of course, when I say “I,” I do not create any new content, I merely designate myself, the person who is uttering the phrase. This self-designation nonetheless gives rise to (“posits”) an X which is not the “real” flesh-and-blood person uttering it, but, precisely and merely, the pure Void of self-referential designation (the Lacanian “subject of the enunciation”): “I” am not directly my body, or even the content of my mind; “I” am, rather, that X which has all these features as its properties. 244-245
Now I’m no Zizek scholar, and I welcome corrections on this interpretation from those better read than I. At the same time I shudder to think what a stolid, hotdog-eating philosopher-of-mind would make of this ontologization of the explanatory gap. Personally, I lack Zizek’s faith in theory: the fact of human theoretical incompetence inclines me to bet on the epistemological over the ontological most every time. Zizek can’t have it both ways. He can’t say consciousness is ‘the inexplicable’ without explaining it as such.
Either way, this clearly amounts to yet another attempt to espouse a kind of naturalism without transcendental tears. Like Brassier in “The View from Nowhere,” Zizek is offering an account of subjectivity without self. Unlike Brassier, however, he seems to be oblivious to what I have previously called the Intentional Dissociation Problem: he never considers how the very issues that lead Metzinger to label the self hallucinatory also pertain to intentionality more generally. Certainly, the whole of The Parallax View is putatively given over to the problem of meaning as the problem of the relationship between thought/meaning and being/truth, or the problem of the ‘gap’ as Zizek puts it. And yet, throughout the text, the efficacy (and therefore the reality) of meaning–or thought–is never once doubted, nor is the possibility of the post-intentional considered. Much of his discussion of Dennett, for instance, turns on Dennett’s intentional apologetics, his attempt to avoid, among other things, the propositional-attitudinal eliminativism of Paul Churchland (to whom Zizek mistakenly attributes Dennett’s qualia eliminativism (PV, 177)). But where Dennett clearly sees the peril, the threat of nihilism, Zizek only sees an intellectual challenge. For Zizek, the question ‘Is meaning real?’ is ultimately a rhetorical one, and the dire challenge emerging out of the sciences of the brain amounts to little more than a theoretical occasion.
So in the passage quoted above, the person (subject) is plucked from the subpersonal legion via “the self-referential symbolic act.” The problems and questions that threaten to explode this formulation are numerous, to say the least. The attraction, however, is obvious: It apparently allows Zizek, much like Kant, to isolate a moment within mechanism that nevertheless stands outside of mechanism short of entailing some secondary order of being–an untenable dualism. In this way it provides ‘freedom’ without any incipient supernaturalism, and thus grounds the possibility of meaning.
But like other forms of deflationary transcendentalism, this picture simply begs the question. The cognitive scientist need only ask, What is this ‘self-referential symbolic act’? and the circular penury of Zizek’s position is revealed: How can an act of meaning ground the possibility of meaningful acts? The vicious circularity is so obvious that one might wonder how a thinker as subtle as Zizek could run afoul of it. But then, you must first realize (as, say, Dennett realizes) the way intentionality as a whole, and not simply the ‘person,’ is threatened by the mechanistic paradigm of the life sciences. So for instance, Zizek repeatedly invokes the old Derridean trope of bricolage. But there’s ‘bricolage’ and then there’s bricolage: there’s fragments that form happy fragmentary wholes that readily lend themselves to the formation of new functional assemblages, ‘deconstructive ethics,’ say, and then there’s fragments that are irredeemably fragmentary, whose dimensions of fragmentation are such that they can only be misconceived as wholes. Zizek seizes on Metzinger’s account of the self in Being No One precisely because it lends itself to the former, ‘happy’ bricolage, one where we need only fear for the self and not the intentionality that constitutes it.
The Blind Brain Theory, however, paints a far different portrait of ‘selfhood’ than Metzinger’s PSM, one that not only makes hash of Zizek’s thesis, but actually explains the cognitive errors that motivate it. On Metzinger’s account, ‘auto-epistemic closure’ (or the ‘darkness of transparency’) is the primary structural principle that undermines the ‘reality’ of the PSM and the PSM only. The Blind Brain Theory, on the other hand, casts the net wider. Constraints on the information broadcast or integrated are crucial, to be sure, but BBT also considers the way these constraints impact the fractionate cognitive systems that ‘solve’ them. On my view, there is no ‘phenomenal self-model,’ only congeries of heuristic cognitive systems primarily adapted to environmental cognition (including social environmental cognition) cobbling together what they can given what little information they receive. For Metzinger, who remains bound to the ‘Accomplishment Assumption’ that characterizes the sciences of the brain more generally, the cognitive error is one of mistaking a low-dimensional simulation for a reality. The phenomenal self-model, for him, really is something like ‘a flight-simulator that contains its own exits.’
On BBT, however, there is no one error, nor even one coherent system of errors; instead there are any number of information shortfalls and cognitive misapplications leading to this or that reflective, acculturated form of ‘selfness,’ be it ancient Greek, Cartesian, post-structural, or what have you. Selfness, in other words, is the product of compound misapprehensions, both at the assumptive and the theoretical levels (or better put, across the spectrum of deliberative metacognition, from the cursory/pragmatic to the systematic/theoretical).
BBT uses these misconstruals, myopias, and blindnesses to explain the ways intentionality and phenomenality confound the ‘third-person’ mechanistic paradigm of the life sciences. It can explain, in other words, many of the ‘structural’ peculiarities that make the first-person so refractory to naturalization. It does this by interpreting those peculiarities as artifacts of ‘lost dimensions’ of information, particularly with reference to medial neglect. So for instance, our intuition of aboutness derives from the brain’s inability to model its modelling, neglecting, as it must, the neurofunctionality responsible for modelling its distal environments. Thus the peculiar ‘bottomlessness’ of conscious cognition and experience, the way each subsequent moment somehow becomes ground of the moment previous (and all the foundational paradoxes that have arisen from this structure). Thus the metacognitive transformation of asymptotic covariance into ‘aboutness,’ a relation absent the relation.
And so it continues: Our intuition of conscious unity arises from the way cognition confuses aggregates for individuals in the absence of differentiating information. Our intuition of personal identity (and nowness more generally) arises from metacognitive neglect of second-order temporalization, our brain’s blindness to the self-differentiating time of timing. For whatever reason, consciousness is integrative: oscillating sounds and lights ‘fuse’ or appear continuous beyond certain frequency thresholds because information that doesn’t reach consciousness makes no conscious difference. Thus the eerie first-person that neglect hacks from a much higher dimensional third can be said to be inevitable. One need only apply the logic of flicker-fusion to consciousness as a whole and ask why, for instance, facets of conscious experience such as unity or presence should require specialized ‘unification devices’ or ‘now mechanisms’ when they can be explained as perceptual/cognitive errors in conditions of informatic privation. Certainly it isn’t merely a coincidence that all the concepts and phenomena incompatible with mechanism involve drastic reductions in dimensionality.
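The flicker-fusion logic, at least, is concrete enough to cartoon in code. What follows is a minimal toy sketch in Python–my own illustration, nothing more, with the function name, window width, and frequencies all hypothetical choices: a sliding-window average stands in, very crudely, for whatever integration consciousness performs, and a square wave stands in for a flickering light. Past a certain frequency the windowed output sits flat at the mean: the flicker makes no difference downstream, and so, on this cartoon, no ‘conscious’ difference.

import math

# A toy sketch, not a model of the brain: a sliding-window average
# 'fuses' a square-wave flicker once the window spans several periods.
def integrated_output(flicker_hz, window_s=0.05, dt=0.001, duration_s=1.0):
    """Average a square-wave flicker over a sliding window of width window_s.
    Returns (min, max) of the windowed average: a wide range means the
    flicker survives integration; a flat range means it has 'fused'."""
    n = int(duration_s / dt)
    # Square wave: light on while the underlying oscillation is non-negative.
    samples = [1.0 if math.sin(2 * math.pi * flicker_hz * i * dt) >= 0 else 0.0
               for i in range(n)]
    w = int(window_s / dt)
    # Everything inside the window is summed away; information lost to the
    # window makes no difference to the output.
    averages = [sum(samples[i - w:i]) / w for i in range(w, n)]
    return min(averages), max(averages)

for hz in (2, 30, 60):
    lo, hi = integrated_output(hz)
    print(f"{hz:>2} Hz flicker -> integrated range [{lo:.2f}, {hi:.2f}]")

At 2 Hz the output swings the full range (the flicker is ‘seen’); at 60 Hz it stays pinned near 0.5 throughout (the flicker has ‘fused’ into apparent continuity). The point of the cartoon: nothing needs to ‘accomplish’ the continuity; it simply falls out of what the integrating window cannot carry.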
In explaining away intentionality, personal identity, and presence, BBT inadvertently explains why we intuit the subject we think we do. It sets the basic neurofunctional ‘boundary conditions’ within which Sellars’ manifest image is culturally elaborated–the boundary conditions of intentional philosophy, in effect. In doing so, it provides a means of doing what the Continental tradition, even in its most recent, quasi-materialist incarnations, has regarded as impossible: naturalizing the transcendental, whether in its florid, traditional forms or in its contemporary deflationary guises–including Zizek’s supposedly ineliminable remainder, his subject as ‘gap.’
And this is just to say that BBT, in explaining away the first-person, also explains away Continental philosophy.
Few would dispute that many of the ‘conditions of possibility’ that comprise the ‘thick transcendental’ account of Kant, for instance, amount to speculative interpretations of occluded brain functions insofar as they amount to interpretations of anything at all. After all, this is a primary motive for the retreat into ‘materialism’ (a position, as we shall see, that BBT endorses no more than ‘idealism’). But what remains difficult, even apparently impossible, to square with the natural is the question of the transcendental simpliciter. Sure, one might argue, Kant may have been wrong about the transcendental, but surely his great insight was to glimpse the transcendental as such. But this is precisely what BBT and medial neglect allow us to explain: the way the informatic and heuristic constraints on metacognition produce the asymptotic–acausal or ‘bottomless’–structure of conscious experience. The ‘transcendental’ on this view is a kind of ‘perspectival illusion,’ a hallucinatory artifact of the way information pertaining to the limits of any momentary conscious experience can only be integrated in subsequent moments of conscious experience.
Kant’s genius, his discovery, or at least what enabled his account to appeal to the metacognitive intuitions of so many across the ages, lay in making explicit the occluded medial axis of consciousness, the fact that some kind of orthogonal functionality (neural, we now know) haunts empirical experience. Of course Hume had already guessed as much, but lacking the systematic, dogmatic impulse of his Prussian successor, he had glimpsed only murk and confusion, and a self that could only be chased into the oblivion of the ‘merely verbal’ by honest self-reflection.
Brassier, as we have seen, opts for the epistemic humility of the Humean route, and seeks to retrieve the rational via the ‘merely verbal.’ Zizek, though he makes gestures in this direction, ultimately seizes on a radical deflation of the Kantian route. Where Hume declines the temptation of hanging his ‘merely verbal’ across any ontological guesses, Zizek positions his ‘self-referential symbolic act’ within the ‘Void of pure designation,’ which is to say, the ‘void’ of itself, thus literally construing the subject as some kind of ‘self-interpreting rule’–or better, ‘self-constituting form’–the point where spontaneity and freedom become at least possible.
But again, there’s ‘void,’ the one that somehow magically anchors meaning, and then there’s, well, void. According to BBT, Zizek’s formulation is but one of many ways deliberative metacognition, relying on woefully depleted and truncated information and (mis)applying cognitive tools adapted to distal social and natural environments, can make sense of its own asymptotic limits: by transforming itself into the condition of itself. As should be apparent, the genius of Zizek’s account is entirely strategic. The bootstrapping conceit of subjectivity is preserved in a manner that allows Zizek to affirm the tyranny of the material (being, truth) without apparent contradiction. The minimization of overt ontological commitments, meanwhile, lends a kind of theoretical immunity to traditional critique.
There is no ‘void of pure designation’ because there is no ‘void’ any more than there is ‘pure designation.’ The information broadcast or integrated in conscious experience is finite, thus generating the plurality of asymptotic horizons that carve the hallucinatory architecture of the first-person from the astronomical complexities of our brain-environment. These broadcast or integration limits are a real empirical phenomenon that simply follows from the finite nature of conscious experience. Of BBT’s many empirical claims, these ‘information horizons’ are almost certain to be scientifically vindicated. Given these limits, the question of how they are expressed in conscious experience becomes unavoidable. The interpretations I’ve so far offered are no doubt little more than an initial assay into what will prove a massive undertaking. Once they are taken into account, however, it becomes difficult not to see Zizek’s ‘deflationary transcendental’ as simply one way for a fractionate metacognition to make sense of these limits: unitary because the absence of information is the absence of differentiation, reflexive because the lack of medial temporal information generates the metacognitive illusion of medial timelessness, and referential because the lack of medial functional information generates the metacognitive illusion of afunctional relationality, or intentional ‘aboutness.’
Thus we might speak of the ‘Zizek Fallacy,’ the faux affirmation of a materialism that nevertheless spares just enough of the transcendental to anchor the intentional…
A thread from which to dangle the prescientific tradition.
.
So does this mean that BBT offers the only ‘true’ route from intentionality to materialism? Not at all.
BBT takes the third-person brain as the ‘rule’ of the first-person mind simply because, thus far at least, science provides the only reliable form of theoretical cognition we know. Thus it would seem to be ‘materialist,’ insofar as it makes the body the measure of the soul. But what BBT shows–or better, hypothesizes–is that this dualism between mind and brain, ideal and real, is itself a heuristic artifact. Given medial neglect, the brain can only model its relation to its environment absent any informatic access to that relation. In other words, the ‘problem’ of its relation to distal environments is one that it can only solve absent tremendous amounts of information. The very structure of the brain, in other words, the fact that the machinery of predictive modelling cannot itself be modelled, prevents it, at a certain level at least, from being a universal problem solver. The brain is itself a heuristic cognitive tool, a system adapted to the solution of particular ‘problems.’ Given neglect, however, it has no way of cognizing its limits, and so regularly takes itself to be omni-applicable.
The heuristic structure of the brain and the cognitive limits this entails are nowhere more evident than in its attempts to cognize itself. So long as the medial mechanisms that underwrite the predictive modelling of distal environments in no way interfere with the environmental systems modelled–or put differently, so long as the systems modelled remain functionally independent of the modelling functions–then medial neglect need not generate problems. When the systems modelled are functionally entangled with medial modelling functions, however, one should expect any number of ‘interference effects’ culminating in the abject inability to predictively model those systems. We find this problem of functional entanglement distally where the systems to be modelled are so delicate that our instrumentation causes ‘observation effects’ that render predictive modelling impossible, and proximally where the systems to be modelled belong to the brain that is modelling. And indeed, as I’ve argued in a number of previous posts, many of the problems confronting the philosophy of mind can be diagnosed in terms of this fundamental misapplication of the ‘Aboutness Heuristic.’
This is where post-intentionalism reveals an entirely new dimension of radicality, one that allows us to expose the metaphysical categories of the ‘material’ and the ‘formal’ (yes, I said formal) for the heuristic cartoons they are. BBT allows us to finally see what we ‘see’ as subreptive artifacts of our inability to see, as low-dimensional shreds of abyssal complexities. It provides a view where not only can the tradition be diagnosed and explained away, but where the fundamental dichotomies and categories, hitherto assumed inescapable, dissolve into the higher dimensional models that only brains collectively organized into superordinate heuristic mechanisms via the institutional practices of science can realize. Mind? Matter? These are simply waystations on an informatic continuum, ‘concepts’ according to the low-dimensional distortions of the first-person and mechanisms according to the third: concrete, irreflexive, high-dimensional processes that integrate our organism–and therefore us–as componential moments of the incomprehensibly vast mechanism of the universe. Where the tradition attempts, in vain, to explain our perplexing role in this natural picture via a series of extraordinary additions, everything from the immortal soul to happy emergence to Zizek’s fortuitous ‘void,’ BBT merely proposes a network of mundane privations, arguing that the self-congratulatory consciousness we have tasked science with explaining simply does not exist…
That the ‘Hard Problem’ is really one of preserving our last and most cherished set of self-aggrandizing conceits.
It is against this greater canvas that we can clearly see the parochialism of Zizek’s approach, how he remains (despite his ‘merely verbal’ commitment to ‘materialism’) firmly trapped within the hallucinatory ‘parallax’ of intentionality, and so essentializes the (apparently not so) ‘blind spot’ that plays such an important role in the system of conceptual fetishes he sets in motion. It has become fashionable in certain circles to impugn ‘correlation’ in an attempt to think being in a manner that surpasses the relation between thought and being. This gives voice to an old hankering in Continental philosophy, the genuinely shrewd suspicion that something is wrong with the traditional understanding of human cognition. But rather than answer the skepticism that falls out of Hume’s account of human nature or Wittgenstein’s consideration of human normativity, the absurd assumption has been that one can simply think their way beyond the constraints of thought, simply reach out and somehow snatch ‘knowledge at a spooky distance.’ The poverty of this assumption lies in the most honest of all questions: ‘How do you know?’ given that (as Hume taught us) you are a human and so cursed with human cognitive frailties, given that (as Wittgenstein taught us) you are a language-user and so belong to normative communities.
‘Correlation’ is little more than a gimmick, the residue of a magical thinking that assumes naming a thing gives one power over it. It is meant to obscure far more than enlighten, to covertly conserve the Continental tradition of placing the subject on the altar of career-friendly critique, lest the actual problem–intentionality–stir from its slumber and devour twenty-five centuries of prescientific conceit and myopia. The call to think being precritically, which is to say, without thinking the relation of thought and being, amounts to little more than a conceptually atavistic stunt so long as Hume and Wittgenstein’s questions remain unanswered.
The post-intentional philosophy that follows from BBT, however, belongs to the self-same skeptical tradition of disclosing the contextual contingencies that constrain thought’s attempt to cognize being. As opposed to the brute desperation of simply ignoring subjectivity or normativity, it seizes upon them. Intentional concepts and phenomena, it argues, exhibit precisely the acausal ‘bottomlessness’ that medial neglect, a structural inevitability given a mechanistic understanding of the brain, forces on metacognition. A great number of powerful and profound illusions result, illusions that you confuse for yourself. You think you are more a system of levers than a tangle of wiretaps. You think that understanding is yours. The low-dimensional cartoon of you standing within and apart from an object world is just that, a low-dimensional cartoon, a surrogate, facile and deceptive, for the high-dimensional facts of the brain-environment.
Thus is the problem of so-called ‘correlation’ solved, not by naming, shaming, and ersatz declaration, but rather by passing through the problematic, by understanding that the ‘subjective’ and the ‘normative’ are themselves natural and therefore amenable to scientific investigation. BBT explains the artifactual nature of the apparently inescapable correlation of thought and being, how medial neglect strands metacognition with an inexplicable covariance that it must conceive otherwise–in supra-natural terms. And it allows one to set aside the intentional conundrums of philosophy for what they are: arguments regarding interpretations of cognitive illusions.
Why assume the ‘design stance,’ given that it turns on informatic neglect? Why not regularly regard others in subpersonal terms, as mechanisms, when it strikes ‘you’ as advantageous? Or, more troubling still, is this simply coming to terms with what you have been doing all along? The ‘pragmatism’ once monopolized by ‘taking the intentional stance’ no longer obtains. For all we know, we could be more a confabulatory interface than anything, an informatic symbiont or parasite–our ‘consciousness’ a kind of tapeworm in the gut of the holy neural host. It could be this bad–worse. Corporate advertisers are beginning to think as much. And as I mentioned above, this is where the full inferential virulence of BBT stands revealed: it merely has to be plausible to demonstrate that anything could be the case.
And the happy possibilities are drastically outnumbered.
As for the question, ‘How do you know?’ BBT cheerfully admits that it does not, that it is every bit as speculative as any of its competitors. It holds forth its parsimonious explanatory reach, the way it can systematically resolve numerous ancient perplexities using only a handful of insights, as evidence of its advantage, as well as the fact that it is ultimately empirical, and so awaits scientific arbitration. BBT, unlike ‘OOO’ for instance, will stand or fall on the findings of cognitive science, rather than fade as all such transcendental positions fade on the tide of academic fashion.
And, perhaps most importantly, it is timely. The more tractable the brain becomes to science, the more antiquated and absurd prescientific discourses of the soul will become. It is folly to think that one’s own discourse is ‘special,’ that it will be the first prescientific discourse in history to be redeemed rather than relegated or replaced by the findings of science. What cognitive science discovers over the next century will almost certainly ruin or revolutionize nearly everything that has been assumed regarding the soul. BBT is mere speculation, yes, but mere speculation that turns on the most recent science and remains answerable to the science that will come. And given that science is the transformative engine of what is without any doubt the most transformative epoch in human history, BBT provides a means to diagnose and to prognosticate what is happening to us now–even going so far as to warn that intentionality will not constrain the posthuman.
What it does not provide is any redeeming means to assess or to guide. The post-intentional holds no consolation. When rules become regularities, nothing pretty can come of life. It is an ugly, even horrifying, conclusion, suggesting, as it does, that what we hold the most sacred and profound is little more than a subreptive by-product of evolutionary indifference. And even in this, the relentless manner in which it explodes and eviscerates our conceptual conceits, it distinguishes itself from its soft-bellied competitors. It simply follows the track of its machinations, the algorithmic grub of ‘reason.’ It has no truck with flattering assumptions.
And this is simply to say that the Blind Brain Theory offers us a genuine way out, out of the old dichotomies, the old problems. It bids us to moult, to slough off transcendental philosophy like a dead serpentine skin. It could very well achieve the dream of all philosophy–only at the cost of everything that matters.
And really. What else did you fucking expect? A happy ending? That life really would turn out to be ‘what we make it’?
Whatever the conclusion is, it ain’t going to be Hollywood.
January 21, 2013
The Toll
Aphorism of the Day: This? Yeah, well, dope smoke that, motherfucker.
.
I want to say I’m not quite sure what I’m doing anymore. But then I’m not sure what it means to say one is doing anything anymore. The reflexes seem to be in working order… Be witty. Be urbane. Charm those around you, and most importantly, impress. You never know… You never know…
These are the offerings we cast into the blackness–nowadays. This is what throws us on our bellies, what we burn. This is what it means to live in a world without bubbles of air, where the social has flooded the most ad hoc recess, the most hidden pocket. Always guarded. Always poised to be poised. Always polite, lest… who the fuck knows?
Bury that scream deep in the meat.
Look, it says. Just give it to me, whatever it is it wants.
You were never good at the game–or at least you rarely count yourself as such. If you were, it would be easy. And it’s anything but. You look at them wondering that you wonder, huddling about a spark of pedestrian superiority, the glee of seeing, the one that twists smiles into cramps, like toddlers hiding in plain view. What joy there is in deception!
Bury it.
Where is the agon? The strife? What you call ‘professionalism’ is prestidigitation, the absence of personality made indicator of truth–the faux voice of no one to cement a faux view from nowhere. The winnowing of idiosyncrasies as grace. Nowhere is the mania for standardization more noiselessly sublimated than in academia.
Life has become a conflict of machines: the stochastic wave of your nature, paleolithic consciousness shooting the translucent curl, versus the deterministic demands of a metastatic bureaucracy, forever punishing you for your margins of error.
And now look at you, weeping for reasons no one would care to admit.
Shush. Shush, you fool. The belly is full. The bowel voided.
Honesty has always been an angle.
Smile for the camera. No one can pretend not to be a politician anymore.
Set a great stone before the tomb.
I like talking to addicts. I like thinking I can see further, that I have some kind of wisdom to impart. I mourn the flutter of indecision, the wary squint, when something in my tone or vocabulary gives me away. I like learning the lingo, the names of things illegal. I like bullshitting about things that seem worth bullshitting about–though only at the time. I like to be the one that knows. I like that my life has been tragic, that I can stop strangers cold with my memories. And I find it strange, this inability to arbitrate between personas.
This is soul-rotting stuff, this.
Philosophy sets you at odds with your origins as it is. It alienates and isolates you, especially when all seems convivial. Or maybe you’re ‘just-different-that-way.’ Maybe I’m ‘you-had-to-be-there’ like, all the way down.
But I doubt it.
To be a philosopher is to forever hold your tongue, watch what you say. They grow quiet around the turkey when you speak, out of forbearance, not interest or deference. They endure more than understand. They refuse more than fail to recognize your ‘expertise’–and how could they not, when it would relieve them of their humanity? To accord you authority would be to concede their right to judge and believe, to dare hold forth a world from their small corner.
How could they not despise? Lampoon your cartoonish pretension? And above all, how could they not distrust what you see?
All theory is megalomania, a crime against interpersonal proportion. Could you imagine actually telling them what you believe? That they are hapless, cretinized, duped, tyrannized by their purchase patterns, their stories, their comedians and body-mass-indices?
That they are the They? The hoi-fucking-polloi?
But then it cuts both ways, doesn’t it? Maybe you catch a glimpse of it, now and again, the lunatic scale of your defection. The inkling of undergraduate condescension–of patience.
The knowledge that you would be murdered were this any other age hangs like implicit smoke about you.
If you are young, you’re still working through the consequences of what you are becoming. You still resent. You still primp and preen, declaim before make-shift worlds. You can still taste the transformation of frustrated pride into ingrown loathing. If you are not so young you have already learned to be wry and acerbic, to speak only to make the people around the turkey laugh or wonder. You find refuge in observation, and even manage to flatter yourself, on occasion, for your anthropological isolation. If you’re lucky, you recover joy in ways devious and orthogonal. You heap abuse upon what you have become as both lubricant and prophylactic. You imagine Zizek fucking groupies, wag your eyes at the circus that was once your passion.
You let mystery become the one simple. Perhaps find wisdom in exhaustion.
I’m not sure what I’m doing any more. What I’m writing or for whom, whether it’s important or outrageous or pathetic. I’ve never been a robust person. I’ve always been frail in ways that stoke a father’s outrage, enough to worry that I’m not a match for whatever it is–let alone this. Always faintly amazed that I have survived. And always driven.
All I know is that indulgences exact a toll.
Back when I was doing my PhD a friend of mine would walk his dog every night, one of those toy breeds that sound like rats when crossing hardwood floors. A classmate of ours, a solitary soul, happened to live a few doors down. He would see him through his patio doors every night, lying motionless on his couch watching the tube, soaked in erratic prints of white and blue. Every night. Passive. Watching. Wordless.
Never screaming.
And he would wonder about him. Theorize.
January 18, 2013
The Introspective Peepshow: Consciousness and the ‘Dreaded Unknown Unknowns’
Aphorism of the Day: That it feels so unnatural to conceive ourselves as natural is itself a decisive expression of our nature.
.
This is a paper I finished a couple of months back, my latest attempt to ease those with a more ‘analytic’ mindset into the upside-down madness of my views. It definitely requires a thorough rewrite, so if you see any problems, or have any questions, or simply see a more elegant way of getting from A to B, please sound off. As for the fixation with ‘show’ in my titles, I haven’t the foggiest!
Oh, yes, the Abstract:
“Evidence from the cognitive sciences increasingly suggests that introspection is unreliable – in some cases spectacularly so – in a number of respects, even though both philosophers and the ‘folk’ almost universally assume the complete opposite. This draft represents an attempt to explain this ‘introspective paradox’ in terms of the ‘unknown unknown,’ the curious way the absence of explicit information pertaining to the reliability of introspectively accessed information leads to the implicit assumption of reliability. The brain is not only blind to its inner workings, it’s blind to this blindness, and therefore assumes that it sees everything there is to see. In a sense, we are all ‘natural anosognosiacs,’ a fact that could very well explain why we find the consciousness we think we have so difficult to explain.”
More generally I want to apologize for neglecting the comments of late. Routine is my lifeblood, and I’m just getting things back online after a particularly ‘noro-chaotic’ holiday. The more boring my life is, the more excited I become.
January 8, 2013
Brassier’s Divided Soul
Aphorism of the Day: If science is the Priest and nature is the Holy Spirit, then you, my unfortunate friend, are Linda Blair.
.
And Jesus asked him, “What is your name?” He replied, “My name is Legion, for we are many.” - Mark 5:9
.
For decades now the Cartesian subject–whole, autonomous and diaphanous–has been the whipping-boy of innumerable critiques turning on the difficulties that beset our intuitive assumptions of metacognitive sufficiency. A great many continental philosophers and theorists more generally consider it the canonical ‘Problematic Ontological Assumption,’ the conceptual ‘wrong turn’ underwriting any number of theoretical confusions and social injustices. Thinkers across the humanities regularly dismiss whole theoretical traditions on the basis of some perceived commitment to Cartesian subjectivity.
My long-time complaint with this approach lies in its opportunism. I entirely agree that the ‘person’ as we intuit it is ‘illusory’ (understood in some post-intentional sense). What I’ve never been able to understand, especially given post-structuralism’s explicit commitment to radical contextualism, was the systematic failure to think through the systematic consequences of this claim. To put the matter bluntly: if Descartes’ metacognitive subject is ‘broken,’ an insufficient fragment confused for a sufficient whole, then how do we know that everything subjective isn’t likewise broken?
The real challenge, as the ‘scientistic’ eliminativism of someone like Alex Rosenberg makes clear, is not so much one of preserving sufficient subjectivity as it is one of preserving sufficient intentionality more generally. The reason the continental tradition first lost faith with the Cartesian and Kantian attempts to hang the possibility of intentional cognition from a subjective hook is easy enough to see from a cognitive scientific standpoint. Nietzsche’s ‘It thinks’ is more than pithy, just as his invocation of the physiological is more than metaphorical. The more we learn about what we actually do, let alone how we are made, the more fractionate the natural picture–or what Sellars famously called the ‘scientific image’–of the human becomes. We, quite simply, are legion. The sufficient subject, in other words, is easily broken because it is the most egregious illusion.
But it is by no means the only one. The entire bestiary of the ‘subjective’ is on the examination table, and there’s no turning back. The diabolical possibility has become fact.
Let’s call this the ‘Intentional Dissociation Problem,’ the problem of jettisoning the traditional metacognitive subject (person, mind, consciousness, being-in-the-world) while retaining some kind of traditional metacognitive intentionality–the sense-making architecture of the ‘life-world’–that goes with it. The stakes of this problem are such, I would argue, that you can literally use it to divide our philosophical present from our past. In a sense, one can forgive the naivete of the 20th century critique of the subject simply because (with the marvellous exception of Nietzsche) it had no inkling of the mad cognitive scientific findings confronting us. What is willful ignorance or bad faith for us was simply innocence for our teachers.
It is Wittgenstein, perhaps not surprisingly, who gives us the most elegant rendition of the problem, when he notes, almost in passing (see Tractatus, 5.542), the way so-called propositional attitudes such as desires and beliefs only make sense when attributed to whole persons as opposed to subpersonal composites. Say that Scott believes p, desires p, enacts p, and is held responsible for believing, desiring, and enacting. One night he murders his neighbour Rupert, shouting that he believes him a threat to his family and desires to keep his family safe. Scott is, one would presume, obviously guilty. But afterward, Scott declares he remembers only dreaming of the murder, and that while awake he has only loved and respected Rupert, and couldn’t imagine committing such a heinous act. Subsequent research reveals that Scott suffers from somnambulism, the kind associated with ‘homicidal sleepwalking’ in particular, such that his brain continually tries to jump from slow-wave sleep to wakefulness, and often finds itself trapped in between, with various subpersonal mechanisms running in ‘wake mode’ while others remain in ‘sleep mode.’ ‘Whole Scott’ suddenly becomes ‘composite Scott,’ an entity that clearly should not be held responsible for the murder of his neighbour Rupert. Thankfully, our legal system is progressive enough to take the science into account and see justice is done.
The problem, however, is that we are fast approaching the day where any scenario where Scott murders Rupert could be parsed in subpersonal terms and diagnosed as a kind of ‘malfunction.’ If you have any recent experience teaching public school you are literally living this process of ‘subpersonalization’ on a daily basis, where more and more the kinds of character judgements that you would thoughtlessly make even a decade or so ago are becoming inappropriate. Try calling a kid with ADHD ‘lazy and irresponsible,’ and you have identified yourself as lazy and irresponsible. High profile thinkers like Dennett and Pinker have the troubling tendency of falling back on question-begging pragmatic tropes when considering this ‘spectre of creeping exculpation’ (as Dennett famously terms it in Freedom Evolves). In How the Mind Works, for instance, Pinker claims “that science and ethics are two self-contained systems played out among the same entities in the world, just as poker and bridge are different games played with the same fifty-two-card deck” (55)–even though the problem is precisely that these two systems are anything but ‘self-contained.’ Certainly it once seemed this way, but only so long as science remained stymied by the material complexities of the soul. Now we find ourselves confronted by an accelerating galaxy of real world examples where we think we’re playing personal bridge, only to find ourselves trumped by an ever-expanding repertoire of subpersonal poker hands.
The Intentional Dissociation Problem, in other words, is not some mere ‘philosophical abstraction’; it is part and parcel of an implacable science-and-capital driven process of fundamental subpersonalization that is engulfing society as we speak. Any philosophy that ignores it, or worse yet, pretends to have found a way around it, is Laputan in the most damning sense. (It testifies, I think, to the way contemporary ‘higher education’ has bureaucratized the tyranny of the past, that at such a time a call to arms has to be made at all… Or maybe I’m just channelling my inner Jeremiah–again!)
In continental circles, the distinction of recognizing both the subtlety and the severity of the Intentional Dissociation Problem belongs to Ray Brassier, one of but a handful of contemporary thinkers I know of who’ve managed to turn their back on the apologetic impulse and commit themselves to following reason no matter where it leads–to thinking through the implications of an institutionalized science truly indifferent to human aspiration, let alone conceit. In his recent “The View from Nowhere,” Brassier takes as his task precisely the question of whether rationality, understood in the Sellarsian sense as the ‘game of giving and asking for reasons,’ can survive the neuroscientific dismantling of the ontological self as theorized in Thomas Metzinger’s magisterial Being No One.
The bulk of the article is devoted to defending Metzinger’s neurobiological theory of selfhood as a kind of subreptive representational device (the Phenomenal Self Model, or PSM) from the critiques of Jurgen Habermas and Dan Zahavi, both of whom are intent on arguing the priority of the transcendental over the merely empirical–asserting, in other words, that playing normative (Habermas) or phenomenological (Zahavi) bridge is the condition of playing neuroscientific poker. But what Brassier is actually intent on showing is how the Sellarsian account of rationality is thoroughly consistent with ‘being no one.’
As he writes:
Does the institution of rationality necessitate the canonization of selfhood? Not if we learn to distinguish the normative realm of subjective rationality from the phenomenological domain of conscious experience. To acknowledge a constitutive link between subjectivity and rationality is not to preclude the possibility of rationally investigating the biological roots of subjectivity. Indeed, maintaining the integrity of rationality arguably obliges us to examine its material basis. Philosophers seeking to uphold the privileges of rationality cannot but acknowledge the cognitive authority of the empirical science that is perhaps its most impressive offspring. Among its most promising manifestations is cognitive neurobiology, which, as its name implies, investigates the neurobiological mechanisms responsible for generating subjective experience. Does this threaten the integrity of conceptual rationality? It does not, so long as we distinguish the phenomenon of selfhood from the function of the subject. We must learn to dissociate subjectivity from selfhood and realize that if, as Sellars put it, inferring is an act – the distillation of the subjectivity of reason – then reason itself enjoins the destitution of selfhood. (“The View From Nowhere,” 6)
The neuroscientific ‘destitution of selfhood’ is only a problem for rationality, in other words, if we make the mistake of putting consciousness before content. The way to rescue normative rationality, in other words, is to find some way to render it compatible with the subpersonal–the mechanistic. This is essentially Daniel Dennett’s perennial argument, dating all the way back to Content and Consciousness. And this, as followers of TPB know, is precisely what I’ve been arguing against for the past several months, not out of any animus to the general view–I literally have no idea how one might go about securing the epistemic necessity of the intentional otherwise–but because I cannot see how this attempt to secure meaning against neuroscientific discovery amounts to anything more than an ingenious form of wishful thinking, one that has the happy coincidence of sparing the discipline that devised it. If neuroscience has imperilled the ‘person,’ and the person is plainly required to make sense of normative rationality, then an obvious strategy is to divide the person: into an empirical self we can toss to the wolves of cognitive science and into a performative subject that can nevertheless guarantee the intentional.
Let’s call this the ‘Soul-Soul strategy’ in contradistinction to the Soul-First strategies of Habermas and Zahavi (or the Separate-but-Equal strategy suggested by Pinker above). What makes this option so attractive, I think, anyway, is the problem that so cripples the Soul-First and the Separate-but-Equal options: the empirical fact that the brain comes first. Gunshots to the head put you to sleep. If you’ve ever wondered why ‘emergence’ is so often referenced in philosophy of mind debates, you have your answer here. If Zahavi’s ‘transcendental subject,’ for instance, is a mere product of brain function, then the Soul-First strategy becomes little more than a version of Creationism and the phenomenologist a kind of Young-Earther. But if it’s emergent, which is to say, a special product of brain function, then he can claim to occupy an entirely natural, but thoroughly irreducible ‘level of explanation’–the level of us.
This is far and away the majority position in philosophy, I think. But for the life of me, I can’t see how to make it work. Cognitive science has illuminated numerous ways in which our metacognitive intuitions are deceptive, effectively relieving deliberative metacognition of any credibility, let alone its traditional, apodictic pretensions. The problem, in other words, is that even if we are somehow a special product of brain function, we have no reason to suppose that emergence will confirm our traditional, metacognitive sense of ‘how it’s gotta be.’ ‘Happy emergence’ is a possibility, sure, but one that simply serves to underscore the improbability of the Soul-First view. There’s far, far more ways for our conceits to be contradicted than confirmed, which is likely why science has proven to be such a party crasher over the centuries.
Splitting the soul, however, allows us to acknowledge the empirically obvious, that brain function comes first, without having to relinquish the practical necessity of the normative. Therein lies its chief theoretical attraction. For his part, Brassier relies on Sellars’ characterization of the relation between the manifest and the scientific images of man: how the two images possess conceptual parity despite the explanatory priority of the scientific image. Brain function comes first, but:
The manifest image remains indispensable because it provides us with the necessary conceptual resources we require in order to make sense of ourselves as persons, that is to say, concept-governed creatures continually engaged in giving and asking for reasons. It is not privileged because of what it describes and explains, but because it renders us susceptible to the force of reasons. It is the medium for the normative commitments that underwrite our ability to change our minds about things, to revise our beliefs in the face of new evidence and correct our understanding when confronted with a superior argument. In this regard, science itself grows out of the manifest image precisely insofar as it constitutes a self-correcting enterprise. (4)
Now this is all well and fine, but the obvious question from a relentlessly naturalistic perspective is simply, ‘What is this “force” that “reasons” possess?’ And here it is that we see the genius of the Soul-Soul strategy, because the answer is, in a strange sense, nothing:
Sellars is a resolutely modern philosopher in his insistence that normativity is not found but made. The rational compunction enshrined in the manifest image is the source of our ability to continually revise our beliefs, and this revisability has proven crucial in facilitating the ongoing expansion of the scientific image. Once this is acknowledged, it seems we are bound to conclude that science cannot lead us to abandon our manifest self-conception as rationally responsible agents, since to do so would be to abandon the source of the imperative to revise. It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world. Shorn of this horizon, all cognitive activity, and with it science’s investigation of reality, would become pointless. (5)
Being a ‘subject’ simply means being something that can act in a certain way, namely, take other things as intentional. Now I know first hand how convincing and obvious this all sounds from the inside: it was once my own view. When the traditional intentional realist accuses you of reducing meaning to a game of make-believe, you can cheerfully agree, and then point out the way it nevertheless allows you to predict, explain, and manipulate your environment. It gives everyone what they want: you can yield explanatory priority to the sciences and yet still insist that philosophy has a turf. Whither science takes us, we need not move, at least when it comes to those ‘indispensable, ultimate horizons’ that allow us to make sense of what we do. It allows the philosopher to continue speaking in transcendental terms without making transcendental commitments, rendering it (I think anyway) a kind of ‘performative first philosophy,’ theoretically inoculating the philosopher against traditional forms of philosophical critique (which require ontological commitment to do any real damage).
The Soul-Soul strategy seems to promise a kind of materialism without intentional tears. The problem, however, is that cognitive science is every bit as invested in understanding what we do as in describing what we are. Consider Brassier’s comment from above: “It is our manifest self-understanding as persons that furnishes us, qua community of rational agents, with the ultimate horizon of rational purposiveness with regard to which we are motivated to try to understand the world.” From a cognitive science perspective one can easily ask: Is it? Is it our ‘manifest understanding of ourselves’ that ‘motivates us,’ and so makes the scientific enterprise possible?
Well, there’s a growing body of research that suggests we (whatever we may be) have no direct access to our motives, but rather guess with reference to ourselves using the same cognitive tools we use to guess at the motives of others. Now, the Soul-Soul theorist might reply, ‘Exactly! We only make sense to ourselves against a communal background of rational expectations…’ but they have actually missed the point. The point is, our motivations are occluded, which raises the possibility that our explanatory guesswork has more to do with social signalling than with ‘getting motivations right.’ This effectively blocks ‘motivational necessity’ as an argument securing the ineliminability of the intentional. It also raises the question of what kind of game we are actually playing when we play the so-called ‘game of giving and asking for reasons.’ All you need consider is the ‘spectre’ of neuromarketing in the commercial or political arena, where one interlocutor secures the assent of the other by treating that other subpersonally (explicitly, as opposed to implicitly, which is arguably the way we treat one another all the time).
Any number of counterarguments can be adduced against these problems, but the crucial thing to appreciate is that these concerns need only be raised to expose the Soul-Soul strategy as mere make-believe. Sure, our brains are able to predict, explain, and manipulate certain systems, but the anthropological question requiring scientific resolution is one of where ‘we’ fit in this empirical picture, not just in the sense of ‘destitute selves,’ but in every sense. Nothing guarantees an autonomous ‘level of persons,’ not incompatibility with mechanistic explanation, and least of all speculative appraisals (of the kind, say, Dennett is so prone to make) of its ‘performative utility.’
To sharpen the point: If we can’t even say for sure that we exist the way we think, how can we say that our brains nevertheless do the things we think they do, things like ‘inferring’ or ‘taking-as intentional’?
Brassier writes:
The concept of the subject, understood as a rational agent responsible for its utterances and actions, is a constraint acquired via enculturation. The moral to be drawn here is that subjectivity is not a natural phenomenon in the way in which selfhood is. (32)
But as a doing it remains a ‘natural phenomenon’ nonetheless (what else would it be?). As such, the question arises: Why should we expect that ‘concepts’ will suffer a more metacognitive-intuition-friendly fate than ‘selves’? Why should we think the sciences of the brain will fail to revolutionize our traditional normative understanding of concepts, perhaps relegate it to a parochial but ineliminable shorthand forced upon us by any number of constraints or confounds, or so contradict our presumed role in conceptual thinking as to make ‘rationality’ as experienced a kind of fiction? What we cognize as the ‘game of giving and asking for reasons,’ for all we know, could be little more than the skin of plotting beasts, an illusion foisted on metacognition for the mere want of information.
Brassier writes:
It forces us to revise our concept of what a self is. But this does not warrant the elimination of the category of agent, since an agent is not a self. An agent is a physical entity gripped by concepts: a bridge between two reasons, a function implemented by causal processes but distinct from them. (32)
Is it? How do we know? What ‘grips’ what how? Is the function we attribute to this ‘gripping’ a cognitive mirage? As we saw in the case of homicidal somnambulism above, it’s entirely unclear how subpersonal considerations bear on agency, whether understood legally or normatively more generally. But if agency is something we attribute, doesn’t this mean the sleepwalker is a murderer merely if we take him to be? Could we condemn personal Scott to death by lethal injection in good conscience knowing we need only think him guilty for him to be so? Or are our takings-as constrained by the actual function of his brain? But then how can we scientifically establish ‘degrees of agency’ when the subpersonal, the mechanistic, has the effect of chasing out agency altogether?
These are living issues. If it weren’t for the continual accumulation of subpersonal knowledge, I would say we could rely on collective exhaustion to eventually settle the issue for us. Certainly philosophical fiat will never suffice to resolve the matter. Science has raised two spectres that only it can possibly exorcise (while philosophy remains shackled on the sidelines). The first is the spectre of Theoretical Incompetence, the growing catalogue of cognitive shortcomings that probably explain why only science can reliably resolve theoretical disputes. The second is Metacognitive Incompetence, the growing body of evidence that overthrows our traditional and intuitive assumptions of self-transparency. Before the rise of cognitive science, philosophy could continue more or less numb to the pinch of the first and all but blind to the throttling possibility of the second. Now, however, we live in an age where massive, wholesale self-deception, no matter what logical absurdities it seems to generate, is a very real empirical possibility.
What we intuit regarding reason and agency is almost certainly the product of compound neglect and cognitive illusion to some degree. It could be the case that we are not intentional in such a way that we must (short of the posthuman, anyway) see ourselves and others as intentional. Or even worse, it could be the case that we are not intentional in such a way that we can only see ourselves and others as intentional whenever we deliberate on the scant information provided by metacognition–whenever we ‘make ourselves explicit.’ Whatever the case, whether intentionality is a first- or second-order confound (or both), this means that pursuing reason no matter where it leads could amount to pursuing reason to the point where reason becomes unrecognizable to us, to the point where everything we have assumed will have to be revised–corrected. And in a sense, this is the argument that does the most damage to Sellars’ particular variant of the Soul-Soul strategy: the fact that science, having obviously run to the limits of the manifest image’s intelligibility, nevertheless continues to run, continues to ‘self-correct’ (albeit only in a way that we can understand ‘under erasure’), perhaps consigning its wannabe guarantor and faux-motivator to the very dust-bin of error it once presumed to make possible.
In his recent After Nature interview, Brassier writes:
[Nihil Unbound] contends that nature is not the repository of purpose and that consciousness is not the fulcrum of thought. The cogency of these claims presupposes an account of thought and meaning that is neither Aristotelian—everything has meaning because everything exists for a reason—nor phenomenological—consciousness is the basis of thought and the ultimate source of meaning. The absence of any such account is the book’s principal weakness (it has many others, but this is perhaps the most serious). It wasn’t until after its completion that I realized Sellars’ account of thought and meaning offered precisely what I needed. To think is to connect and disconnect concepts according to proprieties of inference. Meanings are rule-governed functions supervening on the pattern-conforming behaviour of language-using animals. This distinction between semantic rules and physical regularities is dialectical, not metaphysical.
Having recently completed Rosenberg’s The Atheist’s Guide to Reality, I entirely concur with Brassier’s diagnosis of Nihil Unbound’s problem: any attempt to lay out a nihilistic alternative to the innumerable ‘philosophies of meaning’ that crowd every corner of intellectual life without providing a viable account of meaning is doomed to the fringes of humanistic discourse. Rosenberg, for his part, simply bites the bullet, relying on the explanatory marvels of science and its obvious incompatibilities with meaning to warrant dispensing with the latter. The problem, however, is that his readers can only encounter his case through the lens of meaning, placing Rosenberg in the absurd position of using argumentation to dispel what, for his interlocutors, lies in plain sight.
Brassier, to his credit, realizes that something must be said about meaning, that some kind of positive account must be given. But in the absence of any positive, nihilistic alternative–any means of explaining meaning away–he opts for something deflationary: he turns to Sellars (as did Dennett), and to the presumption that meaning pertains to a different, dialectical order of human community and interaction. This affords him the appearance of having it both ways (like Dennett): deference to the priority of mechanism, while insisting on the parity of meaning and reason, arguing, in effect, that we have two souls, one a neurobiological illusion, the other a ‘merely functional’ instrument of enormous purport and power…
Or so it seems.
What I’ve tried to show is that cognitive science cares not a whit whether we characterize our commitments as metaphysical or dialectical, that it is just as apt to give the lie to metacognitively informed accounts of what we do as to metacognitively informed accounts of what we are. ‘Inferring’ is no more immune to radical scientific revision than is ‘willing’ or ‘believing’ or ‘taking as’ or what have you. So, for example, if the structures underwriting consciousness in the brain were definitively identified, and the information isolated as ‘inferring’ could be shown to be, say, a distorted low-dimensional projection, a jury-rigged ‘fix’ for far different evolutionary pressures, would we not begin, in serious discussions of cognition or what have you, to continually reference these limitations to the degree they distort our understanding of the actual activity involved? If it becomes a scientific fact that we are a far different creature in a far different environment than what we take ourselves to be, will that not radically transform any discourse that aspires to be cognitive?
Of course it will.
Perhaps the post-intentional philosophy of the future will see the ‘game of giving and asking for reasons’ as a fragmentary shadow, a comic strip version of our actual activity, more distortion than distillation because neither the information nor the heuristics available for deliberative metacognition are adapted to the needs of deliberative metacognition.
This is one reason why I think ‘natural anosognosia’ is such an apt way to describe our straits. We cannot get past the ‘only game in town’ sense of agency, primarily because there’s nothing else to be got. This is the thing about positing ‘functions’: the assumption is that what we experience does what we think it does the way we think it should. There is no reason to assume this must be the case once we appreciate the ubiquity and the consequences of informatic neglect (and our resulting metacognitive incompetence). We have more than enough in the way of counterintuitive findings to worry that we are about to plunge over a cliff–that the soul, like the sky, might simply continue dropping into an ever deeper abyss. The more we learn about ourselves, the more post hoc and counterintuitive we become. Perhaps this is astronomically the case.
Here’s the funny thing: the naturalistic fundamentals are exceedingly clear. Humans are information systems that coordinate via communicated information. The engineering (reverse or forward) challenges posed by this basic picture are enormous, but conceptually, things are pretty clear–so long as you keep yourself off-screen.
We are the only ‘fundamental mystery’ in the room. The problem of meaning is the problem of us.
In addition to Rosenberg’s Atheist’s Guide to Reality, I also recently completed reading Plato’s Camera by Churchland and The Cognitive Science of Science by Thagard, and I found the contrast… bracing, I guess. Rosenberg made stark the pretence (or more charitably, promise) marbled throughout Churchland and Thagard, the way they ceaselessly swap between the mechanistic and the intentional as if their descriptions of the first, by the mere fact of loosely correlating with our assumptions regarding the second, somehow explained the second. Thagard, for instance, goes so far as to claim that the ‘semantic pointer’ model of concepts that he adapts from Eliasmith (of recent SPAUN fame) solves the symbol grounding problem, without so much as mentioning how, when, or where semantic pointers (which are eminently amenable to BBT) gain their hitherto inexplicable normative/intentional properties. In other words, they simply pretend there’s no real problem of meaning–even Churchland! “Ach!” they seem to imply, “Details! Details!”
Rosenberg will have none of it. But since he has no way of explaining ‘us,’ he attempts the impossible: he tries to explain us away without explaining us at all, arguing that we are a problem for neuroscience, not for scientism (the philosophical hyper-naturalism that he sees following from the sciences). He claims ‘we’ are philosophically irrelevant because ‘we’ are inconsistent with the world as described by science, not realizing the ease with which this contention can be flipped into the claim that the sciences are philosophically irrelevant so long as they remain inconsistent with us…
Theoretical dodge-ball will not do. Brassier understands this more clearly than any other thinker I know. The problem of meaning has to be tackled. But unlike Jesus, we cannot cast the subpersonal out into two thousand suicidal swine. ‘Going dialectical,’ abandoning ‘selves’ for the perceived security of ‘rational agency,’ ultimately underestimates the wholesale nature of the revisionary/eliminative threat posed by the cognitive sciences, and the degree to which our intentional self-understanding relies on ignorance of our mechanistic nature. Any scientific account of physical regularities that explains semantic rules in terms that contradict our metacognitive assumptions will revolutionize our understanding of ‘rational agency,’ no matter what definitional/theoretical prophylactics we have in place.
Habermas’ analogy of “a consciousness that hangs like a marionette from an inscrutable criss-cross of strings” (“The Language Game or Responsible Agency and the Problem of Free Will,” 24) seems more and more likely to be the case, even at the cost of our ability to make metacognitive sense of our ‘selves’ or our ‘projects.’ (Evolution, to put the point delicately, doesn’t give a flying fuck about our ability to ‘accurately theorize’). This is the point I keep hammering via BBT. Once deliberative theoretical metacognition has been overthrown, it’s anybody’s guess how the functions we attribute to ourselves and others will map across the occluded, orthogonal functions of our brain. And this simply means that the human in its totality stands exposed to the implacable indifference of science…
I think we should be frightened–and exhilarated.
Our capacity to cognize ourselves is an evolutionary shot in the neural dark. Could anyone have predicted that ‘we’ have no direct access to our beliefs and motives, that ‘we’ have to interpret ourselves the way we interpret others? Could anyone have predicted the seemingly endless list of biases discovered by cognitive psychology? Or that the ‘feeling of willing’ might simply be the way ‘we’ take ownership of our behaviour post hoc? Or that ‘moral reasoning’ is primarily a PR device? Or that our brains regularly rewrite our memories? Think of Hume, the philosopher-prophet, and his observation that Adam could never deduce that water drowns or fire burns short of worldly experience. What we do, like what we are, is a genuine empirical mystery simply because our experience of ourselves, like our experience of earth’s motionless centrality, is the product of scant and misleading information.
The human in its totality stands exposed to the implacable indifference of science, and there are far, far more ways for our intuitive assumptions to be wrong than right. I sometimes imagine I’m sitting around this roulette wheel, with nearly everyone in the world ‘going with their gut’ and stacking all their chips on the zeros, so there’s this great teetering tower swaying on intentional green, leaving the rest of the layout empty… save for solitary corner-betting contrarians like me and, I hope, Brassier.
January 2, 2013
The ‘Human’: Discovery or Invention?
Aphorism of the Day: “A lack of historical sense is the congenital defect of all philosophers. Some unwittingly even take the most recent form of man, as it developed under the imprint of certain religions or even certain political events, as the fixed form from which one must proceed.” – Nietzsche, Human, All Too Human
.
Hello, all! I’m Roger Eichorn, a guest-blogger, back from a rather lengthy sojourn in which I did a lot of philosophy, studied German, and started home-recording an album of original music. 2012 was a busy year for me. I expect 2013 to be even busier—but with more pay-offs. To begin with, I have an article on Sextus Empiricus coming out in the journal Ancient Philosophy this spring, which is nice. Looking ahead, my dissertation (on the history of Pyrrhonian skepticism in nineteenth- and twentieth-century German philosophy) should be well on its way to completion by year’s end. In addition, I hope to finish my album and, biggest of all, my fantasy novel, The House of Yesteryear, before celebrating another New Year.
But enough about me. I’ve been watching from the sidelines as Bakker has developed his ‘Blind-Brain Theory’ here at the TPB. I freely admit that I did not follow all his posts last year—but not for lack of interest. For my money, the BBT (given what I understand of it, anyway) is the most promising and philosophically exciting project currently on the philosophy-of-mind market. It emerges from a rare combination of scientific literacy and philosophical virtuosity, the latter in the sense both of (a) a wide-ranging knowledge of the field and (b) the sort of inherent creativity all truly great philosophers possess. Too often, philosophers suffer from a lack of sufficient appreciation for (or ‘fluency in’) science, while scientists’ lack of philosophical sophistication leaves them unable to articulate the philosophical implications of their own discoveries—even when they explicitly set out to do so. (Take, for example, Hawking’s latest book.) It is surprisingly rare, in my experience, to find a thinker willing to approach science philosophically without presuming the superiority of philosophical modes of reflection and to approach philosophy scientifically without taking on board some sort of neutralizing conceptual framework that allows him or her to settle or dismiss intractable philosophical problems with a shrug or a wave of the naïve-epistemic-optimism wand.
The BBT suffers from neither of these problems, as far as I can see. It is a monument to Bakker’s intellectual conscience: his willingness to place question-marks over anything and everything, and his restless search for a unified, coherent, and compelling account of the human ontologico-epistemic predicament.
As philosophers—as thinkers—we are all of us, however, at sea on Neurath’s boat, able to repair planks of our ship, but not all of them at once. In order to question, some things must be put beyond question; they must be taken for granted. (I’m intrigued by the idea that this might represent an epistemological parallel to Bakker’s idea of neural ‘informatic occlusion.’) What I want to explore in this post is whether Bakker’s theory incorporates science and philosophy so well at the expense of history, in particular intellectual history (including the history of philosophy). I want, in other words, to see if the BBT stands up to the test of what Nietzsche called ‘historical philosophizing.’
It is tempting, from the perspective of the BBT, to react to the raising of this question by claiming that ‘intellectual history,’ according to the BBT, can be nothing but a tissue of half-truth and outright confabulation. The BBT comes before intellectual history, as it were, and demolishes its foundations. But it can do so only given the truth of elements of our intellectual-historical heritage. This is the sort of double-bind in which the BBT finds itself, for it itself is nothing if not a positive philosophical theory. Most obviously, in order for the BBT to stand up, we need to hold in place an account of (or a blind faith in) science such that its explanatory power can underwrite the data and premises of the BBT. Blind faith is an affront to any healthy intellectual conscience, so we must reject it. Yet there is no satisfactory philosophical account of science. Arguments can be made in science’s favor, of course—loads of them. But any attempt to defend science must eventually, to paraphrase Wittgenstein, run out of reasons. We ask, ‘Why?’, and can find a ‘because’ for a time—but eventually our ‘becauses’ peter out. In the end, we can only point to science’s achievements, its apparent autonomy from (its utter lack of any need for) philosophical underpinning. Science simply marches on, regardless of what people at any given time think or say about it. Moreover, science seems to be remarkably successful at shifting intuitions, or altering the way in which we ‘intuitively’ see ourselves and the world, how the world ‘shows up’ for us—regardless, again, of what anyone says or thinks at any given time about science. This is truly remarkable, if you think about it. Generation to generation, science alters our world-pictures without anyone’s consent. As I like to say, you can lock up Galileo, but sooner or later your descendants will exonerate him. Science simply doesn’t care what you think—but you should sure as shit care what it thinks, for it is quite likely that (in general outline, at least) what it thinks represents what future generations will take for granted.
These considerations are enough, in my view, to demolish the objection from the philosophically problematic character of scientific knowledge. Ultimately, the objection misses the point. Yet it suggests a different sort of objection. The efficacy of science is, after all, not the only thing that must be held in place in order for the BBT to do the work Bakker wants it to. We must also hold in place an account of the ‘appearance’ of consciousness. In a sense, this is a different sort of objection from the first, for it does not directly target the epistemic credentials of the BBT; rather, it targets the radicality with which Bakker is eager to credit it. Still, it seems to me that the BBT can only be understood as Bakker would have us understand it if it is understood as being radical. If this is right, then to undermine its radicality is to undermine the theory as Bakker understands it.
Now, the putative radicality of the BBT follows from the way in which it is said to deviate from our ‘intuitive’ understanding of ourselves. In Bakker’s previous post, he refers to “consciousness and intentionality as we intuit them.” It is the ‘fissure’ that the BBT opens up between our ‘knowledge’ of ourselves and our ‘experience’ of ourselves that is prophesied to harbinger the ‘Akratic Culture’ whose coming Bakker ‘mourns.’ Such a culture is ‘akratic’ because our knowledge of ourselves, it is supposed, can never be squared with our experience of ourselves, in which case we will never be able, experientially, to believe what we know about ourselves, i.e., we will never believe we are other-than-we-experience-ourselves-to-be with the sort of lived conviction that only comes from ‘inhabiting’ a fact. Our experience is bound, Bakker claims, to be unmasked as “vacant affirmation and subreptive autonomy.”
But it seems plausible, I want to suggest, that our ‘intuitive experience’ of ourselves and the world is an artifact of a given biological and historico-cultural situation. It is not fixed, not an ‘eternal fact.’ There is, in other words, no stable ‘enemy’ for the BBT to pit itself against. It can, of course, pit itself against our intuitive self-understanding (or self-experience), and that may be radical enough. But why should we think, with Bakker, that “if the [BBT]… turns out to be correct, it will be the last theory in the history of philosophy as traditionally conceived”? That may be true (more on this point in a moment), but let’s consider Bakker’s reasons for advancing this view. The BBT, he claims, will effectively destroy philosophy because it will “transform the great intentional problems of philosophy into the mechanical subject matter of cognitive neuroscience.” This too may well turn out to be true—but it is hardly a necessary consequence of confirming, ‘scientifically,’ the truth of the BBT.
If the argument I made above (regarding the efficacy-of-science objection) is right, then the ‘confirmation’ of the BBT (if confirmed it is destined to be) is likely to be more a matter of a shift in world-picture than of a shift in what scientists or philosophers are willing or able to draw as the conclusions of their arguments. This is why the BBT is a “precursor of the posthuman.” Bakker’s alarming remarks about the possible technological applications of neuroscience, when seen from a ‘posthuman’ perspective, all apply to the nebulous period in which our knowledge outruns our experience in such a way that we are unable to believe what we know. Upon the advent of the posthuman, however, this problem ought to disappear, for there will no longer be a gap between our ‘manifest image’ of ourselves (as ‘humans’ with ‘minds’) and our ‘scientific image’ of ourselves (as ‘machines’). It seems, then, that the ‘akratic culture’ is merely an intermediary stage. Humans used to believe that the sun orbited the earth. We do not believe this any longer, but not because we were ever convinced of the claim that the earth orbits the sun. No, we inherited this belief as part of our world-picture. The menace of the BBT is the idea that the same shift in world-picture is going to occur with regard to ourselves—but with the key difference that, in the case of ourselves, the only way to square our knowledge with our experience entails transforming that experience such that we are no longer ourselves, no longer human.
Yet—and here’s my main point—what does it mean to be ‘human’? A survey of intellectual history suggests not only that the ‘intuitive’ conception of the human against which Bakker pits the BBT is a contingent artifact of a particular cultural tradition, but also that it is in fact a relative novelty in human history. It is a novelty not so much in its particulars as in the purity of their expression. Some of the most prominent traits of our self-understanding include autonomy and agency, intentionality and individuality. These ideas, it seems, go back to what Karl Jaspers called ‘the Axial Age,’ the original flowering of intellectual enlightenment—simultaneously yet independently—in Greece, India, and China. The Axial Age saw the emergence of a conception of the human that our more recent Western enlightenment spent several centuries refining. Prior to the Axial Age, it seems that ‘consciousness’ did not ‘appear’ to human beings the way it is said to ‘appear’ to us today. Even more recently—and still today, in most places—the enlightenment conception was typically diluted with something more like a pre-enlightenment conception.
The easiest way to differentiate the two conceptions is with respect to their views regarding practical agency. Bruno Snell argued, in his fascinating book The Discovery of the Mind, that “Homer’s man does not yet regard himself as the source of his own decisions; that development was reserved for tragedy… [P]rimitive man… has not yet roused himself to an awareness of his own freedom.” We can of course quibble endlessly over the accuracy of Snell’s claims; but if we take evolution seriously, then we’re bound to suppose that there is some similar sort of story to be told. The important point here is that Snell’s book is an account of the discovery of our enlightened natures: the discovery that we are free, autonomous individuals, ‘in the world but not of the world’ (or ‘locally nonlocal,’ as Bakker would say). In a reversal with astounding implications, the pre-enlightened ‘manifest image’ of Snell’s Homeric humanity, which once-upon-a-time gave way (at least in part) to a ‘scientific image’ that was given voice by tragedians and was later refined by philosophers and theologians—thereby becoming, in time, a new manifest image—is now itself giving way to a ‘scientific image’ that can be seen as a reversion to something much closer to the ‘manifest image’ of Homer’s time!
The enlightened conception of the human, it seems, was in all likelihood not a discovery at all, but an invention—and an artistic invention, at that. As this broad historical sketch makes plain, abandoning the invented enlightenment conception of the human does not entail endorsing the conception of the human that is said to follow from the BBT. To claim otherwise is to open oneself to the ‘efficacy-of-science’ objection I discussed above. It seems, then, that the apocalyptic overtones of the BBT depend ultimately on predictions regarding possible technological applications of neuroscience that are capable of transforming the way our brains work. There may be a host of excellent arguments to support such predictions, but the fact (if it is a fact) that the supposed radicality of the BBT depends on such predictions renders it tenuous.
Demonstrating the falsity of the enlightenment conception of the human is not sufficient to determine its replacement conception, especially given the fact that there are a great many alternative conceptions—ones far more congenial to the BBT—already in existence. Alan Watts, for instance, begins his book The Book: On the Taboo Against Knowing Who You Are by claiming that “the prevalent sensation of oneself as a separate ego enclosed in a bag of skin is a hallucination which accords neither with Western science nor with the experimental philosophy-religions of the East.” What we need, Watts argues, is “a new experience—a new feeling of what it is to be ‘I.’” Why should we not think that such ‘new experiences’—such new ‘appearances of consciousness’—might arise? Looked at from an intellectual-historian’s perspective, we might say: Why should we not think that the BBT, far from being the last theory in the history of philosophy, is merely a doorway to a new kind of philosophizing? Doesn’t it remain an open question, even given the BBT, just how exactly we conceive of ourselves? It seems to me that the answer to this question is yes, in which case the ‘radicality’ of the BBT is either (a) dependent on possible future events (viz., regarding technological applications of neuroscience) going a particular way, with particular results, or (b) dependent on the implausible notion that there is such a thing as ‘human nature’ that can be ‘discovered’ by means of any known scientific or philosophical techniques. If our experience of ourselves is instead a sort of invention, then though the BBT may close down some inventive options, it is bound to open up new ones—or, as the case may be, old ones.
Watts characterizes the self-experience afforded by Eastern religious practices in the following way: “We do not ‘come into’ this world; we come out of it, as leaves from a tree. As the ocean ‘waves,’ the universe ‘peoples.’ Every individual is an expression of the whole realm of nature, a unique action of the total universe.” Strip away the poetry and we’re left with a picture that strikes me as surprisingly congenial to the BBT. My point is not that Watts—or Hinduism generally—is right; I use him merely as an example of a different sort of self-conception, one that, like Snell’s ‘Homeric man,’ seems less at odds with the BBT than is our modern, ‘enlightened’ self-conception.
Poetry can be writ slantwise across even the ugliest and most prosaic facts—and, in doing so, can even be ‘enlightening,’ though it lies.
December 31, 2012
How to Build a First Person (Using only Natural Materials)
Aphorism of the Day: Birth is the only surrender to fate possible.
.
In film you have the famous ‘establishing shot,’ a brief visual survey, usually a long or medium shot, of the space the ensuing sequence will analyze along more intimate angles. Space, you could say, is the conclusion that comes first, the register that always precedes its analysis. Some directors play with this, continually forcing their audience into the analysis absent any spatial analysand. The viewer is thrown, disoriented as a result. Sometimes directors build outward, using the lure of established space as a kind of narrative instrument. Sometimes they shackle the eye to detail, mechanically denying events their place, and so inciting claustrophobia in the airy void of the theatre. They use the space represented to wage war against the space of representing.
If the same has happened here, it’s been entirely inadvertent. I’m not sure how I’ll look back at this year–this attempt to sketch out ‘post-intentional philosophy.’ It’s been a tremendously creative time, to be sure. A hundred thousand words for the beast that is The Unholy Consult, and easily as much written here. I’m not sure I’ve ever enjoyed such a period of intense creativity. These posts have simply been dropping in my head, one after another, some as long as journal articles, most all of them bristling with detail, jargon, and counterintuitive complexities. When I think about it, I’m blown away that Three Pound Brain has grown the way it has, half-again over last year…
For I wanketh.
Large.
Now I want to think the explanation is simple, that against all reason, I’ve managed to climb into a new space, an undiscovered country. But all I know for sure is that I’m arguing something genuinely new–something genuinely radical. So folly or not, I pursue, run down what seem to be the never-ending permutations of this murderous take on the human soul. We have yet to see what science will make of us. And we have very little reason to believe our hearts won’t be broken the way human hearts are almost always broken when they pitch traditional hope against scientific indifference. Who knows? Three Pound Brain could be the place, the cradle where our most epic delusion dies.
Either way, the time has come to pan back, crank up the depth of field, and finally provide some kind of establishing shot. This ain’t going to be easy–for me or you. At a certain level the formulations are almost preposterously simplistic (a ‘machinology’ as noir-realism, I think, termed it). I’m talking about the brain in exceedingly general terms, after all. I could delve into the (of course stochastic) mechanics in more detail, I suppose, go ‘neuroanatomical’ in an effort to add more empirical plumage. I still intend to write about the elegant way the Blind Brain Theory falls out of Bayesian predictive-coding models of the brain.
But for the nonce, I don’t need to. The apparently insuperable conundrums of the first person, the consciousness we think we have, can be explained using some quite granular structural and developmental assumptions. We just need to turn our normal way of looking at things upside down–to stop viewing our metacognitive image of meaning and agency as some kind of stupendous achievement. Why? Because doing so takes theoretical metacognition at its word, something that cognitive science has shown–quite decisively–to be the province of fools. If anything, the ‘stupendous achievement’ is the one possessing far and away the greatest evolutionary pedigree and utilizing the most neural resources: environmental cognition. Taking this as our baseline, we can begin diagnosing the ancient perplexities of the metacognitive image as the result of informatic occlusion and cognitive overreach.
We could be a kind of dream, you and I, one that isn’t even useful in any recognizable manner. This is where the difficulty lies: the way BBT requires we contravene our most fundamental intuitions.
It’s all about the worst case scenario. Philosophy, to paraphrase Brassier, is no sop to desire. If science stands poised to break us, then thought must submit to this breaking in advance. The world never wants for apologists: there will always be an army of Rosenthals and Badious. Someone needs to think these things, no matter how dehumanizing or alienating they seem to be. Besides, only those who dare thinking the post-intentional need fear ‘losing’ anything. If meaning and morality are the genuine emergent realities that the vast bulk of thinkers, analytic or continental, assume them to be, they should be able to withstand any sustained attempt to explain them away.
And if not? Well then, welcome to the future.
.
So, how do you build a first person?
Imagine the sum of information, understood in the deliberately vague sense of systematic differences making systematic differences, comprising you and your immediate environment. The holy grail of consciousness research is simply understanding how what you are experiencing this very moment fits into this ‘natural informatic field.’ The brass ring, in other words, is understanding how you qua person reside in you qua organism–explaining, that is, how mechanism generates consciousness and intentionality.
Now until recently, science could only track natural processes up to your porch. You qua organism are a mansion of astronomical complexities, and even as modern medicine overran your outer defences, your brain remained an unconquerable citadel, the one place in nature where the old, prescientific games of giving-and-asking-for-reasons could flourish. This is why I continually talk about the ‘bonfire of the humanities,’ the impending collapse of the traditional discourses of the soul. This is why I continually speak of BBT in eschatological terms, pose it as a precursor of the posthuman: if scientifically confirmed, it means that Man-the-meaning-maker is of a piece with Man-the-image-of-God and Man-the-centre-of-the-universe, that noocentrism will join biocentrism and geocentrism in the reliquary of human intellectual conceit and folly. And this is why I mourn ‘Akratic Culture,’ society fissured by the scission of knowledge and experience, with managerial powers exploiting the mechanistic efficiencies of the former, and the client masses fleeing into the intentional opacities of the latter, seeking refuge in vacant affirmation and subreptive autonomy.
So how does the soul fit into the natural informatic field? BBT argues that the best way to conceive the difference between the first and third person is in terms of informatic neglect. Since the structure and function of the brain are dedicated to reliably modelling the structure and function of its environment, the brain remains that part of the environment that it cannot reliably model. BBT terms the modelling structure and function ‘medial’ and the modelled structure and function ‘lateral.’ The brain’s inability to model its modelling, it terms medial neglect. Medial neglect simply means the brain cannot cognize itself as a brain, and so must cognize itself otherwise. This ‘otherwise’ is what we call the soul, mind, consciousness, the first-person, being-in-the-world, etc.
So consider a perspective on a brain:
Note that the target here is your perspective on the diagrammed brain, not the brain itself. Since the structure and function of your brain are dedicated to modelling the structure and function of your environment, the modelling nowhere appears within the modelled as anything resembling the modelled, even though we know the brain modelling is as much a brain as the brain modelled. The former, rather, provides the ‘occluded frame’ of the latter. At any given moment your perspective ‘hangs,’ as it were, outside of everything. You can pause and reflect on your perspective, of course, model your modelling, as, say, something like this:
but only from the standpoint of another ‘occluded frame,’ the oblivion of medial neglect. This second diagram, in other words, can only model the medial, neurofunctional information neglected in the first by once again neglecting that information. No matter how many times we stack these diagrams, how far we press the Rylean regress, we will still be stranded with medial neglect, the ‘unframed frame’ of the first person. The reason for this, it is important to note, is purely mechanical as opposed to semantic: the machinery of modelling simply cannot model itself as it models.
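For the programmatically inclined, the point can be caricatured in a few lines of toy Python. This is a sketch of my own, nothing more, and every name in it is merely illustrative: a ‘brain’ that maps its environment, but whose act of mapping never shows up among the things mapped, no matter how many levels of ‘reflection’ you stack.

```python
# A toy caricature of medial neglect (illustrative only).
# The 'model' function maps everything in its environment,
# but the act of modelling itself never appears in its output.

def model(environment):
    """Model every item in the environment--except the modelling."""
    return {item: "modelled" for item in environment}

world = {"apple", "sidewalk", "a brain (someone else's)"}

first_order = model(world)
assert "the act of modelling" not in first_order  # medial neglect

# 'Reflection' just runs the same machinery on a new target,
# and the new act of modelling is occluded in its turn:
second_order = model(set(first_order))
assert "the act of modelling" not in second_order

# Stack these as deep as you like (the Rylean regress): every
# level arrives framed by its own occluded act of modelling.
```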
But even though medial neglect means thoroughgoing neurofunctional occlusion–the modelling brain nowhere appears within the first person–these diagrams show the occlusion is by no means complete. As mentioned above, the brain’s inability to model itself as a brain (another natural mechanism in its environment) means it must model itself as a ‘perspective,’ something at once situated within its environment, and somehow mysteriously hanging outside of it–both local and nonlocal.
Many of the apparent peculiarities belonging to consciousness and intentionality as we intuit them, on the BBT account, turn on either medial neglect directly or one of a number of other structural and developmental confounds such as brain complexity, evolutionary caprice, and access invariance. The brain, unable to model itself as a brain, is forced to rely on what little metacognitive information its structure and evolutionary development afford.
This is where informatic neglect becomes a problem more generally, which is to say, over and above the problems posed by medial neglect in particular. We now know human cognition is fractionate, a collection of situation-specific problem-solving devices, and yet we have no direct awareness of relying on anything save a singular, universal capacity for problem-solving. We regularly rely on dubious information, resort to the wrong device on the wrong occasion, entirely convinced of the justness of our cause, the truth of our theory, or what have you.
Mistakes like these and others reveal the profound and peculiar structural role informatic neglect plays in conscious experience. In the absence of information pertaining to our (medial) causal relation to our environment, we experience aboutness. In the absence of discriminations (in the absence of information) we experience wholes. In the absence of information regarding the insufficiency of information, we presume sufficiency.
But the most difficult-to-grasp structural quirk of informatic neglect has to be the ‘local nonlocality’ we encountered above, what I’ve been calling asymptosis, the fact that the various limits of cognitive and perceptual modalities cannot figure within those cognitive and perceptual modalities. As mechanical, no neural subsystem can model its modelling as it models. This is why, for instance, you cannot see the limits of your visual field–or why, in other words, the boundary of your visual field is asymptotic.
So in the diagrams above, you see a brain and none of the neural machinery responsible for that seeing primarily because of informatic neglect. It is you, a whole (and autonomous) person, seeing that brain and not a fractionate conglomerate of subpersonal cognitive mechanisms because of informatic neglect. Likewise, this metacognitive appraisal that it is ‘you’ looking at a brain is self-evident because of informatic neglect: you have no information to the contrary. And lastly, the ‘frame’ (the medial neurofunctionality) of what you see constitutively outruns what you see because, once again, of informatic neglect.
This is all just to say that the intentional, holistic, sufficient, and asymptotic structure of the first person simply follows from the fact that the brain is biomechanical.
This claim may seem innocuous, but it is big, I assure you, monstrously big. Why? Because, aside from at long last providing a parsimonious theoretical means of naturalizing consciousness and intentionality, it also argues that they (as intuitively conceived) are largely cognitive illusions, kinds of ‘natural anosognosias’ that we cannot but suffer given the constraints and confounds facing neural metacognition. It means that the very form of ‘subjectivity’ (and not merely the ‘self’) actually is a kind of dream.
Make no mistake, if the Blind Brain Theory (or something like it) turns out to be correct, it will be the last theory in the history of philosophy as traditionally conceived. Why? Because BBT is as much a translation manual as a theory, a potential way to transform the great intentional problems of philosophy into the mechanical subject matter of cognitive neuroscience.
Trust me, I know how out-and-out preposterous this sounds… But as I said above, the gates of the soul have been battered down.
Since the devil is in the details, it might pay to finesse this sketch with more information. So to return to what I termed the natural informatic field above, the sum of all the static and dynamic systematic differences that constitute you qua organism. How specifically does informatic neglect allow us to plug the phenomenal/intentional into the physical/mechanical?
From a life sciences perspective, the natural informatic field consists of externally-related structures and irreflexive processes. Our brain is that portion of the Field biologically adapted to model and interact with the rest of the Field (the environment) via information collected from the Field. The conscious subsystem of the brain is that portion of the Field biologically adapted to model and interact with the rest of the Field via information collected from the brain. All we need ask is what information is available to what cognitive resources as the conscious subsystem generates its model. In a sense, all we need do is subtract varieties and densities of information from the pot of overall information. I know the conceptual jargon makes this all seem dreadfully complicated, but it really is this simple.
So, what information can the conscious subsystem of the brain provide what cognitive resources in the course of generating its model? No causal information regarding its own neurofunctionality, as we have seen. The model, therefore, will have to be medially acausal. No temporal information regarding its own neurofunctionality either. The model, therefore, will have to be medially atemporal. Minimal information regarding its own structural complexity, given the constraints and confounds mentioned above. The model, therefore, will be structurally undifferentiated relative to environmental models. Minimal information regarding its own informatic and cognitive limitations, once again, given the aforementioned constraints and confounds. The model, therefore, will be both canonical (because of sufficiency) and intractable (because incompatible with existing, environmentally-oriented cognitive resources).
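For those who prefer their cartoons in code, the subtraction can be sketched in a few lines of toy Python as well (again mine and purely illustrative; the categories are placeholders, not measurements):

```python
# A toy of the 'subtractive' procedure (illustrative only):
# start with the full informatic field, delete what the conscious
# subsystem cannot access, and read the classical features of the
# first person off what remains.

informatic_field = {
    "lateral environmental structure",
    "medial causal neurofunctionality",
    "medial temporal neurofunctionality",
    "medial structural complexity",
    "information about what information is missing",
}

occluded = {
    "medial causal neurofunctionality",               # -> medially acausal
    "medial temporal neurofunctionality",             # -> medially atemporal
    "medial structural complexity",                   # -> undifferentiated
    "information about what information is missing",  # -> seems sufficient
}

first_person_model = informatic_field - occluded
print(first_person_model)
# {'lateral environmental structure'} -- an apparently acausal,
# atemporal, undifferentiated, and sufficient 'perspective.'
```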
Now the key principle that seems to make this work is the way neglect leverages varieties of identity. BBT, in effect, interprets the appearance of consciousness as a kind of ‘flicker fusion writ large.’ In the absence of distinctions, the brain (for reasons that will fall out of any successful scientific theory of consciousness proper) conjures experiential continuities. Occlusion equals identity, according to BBT.
What makes the first person as it appears so peculiar from the standpoint of environmental cognition has to do with ‘informatic captivity’ or access invariance, our brain’s inability to vary its informatic relationship to itself the way it can its environments. So, on the BBT account, the ‘unity of consciousness’ that so impressed Descartes is simply of a piece with the way, in the absence of information, we confuse aggregates for individuals more generally, as when we confuse ants on the sidewalk with spilled paint, for instance. But where cognition can vary its access and so accumulate the information required to revise ‘spilled paint’ into ‘swarming ants’ in our environment, metacognition is trapped with the spilled paint of the ‘soul.’ The first person appears to be an internally-related ‘whole,’ in other words, simply because we lack the information to cognize it otherwise. The holistic consciousness we think we enjoy, in other words, is a kind of cartoon.
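The ants-and-paint confusion is easy enough to caricature too. A throwaway sketch of my own, deliberately silly: below some sampling resolution discrete individuals fuse into a single continuous thing, and where access cannot vary, the fusion can never be revised.

```python
# A toy of 'fusion in occlusion' (illustrative only): average away
# the information that discriminates individuals and the aggregate
# reads as one continuous whole.

ants = [1, 0, 1, 0, 1, 0, 1, 0]  # discrete ants with gaps between them

def sample(signal, resolution):
    """Average over windows; the coarser the resolution, the fewer
    the distinctions that survive."""
    return [
        sum(signal[i:i + resolution]) / resolution
        for i in range(0, len(signal), resolution)
    ]

print(sample(ants, 1))  # [1.0, 0.0, 1.0, ...] -> 'swarming ants'
print(sample(ants, 8))  # [0.5]                -> 'spilled paint'

# Environmental cognition can step closer and re-sample its way from
# paint back to ants; metacognition, trapped by access invariance,
# is stuck with the blob--the 'unified soul.'
```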
(This underscores the way the external-relationality characteristic of our environment is an informatic and cognitive achievement, something the human brain has evolved to model and exploit. On the BBT account, internal-relationality is generally a symptom of missing information, a structurally and developmentally imposed loss of dimensionality.)
But what makes the first person so intractable, a hitherto inexhaustible source of perplexity, only becomes apparent when we consider the diachronic dimension of this ‘fusion in occlusion,’ the way neglect winnows the implacable irreflexivity of the natural into the labile reflexivity of the mental. The conscious system’s inability to model its modelling as it models applies to temporal modelling as well. The temporal system can no more ‘time its timing’ than the visual system can ‘see its seeing.’ This means that metacognition has no way to intuit the ‘time of timing,’ leading, once again, to default identity and all the paradoxes belonging to the ‘now.’ The temporal field is ‘locally nonlocal’ or asymptotic, muddy and fleeting yet apparently monolithic and self-identical.
So, in a manner similar to the way information privation collapses external-relationality into apparent internal-relationality, it also collapses irreflexivity into apparent reflexivity. Conscious cognition can track environmental irreflexivity readily enough, but it cannot track this tracking and so intuits otherwise. The first person cartoon suffers the diachronic hallucination of fundamental continuity in time. Once again metacognition mistakes oblivion (or less dramatically, incapacity) for identity.
To get a sense of how radical this is one need only consider the very paradigm of atemporal reflexivity in philosophy, the a priori. On the BBT account, what we call the a priori is what algorithmic nature looks like from the inside. No matter how much content you hollow out of your formalisms, you are still talking about something magical, still begging what Eugene Wigner famously called ‘the unreasonable effectiveness of mathematics,’ the question of why an externally-related, irreflexive nature should prove so amenable to an internally-related, reflexive mathematics. BBT answers: because mathematics is itself natural, its most systematically ‘viral’ expression. It collapses the disjunct, asserts continuity where the tradition perceives the inexplicable. Mathematics only seems ‘supra-natural’ because until recently it could only be explored performatively in the ‘laboratory’ of our own brains, and because of the way metacognition shears away its informatic dimensions. Given the illusion of sufficiency, the a priori cartoon strikes us as the efficacious source of a special, transcendental form of cognition. Only now, as computational complexities force mathematicians and physicists to rely more and more on machines, mechanical implementations that (by some cosmic coincidence) are entirely capable of performing ‘semantic’ operations without the least whiff of ‘understanding,’ are we in a position to entertain the possibility that ‘formal semantics’ is simply another ghost in the human machine.
And the list of radical reinterpretations goes on–after a year of manic exploration and elaboration I feel like I’ve scarcely scratched the surface. I could use some help, if anyone is so inclined!
So with that in ‘mind,’ I leave you with the following establishing shot: Consciousness as you conceive/perceive it this very moment is the tissue of neglect, painted on the same informatic canvas with the same cognitive brushes as our environment, only blinkered and impressionistic in the extreme. Reflexivity, internal-relationality, sufficiency, and intentionality can all be seen as hallucinatory artifacts of informatic closure and scarcity, the result of a brain forced to make the most with the least using only the resources it has at hand. This is a picture of the first person as an informatically integrated series of scraps of access, forced by structural bottlenecks to profoundly misrecognize itself as something somehow hooked upon the transcendental, self-sufficient and whole….
To see you.
December 21, 2012
The Second Room: Phenomenal Realism as Grammatical Violation
Aphorism of the Day: Atheist or believer, we all get judged by God. The one that made us, or the one we make.
So just what the hell did Wittgenstein mean when he wrote this?
“‘And yet you again and again reach the conclusion that the sensation itself is a nothing.’ Not at all. It is not a something, but not a nothing either! The conclusion was only that a nothing would serve just as well as a something about which nothing could be said.” (1953, §304)
I can remember attempting to get a handle on this section of Philosophical Investigations in a couple of graduate seminars, contributing nothing more than once stumping my professor with a question about fraudulent workplace injury claims. But now, at long last, I (inadvertently) find myself in a position to explain what Wittgenstein was onto, and perhaps where he went wrong.
My view is simply that the mental and the environmental are pretty much painted with the same informatic brush, and pretty much comprehended using the same cognitive tools, the difference being that the system as a whole is primarily evolved to track and exploit the environmental, and as a result has great difficulty attempting to track and leverage the ‘mental’ so-called.
If you accept the mechanistic model of the life sciences, then you accept that you are an environmentally situated, biomechanical, information processing system. Among the features that characterize you as such a system is what might be called ‘structural idiosyncrasy,’ the fact that the system is the result of innumerable path dependencies. As a bottom-up designer, evolution relies on the combination of preexisting capacities and happenstance to provide solutions, resulting in a vast array of ad hoc capacities (and incapacities). Certainly the rigours of selection will drive various functional convergences, but each of those functions will bear the imprint of the evolutionary twists that led it there.
Another feature that characterizes you as such a system is medial neglect. Given that the resources of the system are dedicated to modelling and exploiting your environments, the system itself constitutes a ‘structural blindspot’: it is the one part of your environment that you cannot readily include in your model of the environment. The ‘medial’ causality of the neural, you could say, must be yoked to the ‘lateral’ causality of the environmental to adequately track and respond to opportunities and threats. The system must be blind to itself to see the world.
A third feature that characterizes you as such a system is heuristic specificity. Given the combination of environmental complexity, structural limitations, and path dependency, cognition is situation-specific, fractionate, and non-optimal. The system solves environmental problems by neglecting forms of information that are either irrelevant or not accessible. So, to give what is perhaps the most dramatic example, one can suggest that intentionality, understood as aboutness, possesses a thoroughly heuristic structure. Given medial neglect, the system has no access to information pertaining to anything but the grossest details of its causal relationship to its environments. It is forced, therefore, to model that relationship in coarse-grained, acausal terms–or put differently, in terms that occlude the neurofunctionality that makes the relationship possible. As a result, you experience apples in your environment, oblivious to any of the machinery that makes this possible. This ‘occlusion of the neurofunctional’ generates efficiencies (enormous ones, given the system’s complexity) so long as the targets tracked are not themselves causally perturbed by (medial) tracking. Since the system is blind to the medial, any interference it produces will generate varying degrees of ‘lateral noise.’
A final feature that characterizes you as such a system might be called internal access invariability, the fact that cognitive subsystems receive information via fixed neural channels. All this means is that cognitive subsystems are ‘hardwired’ into the rest of the brain.
Given a handful of caveats, I don’t think any of the above should be all that controversial.
Now, the big charge against Wittgenstein regarding sensation is some version of crypto-behaviourism, the notion that he is impugning the reality of sensation simply because only pain behaviour is publicly observable, while the pain itself remains a ‘beetle in a box.’ The problem people have with this characterization is as clear as pain itself. One could say that nothing is more real than pain, and yet here’s this philosopher telling you that it is ‘neither a something nor a nothing.’
Now I also think nothing is more real than pain, but I also agree with Wittgenstein, at long last, that pain is ‘neither a something nor a nothing.’ The challenge I face is one of finding some way to explain this without sounding insane.
The thing to note about the four features listed above is how each, in its own way, compromises human cognition. This is no big news, of course, but my view takes the approach that the great philosophical conundrums can be seen as diagnostic clues to the way cognition is compromised, and that conversely, the proper theoretical account of our cognitive shortcomings will allow us to explain or explain away the great philosophical conundrums. And Wittgenstein’s position certainly counts as one of the most persistent puzzles confronting philosophers and cognitive scientists today: the question of the ontological status of our sensations.
Another way of putting my position is this: Everyone agrees you are a biomechanism possessing myriad relationships with your environment. What else would humans (qua natural) be? The idea that understanding the specifics of how human cognition fits into that supercomplicated causal picture will go a long way to clearing up our myriad, longstanding confusions is also something most everyone would agree with. What I’m proposing is a novel way of seeing how those confusions fall out of our cognitive limitations–the kinds of information and capacities that we lack, in effect.
So what I want to do, in a sense, is turn the problem of sensation in Wittgenstein upside down. The question I want to ask is this: How could the four limiting features described above, structural idiosyncrasy (the trivial fact that out of all the possible forms of cognition we evolved this one), medial neglect (the trivial fact that the brain is structurally blind to itself as a brain), heuristic specificity (the trivial fact that cognition relies on a conglomeration of special purpose tools), and access invariability (the trivial fact that cognition accesses information via internally fixed channels) possibly conspire to make Wittgenstein right?
Well, let’s take a look at what seems to be the most outrageous part of the claim: the fact that pain is ‘neither a something nor a nothing.’ This, I think, points rather directly at heuristic specificity. The idea here would be that the heuristic or heuristic systems we use to identify entities are simply misapplied with reference to sensations. As extraordinary as this claim might seem, it really is old hat scientifically speaking. Quantum Field Theory forced us quite some time ago to abandon the assumption that our native understanding of entities and existence extends beyond the level of apples and lions we evolved to survive in. That said, sensation most certainly belongs to the ‘level’ of apples and lions: eating apples causes pleasure as reliably as lion attacks cause pain.
We need some kind of account, in other words, of how construing sensations as extant things might count as a heuristic misapplication. This is where medial neglect enters the picture. First off, medial neglect explains why heuristic misapplications are inevitable. Not only can’t we intuit the proper scope of application for the various heuristic devices comprising cognition, we can’t even intuit the fact that cognition consists of multiple heuristic devices at all! In other words, cognition is blind to both its limits and its constitution. This explains why misapplications are both effortless and invisible–and most importantly, why we assume cognition to be universal, why quantum and cosmological violations of intuition come as a surprise. (This also motivates taking a diagnostic approach to classic philosophical problems: conundrums such as this indirectly reveal something of the limitations and constitution of cognition).
But medial neglect can explain more than just the possibility of such a misapplication; it also provides a way to explain why it constitutes a misapplication, as well as why the resulting conundrums take the forms they do. Consider the ‘aboutness heuristic’ described above. Given that the causal structure of the brain is dedicated to tracking the causal structure of its environment, that structure cannot itself be tracked, and so must be ‘assumed.’ Aboutness is forced upon the system. This occlusion of the causal intricacies of the system’s relation to its environment is inconsequential. So long as the medial tracking of targets in no way interferes with those targets, medial neglect simply relieves the system of an impossible computational load.
But despite its effectiveness, aboutness remains heuristic, remains a device (albeit a ‘master device’) that solves problems via information neglect. This simply means that aboutness possesses a scope of applicability, that it is not universal. It is adapted to a finite range of problems, namely, those involving functionally independent environmental entities and events. The causal structure of the system, again, is dedicated to modelling the causal structure of its environment (thus the split between medial (modelling) and lateral (modelled) functionality). This ensures the system will encounter tremendous difficulty whenever it attempts to model its own modelling. Why? I’ve considered a number of different reasons (such as neural complexity) in a number of different contexts, but the primary, heuristic culprit is that the targets to be tracked are all functionally entangled in these ‘metacognitive’ instances.
The basic structure of human cognition, in other words, is environmental, which is to say, adapted to things out there functioning independent of any neural tracking. It is not adapted to the ‘in here,’ to what we are prone to call the mental. This is why the introspective default assumption is to see the ‘mental’ as a ‘secondary environment,’ as a collection of functionally independent events and entities tracked by some kind of mysterious ‘inner eye.’ Cognition isn’t magical. To cognize something requires cognitive resources. Keeping in mind that the point of this exercise is to explain how Wittgenstein could be right, we could postulate (presuming evolutionary parsimony) that second-order reflection possesses no specially adapted ‘master device,’ no dedicated introspective cognitive system, but instead relies on its preexisting structure and tools. This is why the ‘in here’ is inevitably cognized as a ‘little out there,’ a kind of peculiar secondary environment.
A sensation–or quale, to use the philosophy of mind term–is the product of an occurrent medial circuit, and as such impossible to laterally model. This is what Wittgenstein means when he says pain is ‘neither a something nor a nothing.’ The information required to accurately cognize ‘pain’ is the very information systematically neglected by human cognition. Second-order deliberative cognition transforms it into something ‘thinglike,’ nevertheless, because it is designed to cognize functionally independent entities. The natural question then becomes, What is this thing? Given the meagre amount of information available and the distortions pertaining to cognitive misapplication, it necessarily becomes the most baffling thing we can imagine.
Given structural idiosyncrasy (again, the path dependence of our position in ‘design space’), it simply ‘is what it is,’ a kind of astronomically coarse-grained ‘random projection’ of higher dimensional neural space perhaps. Why is pain like pain? Because it dangles from all the same myriad path dependencies as our brains do. Given internal access invariability (again, the fact that cognition possesses fixed channels to other neural subsystems) it is also all that there is: cognition cannot inspect or manipulate a quale the way it can actual things in its environment via exploratory behaviours, so unlike other objects qualia necessarily appear to be ‘irreducible’ or ‘simple.’ On top of everything, qualia will also seem causally intractable given the utter occlusion of neurofunctionality that falls out of medial neglect, as well as the distortions pertaining to heuristic specificity.
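To make the metaphor concrete–this is a toy sketch of my own, assuming NumPy, not a model anyone has proposed–a ‘random projection’ just multiplies a high-dimensional state by a fixed random matrix, discarding almost everything:

```python
# Toy 'random projection', purely illustrative of the metaphor above.
# Assumes NumPy. A fixed random matrix maps a huge state onto a few dimensions.
import numpy as np

rng = np.random.default_rng(0)
d_high, d_low = 10_000, 3                  # 'neural' dimensions vs. the glimpse

state = rng.normal(size=d_high)            # stand-in for an occurrent neural state
P = rng.normal(size=(d_low, d_high)) / np.sqrt(d_high)

glimpse = P @ state                        # the coarse, low-dimensional 'take'

# The map is lossy and non-invertible: vastly many distinct high-dimensional
# states collapse onto (nearly) the same glimpse.
nearby = state + rng.normal(scale=0.01, size=d_high)
print(glimpse)
print(P @ nearby)                          # all but indistinguishable
```

The glimpse ‘is what it is’: nothing in the three numbers lets you recover, or even suspect, the ten thousand dimensions behind them.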
Taken as things, therefore, qualia strike us as ineffable, intrinsic, and etiologically opaque. Strange ‘somethings’ indeed!
Given our four limiting features, then, we can clearly see that Wittgenstein’s hunch is grammatical and not behaviouristic. The problem with sensations isn’t so much epistemic privacy as it is information access and processing: when we see qualia as extant things requiring explanation like other things we’re plugging them into a heuristic regime adapted to discharge functionally independent environmental challenges. Wittgenstein himself couldn’t see it as such, of course, which is perhaps why he takes as many runs at the problem as he does.
Okay, so much for Wittgenstein. The real question, at this point, is one of what it all means. After all, despite what might seem like fancy explanatory footwork, we still find ourselves stranded with a something that is neither a something nor a nothing! Given that absurd conclusions generally mean false premises, why shouldn’t we simply think Wittgenstein was off his rocker?
Well, for one, given the conundrums posed by ‘phenomenal realism,’ you could argue that the absurdity is mutual. For another, the explanatory paradigm I’ve used here (the Blind Brain Theory) is capable of explaining away a great number of such conundrums (at the cost of our basic default assumptions, typically).
The question then becomes whether a general gain in intelligibility warrants accepting one flagrant absurdity–a something that is neither a something nor a nothing.
The first thing to recall is that this situation isn’t new. Apparent absurdity is alive and well at the cosmological and quantum levels of physical explanation. The second thing to recall is that human cognition is the product of myriad evolutionary pressures. Much as we did not evolve to be ideal physicists, we did not evolve to be ideal philosophers. Structural idiosyncrasy, in other words, gives us good reason to expect cognitive incapacities generally. And indeed, cognitive psychology has spent several decades isolating and identifying numerous cognitive foibles. The only real thing that distinguishes this particular ‘foible’ is the interpretative centrality (not to mention cherished status) of its subject matter–us!
‘Us,’ indeed. Once again, if you accept the mechanistic model of the life sciences (if you’re inclined to heed your doctor before your priest), then you accept that you are an environmentally situated, biomechanical information processing system. Given this, perhaps we should add a fifth limiting feature that characterizes you: ‘informatic locality,’ the way your system has to make do with the information it can either store or sense. Your particular brain-environment system, in other words, is its own ‘informatic frame of reference.’
Once again, given the previous four limiting features, the system is bound to have difficulty modelling itself. Consider another famous head-scratcher from the history of philosophy, this one from William James:
“The physical and the mental operations form curiously incompatible groups. As a room, the experience has occupied that spot and had that environment for thirty years. As your field of consciousness it may never have existed until now. As a room, attention will go on to discover endless new details in it. As your mental state merely, few new ones will emerge under attention’s eye. As a room, it will take an earthquake, or a gang of men, and in any case a certain amount of time, to destroy it. As your subjective state, the closing of your eyes, or any instantaneous play of your fancy will suffice. In the real world, fire will consume it. In your mind, you can let fire play over it without effect. As an outer object, you must pay so much a month to inhabit it. As an inner content, you may occupy it for any length of time rent-free. If, in short, you follow it in the mental direction, taking it along with events of personal biography solely, all sorts of things are true of it which are false, and false of it which are true if you treat it as a real thing experienced, follow it in the physical direction, and relate it to associates in the outer world.” (“Does ‘Consciousness’ Exist?”)
The genius of this passage, as I take it, is the way it refuses to relinquish the profound connection between the third person and the first, rather alternating from the one to the other, as if it were a single, inexplicable lozenge that tasted radically different when held against the back or front of the tongue–the room as empirically indexed versus the room as phenomenologically indexed. Wittgenstein’s problem, expressed in these terms, is simply one of how the phenomenological room fits into the empirical. From a brute mechanistic perspective, the system is first modelling the room absent any model of its occurrent modelling, then modelling its modelling of the room–and here’s the thing, absent any model of its occurrent modelling. The aboutness heuristic, as we saw, turns on medial neglect. This is what renders the second target, ‘room-modelling,’ so difficult to square with the ‘grammar’ of the first, ‘room,’ perpetually forcing us to ask, What the hell is this second room?
The thing to realize at this juncture is that there is no way to answer this question so long as we allow the apparent universality of the aboutness heuristic to get the better of us. ‘Room-modelling’ will never fit the grammar of ‘room’ simply because it is–clearly, I would argue–the product of informatic privation (due to medial neglect) and heuristic misapplication (due to heuristic specificity).
On the contrary, the only way to solve this ‘problem’ (perhaps the only way to move beyond the conundrums that paralyze philosophy of mind and consciousness research as a whole) is to bracket aboutness, to finally openly acknowledge that our apparent baseline mode of conceptualizing truth and reality is in fact heuristic, which is to say, a mode of problem-solving that turns on information neglect and so possesses a limited scope of effective application. So long as we presume the dubious notion that cognitive subsystems adapted to trouble-shooting external environments absent various classes of information are adequate to the task of trouble-shooting the system of which they are a part, then we will find ourselves trapped in this grammatical (algorithmic) impasse.
In other words, we need to abandon our personal notion of the ‘knower’ as a kind of ‘anosognosiac fantasy,’ and begin explaining our inability to resolve these difficulties in subpersonal terms. We are an assemblage of special purpose cognitive tools, not whole, autonomous knowers attempting to apprehend the fundamental nature of things. We are machines attempting to model ourselves as such, and consistently failing because of a variety of subsystemic functional limitations.
You could say what we need is a whole new scientific subdiscipline: the cognitive psychology of philosophy. I realize that this sounds like anathema to many–it certainly strikes me as such! But no matter what one thinks of the story above, I find it hard to fathom how philosophy can avoid this fate now that the black box of the brain has been cracked open. In other words, we need to see the inevitability of this picture or something like it. As a natural result of the kind of system that we happen to be, the perennial conundrums of consciousness (and perhaps philosophy more generally) are something that science will eventually explain. Only ignorance or hubris could convince us otherwise.
We affirm the cosmological and quantum ‘absurdities’ we do because of the way science allows us to transcend our heuristic limitations. Science, you could say, is a kind of ‘meta-heuristic,’ a way to organize systems such that their individual heuristic shortcomings can be overcome. The Blind Brain picture sketched above bets that science will recast the traditional metaphysical problem of consciousness in fundamentally mechanistic terms. It predicts that the traditional categorical bestiary of metaphysics will be supplanted by categories of information indexed according to their functions. It argues that the real difficulty of consciousness lies in the cognitive illusions secondary to informatic neglect.
One can conceive this in different ways, I think: You could keep your present scientifically informed understanding of the universe as your baseline, and ‘explain away’ the mental (and much of the lifeworld with it) as a series of cognitive illusions. Qualia can be conceived as ‘phenomemes,’ combinatorial constituents of conscious experience, but no more ‘existential’ than phonemes are ‘meaningful.’ This view takes the third-person brain revealed by science as canonical, and the first-person brain (you!) as a ‘skewed and truncated low-dimensional projection’ of that brain. The higher-order question as to the ontological status of that ‘skewed and truncated low-dimensional projection’ is diagnostically blocked as a ‘grammatical violation,’ by the recognition that such a move constitutes a clear heuristic misapplication.
Or one could envisage a new kind of scientific realism, where the institutions are themselves interpreted as heuristic devices, and we can get to the work of describing the nonsemantic nature of our relation to each other and the cosmos. This would require acknowledging the profundity of our individual theoretical straits and embracing our epistemic dependence on the actual institutional apparatuses of science–seeing ourselves as glitchy subsystems in larger social mechanisms of ‘knowing.’ On this version, we must be willing to detach our intellectual commitments from our commonsense intuitions wholesale, to see the apparent sufficiency and universality of aboutness as a cognitive illusion pertaining to heuristic neglect, first person or third.
Either way, consciousness, as we intuit it, can at best be viewed as virtual.
December 16, 2012
Getting Subpersonal: Should Dennett Rethink the Intentional Stance?
Don’t you look at my girlfriend,
She’s the only one I got.
Not much of a girlfriend,
Never seem to get a lot.
–Supertramp, “Breakfast in America”
.
This shows that there is no such thing as the soul–the subject, etc.–as it is conceived in the superficial psychology of the present day.
Indeed a composite soul would no longer be a soul.
–Wittgenstein, 5.5421, Tractatus Logico-Philosophicus
.
One way of conceptualizing the ‘problem of meaning’ presently confronting our society is in terms of the personal and the subpersonal. The distinction is one famously made by Wittgenstein (1974) in the Tractatus, where he notes the way psychological claims like ‘knows that,’ ‘believes that,’ ‘hopes that’ involve the individual taken as a whole (5.542). Here, as in so many other places, Daniel Dennett has been instrumental in setting out the terms of the debate. On his account, the personal refers to what Wittgenstein called the ‘soul’ above, the whole agent as opposed to its parts. The subpersonal, on the other hand, refers to the parts as opposed to the whole, the constitutive components of the whole. Where the personal figures in intentional explanations, enabling the prediction, understanding, and manipulation of our fellows, the subpersonal figures in functional explanations, enabling the prediction, understanding, and manipulation of the neural mechanisms that make us tick.
The personal and the subpersonal, in other words, provide a way of conceptualizing the vexing relation between intentional and functional conceptuality that pertains directly to you. Where the personal level of description pertains to you as an agent, a subject of belief, desire, and so on, the subpersonal level of description pertains to you as an organism, as a biomechanism consisting of numerous submechanisms. In a strange sense, you are your own doppelganger, one that apparently answers to two incommensurable rationalities. This is why your lawyer, when you finally get around to murdering that local television personality, will be inclined to defend the subpersonal you by blaming neural devils that made you do it, while the prosecutor will be hell bent on sending the personal you to the gas chamber. It’s hard to convict subpersonal mechanisms.
As Wittgenstein says, the ‘composite soul’ is no soul. The obvious question is why? Why is the person an indivisible whole? Dennett (2007) provides the following explanation:
The relative accessibility and familiarity of the outer part of the process of telling people what I can see–I know my eyes have to be open, and focused, and I have to attend, and there has to be light–conceals from us the utter blank (from the perspective of introspection or simple self-examination) of the rest of the process. How do you know there’s a tree beside the house? Well, there it is, and I can see that it looks just like a tree! How do you know it looks like a tree? Well, I just do! Do you compare what it looks like to many other things in the world before settling upon the idea that it’s a tree? Not consciously. Is it labeled “tree”? No, I don’t need to ‘see’ a label; besides, if there were a label I’d have to read it, and know that it labelled the thing it was on. I just know it’s a tree. Explanation has to stop somewhere, and at the personal level it stops here, with brute abilities couched in the familiar intentionalistic language of knowing and seeing, noticing and recognizing and the like. (9)
What Dennett is describing here is a kind of systematic neglect, and in terms, no less, that would have made Heidegger proud: What is concealed? An utter blank. This is a wonderful description of what I’ve been calling medial neglect, the way the brain, adapted and dedicated to tracking ‘lateral’ environments, must remain to a profound extent the blindspot in its environment. To paraphrase Heidegger (1949), what is nearest is most difficult to see. The human brain systematically neglects itself, generating, as a result, numerous confusions, particularly when it attempts to cognize itself. We just ‘know without knowing.’ And as Dennett says, this is where explanation has to stop.
“The recognition that there are two levels of explanation,” he writes, “gives birth to the burden of relating them” (1969, 20). In “Mechanism and Responsibility” (1981) he attempts to discharge this burden by isolating and defeating the various ‘incompatibility intuitions’ that lead to stark appraisals of the intentional/mechanical divide. So for instance, if you idealize rational agency, then any mechanical consideration of the agent will seem to shatter the illusion. But, if you accept that humans are always and only imperfectly rational, and that the intentional and mechanical are two modes of making sense of complex systems, then this extreme incompatibility dissolves. “What are we to make of the hegemony of mechanical explanation over intentional explanation?” he writes. “Not that it doesn’t exist, but that it is misdescribed if we suppose that whenever the former are confirmed, they drive out the latter” (246). Passages like these, I think, highlight a perennial tension between Dennett’s pragmatic and realist inclinations. The ‘hegemony,’ he often seems to imply, is pragmatic: the mechanical merely allows us to go places the intentional cannot. In this case, the only compatibility that matters is the compatibility of our explanations with our purposes. But when he has his realist hat on, the hegemony becomes metaphysical, the product of the way things are. And this is where his compatibilism begins to wobble.
So for instance, adopting Dennett’s pragmatic scheme means that intentional explanations will be appropriate or inappropriate depending on the context. As our needs change, so will the utility of the intentional stance. “All that is the case,” he writes, “is that we, as persons, cannot adopt exclusive mechanism (by eliminating the intentional stance altogether)” (254). If we were, as he puts it, “turned into zombies next week” (254) all bets would be off. It’s arguments like these that wear so many scowls into the brows of so many readers of Dennett. All it means to be an intentional system, he argues, is to be successfully understood in intentional terms. There is no fact of the matter, no ‘original intentionality.’ But if this is the case, how could we be turned into (as opposed to ‘taken as’) zombies next week?
Dennett, remember, wants to be simultaneously a realist about mechanism and a pragmatist about intentionality. So isn’t he really just saying we are zombies (mere mechanisms) all the time, and that ‘persons’ are simply an artifact of the way we zombies are prone (perhaps given informatic neglect) to interpret one another? This certainly seems to be the most straightforward explanation. If it were simply a matter of ‘taking as,’ why would the advance of the life sciences (and the mechanistic paradigm) constitute any sort of threat? In other words, why would the personal need fear the future? As Dennett writes:
All this says nothing about the impossibility of dire depersonalization in the future. Wholesale abandonment of the intentional is in any case a less pressing concern than partial erosion of the intentional domain, an eventuality against which there are no conceptual guarantees at all. If the growing area of success in mechanistic explanation of human behaviour does not in of itself rob us of responsibility, it does make it more pragmatic, more effective or efficient, for people on occasion to adopt less than the intentional stance toward others. Until fairly recently the only well-known generally effective method of getting people to do what you wanted them to was to treat them as persons. (255)
That was 1971 (when Dennett presented the first draft of “Mechanism and Responsibility” at Yale), and this is 2012, some 41 years later, a time when you could say this ‘dire’ process of incremental depersonalization has finally achieved ‘economies of scale.’ What I want to consider is the possibility that history has actually outrun Dennett’s arguments for the intentional stance.
Consider NeuroFocus, a neuromarketing corporation that I’ve critiqued in the past, and that now bills itself as the premier neuromarketer in the world. In a summary of the effectiveness of various ads televised over the 2008 Beijing Olympics, they describe their methodology thus:
NeuroFocus conducts brainwave-based research employing high density EEG (electroencephalographic) sensor technology, coupled with pixel-level eye movement tracking and GSR (galvanic skin response) measurements. The company captures brainwave activity across as many as 128 different sectors of the brain, at 2000 times a second for each of these locations. NeuroFocus’ patented brainwave monitoring technology produces results that are far more accurate, reliable and actionable than any other form of research.
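To put those figures in perspective, here is a back-of-the-envelope calculation of the raw data stream they describe (the arithmetic is mine, not theirs; the 16-bit sample size is an assumption):

```python
# Back-of-the-envelope only. Channel count and sampling rate are NeuroFocus's
# own figures; the 16-bit resolution is an assumption for illustration.
channels = 128           # 'sectors of the brain'
rate_hz = 2000           # samples per second, per channel
bytes_per_sample = 2     # assumed 16-bit resolution

stream = channels * rate_hz * bytes_per_sample             # bytes per second
print(f"{stream / 1024:.0f} KiB/s per viewer")             # 500 KiB/s
print(f"{stream * 3600 / 2**30:.2f} GiB per viewer-hour")  # ~1.72 GiB
```

Roughly half a megabyte of involuntary physiology harvested every second, per viewer.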
The thing to note is that all three of these channels–brain waves, saccades, and skin conductance–are involuntary. None of these pertain, in other words, to you as a person. In fact, the person is actually the enemy in neuromarketing, in terms of both assessing and engineering ad effectiveness. Using these subpersonal indices, NeuroFocus measures what they call ‘Brand Perception Lift,’ the degree to which a given spot influences subconscious brand associations, and ‘Commercial Performance Lift,’ the degree to which it subconsciously induces consumers to make purchases. As the Advertising Research Foundation notes in a recent report:
The human mind is not well equipped to probe its own depths, to explain itself to itself, let alone to others. Many of the approaches used in traditional advertising research are focused on rational, conscious processes and are, therefore, not well suited to understanding emotion and the unconscious. Regardless of our comfort level, we have to explore approaches that are fundamentally different—indirect or passive approaches to measuring and understanding emotion and its impact.
‘You’ quite literally have no clear sense as to how ads affect your attitudes and behaviours. This disconnect between what a person self-reports and what a person actually does has always meant that marketing was as much art as science. But since Coca-Cola began approaching brain researchers in the early 1990′s, neuromarketing in America has ballooned into an industry consisting of a dozen companies and dozens more consultancies. This is just to say that no matter what one thinks of the effectiveness of neuromarketing techniques as they stand (the ARF report linked above details several ‘ROI’ efficiencies and predicts more as the technology and techniques improve), a formidable, and growing, array of resources has been deployed in the pursuit of the subpersonal consumer.
NeuroFocus is by no means alone, and neuromarketing is becoming more and more ubiquitous. Consider the show Intervention. Concerned that advertisers were avoiding the show due to its intense emotional content (because let’s face it, the trials and tribulations of addiction make the concerns motivating most consumer products, things like hemorrhoids or dandruff, almost tragically trivial), A&E contracted NeuroFocus to see how viewers were actually responding to ads on their show. Their results?
Because neurological testing probes the deep subconscious mind for this data, advertisers can rely on these findings with complete confidence. The results of this study provide scientific evidence that when a company decides to advertise in reality programming that contains the kind of powerful and gripping content that Intervention features, there is no automatic downside to that choice. Instead, there is an opportunity to engage viewers’ subconscious minds in equally, and often even more powerful and gripping ways.
In other words, extreme emotional content renders viewers more susceptible to commercial messaging, not less. Note the way the two kinds of communication, the personal and the subpersonal, seem to be blurred in this passage. The ‘powerful and gripping content’ of the show, one would like to assume, entails A&E taking a personal stance toward their viewers, whereas the ‘powerful and gripping content’ of the commercials entails advertisers taking a subpersonal stance toward their viewers. The problem, however, is that the question is the effectiveness of Intervention as a vehicle for commercial advertising, a question that NeuroFocus answers by targeting the subpersonal. A&E has hired them, in effect, to assess the subpersonal effectiveness of Intervention as a vehicle for subpersonal commercial messaging.
In other words, the best way to maximize ROI (‘return on investment’) is to treat viewers as mechanisms, as machines to be hacked via multiple messaging mechanisms, one overtly commercial (advertising), the other covertly (Intervention). The dismal irony here of course, is that the covert messaging mechanism features ‘real life’ narratives featuring addicts trying to recover–what else?–personhood!
Make no mistake, the ‘Age of the Subpersonal’ is upon us. Now a trickle, soon a deluge. Dennett, of course, is famous (or infamous) for his strategy of ‘interpretative minimization,’ his tendency to explain away apparent conflicts between the intentional and the mechanical, the personal and the subpersonal. But he is by no means so cavalier as to confuse the theoretical dilemmas manufactured by philosophers bent on “answering the ultimate ontological question” (2011) with the kind of practical dilemma posed by the likes of NeuroFocus. “There is a real crisis,” Dennett (2006) admits, “and it needs our attention now, before irreparable damage is done to the fragile environment of mutually shared beliefs and attitudes on which a precious conception of human dignity does indeed depend for its existence” (1).
The ‘solution’ he offers requires us to appreciate the way our actions will impact communal expectations. He gives the (not-so-congenial) example of the respect we extend to corpses:
Even people who believe in immortal immaterial souls don’t believe that human “remains” harbor a soul. They think that the soul has departed, and what is left behind is just a body, just unfeeling matter. A corpse can’t feel pain, can’t suffer, can’t be aware of any indignities–and yet still we feel a powerful obligation to handle a corpse with respect, and even with ceremony, and even when nobody else is watching. Why? Because we appreciate, whether acutely or dimly, that how we handle this corpse now has repercussions for how other people, still alive, will be able to imagine their own demise and its aftermath. Our capacity to imagine the future is both the source of our moral power and a condition of our vulnerability. (6)
To protect the fragility of the person from the zombie described by science we need to recall–the corpse! (The problem, of course, is that we have possible subpersonal explanations for post-mortem care-taking rituals as well, such as those proposed, for instance, by Pascal Boyer (2001)). The idea he develops calls for us to begin managing our traditional ‘belief environments’ the way we manage any other natural environment threatened by science and its consequences. And the best way to do this, he suggests, is to begin encouraging a person-friendly, doxastic ecological mind-set: “If we want to maintain the momentousness of all decisions about life and death, and take the steps that elevate the decision beyond the practicalities of the moment, we need to secure the appreciation of this very fact, and enliven the imaginations of people so that they can recognize, and avoid wherever possible, and condemn, activities that would tend to erode the public trust in the presuppositions about what is–and should be–unthinkable.”
A slippery slope couched in moral indignation: the approach that failed when employed against evolution (against the mechanization of our origin), and will almost certainly fail against the corresponding mechanization of our soul. Surely any real solution to the problem of ‘getting too subpersonal’ has to turn on the reason why the subpersonal so threatens the personal. We’re simply tossing homilies to the wind, cluck-clucking in disapproval, otherwise. No. It’s clear the problem must be understood. And once again, the obvious explanation seems to be that the ‘hegemony of mechanistic explanation,’ as Dennett calls it, is real in a way intentionality is not. How, for instance, should one interpret the situation I describe above? As a grift, a collection of unscrupulous persons manipulating another collection of unwitting persons? This certainly has a role to play in the kinds of ‘moral intuitions’ violated. But couldn’t the executives plead obligation? They have been charged, after all, with maximizing their shareholders’ ROI, and if mechanistic messaging is more effective than intentional messaging, if no laws are broken and no individuals are harmed, then what on earth could be the problem? Does potential damage to the manifest or traditional ‘belief environment,’ as Dennett has it, trump that obligation? Good luck convincing the most powerful institutions on the planet of that.
Otherwise, if it is the case that the mechanistic trumps the intentional (as neuromarketing, not to mention myriad other subpersonal approaches to the human, is making vividly clear), why are we talking about morality at all? Morality presumes persons, and this situation would seem to suggest there are no such things, not really, not now, not ever. Giving Occam his due, why not say no persons were harmed in this (or any other) case because no persons existed outside the skewed, parochial assumptions of the zombies involved: the smart zombies on the corporate side hacking the stupid zombies on the audience side?
What the hell is going on here? Seriously. Have we really been reduced to honouring corpses?
The sad fact is, this situation looks an awful lot like a magic show, where the illusion ticks along seamlessly only so long as certain information remains occluded. Various lapses (or as Dennett (1978) calls them, ‘tropisms’) can be tolerated, odd glimpses behind the curtain, hands too lethargic to fool the eye, but at some point, the assumptive economy that makes the illusion possible falls apart, and we witness the dawning of a far less magical aspect–a more desolate yet far more robust ‘level of explanation.’ In this picture, the whole of the human race is hardwired relative to themselves, chained before the magician of their own brain, seeing only what every other human being can see, and so remaining convinced they see everything that needs to be seen. Since the first true homo sapiens, the show has been seamless, save for the fact that none of it can be truly explained. But now that science has at last surmounted the complexities of the brain, more and more nefarious souls have begun sneaking peeks behind the curtain in the hope of transforming the personal show into a subpersonal scam…
Intentionality, in other words, depends on ignorance. This is what makes Dennett’s rapprochement between the personal and the subpersonal a matter of context, something dependent upon the future. Information accumulates given language and culture. The ‘compatibility’ he describes (accurately, I think, though more coyly than I would wish) is the compatibility of a magician watching his crosstown rival’s show, the compatibility of seeing the magic, because it is flawlessly performed, yet knowing the mechanics of the illusion all the same.
More importantly, intentionality depends on ignorance of mechanism, which is to say, the very ignorance science is designed to overcome. Only now are we seeing the breakdown in compatibility he feared in 1971. Why? Because mechanistic knowledge is progressive in a way that intentional knowledge is not, and so pays ever greater dividends. The sciences of the brain are allowing more and more people to leave the audience and climb onto the stage. The show is becoming more and more discordant, more difficult to square with the illusion of seeing everything there is to see.
The manifest image is becoming more and more inchoate. Neuromarketing is beginning to show, on a truly massive scale, how to see past the illusion of the person.
Why illusion? Throughout his corpus, Dennett adamantly insists on the objectivity of the intentional stance, that its predictive and explanatory power means that it picks out ‘real patterns’ (1991). Granting this is so (because one could argue that the only ‘intentional stance’ is the one belonging to philosophers attempting to cognize what amounts to scraps of metacognitive information), the patterns ‘picked out’ are both blinkered and idiosyncratic. Dennett acknowledges as much, but thinks this parochial objectivity licenses second-order, pragmatic justifications. He is honest enough to his pragmatism to historicize these justifications, to acknowledge that a day may come. Likewise, he is honest enough to the theoretical power of science to resist contextualism tout court, to avoid the hubris of transforming the natural into a subspecies of the cultural on the strength of something so unreliable as philosophical speculation.
But now that the inevitability of that ‘day’ seems to be clearly visible, it becomes more difficult to see how his second-order pragmatism isn’t tendentious, or even worse, question-begging. Dennett wants us to say we are mechanisms (what else would we be?) that take ourselves for persons for good reason. When arguing against ‘greedy reduction’ (Dennett, 1995), he leans hard on that last phrase, and only resorts to the predicate when he has to. He relentlessly emphasizes the pragmatic necessity of the personal. When arguing against original intentionality, he reverses emphasis, showing how the subpersonal grounds the personal, how the ‘skyhooks’ of tradition are actually ‘cranes’ (1995), or how explaining the ‘magic of consciousness’ amounts to explaining a certain evolutionary trick (2003, 2005).
This ‘reversal of emphasis’ strategy has served him, not to mention philosophy and cognitive science, well (see Elton, 2003) over some 40-plus years. But with the rise of industries like neuromarketing, I submit that the contextual grounds that warrant his intentional emphasis are dissolving beneath his feet simply because they are dissolving beneath everybody’s feet. Does he really think treating the intentional as an ‘endangered ecology’ will allow us to prevent, let alone resolve, problems like neuromarketing? The simple need to become proactive about our belief environment, to institute regimes of explicit and implicit ‘make-think,’ demonstrates–rather dramatically one would think–that we have crossed some kind of fundamental threshold, the very one, in fact, that he worried about early in his philosophical career.
Things are simply getting too subpersonal. Dennett wants us to say we are mechanisms that take ourselves for persons for good reason. What he really should say at this point, as a naturalist opposed to a pragmatist, is that we are mechanisms that take ourselves for persons, for reasons science is only beginning to learn.
The more subpersonal information that becomes available, the more isolated and parochial the person will seem to become. Quine’s ‘dramatic idiom’ is set to become increasingly hysterical unless employed as a shorthand for the mechanical. Why? Because the sciences, for better or worse, monopolize theoretical cognition–it’s all mere philosophy otherwise. This is why Dennett referred to the prospect of depersonalization as ‘dire’ in 1971, and why his call to become stewards of our doxastic ecology rings so hollow in 2006. No matter how prone philosophers are to mistake rank speculation for knowledge, one can rely on science to show them otherwise. This is what I’ve elsewhere referred to as the ‘Big Fat Pessimistic Induction.’ The power of mechanism, the power of the subpersonal, will continue to grow as scientific knowledge progresses–period.
This is also the scenario I sketch in my novel Neuropath, a near-future where the social and cultural dissociation between knowledge and experience has become obviously catastrophic, an illustration of the Semantic Apocalypse and the kind of ‘Akratic Culture’ we might expect to rise in its wake. Dennett uses the corpse analogy above to impress upon us the importance of doxastic consequences, the idea that failing to honour corpses as para-persons undermines the ecology that demands we honour persons as well. But what if this particular ecological collapse is going to happen regardless? Throwing verbiage at science, no matter how eloquent, how incendiary, will not make it stop, which means intentional conservatism, no matter how well ‘intentioned,’ will only serve to drag out the inevitable.
Radicalism is the only way forward. Rather than squandering our critical resources on attempts to salvage the show, perhaps we need to shoo it from the stage, get down to the hard work of reinventing ourselves.
If intentionality is like a magic trick, then the accumulation of information regarding the neurofunctional specifics of consciousness will render it progressively more incoherent. Intentionality, in other words, requires the Only-game-in-town-effect at the level of praxis. When it becomes systematically rational for a person to treat others, even themselves, as mechanisms, Dennett lacks the ‘contextual closure’ he requires to convincingly promote compatibility. It is not always best to treat others as persons. Given the way the subpersonal trumps the personal, it pays to put ‘persons’ on notice even in the absence of lapses of rationality–perhaps especially in the absence of lapses. The other guy, after all, could be doing the same with you. There is a gaping difference, in other words, between the intentional stance we necessarily take and the intentional stance we conditionally take. Certainly we are forced to continue relying on intentional idioms, as I have throughout this very post, but we all possess some understanding of the cognitive limitations of that idiom, the fact that we, in some unnerving fashion, are speaking from a kind of conceptual dream. In Continental philosophical terms, you might say we’re speaking ‘under erasure.’ We communicate understanding we are mechanisms that take ourselves to be persons for reasons we are only beginning to learn.
What might those reasons look like? I’ve placed my chips on the Blind Brain Theory. The ‘apparent wholeness’ of the person is a result of generalized informatic neglect–or ‘adaptive anosognosia.’ Our deliberative cognitive systems (themselves at some level ‘subpersonal’) are oblivious to the neural functions they discharge–they suffer a kind of WYSIATI (Kahneman, 2012) writ large. As a result they confuse their parochial glimpse for the entire show. Call it the ‘Metonymic Error,’ or ‘ME,’ a sort of ‘mereological fallacy’ (Bennett and Hacker, 2003) in reverse, the cognitive illusion that leads fragmentary, subpersonal assemblages to mistake themselves for something singular and whole.
And as I hope should be clear, it is a mistake. ‘Apparent wholeness’ (sufficiency) is a cognitive illusion in the same manner that asymmetric insight is a cognitive illusion. The fact that both are adaptive doesn’t change this. Discharging subreptive functions doesn’t make misconceptions less illusory (any more than does the number of people labouring under them). The real difference is simply the degree to which our discourses seem to depend on the veracity of the former, the way my use of ‘mistake’ above, for instance, seems to beg the very intentionality I’m claiming is discredited. But, given that I’m deploying the term ‘under erasure,’ all this speaks to is the exhaustive nature of the illusion–which is to say, to our mutual cognitive anosognosia. Accusing me of performative contradiction not only begs the question, it makes the above examples regarding ‘subpersonalization’ very, very difficult to understand. I need only ask for an account of why mechanism trumps intentionality while leaving it intact.
But given that this is a form of nonpathological anosognosia we are talking about, which is to say, a cognitive deficit regarding cognitive deficits, people are bound to find it exceedingly difficult to recognize. As I’ve learned first hand, the reflex is to simply fall back on the manifest image, the way pretty much everyone in philosophy and cognitive science seems inclined to do, and to incessantly repeat the question: How could persons be illusions if they feature in so much ‘genuine understanding’?
The question no one wants to ask is, What else could they feature in?
Or to put the question differently: Imagine it were the case we had a thoroughly fragmentary, distorted, depleted intentional understanding, but we possessed brains that had nevertheless evolved myriad ways to successfully anticipate and coordinate with others: what would our cognition look like?
Idiosyncratic. Baffling… And yet mysteriously effective all the same.
Some crazy shit, I know. All these years our biggest worry was that we were digging our grave with science, never suspecting we might find our own corpse before hitting bottom.
.
References
.
Bennett, M. R., and Hacker, P. M. S. (2003). Philosophical Foundations of Neuroscience.
Boyer, P. (2001). Religion Explained: The Evolutionary Origins of Religious Thought.
Dennett, D. C. (1969). Content and Consciousness.
Dennett, D. C. (1981). “Mechanism and Responsibility,” Brainstorms.
Dennett, D. C. (1991). “Real Patterns.”
Dennett, D. C. (1995). Darwin’s Dangerous Idea.
Dennett, D. C. (2003). “Explaining the ‘Magic’ of Consciousness.”
Dennett, D. C. (2005). Sweet Dreams: Philosophical Obstacles to a Science of Consciousness.
Dennett, D. C. (2006). “How to Protect Human Dignity from Science.”
Dennett, D. C. (2007). “Heterophenomenology Reconsidered.”
Dennett, D. C. (2011). “Kinds of Things–Towards a Bestiary of the Manifest Image.”
Elton, M. (2003). Daniel Dennett: Reconciling Science and Our Self-Conception.
Heidegger, M. (1949). “Letter on ‘Humanism.’”
Kahneman, D. (2012). Thinking, Fast and Slow.
Wittgenstein, L. (1974). Tractatus Logico-Philosophicus.
December 14, 2012
Facing, Chirping, Hawking my Wares
Definition of the Day – Philosophy: 1) A form of ethereal waste known to clog the head. See, Pornography, Conceptual Forms of.
.
I actually have several announcements to make, but first I would like to thank Benjamin for a discussion every bit as cool and incisive as his post. By all means continue: the comments never close ’round these parts.
Otherwise, I want to officially announce that I’m now officially official. To wit:
The Official Website: The changes are in. Thanks one and all for your feedback. If all goes well, these very words should be glowing there at this very moment now.
The Official Facebook Page: Apparently, this is something that has to be done. Apparently, ‘word o’ mouth’ ain’t enough anymore, it’s gotta be word o’ face. Apparently, your attitudes regarding Facebook ‘say a lot’ about your attitudes to the human race as a whole. But I can’t help it. I can’t help looking at Facebook in neuroprosthetic terms, like an informatic tapeworm exploiting a variety of subpersonal systems, not the least of which being the occipital and fusiform face areas. If anything proves that I would be a wild-eyed hermit dressed in putrefying goatskins in any age other than this one, my totally irrational antipathy to Facebook has to be it. The World loves it–so of course it has to be poison! And then there’s the Book of Revelation. Maybe the number that had Jack of Patmos twisting in his goatskins was a Hamming number, the ugliest number of all.
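(An aside for the programmers: Hamming numbers–numbers with no prime factors beyond 2, 3, and 5–are literally called ‘ugly numbers’ in coding circles, which is the joke. A quick check of my own devising shows the Number of the Beast doesn’t quite qualify:)

```python
# Illustrative aside: 'Hamming numbers' (a.k.a. 'ugly numbers') have no prime
# factors other than 2, 3, and 5.
def is_hamming(n: int) -> bool:
    if n < 1:
        return False
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

print(is_hamming(666))  # False: 666 = 2 * 3**2 * 37, and 37 spoils it
print(is_hamming(540))  # True: 540 = 2**2 * 3**3 * 5
```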
The Devil’s Chirp: Okay, so the ‘Devil’s Tweet’ was already taken, but I’m actually glad in retrospect. Ambrose Bierce’s The Devil’s Dictionary is one of my favourite books, the near pitch-perfect combination of sarcasm and wisdom. My hope is to turn the Devil’s Chirp into a worthy homage to his assay into Satanic redefinition of the hypocritical human soul, but I’ll settle for a cheap knock-off. Now I just gotta figure out how it works. I have a hard time restricting myself to 140 characters in my novels, for Chrissakes. At the very least it’s proof I’ve sold my soul to the lowest bidder.
CBC Ideas: I’m due at the studio this morning.
And I felt wired already…
December 10, 2012
The Rant Within the Undead God – by Benjamin Cain
Some centuries before the Common Era, in a sweltering outskirt of the ancient Roman Empire, a nameless wanderer, unkempt and covered in rags, climbed atop a boulder in the midst of a bustling market, cleared his throat and began shouting for no apparent reason:
“Mark my harangue, monstrous abode of the damned and you denizens of this godforsaken place! I have only my stern words to give you, though most of you don’t recognize the existential struggle you’re in; so I’ll cry foul, slink off into the approaching night, and we’ll see if my rant festers in your mind, clearing the way for alien flowers to bloom. How many poor outcasts, deranged victims of heredity, and forlorn drifters have shouted doom from the rooftops? In how many lands and ages have fools kept the faith from the sidelines of decadent courts, the aristocrats mocking us as we point our finger at a thousand vices and leave no stone unturned? And centuries from now, many more artists, outsiders, and mystics will make their chorus heard in barely imaginable ways, sending their subversive message, I foresee, from one land to the next in an instant, through a vast ethereal web called the internet. Those philosophers will look like me, unwashed and ill-fed, but they’ll rant from the privacy of their lairs or from public terminals linked by the invisible information highway. Instead of glaring at the accused in person, they’ll mock in secret, parasitically turning the technological power of a global empire against itself.
“But how else shall we resist in this world in which we’re thrown? No one was there to hurl us here where as a species we’re born, where we pass our days and lie down to die–not we, who might have been asked and might have refused the offer of incarnation, and not a personal God who might be blamed. Nevertheless, we’re thrown here, because the world isn’t idle; natural forces stir, they complexify and evolve; this mindless cosmos is neither living nor dead, but undead, a monstrous abomination that mocks the comforting myths we take for granted, about our supernatural inner essence. No spirit is needed to make a trillion worlds and creatures; the undead forces of the cosmos do so daily, creating and destroying with no rational plan, but still manifesting a natural pattern. What is this pattern, sewn into the fabric of reality? What is the simulated agenda of this headless horseman that drags us behind the mud-soaked hooves of its prancing beast? Just this: to create everything and then to destroy everything! Let that sink in, gentle folk. The universe opens up the book of all possibilities, has a glance at every page with its undead, glazed-over eyes, and assembles minuscule machines–atoms and molecules–to make each possibility an actuality somewhere in space and time, in this universe or the next, until each configuration is exhausted and then all will fly apart until not one iota of reality remains to carry out such blasphemous work. How many ways can a nonexistent God be shown up, I ask you? Everything a loving God might have made, the undead leviathan creates instead, demonstrating spirit’s superfluity, and then that monster, the magically animated carcass we inhabit, will finally reveal its headlessness, the void at the center of all things, and nothing shall be left after the Big Rip.
“I ask again, how else to resist the abominable inhumanity of our world, but to make a show of detaching from some natural processes of cosmic putrefaction, to register our denunciation in all existential authenticity, and yet to cling to the bowels of this beast like the parasites we nonetheless are? And how else to rebel against our false humanity, against our comforting delusions, other than by replacing old, worn-out myths with new ones? For ours is a war on two fronts: we’re faced with a horrifying natural reality, which causes us to flee like children into a world of make-believe, whereupon we outgrow some bedtime stories and need others to help us sleep.
“We, the conquered masses of what will one day be called the ancient world, have become disenchanted with Roman myths, as the cynicism of the elites, who expect us to honour the self-serving Roman spin on local fables, infects the whole Roman world. Now that Alexander the Great has opened the West to the East, we long for revitalization from the fountain of exotic Eastern mysticism, just as millennia from now I foresee that the wisdom of our time will inspire those who will call themselves modern, liberal, and progressive. And just as our experiments with Eastern ideas will afford our descendants a hiding place in Christian fantasies, which will distract Europeans from their Dark Age after the fall of Rome, so too the modern Renaissance will bear tainted fruit, as technoscientific optimism will give way to the postmodern malaise.
“Our wizards and craftsmen are dunces compared to the scientists and engineers to come. Romans believe they’ve mastered the forces of nature, and indeed their monuments and military power are staggering. But skeptics and rationalists will eventually peer into the heart of matter and into the furthest reaches of the universe, and so shall confirm once and for all the horrifying fact that nature is the undead, self-shaping god. The modernists will pretend to be unfazed by that revelation as they exploit natural processes to build wonders that will encourage the masses: diseases will be cured and food will be plentiful; all races, creeds, and sexes will be made legally equal; and–lowly mammals that they are–the future folk will personally venture into outer space! Alas, though, I discern another motif in reality’s weave, besides the undead behemoth’s implicit mockery of God: civilizations rise and fall according to the logic of the Iron Law of Oligarchy. Take any group of animals that need to live together to survive, and they will spontaneously form a power hierarchy, as the group is stabilized by a concentration of power that enables the weaker members to be most efficiently managed. Power corrupts, of course, and so leaders become decadent and their social hierarchy eventually implodes. The Roman elite that now rules most of the known world will overreach in its arrogance and will face the wrath of the hitherto conquered hordes. As above, so below: the universe actualizes each possibility only to extinguish it in favour of the next cosmic fad.
“And so likewise in the American civilization to come, plutocrats will reign from their golden toilets, but their vanity will undo their economic hegemony as they’ll take more and more of the nation’s wealth while the masses of consumers stagnate like neglected cattle, again laying the groundwork for social implosion. For a time, that future world I foresee will trust in the ideal of each person’s liberty, without appreciating the irony that when we remove the social constraints on freedom of expression, we clear the way for the more indifferent natural constraint of the Iron Law to take effect, and so we establish a more grotesque rule of the few over the many. Thus, American government will be structured to prevent an artificial tyranny, by establishing a conflict between its branches and by limiting the leader’s terms of office, but this hamstringing of government will create a power vacuum that will be filled by the selfish interests of the mightiest private citizens. In whichever time or place they’re found, those glorious, sociopathic few are avatars of undead nature, ruling without conscience or plan for the future; they build economic or military empires only to bring them crashing down as their animal instincts prove incapable of withstanding temptation. Conservatives excel at devising propaganda to rationalize oligarchy; modern liberals will experiment with progressive socialism only to inadvertently confirm the Iron Law, and so liberalism will give way to postmodern technocracy, to the dreary pragmatism of maintaining the oligarchic status quo while the hollow liberals pretend to offer a genuine political alternative to conservatism.
“What myths we live by to avoid facing the horror of our existential predicament! We personify the sun and the moon the way a child makes toys even out of rocks and twigs. The scientists of the far future, though, will investigate not just the outer mechanisms of nature, but will master the workings of human thought. They’ll learn that our folk tales about the majesty of human nature are at best legends: we are not as conscious, rational, or free as we typically assume. Our ridiculous lust for sex proves this all by itself. We have contempt for older virgins who fail to attract a mate, even though almost everyone would be mortified to be caught in the sex act; at least no one remains to pity the throngs of copulating human animals, save the marginalized drifters who detach from the monstrous world. Psychologists will discover that while we can deliberate and attend to formal logic, we also make snap, holistic judgments, which is to say associative, emotional and intuitive leaps. Most of our mind is unconscious and reason is largely a means of manipulating others for social advantage. But even as modern rationalists will learn as much, rushing to exploit human weaknesses for profit, they will praise ultraconsciousness, ultrarationality and ultrafreedom. These secular humanists will worship their machines and a character named Spock, and they’ll assume that if only society were properly managed, progress would ensue. Thus, Reason shall render all premodern delusions obsolete, but that last, modern delusion of rationalism will be overcome only through postmodern weariness with all ideologies.
“The curse of reason is that thinking enough to discover the appalling truth of natural life prevents the thinker from being happy. That curse might be mitigated, though, if we recognize that the irrational part of our mind has its own standards. We crave stories to live by, models to admire, and artworks to inspire us. Our philosophical task as accursed animals is to assemble all that we learn into a coherent worldview, reconciling the world’s impersonality with our crude and short-sighted preferences. Happiness is for the ignorant or the deluded sleep-walkers; those who are kept awake by the ghost story of unpopular knowledge are too melancholy and disgusted by what they see to take much joy. When you face the facts that there is no God, no afterlife, no immortal soul, no transcendent human right, no perfect justice, no absolute morality, no nonhuman meaning of life, and no ultimate hope for the universe, you’ll understand that a happy life is the most farcical one. We sentient, intelligent mammals are cursed to be alienated from the impersonal world and from the myths we trust to personalize our thought processes. We are instinctive story-tellers: our inner voice narrates our deeds as we come to remember them, and we naturally gossip and anthropomorphize, evolved as we are to negotiate a social hierarchy. But how do we cope with the fact that the truest known narrative belongs to the horror genre? How shall we sleep at night, relative children that we all are, preoccupied with the urges of our illusory ego, when we’re destined to look askance at optimistic myths, inheriting the postmodern horror show?
“Shall I proceed to the final shocker of this woeful tale that enervates those with the treacherous luxury of freedom of thought? Given that nature is the undead self-creator of its forms, what is the last word, the climax of this rant within the undead god? While there’s no good reason to believe there is or ever was a transcendent, personal deity, we instinctively understand things by relating them to what’s most familiar, which is us; thus, we personify the unknown, fearing unseen monsters in the dark, and so even atheists are compelled to blame their misfortune on some deity, crying out to no one when they accidentally injure themselves. But if there’s no room in nature for this personal God whose possible existence we’re biologically compelled to contemplate, and there’s nothing for this God to do in the universe that shapes itself, the supreme theology is the most dire one, namely the speculation that Philipp Mainländer will one day formulate before promptly going insane and killing himself: God is literally dead. God committed elaborate suicide by transforming himself into something that could be perfectly destroyed, which is the material universe. God was corrupted by his omnipotence and driven insane by his alienation, and so the creativity of his ultimate act is an illusion: the world’s evolution is the process of God’s self-destruction, and we are vermin feeding off of God’s undying corpse. Sure, this is just a fiction, but it’s the most plausible way of fitting God–and so also our instinctive, irrational theistic inclination–into the rest of the ghastly postmodern worldview to come.
“Is there a third pattern manifesting throughout the cosmos, one of resistance and redemption? Do intelligent life forms evolve everywhere only to discover the tragedy of their existential situation, to succumb to madness or else to respond somehow with honour and grace? Perhaps we’ll learn to re-engineer ourselves by merging with our machines so that we no longer seek a higher purpose and we’ll reconcile ourselves to our role as agents of the universe’s decay and ultimate demise. Maybe an artistic genius will emerge who will enchant us with a stirring vision of how we might make the best of our predicament. From the skeptical, pessimistic viewpoint, which will be so easily justified in that sorrowful postmodern time, even our noblest effort to overcome our absurd plight will seem just another twist in the sickening melodrama, yet another stage of cosmic collapse; a cynic can afford to scoff at anything when his well of disgust is bottomless. But there’s a wide variety of human characters, as befits our position in a universe that tries out and discards all possibilities. I rant to the void until my throat aches and my eyes water. The undead god has no ears to hear, no eyes to behold its hideous reflection, and no voice with which to apologize or to instruct–unless you count the faculties of the stowaway creatures that are left alone to make sense of where they stand. So may some of you grow magnificent flowers from the soil of my words!”
The sun had set and most of the townsfolk had long since returned to their homes, having ignored the doomsayer or taken the opportunity to spit upon him. A few remained until the end of his diatribe, their mouths hanging open in dismay; and when they glanced at each other, asking what should be done, they lost sight of the preacher, who had indeed scurried away as promised, homeless, into the dark.