R. Scott Bakker's Blog

September 4, 2012

The ‘Person Fallacy’

Aphorism of the Day: Am I a man pinned for display, dreaming I am a butterfly pinned for display, or am I a butterfly pinned for display, dreaming that I am a man pinned for display? Am I the dream, the display… the pins?


.


Things have been getting pretty wank around here lately, for which I apologize. If the market is about people ‘voting with their feet,’ then nothing demonstrates the way meaning in contemporary society has become another commodity quite so dramatically as the internet. Wank goes up. Traffic goes down. It really is that simple.


Why do people, in general, hate wank? It makes no sense to them. We have a hardwired allergy to ‘opaque’ communicative contexts. I crinkle my nose like anyone else when I encounter material that mystifies me. I assume that something must be wrong with it instead of with my knowledge-base or meagre powers of comprehension. And go figure. I’m as much my own yardstick for what makes sense as you are of yours.


This is why there is a continual, and quite commercial, pressure to be ‘wank free,’ to make things as easy as possible for as many people as possible. Though I think this can be problematic in a number of ways, I actually think reaching people, particularly those who don’t share your views, is absolutely crucial. I think ‘lowest common denominator’ criticisms of accessibility have far more to do with cultivating the ingroup prestige of wankers than anything. Culture is in the process of fracturing along entirely different lines of self-identification, thanks to the information revolution. And this simply ups the social ante of reaching across those lines.


But, as I keep insisting, there is a new kind of wank in town, one symptomatic of what I call the Semantic Apocalypse, which is to say, the utter divorce of experience, the ‘meaning world’ of cares and projects that characterizes your life, from knowledge, the ‘world world’ as revealed by science. This new wank, I believe anyways, is in the process of scientific legitimation. It is, in other words, slowly being knitted into fact with the accumulation of more scientific information. It is, in short, our future–or something like it.


So I thought it would be worthwhile to give you all an example, with translation, from what is one of the world’s premier journals, Behavioral and Brain Sciences. The following is taken from a response to Peter Carruthers’s “How we know our own minds,” published in 2009. Carruthers’s argument, in a nutshell, is similar to one I’ve made here several times in several ways: that we understand ourselves, by and large, the same way we understand others: by interpreting behaviour. In other words, even though you assume you have direct, introspective access to your beliefs and motives, in point of fact, you are almost as much ‘locked out’ of your own brain as you are the brains of others. As a growing body of experimental and neuropathological evidence seems to suggest, you simply hypothesize what your ‘gut brain’ is doing, rather than accessing information from the source.


What follows is Bryce Huebner and Dan Dennett’s response to Carruthers’s account, interpolated with explanations of my own–as well as a little commentary. I offer it as an example of where our knowledge of the ‘human’ is headed. As I mention in CAUSA SUIcide, we are entering the ‘age of the subhuman,’ the decomposition of the soul into its component parts. I take what follows as clear evidence of this.


Human beings habitually, effortlessly, and for the most part unconsciously represent one another as persons. Adopting this personal stance facilitates representing others as unified entities with (relatively) stable psychological dispositions and (relatively) coherent strategies for practical deliberation. While the personal stance is not necessary for every social interaction, it plays an important role in intuitive judgments about which entities count as objects of moral concern (Dennett 1978; Robbins & Jack 2006); indeed, recent data suggest that when psychological unity and practical coherence are called into question, this often leads to the removal of an entity from our moral community (Bloom 2005; Haslam 2006).


This basically restates Dennett’s long-time ‘solution’ to the problems that ‘meaning talk’ poses for science. What he’s saying here, quite literally, is that ‘person’ is simply a convenient way for our brains to make sense of one another, one that is hardwired in. A kind of useful fiction.


Human beings also reflexively represent themselves as persons through a process of self-narration operating over System 1 processes. However, in this context the personal stance has deleterious consequences for the scientific study of the mind. Specifically, the personal stance invites the assumption that every (properly functioning) human being is a person who has access to her own mental states. Admirably, Carruthers goes further than many philosophers in recognizing that the mind is a distributed computational structure; however, things become murky when he turns to the sort of access that we find in the case of metacognition.


‘System 1’ here refers to something called ‘dual process cognition,’ the focus of Daniel Kahneman’s Thinking, Fast and Slow, a book which I’ve mentioned several times here at TPB. System 1 refers to automatic cognition, the kinds of problem-solving your brain does without effort or awareness, and System 2 refers to deliberative cognition, the kinds of effort-requiring problem-solving you do. What they are saying is that the ‘personal stance,’ thinking of ourselves and others as persons, obscures investigation into what is really going on. Why? Because it underwrites the assumption that we are unified and that we have direct access to our ‘mental states.’ They applaud Carruthers for seeing past the first illusion, but question whether he runs afoul of the ‘person fallacy’ in his consideration of ‘metacognition,’ our ability to know our knowing, desiring, and deciding.


At points, Carruthers notes that the “mindreading system has access to perceptual states” (sect. 2, para. 6), and with this in mind he claims that in “virtue of receiving globally broadcast perceptual states as input, the mindreading system should be capable of self-attributing those percepts in an ‘encapsulated’ way, without requiring any other input” (sect. 2, para. 4). Here, Carruthers offers a model of metacognition that relies exclusively on computations carried out by subpersonal mechanisms. However, Carruthers makes it equally clear that “I never have the sort of direct access that my mindreading system has to my own visual images and bodily feelings” (sect. 2, para. 8; emphasis added). Moreover, although “we do have introspective access to some forms of thinking . . . we don’t have such access to any propositional attitudes” (sect. 7, para. 11; emphasis over “we” added). Finally, his discussion of split-brain patients makes it clear that Carruthers thinks that these data “force us to recognize that sometimes people’s access to their own judgments and intentions can be interpretative” (sect. 3.1, para. 3, emphasis in original).


This passage isn’t quite so complicated as it might seem. They are basically juxtaposing Carruthers’s ‘person-free’ mapping of information access, which system receives information from which system, with his ‘person-centric’ mapping of information access betrayed by his use of first-person pronouns. The former doesn’t take any account of whether you are conscious of what’s going on or not. The latter does.


Carruthers, thus, relies on two conceptually distinct accounts of cognitive access to metarepresentations. First, he relies on an account of subpersonal access, according to which metacognitive representations are accessed by systems dedicated to belief fixation. Beliefs, in turn, are accessed by systems dedicated to the production of linguistic representations; which are accessed by systems dedicated to syntax, vocalization, sub-vocalization, and so on. Second, he relies on an account of personal access, according to which I have access to the metacognitive representations that allow me to interpret myself and form person-level beliefs about my own mental states.


This passage simply recapitulates and clarifies the former. Carruthers is mixing up his maps, swapping between maps where information is traded between independent city-states, and maps where information is traded between independent city-states and the Empire of the person.


The former view that treats the mind as a distributed computational system with no central controller seems to be integral to Carruthers’ (2009) current thinking about cognitive architecture. However, this insight seems not to have permeated Carruthers’ thinking about metacognition. Unless the “I” can be laundered from this otherwise promising account of “self-knowledge,” the assumption of personal access threatens to require an irreducible Cartesian res cogitans with access to computations carried out at the subpersonal level. With these considerations in mind, we offer what we see as a friendly suggestion: translate all the talk of personal access into subpersonal terms.


Carruthers recognizes that the person is a fiction, something that our brains project onto one another, but because he lapses into the person stance in his consideration of how the brain knows itself directly (metacognition), his account risks assuming the reality of the person, a ‘Cartesian res cogitans,’ or ‘thinking substance.’ To avoid this, they recommend he clean up his theory and get rid of the person altogether.


Of course, the failure to translate personal access into the idiom of subpersonal computations may be the result of the relatively rough sketch of the subpersonal mechanisms that are responsible for metarepresentation. No doubt, a complete account of metarepresentation would require an appeal to a more intricate set of mechanisms to explain how subpersonal mechanisms can construct “the self” that is represented by the personal stance (Metzinger 2004). As Carruthers notes, the mindreading system must contain a model of what minds are and of “the access that agents have to their own mental states” (sect. 3.2, para. 2). He also notes that the mindreading system is likely to treat minds as having direct introspective access to themselves, despite the fact that the mode of access is inherently interpretative (sect. 3.2). However, merely adding these details to the model is insufficient for avoiding the presumption that there must (“also”) be first-person access to the outputs of metacognition. After all, even with a complete account of the subpersonal systems responsible for the production and comprehension of linguistic utterances, the fixation and updating of beliefs, and the construction and consumption of metarepresentations, it may still seem perfectly natural to ask, “But how do I know my own mental states?”


They suspect that Carruthers lapses into the person fallacy because he lacks an account of the subpersonal mechanisms that generate ‘metarepresentations’–representations of the brain’s representations and representational capacities–which in turn require an account of the subpersonal mechanisms that generate the self, such as those postulated by Thomas Metzinger in Being No One. Short of this more thorough (and entirely subpersonal) account, the question of the Empire (person) and what crosses its borders becomes very difficult to avoid. Again, it’s important to remember that the ‘person’ is an attribution, not a thing, not even an illusory thing. There just is no Empire according to Huebner and Dennett, so including imperial border talk in any scientific account of cognition is simply going to generate confusion.


The banality that I have access to my own thoughts is a consequence of adopting the personal stance. However, at the subpersonal level it is possible to explain how various subsystems access representations without requiring an appeal to a centralized res cogitans. The key insight is that a module “dumbly, obsessively converts thoughts into linguistic form and vice versa” (Jackendoff 1996). Schematically, a conceptualized thought triggers the production of a linguistic representation that approximates the content of that thought, yielding a reflexive blurt. Such linguistic blurts are protospeech acts, issuing subpersonally, not yet from or by the person, and they are either sent to exogenous broadcast systems (where they become the raw material for personal speech acts), or are endogenously broadcast to language comprehension systems which feed directly to the mindreading system. Here, blurts are tested to see whether they should be uttered overtly, as the mindreading system accesses the content of the blurt and reflexively generates a belief that approximates the content of that blurt. Systems dedicated to belief fixation are then recruited, beliefs are updated, the blurt is accepted or rejected, and the process repeats. Proto-linguistic blurts, thus, dress System 1 outputs in mentalistic clothes, facilitating system-level metacognition.


I absolutely love this first line, if only because of the ease with which it breezes past the radical counterintuitivity of what is being discussed. The theoretical utility of the ‘personal stance’ is that it allows them to embrace the sum of our intuitive discourse regarding persons by simply appending the operator: ‘from the person stance.’ The same way any fortune-cookie fortune can be turned into a joke by adding ‘in bed’ to the end, any ‘everyday’ claim can be ‘affirmed’ using the person stance. “Yes-yes, of course you have access to your own thoughts… that is, when considered from the personal stance.”


The jargon-laden account that follows simply outlines a mechanistic model of what a subpersonal account of the brain knowing itself might look like, one involving the shuttling of information to and fro between various hypothesized devices performing various hypothesized functions that culminate in what is called metacognition, without need of any preexisting ‘inner inspector’–or notion of ‘introspection.’
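For readers who find the shape of the loop easier to see in mechanical terms: here is a deliberately crude toy sketch of the generate-interpret-test cycle Huebner and Dennett describe. Every function name, the dropout rule, and the coherence test are my own inventions for illustration only; nothing here comes from their commentary or Jackendoff’s model. The only point is the architecture: blurts are produced, interpreted, and tested against standing beliefs with no central ‘I’ anywhere in the loop.

```python
# Toy sketch (all names and rules invented) of the subpersonal "blurt" cycle:
# generate a blurt, interpret it, test it, accept or reject, repeat.
import random

random.seed(0)  # seeded only so the demo below is reproducible

def blurt(thought):
    """System 1: dumbly convert a thought into an approximate linguistic form.
    Approximation is modeled here as random word dropout."""
    return " ".join(w for w in thought.split() if random.random() > 0.2)

def interpret(utterance):
    """Mindreading system: form a candidate belief from the blurt's content."""
    return set(utterance.split())

def coheres(candidate, beliefs):
    """Belief fixation: accept the blurt only if it doesn't clash with
    standing 'beliefs' (here crudely modeled as a set of forbidden words)."""
    return not (candidate & beliefs["forbidden"])

def metacognitive_loop(thought, beliefs, max_cycles=10):
    """Cycle blurts until one survives testing (exogenous broadcast),
    or give up. Note: no step in the loop consults a central 'person'."""
    for _ in range(max_cycles):
        utterance = blurt(thought)
        candidate = interpret(utterance)
        if coheres(candidate, beliefs):
            beliefs["accepted"] |= candidate  # beliefs updated subpersonally
            return utterance                  # the overt speech act
    return None  # nothing survived testing; the system stays silent

beliefs = {"forbidden": {"unicorns"}, "accepted": set()}
print(metacognitive_loop("the rain ruined the game", beliefs))
```

Again, this is cartoon machinery, not cognitive science; its only virtue is that the ‘I’ appears nowhere in it, which is precisely the friendly suggestion Huebner and Dennett are making.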


Carruthers (2009) acknowledges that System 2 thinking is realized in the cyclical activity of reflexive System 1 subroutines. This allows for a model of metacognition that makes no appeal to a pre-existing I, a far more plausible account of self-knowledge in the absence of a res cogitans.


The point, ultimately, is that the inner inspector is as much a product as what it supposedly inspects. There is no imperial consumer, no person. This requires seeing that System 2 thinking, or deliberative cognition, is itself a recursive wrinkle in the way automatic System 1 functions are executed, a series of outputs that ‘you,’ thanks to certain, dedicated System 1 mechanisms, compulsively mistake for you.


Dizzy yet?


I’m sure that even my explication proved hopelessly inaccessible to some of you, and for that, I apologize. At the very least I hope that the gist got through: for a great deal of cognitive scientific research, you, the dude eating Fritos in front of the monitor, are a kind of mirage that must be seen through if science is to uncover the facts of what you really are. I imagine more than a few feel a sneer crawling across their face, thinking this is a perfect example of wank at its worst: a bunch of pompous nonsense leading a bunch of pompous eggheads down yet another pompous blind alley. But I assure you this is not the case. One of the things that amazes me surfing the web in pursuit of these issues is the degree to which it is being embraced by business. There’s neuromarketing, which takes all this information as actionable, but there’s economics as well. These guys are reverse-engineering the consumer, not to mention the voter.


And knowledge, as ever, is power, whether it flies in the face of experience or not.


Welcome to the Semantic Apocalypse.




August 27, 2012

WorldCon

Hey all!  This is Roger again.


WorldCon is right around the corner — temporally, for everyone; spatially, for me, since I live in Chicago.  I took a poll some time ago to see if there was any interest in a possible TPB meet-up of some sort.
So what do we think?  We have, as of now, seven attendees and twelve maybe-attendees.  So sound off!  Are you going to be at WorldCon?  Want to meet up?  If so, do you have any idea how we might best go about doing that?


The Con schedule can be found here.


As for me, I’m leaving town on Saturday (unfortunately), so of the big days/nights, I’m only available Friday.




Why Philosophy? And Why Has the Soul Become its Stronghold?

Aphorism of the Day: Why me? is as honest a question as it is useless, given that no one deserves anything, least of all what they get.


.


It really is amazing how prone we are to overlook the obvious, especially when our knowledge of a field is genuinely deep. The question, Why philosophy? should rank with the most profound, most discussed questions within philosophy, but it isn’t. If you make a living being stymied, surely you would want to ask why you are stymied!


And yet, it’s scarcely considered, let alone mentioned. The answer, I’m sure most philosophers would tell you, is too obvious to be worth considering. Why philosophy? Because we don’t know. And since so much of philosophy is given over to the question of knowledge, they would argue that a great deal of philosophy is concerned with its own import and status. The problem is we just don’t know what the hell ‘knowing’ is.


The question, Why philosophy? in other words, follows the question, What is knowledge? Questions must be answered in the proper order to be answered at all.


Sounds sensible. But what if they got it backward? What if the question, Why philosophy? is actually prior to the question of, What is knowledge?


If you look at the history of philosophy you find a rather remarkable process of what might be called ‘reaching and yielding.’ Over the centuries, philosophy has retreated from countless questions and problematics, namely, those incorporated within the natural sciences. Why have they retreated? Because the questions asked eventually found empirical resolution. Knowledge came to the rescue…


See? the philosopher can say. I told you so.


But what if we scaled back our answer? What if we said something more simple, but perhaps equally mysterious? What if we said the questions asked found empirical resolution because information came to the rescue? The idea here would be that philosophy is the kind of inquiry that humans turn to in impoverished informatic conditions, when they have enough information to formulate the question, but not enough to decisively arbitrate between its potential answers. This would be why philosophy is something that generally moves in advance of the sciences. When we initially encounter a problem, we necessarily have limited informatic resources to work with, and so, like children disputing shapes in clouds, have no way of distinguishing the patterns we think we see from the patterns that actually exist.


Nature, in this cartoon, is a kind of bottomless, multi-stable image. Scientific measurement and experimentation are the ways we isolate signals from the noise of immediate nature and so accumulate information. Scientific instrumentation is the way we access information from beyond the sensory horizon of immediate nature. Scientific institutional practices are the way we isolate signals from the noise generated by human cognitive shortcomings. Mathematics is the way we code and so manipulate this information. And philosophy, ideally, is the way we provide the information required to get these processes of scientific information gathering off the ground.


Questions, an old slogan of mine goes, are how we make ignorance visible. Questions, in other words, are how we make information regarding the absence of information available. Before questions, informatic sufficiency is the assumptive default: when you don’t know that you don’t know, you assume that you know all you need to know. The ancient Sumerians never worried about near-Earth objects or coronal mass ejections and so on for the same reason we don’t worry about any of the myriad things our descendants will fret about: they simply lacked information regarding their lack of information.


Why philosophy? Because we lack information. We are finite systems, after all, and you might expect that any intelligent alien species, as finite, would also have their own science and philosophy, their own histories of reaching and yielding. But what makes ‘information’ a better candidate for answering our marquee question than ‘knowledge’?


For one, it seems to enable a more nuanced account of the relation between philosophy and science. To ask, Why philosophy? is to also ask, Why not philosophy? which is to say, Why science and not philosophy? The account provided above, I would argue, reveals the conceptually unwieldy, cumbersome nature of ‘knowledge.’ Knowledge is an end product, the result of information gathering. As such, its explanatory utility is limited–extremely so. For example, you could say that philosophy is a form of human inquiry that turns on found information, what we simply have at hand when we raise a question–‘armchair information,’ you might say.


Does ‘armchair knowledge’ make any sense? Of course not. We call it ‘armchair speculation’ for a reason. Information, in other words, allows us to span the gap between mere speculation and knowledge with a term that admits comparative gradations. Answers posed in conditions of informatic insufficiency we call speculation. Answers posed in conditions of informatic sufficiency we call knowledge.


For another, information need not be semantic. Information, unlike meaning, can be quantified, and so expressed in the language of mathematics, and so made amenable to empirical experimentation. We can, in other words, possess theoretical knowledge regarding information. Moreover, it reduces the risk of question-begging, given that meaning is perhaps the ‘great question’ within philosophy. If it turns out that meaning is the problem, the reason why we can only speculate about–only philosophize about–knowledge, then using knowledge to explain why we must resort to philosophy simply dooms us to speculation regarding speculation.


And lastly, information provides an entirely new way to characterize the history of philosophy, one that seems to shed no little light on the theoretical problems that presently bedevil a great number of philosophers. With information, we can characterize the retreat of philosophy and the advance of science in terms of complexity: the more complex the natural phenomena, the more information scientific knowledge requires. Thus, science has only now breached the outer walls of the human brain, the most complex thing we know of in the universe. Thus the preponderance of philosophy when it comes to matters of the soul.


In a certain sense, this narrative is obvious: Of course the complexity of the brain forced science to bide its time, refining and extending its repertoire of procedures and instrumentation, not to mention its knowledge base, before making serious inroads. Of course the soul became the stronghold of philosophy in the meantime, the one place it could reach and reach without worry of yielding. But what is surprising–even downright counterintuitive–about this tale is the fact that we are our brains. Of all the noise that nature has to offer, surely the signal most easily plucked, the information that hangs lowest, comes from ourselves!


And yet, arguably, nowhere are we more philosophical.


If philosophy is our response to informatic poverty, our inability to gather enough of the information required to decisively arbitrate between our claims, then philosophy itself becomes an important bearer of information. It is an informatic weather-vane. In this case, philosophy tells us that, despite all the information we think we have at our disposal via intuition or introspection, we actually represent a profound informatic blindspot.


Somehow, for some reason, the information we need to theoretically know ourselves is simply not available. Since the default assumption is that we are awash in information regarding ourselves, then something very peculiar must be going on. Essentially we find ourselves in the same straits vis-à-vis ourselves as our ancestors found themselves in relative to their environments prior to the institutionalization of science. We have plenty of information to theorize–and theorize we do–but not enough information, at least of the right kind, to resolve our theoretical disputes. In other words, we have only philosophy and its vexing consolations.


Thus the crucial importance of the question, Why philosophy? The fact that we endlessly philosophize intentional phenomena tells us that we quite literally lack the information required to gain theoretical knowledge of intentional or semantic phenomena. It’s important to note that we are talking about theoretical as opposed to practical knowledge here. When philosophers like Daniel Dennett, for instance, argue the predictive power and utility of intentionality, they seem to assume that intentionality as theorized possesses predictive power, when in point of fact, they are discussing predictive capacities that humans possessed long before the ancient Greeks and the birth of philosophy. The fact is, Dennett’s ‘intentional stance’ is a theoretical posit, a philosophically controversial way to theorize what it is we are doing when we predict what other systems will do. The fact is, we don’t know what it is we are doing when we predict what other systems will do. We just do it.


In other words, you have to assume the truth of Dennett’s theoretical account, before you can assert the predictive power of intentionality. But, as we have seen, we obviously lack the information required to do this–even though most assume otherwise. The question, Why philosophy? reveals that the information available to intuition and introspection is far more impoverished or distorted than it appears. If it were adequate, then first-person reflection would be sufficient for a first-person science, as certain psychologists and phenomenologists thought around the turn of the 20th century.


Why philosophy? in other words, allows us to side-step the default-assumption of sufficiency that plagues us when we lack (or fail to take into account) information regarding the absence or inadequacy of information. It reminds us that we are at sea with reference to ourselves.


And most importantly, it provides us with another series of questions to ask, questions that I think have the potential to revolutionize consciousness research and the philosophy of mind. We quite obviously lack the information we need, so the question becomes, Why?


Why do intuition and introspection provide only enough information for philosophy? Is evolution a culprit, or in other words, what kind of developmental constraints might be at work? Is neural architecture a factor, which is to say, what kind of structural constraints are involved? Given what neuroscience has discovered thus far, what kind of informatic constraints should we expect to suffer? Could ‘reflection,’ the act of bringing conscious activity (phenomenal or cognitive) into attentional awareness for the purposes of conscious deliberation, constitute a kind of ‘informatic bottleneck,’ one that systematically depletes and/or distorts the information apparently available? Could intentionality be chimerical, a kind of theoretical hallucination? What brain systems cognize this information? Is there a relationship between the kinds of cognitive mistakes we make in the absence of information in environmental cognition and our various claims regarding conscious experience? How might informatic shortfalls find themselves expressed in conscious experience?


This is the perspective taken and these are the questions asked by the Blind Brain Theory. If the information that neuroscience is patiently accumulating eventually bears out its claims, then the stronghold of the soul will have finally fallen, and philosophers will become one more people without a nation, exiles in their armchairs.



Published on August 27, 2012 11:05

August 23, 2012

The One-Eyed King: Consciousness, Reification, and the Naturalization of Heidegger

Aphorism of the Day I: Consciousness is something hooked across the top of your nose, like glasses, only as thick as the cosmos.


Aphorism of the Day II: Give me an arm long enough, and I will reach across the universe and punch myself in the back of the head. Not because I deserve it, but because I can take it.


.


“Can ontology be grounded ontologically,” Heidegger writes at the end of Being and Time, “or does it also need for this an ontic foundation, and which being must take over the function of this foundation?” (397) I have long ago lost faith in our ability to ontologically ground ontology. Why? Because the evidence for human Theoretical Incompetence has become nothing short of mountainous. As a result I have come to think that ‘ontology’ does require an ‘ontic foundation,’ namely, empirical knowledge of the brain.


The brain is the being that is being.


The German philosopher Martin Heidegger is one of the seminal figures of early Twentieth Century philosophy. His thought, either directly or in germ, informs a great many of the problems and themes that define as much as preoccupy so-called Continental philosophy, Existentialism being perhaps the most famous among them. He remains one of the most innovative and revolutionary figures in the history of Western thought.


There’s an ancient tradition among philosophers, one as venal as it is venerable, of attributing universal discursive significance to some specific conceptual default assumption. So in contemporary Continental philosophy, for instance, the new ‘It Concept’ is something called ‘correlation,’ the assumption that the limits posed by our particular capacities and contexts prevent knowledge of the in-itself (or as I like to call it, spooky knowledge-at-a-distance). Waving away the skeptical challenges posed by Hume and Wittgenstein with their magic wand, they transport the reader back to the happy days when philosophers could still reason their way to ultimate reality, and call it ‘giving the object its due’–which is to say, humility.


Heidegger’s It Concept was being, existence itself. Here’s one of the passages from his magnum opus, Being and Time, that I found so powerfully persuasive in my philosophical youth:


“The question of being thus aims at an a priori condition of the possibility not only of the sciences which investigate beings of such and such a type–and are thereby already involved in an understanding of being; but it aims also at the condition of the possibility of the ontologies which precede the ontic sciences and found them. All ontology, no matter how rich and tightly knit a system of categories it has at its disposal, remains fundamentally blind and perverts its innermost intent if it has not previously clarified the meaning of being sufficiently and grasped this clarification as its fundamental task.” (9, Stambaugh translation)


Science, like all other discourses, is fraught with numerous assumptions that drive the kinds of conclusions it provides. Explanation requires an enormous amount of implicit agreement to get off the ground, a fact that the theoretical disarray of consciousness research illustrates in lurid detail. If no one agrees on the entity to be explained, as is the case with consciousness, then all explanations of that entity will be stillborn. What Heidegger is saying here is simple: the things or entities or beings that the sciences explain all presume some prior notion of being. An object of science, after all, is quite different than an object of envy or an object of literature, even when those objects all bear the same name. Heidegger is making a kind of conceptual path-dependency argument here: If our implicit presumptions regarding being are fundamentally skewed, then all our subsequent thought will simply magnify those distortions. Thus the importance of his investigation into the meaning of being–his attempt at ‘clarification.’


The problem, Heidegger thought, one riddling all philosophy back to Aristotle, lay in a single fundamental equivocation: the inclination to think being in terms of beings, and the faulty application of what might be called ‘thing logic’ to things that are not things at all and so require a different logic or inferential scheme altogether. The problem, in other words, was the universal tendency to ‘level’ what he called the Ontological Difference, the crucial distinction between being proper and beings, between what was prior and ontological, and what was derivative and ontic. Any philosophy guilty of this equivocation he labelled the Metaphysics of Presence.


What I want to do is clarify his clarification with some obscurities of my own, speculative possibilities that, if borne out by cognitive neuroscience, will have the effect of naturalizing the Ontological Difference, explaining what it is that Heidegger was pursuing in, believe it or not, empirical terms. Heidegger, of course, would argue that this must be yet another example of putting the ontic cart in front of the ontological horse, but I’ve long since lost faith in the ability of rank speculation to ‘ground’ anything, let alone the sum of scientific knowledge. I would much rather risk crossing my ontological wires and use the derivative to explain the fundamental than risk crossing my epistemic wires and use the dubious to ‘ground’ the reliable.


When reading Heidegger it’s always important to keep in mind the implicit authority gradient that informs all his writing. He believes that ontic discourses, for all their power, are profoundly artificial. The objects or beings of science, he argues, are abstracted from the prior course of lived life. Science takes beings otherwise bound up in the implicit totality of life and interrogates them in isolation from their original contexts, transforms them into abstract moments of abstract mechanisms. Rainfall becomes the result of condensation and precipitation, as opposed to a child’s scrubbed little-league game or a farmer’s life-giving dispensation. Rainfall, as an object of scientific inquiry, is something present, an abstract part waiting to be plugged into an abstract machine. Rainfall, as an element of lived life, is something knitted into the holistic totality of our daily projects and concerns. For many readers of Heidegger this constitutes his signature contribution to philosophy, the way he overturns the traditional relationship between lived existence and abstract essence. For Heidegger the human condition always trumps human nature.


The problem with taking on the tradition, however, is that the traditional conceptual vocabulary is typically the only one you got, and certainly the only one you share with your interlocutors. Thus the notorious difficulty of Being and Time: given the problematic as he defined it, Heidegger thought he had no choice but to innovate an entirely new conceptuality to slip out from under the traditional philosophical thumb, one that avoids thinking being in terms belonging to beings, and so grasps the prior logic of lived life. Heidegger thought the problem was radical, that the Metaphysics of Presence was so pervasive as to be well-nigh inescapable, enough to motivate greater and greater degrees of poetic obscurity in his later work.


Why is it so hard to think being outside the rubric of beings? Arguably it’s simply a consequence of making things explicit in reflection: in our nonreflective engagement with the world, the concepts we employ and objects we interact with are all implicit, which is to say, we have little or no awareness of their possibilities apart from whatever project we happen to be engaged in. As soon as we pause and reflect on those possibilities, we take what was implicit, which is to say, what framed our engagements, and make it explicit, which is to say, something that we frame in reflective thought. The most egregious example of this, Heidegger thought, was the subject-object dichotomy. If you look at our relation to objects in the world in the third-person, then the subject-object relation becomes an external one, the relation between two things. Something like,


S – O


There’s the subject, and there’s the object, and the relation between the two is accidental to either. But if you look at our relation to objects in the world in the first-person, then the subject-object relation becomes an internal one, the relation between figure and field. Something like,


[       O       ]


where the brackets represent the perspective of the subject. In this case, even though they purport to model the same thing, the logic of these two perspectives is incredibly different, as different, you might say, as between programming a strategy game and a first-person shooter. Given this analogy you could say that Heidegger took programming philosophy’s first true first-person shooter as his positive project in Being and Time, and critiquing the history of strategy game programming as his critical project.


The problem with this second model, however, is that simply adding the brackets has the effect of transforming the subject into another being, albeit one that is internally related to the objects it encounters. So even if adopting a first-person perspective is arguably ‘better,’ you are still, in some sense, guilty of levelling the ontological difference, and so disfiguring the very thing you are trying to disclose. The best way to model the first person would be to simply exclude the brackets,


O


to leave the subject (in this case, you reading this-very-moment) as an ‘occluded frame.’ The problem here, aside from rendering the subject occult, is that the object remains something abstracted from the course of lived life, and so another impoverished being. As with the Spanish Inquisition, it would seem there is no escaping the Metaphysics of Presence. Philosophy makes explicit, and making explicit covers over the relationality belonging to lived life.


So in a sense, what Heidegger was trying to do was find a way of making explicit something that is no thing at all, something essentially implicit. He was literally trying to speak around language, which is presumably why the world lost him around the corner of his later career.


So what could any of this have to do with consciousness and cognitive neuroscience?


Heidegger, as it turns out, has proven to be immensely influential in consciousness studies. ‘Heideggerians’ like Hubert Dreyfus, or even ‘Heideggerish’ thinkers like Andy Clark or Alva Noe, generally argue that consciousness cannot be explained as anything ‘inner,’ as something confined to the brain, but rather must be understood (if we are to risk using the concept at all) as embodied in a world of engagements and concerns. As I alluded above, Heidegger resorts to conceptual neologisms in a bid to escape the Metaphysics of Presence. As a result, ‘consciousness’ is a term scarcely mentioned in Being and Time, and then almost exclusively to fend off the tendency to interpret Dasein using “a mode of being of beings unlike Dasein,” and so reduce it to the ontic “thingliness of consciousness” (108). The exception to this is found in the final pages of Being and Time, where Heidegger, after innumerable strident declarations, suddenly cautions against dogmatic appraisals of his preliminary interpretation of the problematic of being thus far.


“We have long known that ancient ontology deals with ‘reified concepts’ and that the danger exists of ‘reifying consciousness.’ But what does reifying mean? Where does it arise from? Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Why does this reification come to dominate again and again? How is the being of ‘consciousness’ positively structured so that reification remains inappropriate to it? Is the ‘distinction’ between ‘consciousness’ and ‘thing’ sufficient at all for a primordial unfolding of the ontological problematic?” (397)


Despite all the disagreement, there is a broad consensus in consciousness research circles that consciousness involves the integration of information from nonconscious sources: we become ‘conscious of’ things when the requisite information becomes available for integration in the conscious subsystems of the brain. Consciousness, in other words, possesses numerous informatic thresholds pertaining to any number of neural processes.


Among other things, the Blind Brain Theory (BBT) proposes that these informatic thresholds play a decisive role in the apparent structure of conscious experience. All you need do is attend to the limits of your visual field, to the way vision simply peters out into visual oblivion, in order to apprehend a visual expression of an information horizon. Since visual information enables sight, the limits of visual information cannot themselves be seen. The conscious cognition of the absence of information always requires more information: I call this the Principle of Informatic Adumbration (PIA), and as we shall see, it is absolutely crucial to understanding consciousness.
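Since PIA does heavy lifting in everything that follows, a toy sketch might help fix the idea (the sketch is entirely my own construction, not Bakker’s, and nothing rides on its details): a reporting system that can only consult the samples it actually receives can never flag what it lacks; registering an absence as an absence requires a second channel carrying additional information about the first.

```python
# Toy illustration of PIA (my construction): a 'reporter' that can only
# consult the samples it receives. Nothing in its output marks where the
# data ends -- absence leaves no trace from the inside.

def report(visible_samples):
    """Report everything available to the system, and only that."""
    return [f"saw {s}" for s in visible_samples]

def report_with_metadata(visible_samples, expected_count):
    """Only with EXTRA information (expected_count, supplied from outside
    the original channel) can the system register its blind spot AS a
    blind spot."""
    out = [f"saw {s}" for s in visible_samples]
    missing = expected_count - len(visible_samples)
    if missing > 0:
        out.append(f"missing {missing} samples")  # absence now cognized
    return out

world = ["red", "green", "blue", "yellow"]
field = world[:2]  # the system only ever receives a fragment

print(report(field))                             # no hint anything is absent
print(report_with_metadata(field, len(world)))   # absence appears only via more information
```

The point of the second function is exactly the regress Bakker gestures at: the extra channel that reports the first channel’s limits has unreported limits of its own.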


PIA essentially means that the conscious subsystems of the brain necessarily suffer a kind of ‘natural anosognosia.’ Anosognosia refers to neurological deficits that patients simply cannot recognize. With Anton-Babinski Syndrome, for instance, patients are blind as well as blind to their blindness–they literally insist they can still see. These patients, for whatever reason, cannot access or process the information required to cognize the fact of their blindness. The anosognosias found in clinical contexts literally leap out at us because of the way they rattle our intuitive sense of our own cognitive capacities. The kinds of natural anosognosias suggested by BBT, on the other hand, are both universal and congenital. A blindness that cannot be seen is a blindness that does not exist.


To say that the conscious subsystems of the brain can only process the information available for processing seems trivial, which is probably why no one in the consciousness research community has bothered to ponder its functional consequences, or how these effects might find themselves expressed in conscious experience, not to mention how they might impact our attempts to naturalistically understand consciousness. I’ve explored these consequences at length elsewhere. Here I will consider only those pertinent to Heidegger. My claim, which will no doubt strike many as preposterous, is that the logic that structures the early Heidegger’s distinctive phenomenology follows directly from the experiential consequences of PIA…


That his ontology actually possesses an ontic explanation.


The consequence of PIA most germane to understanding Heidegger is what might be called ‘asymptosis.’ Consider the margins of your visual attention once again, the way vision just ends. The limits of your visual field ‘transcend’ your visual field, as they must, given the unavailability of visual information. The boundaries of your visual field are asymptotic, what I have elsewhere called ‘Limits with One Side’ (LWOS). The edge of viewing cannot come into view without ceasing to be the edge.


PIA essentially means that conscious experience must be swaddled in varieties of asymptosis, horizons that we cannot perceive as horizons simply because the conscious subsystems of our brain necessarily lack any information regarding them. I say ‘necessarily’ because providing information pertaining to those horizons simply generates new, inaccessible horizons. The actual operational limits of conscious experience, in other words, cannot enter conscious experience without, 1) ceasing to be operational limits, and 2) establishing new operational limits.


In a sense, the conscious subsystems of the brain are continually ‘outrunning themselves.’ Conscious experience, as a result, is fundamentally asymptotic, which is to say, blind to its own informatic limits. We actually witnessed a phenomenal expression of this above, in our first-person consideration of the subject-object relation as,


[      O      ]


where the brackets, once again, represent the subject. Even though this formulation transforms the external relationality of thing and thing into the internal relationality of figure and field, the problem, from the Heideggerian perspective, lies in the way it still renders the subject a discrete being. This is essentially Heidegger’s critique of his equally famous mentor Edmund Husserl, who, despite adopting the figure-field relationality of the first-person perspective, confused the informatic poverty of his abstractions, the violence of bracketing or epoche, for essences. In Being and Time, anyway, Heidegger thought that answering the question of the meaning of being required the interpretation of actual, concrete, living being, not abstractions.


But again, as the final pages of Being and Time reveal, he wasn’t entirely clear why this should be. Now consider the consequence of PIA noted above: The actual operational limits of conscious experience cannot enter conscious experience without, 1) ceasing to be operational limits, and 2) establishing new operational limits. Given PIA, there’s a sense in which every time we try to make conscious experience explicit, conscious experience has already moved on. If the occlusion of the operational limits of conscious experience is essential to what conscious experience is, then all reflection on conscious experience involves some kind of essential loss, or ‘covering over’ as Heidegger might say.


Conscious experience is fundamentally asymptotic, finite yet queerly unbounded. Reflection on conscious experience renders it symptotic, as something bounded and informatically embedded. In fact, it has to do this. The conscious brain is not reflexive, only recursive. To cognize itself, it has to utilize the very machinery to be cognized, thus rendering itself unavailable for cognition. In a sense, all it can access are discrete snapshots, informatic residue taken up by cognitive systems primarily adapted to external natural and social environments and the beings that inhabit them.
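The recursive-but-not-reflexive point admits a toy sketch (again, my own illustration, assuming nothing about actual neural implementation): a system that snapshots its own state necessarily excludes the act of snapshotting from the snapshot, so every reflection establishes a fresh, uncaptured limit.

```python
# Toy sketch (mine, not the post's): a system that tries to make its own
# state explicit. Each snapshot is taken BY the system, so the act of
# taking it always falls outside what gets taken.

class Reflector:
    def __init__(self):
        self.history = []  # everything the system has made explicit so far

    def snapshot(self):
        """Capture the state as it stood before this very act of capturing."""
        captured = list(self.history)  # state prior to this reflection
        self.history.append(captured)  # the act itself updates the state...
        return captured                # ...so the snapshot is already stale

r = Reflector()
first = r.snapshot()   # captures nothing: no reflection had yet occurred
second = r.snapshot()  # captures the first act, but not itself
third = r.snapshot()   # captures the first two acts, but not itself

# Each snapshot contains every PRIOR act of reflection, never its own:
assert second == [first]
assert third == [first, second]
```

However many times `snapshot` is called, the latest act of reflection only ever shows up in the next snapshot, never its own–a crude analogue of reflection always arriving too late.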


The Blind Brain Theory actually possesses the resources to reinterpret a number of the early Heidegger’s central insights, thrownness and ecstatic temporality among them. The focus here, however, is the Ontological Difference, and the kind of hermeneutic logic Heidegger developed in an attempt to mind the distinction between being and beings, and so avoid the theoretical sin of reification.


So to return to Heidegger’s own questions:


1) What does reifying mean? Reifying refers to a kind of systematic informatic distortion engendered by reflection on conscious experience.


2) Where does it arise from? Reification is a consequence of the Principle of Informatic Adumbration, the fact that the conscious cognition of the absence of information always requires more information. Because of PIA, conscious experience is asymptotic, something not informatically embedded within conscious experience. Reflection, or the act of bringing conscious experience into attentional awareness for deliberative cognition, cannot but informatically embed, and therefore ‘reify,’ conscious experience.


3) Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Because conceptualizing being requires reflection, and reflection necessitates symptosis.


4) Why does this reification come to dominate again and again? Because of PIA, once again. Absent any information regarding the informatic distortion pertaining to all reflection on conscious experience, symptosis must remain invisible, and that reflection must seem sufficient.


5) How is the being of ‘consciousness’ positively structured so that reification remains inappropriate to it? Short of actually empirically determining the ‘being of consciousness’–which is to say, solving the Hard Problem–this question is impossible to answer. From the standpoint of BBT, the consciousness that Heidegger refers to here, that he interprets under the rubric of Dasein, is a form of Error Consciousness, albeit one sensitive to PIA and the asymptotic structure that follows. Reification is ‘inappropriate’ to the degree that it plays into the illusion of symptotic sufficiency.


6) Is the ‘distinction’ between ‘consciousness’ and ‘thing’ sufficient at all for a primordial unfolding of the ontological problematic? Heidegger, of course, would come to believe the answer to this was no, realizing the way drawing being into attentional awareness for the purposes of deliberative cognition necessarily concealed its apparent asymptotic structure. From the standpoint of BBT, the Ontological Difference is an important clue as to the kinds of profound and systematic distortions that afflict our attempts to cognize consciousness.


Heidegger’s hope in Being and Time was that the development of an ‘asymptotic logic’ would enable him to approach the question of the meaning of being without succumbing to the Metaphysics of Presence, the equivocation of being and beings. Throughout Being and Time you find statements of the form, ‘As x, Dasein is…’ where x is something that philosophers typically regard as either ontologically distinct (time, world) or metonymically subordinate (care, anxiety, resoluteness) to the subject as traditionally conceived. With the former categories, the norm is to see the subject as something contained within time and world. Even in traditional (as opposed to Hegelian) idealism, the transcendental subject remains symptotic, a being, albeit one that creates time and world to empirically dwell within. With the latter categories, the norm is to see the subject as the container, as something containing the capacity for care and anxiety and so on. These things are parts of the subject, and nothing more.


By embracing asymptosis, Heidegger discovered a radically new inferential schema, one that allows the subject to become those containing and contained things. Lacking boundaries, these containing and contained things could no longer contain or be contained, and the tidy hierarchies of the tradition dissolved into the existential vicissitudes of Dasein. Regarding the ‘containers,’ Heidegger performs a kind of ontological equivocation, so that Dasein, unlike the traditional subject, becomes time, becomes the world. Regarding the ‘contained,’ he performs a kind of metonymic inflation, so that Dasein, unlike the traditional subject, becomes care, becomes anxiety. You could say that ontological equivocation (As temporalization, Dasein is…) and metonymic inflation (As care, Dasein is…) are the pillars of his interpretative method, what makes his philosophical implicature so radical. In one fell swoop, it seemed, Heidegger had sidestepped centuries of philosophical dilemma. By equivocating the world and Dasein, he was able to bypass the subject-object dichotomy, and thus make the epistemological dilemma look like a quaint, historical relic. The discrete, accidental relation between discrete subjects and objects became an encompassing, constitutive relation, one that Dasein is.


That so many found this defection from traditional philosophy so convincing despite its radicality reflects the simple fact that it follows from asymptosis, the way the modes of prereflective conscious experience express PIA. Consciousness, as we experience it, is asymptotic, as it has to be given the Principle of Informatic Adumbration. The fact that the conscious subsystems of the brain cannot cognize inaccessible information is trivial. The corollary of this, our corresponding inability to cognize the limits of cognition, is where the profundities begin to pile up. Heidegger had stumbled upon a very real, very powerful intuition–but from the phenomenological side of the coin. Short of some inkling of the Blind Brain Theory, he had no way of knowing that he was working through a logic that expressed what are likely very real structural facts about our neurophysiology–that, far from grounding beings in being, he was describing the phenomenological consequence of a structural feature of the brain…


The being that is being.



Published on August 23, 2012 12:52

August 16, 2012

Beware the Neuro-Inquisition!

Aphorism of the Day: Any day that references TJ Hooker is a good day.


.


Charlie Rose is rebroadcasting its series on contemporary neuroscience, with Eric Kandel moderating discussions with a number of luminaries from the field. WGBH ran the episode on consciousness last night, which is available on the web here for those of you who missed it. Great fun, and one of the best introductions to the field that I can imagine. Stanislas Dehaene is the man.


One of my pet peeves with the discussion, even back when it originally aired last year, is the continual use of the ‘tip of the iceberg’ metaphor for consciousness. I’m sympathetic to the idea that consciousness only accesses a fraction of the brain’s overall information load, certainly, but the metaphor perpetuates what might be called the ‘Pinnacle Conceit,’ the notion that all these non-conscious processes somehow culminate in consciousness. This is the problem I have with Freud’s ‘preconscious,’ or even Dennett’s ‘fame in the brain’ metaphor, the way these characterizations lend themselves to the idea that this… what you are experiencing now, is a kind of crowning achievement, rather than a loose collection of cogs in a far, far vaster machine.


The fact is, consciousness is more like a grave than a summit, something buried in the most complicated machinery known. It evolved to service the greater organism, not vice versa. The superiority of the ‘cog in the machine’ metaphor lies in the fact that the conscious brain is neurofunctionally embedded in the greater brain, something that accesses information from nonconscious neural processors and provides information to other nonconscious neural processors. This allows us to see what I call the ‘Positioning Problem’ in “The Last Magic Show“: the way the neurofunctional context of the information that enters conscious experience in no way exists for conscious experience (not even as an absence), stranding conscious cognition with fragmentary episodes it can only confuse for the whole story–what we call ‘life.’


Imagine an ‘orthogonal’ cable TV channel, one that continually leaps from channel to channel without you knowing, so that you see a continuous show made up of episodic fragments of other shows–say, William Shatner shooting a man who becomes a woman applying lipstick just as the Death Star explodes–without having any knowledge whatsoever of TJ Hooker or Cover Girl or Star Wars. Since this is the show you have always watched, it necessarily forms the very baseline for what counts as a ‘coherent narrative’–which is to say, something meaningful. Then the neuroscientific channel surfers come along and begin talking about narratives that run at right angles to your own, narratives that are far more coherent intellectually, but make utter hash of the ‘baseline narrative’ of your orthogonal viewing.
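For what it’s worth, the thought experiment can be parodied in a few lines of code (entirely my own toy; the show names are just the post’s examples): splice scenes from several programs into one unlabeled stream, and nothing in the stream itself betrays the splice.

```python
# Toy version (my own) of the 'orthogonal channel': scenes from several
# shows are spliced into a single stream. The splice points leave no
# marks, so a viewer with only the stream has no basis for recovering
# the source programs -- the stream simply IS the baseline narrative.

import random

shows = {
    "TJ Hooker": ["car chase", "Shatner shoots a man"],
    "Cover Girl": ["a woman applies lipstick"],
    "Star Wars": ["the Death Star explodes"],
}

def orthogonal_broadcast(shows, seed=0):
    """Interleave scenes from different shows into one unlabeled stream."""
    rng = random.Random(seed)
    scenes = [scene for reel in shows.values() for scene in reel]
    rng.shuffle(scenes)
    return scenes  # no channel labels survive the splice

stream = orthogonal_broadcast(shows)
# Every scene is present, but the stream carries zero information about
# which show any given scene came from:
assert sorted(stream) == sorted(s for reel in shows.values() for s in reel)
```

The ‘neuroscientific channel surfers’ are then whoever has access to the `shows` dictionary rather than just `stream`: narratives running at right angles to the only broadcast the viewer has ever seen.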


This illustrates the Positioning Problem in a nutshell. Given that the neurofunctional context of any conscious experience is utterly occluded from conscious experience, we have no way of knowing what role that conscious experience actually plays. For all we know, the channels could be crossed, and things like the ‘feeling of willing,’ for example, may actually follow our actions rather than triggering them. For all we know, the ‘feeling of certainty’ we enjoy may have nothing to do with our reasoning whatsoever, but rather be the result of some unhappy neural birth defect. For us, Bill Shatner shooting a man seems to necessarily cue a woman applying lipstick simply because the possibility of other channels, programs running at right angles to conscious experience, does not belong to our eclectic broadcast.


This is basically what I’m driving at in my brief ‘bestiary’ of possible consciousnesses, and why I’m so pessimistic about what neuroscience will make of the human soul. We presently find ourselves on the rack of knowledge, and we have no reason to think our Inquisitors will be kind. Sure, they seem warm and friendly enough, and even telegenic, as that episode of Charlie Rose reveals. But they are pursuing questions whose answers care nothing for our joints or their range of motion. Nature is their primary authority, and no Pope could be more indifferent to our needs and concerns. This particular Church of Rome, I fear, is about to tear us apart.



Published on August 16, 2012 09:27

August 14, 2012

Don’t Forget Your Pillow…

Because Curiosity has no passenger seats. Ruby and I checked this out this morning, and on a screen that’s more than big enough to make it way cool. I couldn’t think of a better way to teach your kid about planets: bring them there.


So I watched Limitless for the second time last night and was mightily impressed by the trippy ‘frame games’ it plays. It has a number of mise en abyme effects going on, mostly decorative, but very pretty nonetheless. It struck me, yet again, the way place can be plugged into place, the way the ‘view from Mars’ can be plugged into your den or office cube or what have you. Perspective is portable, which is what makes it so powerful. And this, if you think about it, has to be its signature structural feature, the way it is, as Heidegger would say, something continually thrown, constitutively blind to its functional origins, and so as easy to toss across the room as a postcard.


Poof! You’re on Mars. You’re not, but you are. What does it matter how long the lines of communication are?


This, once again, shows just how out-and-out antithetical the first-person view is to natural explanation: the very information that is the grist of scientific understanding has to be absent as a condition of its possibility. You have to be nowhere to be anywhere, as hidden as a photographer. Like someone suffering transportational narcolepsy, we simply pop from place to place, frame to frame, the ultimate informatic end-user, thinking we see all there is to see.


Morra has nothing on Kellhus.



Published on August 14, 2012 06:56

August 9, 2012

Error Consciousness (Part One): The Smell of Experience

Aphorism of the Day: Are you giving me a ‘just so’ story here? Saying that introspection, despite all the structural and developmental constraints it faces, gets exactly the information it needs to cognize consciousness as it is? Even without the growing mountain of contrary empirical data, this strikes me as implausible. Or are you giving me a ‘just enough’ story? Saying that introspection gets just enough of the information it needs to cognize, more or less, what consciousness is? I have a ‘not enough’ story, and an extreme one. I think we are all but blind, that introspection is nothing but a keyhole glimpse that only seems as wide as the sky because it lacks any information regarding the lock and door. I’m saying that we attribute subjectivity to ourselves as well as to others, not because we actually have subjectivity, but because it’s the best we can manage given the fragmentary information we got.


.


Perplexities of Consciousness is unlike any philosophical text on consciousness you are apt to read, probably because Eric Schwitzgebel is unlike any philosopher of mind you are apt to encounter. In addition to teaching philosophy at UC Riverside, he’s both an avid SF fan and a long-time gamer. He also runs The Splintered Mind, a blog devoted to issues in consciousness studies, cognitive psychology, and experimental ethics.


Did I mention he was also a skeptic?


Perplexities of Consciousness is pretty much unique in its stubborn refusal to provide any positive account of consciousness. Schwitzgebel’s goal, rather, is to turn an entire philosophical tradition on its head: the notion that our conscious experience is the one thing we simply can’t be wrong about. He advances what might be called an Introspective Incompetence Thesis, the claim that, contrary to appearances, introspection is anything but the model of cognitive reliability it so often seems:


“Why did the scientific study of the mind begin with the study of conscious experience? And why, despite that early start, have we made so little progress? The two questions can be answered together if we are victims of an epistemic illusion–if, though the stream of experience seems readily available, though it seems like low-hanging fruit for first science, in fact we are much better equipped to learn about the outside world.” (159)


What Schwitzgebel essentially shows is that when it comes to reports of inner experience, consistent consensus is really, really hard to find. Consider the dated assumption that we dream in black and white: Schwitzgebel shows–quite convincingly, I think–that this particular conceit (once held by specialists and nonspecialists alike) lasted only as long as the cultural predominance of black and white movies. As preposterous as it sounds, there’s a good chance that questions even as rudimentary as this lie beyond our ability to decisively answer.


In lieu of reviewing Perplexities in any traditional sense, I would like to propose a positive account of Schwitzgebel’s negative thesis, an explanation of why consciousness “seems readily available,” at least in its details, even as it remains, in many ways, anything but available. Understanding this pseudo-availability provides a genuinely novel way of understanding the cognitive difficulties consciousness poses more generally. And once we have these difficulties in view, we can finally get down to the business of circumventing them. The fact is I actually think Schwitzgebel is telling a much larger story than he realizes, one that would likely strain even his estimable powers of incredulity.


Perplexities is anything but grandiose. The banality of the examples Schwitzgebel uses–whether we dream in colour, what we sense (aurally or visually) with our eyes closed, how we intuit flatness, whether we generally feel our feet in our shoes–belies, I think, the care he invested in selecting them. These are all questions that most lay readers would think easy to answer, perhaps eminently so. This presumption of ‘ready availability’ has the rhetorical effect of dramatically accentuating his conclusions. You would think we would know whether we dream in colour, immediately and effortlessly.


It turns out we only think we know.


The problem is anything but a new one. Schwitzgebel spends quite some time discussing attempts by various 19th Century introspective psychologists to train their subjects, particularly those of Edward B. Titchener, who wrote a 1600-page laboratory manual on introspective experimentation. Perhaps inner experience does require trained observers to become scientifically tractable–perhaps its truth needs a trained eye to be discerned. Or perhaps, as seems far more likely, psychologists like Titchener, faced with a fundamentally recalcitrant set of phenomena, required consistency for the sake of institutional credibility.


Coming out of the Continental philosophical tradition and its general insistence on the priority of lived experience, I quite literally saw philosophy in small in this narrative. I have suffered, or enjoyed, a number of profound conversions over the course of my philosophical life–from Dennett to Heidegger to Derrida to Wittgenstein–and in each case I have been mightily impressed by how well each of these outlooks ‘captured’ this or that manifold of experience. In fact, it was the degree to which I had identified with each of these perspectives, the fact that I could be so convinced at each and every turn, that led me to my present skeptical naturalism. In each case I was being trained, not simply to think in a certain way, but to perceive. Heidegger, in particular, revolutionized the way I ‘lived life.’ For a span of years, I was a hard-drinking, head-banging Dasein, prone to get all ontological with the ladies.


In a very real sense, Schwitzgebel’s historical account of early introspective psychology offers a kind of microcosm of philosophical speculation on the soul, mind–or whatever term we happen to find fashionable. Short of some kind of training or indoctrination, everyone seems to see something different. Our ‘observations’ are not simply ‘theory-laden,’ in many cases they seem to be out-and-out theory driven–and the question of how to sort the introspection from the conceptualization seems all but impossible to answer. I’ll return to this point later. For the moment I simply want to offer it as more evidence of the problem that Schwitzgebel notes time and again:


Problem One (P1): Conscious experience seems to display a comparatively high degree of ‘observational plasticity.’


As the question of dreaming in colour dramatically illustrates, conscious experience, in some respects at least, has a tendency to ‘meet us halfway,’ to reliably fit our idiosyncratic preconceptions. Now you might object that this is simply the cost of doing theoretical business more generally, that even in the sciences theorization involves the gaming of ambiguities this way or that. Consider cosmology. Theories are foisted on existing data, and then sorted according to their adequacy to the new data that trickles in. The problem with theories of consciousness, however, is that so little–if anything at all–ever seems to get sorted.


What distinguishes science from philosophy is the way it first isolates, then integrates the information required to winnow down the number of available theories. Like any other scientific enterprise, this is precisely what early introspective psychology attempted to do: isolate the requisite information. Titchener’s training manual, you could say, is simply an attempt to retrieve pertinent experimental information from the noise that seemed to plague his results otherwise. And yet, here we are, more than a century afterward, stymied by the very questions he and others raised so long ago. Despite its 1600 pages, his manual simply did not work.


As Kriegel notes in his review of Perplexities (linked above), it could be the case that psychology simply gave up too soon. Maybe training and patience are required. Perhaps introspection, though far more informatically impoverished than vision, is more akin to olfaction, a low resolution modality demanding much, much more time to accumulate the information needed for reliable cognition. Perhaps introspective psychology needed to keep sniffing. Either way, it serves to illustrate a second problem that regularly surfaces through Perplexities:


Problem Two (P2): Conscious experience seems to exhibit a comparatively high degree of ‘informatic closure.’


Introspection, you could say, confuses what is actually an ‘inner nose’ with an ‘inner eye,’ which is to say, an impoverished sensory modality with a rich one. ‘Intro-olfaction,’ as it should be called, does access information, only in a way that requires much more training and patience to see results. So even if conscious experience isn’t informatically closed in the long term, it remains so in the short term, particularly when it comes to the information required to successfully arbitrate incompatible claims.


Given these two problems, the dilemma becomes quite clear. A high degree of observational plasticity means a large number of ‘theories,’ naive or philosophical. If you have a theory of consciousness to sell (like I do), you quickly realize that the greatest obstacle you face is the fact that everybody and her uncle also has a theory to sell. A high degree of informatic closure, on the other hand, means that the information required to decisively arbitrate between these countless theories will be hard to come by.


You could say conscious experience is a kind of perspectival trap, one where our cognitive guesses become ‘perceptual realities’ that we quite simply cannot sniff our way around. This characterization has the effect of placing a premium on any information we can get our hands on. And this is precisely what Perplexities of Consciousness does: provide the reader with new historical and empirical facts regarding conscious experience. Though he adheres to the traditional semantic register, Schwitzgebel is furnishing information regarding the availability of information to conscious cognition. In fact, he probes the question of this availability from both sides, showing us how, as in the case of ‘human echolocation,’ we seem to possess more information than we think we do, and how, as in the case of recollecting dreams, we seem to have far less.


And this is what makes the book invaluable. Something smells fishy about our theoretical approaches to consciousness, and I think the primary virtue of Perplexities is the way it points our noses in the right direction: the question of what might be called introspective anosognosia. This, certainly, has to be the cornerstone of all the perplexities that Schwitzgebel considers: not the fact that our introspective reports are so woefully unreliable, but that we so reliably think otherwise. As he writes:


“Why, then, do people tend to be so confident in their introspective judgments, especially when queried in a casual and trusting way? Here is my guess: Because no one ever scolds us for getting it wrong about our experience and we never see decisive evidence of our error, we become cavalier. This lack of corrective feedback encourages a hypertrophy of confidence.” (130)


I don’t so much disagree with this diagnosis as I think it incomplete. One might ask, for instance, why we require ‘social scolding’ to ‘see decisive evidence of our error’? Why can’t we just see it on our own? The easy answer is that, short of different perspectives, the requisite information is simply not available to us. The answer, in other words, is that we have only a single perspective on our conscious experience.


The Invisibility of Ignorance–the cognitive phenomenon Daniel Kahneman (rather cumbersomely) calls What-You-See-Is-All-There-Is, or WYSIATI–is something I’ve spilled many pixels about over many years now. The idea, quite simply, is that because you don’t know what you don’t know, you tend to think you know all that you need to know:


“An essential design feature of the associative machine is that it represents only activated ideas. Information that is not retrieved (even unconsciously) from memory might as well not exist. [Our automatic cognitive system] excels at constructing the best possible story that incorporates ideas currently activated, but it does not (cannot) allow for information it does not have.” (Thinking Fast and Slow, 85)


As Kahneman shows, this leads to myriad errors in reasoning, including our peculiar tendency to be more certain about our interpretations the less information we have available. But where the instances of WYSIATI studied by Kahneman involve variable information deficits, environmental ignorances or mnemonic failures that we can address by simply seeking out more information (typically by exploring our environments), the information deficits pertaining to conscious experience, as we have seen, are more or less fixed.


Our unwarranted confidence in our introspective judgments, in other words, turns on P2, informatic closure. When it comes to environmental cognition, there is always ‘more than what meets the eye’–as the truism goes. Take a step sideways, consult others standing elsewhere, turn to instrumentation: we literally have countless ways of extracting more information from our natural and social environments. When it comes to introspective cognition, on the other hand, there is only what meets the eye, and precious little else.


This offers a straightforward way to theorize the apparently dismal phenomenological portrait that Schwitzgebel sketches: When it comes to introspective cognition, there is only what meets the eye, and it is insufficient for cognition. Not only do we lack the information required to cognize conscious experience, we lack the information required to cognize this lack, and so are readily fooled into thinking we have cognized conscious experience. We are the victims of a kind of natural introspective anosognosia.


This, for me, constitutes one of the more glaring oversights you find in contemporary philosophy of mind and consciousness research. Conscious experience, whatever it turns out to be, is the product of some subsystem of the greater brain. The question of introspective competence is the question of how effectively that subsystem, the ‘conscious brain,’ accesses and uses information gleaned from the greater brain. When it comes to reflection on conscious experience, what information does the brain make available for what cognitive systems?


What makes this question so important is what I consider the grand inferential upshot of Schwitzgebel’s Introspective Incompetence argument: the jarring but almost undeniable fact that in certain profound respects we simply do not possess the consciousness we think we do. This is another consequence of observational plasticity and informatic closure. If we assume that consciousness is a natural phenomenon that does not vary between humans, then the wild variety of interpretations of conscious experience, both local and global, means that most everyone has to be wrong about consciousness–at least in some respect.


Let’s coin a category for all these incompatible variants called ‘Error Consciousness.’ Error Consciousness, as defined here, is simply the consciousness we think we have as opposed to the consciousness we do have–and everyone, I think it’s safe to say, is in the grip of some version of it. The combination of informatic closure and observational plasticity, in fact, would seem to make it all but impossible to overcome. Our introspective inability to access the information required to distinguish what we discover from what we devise means that theorists are almost certainly trying to explain a consciousness that simply does not exist. Like blind gurus groping an elephant, we confuse the trunk for a serpent, the leg for a tree, and the tail for a foul-smelling rope. Each of us thinks our determinations are obvious, but none of us can explain them because they don’t exist.


This is just to say that Error Consciousness provides a compelling way to understand the difficulty of the so-called Hard Problem of consciousness. If we make Error Consciousness our primary explanandum, we will never find a satisfactory neuroscientific explanation, simply because there is no such thing.


And even more importantly, it allows us to ask what kinds of errors we might be prone to make.


Consider Schwitzgebel’s conclusion that “our judgments about the world tend to drive our judgments about our experience. Properly so, since the former are the more secure” (137). This certainly makes evolutionary sense. As a very recent evolutionary development, human consciousness would have inherited the brain’s existing cognitive resources, namely, its ancient and powerful environmentally oriented systems. For me, this raises a question that has the potential to transform consciousness research: What if the kinds of errors we make environmentally are, in some respects, the same errors, perceptual or cognitive, that we make introspectively?


Consider, for instance, the way we sense aggregates as individuals in the absence of information. Astronomers once thought quasars were singular objects, rather than a developmental phase of galaxies possessing supermassive black holes. The ‘heavens’ in general are a good example of how the accumulation of information led us to progressively differentiate the celestial sphere that Aristotle thought he observed. Short of information regarding distinct constituents, we have a pronounced tendency to perceive singular things, a fact that finds its barest psychophysical expression in the phenomenon of flicker fusion. For whatever reason, the perceptual and cognitive default is to clump things together for want of distinctions.


Could something so perplexing as the ‘unity of consciousness’ simply be an introspective version of this? Could consciousness, in other words, be something like a cartoon, a low resolution artifact of constraints on interoceptive informatic availability?


A kind of flicker fusion writ large?


If so, it foregrounds what could be a pervasive and systematic fault in ongoing attempts to puzzle through the riddles of conscious experience. The orthodox approach to the question of conscious unity asks, What could unify consciousness? It conceptualizes conscious unity as a kind of accomplishment, one requiring neural devices to be explained. But if the intuition of conscious unity relies on the same cognitive systems that regularly confuse aggregates for individuals in the absence of information, and if the ‘introspective faculty’ responsible for that intuition is, as Schwitzgebel’s arguments imply, ‘low resolution,’ then why should we expect to intuit a more differentiated consciousness, let alone one approaching the boggling complexity of the brain that makes it possible? In other words, why not expect that we are simply getting consciousness wrong?


We seem to be using the wrong cognitive equipment after all.


Pressing Schwitzgebel’s findings in this direction, we can readily see the truly radical upshot of Perplexities of Consciousness: the way it systematically undermines the presumption that introspection is a form of ‘vision,’ and so the notion that consciousness is ‘something visible.’ The analogy Kriegel offers to smell in his review is quite instructive here. With olfaction, we are quite comfortable moving between the object of perception and the medium of perception. We smell odours as readily as odorous things. With vision, on the other hand, we typically see things, not the light they reflect. This is probably as much a function of resolution as anything: Since olfaction is so low resolution, we often find ourselves smelling just the smell. Analogizing introspection to olfaction allows us to see consciousness as a special kind of stink rather than a special kind of thing. The visual metaphor, you could say, delivers conscious experience to the ‘object machinery’ of our cognitive system, and has the consequence of rendering consciousness substantival, transforming it into something that we somehow see rather than something that we somehow are. The olfactory metaphor, on the other hand, allows us to sidestep this processing error, and to cognize conscious experience off the traditional inferential grid…


And so conceive consciousness in terms that make hay of the cardinal distinction between perception and cognition. We think the unity of consciousness is something to be explained because we think it is something that is achieved prior to our attentional awareness of it rather than a product of that attentional awareness. Perplexities shows that we have good reason to doubt this happy assumption: if introspection, like vision, simply reveals something independently existing, Schwitzgebel asks, then why the lack of consensus, the endemic confusion, the perpetual second-guessing? Reflection on consciousness is an attenuation of consciousness–as we might expect, given that it’s simply another moment within consciousness. Introspection is an informatic input, a way to deliver neural information to deliberative cognition. If that information is as skewed and as impoverished as Perplexities implies, then we should expect that our concepts will do the perceptual talking. And if our deliberative systems are primarily geared to environmental cognition, we should expect to make the same kinds of mistakes we make in the absence of environmental information.


The conscious unity we think we ‘perceive,’ on this account, is simply the way conscious experience ‘smells’ in attentional awareness. It is simply what happens when inadequate interoceptive neural information is channelled through cognitive systems adapted to managing environmental information. In a strange sense, it could be an illusion no more profound than thinking you see Mary, Mother of God, in a water stain. What makes it seem so profound is that you happen to be that water stain: its false unity becomes your fundamental unity. To make matters worse, you have no way of seeing it any other way–no way of accessing different interoceptive information–simply because you are, quite literally, hardwired to yourself.


Observational plasticity makes it as apparently real as could be. Informatic closure blocks the possibility of seeing around or seeing through the illusion. An aggregate becomes an individual, and you have no way of intuiting things otherwise. Enter the intuition of unity, a possible cornerstone of Error Consciousness.


Schwitzgebel would likely have many problems with the positive account I offer here (for a more complete, and far more baroque version, see here), if only because it changes the rules of engagement so drastically. Unlike me, Schwitzgebel is a careful thinker, which is one of the reasons I found Perplexities such an exciting read. It’s not often that one finds a book so meticulously dedicated to problematizing consciousness research supporting, at almost every point, your own theory of consciousness.


To reiterate the question: Why should interoceptive information privation not have cognitive consequences similar to those of environmental information privation? This question, when you ponder it, has myriad and far-reaching consequences for consciousness research–particularly in the wake of studies like Schwitzgebel’s. Why? Because once you pull the interoceptive rug out from underneath speculation on consciousness, once you understand that, as evolutionary thrift would suggest, we have no magical ‘inner faculty’ aside from our ancient environmental cognitive systems, then ‘error’ (understood in some exotic sense) has to become, to some extent at least, the very tissue of who we are.


And as bizarre as it sounds, it makes more than a little empirical sense. In natural terms, we have an information processing system–the human brain–that, after hundreds of millions of years of adapting to track the complexities of its natural and social environments, only recently began adapting to track its own complexities. Since our third-person tracking has such an enormous evolutionary pedigree, let’s take it as our cognitive baseline for what would count as ‘empirically accurate’ first-person tracking. In other words, let’s say that our first-person tracking is empirically accurate the degree to which its model is compatible with the brain revealed by third-person tracking. The whole problem, of course, is that this model seems to be thoroughly incompatible with what we know of the brain. Our first-person tracking, in other words, appears to be wildly inaccurate, at least compared to our third-person tracking.


And yet, isn’t this what we should expect? The evolutionary youth of this first-person tracking means that it will likely be an opportunistic assemblage of crude capacities–anything but refined. The sheer complexity of the brain means this first-person tracking system will be woefully overmatched, and so forced to make any number of informatic compromises. And perhaps most importantly, the identity of this first-person tracking system with the brain it tracks means it will be held captive to the information it receives, that it will, in other words, have no way of escaping the inevitable perspectival illusions it will suffer.


Given these developmental and structural constraints, the instances of Introspective Incompetence described in Perplexities are precisely the kinds of problems and peculiarities we should expect (what, in fact, I did expect before reading the book). This includes our introspective anosognosia, our tendency to think our introspective judgments are incorrigible: the insufficiency of the information tracked must itself be tracked to be addressed by our first-person tracking system. Evolution flies coach, unfortunately. Not only should we expect to suffer errors in many of our judgments regarding conscious experience, we should, I think, expect Error Consciousness, the systematic misapprehension of what we are.


Of course, one of the things that makes the notion of Error Consciousness so ‘crazy,’ as Schwitzgebel would literally call it, is the difficulty of making sense of what it means to be an illusion. But this particular berry belongs to a different goose.



Published on August 09, 2012 14:22

August 8, 2012

Chinese Mereology

Aphorism of the Day:  So much of the social alchemy of give and take lies in the difference between feeling what you feel and caring what you feel. Wincing and laughing is something we all too easily do.


.


I just signed contracts for Chinese translations of The Prince of Nothing - which is pretty exciting given the explosive growth in speculative fiction over there, as well as the prospect of potentially reaching a truly dissenting audience of readers.


I also caught a piece on the tube regarding empathy research and psychopathy, and the discovery that psychopaths do seem to have the capacity to experience empathy–they just don’t seem to care. This complicates the ‘bad guy’ picture considerably. The capacity to empathize is variable, and the capacity to care about empathizing seems to be variable as well. So you could have people who care a lot about what little they feel of your pain, or care not at all even though they relive your suffering in detail.


Anyone know anything more about this? The reason I find this so interesting, aside from the obvious reasons, is that I’ve been thinking about pain asymbolia a lot lately, wondering what other kind of ‘asymbolias’ are possible. It demonstrates, quite dramatically, the composite nature of experience, and in a way that is entirely consonant with the Blind Brain Theory. I’ll be posting more on this soon.


And just an open question. As a sports fan I’ve been watching as much of the Olympics as I can, and I find myself wondering how many Olympics we have before things like post-natal gene-doping or pre-natal genetic design take it over. Is it my imagination, or is the gap between ‘developed nations’ and the rest of the world increasing?



Published on August 08, 2012 11:27

July 26, 2012

The Death of All Authors, Hairless or Hirsute

Aphorism of the Day: In every human ear you will find a little bone that translates, ‘We are nothing special,’ into ‘You fucking loser.’ Thus the will to affirm everything, or at the very least, maintain polite silence. Nothing like an angry loser to ruin your day.


.


The Problem of Meaning is just one of those problems that refuses to stick.


Many assume it has to be self-refuting. “There must be somebody there,” A. A. Milne wrote in Winnie-the-Pooh, “because somebody must have said, ‘Nobody.’” Others see it as a kind of social reductio, “a natural consequence,” as Cornel West puts it, “of a culture (or civilization) ruled and regulated by categories that mask manipulation, mastery and domination of peoples and natures.” Meaning, the mass, assumptive consensus seems to be, has to exist somehow. The fact that scientific reason seems to break it down merely indicates, as Adorno would argue, the limitations of scientific reason.


Fair enough. The chips of conviction are made of lead, after all, and the arms of doubt are easily exhausted. Why not place them somewhere safe, comfortable, make-believe?


For more than ten years, now, I’ve been ranting about the way the sciences, after spending centuries purging intentionality from the world, have finally besieged the walls of the human. I’ve argued that epic fantasy is a cultural symptom of that siege. I’ve also argued that the ‘Humanities,’ as we know them, are about to go extinct, swept away or radically reconfigured by the findings of cognitive neuroscience and other disciplines. I’ve also argued that this could very well be the beginning of the Semantic Apocalypse, the point at which meaning and cognition, experience and knowledge, irrevocably part ways, leading to the process of profound cultural bifurcation that already seems well under way, one where power, in the pursuit of power, treats us as mechanisms behind the blind of a culture bent on feeding our hunger for false autonomy and meaning.


One where the Cognitive Difference becomes the very spine of society, dividing those who hope and serve from those who know and command. The world Neuropath.


To me, it just seems obvious that, as Nietzsche observed, “man has been rolling from the centre toward X” since Copernicus. And because I think it’s important to have some sense of where we’re going, since I loathe stumbling backward anywhere, least of all the future, I happen to think this X-we’re-rolling-into is pretty much the most significant question humanity has ever faced–period.


For all we know, it could be a drain.


Which is why through all these years I’ve been baffled, even dismayed, by the frivolity of contemporary academic fashion and its stubborn refusal to consider the Problem of Meaning now, in an age when machines are translating thought into images, anticipating our choices before any consciousness of making them, or even worse, making those choices for us. For me, the cultural significance of contemporary science just is the Problem of Meaning, the problem of the human. How could it not be, when moral and existential autonomy has been the essence of what humanity has meant since the Old Enlightenment?


The primary dividend of science, thus far, has been power over our environments, the ability to ‘hack’ the mechanisms about us, to intervene and instrumentalize processes with ever increasing efficiency. The human brain, given its forbidding complexity, remained a black box, something that transcended our knowledge and so seemed transcendental. It should come as no surprise, given this new power and our primeval conceits, that the intuition of autonomy would come to frame the new image of the human for Old Enlightenment thinkers. Man, who had been the Meaning Receiver prior to their murder of God, became the Meaning Transmitter.


Before the Old Enlightenment, all creation was a text, something authored. After the Old Enlightenment, ‘creation’ became ‘cosmos,’ something indifferent to the fears and aspirations of the real authors, humanity. For all the cultural tumult and upheaval it occasioned, the Old Enlightenment delivered–in addition to the technological dividends of science–a new and profoundly flattering image of the human: Authorial Man.


And now, on the cusp of the New Enlightenment, it seems there may be no such thing as ‘authors’–at all. The problem is that, far from transcending our environments, we are simply their most complicated pocket, something that differs, not in kind, but in degree. The problem is that we are natural–just one more mechanism that can be hacked and instrumentalized. The problem is that the Old Enlightenment image of semantic autonomy seems to be yet another self-congratulatory myth. And this, given our inherited conceptualities, is a disaster quite literally beyond our comprehension–as it has to be, once you appreciate the degree to which our comprehension turns on those very conceptualities.


So what do you find in academia? The same fractured in-group status scrum you find everywhere else in society of course, one where the importance attributed to a given problematic turns far more on who is fretting than on what is being fretted about. For the bulk of the humanities, the Semantic Apocalypse is little more than a preposterous rumour. People continue mining their niche (the one that spared them the horror of having no dissertation topic), parsing esoteric definitions, exchanging hothouse rationalizations, elaborating discourses that will be little more than intellectual curiosities in a generation’s time. A neo-Scholasticism rendered irrelevant by a neo-Enlightenment.


Small surprise, given the bureaucratic immensity of the institution. The only noteworthy thing about this generation of humanities scholars is the sheer extent of their conceit and hypocrisy, the stupendous way they have confused dogmatic orthodoxy for ‘criticality,’ and militant intellectual conservatism for ‘radicalism.’


Ink is still spilled on the subject, to be sure. In Nihil Unbound, for instance, Ray Brassier argues that “Philosophy should be more than a sop to the pathetic twinge of human self-esteem. Nihilism is not an existential quandary but a speculative opportunity. Thinking has interests that do not correspond with those of living; indeed, they can and have been pitted against the latter.” But when you consider the political and technological immediacy of the problem, the way neuroscience and its pageant of nihilistic implications now permeates markets, classrooms, courts, and elections as well as mainstream headlines, you would think that the academic humanities, the institution charged with raising profound and pervasive problems to general consciousness, would be responding en masse, rather than relying on iconoclastic courage.


The dimensions of this disconnect are nowhere more apparent than in the academic debates surrounding the question of what comes after the ‘human.’ On the one hand, you have transhumanists like Nick Bostrom, who seem to think the naturalization of the human, though not without peril, will engender benevolent instrumentalization, a material efflorescence of Old Enlightenment autonomy, that we will become, to steal the Tyrell Corporation tagline, ‘more human than human.’ Given the priority of the material, the brute fact that gunshots to the head do kill, science holds out the promise of human perfectibility as a technical enterprise.


On the other hand you have the posthumanists like Cary Wolfe and Donna Haraway who seem to think the naturalization of the human will at last debunk the Great Lies of the Old Enlightenment (while magically preserving the ‘truths’), and conceptually justify a wholesale revaluation of the nonhuman… That the New Enlightenment will, in effect, overthrow the conceptual hegemony of what they call ‘Anthropocentrism.’


The general idea seems to be that we humans are too inclined to make too much of our own humanity, that the sciences, in the course of revealing all the profound ways we are continuous with nature, have shown us that ‘humanism’ (at least in its self-referentially blind incarnations) is little more than a conceit, a way to justify our crimes against the ‘merely natural.’ Once we set aside anthropocentrism, and the ‘speciesism’ that it underwrites, we will see, just for instance, that factory farms are actually concentration camps in moral disguise. The Problem of Meaning, if there is any such problem at all, is the human presumption to be its sole possessor.


My fear is that these people live in a fantasy world. A vegan Middle-earth. And I want to convince them, not that they do, but that they need to seriously consider the possibility that they might.


The first, most obvious, and perhaps most trenchant question is why the naturalization of the human should warrant a wholesale revaluation of the nonhuman rather than a wholesale devaluation of the human. This might sound horrible, but the question here is epistemic, not moral. Scientific discovery didn’t give a damn about the word of God, so why should it give a damn about vegan scruples–or scruples at all? It discovers what it discovers, and we have good reason to fear the worst where meaning is concerned.


Why? Well, on the one hand you have the pessimistic induction I noted above: science has spent centuries chasing value out of the natural world, so why should we, as something natural, be any exception? If the Old Enlightenment drained the world of intentionality, why, short of wishful thinking, should we assume the New Enlightenment will pour it back in?


On the other hand you have the fact that no one–and I mean literally no one–has managed to convincingly reconcile the intentional and the natural. This is nothing short of the bloody holy grail in cognitive science circles. This is the problem, and something that theorists like Wolfe blithely assume will be solved (or, worse yet, require no solution)–and here’s the thing, in a manner amenable to the very notion of ‘human’ they fervently wish undone.


As Wolfe writes in What Is Posthumanism?:


If it is true that cognitive science has an enormous amount to contribute to the area of philosophy that we used to call phenomenology–if it has even, in a way, taken it over–then it is also true that the textually oriented humanities have much to teach cognitive science about what language is (and isn’t) and how that, in turn, bears on any possible philosophy of the subject (human or animal). This is simply to say that it will take all hands on deck, I think, to fully comprehend what amounts to a new reality: that the human occupies a new place in the universe, a universe now populated by what I am prepared to call nonhuman subjects. And this is why, to me, posthumanism means not the triumphal surpassing or unmasking of something but an increase in the vigilance, responsibility, and humility that accompany living in a world so newly, and differently, inhabited.


New reality? Humility? Here we see how arguments against ‘anthropocentrism’ almost effortlessly lapse into arguments for rampant anthropomorphism, a kind of pan-anthropocentrism. Here I am, struggling to find ways to believe in morality for us, wondering whether there has ever been a subject, in the face of our growing knowledge of the natural, and these jokers are out painting the whole town in ‘value.’


This particular quote follows a reading that paints Daniel Dennett as a closet Cartesian. The strategy in these particular (Derridean) theoretical circles is to show how claims you don’t like follow from implicit commitments to the ‘metaphysics of presence,’ conceptualities that systematically devalue the constitutive role that occluded contexts play in meaning. Since the bulk of animal rights arguments turn on analogistic inferences, Wolfe spends a good deal of time attempting to level what might be called the ‘Linguistic Difference’ between humans and nonhumans, showing how our apparently unprecedented discursive and communicative abilities belong to a natural continuum. He then takes Dennett to task for his representationalism, and the way it “unwittingly reproduces” the commitments to Cartesianism he is elsewhere so keen to critique.


The problem, as it so happens, is that Dennett, unlike Wolfe, is profoundly acquainted with the Problem of Meaning. In fact, you could argue that his signature contribution to the Philosophy of Mind is his ‘intentional stance,’ and his denial of anything resembling ‘original intentionality.’ Even though these views are (unlike those belonging to Wolfe’s theoretical mentors, Derrida or Luhmann) genuinely radical, even though they thoroughly condition what Dennett means when speaking of natural systems like humans or animals, Wolfe does not so much as mention them in his argument. His reading of Dennett is, in effect, almost entirely tendentious. Dennett nowhere argues that humans are ontologically intentional (representational) in a way that animals are not, only that human systems are, thanks to something about their organization, the most conducive to the attribution of intentional sophistication, ‘stance stances,’ such as meta-dissimulation and linguistically reportable beliefs. Accusing him of being Cartesian because he uses the word ‘representation’ when talking about cognition is no different from accusing him of being a Creationist because he uses the word ‘design’ when discussing evolution. He just doesn’t mean it that way.


If anyone is working through residual commitments to Descartes, here, it’s Wolfe. He’s the one making transcendental arguments for this special thing–value–and the need to spread it far and wide. He’s the one who thinks that transcendental argumentation, despite everything cognitive psychology has discovered, despite thousands of years of abject inability to provide anything but the most meagre consensus, counts as a form of knowledge. Just consider the quote above. ‘The science is all well and fine,’ he’s saying, ‘but it can’t aspire to knowledge short of my philosophy.’


Ooooookay.


Dennett is a meaning skeptic–perhaps the most famous living. Since Wolfe is a meaning dogmatist who has hidden his commitment to original intentionality behind his fancy for Derrida (who stuck to criticizing Searle for a reason), he needs to make Dennett seem theoretically retrograde somehow. But the sad fact is, Wolfe is the one behind the curve, the one mired in neo-Scholasticism. He would almost certainly balk at the notion of original intentionality, but it really is hard to see how he (or Derrida) could do without some bait-and-switch version of it. Thinking of meaning in terms of the ‘trace,’ as always-already derived, does nothing to change the fact that your conceptual register is wholly intentional, that it begs, at every turn, the question of whether there has ever been such a thing. For that is the radical question, the one that makes Derrida another transcendental conservative.


Is Wolfe arguing that our existing commitments suggest that we take the moral stance toward the interpretation of animal systems, or is he arguing that animals are moral beings, and that only our conceptual conceits have led us to think otherwise? Trust me, I fully appreciate the ugly corner questions like these paint me in, but this is precisely my point: the Problem of Meaning is the ugly corner we all find ourselves painted in. The question of ‘conceptual conceits’ potentially has no bottom–and it almost certainly reaches further than Wolfe is prepared to go.


In “A Difficulty in the Path of Psychoanalysis,” Freud discusses what he calls the ‘three great narcissistic wounds’ to humanity, the way the theories of Copernicus, Darwin, and (in his humble view) Freud have robbed humanity of their ontological privilege. Psychoanalysis, of course, never became the science he thought it would, and so never accrued the cognitive authority to do much more than prick the pride of the odd human here and there. The situation is far, far different with cognitive science, however. It is perhaps inevitable that moralists like Wolfe will cherry-pick its findings in an attempt to undermine the apparent disanalogies between the human and the animal, and so rationalize their arguments for animal rights. Instead of seeing the mechanization of the human and the animal (not to mention the conceptual, social, and political consequences that this mechanization implies), he sees the animalization of the human and the humanization of the animal–what he needs to see. Thanks to confirmation bias, he sees opportunity in this third wound, and utterly fails to consider the primary question every grave wound raises: whether it is mortal.


That meaning–and value with it–might be dead before all is said and done.


Perhaps this is the reason no one wants to confront the Problem of Meaning in humanities circles: it simply does not serve their moral agendas. As Haidt says, human cognition is pretty much the bitch of our intuitive scruples, and these guys are as much skewed judging machines as the rest of us–even more, given that they are professionally trained rationalizers. They have an egalitarian impulse, one that I admire and share, and so, like everyone else, they cook up reasons why that impulse should sweep the table–the facts be damned.


Because the fact is, we are only now learning what the ‘human’ is, and the picture emerging from the fog of our bias and ignorance is troubling to say the least. Wolfe and his cadre will no doubt continue having faith in their hyper-egalitarian intuitions, will no doubt find ways to further rationalize whatever I have problematized here. But the neuroscientific research will continue accumulating all the same, and as it does, those interpretative approaches that ignore it will simply drift deeper into the fog of apologia, greasing the wheels of those who know and command by feeding the preconceptions of those who hope and serve.


Telling people–not unlike the evangelical Christians–that everything animate has a reason, everything animate has a claim.


This is why I catch a whiff of decadence when reading ‘posthuman’ theorists like Wolfe, that moral twinge you get when someone worries pets to the exclusion of starving children. I know that the latter suffer, and I believe this makes a binding claim upon us all. And I see it as the scandal of our age–the signature tragedy–that the faith required to hold this belief grows in proportion to our scientific knowledge of the human soul.


“It is a self-deception of philosophers and moralists,” Nietzsche writes, “to imagine that they escape decadence by opposing it. That is beyond their will; and, however little they acknowledge it, one later discovers that they were among the most powerful promoters of decadence.”



Published on July 26, 2012 12:52

July 15, 2012

I, and Silence…

Aphorism of the Day: I don’t make unwise decisions. They make me.


.


I spent last week in a cabin in the high north, doing everything I could to avoid communing with the vast vacancy of nature. Boating. Jet-skiing. Getting hammered listening to Godsmack. Teaching my daughter how to swim.


Marvelling at her laughter was the closest I got to Nature, I’m sure.


This week is a holiday week as well, so in lieu of anything ‘substantial,’ I thought I would post my favourite Emily Dickinson poem…


.


I felt a Funeral, in my Brain,


And Mourners to and fro,


Kept treading – treading – till it seemed


That Sense was breaking through -


.


And when they all were seated,


A Service, like a Drum -


Kept beating – beating – till I thought


My Mind was going numb -


.


And then I heard them lift a Box,


And creak across my Soul


With those same Boots of Lead, again.


Then Space – began to toll,


.


As all the Heavens were a Bell,


And Being, but an Ear,


And I, and Silence, some strange


Race, Wrecked, solitary, here -


.


And then a Plank in Reason, broke


And I dropped down, and down -


And hit a World, at every plunge,


And Finished knowing – then -


.


This is the third ‘official’ version. It’s interesting to note that in the original she used ‘Soul’ to replace ‘Brain’ (which she had crossed out) in the third stanza.


“I Felt a Funeral” is probably the first poem that made me physically dizzy reading it. As a first-year undergrad it literally owned me in a way very few instances of language ever have. Back then it was the image of plunging through worlds in reason’s broken wake that most struck me. In fact, ‘a plank in reason broke’ has been a kind of solitary shorthand for me ever since, something my gut brain offers up whenever I witness something tragic or just plain crazy.


Once I began pondering consciousness more intently it became a catechism proper, something I used to remind myself that no one knew what the fuck they were talking about, period. I often imagine that I can crawl into her headspace as she was writing this, even though I know this is a conceit that every reader suffers, thinking that the fragments that speak to them are due to some ethereal correspondence of souls. Ever since discovering the way depression allows us to see past a number of self-congratulatory filters–ever since discovering that happiness turns on delusion–I’ve come to think of Dickinson as a kind of phenomenologist of the ‘nooliminal,’ as someone gnawing on the contradictory iron that bars our cage.


As much prophet as poet.


So, given that these are the most indolent days of summer, I thought I would offer this up, and invite anyone with similar ‘keystone’ poems or prose passages to post them here for others to laze about and contemplate. I know that more than me and Silence make up this strange race…



Published on July 15, 2012 09:24
