R. Scott Bakker's Blog

October 25, 2012

Less Than ‘Zero Qualia’: Or Why Getting Rid of Qualia Allows us to Recover Experience (A Reply to Keith Frankish)

Aphorism of the Day: Here, it turns out, is so bloody small that even experience finds itself evicted and housed over there.


.


From Philosophy TV:


Richard Brown: And you know there is a–I don’t want to say growing movement–but there is a disturbing undercurrent [laughs] of philosophers who are out and saying that they are in fact zombies. So I don’t know if you are aware of this or not but…


Keith Frankish: I’m… [laughs] Not phenomenally.


Richard Brown: Okay… [laughs]


Keith Frankish: [laughs] Yes, I might align myself with this ‘disturbing undercurrent.’


.


I think philosophy of mind–as an institution–is caught in a great dilemma: either it accepts the parochial, heuristic nature of intentional cognition, or it condemns itself to never understanding human consciousness. This was the basis of my interpretation of Frank Jackson’s Mary argument as a ‘heuristic scope of application detector,’ a way to make the limits of human environmental cognition known. Why does it seem possible for Mary to know everything about red without ever having experienced red? Why does the additional information provided by experiencing red not obviously count as ‘knowledge’? In other words, why the conflict of intuitions?


The problem, in a nutshell, has to do with informatic neglect (see my previous post for more detail). Heuristic cognition leverages computational efficiencies by ignoring information. Intentional cognition, in particular, systematically neglects all the neurofunctional information pertaining to our environmental tracking. In a sense, this is all that ‘transparency’ is: blindness to the mechanisms responsible for environmental cognition. Given the functional independence of our environments, neglecting this information pays real computational dividends. Given reliable tracking systems, information regarding those systems is not necessary to cognize systems tracked, but only so long as those systems tracked are not ‘functionally entangled’ with the systems tracking. You can puzzle through a small engine repair because the systems doing the tracking in no way interfere with the system tracked. What you might call the medial causal relations that enable you to repair small engines in no way impinge on the lateral causal relations that make engines break down or run.


This is why intentional cognition is almost environmentally universal, simply because the environmental systems tracked are almost universally functionally independent of our cognition. I say ‘almost,’ of course, because on the microscopic level this functional independence breaks down as the lateral systems tracked become sensitive to ‘interference’ from medial systems tracking: if photons leave small engines untouched, they have dramatic effects on subatomic particles. This is also why intentional cognition can only get consciousness wrong. When we attempt to cognize conscious experience, we have an instance of a cognitive system that systematically neglects medial causal relationships attempting to track a functionally entangled system as if it were independent. The lateral and the medial are one and the same in these instances of attempted cognition, which quite simply means that neither can be cognized or ‘intuited.’


And this, on the Blind Brain Theory (BBT), is the primary hook from which the ‘mind/body’ problem hangs. What we ‘cognize’ when we draw conscious experience into deliberative cognition is quite literally analogous to Anton’s Syndrome: we think we see everything there is to be seen, and yet we really don’t see anything at all. Consciousness, as it appears to us, is a kind of ‘forced perspective’ illusion. Given that we are brainbound, or functionally entangled, and given the environmental orientation of our cognitive systems, we have no way to ‘intuit’ consciousness absent gross distortions. As such, consciousness as it appears is literally inexplicable, period, let alone in natural terms. It can only be explained away, leaving a remainder, consciousness as it is, as the only thing science need concern itself with.


In this post, I want to consider a recent ‘radical position’ in the philosophy of mind, that belonging to Keith Frankish, and show 1) the facility with which his argument can be recapitulated, even explained, in BBT terms; and 2) how it is nowhere near radical enough.


In his “Quining Diet Qualia,” Frankish notes that defences of what he terms ‘classic qualia,’ understood as “introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective” (1-2) have largely vanished from the literature, primarily because ‘intrinsic properties’ resist explanation in either functional or representational terms. Instead, theorists have opted for a ‘watered-down conception’ of qualia in terms of “phenomenal character, subjective feel, raw feel, or ‘what-is-it-likeness’” (2), what Frankish calls ‘diet qualia.’ The idea is that talking about qualia in these terms makes them palatable to both dualists and physicalists, or ‘theory-neutral,’ as Frankish puts it, since everyone assumes that qualia, in this restricted sense, at least, are real.


But Frankish doubts that qualia make sense in even this minimal sense. To illustrate his suspicion, he introduces the concept of ‘zero qualia,’ which he defines as those “properties of experiences that dispose us to judge that experiences have introspectable qualitative properties that are intrinsic, ineffable, and subjective” (4). His strategy will be to use zero qualia to show that diet qualia don’t differ from classic qualia in any meaningful sense.


Now, one of the things that caught my eye in this paper was the striking resemblance between zero qualia and my phenophage thought experiment from several weeks back:


Imagine a viscous, gelatinous alien species that crawls into human ear canals as they sleep, then over the course of the night infiltrates the conscious subsystems of the brain. Called phenophages, these creatures literally feed on the ‘what-likeness’ of conscious experience. They twine about the global broadcasting architecture of the thalamocortical system, shunting and devouring what would have been conscious phenomenal inputs. In order to escape detection, they disconnect any system that could alert its host to the absence of phenomenal experience. More insidiously still, they feed-forward any information the missing phenomenal experience would have provided the cognitive systems of its host, so that humans hosting phenophages comport themselves as if they possessed phenomenal experience in all ways. They drive through rush hour traffic, complain about the sun in their eyes, compliment their spouses’ choice of clothing, ponder the difference between perfumes, extol the gustatory virtues of their favourite restaurant, and so on. (TPB 21/09/2012)


By defining zero qualia in terms of their cognitive effects, Frankish has essentially generated a phenophagic concept of qualia–which is to say, qualia that aren’t qualitative at all. I-know-I-know, but before you let that squint get the better of you, consider the way this conceptualization recontextualizes the supposedly minimal commitment belonging to diet qualia. By detaching the supposed cognitive effects of phenomenality from phenomenality, zero qualia raise the question of just what this supposedly neutral ‘phenomenal character’ is. As Frankish puts it, “What could a phenomenal character be, if not a classical quale? How could a phenomenal residue remain when intrinsicality, ineffability, and subjectivity have been stripped away?” (4). Zero qualia, in other words, have the effect of showing that diet qualia, despite the label, are packed with classic calories:


The worry can be put another way. There are competing pressures on the concept of diet qualia. On the one hand, it needs to be weak enough to distinguish it from that of classic qualia, so that functional or representational theories of consciousness are not ruled out a priori. On the other hand, it needs to be strong enough to distinguish it from the concept of zero qualia, so that belief in diet qualia counts as realism about phenomenal consciousness. My suggestion is that there is no coherent concept that fits this bill. In short, I understand what classic qualia are, and I understand what zero qualia are, but I don’t understand what diet qualia are; I suspect the concept has no distinctive content. (4-5)


Frankish then continues to show why he thinks various attempts to save the concept are doomed to failure. The dilemma is structured so that either the proponent of diet qualia takes the further step of defining ‘phenomenal character,’ a conceptual banana peel that sends them skidding back into the arms of classic qualia, or they explain why dispositions aren’t what they really meant all along.


Now on the BBT account, qualia need to be rethought within a consciousness and cognition structured and fissured by informatic neglect. The heuristic nature of intentional cognition means that medial neurofunctionality is always neglected. And as I said above, this means deliberative reflection on conscious experience constitutes a clear-cut ‘scope violation,’ an instance of using a heuristic to solve a problem it never evolved to tackle. Introspective intentional cognition, on this account, is akin to climbing trees with flippers.


Of course it doesn’t seem this way–quite the opposite in fact–and for reasons that BBT predicts. Like medial neurofunctionality, the limits of intentional cognition are also lost to neglect. Short of learning those limits–the scope of applicability of intentional cognition–universality is bound to be the default assumption. So our intentional cognitive systems make sense of what they can, oblivious of their incapacity. The ease with which they conjure worlds out of pixels and paint, for instance, demonstrates their power and automaticity. BBT suggests that something analogous happens when intentional cognition is fed metacognitive information: the information is organized in a manner amenable to intentional, environmental cognition.


As asserted above, the point of the intentional heuristic is to isolate and troubleshoot lateral environmental relations (normative or causal) against a horizon of variable information access. Thus it ‘lateralizes,’ you could say, the first-person, turns it into a little environment. The problem is that this ‘phenomenal environment’ literally possesses no horizon of variable access (cognition is functionally entangled, or ‘brainbound,’ with reference to experience) and, thanks to the interference of the medial neurofunctionality neglected, no lateral causal relationships. Like Plato’s cave-dwellers, intentional cognition is quite simply stuck with information it cannot cognize. ‘Phenomenal character’ becomes a round peg in a world of cognitive squares: as it has to be on the BBT account.


By making the move to ‘cognitive dispositions,’ zero qualia bank on our scientific knowledge of the otherwise neglected axis of medial neurofunctionality. The challenge, for the diet qualia advocate, is to explain how phenomenal character anchors this medial neurofunctionality (understood as cognitive dispositions), to explain, in other words, what role ‘phenomenal character’ plays–if any. But of course, thanks to the heuristic short-circuit described above, this is precisely what the diet qualia advocate cannot do. The question then becomes, of course, one of what ‘diet’ amounts to. Either one moves inside the black box and embraces classic qualia or one moves outside it and settles for zero qualia.


But of course, neither of these options is tenable either. Dispositional accounts, though epistemologically circumspect, have a tendency to be empirically inert: the job of science is to explain dispositions, which is to say, use theory to crack open black boxes. Epistemological modesty isn’t always a virtue. And besides, there remains the fact that we actually do have these experiences!


Frankish’s real point, of course, is that philosophy of mind has made no progress whatsoever in the move to diet qualia, that phenomenality remains as impervious as ever to functional or representational explanation and understanding. But he remains as mystified as everyone else about the origins and dynamics of the problem. I would append, ‘only more honestly so,’ were it not for claims like, “I think everyone agrees that zero qualia exist,” in the interview referenced above. I certainly don’t, and for reasons that I think should be quite clear.


For one, consider how his ‘cognitive dispositions’ only run one way, which is to say, from the black box of phenomenality, when the medial neurofunctionality occluded by metacognitive deliberation almost certainly runs back and forth, or in other words, is exceedingly tangled. And this underscores the artificiality of zero qualia, the way they can only do their intuitive work by submitting to what is a thoroughly distorted understanding of conscious experience in the first place. The very notion that phenomenal character can be ‘boxed,’ cleanly parsed from its cognitive consequences, is an obvious artifact of neurofunctional informatic neglect, the way intentional cognition automatically organizes information for troubleshooting.


On the BBT account, the problem lies in the assumption that intentional cognition is universal when it is clearly heuristic, which is to say, an information neglecting problem-solving device adapted to specific problem-solving contexts. The ‘qualia’ that everyone has been busily arguing about and pondering in consciousness research and the philosophy of mind are simply the artifacts of a clear (once you know what to look for) heuristic scope violation. There are no such things, be they classic, diet, or zero.


Now given that the universality of intentional cognition is the default assumption of nearly every soul reading this, I’m certain that what I’m about to say will sound thoroughly preposterous, but I assure you it possesses its own, counterintuitive yet compelling logic (once you grasp the gestalt, that is!). I want to suggest that it makes no more sense to speak of qualia ‘existing’ than it does to speak of individual letters ‘meaning.’ Qualia are subexistential in the same way that phonemes are ‘subsemantic.’


But they must be something! your intuitions cry–and so they must, given that intentional cognition is blind to its heuristic limits, to the very possibility that it might be parochial. It has no other choice but to treat the first-person as a variant of the third, to organize it for the kinds of environmental troubleshooting it is adapted to do. After all, it works everywhere else: Why not here? Well, as we have seen, because qualia are neurofunctionally integral to the effective functioning of intentional cognition, they are a medial phenomenon, and as such are utterly inaccessible to intentional cognition, given the structure of informatic neglect that characterizes it.


But this doesn’t mean we can’t understand them–it doesn’t mean that McGinn and the Mysterians are correct. McGinn, you could say, glimpsed the way phenomenality might exceed the reach of intentional cognition while still assuming that the latter was humanly universal, that we couldn’t gerrymander ways to see around our intuitions, as we have, for example, with general relativity or quantum mechanics.


Consciousness presents us with precisely the same dilemma: cling to heuristic intuitions that simply do not apply, or forge ahead and make what sense of these things as we can. If the concept ‘existence’ belongs to some heuristic apparatus, then the notion that qualia are subexistential is merely counterintuitive. Otherwise, relieved of the need to force them into a heuristic never designed to accommodate them, we can make very clear sense of them as phenomemes, the combinatorial building blocks of ‘existence,’ the way phonemes are the combinatorial building blocks of ‘meaning.’ They do not ‘exist’ the way apples, say, exist in intentional cognition, simply because they belong to a different format. ‘What is redness?’ makes no sense if we ask it in the same intuitive way we ask, ‘What are apples?’ The key, again, is to avoid tripping over our heuristics. Though redness eludes the gross, categorical granularity of intentional cognition, we can nevertheless talk apples and rednesses together in terms of nonsemantic information–which is just to say, in terms belonging to what the life sciences take us to be: evolved, environmentally-embedded, information processing systems.


Because of course, the flip side of all this confusion regarding qualia is the question of how a mere machine can presume to ‘know truth,’ as opposed to happening to stand in certain informatic relationships with its environments, some effective, others not. When it comes to conundrums involving intentionality, qualia are by no means lonely.




October 22, 2012

‘V’ is for Defeat: The Total and Utter Annihilation of Representational Theories of Mind

Aphorism of the Day: The mere fact of cartoons shouts the environmental orientation of our cognitive heuristics. A handful of lines is all the brain needs to create a world. South Park, of all things, likely means we have no idea what we’re talking about when we purport to explain ‘consciousness.’


.


Some kind of pervasive and elusive incompatibility haunts the relation between our intuitive self-understanding, what Wilfrid Sellars famously referred to as the ‘Manifest Image,’ and our ever deepening natural self-understanding, the ‘Scientific Image.’ The question is really quite simple: How do we make intentionality consistent with causality? How do we make the intentional logic of the subject fit with the causal logic of the object? Most philosophers are what might be called semantic Hawks, thinkers bent on finding ways of overcoming this incompatibility, hoping against hope that the resolution will leap out of the conceptual or empirical details. Some are semantic Diplomats, thinkers who have thrown their hands up, arguing the cognitive autonomy of the two domains. And still others, the semantic Profiteers, simply want to translate the causal into an expression of the intentional, to make science one particularly powerful ‘language game’ among others.


I’m what you might call a semantic Defeatist, someone convinced the only real solution is to explain the whole thing away. I think the Hawks are fighting a battle they’ve literally evolved to lose, that the Diplomats, despite their best intentions, are negotiating with ghosts, and that the Profiteers have simply found a way to load the horse and whip the cart. Defeatists, of course, rarely prevail, but they do persist. And so the madness of arguing for the profound and troubling structural role blindness plays in human consciousness and cognition continues. Existence understood as the tissue of neglect. Yee. Hah.


Today, I want to discuss the semantic Hawks, provide a historical and conceptual cartoon of what makes them so warlike, and then sketch out, as best as I can, why I think they are doomed to lose their war.


Like their political counterparts, semantic Hawks are motivated by conviction, particularly regarding the nature of meaning, representation, and truth. Given the millennial philosophical miasma surrounding these concepts, one might wonder how anyone could muster any conviction of any kind regarding their ‘nature.’ I know back in my continental philosophical days it was one of those ‘other guy’ head-scratchers, the preposterous commitment that made so much so-called ‘analytic thought’ sound more like religion than philosophy. But that was bigotry on my part, plain and simple. The Hawks constitute the semantic majority for damn good reasons. They are eminently sensible, which, as we shall see, is precisely the problem.


Historically, you have the influence of Frege and Russell at the beginning of the 20th century. A hundred and fifty years previous, Hume’s examinations of human nature had dramatically disclosed the limits that subjectivity placed on our attempts to think objective truth. Toward the end of the 18th century, Kant thought he had seen a way through: if we could deduce the categorical nature of that subjectivity, then we could, at the very least, grasp the true-for-us. But this just led to Hegel and the delicious-but-not-so-nutritious absurdity of reducing everything to ‘objective subjectivity.’ What Frege and Russell offered was nothing less than a way to pop the suffocating bubble of subjectivity, theories of meaning that seemed to put language, and therefore language users, in clear contact with the in-itself.


Practically speaking, the development of formal semantics was like cracking open caulked-shut windows. Given a handful of rules, you could formalize what seem to be the truth preserving features of natural languages. Of course, it only captured a limited set of linguistic features, and even within this domain it was plagued with puzzles and explanatory conundrums. But it was extraordinarily powerful nonetheless, so much so that it seemed natural to assume that with a little ingenious conceptual work all those pesky wrinkles could be ironed out, and we could jam with a perfectly-pressed Frock of Ages.
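

To give a toy sense of what those ‘handful of rules’ look like in practice, here is a minimal compositional semantics for propositional logic, sketched in Python (my illustration, obviously anachronistic, with the formula encoding invented for the occasion):

```python
# A toy compositional semantics for propositional logic. Formulas are
# nested tuples, e.g. ("and", "p", ("not", "q")), and the truth value
# of a complex formula is computed from the truth values of its parts
# by a handful of fixed rules.

def evaluate(formula, assignment):
    """Return the truth value of `formula` under an assignment of
    truth values to atomic sentences."""
    if isinstance(formula, str):        # atomic sentence: look it up
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return all(evaluate(a, assignment) for a in args)
    if op == "or":
        return any(evaluate(a, assignment) for a in args)
    if op == "implies":
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown connective: {op}")

# Truth-preservation, mechanically checked: 'p, therefore p or q' never
# takes us from a true premise to a false conclusion.
for p in (True, False):
    for q in (True, False):
        env = {"p": p, "q": q}
        assert evaluate(("implies", "p", ("or", "p", "q")), env)
```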


The theories of meaning arising out of these considerations in the philosophy of language also seemed–and still seem–to nicely dovetail with parallel questions in the philosophy of mind. Like language, conscious experience clearly seems to put us in logical contact with the world. Experiences, like claims, can be true or false. Phenomenology, like phonology, seems to vanish in the presentation of something else. And this drops us square in the lap of representationalism’s power as an explanatory paradigm: intentionality, meaning, and normativity are not simply central to human cognition, they are the very things that must be explained.


Conscious experience is representational: the reason we see through experience is the same as the reason we see through paintings or television screens. What is presented–qualia or paint or pixelated light–re-presents something else from the world, the representational content. What could be more obvious?


With the development of computers toward the middle of the 20th century, theorists in philosophy and psychology suddenly found themselves with a conspicuously mechanistic model of how it might all work. Human cognition, both personal and subpersonal, could be understood in terms of computations performed on representations. The relation of the mental to the neural, on this account, was no more mysterious than the relation between software and hardware (which, as it turns out, is every bit as mysterious!). And so, given this combination of intuitive appeal and continuity with other ‘hard’ research programs, representational theories of mind proved well nigh irresistible, not only to Anglo-American philosophy, but to a psychological establishment keen to go to rehab after a prolonged bout of behaviourism.


The real problem, aside from deciding the best way to characterize the theoretical details of the representational picture, is one of ironing out the causal details. The brain, after all, is biomechanical, an object belonging to the domain of the life sciences more generally. If you want to avoid the hubristic and (from a scientific perspective) preposterous enterprise of positing supra-natural entities, you need to explain how all this representation business, well, actually works. Thus the decades-long project of theorizing causal accounts of content.


The big problem, it turns out, is one of providing a natural account of content determination that simultaneously makes sense of misrepresentation. Jerry Fodor famously frames the difficulty in terms of the ‘disjunction problem’: you can say that your representation ‘dog’ is causally triggered by sensing a dog in your environment, which seems well and fine. The problem is that your representation ‘dog’ is sometimes causally triggered by sensing a fox in your environment (perhaps in less than ideal observational conditions). So the question becomes what, causally, makes your representation ‘dog’ a representation of a dog as opposed to a representation of a dog or fox. What, in other words, causally explains the way representations can be wrong? This may seem innocuous at first glance, but the very intelligibility of the representational account depends on it. Without some natural way of sorting content determining causes (dogs) from non-content determining causes (foxes or anything else) you quite simply have no causal account of content.
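

The shape of the worry is simple enough to caricature in a few lines of code (a sketch of mine, not Fodor’s, with every number invented):

```python
# A deliberately crude cartoon of the disjunction problem. The 'DOG'
# detector fires whenever its input exceeds a threshold; dogs reliably
# exceed it, but so, in bad light, do foxes. All numbers are invented.

def dog_detector(stimulus_strength, threshold=0.5):
    """Fires (returns True) when the incoming signal is strong enough."""
    return stimulus_strength > threshold

stimuli = {
    "dog in daylight": 0.9,
    "dog at dusk":     0.7,
    "fox in daylight": 0.3,
    "fox at dusk":     0.6,   # degraded viewing conditions
}

triggers = sorted(k for k, v in stimuli.items() if dog_detector(v))
print(triggers)   # ['dog at dusk', 'dog in daylight', 'fox at dusk']

# The set of actual causes is disjunctive (dogs-or-dusky-foxes), so a
# purely causal story assigns the detector disjunctive content, and the
# fox-triggered firing no longer counts as a *mis*representation.
```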


After decades of devious ingenuity, critics (most notably Fodor himself) have always been able to show how purported solutions run afoul of some variant of this problem. So why not strike your colours and move on as a Defeatist like me advocates? The thing to remember is that there are at least two explanatory devils in this particular philosophical room: for many, conscious experience, short of representational theories, seems so baffling that the difficulties pertaining to causal content determination are a bargain in comparison. And this is one big reason why anti-representational accounts have made only modest headway over the intervening years: they literally seem to throw the baby out with the bathwater.


For the Hawk, intentionality is a primary explanandum. Recall the power of formal semantics I alluded to above: not only do logic and mathematics work, not only do they make science itself possible, they seem to be intentional through and through (though BBT disputes even this!). Given that intentionality is every bit as ‘real’ as causality, the question becomes one of how they come together in our heads. The responsible thing, it would seem, is to chalk up their track record of theoretical failure to mere factual ignorance, to simply continue taking runs at the problem armed with more and more neuroscientific knowledge.


As a Defeatist, however, I think the problem is thoroughly paradigmatic. I don’t worry about throwing out the baby with the bathwater simply because I’m not convinced the baby ever existed (unlike the Profiteers, for instance, who think the baby was switched in the hospital). For the Hawk, however, this means I have nothing short of an extraordinary explanatory and argumentative burden to discharge: not only do I need to explain why there’s no intentional baby, I need to explain why so many are so convinced that there is. Even worse, it would seem that I need to also explain away formal semantics itself, or at least account for its myriad and quite dazzling achievements. Worst of all, I probably need to explain Truth on top of everything.


The Blind Brain Theory (BBT) has crazy things to say about all these things. But I lack the space to do much more than wedge my foot in the door here. None of these burdens will be discharged in what follows. If I manage to convince a soul or two that their ingenuity is better wasted elsewhere, so much the better. But all I really want to show is that BBT is worth the time and effort required to understand it on its own terms. And I hope to do this by using it to formulate two, interrelated questions that I think are so straightforward and so obviously destructive of the representationalist paradigm, they might actually merit the hyperbole of this post’s title.


The first point I want to make has to do with heuristics, particularly as they are conceived by the growing number of researchers studying what is called ‘ecological rationality.’ Any strategy that solves problems by ignoring available information is heuristic. ‘Rules of thumb’ work by means of granularity and neglect, by ignoring complexities or entire domains if need be. As a result, they are problem-specific: they only work when applied to a limited set of specifically structured challenges. As Todd and Gigerenzer write,


“The concept of ecological rationality–of specific decision-making tools fit to particular environments–is intimately linked to that of the adaptive toolbox. Traditional theories of rationality that instead assume one single decision mechanism do not even ask when this universal tool works better or worse than any other, because it is the only one thought to exist. Yet the empirical evidence looks clear: Humans and other animals rely on multiple cognitive tools. And cognition in an uncertain world would be inferior, inflexible, and inefficient with a general purpose optimizing calculator…” (Ecological Rationality, 14)


Ecological rationality looks at cognition in thoroughly evolutionary terms, which is to say, as adaptations, as a ‘toolbox’ of myriad biomechanical responses to various environmental challenges. It turns out that optimization strategies, problem-solving approaches that seek to maximize information availability in an attempt to generate optimal solutions, are not only much more computationally cumbersome (and thus an evolutionary liability), they are also often less effective than far simpler, far cheaper, quicker, and more robust heuristic strategies.


Todd and Gigerenzer give the example of catching a baseball. Until recently the prevailing assumption was that fielders unconsciously used a complex algorithm to estimate distance, velocity, angle, resistance, wind, and so on, to calculate the ball’s trajectory and anticipate where it would land–all within a matter of seconds. As it turns out, they actually rely on rules of thumb like the gaze heuristic, where they fix their gaze on the ball high up and start running so that the image of the ball rises at a constant rate relative to their gaze and position. Rather than calculate the ball’s trajectory, they let the trajectory steer them in.
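

For the code-minded, the contrast is easy to cartoon (this is my sketch, not Todd and Gigerenzer’s model, and every number in it is invented): the optimizing fielder needs the physics up front, while the heuristic fielder needs only one observable and a feedback rule.

```python
import math

# A cartoon contrast between the two stories about catching a fly ball.
# Nothing here models real fielders; the gain and rates are invented.

# (1) The 'optimizing' story: solve the physics for the landing point,
# which requires knowing launch speed, angle, and so on.
def landing_distance(speed, angle_deg, g=9.8):
    a = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * a) / g    # ideal projectile range

# (2) The gaze heuristic: neglect the physics entirely. Track a single
# observable (how fast the ball's image is climbing in your visual
# field) and adjust your running speed to hold that rate constant.
def gaze_heuristic_step(gaze_rate, target_rate, run_speed, gain=0.5):
    # Image climbing too slowly? You're falling behind: speed up.
    # Climbing too fast? You're too close: slow down or back up.
    return run_speed + gain * (target_rate - gaze_rate)

print(landing_distance(30, 45))                      # ~91.8 metres
print(gaze_heuristic_step(0.8, 1.0, run_speed=4.0))  # 4.1: run faster
```

The point of the cartoon is simply that the second function never so much as mentions distance, wind, or trajectory: the neglected information is constitutive of the strategy.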


For our purposes, the important aspects of heuristic troubleshooting are 1) informatic neglect, the strategic omission of information; and 2) ecological matching, the way heuristics are only effective for a certain set of problems.


As far as I know, no one in consciousness research and philosophy of mind circles has bothered to think through the more global implications of informatic neglect on cognition, let alone consciousness. Most everyone with a naturalistic bent accepts the heuristic, plural nature of human and animal cognition. But no one to my knowledge has thought through the fact that the ‘representational paradigm’ is itself a heuristic.


How can we know the ‘R-paradigm’ is heuristic? Well… Because of the need to provide a causal account of content-determination!


Causal information, in other words, is the information neglected, the very thing the R-paradigm elides. I think you could mount a strong argument that the R-paradigm has to be heuristic simply on evolutionary, developmental grounds. But the primary reason is structural: there is simply no way for the brain to track the causal complexities of its own cognitive systems, even if it paid evolutionary dividends to do so. This structural fact, you could suppose, finds expression in the paradigmatic absence of neurofunctional information in so-called representational cognition.


The R-paradigm is heuristic–full stop. It systematically neglects information. This means (or at the very least, strongly suggests) that the R-paradigm, like all other heuristics, is ecologically matched to a specific set of problems. The R-paradigm, in other words, is not a universal problem-solving device.


And this means that the R-paradigm is something that can be applied out-of-school–that it can be misapplied. Understood in these terms, the tenacious nature of the content-determination problem (and the grounding problem more generally) takes on an entirely new significance: Is it merely coincidental that Hawkish philosophers cannot conceptually (let alone empirically) explain the R-paradigm in causal terms–which is to say, in terms of the very information the R-paradigm neglects?


Perhaps. But let’s take a closer look.


As a heuristic, the R-paradigm necessarily has a limited scope of applicability: it is a parochial problem-solver, and only appears universal thanks (once again) to informatic neglect. It seems relatively safe to assume that the R-paradigm is primarily adapted to environmental problem-solving or third-person cognition. If this were so, we might expect it to possess a certain facility for causal relations in our environments. And indeed, as the transparency that motivates the Hawks would suggest, it’s tailor-made for causal explanations of things not itself. It neglects almost all information pertaining to our informatic relation to our environment, and delivers objects bouncing around in relation to one another–fodder for causal explanation.


Small wonder, then, everything goes haywire when you take this heuristic to the question of consciousness and the brain. Neglecting your informatic relation to functionally independent systems in your environment is one thing; neglecting your informatic relation to functionally dependent systems in your own brain is something altogether different. The R-paradigm is quite literally a heuristic that neglects the very information required to cognize consciousness. How could it not misfire when faced with this problem? How could it come remotely close to accurately characterizing itself?


The problem of content determination, on the BBT account, is actually analogous to the problem of self-determination–which is to say, free will. In the latter, the problem is one of causally squaring the circle of ‘choice,’ whereas in the former the problem is one of causally squaring the circle of ‘meaning.’ Where cause flattens choice, it simply sails past meaning. And how could it be otherwise, when nothing less than truth is the long-sought-after ‘effect’?


Like choice, aboutness is a heuristic, a way of managing environmental relationships in the absence of constitutive causal information. It is a kluge–perhaps the most profound one. No conspiracy of causal factors can conjure representational content because the relationship sought is an exceedingly effective but nevertheless granular substitute for the lack of access to those selfsame factors.


Of course it doesn’t seem that way, intuitively speaking. Consider the example of the gaze heuristic, given above. Does it make sense to suppose the gaze heuristic is actually an optimization algorithm? Of course not: Informatic neglect is constitutive of heuristic problem-solving. So why did so many assume that some kind of optimization algorithm underwrote ball catching? Why, in other words, was the informatic neglect involved in ball-catching something that required experimental research to reveal? Well, because informatic neglect is just that: informatic neglect. Not only is information systematically elided, information regarding this elision is lacking as well. This effectively renders heuristics invisible to conscious experience. Not only do we lack direct awareness of which heuristic we are using, we generally have no idea that we are relying on heuristics at all. (Kahneman’s recent Thinking, Fast and Slow provides a wonderful crash course on this point. What he calls WYSIATI, or What-You-See-Is-All-There-Is, is a version of ‘informatic neglect’ as used here).


Aboutness not only seems ‘sufficient,’ to be the only tool we need; it also seems to be universal, a tool for all problem-solving occasions. Moreover, given the profoundly structural nature of the informatic neglect involved, the fact that the brain is necessarily blind to its own neurofunctionality, there is a sense in which aboutness is unavoidable: if the gaze heuristic is one tool among many, then aboutness is our hand, a ‘tool’ we cannot but use (short of fumbling things with our elbows). More still, you can add to this list what might be called the ‘ease of worlding.’ One need only watch an episode of South Park to appreciate how primed our cognitive systems are, and how little information they require, to generate ‘external environments.’ It’s easy to forget that the ‘representational images’ that surround us are actually spectacular kinds of visual illusions. Structure a meagre amount of visual information the proper way, and we automatically cognize depth in flat surfaces populated with non-existent objects.


Aboutness provides the structural frame of our cognitive relation to our environments, conjuring worlds automatically at the least provocation. Given this, you could argue that representational theories of mind are a kind of ‘forced move,’ a theoretical step we had to take in our attempts to understand consciousness. But you can also see why it’s something a mature scientific account of consciousness and cognition requires we must see our way past. As soon as you acknowledge the intimate, inextricable relationship between mind and brain, you acknowledge that the former somehow turns on neurofunctionality–which is to say, the very thing systematically neglected by aboutness.


Reflecting on conscious experience means feeding brain processes to a heuristic that spontaneously and systematically renders them causally inexplicable. In a sense, this explains the charges of ‘homunculism’ you find throughout the literature. The idea of a ‘little observer in the head’ that mistakenly ‘objectifies’ or ‘hypostatizes’ aspects of conscious experience is more than a little impressionistic. Framed in terms of heuristics and informatic neglect, the metaphoric problem of homunculism becomes a clear instance of heuristic misapplication: How can we trust a heuristic obviously designed to cognize our environments absent neurofunctional information to assist our attempts to cognize ourselves in terms of neurofunctional information?


If anything, one should expect that such a heuristic system would cognize the brain in non-neurofunctional terms, which is to say, as something quite apart from the brain. In other words, given something like an aboutness heuristic, one should expect dualistic interpretations of consciousness to be a kind of intuitive default. And what is more, given something like the aboutness heuristic, one should expect consciousness to be exceedingly difficult to understand in causal–which is to say, naturalistic–terms. Using the aboutness heuristic to cognize the brain environmentally, in the third-person, isn’t problematic simply because isolating causal relations in functionally independent systems is its stock in trade. Neglecting all the enabling machinery between the cognizing brain and the brain cognized facilitates cognizing the latter because that machinery is irrelevant to its function. Blindness to its own enabling machinery literally facilitates seeing the enabling machinery of other brains. Using the aboutness heuristic to cognize the brain in the first-person, therefore, is bound to generate intuitions of profound difference, as well as drive an apparently radical cognitive wedge between the first-person and third-person. What is obvious in the latter becomes obscure in the former, and vice versa.


The route from the aboutness heuristic, the implicit device we are compelled to use given the structural inaccessibility of neurofunctional information, to the philosophically explicit R-paradigm described above should be obvious, at least in outline. Using the aboutness heuristic to cognize the brain in the first-person–in metacognitive applications–will tend to make an ‘environment’ of conscious experience, transform it into a repertoire of discrete elements. Since these elements seem to automatically vanish like paint or pixels in the apparent process of presenting something else, and since the enabling machinery is nowhere to be found, the activity of the aboutness heuristic is mistaken for a property belonging to each element. They are dubbed ‘representations,’ discrete ‘vehicles’ that take the something-else-presented as their ‘content’ or ‘meaning.’


Since the informatic neglect of causality is also constitutive of this new, secondary aboutness relation between thing representing and thing represented, it must be conceived in granular, normative terms–which is to say, in terms belonging to still another heuristic adapted to the structural neglect of causal information. And this, of course, kicks the door open onto another domain of philosophical perplexity (and another longwinded bloghard).


But if we take the mechanistic paradigm of the life sciences as our cognitive baseline, which representational theories of mind purport to do, then it should be quite clear that there are no such things as representations (not even in the environmental sense of paintings and television screens). What we call ‘representations,’ what seems so obvious to basic intuition, is actually an artifact of that intuition, a ‘rule of thumb’ so profound that it seems to structure conscious experience itself, but really only provides an efficient shortcut for cognizing gross features of our environments absent any constitutive neurofunctional information.


We have no representations, not of dogs or foxes or anything else. Rather, we have nets bound into sensorimotor loops that endlessly trawl our environments for patterns of information, sometimes catching dogs, sometimes missing. Homomorphisms abound, yes. But speaking of homomorphic cogs within a mechanism is a far cry from speaking of representational mechanisms. The former, for one, is genuinely scientific!–at least to the extent it doesn’t require positing occult properties.


And perhaps this should come as no surprise. Science has been, if nothing else, the death-march of human conceit.


But I’m sure anyone with Hawkish sympathies is scowling, wondering exactly where I took a hard turn off the edge of the map. What could be more obvious than our intentional relation to the world? Not much–I agree. But then not so long ago one could say the same about the motionlessness of the Earth or the solidity of objects. As I mentioned, I have come nowhere near discharging the explanatory and argumentative burdens as likely perceived by proponents of representational theories of mind. But despite this, the following two questions, I think, are straightforward enough, obvious enough, to reflect some of that burden back onto the representationalist, and perhaps test some Hawkish backs:


1) What information does the R-paradigm neglect?


2) How does this impact its scope of applicability?


The difficulty these questions pose for representationalism, I would argue, is the difficulty a sustained consideration of informatic neglect and its myriad roles pose for consciousness research and cognitive science as a whole.




October 15, 2012

In the Shadow of Ishual

Aphorism of the Day: The inability to distinguish ‘political’ from ‘nice’ has saved more lives than penicillin and taken at least as many as speeding.


.


Madness has been kind enough to post a teaser from the beginning of The Unholy Consult on the Second Apocalypse Forum, for those who are interested. The book is inching toward completion, and barring any revisionary madness (no relation), looks like it will be even more of a behemoth than The White-Luck Warrior.


Also a reminder for those of you in the Toronto area, I’m scheduled to give a talk entitled, “Less Human than Human: The Cyborg Fantasy versus the Neuroscientific Real,” at the 2012 Toronto SpecFic Colloquium this October 28th. Bring family, friends, pets and quirky strangers - just be sure to leave your souls behind…


If you can’t make it, I’m also scheduled to give a talk and reading at Laurier University sometime mid-November. I’ll post the details when I get them, perhaps on my new, fancy-pants author website, where I hope to post sundry observations on the nature of children, chocolate, and spectacular sunsets. Every three pound brain needs a skull and hair…


Or at the very least, a zipper.


 




October 12, 2012

Spinoza’s Sin and Leibniz’s Mill

Aphorism of the Day: Every tyrannical system, to conserve itself as a system, will scapegoat even its king. So does drama masquerade as change.


.


So I’m reading and digging Paul Churchland’s most recent book, Plato’s Camera, while puzzling over David Chalmers’ latest at the same time, and I find myself thinking of Spinoza’s admonition against misconstruing the Condition in terms belonging to the Conditioned. In Part II of his Appendix Containing Metaphysical Thoughts he writes:


In this Chapter God’s existence is explained quite differently from the way in which men commonly understand it; for they confuse God’s existence with their own, so they imagine God as being somewhat like a Man and do not take note of the true idea of God which they have, or are completely ignorant of having it. As a result they can neither prove God a priori, i.e., from his true definition, or essence, nor prove it a posteriori, from the idea of him, insofar as it is in us. Nor can they conceive God’s existence. (The Collected Works of Spinoza, 315)


Given the analogical nature of human cognition, the reasons for this nearly universal error are quite clear: ‘men’ mined the information belonging to their own manifest image in their attempts to conceive God, simply because it was the most intuitive and readily available. Given this heuristic brush and informatic palette, they painted God in psychological terms, only possessing their features to the ‘nth degree.’ A personal God.


Spinoza catalogues and critiques the numerous expressions of this fundamental error in what follows, showing why the perplexities and contradictions that pertain to a personal God arise, and how these problems simply fall away if you subtract what is human from God. He was branded a heretic for his trouble, disowned by the Jewish community, and so reviled by Christians that some commentators believe that the following figure I want to consider intentionally expunged all traces of Spinoza’s influence from his own philosophy.


In philosophy of mind and consciousness research circles, Leibniz is typically mentioned with reference to his famous windmill example, which he uses to illustrate the now hoary conceptual gulf between doing and feeling. He writes:


One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception. (Monadology, §17)


In a sense, the problem of Leibniz’s Mill simply turns Spinoza’s Sin on its head. The Mill cannot be the Condition, Leibniz is arguing, because he cannot fathom how it could generate the Conditioned, manifest ‘perception.’ In a sense, it captures the Hard Problem in a nutshell: how could all this ramshackle machinery generate the exquisite smell of turkey dinner on a warm, autumn afternoon, or anything else that we experience for that matter?


What does this have to do with reading Churchland? Well, Churchland wants to argue that cognitive science is guilty of committing Spinoza’s Sin, that too many are too prone to construe the Condition, neural function, by analogy to the Conditioned, psychology and language. So, for instance, in The Cambridge Handbook of Cognitive Science, you find Barbara Von Eckardt explaining:


There is nothing even approximating a systematic semantics for even a fragment of [any mental representation system]. Nevertheless, there are ways to inductively infer to some global semantic features [any mental representation system], arguably, must have. One way is to extrapolate, via a form of ‘transcendental’ reasoning, from features of cognitive science’s explananda. (33)


In other words, Spinoza’s Sin is actually a Virtue: the explananda of cognitive science are nothing other than manifest features of cognition, what it is we generally think we’re doing (given what little we have to go on) whenever we cognize ourselves, others, and the world. So the idea, Von Eckardt is saying, is to reason from the Conditioned, our manifest informatic palette, to the Condition, whatever will be eventually described in a complete representational theory of mind. She thinks, quite sensibly, that our manifest experience and intuitions are what need to be explained.


Churchland argues otherwise–or well, almost. Not only does the ‘linguaformal’ approach look increasingly unlikely the more we learn about the brain, it renders the obvious cognitive continuity between humans and animals very, very difficult to understand. In Plato’s Camera he paints a picture of cognition where Kant’s simple frame of timeless transcendental categories is smashed into a myriad of nondiscursive, neural ‘maps’ understood according to the formation and weighting of synaptic connections among populations of neurons possessing various, mathematically tractable structural predispositions. “Simply replace,” he writes, “‘single complex predicate’ with ‘single prototype point in high-dimensional activation space,’ and you have the outlines of the view to be defended here” (23).
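

For what it’s worth, the view is easy to caricature in code (my sketch, not Churchland’s; every vector and dimension below is invented): replace the predicate ‘is a dog’ with a point in activation space, and categorization becomes proximity.

```python
import numpy as np

# A minimal caricature of the 'prototype point' picture: categories are
# points in a high-dimensional activation space, and cognizing an input
# is landing nearest one of them. The dimensionality, the prototypes,
# and the noise level are all invented for illustration.

rng = np.random.default_rng(0)
DIM = 64   # stand-in for the dimensionality of a neural activation space

prototypes = {name: rng.normal(size=DIM) for name in ("dog", "fox", "cat")}

def categorize(activation):
    """Assign an activation vector to its nearest prototype point."""
    return min(prototypes, key=lambda n: np.linalg.norm(activation - prototypes[n]))

# A noisy, 'dog-ish' activation pattern still lands in the dog region:
stimulus = prototypes["dog"] + 0.3 * rng.normal(size=DIM)
print(categorize(stimulus))   # dog
```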


Churchland, in other words, isn’t so interested in overthrowing the old order as he is in electing a new government. As radical as his account often seems, he still clings to certain boilerplate semantic assumptions, still sees the Mill representationally, which is to say, as a kind of content machine. Meaning, for him, remains something requiring a positive explanation. He argues that “deploying a background map of some practically relevant feature space, a map sporting some form of dynamical place marker, is a common and highly effective technique for monitoring, modulating, and regulating many practically relevant behaviours” (Plato’s Camera, 131). But even in the examples he provides, the homomorphisms he points out are all simply parts of larger dynamic systems, begging the question of why maps should be accorded pride of place in his account of cognition, rather than being relegated to one kind of heuristic tool among many.


Put differently, he ultimately succumbs to temptation and commits Spinoza’s Sin. Rather than, as BBT suggests, demoting ‘traditional epistemology’–treating it as a signature example of the way informatic neglect leads us to universalize heuristics, informatic processes that selectively ignore information to better solve specific problem sets–Churchland wants to dress it in more scientifically fashionable clothes.


Grasping the abject wickedness of Spinoza’s Sin requires an appreciation of the abyssal nature of the gulf between the Condition and the Conditioned when it comes to the question of human consciousness and cognition. One needs to understand, in other words, why the Mill has such difficulty fathoming itself as a Mill. Churchland, after all, is more than just a very, very intelligent man. He also possesses the imaginative capacity and institutional courage to make the analogical leap beyond linguaformalism–and yet, even still, he cannot relinquish certain intuitions regarding content…


Why?


Imagine a Mill designed to cognize environmental information, whirring and clicking in the dark. If you could peer through the gloom you would see loosely packed machinery, literally unimaginable in complexity, clattering away, wheel spinning wheel, cog rotating cog–swiss-watch complexities extending through impenetrable gloom.


Now imagine a flashlight, shining down across and penetrating into this machinery, illuminating an eclectic multitude of surfaces, the crest of a spinning wheel here, a length of strut there, the handle of a lever, a corner of casing, on and on, a cobweb of fragmentary glimpses, becoming more and more fractional and dim the deeper the light probes the machine’s bowels. Peering, all you can see are shreds of machinery, a kind of inexplicable constellation in the black.


Now imagine that what’s illuminated represents the information accessible to conscious experience. Not only is information pertaining to the vast bulk of the machine inaccessible, information regarding the actual mechanical role of those parts somewhat illuminated is also out of reach–so much so, that even information pertaining to the lack of this information is missing. This means you need to cut out all those fragmentary, functionally distributed glimpses, then paste them into a singular Collage, transform a mishmash of perspectival distortions into one ‘manifest’ image. The informatic cobweb fills the screen, you could say.


Not so different from what-you-are-experiencing-this-very-moment-here-now.


Feed this information back to the Mill (whose machinery, remember, is primarily designed to trouble-shoot environmental information). Utterly blind to the vast amounts of information neglected, it takes the Collage to be sufficient–all the information accessed becomes all the information required. Since information drives distinction, its absence leverages the cognitive illusion of sufficient wholes–as I have written elsewhere, consciousness can be seen as a kind of ‘flicker-fusion’ writ large. Short of neuroscience, it has no real recourse to information that hails from beyond the Collage in its attempts to cognize the Collage. It is informatically encapsulated.


The Collage, in other words, is the Conditioned, the well from which our cognitive systems draw water whenever tasked with troubleshooting the Condition. Given the reworked Mill analogy above, it’s easy to see the peril of Spinoza’s Sin: From the informatic vantage of the Collage, the neurofunctional axis can only be indirectly inferred, never directly intuited. This is why the functional findings of cognitive science so often strike those without any real exposure to the field as so counterintuitive. Not only are we ‘in the dark’ with reference to ourselves, we are, in a very real sense, congenitally and catastrophically misinformed.


Pending a mature neuroscientific understanding, we are, in effect, the hostage of our metacognitive intuitions, and for better or worse, representation looms large among them. Churchland grants unwarranted pride of place to the homomorphic components of our heuristic systems, endows them with bloated significance, simply because metacognitive intuition, and hence tradition, mistakenly accords representations a privileged role. Because, quite simply, it feels right. It ain’t called temptation for nothing!


The Blind Brain Theory, as I hope the above thumbnail makes clear, affords the resources required to throw off the analogical yoke of the Conditioned once and for all, to subtract the human, not from God, but from the human, thus showing that–beyond the scope of a certain parochial heuristic at least–we just never were what we took ourselves to be.


And perhaps more importantly, never will be.




October 8, 2012

Out-Danning Dennett

The idea is this. What you take yourself to be at this very moment is actually a kind of informatic illusion.


For me, the picture has come to seem obvious, but I understand that this is the case for everyone with a theory to peddle. So the best I can do is explain why it seems obvious to me.


One of the things I have continually failed to do is present my take, Blind Brain Theory (BBT), in terms that systematically relate it to other well-known philosophical positions. The reason for this, I’m quite certain, is laziness on my part. As a nonacademic, I never have to exposit what I read for the purposes of teaching, and so the literature tends to fall into the impressionistic background of my theorization. I actually think this is liberating, insofar as it has insulated me from many habitual ways of thinking through problems. I’m not quite sure I would have been able to connect the dots the way I have had I been chasing the institutional preoccupations of academe. But it has certainly made the task of communicating my views quite a bit harder than it perhaps should be.


So I’ve decided to bite the bullet and lay out the ways BBT overlaps and (I like to think!) outruns Daniel Dennett’s rather notorious and oft-misunderstood position on consciousness. For many, if not most, this will amount to using obscurity to clarify murk, but then you have to start somewhere.


First, we need to get one fact straight: consciousness possesses informatic boundaries. This is a fact Dennett ultimately accepts, no matter how his metaphors dance around it. Both of his theoretical figures, ‘multiple drafts’ and ‘fame in the brain,’ imply boundaries, a transition of processes from unconsciousness to consciousness. Some among a myriad of anonymous processes find neural celebrity, or as he puts it in “Escape from the Cartesian Theater,” “make the cut into the elite circle of conscious events.” Many subpersonal drafts become one. What Dennett wants to resist is the notion that this transition is localized, that it’s brought together for the benefit of some ‘neural observer’ in the brain–what he calls the ‘Cartesian Theatre.’ One of the reasons so many readers have trouble making sense of his view has to do, I think, with the way he fails to recognize the granularity of this critical metaphor, and so over-interprets its significance. In Consciousness Explained, for instance, he continually asserts there is no ‘finishing line’ in the brain, no point where consciousness comes together–“no turnstile,” as he puts it. Consciousness is not, he explicitly insists in his notorious piece (with Marcel Kinsbourne) “Time and the Observer” in Behavioral and Brain Sciences, a subsystem. And yet, at the same time, you’ll find him deferring to Baars’ Global Workspace theory of consciousness, even though it was inspired by Jerry Fodor’s notion of some ‘horizontal’ integrative mechanism in the brain, an account that Dennett has roundly criticized as ‘Cartesian’ elsewhere.


The evidence that consciousness is localized (even if widely distributed) through the brain is piling up, which is a happy fact, since according to BBT consciousness can only be explained in subsystematic terms. Consciousness possesses dynamic informatic boundaries, both globally and internally, all of which are characterized, from the standpoint of consciousness, by various kinds of neglect.


In cognitive psychology and neurology, ‘neglect’ refers to an inability to detect or attend to some kind of deficit. Hemi-neglect, which is regularly mentioned in consciousness discussions, refers to the lateralized losses of awareness commonly suffered by stroke victims, who will sometimes go so far as to deny ownership of their own limbs. Cognitive psychology also uses the term to refer to our blindness to various kinds of information in various problem-solving contexts. So ‘scope neglect,’ for instance, involves our curious inability to ‘value’ problems according to their size. My view is that the neglect revealed in various cognitive biases and neuropathologies actually structures ‘apparent consciousness’ as a whole. I think this particular theoretical cornerstone counts as one of Dennett’s ‘lost insights.’ Although he periodically raises the issue of neglect and anosognosia, his disavowal of ‘finishing lines’ makes it impossible for him to systematically pursue their relation to consciousness. He overgeneralizes his allergy to metaphors of boundary and place.


So, to give a quick example, where BBT views Frank Jackson’s Mary argument as a kind of ‘neglect detector,’ a thought experiment that reveals the scope of applicability of the ‘epistemic heuristic’ (EH), Dennett thinks it constitutes a genuine first-order challenge, a circle that must be squared. BBT is more interested in diagnosing than disputing the intuition that physical knowledge could be complete in the absence of any experience of red. Why does an obvious informatic addition to our environmental relationship (the experience of red) not strike us as an obvious epistemic addition? Well, because our ‘epistemic heuristic,’ even in its philosophically ‘refined’ forms, is still a heuristic, and as such, not universally applicable. Qualia simply lie outside the EH scope of applicability on my view.


I take Dennett’s infamous ‘verificationism’ as an example of a ‘near miss’ on his part. What he wants to show is that the cognitive relationship to qualia is informatically fixed–or ‘brainbound’–in a way that the cognitive relationship to environments is not: With redness, you have no informatic recourse the way you do with an apple–what you see is what you get, period. On my view, this is exactly what we should expect, given the evolutionary premium on environmental cognition: qualia are best understood as ‘phenomemes,’ subexistential combinatorial elements that enable environmental cognition similar to the way phonemes are subsemantic combinatorial elements that enable linguistic meaning (I’ll get to the strange metaphysical implications of this shortly). Granting that qualia are ‘cognition constitutive,’ we should expect severe informatic access constraints when attempting to cognize them. On the BBT account, asking what qualia ‘are’ is simply an informatic confusion on par with asking what the letter ‘p’ means. The primary difference is that we have a much better grasp of the limits of linguistic heuristics (LH) than we do of EH. EH, thanks to neglect, strikes us as universal, as possessing an unlimited scope of applicability. Thus the value of Mary-type thought experiments.


Lacking the theoretical resources of BBT, Dennett can only form a granular notion of this problem. In one of his most famous essays, “Quining Qualia,” he takes the ‘informatic access’ problem, and argues that ‘qualia’ are conceptually incoherent because we lack the informatic resources to distinguish changes in them (it could be our memory that has been transformed), and empirically irrelevant because those changes would seem to make no difference one way or another. Where he uses the ‘informatic access problem’ as an argumentative tool to make the concept of qualia ‘look bad,’ I take the informatic access problem to be an investigative clue. What Dennett shows via his ‘intuition pumps,’ I think, are simply the limits of applicability of EH.


But this difference does broach the most substantial area of overlap between my position and Dennett’s. In a sense, what I’m calling EH could be characterized as an ‘epistemological stance,’ akin to the variety of stances proposed by Dennett.


BBT takes two interrelated angles on ‘brain blindness’ or neglect. The one has to do with how the appearance of consciousness–what we think we are enjoying this very moment–is conditioned by informatic constraints or ‘blindnesses.’ The other has to do with the plural, heuristic nature of human cognition, how our various problem-solving capacities are matched to various problems (the way cognition is ‘ecological’), and how they leverage efficiencies via strategic forms of informatic neglect. What I’m calling EH, for instance, seems to be both informatically sufficient and universally applicable, thanks to neglect–the same neglect that rendered it invisible altogether to our ancestors. In fact, however, it elides enormous amounts of relevant information, including the brain functions that make it possible. So, remaining faithful to the intuitions provided by EH, we conceive knowledge in terms of relations between knowers and things known, and philosophy sets to work trying to find ways to fit ever greater accumulations of scientific information into this ‘intuitive picture’–to no avail. How do mere causal relations conspire to create epistemological relations, which is to say, normative ‘about’ relations? On my view, these relations are signature examples of informatic neglect: ‘aboutness’ is a shortcut, a way to relate devices in the absence of any causal information. ‘Normativity’ is also a shortcut, a way to model mechanism in the absence of any mechanistic information. (Likewise, ‘object’ is a shortcut, and even ‘existence’ is a shortcut–coarse-grained tools that get certain work done). Is it simply a coincidence that syntax can be construed as mechanism bled of everything save the barest information? Even worse, BBT suggests it could be the case that both aboutness and normativity are little more than reflective artifacts, merely deliberative cartoons of what we think we are doing given our meagre second-order informatic access to our brain’s activity.
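

For the programmers out there, a toy software analogy, mine and only an analogy, with every name hypothetical: ‘aboutness’ behaves something like an interface, letting one system relate to another while all the causal machinery stays hidden behind the method signature.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """A hypothetical 'intentional' interface: callers relate to the
    system via what it is 'about', never via its causal innards."""
    @abstractmethod
    def predict_next_move(self) -> str: ...

class Brain(Agent):
    def predict_next_move(self) -> str:
        # Stands in for billions of 'medial' neurofunctional details,
        # all neglected behind the interface boundary.
        return "reaches for the coffee"

def intentional_stance(other: Agent) -> str:
    # The caller tracks the agent with zero mechanistic information:
    # neglect as a computational shortcut.
    return f"she wants coffee, so she {other.predict_next_move()}"

print(intentional_stance(Brain()))
```

The shortcut works precisely because the caller never needs the implementation, which is also why it breaks down the moment the implementation becomes the very thing you are trying to cognize.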


In one of his most lucid positional essays, “Real Patterns,” Dennett argues for the ‘realism’ of his stance approach vis-a-vis thinkers like Churchland, Davidson, and Rorty. In particular, he wants to explain how his ‘intentional stance’ and the corresponding denial of ‘original intentionality’ does not reduce intentionality to the status of a ‘useful fiction.’ Referencing Churchland’s observations regarding the astronomical amount of compression involved in the linguistic coding of neural states (in “Eliminative Materialism and the Propositional Attitudes”), he makes the point that I’ve made here very many times: the informatic asymmetry between what the brain is doing and what we think we’re doing is nothing short of abyssal. When we attribute desires and beliefs and goals and so on to another brain, our cognitive heuristics are, Dennett wants to insist, trading in very real patterns, only compressed to a drastic degree. It’s the reality of those patterns that renders the ‘intentional stance’ so useful. It’s the degree of compression that renders them incompatible with the patterns belonging to the ‘physical stance’–and thus, scientifically intractable.


The only real problem BBT has with this analysis is its granularity, a lack of resolution that leads Dennett to draw several erroneous conclusions. The problem, in a nutshell, is that far more than ‘compression’ is going on, as Dennett subsequently admits when discussing his differences with Davidson (the fact that two interpretative schemes can capture the same real pattern, and yet be incompatible with each other). Intentional idioms are heuristics in the full sense of the term: their effectiveness turns on informatic neglect as much as the algorithmic compression of informatic redundancies. To this extent, the famous ‘pixelated elephant’ Dennett provides to illustrate his argument is actually quite deceiving. The idea is to show the way two different schemes of dots can capture the same pattern–an elephant. What makes this example so deceptive is the simplistic account of informatic access it presupposes. It lends itself to the impression that ‘informatic depletion’ alone characterizes the relation between intentional idioms and the ‘real patterns’ they supposedly track. It entirely ignores the structural specifics of the informatic access at issue (the variety of bottlenecks posited by BBT), the fact that our Intentional Heuristic (IH), very much like EH, elides whole classes of information, such as the bottom-up causal provenance belonging to the patterns tracked. IH, in other words, suffers from informatic distortion and truncation as much as depletion.


His illustration would have been far more accurate if one of the pixelated figures showed only the elephant’s trunk. When our attentional systems turn to our ‘intentional intuitions’ (when we reflect on intentionality), deliberative cognition only has access to the stored trace of globally broadcast (or integrated) information. Information regarding the neurofunctional context of that information is nowhere to be found. So in a profound sense, IH can only access/track acausal fragments of Dennett’s ‘real patterns.’ Because these fragments are systematically linked to what it is our brains are actually doing, IH will seem to be every bit as effective as our brains at predicting, manipulating, and understanding the behavioural outputs of other brains. Because of neglect (the absence of information flagging the insufficiency of available information), IH will seem complete, unbounded, which is likely why our ancestors used it to theorize the whole of creation. IH constitutively confuses the trunk for the whole elephant.
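

The difference between depletion and truncation is easy to mock up. A minimal sketch, mine rather than Dennett’s, using a random array as a stand-in for the ‘real pattern’: depletion keeps every region at lower resolution, while truncation discards whole regions and leaves nothing behind to flag the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
elephant = rng.random((64, 64)) > 0.5   # stand-in for the 'real pattern'

def deplete(img, factor=8):
    """Compression-style loss: the whole pattern, coarsely sampled,
    like the pixelated elephant."""
    h, w = img.shape
    blocks = img.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)) > 0.5

def truncate(img):
    """Neglect-style loss: a high-resolution fragment with no marker
    of everything missing, the trunk without the elephant."""
    return img[:16, :16]

print(deplete(elephant).shape)   # (8, 8): all of it, thinly sampled
print(truncate(elephant).shape)  # (16, 16): a fragment, and no hint of the rest
```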


In other words, Dennett fails to grasp several crucial specifics of his own account. This oversight (and to be clear, there are always oversights, always important details overlooked, even in my own theoretical comic strips) marks a clear parting of the ways between his position and my own. It’s the way developmental and structural constraints consistently distort and truncate the information available to IH that explains the consistent pattern of conceptual incompatibilities between the causal and intentional domains. And as I discuss below, it’s a primary reason why I, unlike Dennett, remain unwilling to take theoretical refuge in pragmatism. No matter what the ‘reality’ of intentionality, BBT shows that the informatic asymmetry between it and the ‘real patterns’ it tracks is severe enough to warrant suspending commitment to any theoretical extrapolation, even one as pseudo-deflationary as pragmatism, based upon it.


This oversight is also a big reason why I so often get that narcissistic ‘near miss’ feeling whenever I read Dennett–why he seems trapped using metaphors that can only capture the surface features of BBT. Consider the ‘skyhook’ and ‘crane’ concepts that he introduces in Darwin’s Dangerous Idea to explain the difference between free-floating, top-down religious and naturally grounded, bottom-up evolutionary approaches to explanation. On my reading, he might as well have used ‘trunk’ and ‘elephant’!


Moreover, because he overlooks the role played by neglect, he has no real way of explaining our conscious experience of cognition, the rather peculiar fact that we are utterly blind to the way our brains swap between heuristic cognitive modes. Instead, Dennett relies on the pragmatics of ‘perspective talk’–the commonsense way in which we say things like ‘in my view,’ ‘from his perspective,’ ‘from the standpoint of,’ and so on–to anchor our intuitions regarding the various ‘stances’ he discusses. Thus all the vague and (perhaps borderline) question-begging talk of ‘stances.’


BBT replaces this idiom with that of heuristics, thus avoiding the pitfalls of intentionality while availing itself of what we are learning about the practical advantages of specialized (which is to say, problem specific) cognitive systems, how ignoring information not only generates metabolic efficiencies, but computational ones as well. The reason for our ‘peculiar blindness’–the reason Dennett has had to go to such great lengths to make ‘Cartesian intuitions’ visible–is actually internal to the very notion of heuristics, which, in a curious sense, use blindness to leverage what they can see. From the BBT standpoint, Dennett consistently fails to recognize the role informatic neglect plays in all these phenomena. He understands the fractured, heuristic nature of cognition. He is acutely aware of the informatic limitations pertaining to thought on a variety of issues. But the pervasive, positive, structural role these limitations play in the appearance of consciousness largely eludes him. As a result, he can only argue that our traditional intuitions of consciousness are faulty. Because he has no principled means of explaining away ‘error consciousness,’ all he can do is plague it with problems and offer his own, alternative account. He finds himself arguing against intuitions he can only blame and never quite explain. BBT changes all of that. Given its resources, it can pinpoint the epistemic or intentional heuristics, enumerate all the information missing, then simply ask, ‘How should we determine the appropriate scope of applicability?’


The answer, simply enough, is ‘Where EH works!’ Or alternately, ‘Where IH works!’ BBT allows us, in other words, to view our philosophical perplexities as investigative clues, as signs of where we have run afoul of informatic availability and/or cognitive applicability–where our ‘algorithms’ begin balking at the patterns provided. On my view, the myriad forms of neglect that characterize human cognition (and consciousness) can be glimpsed in the shadows they have cast across the whole history of philosophy.


But care must be taken to distinguish the pragmatism suggested by ‘where x works’ above from the philosophical pragmatism Dennett advocates. As I mentioned above, he accepts that intentional idiom is coarse-grained, but given its effectiveness, and given the mandatory nature of the manifest image, he thinks it’s in our ‘interests’ to simply redefine our folk-psychological understanding, using science to lard in the missing information. So with regard to the will, he recommends (in Freedom Evolves) that we trade our incoherent traditional understanding in for a revised, scientifically informed understanding of free will as ‘behavioural versatility.’ Since, for Dennett, this is all ‘free will’ has ever been, redefinition along these lines is eminently reasonable. I remember once quipping in a graduate seminar that what Dennett was saying amounted to telling you, at your Grandma Mildred’s funeral, “Don’t worry. Just rename your dog ‘Mildred.’” After the laughter faded, one of the other students, I forget who, was quick to reply, “That only sounds bad if your dog wasn’t your Grandma Mildred all along.”


I’ve since come to think this exchange does a good job of illustrating the stakes of this particular turn of the debate.


You can raise the most obvious complaint against Dennett: that the inferential dimension of his redefinition makes usage of the concept ‘freedom’ tendentious. We would be doing nothing more than gaming all the ambiguities we can to interpret scientific ‘crane information’ into our preexisting folk-psychological conceptual scaffold–wilfully apologizing, assuming these scientific ‘cranes’ can be jammed into a ‘skyhook’ inferential infrastructure. Dennett himself admits that, given the information available to experience, ‘behavioural versatility’ is not what free will seems to be. Or put differently, that the feeling of willing is an illusion.


The ‘feeling of willing,’ according to BBT, turns on a structural artifact of informatic neglect. We are skyhooks–from the informatic perspective of ourselves. The manifest image is magical. Intentionality is magical. On my view, the ‘scientific explanations’ are far more likely to resemble ‘explanations away’ than ‘explanations of.’ The question really is one of how other folk-psychological staples will fare as cognitive neuroscience proceeds. Will they be more radically incompatible or less? Imagine experience and the skein of intuitive judgments that seem to bind it as a kind of lateral plane passing through an orthogonal, or ‘medial,’ neurofunctional space. Before science and philosophy, that lateral plane was continuous and flat, or maximally intuitive. It was just the way things were. With the historical accumulation of information through the raising of philosophical questions (which provide information regarding the insufficiency of the information available to conscious experience), the intuitive topography of the plane became progressively more and more dimpled and knotted. With the institutionalization of science, the first real rips appear. And now, as more information regarding various neurofunctions becomes available, the skewing and shredding are becoming more and more severe. The question is, what will the final ‘plane of experiential intuition’ look like? How will our native intuitions fare?


How deceptive is consciousness?


Dennett’s answer: Enough to warrant considerable skepticism, but not enough to warrant abandoning existing folk-psychological concepts. The glass, in other words, is half full. My answer: Enough to warrant wondering if anyone has ever had a clue at all. The glass lies in pieces across the floor. The trend, at least, is foreboding. According to BBT, the informatic neglect that renders the ‘feeling of willing’ possible is a structural feature belonging to all intentional concepts. Given this, it predicts that very many folk-psychological concepts will suffer the fate the ‘feeling of willing’ seems to be undergoing as I write. From the standpoint of knowledge, experience is about to be cast into the neurofunctional wind.


Grandma Mildred isn’t your dog. She’s a ghost.


Either way, this is why I think pragmatic or inferentialist accounts are every bit as hopeless as traditional approaches. You can say, ‘There’s nothing but patterns, so let’s run with them!’ and I’ll say, ‘Where? To the playground? Back to Hegel?’ When knowledge and experience break in two, the philosopher, to be a philosopher, must break with them. The world never wants for apologists.


BBT allows us to frame the problem with a clarity that evaded Dennett. If our difficulties turn on the limited applicability of our heuristics, the question really should be one of finding the heuristic that possesses the most applicability. In my view, that heuristic is the one that allows us to comprehend heuristics in the first place: nonsemantic information. The problem with pragmatism as a heuristic lies in the way it actively, as opposed to structurally (which it also does), utilizes informatic neglect. Anything can be taken as anything, if you game the ambiguities right. You could say it makes a virtue out of stupidity.


In place of philosophical pragmatism, my view recommends a kind of philosophical akratism, a recognition of the heuristic structure of human cognition, an understanding of the structural role of informatic neglect, and a realization that conscious experience and cognition are drastically, perhaps catastrophically, distorted as a result.


Deliberative human cognition has only the information globally broadcast (or integrated) at its disposal. Likewise, the information globally broadcast has only human cognition to work with. The first means that human cognition has no access whatsoever to vast amounts of constitutive processing–which is to say, no access to neurofunctional contexts. The second means that we likely cognize conscious experience as experience via heuristics matched to our natural and social environments, as something quite other than whatever it is.


Small wonder consciousness has proven to be such a knot!


And this, for me, is where the fireworks lie: critics of Dennett often complain about the difficulty of getting a coherent sense of what his theory of consciousness is, as opposed to what it is not. For better or worse, BBT paints a very distinct–if almost preposterously radical–picture of consciousness.


So what does that picture look like?


It purports, for instance, to explain how the apparent reflexivity of consciousness can arise from the irreflexivity of natural processes. For me, this constitutes the most troubling, and at the same time, most breathtaking, theoretical dividend of BBT: the parsimonious way it explains away conscious reflexivity. Dennett (working with Marcel Kinsbourne) sails across the insight’s wake in “Time and the Observer,” where he argues, among other things, for the thoroughgoing dissociation of the experience of time from the time of experience, how the time constraints imposed by the actual physical distribution of consciousness in the brain mean that we should expect our conscious experience of time to ‘break down’ in psychophysical experimental contexts at or below certain thresholds of temporal resolution.


The centerpiece of his argument is the deeply puzzling experimental variant of the well-known ‘phi phenomenon,’ how two closely separated spots projected in rapid sequence on a screen will seem to be a single spot moving from location to location. When experimenters use a different colour for each of the spots, not only do subjects report seeing the spot move, they claim to see it change colour, and here’s the thing, midway. What makes this so strange is the fact that they perceive the colour change before the second spot appears–before ‘seeing’ what the second colour is. Ruling out precognition, Dennett proposes two mechanisms to account for the illusion: either the subjects consciously see the spots as they are, only to have the memory almost instantaneously revised for consistency, what he calls the ‘Orwellian’ explanation, or the subjects consciously see the product of some preconscious imposition of consistency, what he calls the ‘Stalinesque’ explanation. Given his quixotic allergy to neural boundaries, he argues that our inability to answer this question means there is no definite where and when of consciousness in the brain, at least at these levels of resolution.
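

The indistinguishability is easy to see if you mock the two mechanisms up. A minimal sketch, mine, with entirely made-up timings: both pipelines insert the same midway revision, one before ‘experience’ and one after, so the subject’s report cannot arbitrate between them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    t_ms: float
    content: str

# The colour-phi stimuli: a red spot, then a green spot 100 ms later.
stimuli = [Event(0.0, "red spot, left"), Event(100.0, "green spot, right")]
revision = Event(50.0, "spot turns green midway")  # the illusory content

def stalinesque(events):
    """Preconscious editing: the revision is spliced in before
    anything reaches 'consciousness'."""
    return sorted(events + [revision], key=lambda e: e.t_ms)

def orwellian(events):
    """Post-experiential editing: the veridical sequence is 'seen',
    then memory is rewritten before any report can be made."""
    experienced = list(events)  # seen as presented...
    return sorted(experienced + [revision], key=lambda e: e.t_ms)  # ...recalled revised

# From the standpoint of the report, the mechanisms are identical:
assert stalinesque(stimuli) == orwellian(stimuli)
print([e.content for e in stalinesque(stimuli)])
```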


Dennett’s insight here is absolutely pivotal: the brain ‘constructs,’ as opposed to perceives or measures, the passage of time, given the resources it has available. The time of temporal representation is not the time represented. But he misconstrues the insight, seeing in it a means to cement his critique of the Cartesian Theatre. The question of whether this process is Orwellian or Stalinist, whether neural history is rewritten or staged, simply underscores the informatic constraints on our experience of time, our utter blindness to neurofunctional context of the experience–which is to say, our utter blindness to the time of conscious experience. Dennett, in other words, is himself making a boundary argument, only this time from the inside out: the inability to arbitrate between the Orwellian and Stalinist scenarios clearly demarcates the information horizon of temporal experience.


And this is where the theoretical resources of BBT come into play. Wherever it encounters apparent informatic constraints, it asks how they find themselves expressed in experience. Saying that temporal experience possesses informatic boundaries is platitudinal. All modalities of experience are finite: we can only see, hear, taste, think, and time so much in a given moment. Saying that the informatic boundaries of experience are themselves expressed in experience is somewhat more tricky, but you need only attend to your own visual margins to see a dramatic example of such an expression.


You could say vision is an exceptional example, given the volume of information it provides in comparison to other experiential modalities. Nevertheless, one could argue that such boundaries must find some kind of experiential expression, even if, as in the cases of clinical neglect, it evades deliberative cognition. BBT proposes that neglect is complete in many, if not most cases, and information regarding informatic boundaries is only indirectly available, typically via contexts (such as psychological experimentation) that foreground discrepancies between brute environmental availability and actual access. The phi phenomenon provides a vivid demonstration of this–as does, for that matter, psychophysical phenomena such as flicker-fusion. For some mysterious reason (perhaps the mysterious reason), what cannot be discriminated, such as the flashing of lights below a certain temporal threshold, is consciously experienced as unitary. It seems a fact of experience almost too trivial to note, but perhaps immensely important: Why, in the absence of information, is identity the default?
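

Flicker-fusion itself lends itself to a toy model. A minimal sketch, mine, with made-up rather than psychophysical numbers: integrate an on/off light over a fixed temporal window, and once the flicker outruns the window, the output flattens into a single steady ‘percept.’

```python
import numpy as np

def perceived_variation(flicker_hz, window_s=0.02, duration_s=0.5, dt=1e-4):
    """Integrate an on/off light over a fixed temporal window.
    Near-zero variation means 'fusion': the flicker is experienced
    as one steady, unitary light."""
    t = np.arange(0.0, duration_s, dt)
    light = (np.sin(2 * np.pi * flicker_hz * t) > 0).astype(float)
    window = max(1, int(window_s / dt))
    percept = np.convolve(light, np.ones(window) / window, mode="same")
    return percept[window:-window].std()  # trim partially-filled edges

for hz in (5, 20, 60, 120):
    print(f"{hz:>4} Hz flicker -> perceived variation {perceived_variation(hz):.3f}")
```

Below the window’s resolving power the output tracks the flicker; above it, the difference between ‘on’ and ‘off’ simply never arrives, and identity is what remains.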


If you think about it, a good number of the problems of consciousness can be formulated in terms of identity and information. BBT takes precisely this explanatory angle, interpreting things like the unity of consciousness, personal identity, and nowness or subjective time as products of various species of neglect–literally as kinds of ‘fusions.’


The issue of time as it is consciously experienced contains a cognitive impasse at least as old as Aristotle: the problem of the now. The problem, as Aristotle conceived it, lay in what might be called the persistence of identity in difference that seems to characterize the now, how the now somehow remains the same across the succession of now moments. As we have seen, whenever BBT encounters an apparent cognitive impasse, it asks what role informatic constraints play. The constraints, as identified by Dennett and Kinsbourne in their analyses in “Time and the Observer,” turn on the dissociation of the time of representation from the time represented. In a very profound sense, our conscious experience of time is utterly blind to the time of conscious experience, which is to say, information pertaining to the timing of conscious timing.


So what does this, the conscious neglect of the time of conscious timing, mean? The same thing all instances of informatic neglect mean: fusion. The fusing of flickering lights when their frequency exceeds a certain informatic threshold seems innocuous likely because the phenomenon is so isolated within experience. The kind of temporal fusion at issue here, however, is coextensive with experience: as many commentators have noted, the so-called ‘window of presence’ is just experience in a profound sense. The now always seems to be the same now because the information regarding the time of conscious timing, the information required to globally distinguish moment from moment, is simply not available. In a very profound sense, ‘flicker fusion’ is a local, experientially isolated version of what we are.


Thus BBT offers a resolution of the now paradox and an explanation of personal identity in a single conceptual stroke, as it were. It provides, in other words, a way of explaining how natural and irreflexive processes give rise to the apparent reflexivity that so distinguishes consciousness. And by doing so it drastically reduces the explanatory burden of consciousness, leaving only ‘default identity’ or ‘fusion’ as the mystery to be explained. Given this, it provides a principled means of ‘explaining away’ consciousness as we seem to experience it. Using informatic neglect as our conceptual spade, one need only excavate the kinds of information the conscious brain cannot access from our scientific understanding of the brain to unearth something that resembles–to a remarkable degree–the first-person perspective. Consciousness, as we (think we) experience it, is fundamentally structured by various patterns of informatic neglect.


And it does so using an austere set of concepts and relatively uncontroversial assumptions. Conscious episodes are informatically encapsulated. Deliberative cognition is plural and heuristic (though neglect means it appears otherwise). Combining the informatic neglect pertaining to the first–which Dennett has mistakenly eschewed–with the problems of ‘matching’ pertaining to the second, produces what I think could very well be the single most parsimonious and comprehensive theory of ‘consciousness’ in the field.


But I anticipate it will be a hard sell, with the philosophy of mind crowd most of all. Among the many invisible heuristics that enable and plague us are those primed to dismiss outgroup deviations from ingroup norms–and I am, sadly, merely a tourist in these conceptual climes. Then there’s the brute fact of Hebb’s Law: the intuitions underwriting BBT demand more than a little neural plasticity, especially given the degree to which they defect from any number of implicit and canonically explicit assumptions. I’m asking huge populations of old neurons to fire in unprecedented ways–never a good thing, especially when you happen to be an outgroup amateur!


And then there’s the problem of informatic neglect itself, especially with reference to what I earlier called the epistemic heuristic. I often find myself flabbergasted by how far out of step I’ve fallen with consensus opinion since the key insight behind BBT nixed my dissertation over a decade ago. Even the notion of content has come to seem alien to me! A preposterous artifact of philosophers blindly applying EH beyond its scope of application. On the BBT account, the most effective way to understand meaning is as an artifact of structured informatic neglect. In a real sense, it holds there is no such thing as meaning, so the wide-ranging debates on content and representation that form the assumptive baseline for so many debates you find in the philosophy of mind are little more than chimerical from its standpoint. Put simply, ‘truth’ and ‘reference’ (even ‘existence’!) are best understood as kinds of heuristics, cognitive adaptations that maximize effectiveness via forms of informatic neglect, and so possess limited scopes of applicability.


Even the classical metaphysical questions regarding materialism are best considered heuristic chimera on my view. Information, nonsemantically construed, allows the theorist to do an end run around all these dilemmas, as well as all the dichotomies and dualisms that fall out of them.


We are informatic subsystems attempting to extend our explanatory ‘algorithms’ as far into subordinate, parallel, and superordinate systems as we can, either by accumulating more information or by varying our algorithmic (cognitive) relation to the information already possessed. Whatever problem our system takes on, resolution depends upon this relation between information accumulation and algorithmic versatility. So as we saw with ‘qualia,’ our system is stranded: we cannot penetrate and interact with red the way we can with apples, and so the prospects of information accumulation are dim. Likewise, our algorithms are heuristic, possessing a neglect structure appropriate to environmental problem-solving (given various developmental and structural constraints), which is to say, a scope of applicability that simply does not (as one might expect) include qualia.


The ‘problem of consciousness,’ on the BBT account, is simply an artifact of literally being what science takes us to be: an informatic subsystem. What has been bewildering us all along is our blindness to our blindness, our inability to explicitly consider the prevalent and decisive role that informatic neglect plays in our understanding of human cognition. The problem of consciousness, in other words, is nothing less than a decisive demonstration of the heuristic nature of semantic/epistemic cognition–a fact that really, in the end, should come as no surprise. Why, when human and animal cognition is so obviously heuristic in so many ways, would we assume that a patron as stingy as evolution would flatter us with a universal problem-solving device, if not for simple blindness to the limitations of our brains?


The scientific problem of consciousness remains, of course. Default identity remains to be explained. But given BBT, the philosophical conundrums have for the most part been explained away…


As have we.




September 27, 2012

Thinker as Tinker

[Okay, so this is just an organic extension of thinking through a variety of problems via a thought experiment posted by Eric Thomson over at the Brains blog. The dialogue takes place between an alien, Al, who has come to earth bearing news of Consciousness (or the lack of it), and a materialist philosopher, Mat, who, although playing the obligatory, Socratic role of the passive dupe, is having quite some difficulty swallowing what Al has to say. It's rough, but I do like the picture it paints, if only because it really does seem to offer a truly radical way to rethink consciousness, why we find it so difficult, as well as the very nature of philosophical thought. I haven't come up with a name for Al's position yet, so if anyone thinks of something striking (or satirical) do let me know!]


Al: “Yes, yes, we went through this ‘conscious experience’ phase, ourselves. Nasty business. Brutish! You see, you’re still tangled in the distinction between system-intrinsic base information and the system-extrinsic composite information it makes possible. Since your primary cognitive systems have evolved to troubleshoot the latter, you lack both the information and the capacity to cognize the former. It’s yet another garden variety example of informatic parochialism combined with a classic heuristic mismatch. Had you not evolved linguistic communication, your cognitive systems would never need to bump against these constraints, but alas, availability for linguistic coding means availability for cognitive troubleshooting, so you found yourself stranded with an ocean of information you could never quite explain–what you call ‘consciousness’ or ‘subjective experience.’”


Mat: “So you don’t have conscious experience?”


Al: “Good Heavens, no, my dear fellow!”


Mat: “So you don’t see that red apple, there?”


Al: “Of course I see it, but I have no conscious experience of it whatsoever.”


Mat: “But that’s impossible!”


Al: “Of course it is, for a backward brain such as your own. It’s quite quaint, actually, all this talk of things ‘out there’ and things ‘in here.’ It’s all so deliciously low res. But you’ll begin tinkering with the machinery soon enough. The heuristics that underwrite your environmental cognition are robust, there’s no doubt about that, but they are far too crude and task-specific for you to conceive your so-called ‘conscious experience’ for what it is. Someday soon you’ll see that asking what redness is makes no more sense than asking what the letter m means!”


Mat: “But redness has to be something!”


Al: “To be taken up as a troubleshooting target of your environmental cognitive systems, yes, indeed. That, my human friend, is precisely the problem. The heuristic you confuse for redness was originally designed to be utterly neglected. But as I said, rendering it available for linguistic coding made it available to your cognitive systems as well, and we find this is where the trouble typically begins. It certainly was the case with our species!”


Mat: “But it exists here and now for me! I’m bloody-well looking at it!”


Al: “I know this is difficult. Our species never resolved these problems until our philosophers began diagnosing these issues the way neurologists diagnose their patients, when they abandoned all their granular semantic commitments, all the tedious conceptual arguments, and began asking the simple question of what information was missing and why. Looking back, it all seems quite extraordinary. How many times do you need to be baffled before realizing that something is wrong with you? Leave it to philosophers to blame the symptom!


“You are still at the point where you primarily conceive of your brain as a semantic (as opposed to informatic) engine, as something that extracts ‘relevant’ information from its noisy environments, which it then processes into models of the universe, causally constructed ‘beliefs’ or ‘representations’ that take the ‘real’ as their ‘content.’ So the question of red becomes the question of servicing this cognitive mode and model, but it stubbornly refuses to cooperate with either, despite their independent intuitive ease. You have yet to appreciate the way the brain extracts and neglects information, the way, at every turn, it trades in heuristics, specialized information adapted for uptake via specialized processors adapted for specific cognitive tasks. Semantic cognition, despite the religious pretension of your logicians, is a cognitive short-cut, no different than social cognition. Rather than information as such, it deals with environmental being, with questions of what is what and what causes what, much as linguistic cognition deals with communicative meaning, with questions of what means what and what implies what.


“Now as I said, red no more possesses being than ‘m’ possesses meaning. Soon you will come to see that what you call ‘qualia’ are better categorized as ‘phenomemes,’ the combinatorial repertoire that your environmental cognitive systems use to make determinations of being. They are ‘subexistential’ the way phonemes are ‘subsemantic.’ They seem to slip into cognitive vapour at every turn, threatening what you think are the hard won metaphysical gains of another semantic myth of yours, materialism. You find yourself confronted with a strange dilemma: either you make a fetish of their constitutive, combinatorial function and make them everything, or you stress their existential intractability and say they are something radically different. But you are thinking like a philosopher when you need to think like a neuropsychiatrist.


“The question, ‘What am I bloody well looking at?’ exhausts the limits of semantic cognition for you. Within those limits, the question makes as much sense as any question could. But it is the product of a heuristic system, cognitive mechanisms whose (circumstance specific) effectiveness turn on the systematic neglect of information. So long as you take semantic cognition at its word, so long as you allow it to dictate the terms of your thinking, you will persist in confusing the informatic phenomena of smonsciousness with the semantic illusion of consciousness.”


Mat: “But semantic cognition is not heuristic!”


Al: “That’s what all heuristics say–they tend to take their neglect quite seriously, as do you, my human friend! But the matter is easily settled: tell me, in this so-called ‘conscious experience’ of yours, can you access any information regarding its neural provenance?”


Mat: “No.”


Al: “Let me guess: You just ‘see things,’ transparently as it were. Like that red apple.”


Mat: “Yes.”


Al: “Sounds like your cognitive systems are exceedingly selective to me!”


Mat: “They have to be. It would computationall–”


Al: “Intractable! I know! And evolution is a cheap, cheap date. So then, coarse-grain heuristics are quite inevitable, at least for evolved information systems such as ourselves.”


Mat: “Okay. So?”


Al: “So, heuristics are problem specific, are they not? Tell me, what should we expect from misapplications of our heuristic systems, hmm? What kind of symptoms?”


Mat: “Confusion, I suppose. Protracted controversy.”


Al: “Yes! So you recognize the bare possibility that I’m right?”


Mat: “I suppose.”


Al: “And given the miasma that characterizes the field otherwise, does this not place a premium on alternative possibilities?”


Mat: “But it’s just too much! You’re saying you’re not a subject!”


Al: “Precisely. No different than you.”


Mat: “That you experience, but you don’t have experience!”


Al: “Indeed! Indeed!”


Mat: “You don’t think you sound crazy?”


Al: “So the mad are prone to call their doctors. Look. I understand how this must all sound. If qualia don’t exist because they are ‘subexistential,’ how can they contribute to existence?


“Think of it this way. At a given moment, t1, qualia contribute, and you find yourself (quite in spite of your intellectual scruples) a naive realist, seeing things in the world. You see ‘through’ your experience. The red belongs to the apple, not you, and certainly not your brain! Subsequently, at t2, you focus your attention on the redness of the red, and suddenly you are looking ‘at’ your experience instead of through it. (In a sense, instead of speaking words, you find yourself spelling them).


“The thing to remember is that this intentional ‘directing at’ that seems so obvious when attending to your attending is itself another heuristic–at best. You might even say it’s the ‘Master Heuristic.’ Nevertheless, it could, for all you know, be an abject distortion, a kind of water-stain Mary Magdalene imposed by deliberative cognition on cognition. Either way, by deliberating the existence of red, you just dropped a rock into the chipper, old boy. ‘But what is red!’ you say. ‘It has to be something!’ You say this because you have to, given that deliberative cognition possesses no information regarding its own limits. As far as it’s concerned, all phenomenal rocks are made of natural wood.


“So this is the dilemma my story poses for you. Semantic cognition assumes universality, so the notion that something that it says exists–surely, at the bare minimum!–does not exist sounds nonsensical. So when I say to you, information is all that matters, your Master Heuristic, utterly blind to the limits of its applicability, whirs and clicks and you say, “But surely that information must exist! Surely what makes that information informative is whether or not it is true!” And so on and so forth. And it all seems as obvious as can be (so long as you don’t ask too many questions).


“Information is systematicity. You need to see yourself philosophically the way your sciences are beginning to see you empirically: as a subsystem. You rise from your environments and pass back into them, not simply with birth and death, but with every instant of your life. There is no ‘inside,’ no ‘outside,’ just availability and applicability. Information blows through, and you are little more than a spangled waystation, a kind of informatic well, filled with coarse-grained intricacies, information severed and bled and bent to the point where you natively confuse yourselves with something other than the environments that made you, something above and apart.


“Information is the solvent that allows cognition to move beyond its low-resolution fixations. It’s not a matter of what’s ‘true’ in the old semantic sense, but rather ‘true’ in the heuristic sense, where the term is employed as a cog in the most effective cognitive machine possible. The same goes for ‘existence’ or for ‘meaning.’ These are devices. So we make our claims, use these tools according to design as much as possible, and dispose of them when they cease being effective. We help them remember their limits, chastise them when they overreach. We resign ourselves to ignorance regarding innumerable things for want of information. But we remember that the cosmos is a bottomless well of information, both in its sum and in its merest part.


“And you see, my dear, materialist friend, that you and all your philosophical comrades–all you ‘thinkers’–are actually tinkers, and the most inventive among you, engineers.”


Mat: “You have some sense of humour for an alien!”


Al: “Alienation comes with the territory, I’m afraid.”


Mat: “So there’s no room for materialism in your account?”


Al: “No more than idealism. There is just no such thing as the ‘mind-body dichotomy.’ Which is to say, the mind-body heuristic possesses limited applicability.”


Mat: “Only information, huh?”


Al: “Are you not a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments? Is this not a cornerstone tenet of your so-called ‘materialism’?”


Mat: “Yes… Of course.”


Al: “So is not the concept ‘materialism’ a kind of component device?”


Mat: “Yes, of course, bu–”


Al: “But it’s a representational device, one that takes a fundamental fact of existence as its ‘content.’”


Mat: “Exactly!”


Al: “And so the Master Heuristic, the system called semantic cognition, has its say! So let me get this straight: You are a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments, and yet still capable, thanks to some mysterious conspiracy of causal relations, of maintaining logical relations with its environments…”


Mat: “This is what you keep evading: you go on and on as if everything is empirical, when in point of fact, scientific knowledge would be impossible without a priori knowledge derived from logic and mathematics. Incorrigible semantic knowledge.”


Al: [his four eyes fluttering] “I’m accessing the relevant information now. It would seem that this is a matter of some controversy among you humans… It seems that certain, celebrated tinkers taught that the distinction between a priori and a posteriori knowledge was artificial.”


Mat: “Yes… But, there’s always naysayers, always people bent on denying the obvious!”


Al: “Yes. Indeed. Like Galileo and Einste–”


Mat: “What are you saying?”


Al: “But of course. You must excuse me, my dear, dear human friend. I forgot how stunted your level of development is, how deeply you find yourself in the thrall of the processing and availability constraints suffered by your primate brain. You must understand that there are no such things as logical relationships, at least not the way you conceive of them!”


Mat: “Now I know you are mad.”


Al: “You look more anxious than knowledgeable, I fear. No information system conjures or possesses logical relationships with its environments. What you call formal semantics are not ‘a priori’–oh my, your species has a pronounced weakness for narcissistic claims. Logic. Mathematics. These are natural phenomena, my friend. Only your blinkered mode of access fools you otherwise.”


Mat: “What are you talking about? Empirical knowledge is synthetic, environmental, something that can only be delivered through the senses. A priori knowledge is analytic, the product of thought alone.”


Al: “And is your brain not part of your environment?”


Mat: “Huh?”


Al: “Is your brain not part of your environment?”


Mat: “Of course it is.”


Al: “So you derive your knowledge of mathematics and logic from your environments as well.”


Mat: “No. Not at all!”


Al: “So where does it come from?”


Mat: “Nowhere, if that question is granted any sense at all. It is purely formal knowledge.”


Al: “So you access it… how?”


Mat: “As I said, via thought!”


Al: “So from your environment.”


Mat: “But it’s not environmental. It just… well… It just is.”


Al: “Symptoms, my good fellow. Remember what I said about symptoms. One thing you humans will shortly learn is that these kinds of murky, controversy-inspiring intuitions almost always indicate some kind of deliberative informatic access constraint. The painful fact, my dear fellow, is that not one of your tinkers really knows what they are doing when they engage in logic and mathematics. Think of the way you need notation, sensory prosthetics, to anchor your intuitions! But since no information regarding the insufficiency of what little access you have is globally broadcast, you assume that you access everything you need. And then it strikes you as miraculous, the connection between the formal and the natural.”


Mat: “Preposterous! What else could we require?”


Al: “Well, for one, information that would let you see your brain isn’t doing anything magical!”


Mat: “It’s not magical; it’s formal!”


Al: “Suit yourself. Would you care to know what it is you’re really doing?”


Mat: “Please. Enlighten me.”


Al: “That was sarcasm, there, wasn’t it? Wonderful! Have you ever wondered why logic and mathematics had to be discovered? It’s ‘a priori,’ you say. It’s all there ‘already,’ somewhere that’s nowhere, somehow. And yet, your access to it is restricted, like your access to environmental information, and the resulting knowledge is cumulative, like your empirical knowledge. ‘Scandal of deduction’ indeed! The irony, of course, is that you’re already sitting on your answer, insofar as you accept that you are a kind of biomechanical information processing system with finite computational capacity and limited informatic access to its environments. Some things that system discovers via system extrinsic interventions, and others via system intrinsic interventions. Your ‘formal semantics’ belongs to the latter. Not all interaction patterns are the same. Some, you could say, are hyperapplicable; like viruses they possess the capacity to manage systematic interventions in larger, far more complex interaction patterns. Your magical… er, formal semantics is simply the exploration of what we have long known are hyperapplicable interaction patterns.”


Mat: “But I’m not talking about ‘interaction patterns,’ I’m talking about inference structures.”


Al: “But they are the same thing, my primitive, hirsute-headed friend, only accessed via two very different channels, the one saturated with information thanks to the bounty of your environmentally-oriented perceptual systems, the other starved thanks to the penury of your brain’s in situ access to its own operations. The one is ‘observational,’ thanks to the functional independence your cognitive systems enjoy relative to your environments; the other is performative, given that the interaction patterns at issue must be ‘auto-emulated’ to be discovered. The connection between the formal and the natural strikes you as miraculous because you cannot recognize they are one and the same. You cannot recognize they are one and the same because of the radical differences in informatic access and cognitive uptake.”


Mat: “But you’re reasoning as we speak, making inferences to make your case!”


Al: [sighs] “You are, like, so low-res, Dude. Why do you think the status of your formal semantics is so controversial? Surely this also speaks to a lack of information, no? When trouble-shooting environmental problems, your systems are primed for ‘informatic insufficiency’–and well they should be, given that environmental informatic under-determination kills. That blur, for all your ancestors knew, could be a leopard.


“The situation is quite different when it comes to trouble-shooting your own capacities. Whenever you attend to what you call ‘first-person’ information, sufficiency becomes your typical default assumption. This is why so many of your philosophers insisted for so long that ‘introspection’ was the most certain thing, and disparaged perception. The very thing that persuaded tinkers to doubt the reliability of the latter was its capacity to flag its own limitations, its capacity to revise its estimations as new perceptual information became available–the ability of the system to correct for its failures. In other words, what makes perception so reliable is what led your predecessors to think it unreliable, whereas what makes introspection so unreliable is the very thing that led your predecessors to think it the most reliable. No news is good news as far as assumptive sufficiency is concerned!


“Information is additive. Flagging informatic insufficiency is always a matter of providing more information. Since more information always means more metabolic expense and slower processing, the evolutionary default is to strip everything down to the ‘truth,’ you could say–to shoot first and ask questions later!”


Mat: “So there’s no such thing as the truth, now?”


Al: “Not the way you conceive it. How could there be, given finite computational resources and limited informatic ability? How could your ‘view from nowhere’ be anything other than virtual, simply another heuristic? You have packed more magic into that term ‘formal’ than you know, my bald-bodied friend.


“Why do you think your logicians and mathematicians find it impossible to complete their formal systems short of generating inconsistencies? Computation is irreflexive. No device can perform computations on its own computations as it computes. For years your tinkers have been bumping into suggestive connections between incompleteness and thermodynamics, and even now, some are beginning to suspect the illusory nature of the ‘formal,’ that calculation and computation are indeed one and the same. All that remains is for you to grasp the trick of consciousness that makes it seem otherwise: the informatic deprivations that underwrite your illusion of reflexivity, and lead you to posit the ‘formal.’


“Let me hazard a guess: Tinkers in human computer science find themselves flummoxed with dualisms that bear an eerie resemblance to those found in your philosophical tinkering.”


Mat: “Why… Yes, as it so happens.”


Al: “I apologize. The question was rhetorical. I was accessing the relevant information as I spoke. I see here that no one knows how to connect the semantic level of programming to the implementation level of machine function. The ‘symbol grounding problem,’ some call it… Egad! Can’t you see this has been what I’ve been talking about all along?”


Mat: “I… I don’t understand.”


Al: “Once again, you admit you’re a kind of biomechanical information processing system, one with limited computational capacity and informatic access to its environments. You admit that as such a system, you suffer any number of even more severe informatic shortfalls with reference to your own operations. You admit that the numerous peculiarities you attribute to the mental and the semantic at least admit description in terms of information deficits. And yet you find it impossible to bracket your semantic intuitions, the magical belief that any biomechanical information processing system, let alone one possessing the limited computational capacity and informatic access as yours, can manufacture a kind of absolute ‘epistemic’ relation.


“Implementation, my pheromonal friend. Implementation. Implementation. Implementation. Implementation is the way, the concept you need, to maximize informatic applicability (problem-solving effectiveness) when tinkering with these problems. When you ‘program’ your computers, it’s primarily a matter of one implementation engendering another. Your ‘semantics’ is little more than the coarse-grain crossroads, a low-res cartoon compared to the informatics that you (as a so-called materialist) acknowledge underwrites it. You admit that semantics comes in an informatic box, and yet you insist on shoving that informatics into a semantic box, and you are mystified as to why nothing stays put.”


Mat: “Okay! Okay! So I’m willing to entertain the possibility that my reasoning has been distorted by the misapplication of some kind of ‘semantic stance,’ I guess. The ‘Master Heuristic,’ as you call it. Certain work in rational ecology suggests that the strategic exclusion of information often generates heuristics that are more effective problem solvers than optimization approaches. We evolve heuristics because of their computational speed and metabolic efficiency, but the hidden price we pay is limited applicability: heuristics are tools, and tools are problem specific. So how does all of this bear on the problem of conscious experience, again?”


Al: “But of course! My o my, we’ve strayed far afield, haven’t we? I have to admit, I’m overfond of preaching the virtues of Informatics to species as immature as yours. As I was saying earlier, qualia ‘exist’ relative to things existing in the world the way phonemes ‘mean’ relative to words meaning in language: in a participatory, modal sense. When you attend to qualia, they don’t offer much in the way of existential information, the way phonemes don’t offer much in the way of meaning. This is because, among other things, neither heuristic is matched to the cognitive system employed. Qualia, or phenomemes, are designed to build existence (when taken up by the appropriate cognitive system) the way phonemes are designed to build meaning (when taken up by the appropriate cognitive system).


“So, again, when a tinker submits qualia to the Master Heuristic for ‘existence processing’ they inevitably come up short. The question can’t be resolved. Phenomenality has to be something, and yet it doesn’t seem to be anything at all. You invent whole species of ‘zombies,’ whole genres of thought experiments, trying to get some purchase on the problem, to no avail.


“Consider the conceptual Necker Cube* of phenomenology and naturalism, idealism and materialism, the way your tinkers can’t decide whether to put ‘existence’ here or ‘there,’ to make it this or ‘that.’ The Master Heuristic looks ‘through’ experience, and sees the fine-grained complexities of the world. The Master Heuristic looks at experience, and sees the coarse-grained obscurities of consciousness. Both are right there, as plain as the polyp on your face. Which is fundamental? Who rules the metaphysical roost?


“But, as the informatic concept of ‘granularity’ suggests, the dichotomy is false, the result of a basic heuristic misapplication. To complicate your own materialist truism: You are a systematic assemblage of multiple biomechanical information processing systems, heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems. If you accept this claim, as I think you must, then you should accept that the problem of ‘heuristic misapplication’ looms over all your tinker–”


Mat: “Now you’re starting to sound like Rorty–or even worse, Wittgenstein!”


Al: “Two great tinkers, yes. Indeed, their critiques reveal some of the shortcomings of the Master Heuristic, at least to the extent they considered philosophical problems in terms of performance. But by trading semantic reference for normative competence, they simply traded one inapplicable heuristic, referential truth, for another one, normative truth. I’m offering you effectiveness. Effectiveness is the concept possessing maximal applicability. Information–systematic differences making systematic differences.


“But to get back to the issue at hand: the problem of ‘heuristic misapplication’ looms over all your tinkering. Because many of these heuristics are innate as well as blind to their limited applicability, what I’m saying here will inevitably cut against a number of intuitions. But then you materialists, I gather, have long since accepted that any adequate account of consciousness will likely involve any number of counterintuitive claims.”


Mat: “But you’re saying there’s no such thing as intentionality! No meaning. No agency. No morality!”


Al: “Don’t pretend to be surprised. You materialists may not like to write about it, but our surveillance indicates that many of you have privately abandoned these things anyway.”


Mat: “Many?–maybe. But not me.”


Al: “The thing to remember is that this is simply what you’ve been all along. Some heuristics, like love, say, are preposterously coarse-grained, and yet preposterously effective all the same, so long as their scope of application is constrained. Meaning, agency, morality: these heuristics are also enormously effective, given the proper scope. And remember: ‘information’ is also a heuristic–only one that is particularly effective and perhaps maximally applicable, at least given the scope of the problem you call ‘consciousness.’”


Mat: “But still, in the end, you’re just telling me to look at myself like a machine.”


Al: “The way your doctor looks at you–yes! The way you claim to look at yourself already, and the way natural science has always looked at you. The only real question, my thermally tepid friend, is one of why philosophy has consistently refused to play along–even when it claims to be playing along! And this is what I’m offering you: a way to understand why the obvious strikes you as so preposterous! Heuristics. You are an assemblage of heuristics, a concatenation of devices that take informatic neglect as their cornerstone problem-solving strategy, each of which is matched to a specific family of problems, and all of which are invisible to metacognition as well as utterly blind to one another–simply because no information regarding any of this finds its way to what you call ‘conscious cognition.’”


Mat: “So this is like Dennett’s stuff. You’re saying our various problem-solving stances need to be properly ‘matched,’ as you put it, to our problems.”


Al: “In a sense, yes–though an intentional heuristic like ‘stance’ is bound to hopelessly confuse things. I don’t think Dennett would want to say that you are ‘stances all the way down’ the way I’m suggesting that you are heuristics all the way down. As an intentional heuristic, ‘stance’ has limited applicability. This is why I’m offering you information: as heuristics go, it offers the highest resolution and the broadest scope. It allows you to explain the structure of other heuristics, as well as the kinds of misapplications that keep your tinkers so long-bearded and well-fed. It offers you, in other words, a real way out of all your ancestral confusions.


“And most pertinent to our discussion, it lets you understand why consciousness baffles you so.”


Mat: “Yes. The million-dollar question.”


Al: “You are a systematic assemblage of multiple biomechanical information processing systems, heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems.


“In most species, cognition is built around what might be called the ‘open channel principle.’ It evolved to manage the organism’s relationship to environmental change as efficiently as possible. As such, it neglects astronomical amounts of neural and environmental information, relying on those heuristics that optimize effectiveness against metabolic cost. It’s difficult to overstate how crucial this point is: the effectiveness of your cognition turns on the strategic neglect of certain kinds of information–what might be called ‘domain neglect.’


“Take, for instance, ‘aboutness.’ You have experiences OF things rather than experiences FROM things because information regarding the latter is much less germane to survival. Only in instances of perceptual vagueness or ambiguity do you perceive FROM information, typically in indirect ways (what you might call ‘squints,’ cues to gather supplemental information). So-called transparency, in other words, is a form of strategic neglect.


“Now consider how difficult it is to think FROM information in the semantic terms belonging to what I’ve called your Master Heuristic. Just try to imagine the experience belonging to a perceptual system that provided knowledge FROM, so that you have experience FROM trees rather than OF them. In other words, try to imagine opaque experience. Given your neurophysiology, the best you can do is imagine OF experience that it is FROM. Transparency–the Master Heuristic–is neurophysiologically compulsory.


“And here we stumble upon the threshold of what makes consciousness so incredibly difficult to fathom: it requires accessing the very information the system neglects (either out of structural necessity or to maximize the heuristic efficiency of environmental cognition).


“Why should this be a problem? Well, it’s an obvious misapplication, for one. As I said earlier, using cognitive systems designed to manage extrinsic environments to assess phenomemes amounts to dropping a rock into the woodchipper. You are trying to make words out of letters, existents out of the informatic constituents of existence.


“You persist because of cognitive neglect: you simply cannot see the limits of applicability pertaining to your Master Heuristic. You find yourself in an informatic dead-end, stranded with ‘experience’ as a peculiarly intractable existent. Given the absence of information pertaining to the insufficiency of the limited information gleaned by attending to experience, you assume it’s all the information you need–or what I earlier called sufficiency. Since deliberative experience OF experience, given its neglect, seems to capture the whole of experience, any information that reveals that experience to be a mere fragment is going to seem to contradict it, to be talking about something else. So you run into a powerful intuitive barrier, not unlike explaining the Mona Lisa to an ant born glued to her nose.”


Mat: “So, in the picture you’re painting, there is no final picture, only a… frame, I guess you could say, systematic differences making systematic differences, or effective information. Using that, we can think outside the limitations of our heuristics, and see that consciousness as we conceive of it is a kind of perspectival illusion, a figment of informatic constraints. There literally is no such thing outside our own… informatic frame of reference?”


Al: “Very good! Once you adopt information as your new Master Heuristic, the antipathy between redness and apples vanishes, along with all the other dichotomies arising out of the old, semantic Master Heuristic. The information that you ‘are’ is the information that you ‘see.’ Even though your ‘experience’ will continue to be stamped by the informatic neglect characteristic of semantics, you will know better.


“You are an assemblage of heuristic devices, each possessing limited computational capacity and informatic access, each adapted to a specific set of problems–no different than any of your animal relatives. Part of what distinguishes your species, my binocular friend, is your ability to make problems, to apply your heuristics to novel situations, adapt and enlarge them if possible, even leave them behind if need be–as well as to doggedly throw them at problems they simply cannot solve.”


Mat: “So semantic and normative conceptions of knowledge can’t solve the problem of consciousness simply because the heuristics they rely on, despite the illusion of universality leveraged by neglect, are too specialized. Isn’t this just cognitive closure you’re talking about, the argument that consciousness is to us what quantum mechanics is to chimpanzees, something simply beyond our cognitive capacity?”


Al: “The problem of cognitive applicability is quite different from the cognitive closure certain tinkers among you have suggested. But the analogy to quantum mechanics is an instructive one: only when your physicists began thinking around, as opposed to through, their default heuristics could they begin to make sense of what they were finding. The lesson, one would think, is clear. Once you understand the scope of a particular heuristic, you have the means of leaving the problems it generates behind.


“But I fear the notion of relinquishing the Master Heuristic will be enormously difficult, if not impossible, for many of your tinkers. For them, cognitive closure will apply, and this in turn will legislate any number of myth-preserving fancies. But those who can, those who come to understand that information precedes all the other clumsy, coarse-grained concepts you have inherited from your biology and your traditions, even existence, will come to see that they, as assemblages of heuristic devices, are their own informatic frame of reference, a system encompassing the vast swathe of the universe they ‘know’ and continuously open to the universe they don’t.”


Mat: “Excuse me for sounding dense, but you’re pretty much saying that the whole of philosophy is obsolete!”


Al: “Indeed I am, my primitive friend. But please, don’t feign any shock or surprise: a great proportion of your scientists have been saying as much for quite some time. The effectiveness of information has rendered it a social and cultural tsunami, the conceptual anchor of the most profound transformation to ever hit your species, and the most your philosophical tinkers can muster are anaemic attempts to stuff it into some kind of semantic box!


“But narcissistic idylls of your ignorance are now at an end. The mythic assumption, that you humans alone evolved some kind of monolithic, universal cognition, is entirely understandable, given the recursive blindness of your brains. But now you are beginning to understand that you are not so different from your genetic cousins, that you only seemed radically novel because of the drastic information access constraints faced by autocognition. More and more you will come to see semantics as a parochial detour forced upon you by the vagaries and exigencies of your evolution. More and more you will turn to informatics to take its place.”


Mat: “Well, to hell with that! I say.”


Al: “Deny, if you wish. The effectiveness of information is such that it will remake you, whether you believe in it or not.”



Published on September 27, 2012 11:32

September 21, 2012

Attack of the Phenophages

Aphorism of the Day: If you think of knowledge in fractal terms, you can see yourself as a wan reflection in the bottom of a raindrop as fat as the cosmos.


Or is that just me pissing on your leg?


.


Imagine a viscous, gelatinous alien species that crawls into human ear canals as their hosts sleep, then over the course of the night infiltrates the conscious subsystems of the brain. Called phenophages, these creatures literally feed on the ‘what-likeness’ of conscious experience. They twine about the global broadcasting architecture of the thalamocortical system, shunting and devouring what would have been conscious phenomenal inputs. In order to escape detection, they disconnect any system that could alert their host to the absence of phenomenal experience. More insidiously still, they feed forward any information the missing phenomenal experience would have provided the cognitive systems of their host, so that humans hosting phenophages comport themselves as if they possessed phenomenal experience in all ways. They drive through rush hour traffic, complain about the sun in their eyes, compliment their spouses’ choice of clothing, ponder the difference between perfumes, extol the gustatory virtues of their favourite restaurant, and so on.


Finally, after several years, neurologists detect the phenophages, and through various invasive and noninvasive means, discover their catastrophic consequences. Even though they have no way of removing the parasites, they are able to reconnect the systems that allow the infected to at least cognize the fact that they have no experience. The problem is that doing so seems to drive a good number of these patients, whom they term ‘phenophagiacs,’ insane, when they had evinced only psychologically well-adjusted behaviour before.


This scenario raises a number of questions for me, but I thought I would start with the most basic: Are unwitting phenophagiacs actually conscious in any meaningful sense? Are the witting?


A twist on this scenario involves the rise of a psychological condition called ‘phenophagic hysteria,’ where numbers of uninfected individuals, perhaps unduly affected by the intense media attention garnered by the alien infestation, come to believe they are infected even though they are not. They act in all ways as if they had experience, but when queried, they (unlike preoperative phenophagiacs) insist they have no experience whatsoever, that they simply ‘know’ in the absence of any conscious ‘feel’ of any sort. When these individuals are tested, researchers discover that they indeed exhibit a set of activation patterns that are unique to them, and conclude that somehow, these individuals have ‘blocked’ the circuits enabling conscious awareness of their conscious awareness.


So the follow-up question would be: Are phenophagic hysterics conscious in any meaningful sense?



Published on September 21, 2012 06:32

September 19, 2012

Logic of Neglect

Aphorism of the Day I: Consciousness is a little animal in our heads, curled up and snoozing, at times peering into the neural murk, otherwise dreaming what we call waking life.


Aphorism of the Day II: People are almost entirely incapable of distinguishing the quality of what is said from the number and status of the ears listening. All the new can do is keep whispering, hoping against hope that something might be heard between the booming repetitions.


.


What effect do constraints on informatic availability and cognitive capacity have on our ability to make sense of consciousness? This is one of those questions that philosophers literally dream of stumbling on, questions so obvious, so momentous in implication, that their answers have the effect of transforming orthodox understanding–if you’re lucky enough to catch the orthodoxy’s ear, that is!


The aim of the Blind Brain Theory (BBT) is to rough out the ‘logic of neglect’ that underwrites ‘error consciousness,’ the consciousness we think we have. It proceeds on the noncontroversial presumption that consciousness is the product of some subsystem of the brain, and that, as such, it operates within a variety of informatic constraints. It advances the hypothesis that the various perplexities that bedevil our attempts to explain consciousness are largely artifacts of these informatic constraints. From the standpoint of BBT, what we call the Hard Problem conflates two quite distinct difficulties: 1) the ‘generation problem,’ the question of how a certain conspiracy of meat can conjure whatever consciousness is; and 2) the ‘explanandum problem,’ the question of what any answer to the first problem needs to explain to count as an adequate explanation. Its primary insight turns on the role lack plays in structuring conscious experience. It argues that philosophy of mind needs to keep its dire informatic straits in clear view: once you understand that we make informatic frame-of-reference (IFR) errors regarding consciousness analogous to those we are prone to make regarding the world, you acknowledge that we might be radically mistaken about what consciousness is.


Radically mistaken about everything, in fact.


What is an ‘informatic frame-of-reference’ error? Consider the most famous one of all: geocentrism. We perceive ourselves moving whenever a large portion of our visual field moves–when we experience ‘vection,’ as psychologists call it. Short of this and vestibular effects, a sense of motionlessness is the cognitive default. As a result we stand still when the world stands still relative to us. So when our ancestors looked into the heavens and began charting the movement of celestial bodies, the possibility that they were also moving seemed, well, preposterous. What makes this error perspectival (or IFR) is the way it turns on the combination of cognitive capacity and information available. Given the information available, and given our cognitive capacities, geocentrism had to seem obviously true: “the world also is established,” Psalm 93:1 reads, “that it cannot be moved.” Informatically earthbound, we quite simply lacked access to the information our cognitive capacities required to overcome our native intuition of motionlessness. We found ourselves informatically encapsulated, stranded with insufficient information and limited cognitive resources. Thus the revolutionary significance of Galileo and his Dutch Spyglass–and of science in general.


According to BBT, what we call ‘consciousness,’ what phenomenologists think they are describing, is largely an illusion turning on analogous informatic frame-of-reference errors. The consciousness we think we have, that we think we need to explain, quite simply does not exist.


That we can and do make analogous IFR errors regarding consciousness is not all that implausible in principle. A good deal of the debate in the cognitive sciences prominently features questions of informatic access. Given that the cognition of information gleaned from conscious experience relies on the same mechanisms as the cognition of information gleaned from our environments, we should expect to find analogous errors.


We should expect, for instance, to encounter instances of ‘noocentrism’ analogous to the description of geocentrism provided above. Geocentrism assumes the earth is outside of play, that it remains fixed while everything else endures positional transformations. Is this so different from the intuitions that seem to underwrite our ancestral understanding of the soul as something ‘outside of play’? Or how about the bootstrapping illusion that seems so integral to our sense of ‘free will’?


Given that conscious (System 2) deliberation is brainbound, only the information that makes it to conscious experience (via ‘broadcasting’ or ‘integration’) is available for cognition. With geocentrism, the fact that we are earthbound constrains the environmental information available for conscious experience and thus conscious deliberation. With noocentrism, the fact that cognition is brainbound constrains the neural information available for conscious deliberation. When conscious deliberation turns to conscious experience itself (rather than the environmental information it communicates) the limits of availability (encapsulation) ensure that a variety of information remains inaccessible–occluded.


What information is occluded? Almost all of it, if you consider the 38,000 trillion operations per second your brain is allegedly performing this very instant. Everything really hinges on the adequacy of what little we get.


One of the things I love about Peter Hankins’ Conscious Entities site is his images, the way he uses filter effects to bleed information from photographic portraits until only line sketches remain. Not only does it look cool, I couldn’t imagine a more appropriate stylistic trope for a website devoted to consciousness.


Why? Imagine running your perception of environmental reality through various ‘existential filters’–performing a kind of informatic deconstruction of your perceptual experience. Some of this information is phenomenal, but much of it is also cognitive. That red before you belongs to an apple, one object among many possessing a history in addition to a welter of properties. You know, for instance, that you can bite it, chew it into little pieces. In fact, you have a positively immense repertoire of ‘apple information’ at your disposal, which should come as no surprise, given that your brain is primarily an environmental information processing machine, one possessing an ancient evolutionary pedigree.


What your brain is not, however, is primarily a consciousness information processing machine. Because the brain is primarily designed to exploit ‘first order’ environmental as opposed to ‘second order’ experiential information, we should perhaps expect a dramatic discrepancy between 1) the quantity of environmental versus experiential information available; and 2) the way environmental and experiential information are matched to various cognitive systems.


One of the most striking things about all the little perplexities that plague consciousness research is the way they can be interpreted in terms of informatic deprivation, as the result of our cognitive systems accessing too little information, mismatched information, or partial information. To get a sense of this, think of the information at your disposal regarding apples and begin subtracting. You can begin with the nutritive information you have, what allows you to identify apples as a kind of food. Then you can subtract the phylogenetic information you’ve encountered, what allows you to identify the apple as a fruit, as a reproductive organ belonging to a certain family of trees. Then you can subtract the information that allows you to distinguish apples from inorganic objects, as something living. Then you can subtract all the causal information you’ve accumulated, the information that allows you to cognize the apple as an effect (possessing effects). Then you can subtract all the substantival information, what allows you to conceive the apple as an aggregate, something that can be bitten, or smashed into externally related bits. Then you can move on to basic spatial information, what allows you to conceive the apple as a three-dimensional object possessing a position in space, as something that can be walked around and regarded from multiple angles. At the very end of the informatic leash, you have the differentiations that allow you to identify this apple versus other things, or even as a figure versus some background.


So, back to our parallel between geocentrism and noocentrism. As I said above: when conscious deliberation turns to conscious experience itself (rather than the environmental information it communicates) the limits of availability (encapsulation) ensure that a variety of information remains inaccessible–occluded. Deliberative cognition (reflection) has no access to causal information: the neuronal provenance of conscious experience is entirely occluded. So when deliberative cognition attempts to identify precursors, it only has sequels to select from. As a result, it seems to have no extrinsic precursors, to be some kind of ‘causa sui,’ moveable only by itself.


It has no access to spatial information per se: we have a foggy sense of various phenomenal elements ‘occurring within’ a larger sensorium, which we are wont to ‘place’ in our ‘heads’ in our environment, but it’s not as if our sensorium is ‘spatial’ the way an apple is spatial: since it is brainbound, deliberative cognition cannot access information regarding our sensorium by ‘walking around it,’ changing our position relative to it. Lacking this environmental channel, it has to be ‘immovable’ with reference to cognition–once again, in a manner not so different from what we see with geocentrism.


Deliberative cognition likewise has no substantival information to draw on: we can’t, as Descartes so famously noted, break our sensorium up into externally-related parts. Absent this information, the cognitive tendency is to mistake aggregates for individuals, for substantival wholes. Here we see one of the more crucial insights belonging to BBT: ‘internal relationality,’ and the concepts of holism that fall out of it and govern our understanding of semantic concepts such as ‘context,’ amount to a kind of cognitive default pertaining to the absence of information. Our notion of ‘meaning holism,’ just for instance, is an obvious artifact of brainbound informatic parochialism according to BBT, much as Aristotle’s notion of ‘celestial spheres’ is an artifact of earthbound informatic parochialism. Lacking the information required to see stars as distant, externally-related objects scattered through the void of space, our ancestors found it sensible to interpret them as salient features of an individual structure, an immense sphere.


We all know that our ability to solve problems depends on the relation between the information and computational resources available. BBT simply applies this commonsense knowledge to consciousness, and interprets the perplexities away, relying on what are actually quite commonsense intuitions. Beginnings have no precursors. Blurs lack internal structure.


If you’re steeped in consciousness literature and reading this with a squint, thinking that I’m missing this or misinterpreting that, or that it’s just gotta-be-wrong, or ‘yah-yah-it’s-no-big-whup,’ then just ask yourself: How does the relation between available information and computational resources bear on the problem of consciousness? It could be ignorance-fed hubris on my part, but I’m convinced thinking this question through will lead you to many of the same conclusions suggested by BBT.


I’ve been sitting on the basic outline of this approach for twelve years. Since the ‘Now’ and its paradoxes were my first philosophical obsession, something that had driven me cross-eyed more times than I could count, I realized that BBT was a potential game-changer given the ease with which it explained the Now’s perplexities away. Just consider what I mentioned above: Lacking informatic access to the neural precursors of conscious experience, deliberative cognition finds itself on a strange kind of informatic treadmill. It can track temporal differentiations effectively enough within conscious experience without, however, being able to track the temporal differentiation of conscious experience itself. It’s an old axiom of psychophysics that what cannot be differentiated is perceived as the same. And thus the ancient perplexity noted by Aristotle–the way the now is always different and yet somehow the same–is explained (and much else besides).


The reason I’m thumping the tub as loudly as I can now is that, quite frankly, I could feel the rest of the field moving in. On the continental side of the philosophical border, I saw more and more thinkers tackling the difficulties posed by the cognitive sciences, whereas on the analytic side, I found more and more thinkers accepting, in a variety of registers, the central assumption of BBT: that the consciousness ‘revealed’ by introspection (or deliberative metacognition or higher order thought) is little more than a water-stain, informatically speaking, an impoverished blur that only seems a ‘plenum,’ something both ‘full’ and ‘incorrigible’ (or ‘sufficient’ in BBT-speak) because, being brainbound, it has little or no information to the contrary.


My problem, as always, lies first in the idiosyncrasy of my background, the way I’ve developed all these concepts and ideas in isolation from the academy, and so must inevitably come across as naive or amateurish to ingroup, specialist ears; and second in my bizarre inability to see any of my nonfiction enterprises to the point of submission, never mind publication. This latter problem, I’m sure, is shrink material. The former is bad enough. The only thing worse than being an iconoclast in a field filled with crackpots is being an iconoclast who can only seem to blog about his ‘oh-so-special’ ideas!


The logic of neglect operates across all levels.


To boot, I’m sure being a fantasy novelist doesn’t help, particularly when it comes to an institution as insecure about its cognitive credentials as philosophy! Ah, but such is life. Toil and obscurity, my brothers. Toil and obscurity. For those of you who find this wankery insufferable, I apologize. If you want me to shut up already, ask your philosophy professor to take a looksee and correct my errant ways. In the meantime, I am, as always, the meat-puppet of my muse. And for those of you who have developed a morbid fascination with this morbid fascination, this strange intellectual adventure through the fantasies that constitute our souls, I need to extend a big… fat… danke…


Smoking ideas has to be one of the better ways to waste one’s time.



Published on September 19, 2012 12:45

September 14, 2012

Life as Alien Transmission

Aphorism of the Day: The purest thing anyone can say about anything is that the consciousness is noisy.


.


In order to explain anything, you need to have some general sense of what it is you’re trying to explain. When it comes to consciousness, we don’t even have that. In 1983, Joseph Levine famously coined the phrase ‘explanatory gap’ to describe the problem facing consciousness theorists and researchers. But metaphorically speaking, the problem resembles an explanatory cliff more than a mere gap. Instead of an explanandum, we have noise. So whatever explanans anyone cooks up, like Tononi’s IITC, for instance, is simply left hanging. Given the florid diversity of incompatible views, the consensus will almost certainly be that the wrong thing is being explained. The Blind Brain Theory offers a diagnosis of why this is the case, as well as a means of stripping away all the ‘secondary perplexities’ that plague our attempts to nail down consciousness as an explanandum. It clears away Error Consciousness, or the consciousness you think you have, given the severe informatic constraints placed on reflection.


So what, on the Blind Brain view, makes consciousness so frickin difficult?


Douglas Adams famously posed the farcical possibility that earth and humanity were a kind of computer designed to answer the question of the meaning of life. I would like to pose an alternate, equally farcical possibility: what if human consciousness were a code, a message sent by some advanced alien species, the Ring, for purposes known only to them? How might their advanced alien enemies, the Horn, go about deciphering it?


The immediate problem they would face is one of information availability. In normal instances of cryptanalysis, the coded message or ciphertext is available, as is general information regarding the coding algorithm. What is missing is the key, which is required to recover the original message, or plaintext, from the ciphertext. In this case, however, the alien cryptanalysts would only have our reports of our conscious experiences to go on. Their situation would be hopeless, akin to attempting to unravel the German Enigma code via reports of its existence. Arguably, becoming human would be the only way for them to access the ciphertext.
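

To keep the jargon straight, here’s a minimal sketch in Python (the plaintext string and variable names are invented for illustration; nothing here pretends to model the Ring’s ‘entangled observer code’): the ciphertext is what the cryptanalyst holds, the plaintext is what they want, and the key is what stands between the two.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR: the same operation encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"conscious experience"        # the message the cryptanalyst wants
key = secrets.token_bytes(len(plaintext))  # the missing ingredient
ciphertext = xor_bytes(plaintext, key)     # what a cryptanalyst normally has

# With the key, recovery is trivial; without it, the ciphertext is noise.
assert xor_bytes(ciphertext, key) == plaintext
```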


But say this is technically feasible. So the alien enemy cryptanalysts transform themselves into humans, access the ciphertext in the form of conscious experience, only to discover another apparently insuperable hurdle: the issue of computational resources. To be human is to possess certain on-board cognitive capacities, which, as it turns out, are woefully inadequate. The alien cryptanalysts experiment, augment their human capacities this way and that, but they soon discover that transforming human cognition has the effect of transforming human experience, and so distorting the original ciphertext.


Only now do the Horn realize the cunning ingenuity of their foe. Cryptanalysis requires access both to the ciphertext and to the computational resources required to decode it. As advanced aliens, they possessed access to the latter, but not the former. And now, as humans, they possess access to the former, but at the cost of the latter.


The only way to get at the code, it seems, is to forgo the capacity to decode it. The Ring, the Horn cryptanalysts report, have discovered an apparently unbreakable code, a ciphertext that can only be accessed at the cost of the resources required to successfully attack it. An ‘entangled observer code,’ they call it, shaking their polyps in outrage and admiration, one requiring the cryptanalyst become a constitutive part of its information economy, effectively sequestering them from the tools and information required to decode it.


The only option, they conclude, is to destroy the message.


The point of this ‘cosmic cryptography’ scenario is not so much to recapitulate the introspective leg of McGinn’s ‘cognitive closure’ thesis as to frame the ‘entangled’ relation between information availability and cognitive resources that will preoccupy the remainder of this paper. What can we say about the ‘first-person’ information available for conscious experience? What can we say about the cognitive resources available for interpreting that information?


Explanations in cognitive science generally adhere to the explanatory paradigm found in the life sciences: various operations are ‘identified’ and a variety of mechanisms, understood as systems of components or ‘working parts,’ are posited to discharge them. In cognitive science in particular, the operations tend to be various cognitive capacities or conscious phenomena, and the components tend to be representations embedded in computational procedures that produce more representations. Theorists continually tear down and rebuild what are in effect virtual ‘explanatory machines,’ using research drawn from as many related fields as possible to warrant their formulations. Whether the operational outputs are behavioural, epistemic, or phenomenal, these virtual machines inevitably involve asking what information is available for what component system or process.


I call this process of information tracking the ‘Follow the Information Game’ (FIG). In a superficial sense, playing FIG is not all that different from playing detective. In the case of criminal investigations, evidence is assembled and assessed, possible motives are considered, various parties to the crime are identified, and an overarching narrative account of who did what to whom is devised and, ideally, tested. In the case of cognitive investigations, evidence is likewise assembled and assessed, possible evolutionary ‘motives’ are considered, a number of contributing component mechanisms are posited, and an overarching mechanistic account of what does what for what is devised for possible experimental testing. The ‘doing’ invariably involves discharging some computational function, processing and disseminating information for subsequent, downstream or reentrant computational functions.


The signature difference between criminal and cognitive investigations, however, is that criminal investigators typically have no stake or role in the crimes they investigate. When it comes to cognitive investigations, the situation is rather like a bad movie: the detective is always in some sense under investigation. The cognitive capacities modelled are often the very cognitive capacities modelling. Now if these capacities consisted of ‘optimization mechanisms,’ devices that weight and add as much information as possible to produce optimal solutions, only the availability of information would be the problem. But as recent work in ecological rationality has demonstrated, problem-specific heuristics seem to be evolution’s weapon of choice when it comes to cognition. If our cognitive capacities involve specialized heuristics, then the cognitive detective faces the thorny issue of cognitive applicability. Are the cognitive capacities engaged in a given cognitive investigation the appropriate ones? Or, to borrow the terminology used in ecological rationality, do they match the problem or problems we are attempting to solve?
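

The contrast is easy to make concrete (a schematic sketch; ‘take-the-best’ is a heuristic from the ecological rationality literature, and the cue values and weights below are invented): an optimizing mechanism weights and adds every available cue, while the heuristic consults cues one at a time and decides on the first that discriminates, neglecting all the rest.

```python
def weighted_sum(cues, weights):
    """'Optimization' strategy: weight and add every available cue."""
    return sum(c * w for c, w in zip(cues, weights))

def take_the_best(cues_a, cues_b):
    """Heuristic strategy: check cues in order of validity and decide
    on the first one that discriminates, ignoring everything after it."""
    for a, b in zip(cues_a, cues_b):
        if a != b:
            return "A" if a > b else "B"
    return "tie"

# Two options described by binary cues, ordered by cue validity.
option_a, option_b = [1, 0, 1, 1], [0, 1, 1, 0]
weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical weights for the optimizer
print(weighted_sum(option_a, weights), weighted_sum(option_b, weights))  # 0.7 0.5
print(take_the_best(option_a, option_b))  # "A", decided by the first cue alone
```

The heuristic literally never looks at most of the information, which is what makes it fast and frugal–and what bounds its applicability.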


The question of entanglement is essentially this question of cognitive applicability and informatic availability. There can be little doubt that our success playing FIG depends, in some measure, on isolating and minimizing our entanglements. And yet, I would argue that the general attitude is one of resignation. The vast majority of theorists and researchers acknowledge that constraints on their cognitive and informatic resources regularly interfere with their investigations. They accept that they suffer from hidden ignorances, any number of native biases, and that their observations are inevitably theory-laden. Entanglements, the general presumption seems to be, are occupational hazards belonging to any investigative endeavour.


What is there to do but muddle our way forward?


But as the story of the Horn and their attempt to decipher the Ring’s ‘entangled observer code’ makes clear, the issue of entanglement seems to be somewhat more than a run-of-the-mill operational risk when consciousness is under investigation. The notional comparison of the what-is-it-likeness, or the apparently irreducible first-person nature of conscious experience, with an advanced alien ciphertext doesn’t seem all that implausible given the apparent difficulty of the Hard Problem. The idea of an encryption that constitutively constrains the computational resources required to attack it, a code that the cryptanalyst must become to simply access the plaintext, does bear an eerie resemblance to the situation confronting consciousness theorists and researchers–certainly enough to warrant further consideration.



Published on September 14, 2012 06:58

September 11, 2012

A Brick o’ Qualia: Tononi, Phi, and the Neural Armchair

Aphorism of the Day: The absence of light is either the presence of dark–or death. For every decision made, death is the option not taken.


Aphorism of the Day II: Things we see through: eyes, windows, words, images, thoughts, lies, lingerie, and excuses.


.


So Giulio Tononi’s new book Phi: A Voyage from the Brain to the Soul has been out for a few weeks now, and I’ve had this ‘review’ coalescing in my brain’s gut (the reason for the scarequotes should become evident in due course). In the meantime, as fate would have it, I’ve stumbled across several reviews of the book, including one that is genuinely philosophically savvy, as well as several other online considerations of his theory of consciousness. And of course, everyone seems to have an opinion quite the opposite of my own.


First, I should say that this book is written for the layreader: it is, in fact, the most original, beautiful general interest book on consciousness I’ve read since Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid – a book I can’t help but think provided Tononi with more than a little inspiration – as well as a commercial argument to get his publishers on board. Because on board they most certainly were: Phi is literally one of the most gorgeous books I have ever purchased, so much so that ‘book’ doesn’t seem to do it justice. Volume would be a better word! The whole thing is printed on what looks like #100 gloss text paper. Posh stuff.


Anyway, if you’re one of my fiction readers who squints at all this consciousness stuff, this is the book for you.


What makes this book extraordinary is the way it ‘argues’ across numerous noncognitive registers. Tononi, with the cooperation of his publisher, put a great deal of effort into crafting the qualia of the book, to create, in a sense, a kind of phenomenal ‘argument.’ It’s literally bursting with imagery, a pageant of photographic plates that continually frame the text. He writes with a kind of pseudo-Renaissance diction, hyperbolic, dense with cultural references, and downright poetic at times. He uses a narrative and dialogic structure, taking Galileo as his theoretical protagonist. The father of science passes through a series of episodes with thinly disguised historical interlocutors, some of them guides, others mere passersby. This is obviously meant to emulate Dante’s Inferno, but sometimes, unfortunately, struck me as more reminiscent of “A Christmas Carol.” Following each of these episodes, he provides ‘Notes,’ which sometimes clarify and other times contradict the content of the preceding narrative and dialogue, generating a number of postmodern effects in genuinely unprecedented ways. Phi, in other words, is entirely capable of grounding thoroughly literary readings.


The result is that his actual account, the Information Integration Theory of Consciousness (IITC), is deeply nested within a series of ‘quality intensive’ expressive modes. The book, in other words, is meant to be a kind of tuning fork, something that hums with the very consciousness that it purports to explain. A brick o’ qualia…


An exemplar of Phi itself, the encircled ‘I’ of information.


So at this expressive level, at least, there is no doubting the genius of the book. Of course there are many things I could quibble about (including sexism, believe it or not!) but they strike me as too idiosyncratic to belong in a review meant to describe and evaluate the book for others.


What I’ve found so surprising these past weeks is the apparent general antipathy to IITC in consciousness research circles, when personally, I class it in the same category as its main scientific competitors, like Bernard Baars’ Global Workspace theory of consciousness. And unlike pretty much everyone I’ve read, I actually think Tononi’s account of qualia (the term philosophers use for the purely phenomenal characteristics of consciousness, the redness of red, and so on) can actually do some real explanatory work.


Most seem to agree with Peter Hankins’ assessment of IITC on Conscious Entities, which boils down to ‘but red ain’t information’! Tononi, I admit, does have the bad habit of conflating his primary explanans with his explanandum (and thus flirting with panpsychism), but I actually don’t think he’s arguing that red is information so much as that information integration can explain red as much as it needs to be explained.


Information integration builds on Gerald Edelman’s guiding insight that whatever consciousness is, it has something to do with differentiated unity. ‘Phi’ refers to the quantity of information (in its Shannon-Weaver incarnation) a system possesses over and above the information possessed by its component parts. One photodiode can be either on or off. Add another, and all you have are two photodiodes that are on or off. Since they are disconnected, they generate no information over and above on/off. Integrate them, which is to say, plug them into a third system, and suddenly the information explodes: on/on, on/off, off/on, off/off. Integrate another, and you have: on/on/on, on/on/off, on/off/off, off/off/off, off/off/on, off/on/on, off/on/off, on/off/on. Integrate another and… you get the picture.
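

To make the arithmetic vivid, here is a toy enumeration in Python (only the state-counting intuition; Tononi’s actual phi measure involves far more than counting states): disconnected photodiodes never distinguish more than on/off apiece, while an integrated system distinguishes among all of its joint states at once.

```python
from itertools import product

def joint_repertoire(n_diodes):
    """All joint states an integrated system of n two-state
    photodiodes can distinguish: 2**n of them."""
    return list(product(("on", "off"), repeat=n_diodes))

# Disconnected, each diode carries one on/off distinction and nothing
# more. Integrated, the repertoire grows combinatorially.
for n in (1, 2, 3, 4):
    print(n, "integrated diodes ->", len(joint_repertoire(n)), "states")
# 1 -> 2, 2 -> 4, 3 -> 8, 4 -> 16, and so on up the thalamocortical scale.
```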


Tononi argues that consciousness is a product of the combinatorial explosion of possible states that accompanies the kind of neuronal integration that seems to be going on in the thalamocortical system of the human brain. And he claims that this can explain what is going on with qualia, the one thing in consciousness research that seems to be heavier than Thor’s hammer.


Theoretically speaking, this puts him in a pretty pickle, because when it comes to qualia, two warring camps dominate the field: those who think qualia are super special, and those who think qualia are not what we make of them, conceptually incoherent, or impossible to explain without begging the question. Crudely put, the problem Tononi faces with the first tribe is that as soon as he picks the hammer up, they claim that it wasn’t Thor’s hammer after all, and the problem he faces with the second tribe is that they don’t believe in Thor.


The only safe thing you can say about qualia is that they are controversial.


Tononi thinks the explanation will look something like:


The many mechanisms of a complex, in various combinations, specify repertoires of states they can distinguish within the complex, above and beyond what their parts can do: each repertoire is integrated information–each an irreducible concept. Together they form a shape in qualia space. This is the quality of experience, and Q is its symbol. (217)


The reason I think this notion has promise lies in the way it explains the apparent inexplicability of things like red. And this, to me, seems as good a place to begin as any. Gary Drescher, for instance, argues that qualia should be understood by analogy to gensyms in Lisp programming. Gensyms are elements that are inscrutable to the program outside of their distinction from other elements. Lisp can recognize only that a gensym is a gensym, and none of its properties.


Similarly, we have no introspective access to whatever internal properties make the red gensym recognizably distinct from the green; our Cartesian camcorders are not wired up to monitor or record those details. Thus we cannot tell what makes the red sensation redlike, even though we know the sensation when we experience it. (Good and Real, 81-2)
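

Drescher’s analogy is easy to make concrete (a toy sketch in Python rather than Lisp; the class name and the ‘hidden constitution’ strings are invented for illustration): the host system can reliably tell tokens apart while having no access whatsoever to what makes each one what it is.

```python
class Quale:
    """A gensym-like token: the host system can test whether two tokens
    are identical or distinct, but the 'internal properties' that make
    each one what it is are never exposed."""
    def __init__(self, hidden_constitution):
        self._hidden = hidden_constitution  # inaccessible from 'outside'

    def __eq__(self, other):
        return self is other  # bare identity is all that is discriminable

    def __repr__(self):
        return "<quale>"      # introspection yields nothing further

RED, GREEN = Quale("650nm pathway"), Quale("530nm pathway")
print(RED == RED, RED == GREEN)  # True False: reliably distinct, yet opaque
```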


Now I think this analogy fails in a number of other respects, but what gensyms do is allow us to see the apparent inexplicability of qualia as an important clue, as a positive feature possessing functional consequences. Qualia qua qualia are informatically impoverished, ‘introspectively opaque,’ so much so you might almost think they belonged to a system that was not designed to cognize them as qualia – which, as it turns out, is precisely the case. (Generally speaking, theoretical reflection on experience is not something that will get you laid). So in a sense, the first response to the ‘problem of qualia’ should be, Go figure. Given the exorbitant metabolic cost of neural processing, we should expect qualia to be largely inscrutable to introspection.


For Tononi, Q-space allows you to understand this inscrutability. Red is a certain dedicated informatic configuration (‘concept’) that is periodically plugged into the larger, far more complex succession of configurations that occupy the whole.


Now for all its complexity, it’s important to recall that our brains are overmatched by the complexity of our environments. Managing the kind of systematic relationships with our environments that our brain does requires a good deal of complexity reduction, heuristic mechanisms robust enough to apply to as many circumstances as possible. So a palette of environmental invariants is selected according to the whims of reproductive success, which then forms the combinatorial basis for ‘aggregate heuristic mechanisms’ (or ‘representations’) capable of systematically interacting with more variant, but recurrent, features of the environment.


So red helped our primate ancestors identify apples. As thalamocortical complexity increased, it makes sense that our cognitive capacities would adapt to troubleshoot things like apples instead of things like red, simply because the stakes of things like light reflected at 650nm are low compared to things like apples. Qualia, you could say, are existentially stable. Redness doesn’t ambush or poison or bloom or hang from perilous branches. It makes sense that the availability of information and corresponding cognitive resources would covary with the ‘existential volatility’ of a given informatic configuration (prerepresentational or representational).


What Tononi gets is that red engages the global configuration in a fixed way, one that does not allow it nearly so many ‘degrees of dynamic reconfiguration’ as apples do. Okay, so this last bit isn’t so much Tononi as the way IITC plugs into the Blind Brain Theory (BBT). But his insight provides a great starting point.


So what explains the ‘redness’ of red, the raw, ineffable feel of pain? This is where qualiaphiles will likely want to jump ship. From Tononi’s Q-space perspective, a given space (heuristic configuration) simply is what it is – ‘irreducible,’ as he puts it. Thanks to evolution, we inherited a wild variety of differentiating shapes, or qualia, by happenstance. If you want to understand what makes red red, let me refer you to the anthropic principle. It’s part of basic cable. These are simply the channels available when cable first got up and running.


Returning to BBT, the thing to appreciate here is what I call encapsulation. Even though the brain is an open system, conscious experience only expresses information that is globally broadcast or integrated. If it is the case that System 2 deliberation (reflection) is largely restricted to globally broadcast or integrated information, then our reasoning is limited to what we can consciously experience. Our senses, of course, provide a continuous stream of environmental information which finds itself expressed in transformations of aggregate heuristic configurations, representations. With apples we can vary our informatic perspective and sample hitherto unavailable information to leverage the various forms of dynamic reconfiguration that we call cognition.


Not so with red. Basic heuristic configurations (combinatorial prerepresentations or qualia) are updated, certainly. Green apples turn red. Blood dries to brown. But unlike apples, we can never get up and look at the backside of red, never access the information required to effect the various degrees of dynamic reconfiguration required for cognition.


It’s a question of informatic ‘perspective.’ With qualia we are trapped in our neural armchair. The information available to System 2 deliberation (reflection) is simply too scant (and likely too mismatched to the heuristic demands of environmental cognition) to do anything but rhapsodize or opine. Red is too greased and cognition too frostbitten to do the juggling that knowledge requires. (Where science is in the business of economizing excesses of information, phenomenology, you could say, is in the business of larding its shortage).


But this doesn’t mean that qualia can’t be naturalistically explained. I just offered an outline of a possible explanation above. It just means that qualia are fundamentals of our cognitive system in a manner perhaps similar to the way the laws of physics are fundamentals of the universe. (And it doesn’t mean that an attenuated ‘posthuman’ brain couldn’t be a radical game changer, providing our global configuration with the cognitive resources required to get out of our neural armchair and ‘scientifically’ experiment with qualia). The qualification ‘our cognitive system’ above is an important one. What qualia share in common with the laws of physics has to do with encapsulation, which is to say, constraints on information availability. What qualia and the laws of physics share is a certain informatic inscrutability, an epistemological profile rather than an ontological priority. The same way we can’t get out of our neural armchair to see the backside of red, we can’t step outside the universe to see the backside of the Standard Model.*


But the fact is the kind of nonsemantic informatic approach I’m taking here marks a radical departure from the semantic approaches that monopolize the tradition. Peter, in his Conscious Entities critique of IITC linked above, references Frank Jackson’s famous thought experiment of Mary, the colour-deprived neuroscientist. The argument asks us to assume that Mary has learned all the physical facts about red there are to know while sequestered in a black and white environment. The question is whether she learns a new fact, namely what red looks like, when she encounters and so experiences red for the very first time. If the answer is yes, as intuition wants to suggest, then it seems that qualia constitute a special kind of nonphysical fact, and that physicalism is accordingly untrue.


As Peter writes,


And this proves that really seeing red involves something over and above the simple business of wavelengths and electrical impulses. Doesn’t it? No, of course not. Mary acquired no new knowledge when she saw the rose – she had simply had a new experience. Focussing too exclusively on the role of the senses as information gatherers can lead us into the error of supposing that to experience a particular sight or sound is merely to gain some information. If that were so, reading the label on a bottle of wine would be as enjoyable as drinking it. Of course experiencing something allows us to generate information about it, but we also experience the reality, which in itself has nothing to do with information.


The reason he passes on IITC is that he thinks qualia obviously involve something over and above ‘mere information,’ what he calls the ‘reality’ of the experience. This is a version of a common complaint you find levelled against Tononi and IITC, the notion that information and experience are obviously two different things–otherwise, as Peter says, “reading the label on a bottle of wine would be as enjoyable as drinking it.” Something else has to be going on.


This is an example of a demand I have only ever seen in qualia debates: the notion that the explanans must somehow be the explanandum. Critics always focus on how strange this demand looks when mapped onto other instances of natural explanation. Should chemical notations explaining grape fermentation get us drunk? Should we reject them because they don’t? But the interesting question, I think, is why this move seems so natural in this particular domain of inquiry. Why, when we have no problem whatsoever with the explanatory power of information regarding physical phenomena, do we suddenly balk when it’s applied to the phenomenal?


In fact, it’s quite understandable given the explanation I’ve given above. Rather than arising as an artifact of the radical (and quite unexplained) disjunct between mechanistic and phenomenal conceptualities, as most seem to assume, the problem lies with the neural armchair. The thing to realize (and this is the insight that BBT generalizes) is that qualia are as much defined by their informatic simplicity as by the information they provide. Once again, qualia are baseline heuristics (prerepresentations): like gensyms, they are defined by the information they lack. Qualia are those elements of conscious experience that lack a backside. Since the province of explanation is to provide information, to show the backside, as it were, there is a strange sense in which we should expect our explanations to jar with our phenomenal intuitions.
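

For those who don’t speak Lisp: a gensym is a freshly minted symbol possessing a guaranteed-unique identity and nothing else, no properties, no internal structure to inspect. A toy sketch of the analogy (the tokens and names are mine, purely illustrative):

```python
# Toy illustration of the gensym analogy: tokens that do functional work
# while carrying no inspectable structure, no 'backside'.
RED, GREEN = object(), object()   # fresh, structureless tokens

assert RED is not GREEN           # the system discriminates them reliably
responses = {RED: "stop", GREEN: "go"}
print(responses[RED])             # -> "stop": the token earns its keep

# But ask what makes RED 'red' and nothing comes back; there is only identity:
print(RED.__dict__ if hasattr(RED, "__dict__") else "no attributes to inspect")
```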


Rethinking the Mary argument in nonsemantic informatic terms actually illustrates this situation in rather dramatic fashion. So Mary has, available for global broadcasting or integration (conscious processing), representations (knowledge of the brain as object) leveraged via prerepresentational systems lacking any colour. Suddenly her visual systems process information secondary to light with a wavelength of 650nm. Her correlated neurophysiology lights up. In informatic terms, we have two different sets of channels–one ‘access’ and one ‘phenomenal’–performing a variety of overlapping and interlocking functions matching her organism to its environments. For the very first time in her brain’s history, red is plugged into this system and globally broadcast or integrated, becoming available for conscious experience. She sees ‘red’ for the very first time.


Certainly this constitutes a striking change in her cognitive repertoire, and so, one would think, in her knowledge of the brain as subject.
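

A toy model might help fix the picture, nothing rigorous (the class and its names are illustrative assumptions of mine, not anything Tononi formalizes): the ‘access’ channel carries inspectable descriptions, the ‘phenomenal’ channel carries opaque tokens, and broadcasting a new token changes the repertoire without adding a single new description.

```python
# A toy model of the two channels: 'access' content is inspectable and
# reportable; 'phenomenal' content is an opaque token (see the gensym
# sketch above). Purely illustrative.
class GlobalWorkspace:
    def __init__(self):
        self.access = {}         # descriptions: knowledge of the brain as object
        self.phenomenal = set()  # opaque tokens: usable, but structureless

    def learn(self, topic, description):
        self.access[topic] = description   # what Mary gets in her cell

    def broadcast(self, token):
        self.phenomenal.add(token)         # plugging a channel into the system

mary = GlobalWorkspace()
mary.learn("red", "light with a wavelength of 650nm, plus all the neuroscience")

RED = object()        # the experience: a fresh, structureless token
mary.broadcast(RED)   # 'seeing red' for the very first time

# Her repertoire has changed: a new discriminable token is online. But her
# stock of descriptions is exactly what it was before she left the room.
print(len(mary.phenomenal))   # 1
print(mary.access["red"])     # unchanged
```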


From a nonsemantic informatic perspective, the metaphysical implications (the question of whether physicalism is true) are merely symptomatic of what is really interesting. The Mary argument raises an artificial barrier between what are otherwise integral features of cognition, and so pits a fixed prerepresentational channel against a roaming, representational one. Through it, Jackson manages to produce a kind of ‘conceptual asymbolia,’ a way to calve phenomenality from thought in thought, and so throw previously implicit assumptions/intuitions into relief.


The Mary Argument demonstrates something curious about the way information that makes it to global broadcasting or integration (conscious awareness) is ‘divvied up’ (while engaging System 2 deliberation, or reflection, at any rate). The primary intuition it seems to turn on, the notion that ‘complete physical knowledge’ is possible absent prerepresentational components such as red, suggests a powerful representational bias, to the point of constituting a kind of informatic neglect. We have already considered how red is dumb, mute, like a gensym. We have also considered the way deliberative cognition possesses a curious insensitivity to information outside its representational ambit. In rank intentional terms, you could say we are built to look through. The informatic role of qualia is left mysterious, unintegrated, unbroadcast–almost entirely so. We might as well be chained in Plato’s cave where they are concerned, born into them, unable to vary our perspective relative to them.


The Mary argument, in other words, doesn’t so much reveal the limitations of physicalism as it undermines the semantic assumptions that underwrite it. Of course ‘seeing red’ provides Mary with a hitherto unavailable source of information. Of course this information, if globally broadcast or integrated, will be taken up by her cognitive systems, dynamically reconfiguring ‘K-space,’ the shape of knowledge in her brain. The only real question is why we should have so much difficulty squaring these platitudinal observations with our existing understanding of knowledge.


The easy answer is that these semantic assumptions are themselves prerepresentational heuristics, kluges, if you will, selected for their robustness, and matched (in the ecological rationality sense) to our physical-environmental cognitive systems. But this is a different, far more monstrous story.


Ultimately, the thing to see is that Tononi’s Phi is a kind of living version of the Mary Argument. He gives us a brick o’ qualia, a book that fairly throbs with phenomenality, so seating us firmly in our neural armchair. And through the meandering of rhapsody and opinion, he gives our worldly cognitive systems something to fasten onto, information nonsemantically defined, allowing us, at long last, to set aside the old dualisms, and so range from nature to the soul and back again, however many times it takes.


Notes:


* I personally don’t think qualia are the mystery everyone makes them out to be, but this doesn’t mean I think the hard problem is solved – far from it. The question of why we should have these informatically dumb, mute qualia at all remains as much a burning mystery as ever.


