Chris Wright's Blog

September 30, 2025

Published a new book!

In case of interest: I just published a book of essays entitled "Class War, Then and Now: Essays toward a New Left." It would be awesome if people would buy it and review it! It's on Amazon, Barnes and Noble, and other sites.

In the hope of getting more reviews, I'll provide a free copy. Here: https://osf.io/preprints/socarxiv/afm....

Here's the blurb:

"Nearly fifty years of outright class war against America’s working and middle classes have brought the country to the brink of social and political collapse. According to some sources, 60 percent of Americans live paycheck to paycheck. Since 1975, $80 trillion have been transferred from the bottom 90 percent of earners to the top 1 percent. Meanwhile, little action is being taken to mitigate global warming and ecological destruction, while military budgets, used in part to wage disastrous wars and genocides, climb annually.

"There isn't much hope for the United States, or indeed for civilization, unless we can forge an international left that prioritizes class struggle above all else. It is time to fight back, by any means necessary, against a ruling class interested in nothing but profits and power. In this book, a historian of the U.S. labor movement attempts to advance this agenda through a series of essays on everything from right-wing libertarianism to the inadequacies of identity politics, from the career of Jimmy Hoffa to the catastrophic consequences of American imperialism. Victory in a war for the future of humanity is far from assured, but we’re lucky enough to be living in a time when there’s still some hope. It is our duty to act on this hope."

So, feel free to spread the word... And encourage friends to BUY the book, not just download it. :)
Published on September 30, 2025 16:06 Tags: anarchism, capitalism, class-struggle, identity-politics, imperialism, labor-history, marxism, socialism

June 22, 2021

The secret history of automation

Here are some notes on an unjustly neglected book by the great Marxist David F. Noble (social historian of technology) called Progress Without People: New Technology, Unemployment, and the Message of Resistance. It was published in 1995, but it was ahead of its time, since its warnings resonate even more profoundly today:

The information highway is barely under construction, the virtual workplace still largely experimental, but their consequences are readily predictable in the light of recent history. In the wake of five decades of information revolution, people are now working longer hours, under worsening conditions, with greater anxiety and stress, less skills, less security, less power, less benefits, and less pay. Information technology has clearly been developed and used during these years to deskill, discipline, and displace human labour in a global speed-up of unprecedented proportions. Those still working are the lucky ones. For the technology has been designed and deployed as well to tighten the corporate stranglehold on the world’s resources, with obvious and intended results: increasing dislocation and marginalization of a large proportion of the world’s population—within as well as without the industrial countries; growing structural (that is, permanent) unemployment and the attendant emergence of a nomadic army of temporary and part-time workers (the human complement of flexible production); a swelling of the ranks of the perpetually impoverished; and a dramatic widening of the gap between rich and poor to nineteenth-century dimensions.

All these tendencies have reached crisis levels by now, and even mainstream economists are warning that in the coming years automation will eliminate millions of jobs. On the other hand, automation could make possible an unprecedented life of leisure for everyone—for instance, a four-day or three-day workweek—if we could only shift the priorities of government.

One of the uses of the book is as a reminder that it is the government, not the private sector, that’s responsible for the most important technological and scientific innovations. The only reason the capitalist economy works at all is that government (i.e., the public, the working and middle classes) shoulders most of the burden and the cost of innovation. The public sector and what you could call the “public-private sector” (involving public-private partnerships, most notably in high technology and science) are the most dynamic parts of the economy. E.g.:

What mechanization was to the first industrial revolution, automation was to the second. The roots of the second industrial revolution lay in the state-sponsored technological developments of World War II. Military technologies—control systems for automatic gunfire, computers for ballistics and A-bomb calculations, microelectronics for proximity fuses, radar, computers, aircraft and missile guidance systems, and a host of sensing and measuring devices—gave rise to not only programmable machinery but also “intelligent” or self-correcting machinery. In the postwar years, the promotion of such technologies was fuelled by Cold War concerns about “national security,” the enthusiasm of technical people, management’s quest for a solution to its growing labour problems, and by a general cultural offensive to restore confidence in scientific salvation and technological deliverance following the twin traumas of depression and global war. Often with state initiative and subsidy, industrial application of these new technologies (as well as an intensification of older forms of fixed automation and mechanization) began to take hold, in steel, auto, petroleum refining, chemical processing (and uranium enrichment), and aircraft, machinery, and electrical equipment manufacture, among others (pp. 24–25).

The economy was growing healthily in the postwar era, so the slow and gradual introduction of automation didn’t lead to crises of unemployment or massive worker resistance. It started to do so by the late 1960s, though. “The increasing displacement, deskilling, and disciplining of workers in industry proceeded apace, largely unnoticed except by the workers themselves until, by the end of the 1960s, the situation exploded in an upsurge of pent-up rank-and-file militancy.” All across the Western world, the ruling class had to put down an epidemic of rank-and-file resistance, often violently, in the 1970s. Layoffs, the degradation of work, and speed-ups resulted in countless wildcat strikes and relatively new forms of working-class direct action such as “shop-floor organization, counter-planning strategies against management, rank-and-file caucuses against union leadership, and systematic sabotage.” North America and Western Europe were riven by industrial conflict.

One of the responses to all this discontent was to partially co-opt it. “The managers of some companies experimented with new methods—so-called job enrichment, job enlargement, and quality of worklife schemes—designed to absorb discontent and redirect energies along more productive paths. Sweden was a centre for such experimentation and became a model throughout the industrialized world.” In the end, most of workers’ victories from these schemes of “participation” and “job-enrichment” were very limited and short-lived. “In the wake of these limited gains, the rebellious [rank-and-file] energies that had brought them about dissipated and all but disappeared. In their place arose committees, rules, agreements, and other formal devices for dealing with the new challenges at the workplace, including the challenge of new technology.” The technology issue was formalized, bureaucratized, regulated in the form, e.g., of new contract language between unions and employers pertaining to the introduction of new technology. “Whatever these gains, however, they were achieved at the expense of removing the technology issue from the shop floor and thus from the realm of direct action available to the workers themselves. ‘With increasing formalization,’ [one sociologist] observed, ‘the spread of sabotage could once again be held in check by pressure from trade union organizations opposed to it.’”

As has often happened, then, union leadership cooperated with business to suppress and defuse rank-and-file discontent. Resistance lasted into the 1980s, as workers in Europe and North America engaged in sabotage of new technology that threatened to put them out of a job, deskill their work, and give ever greater powers of surveillance to management, but ultimately the rolling tide of automation—organized and implemented so as to benefit management, not workers—proved unstoppable. So here we are today, on the precipice.

*

Noble is an expert on the social history of technology, and he continually insists on the fact that automation has been designed in such a way as to benefit management, not workers. It isn’t some “automatic,” “natural,” or apolitical process by which the best and most efficient technological designs are chosen by engineers, adopted by businessmen, and then tested (for productivity, efficiency, profitability) in the crucible of the marketplace. Technical development “is not some abstractly rational enterprise with an internal logic all its own, but rather a human effort that reflects at every turn the relations of power in society.”

Engineers simply want to do what’s good for society, “yet, consistently, again and again, they turn out [technical] solutions that are good for the people in power (management) but often disastrous for the rest of us (workers). Can this be explained?” Yes, it can! “For one thing, few technical people have any contact whatsoever with workers; in their education and their professional careers, they typically communicate only with management. Not surprisingly, they tend to view the world pretty much as management does, whether they know it or not. They are taught and usually believe that this is simply the most objective way of looking at things, but it is, in reality, the view from the top, the perspective of those with power.”

Noble gives a couple of examples. Here’s one:

For seven years I investigated the history of automated machine tools. Much of the pioneering design and development work of these tools took place at MIT, and I spent many months poring over the vast collection of documents from the ten-year project. I discovered that the engineers involved in creating this self-professed revolution in metal-working manufacturing had been in constant contact with industrial managers and military officers, who had sponsored and monitored the project. Yet I found not a single piece of paper indicating that there had been contact with any of the many thousands of men and women who work as machinists in the metal-working industry—those most knowledgeable about metal-cutting and…those most directly affected by the technical changes under development… The engineering effort was essentially a management effort, and the resulting technology reflected this limited perspective—the worldview of those in power (p. 74).

In our society, an authoritarian pattern predominates in all institutions and workplaces. “So when an engineer begins to design a top-down technical system, he reasonably assumes from the outset that the social power of management will be available to make his system functionable. Such authoritarian systems are also simpler to design than more democratic ones, since they entail fewer independent variables, and this also makes them more appealing to designers. Finally, authoritarian systems satisfy the engineer’s own will to control and offer the engineer a powerful place in the scheme of things. Thus, for all these reasons, new technical systems are conceived from the outset as authoritarian ones, perfectly suited for today’s world.”

Behind the history of industrial automation, Noble found not merely technical and economic considerations but rather three overriding impulses: “1) a management obsession with control; 2) a military emphasis upon command and performance; and 3) enthusiasms and compulsions that blindly fostered the drive for automaticity.”

So, first of all, managers “consistently solicit and welcome technologies that promise to enhance their power and minimize challenge to it, by enabling them to discipline, deskill (in order to reduce worker power as well as pay) and displace potentially recalcitrant workers. Perhaps more than any other single factor, this explains the historical trend toward capital-intensive production methods and ever more automatic machinery, which have typically been designed with such purposes in mind.” These purposes, and technologies designed in accord with them, are hardly new: they go back to the early days of the first industrial revolution. But let’s focus on more recent times:

In the late 1940s control engineers at MIT (who had just completed a rolling mill control system designed to enable Bethlehem Steel management to eliminate “pacing” by workers) turned their “fertile genius” to the metal-working industry. The ultimate result of their efforts, “numerical control” (NC), reflected management’s [priorities] and set the pattern for all subsequent development of what are now known as computer-aided manufacturing systems. As the very name suggests, control was and remains its essence, not just management control of machines but, through them, of machinists as well.

Noble quotes industry insiders on the nature of numerical control. For example: “With numerical control, there was a shift of control to management. The control over the machine was placed in the hands of management.” “I remember the fears that haunted industrial management in the 1950s. There was the fear of losing management control over a corporate operation that was becoming ever more complex and unmanageable. Numerical control is restoring control of shop operations to management.” “Numerical control has been defined in many ways. But perhaps the most significant definition is that [it] is a means for bringing decision-making in many manufacturing operations closer to management. Since decision-making at the machine tool has been removed from the operator and is now in the form of pulses on the control media, [NC] gives maximum control of the machine to management.” Automation, in the ways it has been carried out, is therefore just another stage in the long history of “scientific management” (sometimes called Taylorism, though that makes it too narrow) that Harry Braverman wrote about in the 1970s.

In the history of automated machine tools, alternative designs, more democratic and worker-friendly than numerical control, were proposed, but they never got much funding from military or (later) industrial backers. “Thus, NC became the dominant and, ultimately, the only technology for automating metal-working.”

The perfect compatibility of the military mentality with the business mentality is interesting, and telling. Sorry for the long quotation, but it’s informative:

The military has always played a central role in the technological development of U.S. industry, from mining and metallurgy to shipping and navigation, from interchangeable parts manufacture to scientific management. As the army and navy have been the major movers in the past, the air force has led the way in our time… If we just consider today’s so-called high technology—electronics, computers, aerospace, cybernetics (automatic control), lasers—all are essentially military creations. When some of these war-generated technologies were brought together to automate the metal-working industry, the military was once again the driving force.
From the start in the late 1940s down to the present day, the air force has been and remains the major sponsor of industrial automation. With regard to numerical control, the air force underwrote the first several decades of research and development of both hardware and software, determined what the technology would ultimately look like by setting design specifications and criteria to meet military objectives, created an artificial market for the automated equipment by making itself the main customer and thereby generating demand, subsidized both machine-tool builders and industrial (primarily aerospace) users in the construction, purchase, and installation of the new equipment, and even paid them to learn how to run it.
Numerical control was just the beginning of air force involvement in the automation drive. The air force numerical-control project had global significance; on a recent visit to a locomotive factory in Prague, I was surprised to find the air force NC programming system in use even there. And before long, this single project had evolved into the more expansive Integrated Computer Aided Manufacturing Program. More recently, ICAM became the still more ambitious and diversified MANTECH (manufacturing technologies) programs, designed to promote the computer automated approach to manufacturing not only in industry but also in universities…
The effects of this military involvement reflect the peculiar characteristics of the military world. First and most obvious is the military emphasis upon command, the quintessence of the authoritarian approach to organization. This means, essentially, that subordinates must do as they are told, with no ifs, ands, or buts; the intent is to eliminate wherever possible any human intervention between the command (by the superior) and the execution (by the subordinate). It is easy to understand the military emphasis upon automation, given its potential for eliminating such intermediate steps… In the military outlook, an army of men behaving like machines is readily replaced by an army of machines. This command orientation neatly complements and powerfully reinforces the managerial obsession with control. If the business suit and the uniform are interchangeable in our day, so too are the minds that go with them.

As for the idea that the ostensibly “no-nonsense economic rationality of profit-seeking businessmen” is primarily what guides their adoption of new technologies, it’s largely false. Noble presents fascinating evidence that businessmen aren’t motivated mainly by the prospect of cost savings or higher profits in their adoption of automation. Rather, there are various cultural, social, and psychological pressures to get the newest cool thing, to follow the herd mentality and do what others are doing, to obey the supposedly objective momentum of technological progress even without calculating whether some new machine is profitable or might actually increase costs. It seems that rigorous studies of the cost effectiveness of new equipment—does it increase productivity or not?—are rarely conducted. Actually, “by means of creative accounting and sophisticated use of the tax laws, machines can mysteriously make money for their owners even if they don’t work or are never used.”

So the dominant ideology of the “inevitable laws of technological progress” and the hyper-rationality of businessmen’s economic genius is nonsense. New equipment is frequently unnecessary or costlier than what was used before.

What about the vaunted discipline of the market? Doesn’t this ensure that firms whose acquisitions of new equipment aren’t profitable will succumb to the competition? No. For one thing, the state, which is perfectly willing to spend extravagantly and wastefully, is largely responsible for the advance of automation. “Not only has it subsidized extravagant developments that the market could not or refused to bear, but it also absorbed excessive costs and thereby kept afloat those competitors who would otherwise have sunk.” When your primary customer is the government, you don’t have to worry about market discipline.

Moreover, defense-related industries “are expanding along with the military automation programs, as more and more businesses rush to this state-supported sanctuary to escape the unpredictable vicissitudes of the market. At the same time, the military automation programs are today being matched by those of civilian agencies such as the Department of Commerce, the National Science Foundation, and others. All have now become the publicly funded pushers of automation madness, charting a course and promoting a pace that no self-adjusting market, had it existed, would ever have tolerated.”

Economists tend to focus on “market logic” and ignore the influence of the state, so they write as if it’s the market that is impelling the “inexorable,” “automatic” adoption of automation. But, again, this is either false or highly oversimplified.

*

Noble dedicates chapter five to an effective critique of market worship, together with the “quasi-religious faith in the automatic beneficence of technological progress” that tends to go along with it. He summarizes the quasi-religious philosophy as follows:

It goes like this—people with money are offered incentives (the chance to make more money) to urge them to invest in a new, improved plant and equipment (so-called innovation). This innovation automatically yields increased productivity and, hence, lower costs and prices, which results in greater competitiveness. Finally, this enhanced competitiveness necessarily brings about what Adam Smith, the great eighteenth-century philosopher of capitalism, called the “wealth of nations”: economic growth, jobs, cheap and plentiful commodities, in short, prosperity.

At every point, this religious faith is questionable.

For example: “To begin with, the assumption that rich people will invest in new means of production if given sufficient lucrative incentives presupposes that they would not do so voluntarily without such inducements. As such, it is itself a tacit recognition of the inadequacy of the market as a stimulus to development.” What if it isn’t lucrative to invest in new means of production, new factories or businesses or whatnot? Then capitalists won’t invest in them. In recent decades, real estate and financial speculation have become more lucrative than productive investment. This is one reason for the deterioration of infrastructure, the decline of manufacturing, and the slow collapse of society. So much for the automatic beneficence of the market.

What about the link between innovation and productivity? “When investment does in fact generate innovation [as it usually doesn’t], does such innovation necessarily yield greater productivity? The assumption here is that the return of profits to the investor will be matched by more and cheaper goods for society.” He remarks that this assumption is “the cornerstone of apologies for capitalism, its central tenet of legitimation.” But it has become highly dubious. Even the business press admits that there’s a lot of investment in labor-saving technology designed to increase profits without necessarily adding to productive output.

In any case, with all the capital investment in recent decades and supposed advances in productivity, have you noticed any declines in prices? Hardly. If there have been gains from productivity, it seems that neither workers nor consumers have captured them. (You can check business headlines from recent years to confirm this.) So there goes another justification of capitalism.

Well, I’ll end the summarizing here, even though I haven’t discussed at all a very interesting chapter on the millennium-old (Christian) origins of the Western (and now global) “religion of technology” that has become hegemonic since the first industrial revolution. It’s a rich book, despite its brevity, and worth reading; in its totality it amounts to a brilliant critique of this technology-worship (as if technology itself, regardless of social or political context, holds out salvation). Elsewhere, Noble wrote a cutting critique of the automation and commodification of higher education. It seems that his life’s mission, or one of them, was to deepen our understanding of the human meaning of technology and to critique and contextualize our blind reverence for it. As an antediluvian traditionalist, I can only wish more writers would adopt the same calling.

Published on June 22, 2021 15:20

June 15, 2021

On humans' language faculty

You may have heard of the “talking gorilla” Koko. I remember as a child reading about this fascinating creature that had been trained to use sign language to communicate desires, thoughts, feelings, and the like. This was utterly amazing to me, because the idea of non-human apes possessing even the rudiments of human language seemed outlandish. Apes seem to be on such a vastly more primitive cognitive level than humans that I found it miraculous and thrilling that Koko could apparently express somewhat sophisticated ideas. The truth, of course, was that she couldn’t.

Laura-Ann Petitto’s contribution to the first edition of The Cambridge Companion to Chomsky, titled “How the brain begets language,” is interesting and instructive in this Koko-context. (The book can be downloaded at Z-Library.) So much so that I want to quote long passages for the benefit of any other amateurs like me who might be interested in these matters. One of the intriguing things about Petitto’s chapter is the same thing that has always baffled me about people’s responses to Chomsky: somehow, ideas that strike me as perfectly reasonable, if not embarrassingly truistic, are considered controversial or obviously wrong by hordes of academics and many others. (I have to admit that this has given me a rather uncharitable view of the intelligence of the average intellectual.) E.g.:

I first met Noam Chomsky through a project that attempted to get the baby chimp Nim Chimpsky to “talk.” At nineteen, with the certainty of youth, I knew that I would soon be “talking to the animals.” Nim was the focus of our Columbia University research team’s Grand Experiment: could we teach human language to other animals through environmental input alone with direct instruction and reinforcement principles? Or would there prove to be aspects of human language that resisted instruction, suggesting that language is a cognitive capacity that is uniquely human and likely under biological control? Nim was affectionately named “Chimpsky” because we were testing some of Chomsky’s nativist [rationalist] views. To do so, we used natural sign language. Chimps cannot literally speak and cannot learn spoken language. But chimps have hands, arms, and faces and thus can, in principle, learn the silent language of Deaf people.
By the early 1970s, a surprising number of researchers had turned to learning about human language through the study of non-human apes. Noam Chomsky had stated the challenge: important parts of the grammar of human language are innate and specific to human beings alone. Key among these parts is the specific way that humans arrange words in a sentence (syntax), the ways that humans change the meanings of words by adding and taking away small meaningful parts to word stems (morphology), and the ways that a small set of meaningless sounds are arranged to produce all the words in an entire language (phonology). The human baby, Chomsky argued, is not born a “blank slate” with only the capacity to learn from direct instruction the sentences that its mother reinforces in the child’s environment, as had been one of the prevailing tenets of a famous psychologist of the time, B. F. Skinner… Innately equipped with tacit knowledge of the finite set of possible language units and the rules for combining them, the baby listens to the patterns present in the specific language sample to which she is being exposed, and “chooses” from her innate set of possible grammars the grammar she is hearing…
My departure from Project Nim Chimpsky in the mid 1970s to attend graduate school in theoretical linguistics at the University of California, San Diego, was bittersweet. It had become clear that while Nim had some impressive communicative and cognitive abilities, there was a fundamental divide between his knowledge and use of language and ours. No one can “talk to the animals” by sign or otherwise. Nim’s data, along with our close analyses of data from all other chimp language projects, unequivocally demonstrated that Chomsky was correct: aspects of human language are innate and unique, requiring a human biological endowment.

And people were surprised by this? Wow. Poor Chomsky has had to argue with perverse irrationalists his whole life.

It turns out that non-human primates are (of course) almost completely incapable of abstraction. For them, as for every other animal except humans, it seems that only what is immediately present—in sensory experience, memory, desire, or whatever—exists. For example, apes cannot construct patterned sequences of three or more signs. “After producing a ‘matrix’ of two words, they then—choosing from only the top five or so most frequently used words that they can produce (all primary food or contact words, such as eat or tickle)—randomly construct a grocery list. There is no rhyme or reason to the list, only a word salad lacking internal organization.” Petitto continues:

Alas, the whole story is even worse than irregularities in chimpanzees’ syntax, morphology, and phonology: the very meanings of their words were “off.” For one thing, chimps cannot, without great difficulty, acquire the word fruit. While apes seem to have some capacity to associate words with concrete things and events in the world they inhabit, unlike humans, they seem to have little capacity to acquire and readily apply words with an abstract sense. Thus, while chimps can associate a small set of labels with concrete objects in the world (apple for apples, orange for oranges), they have enormous difficulty acquiring a word like fruit, which is a classification of both apples and oranges. There is no tangible item in the world that is literally fruit, only instances or examples of this abstract kind-concept that seems to exist only in human heads.
For another thing, chimps do not use words in the way we do at all. When we humans use the common noun apple in reference to that small round and juicy object in the world that we eat, we do not use it to index (pick out) only one object in the world (say, a specific red apple on a table), nor do we use it to refer to all things, locations, and actions globally associated with apples. Instead we use the label to “stand for” or symbolize the set of related objects in the world that are true of this particular kind-concept in our heads. Crucially, we also know the range or scope over which word kind-concepts may apply: for example, the label apple symbolizes a set of related objects and therefore this label is used only in reference to objects, not actions… Chimps, unlike humans, use such labels in a way that seems to rely heavily on some global notion of association. A chimp will use the same label apple to refer to the action of eating apples, the location where apples are kept, events and locations of objects other than apples that happened to be stored with an apple (the knife used to cut it), and so on and so forth—all simultaneously, and without apparent recognition of the relevant differences or the advantages of being able to distinguish among them. Even the first words of the young human baby are used in a kind-concept constrained way (a way that indicates that the child’s usage adheres to “natural kind” boundaries—kinds of events, kinds of actions, kinds of objects, etc.). But the usage of chimps, even after years of training and communication with humans, never displays this sensitivity to differences among natural kinds. Surprisingly, then, chimps do not really have “names for things” at all. They have only a hodge-podge of loose associations with no Chomsky-type internal constraints or categories and rules that govern them. In effect, they do not ever acquire the human word apple.

Surprising! Frankly, what surprises me is that apes can be taught to sign “words” at all. I find that pretty impressive. Though maybe I shouldn’t, since apes obviously communicate to each other in some primitive way that involves gestures.

Aside from information about chimps’ non-acquisition of language, Petitto’s paper presents interesting data on comparisons between monolingual deaf children and monolingual hearing children, as well as bilingual hearing children in “typical” (spoken) contexts, bilingual hearing children who were taught two signed languages and nothing spoken (because their parents were deaf), and bilingual hearing children who acquired both a signed and a spoken language. One finding was that deaf babies’ development of (signed) language proceeds at exactly the same pace and follows exactly the same stages as hearing children’s development of language. (A point for Chomsky, which is to say for the biological—preprogrammed—perspective, no matter the modality of communication.) As for “bilingual hearing children exposed to both a signed and a spoken language from birth (e.g., one parent signs and the other parent speaks), [they] demonstrate no preference whatsoever for speech, even though they can hear.” In the experiment conducted, they produced their first word in French and their first word in sign language at the same time.

What’s also notable is that babies who were exposed to, say, French and English at the same time “achieved their linguistic milestones on the same timetable as monolinguals, revealing no language delay or confusion.” Isn’t that incredible? It’s hard enough—in fact, magical—for babies to construct in their head an entire language on the basis of the fragmentary, scattered linguistic data they’re exposed to from their parents. (This is the “poverty of the stimulus” miracle.) But for them to construct two languages on the basis of these confusing and conflicting data is impossible to comprehend. Imagine being an infant who hears a hodgepodge of sounds from adults, some of the sounds organized according to the rules of English and others according to those of German: you (your brain) can instinctively recognize that these differences exist and that the sounds are organized in different ways, and you build up in your brain (unconsciously) the structures of two different languages at the same time, without getting confused. That feat is absolutely astonishing. Science has no conception of how it’s accomplished.

But one thing it indicates, at least, is that Chomsky is right that the infant’s brain hungers for linguistic data; it’s desperately eager to construct linguistic structures spontaneously on the basis of the verbal sounds (or hand gestures) it encounters. There is obviously a language faculty in the human brain, a specialized mental organ that effortlessly and unconsciously uses sensory (auditory and/or visual) data to construct the unimaginably complex rules of a particular language. Humans are genetically endowed with this aptitude, this predisposition; we don’t just use our “general intelligence” and rules of association or induction or whatnot to develop a language, contrary to what empiricists think. This fact is so obvious even from this single piece of evidence—the ease with which infants learn even two or more languages at the same time! (when adults find it infinitely more difficult)—that even I, who have a pretty dim view of intellectuals, find it shocking they can reject Chomsky’s “biologism” (Universal Grammar, etc.) and maintain an allegiance to empiricism.

I know I’m just a dilettante and have no professional status in linguistics or related disciplines. But these aren’t technical issues that you need extensive training in order to understand. They’re very general, having to do with simple rules of logic and induction (or “abduction”): it’s like inference to the best explanation. Despite what professionals might say—not Chomsky, who’s consistently democratic, but other professionals—you and I have a right to have an opinion about straightforward topics like this, insofar as we’re capable of reading and reasoning. (Not all people are truly capable of reasoning, but I assume you are.) The technical linguistics—no, I can’t possibly understand that. But the reasoning about rationalism vs. empiricism—yes, since I am, and you are, able to understand chains of reasoning and evaluate evidence bearing on these chains of reasoning, we’re qualified to hold opinions about general “philosophical” matters. (Of course, the more we read and the more we reflect, the more qualified we are.)

And it seems to me that however hard it might be sometimes to confirm rationalist (nativist) hypotheses, or even to flesh them out in such a way that they’re immune to all objections from empiricists, they have an enormous amount of plausibility prima facie. Not least because it’s uncontroversial that other animals are genetically preprogrammed to act in specific ways barely influenced by environmental stimulation. (E.g., baby squirrels engage in squirrely behavior (digging, etc.) even if they’ve been deprived of input from other squirrels and are confined in solitude to an isolated room.) Presumably empiricists wouldn’t deny that humans are animals, right? We’re part of the animal kingdom. So you’d think that what’s true of other animals would be true of us, no? Why should we not be “behaviorally” (mentally) preprogrammed if other animals are? We’re not angels![1]

Anyway, let’s leave this empiricist nonsense aside. Returning to interesting things… You’re aware of the infant “babbling” phenomenon, right? Ba-ba-ba-ba, etc. What does that babbling mean? For many researchers, babbling is “the initial manifestation of human language acquisition, or, at least, of language production.” This seems plausible. More specifically, though, some researchers think that babbling is “determined by the development of the anatomy of the vocal tract and the neuroanatomical and neurophysiological mechanisms subserving the motor control of speech production.” But no, that hypothesis is wrong, because deaf babies babble too! They manually, not vocally, babble! This scientific discovery of hand babbling “confirmed a claim central to Chomsky’s theory: that early language acquisition is governed by tacit knowledge of the abstract patterning of language that is biologically endowed in the species, and that this governance is so powerful that it will ‘out’ itself by mapping onto the tongue if given the tongue, or the hands if given the hands—all the while preserving linguistic structures across the two modalities.” In other words: empiricism loses again.

The idea of hand babbling is fascinating. If babbling is indeed an early manifestation of the language faculty, then, of course, you’d expect to see it manifested somehow in deaf babies too (who have the language faculty), not only hearing babies. But what does it mean to say that hand gestures can babble? Well, Petitto discovered that there is actually syllabic organization (analogous to ba-ba-ba) in deaf babies’ silent hand babbling. “In signed languages, the sign-syllable consists of the rhythmic closing and opening…alternations of the hands/arms. This sign-syllabic organization has been analyzed as being structurally homologous with the closing and opening of the mouth aperture in the production of consonant–vowel (closed–open) mouth alternations in spoken language.” So babies can express ba-ba-ba or da-da-da or ma-ma-ma with their hands, not only their mouths.

…If babies are born [as Chomsky argues] with tacit knowledge of the core patterns that are universal to all languages, even signed languages, then the [Chomskyan] hypothesis predicts that differences in the form of language input should yield differences in the hand activity of [deaf babies]. In biological terms, tacit knowledge was construed [in our experiment] as the baby’s sensitivity to specific patterns at the heart of human language—in particular, the rhythmic patterns that bind syllables, the elementary units of language, into baby babbles, and then into words and sentences.

Babies are, it appears, highly sensitive to these linguistic patterns, even if they’re deaf and exposed only to signed language. But why else would they be so sensitive if not that they were predetermined to be so sensitive?

Anyway, the broader point is that deaf children experience the same linguistic milestones at the same rate as hearing children, and the Chomskyan hypothesis that internal linguistic structures will find some way to manifest themselves in external expression, even if the usual verbal/auditory mode is impaired, is supported. The language faculty is a core genetically determined and elaborated property of humans. As should have been obvious all along.

None of this is to deny, however, the great conceptual and scientific difficulties associated with this faculty. Such as how and why it appeared in the first place (a question we’ll probably never have a good answer to, as Richard Lewontin would argue), or what the universal linguistic rules are, or how the baby's brain processes sounds to construct a language on the basis of these rules. The human brain is mostly a mystery and will, I suspect, remain so.

[1] As I've noted elsewhere on this website, the indubitable fact of psychological/behavioral preprogramming has uncomfortable implications with regard to the postmodern dogma that gender (gendered thinking and behavior) is nothing but a social construction: viz., that the dogma is flat-out false. Postmodern empiricism and idealism (elevation of "discourses" and "culture" above class structures and sheer brute material facts, including biological facts) are just the usual flaky irrationalism you can expect from intellectuals.

Published on June 15, 2021 15:50

June 9, 2021

Introduction to my forthcoming book

If all goes well, I'll be publishing a book sometime in the not-too-distant future. It'll be called something like Popular Radicalism and the Unemployed in Chicago during the Great Depression. I thought I'd copy here the draft of its Introduction, based on the longer and more diffuse Introduction to my PhD thesis. I don't know how interesting the arguments are, but at least they have the merit of insisting on the fundamental importance of class struggle in a time when most academics (supposedly "leftist," according to critiques from the far right) still prefer not to make such sweeping statements of Marxist common sense, instead subordinating class to gender or race, when not avoiding semi-leftist commitments altogether.

Excerpts from a couple of other chapters, which I posted months ago, are linked below, for any readers with a streak of masochism. Later I might also post an excerpt from chapter five, on the brutal inhumanity of the Illinois and Chicago governments in the 1930s.

Capitalism and mass unemployment are inseparable. Ever since the destruction of the English handloom weavers following the introduction of the power loom in the early nineteenth century, the presence of a “reserve army of the unemployed” has been a permanent feature of capitalist society. Through perpetual structural change and business cycles, capitalism has manufactured unemployment no less reliably than industrial innovation, environmental degradation, and class conflict. The subject of this book is the collective suffering and struggles of the long-term unemployed during one of the great upheavals in American history, the Great Depression.

Unemployment during the Depression is hardly a novel subject of historical inquiry, so the question immediately arises, why return to a topic that has already been studied by historians? Can anything new be said? In part, my interest in this old topic has been motivated by ominous parallels between the political economy of the present-day United States and the political economy that eventuated in the Depression. The most obvious parallel, for example, is the extreme income and wealth inequality of the two eras. “U.S. wealth concentration,” the economist Gabriel Zucman wrote in 2019, “seems to have returned to levels last seen during the Roaring Twenties.”[1] This parallel is rooted, to some extent, in the comparable weakness of organized labor in the 1920s and today. Similar stock-market bubbles, too, have helped cause the wealth inequality of the two analogous eras. The income of the working class has, in both cases, stagnated as expansions of consumer credit have been necessary to keep the economy growing. In 1929, the weakness of aggregate demand that had been covered up by massive extensions of credit was largely responsible for the greatest economic contraction in the history of capitalism. It would be reasonable to conclude, in short, that we have a bleak future ahead.

But this fact in itself is hardly sufficient justification to write another social history of the unemployed. Rather, the justification, I hope, is that my interpretation differs from that of earlier scholars. Instead of simply describing the history for the sake of describing it, I want to use it to support a certain point of view about the nature of society. In particular, I want to defend some simple, even vulgar Marxian and anarchist ideas relating to capitalist institutional functioning and, conversely, anti-capitalist tendencies in human behavior. As for the choice of Chicago as the city to study, the fact that it was a major site of unemployed activism in the 1930s—being one of the cities hit hardest by the Depression—was what elicited my interest. Given the gallimaufry of ethnicities, races, classes, and political persuasions that constituted Chicago in these years, it would be hard to find a more fascinating and revealing object of study than this city. A local study of such a metropolis—central to the American political economy—would, it seemed, permit a sharper focus and greater depth than if I had undertaken a diffuse study of the entire country.

The social history of the jobless and underemployed masses, an ever-shifting group of people who, despite their teeming numbers, are often invisible and forgotten, is of interest in itself. It provides a lens through which to view some of the most adverse social consequences of capitalism, and it offers insight into how people and communities react to devastating loss—loss of income, loss of identity, loss of stability, loss of modes of sociability and self-expression. It needs interpretation, however; and here is an opportunity to add further interest to the subject.

The interpretation that guides the book amounts to a rejection of the sort of attitude that is all too easy to adopt with regard to scattered and atomized millions of unfortunates like the long-term unemployed. It is expressed in historian William Leuchtenburg’s judgment that “most of the unemployed meekly accepted their lot,” that the jobless man in the 1930s “spent his days in purposeless inactivity.” Society is inclined to sweeping condescension toward those who have lost their livelihood, who have consequently, in a sense, become social outcasts. It is as if they have been rendered passive, hopeless, apathetic, even apolitical. “These are dead men,” an observer wrote early in the Depression. “They are ghosts that walk the streets by day.”[2] They drift along aimlessly, pitifully acquiescent, the flotsam and jetsam of a turbulent society tossed by economic gales.

Instead, throughout this book I emphasize the realism and resourcefulness, the active resistance, of the millions of families who were, to a large degree, cast aside by an unfeeling world. While despair and “acquiescence” were hardly absent, I prefer to focus on the element of what one might call spontaneity in the consciousness and behavior of the Depression’s victims—the element of creativity, freedom, resilience, adaptability, and resistance to dehumanization. That is to say, I emphasize the old Marxian theme of struggle, indeed class struggle. My application of this concept of class struggle to the long-term unemployed, a group of people who have rarely been of much interest to Marxists, may seem perverse, but I think it is defensible on the basis of a few elementary considerations. First of all, as the historian G. E. M. de Ste. Croix argued long ago, there is no reason that class struggle need entail a lucid class consciousness or explicitly political action, or even collective action at all.[3] Class conflict, and therefore struggle, is implicit in the very structure and functioning of economic institutions, which are manifestly grounded in the subjugation and domination of one class by another. It is perfectly reasonable to have an “objectivist” understanding of class struggle, and it is in this sense that Marx made his infamous but broadly correct declaration that the history of all hitherto existing society (meaning class societies, not small-scale tribal ones) is the history of class struggle.[4]

Furthermore, the very efforts of the poor and the unemployed to survive in a hostile world can themselves be called a manifestation of class struggle, being determined by one’s location or non-location in a set of economic structures. One naturally adopts an antagonistic (or else a prudentially obedient) stance vis-à-vis economic and political authorities; correlatively, efforts to survive and adapt frequently involve collective solidarity, the solidarity of the poor with the poor. I take the feminist slogan “the personal is political” seriously: there can be a kind of political content in the most mundane day-to-day activities. In contexts of severe deprivation, the mere fact of tenaciously surviving can be a type of resistance to dominant social structures, a way of asserting oneself against realities of class and power that are, in effect, designed to crush one under the boot of the ruling class or even to erase one’s existence. And out of this mundane resistance can easily emerge more consciously political action: mass demonstrations for expansive unemployment insurance, marches on relief stations organized by Unemployed Councils, alliances between employed and unemployed workers or farmers and industrial workers. Whether individual or collective, these fights for dignity and survival are all in the mode of class struggle, a concept that thereby becomes of much broader applicability than it might have seemed.

Said differently, in this book I apply James C. Scott’s “weapons of the weak” framework to the study of the unemployed. In his 1989 paper “Everyday Forms of Resistance,” for example, Scott refers to such acts as “foot-dragging, dissimulations, false compliance, feigned ignorance, desertion, pilfering, smuggling, poaching, arson, slander, sabotage, surreptitious assault and murder, [and] anonymous threats” as characteristic forms of resistance by relatively powerless groups. “These techniques,” he observes, “for the most part quite prosaic, are the ordinary means of class struggle.”[5] Against the charge that he makes the concept of class resistance overly inclusive, Scott marshals a number of arguments, for instance that when such activities are sufficiently generalized to become a pattern of resistance, their relevance to class conflict is clear. Thus, even when workers shirk on the job or when the poor dissimulate to authorities in the hope of obtaining more unemployment relief, class resistance to dominant institutions and inegalitarian values is occurring.

In fact, however “hegemonic” values of capitalism (such as individualistic acquisitiveness), nationalism, and submission to authority may appear when one casts one’s glance over a seemingly well-ordered society, implicit opposition to such values and structures is nearly ubiquitous.[6] And it would be a fruitful terrain of study for historians, sociologists, and anthropologists to excavate such latent or explicit opposition. If capitalism, for instance, means private ownership of the means of production, private control by a “boss” over the workplace, production for the single purpose of accumulating profits that are privately appropriated by the owners, and such tendencies as ever-increasing privatization of society, the mediation of more and more human interactions through market processes, and commodification of even human labor-power, nature, and ideas, then it can be shown that the large majority of people are profoundly ambivalent or outright opposed to it. Much of labor history has this implication, though it is not always made clear.

Even apart from empirical analysis, considerations of a more transhistorical nature support the perspective being sketched here. The late anthropologist David Graeber argued that, notwithstanding appearances of social atomization and cutthroat competition in capitalist society, on a deeper level nearly everyone frequently acts in a “communistic” way. He called it “baseline communism.” For, if communism means “from each according to his abilities, to each according to his needs” (as Marx defined it), then it simply means sharing, helping, and cooperating—giving to others in need what you’re able to give them, even if it is only advice, assistance, sympathy, or some money to tide them over. Friends, coworkers, relatives, lovers, even total strangers continually act in this way. In this sense, “communism is the foundation of all human sociability”; it can be considered “the raw material of sociality, a recognition of our ultimate interdependence that is the ultimate substance of social peace,” as Graeber says.[7] Society is held together by this dense anti-capitalist fabric, into which the more superficial patterns of commercialism, the profit motive, and greed are woven. Capitalism is thus parasitic on “everyday communism,” which is but a manifestation of human needs and desires.

Lest the reader object that Graeber’s conceptualization is an inadmissible politicization of the innocuous, un-ideological facts of spontaneous compassion and altruism, I would reply, again, that to some degree “the personal is political.” The altruistic, democratic, and anarchist ideology of communism, elaborated by such thinkers as Peter Kropotkin, is little but an elevation and generalization of deep-seated “moral” tendencies—propensities of “mutual aid”—in human nature.[8] When socialists or less politically conscious people object to the brutalities of capitalist society, they are doing so on the basis of “un-ideological” impulses of sympathy and compassion, values of individual self-determination and group cooperation, which are, historically speaking, the heart of anarchist communism. It is therefore hardly far-fetched to perceive the seed of political radicalism in some of the most quotidian practices and emotional impulses of ordinary people, just as radicalism is latently or consciously present in the class struggles of the poor or the relatively powerless.

While everyday communism may, informally, be widespread even in the higher echelons of corporate America, historically it has been especially pronounced among the lower classes—the peasantry, industrial workers, struggling immigrants, the petty-bourgeoisie—who have relied on it for survival in hard times and even in normal times. Moreover, these classes have simply not been as deeply integrated into commercial structures and ideologies as the upper classes have. Social history has done much to illuminate the “communism” (without calling it that) of the American working class during its many formative decades, through description of the thick networks of voluntary associations that workers created among themselves, and of the “mutualist” ethic to which they subscribed in the context of their battles with employers, and in general of the vitally public (anti-capitalist, anti-market, anti-individualistic) character of much of their shared culture up to at least the 1940s (in fact beyond).[9] The long-term unemployed as such, however, have tended to be overlooked in this historiography, so I try to remedy that lacuna in the later chapters of this book. For unemployment did not produce only atomization, as is commonly supposed; it also gave rise to the opposite, community and solidarity. And that is what is most interesting to study.

My “agenda” with this book, then, is to highlight the brute material realities and imperatives that structure social life. Rather than focusing on cultural discourses, mass political indoctrination, ideological consent, or the hegemony of the ruling class as forces of social cohesion, I emphasize the more basic facts of class conflict, economic and political coercion, and ruling-class violence (or its threat) as fundamental to containing the struggles and strivings of subordinate groups. This was true in the 1930s and it is true today, notwithstanding the tendency of contemporary humanistic scholarship to privilege discourses over the role of violence and institutional compulsion. (Graeber makes an apt comment in The Utopia of Rules (2015): “graduate students [are] able to spend days in the stacks of university libraries poring over Foucault-inspired theoretical tracts about the declining importance of coercion as a factor in modern life without ever reflecting on the fact that, had they insisted on their right to enter the stacks without showing a properly stamped and validated ID, armed men would have been summoned to physically remove them, using whatever force might be required.”[10]) It was force, first and foremost, that contained the Depression’s mass groundswell of opposition to basic norms and institutions of capitalism, a groundswell anchored initially in an unemployed constituency. As we’ll see—contrary to liberal verities that have reigned since the postwar era—the popular movements of the early 1930s were in effect quasi-socialist and collectivist in their goals and practices.[11]

It is doubtless true that we all have a “divided consciousness” on questions of social and political organization, commitments to contradictory values—commitments not always conscious but revealed in our behavior—and are susceptible to indoctrination by institutions in the media, politics, and the corporate economy. Scholarship has established this fact beyond doubt.[12] Since at least the time of World War I and the Creel Committee on Public Information (dedicated to “manufacturing consent” in favor of America’s participation in the war), government and big business have devoted colossal resources to molding the public mind in a way friendly to the power of the ruling class. And their efforts have often met with considerable success. On the other hand, the very fact that it is necessary to constantly deluge the public with overwhelming amounts of propaganda, and to censor and marginalize views and information associated with the political left, is significant.[13] Why would such a massive and everlasting public relations campaign be necessary if the populace didn’t have subversive or “dangerous” values and beliefs in the first place? It is evidently imperative to continuously police people’s behavior and thoughts in order that popular resistance does not overwhelm structures of class and power.

What is interesting about the 1930s is that the ordinary methods of mass regimentation and indoctrination—methods that at the best of times are only partially successful (as shown, e.g., by polls[14])—substantially broke down and the working class had an opportunity to collectively fight for its interests and achieve some limited versions of its goals. Insofar as society in the coming years may see a similar breakdown of established norms and hierarchies, it is of interest to reconsider that earlier time.

The fact is that the political program of a remarkably broad swath of Americans in the 1930s would, if enacted, have constituted a revolution without a “revolution.” Upton Sinclair’s End Poverty in California campaign, Huey Long’s Share Our Wealth program, Father Charles Coughlin’s overwhelmingly left-wing radio broadcasts in 1934 and 1935 (“Capitalism is doomed and not worth trying to save”), and the immensely popular though forgotten Workers’ Unemployment Insurance Bill, introduced in Congress in 1934 and 1935 in opposition to the more conservative Social Security Act, all amounted to full-on class war against the rich.[15] Again, this is not the received interpretation among historians, who have often preferred to emphasize (and puzzle over) Americans’ supposed individualism and conservatism relative to, say, the “socialistic” and “class-conscious” Europeans, but in chapter six I will defend my unorthodox interpretation at some length.[16]

The book is organized as follows. In chapter one I provide a brief overview of the Great Depression and its effects on Chicago, and then, at the end, summarize some of the main arguments I’ll make in later chapters. The second chapter is different from the others in saying nothing about the agency of the unemployed, consisting instead of a litany of the woes they had to endure. While not much is said explicitly about the machinations of Chicago’s political and business elite, in its totality the chapter serves as an implied critique of the class priorities of an elite happy to sacrifice the well-being of hundreds of thousands on the altar of “lower costs.”

The third chapter explores some of the dimensions of people’s “activeness,” specifically the ways they coped with the tragedies that had befallen them. Having been virtually cast out of many of society’s dominant institutions, the long-term unemployed had to reconstruct their lives even in the midst of their collapse. In most cases this would not have been possible if the poor had not been munificent in aiding one another—a feature of Depression life that scholars have still not exhaustively analyzed. In addition, I examine the many ways in which the Depression’s victims constructed their own modes of recreation, from sports to gambling to dancing.

The fourth chapter is devoted to “the unattached,” who often had to live in flophouses or public shelters because they could not afford their own rooms. Not until late 1935 did Chicago’s relief administration provide outdoor relief, or home relief, to most of the unattached, and even then thousands still used the free shelters that remained open or the cheap flophouses in the Hobohemian district. I describe the miserable conditions in which “shelter men” lived, conditions that reveal much about the class-determined priorities of the economic and political elite. Shelter clients, it seems, tended to be well aware of class structures and the conflict between rich and poor that shaped U.S. politics, even organizing with the help of Communists to press for changes in shelter administration. I focus on what these men thought of their situation, and on how they adjusted to being the objects of inhumane policies.

In the following chapter I discuss three types of institutions that had an impact on the unemployed: governments, unions, and churches. With regard to the first, I demonstrate what a low priority the well-being of the poor was to the Chicago and Illinois governments by recounting the dreary story of relief financing from 1930 to 1941, which is to say the story of how political authorities singularly failed to provide for the millions of Illinoisans thrown out of work. As a wealthy state that periodically even had budget surpluses, Illinois certainly could have afforded to be more generous than it was in the funds it diverted to relief. (In general, historians have not sufficiently highlighted the degree to which niggardly relief policies were a political choice rather than an economic necessity.) Unions and churches, on the other hand, frequently showed striking compassion for, and solidarity with, the unemployed, although their inadequate resources prevented them from being as effective as they might have been.

The picture I delineate in this chapter might seem too clear-cut, the contrasts (between government and voluntary associations) exaggerated, as if I am simplifying or caricaturing the reality. Such a criticism, indeed, is often made of Marxian accounts: they are said to be reductive, oversimplifying, too class-focused or one-dimensional. Liberal historians, say, are apt to criticize a work like Howard Zinn’s famous People’s History of the United States for its one-sidedness or “oversimplifications,” unaware that in order to understand the world at all it is necessary to simplify it a bit and explain it in terms of general principles.[17] This is what science does, for example, abstracting from the infinite complexity of a given natural phenomenon in order to formulate a few dominant laws that provide a basis for understanding. There is little point in simply reproducing reality in all its many-splendored complexity; this is mere description for its own sake, not much different from data collection, as opposed to explanation or understanding. While complications must be allowed for and introduced, the writer who “reduces” a confusing mess of phenomena to the principle of class conflict is (if he can support his arguments with evidence) proceeding in a properly scientific way, simplifying the world in order to understand it.[18]

Thus, while I try not to romanticize the functions of unions and churches in relation to the unemployed, I do draw a rather stark contrast between the behavior of local and state governments that were substantially in thrall to the business community and the behavior of more “popular” institutions that to some extent succeeded in breaking away from the values and priorities of the ruling class. The record of unions and churches in Chicago was far from morally spotless, but in their aggregate they made a difference in the lives of the economically insecure. I am also interested in how these oppressed people, such as Blacks on the South Side, used their religious life as in part a sublimation of struggle, of opposition to dominant values and institutions.

The sixth chapter follows this account of the politics of relief with a discussion of the politics and activism of the unemployed. My main concern, again, is to highlight the realism and frequent militancy of ordinary people, to challenge the notion of their easy acceptance of what Marxists have sometimes called “bourgeois hegemony.” Especially when material comforts fall away and people sense that they are being treated unfairly, radicalization can happen very quickly. The “self-blame” of the unemployed, for example, was not such a universal reaction as historians have implied.[19] And even when there was self-blame, anger at an unjust society was not infrequently present as well. Such anger helped motivate the radicalism that emerged on local and national scales, a radicalism of both “form”—including widespread occupying of private property, sit-ins at relief stations and legislative chambers, continual demonstrations and hunger marches, collective thefts—and “content,” which is to say the policy goals many of which were in essence revolutionary.

The question of why these “revolutionary” policy goals, despite their popularity, nevertheless failed can be answered in a number of ways, but the answers all boil down to the fact that the ruling class had far more resources than oppositional movements. Through force, media censorship, and the lack of sympathy of national and state-level power centers (Congress, the Roosevelt administration, state legislatures, etc.), it was possible to suppress movements that, because of their meager resources—a deficit that itself resulted from their being contrary to the interests of the owning class—had difficulty even organizing nationally in the first place.

Throughout the book I try to make distinctions between subcategories of the unemployed, such as different ethnicities and income levels. The most obvious distinctions are between Blacks and whites, especially native whites, because the hardships of Blacks were more acute than those of whites. Not surprisingly, then, the former were more frequently militant and “class-conscious” than the latter. However, what I found in the course of research was that, despite my attempts to differentiate between groups, similar class positions tend to homogenize experiences, values, and ideas. I am reminded of what the historian Susan Porter Benson argued in her analysis of working-class family economies in the interwar years: “when it came to confronting the market, ethnicity became a kind of second-order influence; some groups, in some places, turned more to one strategy than to another, but the difference was more one of degree than of kind, and all drew on a common array of strategies.”[20] Class was supreme.

It may seem odd for a Marxist to write a somewhat positive account of the long-term unemployed, who have traditionally not been of much interest to Marxists. The actively working industrial proletariat has been seen as the most revolutionary class, the unemployed more akin to the despised “lumpenproletariat.” In fact, in Worker Cooperatives and Revolution: History and Possibilities in the United States [summarized here] I have argued that the focus on the industrial working class was always rather limited, that any collective agent of “socialist revolution”—a revolution, incidentally, that would have to be gradual rather than insurrectionary or completely “ruptural”—would surely include a variety of groups relatively disempowered or exploited by late capitalism, including service-sector workers, the young, the jobless, many peasants and farmers, etc.[21] It isn’t creditable or sensible for Marxists to be scornful of a large and permanent subcategory of the working class (viz., those without work) that will likely continue to grow in numbers in the coming years and decades. On the other hand, no group of “the oppressed” should be romanticized either. While I have found it more interesting to try to “problematize” conventional dismissive or negative stereotypes of the unemployed, I hope I have not romanticized or homogenized a very diverse group of people. By reconceptualizing class struggle, for example, I have not meant to ascribe certain conscious ideological beliefs to people many of whom doubtless remained, at least in their own eyes, politically conservative. I have simply tried to apply a more objectivist and, I think, defensible understanding of the concept than the collectivist and subjectivist (involving something called “class consciousness”) understanding that tends to prevail.

If nothing else, I hope to have partially rehabilitated a category of people who, despite the very real impact they made on American history, have generally elicited far less interest than the industrial workers who a few years later built the Congress of Industrial Organizations. This lack of interest is ironic, for it was the struggles of the jobless in the early 1930s that provoked the most fear among authorities and most threatened the stability of the social order.

[1] Quoted in Jesse Colombo, “America’s Wealth Inequality Is At Roaring Twenties Levels,” Forbes, February 28, 2019.

[2] William Leuchtenburg, Franklin D. Roosevelt and the New Deal, 1932–1940 (New York: Harper and Row, 1963), 119. See also the various adverse judgments scattered in Arthur M. Schlesinger, Jr., The Crisis of the Old Order: 1919–1933 (New York: Houghton Mifflin Company, 2003 [1957]), such as his statement that Franklin Roosevelt’s inauguration finally awoke a nation from “apathy and daze” (p. 8).

[3] G. E. M. de Ste. Croix, The Class Struggle in the Ancient Greek World, from the Archaic Age to the Arab Conquests (Ithaca: Cornell University Press, 1981), 44, 57.

[4] It is puzzling that generations of intellectuals have found problematic the Marxian claim that economic relations (or production relations), incorporating class conflict, are the foundation or the “base” of society while politics, culture, and ideologies are the “superstructure.” One would have thought this statement—admittedly a crude metaphor, but a useful one—to be mere common sense. After all, culture and politics are not somehow the product of spontaneous generation; they are brought into being by actors and institutions, which need resources in order to bring them into being. The production and distribution of resources, in particular material resources, takes place in the economic sphere. So, the way that resources are allocated according to economic structures—who gets the most, who gets the least, how the structures operate, etc.—will be the key factor in determining, broadly speaking, the nature of a given society with its culture and politics. The interests of the wealthy will tend to dominate, but at all times individuals and groups will be struggling by various means, implicitly or explicitly, to accumulate greater resources and power for themselves. –This simple argument, which grounds historical materialism or “the economic interpretation of history” in the overwhelming importance of control over resources, strikes me as compelling.

[5] James C. Scott, “Everyday Forms of Resistance,” Copenhagen Papers, no. 4 (1989): 33–62. See also James C. Scott, Weapons of the Weak: Everyday Forms of Peasant Resistance (New Haven: Yale University Press, 1985).

[6] The historian Rick Fantasia rebukes “progressive critics of American cultural life [who] tend to sustain the hegemonic myth of culture. Individualism, narcissism, and class subordination read as personal failure,” he says, “are often seen as dominant values absorbed and reproduced by the powerless with little recognition of problematic, indeed counterhegemonic, cultural practices and impulses.” Rick Fantasia, Cultures of Solidarity: Consciousness, Action, and Contemporary American Workers (Berkeley: University of California Press, 1988), 15. For a thoughtful critique of the Gramscian concept of hegemony, see Nicholas Abercrombie et al., The Dominant Ideology Thesis (London: George Allen & Unwin, 1980).

[7] David Graeber, “On the Moral Grounds of Economic Relations: A Maussian Approach,” Journal of Classical Sociology, vol. 14, no. 1 (2014): 65–77. See also Graeber, Debt: The First 5,000 Years (New York: Melville House, 2011).

[8] See Peter Kropotkin, Mutual Aid: A Factor of Evolution (Mineola, New York: Dover Publications, 2006 [1902]) and The Conquest of Bread (Mineola, New York: Dover Publications, 2011 [1906]).

[9] See, among countless others, Herbert Gutman, Power and Culture: Essays on the American Working Class, ed. Ira Berlin (New York: Pantheon Books, 1987); David Montgomery, The Fall of the House of Labor (Cambridge: Cambridge University Press, 1987); Leon Fink, The Maya of Morganton: Work and Community in the Nuevo New South (Chapel Hill: University of North Carolina Press, 2003); Paul Avrich, Sacco and Vanzetti: The Anarchist Background (Princeton: Princeton University Press, 1991); Susan Porter Benson, Household Accounts: Working-Class Family Economies in the Interwar United States (Ithaca: Cornell University Press, 2007).

[10] David Graeber, The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy (Brooklyn: Melville House, 2015), 58. For critiques of postmodern idealism, see my Notes of an Underground Humanist (Bradenton, FL: Booklocker, 2013), chapters 1 and 2.

[11] On the “liberal verities”: Lizabeth Cohen, for example, in her classic Making a New Deal, argues that workers wanted nothing more radical than a somewhat stronger state and stronger unions. See Lizabeth Cohen, Making a New Deal: Industrial Workers in Chicago, 1919–1939 (New York: Cambridge University Press, 1990), chapter 6. Jefferson Cowie, following Alan Brinkley and other historians, espouses an even more conventional liberalism with his insistence on the durability of “individualism” even at the darkest moments of the Depression. Jefferson Cowie, The Great Exception: The New Deal and the Limits of American Politics (Princeton: Princeton University Press, 2016), chapter 4.

[12] See, e.g., Alex Carey, Taking the Risk Out of Democracy: Corporate Propaganda versus Freedom and Liberty (Urbana: University of Illinois Press, 1997); Elizabeth Fones-Wolf, Selling Free Enterprise: The Business Assault on Labor and Liberalism, 1945–60 (Chicago: University of Illinois Press, 1994); Edward S. Herman and Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (New York: Knopf Doubleday Publishing, 1988); Patricia Cayo Sexton, The War on Labor and the Left: Understanding America’s Unique Conservatism (Boulder, CO: Westview Press, Inc., 1991); Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business (New York: Penguin Books, 1985).

[13] See the works cited in the previous footnote. To take just one example out of thousands, the fact that such a world-famous intellectual as Noam Chomsky has rarely been allowed to appear on mainstream American television or invited to write columns for establishment newspapers and magazines is extremely telling, in fact an eloquent confirmation of his well-known arguments regarding media propaganda and corporate self-censorship.

[14] See Benjamin I. Page and Robert Y. Shapiro, The Rational Public: Fifty Years of Trends in Americans’ Policy Preferences (Chicago: University of Chicago Press, 1992). Even in the 1980s, a time of conservative ascendancy, most Americans thought big business and the wealthy had too much power, environmental and safety regulations should be strengthened “regardless of cost,” the wealthy should pay more in taxes, etc.

[15] See Robert McElvaine, The Great Depression: America, 1929–1941 (New York: Three Rivers Press, 2009), 238–240.

[16] For a persuasive argument against this sort of American exceptionalism and in favor of the idea that “there is a history of class consciousness in the United States comparable to that of working-class movements in Britain and on the Continent,” see Sean Wilentz, “Against Exceptionalism: Class Consciousness and the American Labor Movement, 1790–1920,” International Labor and Working-Class History, no. 26 (Fall, 1984): 1–24. See also Rick Fantasia, Cultures of Solidarity. Michael Denning reconstructs the extremely broad cultural appeal and influence of communism, socialism, and Marxism during the 1930s in The Cultural Front: The Laboring of American Culture in the Twentieth Century (New York: Verso, 1997).

[17] The celebrated liberal historian Jill Lepore, for instance, expresses misplaced condescension toward Zinn in her New Yorker article “Zinn’s History” (February 3, 2010). See Nathan Robinson, “The Limits of Liberal History,” Current Affairs, October 28, 2018, for a brilliant evisceration of Lepore’s own attempt at a national history, her bestselling These Truths: A History of the United States (New York: W. W. Norton & Company, 2018). Among other weaknesses, she has forgotten that the country has a labor history. This is the kind of oversight that becomes predictable when one denies the fundamental importance of class.

[18] Karl Kautsky said the same thing when he wrote, “[T]he task of science is not simply a presentation of that which is, giving a faithful photograph of reality, so that any normally constituted observer will form the same image. The task of science consists in observing the general, essential element in the mass of impressions and phenomena received, and thus providing a clue by means of which we can find our bearings in the labyrinth of reality.” Karl Kautsky, Foundations of Christianity: A Study in Christian Origins (New York: Monthly Review Press, 1972 [1908]), 12. See also Adam Jones’s interview with Noam Chomsky entitled “The Radical Vocation,” February 20, 1990, at https://zcomm.org/wp-content/uploads/zbooks/www/chomsky/9002-vocation.html, where Chomsky explains that being somewhat “black and white” in one’s analysis—e.g., dividing the world (to a first approximation) between, crudely speaking, the rulers and the ruled or the oppressors and the oppressed—is exactly the rational method, the method that’s necessary in order to have a modicum of understanding of how society works.

[19] See, e.g., Cowie, The Great Exception, 100: “The supposedly collectivist ‘red decade’ actually featured a long line of individual declarations of self-blame, guilt, doubt, and despair. Given the massive economic failure, the ways in which working people internalized the blame for their situation bordered on the pathological.”

[20] Benson, Household Accounts, 7.

[21] Chris Wright, Worker Cooperatives and Revolution: History and Possibilities in the United States (Bradenton, FL: Booklocker, 2014), chapter 4. See also my article “Marxism and the Solidarity Economy: Toward a New Theory of Revolution,” Class, Race and Corporate Power, vol. 9, no. 1 (2021). Both the book and the article are available for free online.

Published on June 09, 2021 23:46

June 6, 2021

Reflecting on life at 40

That’s Byron in Don Juan, one of the greatest and wittiest poems in history. (Probably the wittiest.) I remember reading it for the first time in my early twenties and falling in love with it. As I wrote in my journal,

Its sentiments are so in tune with mine—its Ecclesiastical sentiments—and its style—that its genius is but a secondary reason for my infatuation. But what genius! It’s a poem unlike any in the English language—any in any language—far surpassing even Pope’s Dunciad. Yes, its power is a negative one, unlike Shelley’s or Keats’, but I don’t consider that a point against it. It’s still as timely as it was when written, and it’ll always be timely, because life, after all, is nothingness. And yet beneath the sophisticated cynicism is a heart-rending despair, and an idealism, and a passion for life, and a yearning for salvation.

At the age of 40, I still come back to Byron, as I did at 22. I can’t escape him however much I’d like to, because he remains the spirit of modernity—the Faustian spirit of striving, struggle, yearning for beauty and truth and freedom, but hedonistic, nihilistic, disillusioned, self-hating even in his self-love, pathologically self-conscious, existentialist (broadly speaking) yet enamored of reason—torn between existentialism and the Enlightenment—in the end rebellious. Byron had no home, as modernity, and ultimately humanity, has no home.

I’ve written elsewhere, and in my first book, about the torment of being too aware of life’s ridiculousness. It’s always in the back of my mind, that knowledge of meaninglessness, that detachment from life. I don’t know what it is about me that makes me fixate on it—maybe the abstractness and self-consciousness of my mind, and the emotional semi-immaturity. I just can’t get over how unreal life seems, how brief and questionable in all its aspects. It can be hard even to be ambitious, to care enough, when you find it so difficult to take things seriously. You start to feel like a spectator at a theatrical farce. Sometimes it seems that hardly anything except stupidity, shallowness, cruelty, and randomness (the randomness of being alive at all) confronts you—few experiences are truly satisfying, especially as you live longer and grow less excitable or impressionable. And all the while there remains the knowledge of death, the constant passage of time bearing down on you. Your own death will mean as little to people as their deaths mean to you.

To relate a somewhat trivial grievance: in a world of flakes, dating can be very frustrating. Relations with the opposite sex are rather important, after all, and if they’re unsatisfying you’ll find that your whole life tends to be unsatisfying. It’s fashionable, even feministy, for women to publish articles inveighing against internet dating, but hardly anyone seems interested in how unpleasant dating can be for men—sending hundreds of cute little messages and getting few replies, going on hundreds of first dates (literally: in the last two decades I’ve been on hundreds, with a number of short and long relationships, including a brief marriage, interspersed), regularly encountering the most incredible flakiness, all while grappling with the pain of sexual frustration. It’s no wonder that “incels,” millions of them, are driven to despair, suffocated by loneliness and frustration, suicidally sick of mere masturbation. (And then self-righteous liberals and leftists ridicule them, laugh at them, which is a pretty sick reaction to the suffering of another human being—and drives them into the loving arms of right-wing frauds like Jordan Peterson.) It requires strength and native buoyancy not to will a renunciation of life in these atomized conditions, starved of love and, frequently, sex.

On the other hand, that’s one of the perks of getting older: you start to care less, since you develop a more full-bodied, self-confident, even good-humored appreciation of the fact that everything is bullshit. (It helps that your hormones begin to take pity on you, at long last.) Deep down there remains a certain resentment toward the world for being such a senseless place, so senseless that it seems actually unreal, illusory—I wholly sympathize with the Buddhist teaching that life is saturated with illusion—but, to paraphrase Nietzsche, you’ve grown so used to your diffuse disgust that you’re almost fond of it now. “I’ve given a name to my pain,” Nietzsche said, “and call it ‘dog.’ It is just as faithful, just as obtrusive and shameless, just as entertaining, just as clever as any other dog…”

Anyway, leaving aside post-pubescent hormonal urges and returning to more pretentious matters of the “intellect”… As a young lad I wanted to get to the bottom of things—I wanted to answer all important philosophical and historical questions, I wanted to intellectually tie everything, all of history, up in a bow, even while helping to wage “the revolution” to establish socialism or communism. It was this that would give my life meaning, this quest for and achievement of near-perfection. Life itself was not enough: it was only a means to something greater, something timeless. Actually, I think I did make intellectual progress and answered many questions to my satisfaction (more or less). Still, my “achievements,” such as they were, were far from approaching what I had dreamed of—something like transcendence—and coming face to face with the mundane reality of my limitations was depressing.

I’m finding it hard, though, to express the source of my restlessness, or even the meaning of my restlessness. Compared to most people in the world, I’m ludicrously privileged. But internally I can hardly imagine “happiness.” I think it has to do with a peculiar contradiction: the contradiction between my knowledge of the miraculousness of being alive—the impossible majesty of the human brain, the wonders of the cosmos itself, the unfathomable mysteries, down to the quantum level, contained in a single eukaryotic cell, the astounding privilege of being homo sapiens (the “sovereign of nature,” as Marx said)[1] and capable of listening to a Beethoven symphony or even of playing a Beethoven piano sonata (think of how the brain has to somehow coordinate the activity of ten fingers, exquisitely calibrating every movement according to a continuous flood of sensory input from the tactile, auditory, and visual systems that it first has to instantaneously process, sift through, analyze, compute, at the same time as it attends to innumerable other matters of internal bodily monitoring and regulation, all while processing memories of the music that has been practiced repeatedly and anticipating what the next notes are, what the next finger movements are—and remember that all this activity, unconscious and effortless, is on behalf of something totally luxurious and purely aesthetic, the creation of music, a capacity only humans have)—and the frequently dull, grim realities of actual life. Each of us is a practically divine miracle, and yet each of us is subject to extraordinary social, physical, and metaphysical indignities. It seems as though every moment of life ought to be transcendently and timelessly beautiful—for that is what nature is and what our existence is[2]—but in fact life is shot through with mediocrity, mundanity, disappointment, shattered hopes, and death. Indeed, strictly speaking, the only “meaning” of life—the reason we’re here in the first place—is the exceedingly mediocre and animalistic one of procreating. Richard Dawkins is surely right that we’re mere “survival machines” for our genes, tools of DNA propagation (for that is the basis of evolution, the self-replication of DNA). As reductive as it sounds, we’re organisms that DNA constructs just so it can survive in a harsh environment and replicate itself. We’re driven to do the bidding of this arrogant and selfish molecule without even knowing why—we just crave sex, it’s out of our control, we’re robots programmed to serve our master.

But what a strange master that has given its creation the capacity to rebel and refuse to procreate! Instead, we often devote ourselves to “higher” pursuits like philosophy, like observing life and contemplating its “meaning”—an absurd and comical activity that only serves to increase unhappiness, sometimes to the point that the robot—or, better, the puppet, the human being—kills itself and thereby the chances for its DNA puppeteer to replicate itself! Homo sapiens is indeed a paradox!

Be that as it may, the point is that, given my awe at existence itself, even my gratitude, I had a desire to achieve things commensurate with this awe and gratitude. I wanted to assimilate as much as possible intellectually, through creative work that would have universal significance and be read even after I died. These desires, in a sense, weren’t much different from the dreams of a lot of young men, who boldly leap into life determined to conquer the world and by some means or other overcome their mortality and relative insignificance. Fame is a typical goal, for example. A young man glories in possibility and tends to be rather self-enclosed—as manifested, for instance, in my naïvely self-publishing three books, at least one of which I could and should have published with a traditional publisher. (It would have been a better book, dammit!)

As you approach middle age, the ambitions subside. Life starts to wear you down. The regrets pile up (although, truth be told, I had plenty of regrets already at 23); you go unnoticed for things you think deserve notice; you tire of your own mediocrity and the even greater mediocrity of so much that you encounter; you grow more aware of the brevity and lack of gravity of life.[3] The “fulfilling experiences” you longed for fail to materialize, and you realize you’ll never overcome the hollowness, the “ontological emptiness,” at your core. (“Nothingness lies coiled in the heart of being, like a worm,” Sartre said.) At least, that’s been my experience. You’ll write, but people won’t read. You’ll read, but you’ll wonder what’s the point of it when all the reading and writing leaves the world more or less as it is. You’ll still desire intellectual stimulation but you’ll feel guilty when indulging in it because what matters in our era of catastrophe is activism; but you’ll be bored when doing or writing something related to activism because you miss intellectual stimulation. And the years will go by, year after year, and before you know it you’ll be on the verge of death, and your former thrill at the miraculousness of life will seem as if from a quaint, forgotten land.

In short, it’s the ordinariness of it all, and the transience, that gradually dissolves the element of inspiration that had once made life interesting. The times I’ve felt most satisfied were when I was writing something I thought had merit, because it was as if I was (“permanently”) putting truth and profundity into the world, as if I was “beautifully objectifying” myself—being validated by Reality itself. But that’s only a pleasant conceit, and one that fades as you grow older.

Nor does it help, in my case, that I’ve always been susceptible to depression, and to doubt of my own reality (I’m sort of a detached observer). Still, I can’t be the only one who sometimes thinks that, notwithstanding all my experiences and my world travels, not enough has really happened in my life. Not enough moments of deep love; not enough meaningful connections with other people; not enough sex or drugs or debauchery. Lately there’s been a lot of “running on autopilot.” I need a change, but what sort of change isn’t entirely clear.

Well, whether I like it or not, a change is coming by the end of this year: I’m going to be a father. Thanks to a tryst with my ex-wife. I don’t know how involved I’ll be with raising the child, but conceivably fatherhood will make my life seem more meaningful. And yet, bringing a person into this world… By nature I’m actually quite cheerful, believe it or not, but I recall some hilariously dark thoughts I once wrote:

When you bring someone into this world, you introduce them to suffering. A lifetime of suffering. I don't see how Buddhists are wrong about that. Or how Schopenhauer is wrong. You come into the world in pain and you leave the world in pain. In between, you experience more than your fair share of pain.
At this point in history, there is no need to elaborate on these claims. Just read Schopenhauer or the Buddhists. Or use your common sense. Either you end up with the dull drumbeat of ordinary middle-class unhappiness or you suffer misery. It depends where you were born, in what circumstances, who your parents were, how lucky or unlucky you were in life. Indefinitely many horrible things might happen to you. Or maybe you'll get lucky and you'll merely have a boring career, a couple of annoying children, thousands of lonely hours, and get cancer at an old age and die painfully. That's not so bad compared to some of the alternatives.
The supreme act of love is to refrain from conceiving a child. It is a beautiful, unselfish act of pure kindness.
Think of what a contribution you've made to the world, how much suffering you've subtracted from it (ahead of time, as it were), by not having a child. It is virtually a holy thing, a saintly thing. To save one life is to save the world entire. You save a life by not creating it.
Even if you have children, your not having had more of them is noble. Whenever a man doesn't ejaculate inside a woman, he can imagine the millions of little people in his semen thanking him that they will not exist. To masturbate, far from being immoral (as religious loons sometimes think), is a moral act! Or at least more so than impregnating a woman.
To have an abortion is moral, humane, compassionate…

A one-sided point of view, to say the least, but not without a particle of merit. I hope my child’s life doesn’t provide more support for those grim thoughts!

Meanwhile, I’ll be publishing (not self-publishing) a book next year—a version of my Ph.D. thesis—which I suppose will be satisfying. Not that I care very much (I don’t consider the book particularly interesting).

I’m aware that, in a sense, everything I’m saying here is just a lot of unseemly, “unmanly” complaining. I don’t have much right to complain. That’s one of the many things I like about Chomsky: he is without self-pity, he’s a hyper-rational machine who cares only about what is true and what is right. He has remarked that existentialism—which in its concern with subjectivity, anguish, death, and the like is sort of a philosophical expression of human self-pity—doesn’t resonate with him, which is exactly what you’d expect from someone who doesn’t have an iota of cultural or psychological decadence. He’s like a Bach fugue. He’s a pure scientist: a scientist of language, of the mind, of society, and, most importantly, a scientist of morality, a rigorous and consistent paragon of morality. Life is what it is, often difficult and sometimes horrible but not in itself an evil, just a neutral thing that we experience and should make the best of. What’s the use of whining about it?

Moreover, insofar as I’m unsatisfied, it has more than a little to do with my own issues and is, to some degree, my own fault. I’m not as good a writer as I’d like; I’m not as hard a worker as I should be; my original life goals were wildly unreasonable to begin with (as I knew), and in any case I’m still fairly young; if I find it hard to take pleasure in many things, that’s my problem, not yours; whatever malfunctions there are in my brain chemistry (I’ve taken anti-depression pills much of my life) are, again, my own problem, not something general that should interest others. As a Marxist, I’m perfectly aware that each of us is situated: our experiences, perceptions, and thoughts emerge from particular conditions, particular economic, social, and psychological conditions that largely determine our discontent. It isn’t necessarily life itself that is the problem; it’s the conditions that prevail in any given case.

Okay, fine. But the test is, do these sorts of existentialist thoughts resonate with others? If not, then I’m just a freak; if they do, then the particular conditions are to some extent general, perhaps resulting from the nature of capitalist society. They seem even more general than that, though, since similar grievances have motivated the lamentations of poets, philosophers, theologians, and others for millennia. I do have “my own issues,” but it is plausible to suppose I’m just one of those people in whom the general grievances about the human condition are sharpened, because I had the great good fortune to be born with a pathological sensitivity to injustice and human suffering. As for the unbecoming and childish nature of whining about “the human condition” (or “the modern condition”), well, can’t we permit ourselves a little bit of whining now and then? I think we’re entitled to it.

I’m 40, the age of the midlife crisis. I recall the comedian Jim Gaffigan’s joke about being in your 40s:

It’s amazing how our attitude on alcohol changes. Even as a teenager, you know it’s wrong. You’re like, “You know, I don’t like the taste of it, but I want to look cool.” And then in your twenties, you’re like, “You know what, this kind of gives me confidence to talk to the opposite sex.” And then in your forties you’re like, “You know what, this is the only thing I like about being alive.” [Uproarious laughter from the audience.] It’s only funny ’cause it’s true.

You still have a lot of living left to do, but you’re less excited and interested than you used to be, so you’re in an awkward position. How are you going to spend the rest of your life? That’s the question that occupies me now, especially considering I’ll surely never get a tenure-track job—I don’t even think I want one—and I don’t want to spend the rest of my life on year-long contracts or the like. But most non-academic options aren’t very attractive either.

Aside from the career questions that plague young generations now, there are the more intimate, sad psychological facts of middle age hinted at by Gaffigan’s joke. Once again, I think of Byron:

But now at thirty years my hair is gray—

(I wonder what it will be like at forty?

I thought of a peruke the other day)

My heart is not much greener; and, in short, I

Have squander’d my whole summer while ’twas May,

And feel no more the spirit to retort; I

Have spent my life, both interest and principal,

And deem not, what I deem’d, my soul invincible.

No more—no more—Oh! Never more on me

The freshness of the heart can fall like dew,

Which out of all the lovely things we see

Extracts emotions beautiful and new,

Hived in our bosoms like the bag o’ the bee:

Think’st thou the honey with those objects grew?

Alas! ’twas not in them, but in thy power

To double even the sweetness of a flower.

No more—no more—Oh! never more, my heart,

Canst thou be my sole world, my universe!

Once all in all, but now a thing apart,

Thou canst not be my blessing or my curse:

The illusion’s gone forever, and thou art

Insensible, I trust, but none the worse,

And in thy stead I’ve got a deal of judgment,

Though heaven knows how it ever found a lodgment.

The freshness of the heart, the freshness of life, is gone. But at least Byron had lived intensely, had had fame, wealth, innumerable love affairs, incredible highs and lows. It’s probably more painful for people who feel as if they haven’t lived enough and yet by their late 30s are weary and bedraggled anyway, the victims of epidemic social atomization, loneliness, the petty bureaucratic hassles of contemporary life, colossal waste of nervous energy over decades, the weight of postmodern unreality—the unbearable lightness of postmodern being. (I just thought of Billy Joel’s song “Running on Ice”: not the greatest of songs, but expressive of this existential syndrome, particularly from the urban perspective that dominates life today.) I rarely knew the freshness of the heart, of youthful love, despite being a great romantic who pined for it. And now the time is long past when I could have felt it, so…the only thing is to just move on, cut your losses, keep looking ahead, fight the good fight—whatever cliché you like.

Meanwhile, as Gaffigan said, be grateful that alcohol exists.

The idea of getting older is, perhaps, especially uncomfortable for me because in my essence I’m somewhat of a child. That really is the key to my identity, I think. Not that anyone should be interested in me, but, hypothetically speaking, if they were, they would have to start with my childlikeness. There’s a Jim Carrey-esque silliness in me, and it may be this trait, this ingrained humorous attitude, that leads me to see silliness everywhere in the adult world.[4] There’s nothing wrong with that, I suppose, but, as I indicated earlier, it does make it hard for me to take seriously the pursuit of professional and worldly status, honors, “success.” My youthfulness means that most things that matter to me are internal; whatever is “external,” from quotidian responsibilities (e.g., grocery shopping, a dreaded imposition) to professional goals, seems false and unsatisfying. Doubtless this fact, incidentally, is yet another reason why I can’t help but think “life is a dream,” to quote the title of Calderón’s famous play.

Childlike interiority: I think that helps explain the difficulties of quite a few people in adjusting to modern life, and to aging. The world can seem external and alien, not only in the ways it does sometimes to everyone but in a deeper, “ontologically insecure” way. The psychoanalyst R. D. Laing used this concept of ontological insecurity to analyze schizoid and schizophrenic patients, whom he interpreted as being divided between a true, “inner” self and a false, “outer” self. But it isn’t only in the extremes of schizophrenia that you find a sort of intense interiority that can interfere with the enjoyment of anything that doesn’t emerge organically from one’s inner life. A lot of “creative people” likely suffer from this condition—a condition that’s largely responsible, by the way, for the popular association of creativity with madness. “No excellent soul is without a tincture of madness,” Aristotle said; or in the words of John Dryden, “Great wits to madness sure are near allied / And thin partitions do their bounds divide.” Being too much in your own head can take both constructive and destructive forms.

One “destructive” form, for example, is that the process of aging can seem surreal and paradoxical. You still feel rather young and immature (though tired) on the inside, but you’re old and decrepit on the outside. Indeed, the continuity of life is disturbing: instead of there being some sort of break between youth and maturity, or some process whereby you wholly shed your immaturity and become a Fully Grown Adult, you really just feel kind of young—at least in my case—your whole life. It’s like you’re a 25-year-old who gets older and older while staying approximately 25, never becoming a “different person.” At 70 I’ll feel like a very old young man, the same person as decades earlier but bizarrely wrinkled, weary, and weak. How different from what I used to imagine aging would be like!

Lest I seem overly negative, however, I should note that whatever disadvantages there are to having a juvenile and abstract cast of mind may be outweighed by the advantages. It can be pleasant to exist on a rarefied plane above daily cares, professional worries, material grievances, the phlegmatic character of the self-serious adult mind. My boyish curiosity gives me more insight into people than they have into themselves, these lumbering mental hippopotamuses who are far more particular than universal. (As I’ve remarked elsewhere, stupidity is particular, utterly immersed in itself; intelligence incorporates others.)[5] It’s also pleasant to have the sort of childlike aesthetic detachment that makes it possible to, say, experience inexpressible joy listening to Beethoven’s Fourth Symphony, a pure love of life that makes you want to dance around the streets just to contain the frantic energy bubbling up from your heart. This aesthetic receptivity, thank God, is something that will never leave me—life would be unbearably gray without my love of music.

Anyway, in the end, whatever advantages or disadvantages there are to my particular “situation”—my own situatedness—my only firm conclusion is that it’s all a mystery to me. Like Socrates and Byron, I know only that I don’t know. Why or how we’re here in the first place, what this thing called “time” is, why people are so callous and cruel, how one is supposed to keep trudging on even into one’s 80s or 90s, what the grim future will look like, what my child’s life will look like…I don’t have answers to any of these, not meaningful answers. I’m still almost as full of wonder and bewilderment as I was in my teenage years.

There are certain pat things I could say here, nuggets of pseudo-wisdom. For instance, it’s clear that the very enterprise I’m engaged in with this blog post is unhealthy, this stance of stepping back from life and abstractly evaluating it: it’s decadent and unnatural, in itself already symptomatic of alienation. You should just live, throw yourself into life without fixating too much on its underlying “conditions” (mortality, transience, chance, frequent loneliness, injustice, “absurdity”). As Nietzsche always insisted, the naïve and spontaneous attitude is healthier than the navel-gazing attitude. The latter indeed fosters the discontent that it thrives on! It’s like a self-fulfilling prophecy.

But how a thinking person is supposed to avoid asking these sorts of questions, and sometimes fixating on them, is beyond me. I suppose it helps if you’re psychologically well-adjusted to begin with.

I don’t know what the future holds for me. I’ve thought about dipping my toe into freelance journalism, likely in the labor sector. I could try to write articles of “commentary” that are more serious than most of what I’ve published. My mind, by its nature, will always be more comfortable with and interested in abstract philosophical questions than concrete political ones, but the latter are of more human significance and infinitely more urgent in the twenty-first century than the former, so I just have to adapt. I might even try to re-embrace academic historical research, although I find it more satisfying and socially relevant to write short pieces that aren’t necessarily read by an academic audience. (An immense amount of work goes into a scholarly paper, but how many people read it?? Its primary use is to be another item on your CV.)

To get even more personal than I’ve already been: last year, feeling a bit discouraged, I tried taking the drug ketamine in the hope that it would jumpstart the rusty old engine in my head and reintroduce me to life. No such luck. The four doses I took over several weeks at least brought on some fascinating experiences. A recent article in the New York Times described the experience well: “the room dissolved around me in a transcendent swirl of lucid dreaming. I traveled backward in time, inhabiting memories in a pleasantly detached manner. I traveled forward, too, and visited places I’d never been. It felt as though I’d shed my corporeal form and was melding into the fabric of the universe.” It was wonderfully trippy, especially as I was listening to a beautiful trippy soundtrack the treatment center had curated. The boundaries of my being dissolved as I was suffused by an expansive love for everyone and everything, a longing to show people how sublime life is. Time dilated: ten minutes felt like an hour, or maybe two hours—I really had no notion of time. I felt “enlightened,” albeit disoriented and overwhelmed by the cacophony of thoughts and feelings that flowed together with the music.

Pleasant after-effects lingered for days, perhaps weeks, but no longer. So that was that. It wouldn’t be so easy to jolt me out of my blues.

But, as with most people, the satisfaction and the dissatisfaction come and go; the contentment and the sadness come and go. Most of us manage to muddle through. We find ways to occupy ourselves for eight or nine decades. At the end we look back and think, “Wow. That was a lot. But it went by rather quickly. And now, whatever it all was, it’s over…” –How terribly strange to be seventy, to quote Simon and Garfunkel. I’ll let those two old poets have the last words:

Time it was, and what a time it was, it was

A time of innocence, a time of confidences.

Long ago it must be, I have a photograph—

Preserve your memories: they’re all that’s left you.

I’m not yet at that terminal age, but someday I will be, and it will be sad and surreal.

In the meantime… we have to push these kinds of thoughts out of our minds, just keep living, keep struggling (“life is struggle,” Marx said), keep being honest with ourselves and others, and keep spreading compassion, of which the world has too little.

[1] In the words of Chomsky, another great humanist, “none of the other 50 billion species [that have existed on Earth] can even form a thought, at least any thought that can be formulated in a symbolic system.” The gap between humans and all other animals is vast, and utterly mysterious.

[2] Well, strictly speaking, that isn’t true. Nothing has positive or negative value in itself. But it certainly is easy to think of nature as being beautiful and magnificent in many ways.

[3] More specifically, in retrospect life seems rather short, though while you're living it, it can seem terribly long.

[4] I’ve given examples of such silliness in other blog posts, but here’s (a non-political) one I was just reminded of after seeing a picture of the fat-headed gorilla Donald Trump: ceteris paribus, being a man with a large head confers great advantages with respect to social presence, confidence, people’s impressions of you, “charisma,” sexual success, etc. Think of how often “leaders,” for instance American presidents—from Bill Clinton to FDR—have large heads. Women tend (though it’s only a tendency) to be more attracted to big-bodied and big-headed men, because size does matter: it communicates dominance. Humans are, after all, mere animals, beastly primates, in some ways more pathetic than any other species because of their self-deceiving pretensions and indescribable idiocies (for which, unlike other animals, they don’t have the excuse that they lack the capacity for abstract, symbolic thought).

[5] But most “intelligent” people are no less deluded, in their own way, than those who are judged unintelligent. The kind of intelligence that is wholly and ironically aware of itself, genuinely self-insightful, is rare.

Published on June 06, 2021 15:55

June 5, 2021

Closed borders and democratic theory

You may have noticed that the question of immigration is on people’s minds lately (as it has been, almost without interruption, since the mid-nineteenth century;—ugh, the repetitiveness and predictability of history). The whole “border crisis” is absurd: the only crisis is a humanitarian crisis, and it could be solved simply by abolishing ICE, diverting more resources to processing immigrants (“legal” and “undocumented”), reforming the punitive nature of border control, and changing American foreign policies so as to support the democratization and development of countries from which migrants are fleeing (largely as a result of past American foreign policies). None of this will happen, since we live in a dystopia controlled by reactionaries, proto-fascists, and big business. But it’s always worth remembering that, in principle, the solutions to any given social problem are (in nearly all cases) pretty clear. The reason they can’t be enacted is that the powerful, despite what they might say, aren’t interested in solutions. They’re interested in the path of least resistance, and in maintaining their own power.

I thought I’d post something about border policies, not because I have anything new to say on the subject but because I recently came across an interesting article by a political theorist named Arash Abizadeh, “Democratic Theory and Border Coercion: No Right to Unilaterally Control Your Own Borders” (2008). Full disclosure: Abizadeh was a professor of mine at Wesleyan University. Few of us students thought much of him as a professor, but after reading his paper I have to admit he’s at least capable of compelling argumentation—for the morally right conclusion, moreover. I decided to translate his arguments into non-academese for any interested reader, since I think they’re of more than merely academic interest.

My own opinion, for what it’s worth, is that the politically fraught question of immigration should be a non-issue. It’s a nice tool for proto-fascists to rile up masses of ignorant people and get them to vote for politicians who are interested only in increasing repression and empowering the wealthy, but that’s all it is. What should happen, ideally, is that borders become ever more porous, in the long run heading toward a regime of open borders (to the extent possible), since freedom of movement should be a fundamental human right and immigration is far from harmful to a domestic economy or society. If there were more global freedom of movement, moreover, it’s possible that states would begin to implement more humane and progressive social policies, since otherwise they might lose some of their population to countries that did have such policies. Instead of the neoliberal race to the bottom, there might be something like a race to the top.

Wage levels would tend to equalize everywhere, which would be good for low-wage countries and bad for high-wage countries—but, at the same time, it would likely be easier to organize a global labor movement, which would be good for the non-rich everywhere. In the very long run, the nation-state system itself should be dismantled to the degree possible, since nation-states are violent, authoritarian, illegitimate entities.

Anyway, let’s get to Abizadeh’s article…

Usually we think of a country as having the right to unilaterally control its own border policy, subject only to democratic input from its citizens, not from foreigners. What foreigners want doesn’t matter because they’re not members of the state. Many commentators reject unilaterally closed borders, or restrictive and punitive border policies, on the grounds of liberalism, which is committed to universal human rights, individualism, freedom of movement, and so on. (Some, such as Frederick Whelan, even argue that “liberalism in its fully realized form would require the reduction if not the abolition of the sovereign powers of states…especially those connected with borders and the citizen-alien distinction”—a position Chomsky and many other anarchists advocate, since liberalism after all is just supposed to mean a regime of individual freedom, which is what anarchism means.) Abizadeh, however, chooses to argue against unilateral border policies on the basis of democratic grounds, not liberal grounds. This is what makes his paper novel and interesting, since, prima facie, it would seem that considerations of democracy are in tension with those of liberalism: democracy supposedly “requires a bounded polity whose members exercise self-determination, including control of their own boundaries.” Liberalism is universal, democracy is particular and bounded. So how can you use democratic theory to argue against unilaterally closed borders?

It’s simple: argue that “the demos [the community, the people] of democratic theory is in principle unbounded, and the regime of boundary control must consequently be justified to foreigners as well as to citizens.” This is Abizadeh’s conclusion. Whether a state recognizes or denies (to foreigners) the legal right to freedom of movement, to the degree that it is democratic it can only arrive at its policy as “the result of democratic processes giving participatory standing to foreigners asserting such a right [to freedom of movement].” What these democratic processes would look like is an open question, but they would involve international or supra-state organizations of some form or other.

Of course, one is free to reject Abizadeh’s arguments, but (if his arguments succeed) to that extent one is an authoritarian, not a democrat.

The core of the argument is very simple. According to democratic theory, a state’s coercive powers (involving coercive acts and coercive threats) have to be democratically justified to all those over whom they are exercised, which is to say the members of the demos must have the opportunity to participate in political decision-making on a free and equal basis. But a country’s regime of border control subjects both members and nonmembers (non-citizens) to state coercion. “Therefore, the justification for a particular regime of border control is owed not just to those whom the boundary marks as members, but to nonmembers as well.”

The obvious objection to this argument is that, even if foreigners are subject to coercion, justification is owed only to citizens, since foreigners have no political standing in the state. It seems to me that this objection reveals an unattractive and illiberal comfort with treating any non-citizen in an authoritarian or even brutal way (remember, that’s how the Nazis could “legally” exterminate Jews: they had deprived them of citizenship), but be that as it may, Abizadeh points to a more serious logical flaw: the objection’s presupposition, that the demos is inherently bounded, is incoherent.

It’s an argument that Whelan makes: how can a state democratically determine its own civic (and territorial) boundaries, the boundaries of the demos? Who are the people who will vote on this question of membership? You’d have to somehow determine the membership of the group that is entitled to vote on the question in the first place—but how do you decide this second-order membership question? It would itself need to be voted on. Ultimately you get an infinite regress. As Whelan says, “the boundary problem is one matter of collective decision that cannot be decided democratically… We would need to make a prior decision regarding who are entitled to participate in arriving at a solution… [Democracy] cannot be brought to bear on the logically prior matter of the constitution of the group itself, the existence of which it presupposes.”

As Abizadeh comments, the problem is that “democratic theory requires a democratic principle of legitimation for borders, because borders are one of the most important ways that political power is coercively exercised over human beings. Decisions about who is granted and who is denied membership, and who controls such decisions, are among the most important instances of the exercise of political power”—especially given the incredibly brutal nature of modern border controls (involving police dogs, electric wires, incarceration, deportation, torture, shooting on sight, etc.). By the very act of constituting the state’s civic and territorial borders, you’re disenfranchising large numbers of people—“outsiders”—over whom power is exercised. But this violates democratic theory.

(And if you look at history, you’ll find that no state, including no “democratic” state, has ever been founded democratically. Civic and territorial borders are always created by means of violence, and are policed very violently.)

If you abandon the “bounded demos” thesis, however, the incoherence disappears:

An alternative reading of democratic legitimacy comes into view: the view that political power is legitimate only insofar as its exercise is mutually justified by and to those subject to it, in a manner consistent with their freedom and equality… The democratic principle of legitimacy requires replacing coercive relations with relations of discursive argumentation, and legitimating the remaining instances of coercion by subjecting them to participatory discursive practices of mutual justification on terms consistent with the freedom and equality of all. On this view, democratic theory does provide an answer to the boundary question: the reach of its principle of legitimation extends as far as practices of mutual justification can go, which is to say that the demos is in principle unbounded.

Therefore, a regime of border control can be legitimate only if there are international democratic institutions in which both citizens and foreigners can participate to determine what the border policies will be. (Actually, it seems to me that Abizadeh’s arguments call into question the very existence of borders except insofar as “foreigners” and “citizens” can vote on who will be entitled to be a citizen and where the territorial border will be drawn. But he doesn’t draw this conclusion, for some reason.)

I’d note here that in the absence of such international institutions (in which foreigners can help determine what will be a given country’s border policies, to which they are coercively subjected), the most democratic option would be to adopt a maximally permissive and open border policy, since this reduces the element of coercion. People who oppose this conclusion, or Abizadeh’s arguments, are not “good democrats” standing up for their own country’s right to self-determination (right to determine its own policies); they are authoritarians comfortable with coercing masses of people who have had no role whatsoever in determining the policies to which they’re subjected.

The remainder of the article is concerned with answering objections based on the “self-determination argument for unilateral border control” (the argument I just mentioned, that a country has a right to determine its own policies). Abizadeh’s strategy is to argue that the self-determination argument is incompatible with the most plausible liberal and democratic arguments for the very existence of borders (separate countries) in the first place. So if you accept any of those arguments, you’re logically compelled to reject the self-determination argument.

For example, liberals are, rightly, afraid of “concentrated political power and its potential to breed tyranny.” One way they have tried to thwart this potential is by dividing and dispersing power (as in the “checks and balances” American system of government). Well, the worst tyranny of all would be a global tyranny. So, many liberals see “a plurality of political units [countries] as a crucial bulwark against tyranny,” in that it makes a global tyranny impossible. Okay, so far so good. But how do unilaterally closed borders help counteract the threat of a global tyranny? If tyranny is bad and individuals should have the right to escape it, there should be relatively open borders that hold out the promise of safe haven for refugees fleeing a tyranny. Thus, if you accept this particular liberal argument for the existence of separate countries, you’re bound to reject the argument for closed borders.

Anyway, I think you get the point. The main value of the article is in providing some theoretical points to make against (unilateral) closed-border advocates. These people are not democrats.

But, frankly, that should have been obvious all along. ICE is hardly a “democratic” agency. I also doubt that any academic article, or the arguments it makes, can sway the minds of more than a micro-fraction of conservatives, since very few of these people are susceptible to reason or questions of principle (much less compassion). Nevertheless, more highly evolved people can find food for thought in the occasional scholarly article.

Published on June 05, 2021 17:37

May 25, 2021

The miraculous brain

[Old notes.] I’m reading Stanislas Dehaene’s Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (2014). Good book, sophisticated but readable. Explores how much of the brain’s activity is unconscious (an unbelievable amount). For instance, after reviewing a bunch of experiments, Dehaene states what should have been obvious all along (although of course it did need to be experimentally confirmed): “in some respects, consciousness is irrelevant to semantics—our brain sometimes performs the same exact operations, all the way up to the meaning level, whether or not we are aware of [the words we have been exposed to, e.g. because they were flashed on a computer screen too briefly for us to consciously notice them].” The brain can unconsciously process word meanings, just as it can unconsciously process the emotional valences of images, etc.

Dehaene also summarizes experiments that show that consciousness definitely isn’t epiphenomenal. Which is to say, there are a hell of a lot of things we couldn’t do if we weren’t conscious. Duh. One of consciousness’s evolutionary roles, for instance, is “learning over time, rather than simply living in the instant.” Subliminal (unconscious) perceptions and thoughts can’t be retained for longer than a second or so, whereas conscious ones can be retained for much longer. Another function may be to simplify and focus perception (since our brain is constantly receiving an enormous amount of sensory information). Also, we need consciousness in order to rationally think through a problem. The unconscious mind doesn’t seem able, by itself, to carry out steps of reasoning; it can carry out single operations but not a cumulative series of them. And of course social information sharing is likely another essential function of consciousness.

Interesting discussion of “the signatures of a conscious thought.” I’ll just quote a particularly easy-to-read thing that I find pretty incredible, and also eerie: “With intracranial electrodes, the effects of stimulation can be very specific. Sparking off an electrode atop the face region of the ventral visual cortex can immediately induce the subjective perception of a face. Moving the stimulation forward into the anterior temporal lobe can awaken complex memories drawn from the patient’s past experience. One patient smelled burnt toast. Another saw and heard a full orchestra playing, with all its instruments. [!] Others experienced even more complex and dramatically vivid dreamlike states: they saw themselves giving birth, lived through a horror movie, or were projected back into a Proustian episode of their childhood. [Apparently] our cortical microcircuits contain a dormant record of the major and minor events of our lives, ready to be awakened by brain stimulation.” If you stimulate the subthalamic nucleus, the result is a state of depression, “complete with crying and sobbing, monotone voice, miserable body posture, and glum thoughts.”

Astounding. Not surprising, but astounding nonetheless. It’s obvious but incredible anyway that Beethoven’s Ninth Symphony is in my head, encoded in neural pathways, and theoretically can be elicited by the right stimulation.

In any case, “putting together all the evidence inescapably leads us to a reductionist conclusion. All our conscious experiences…result from a similar source: the activity of massive cerebral circuits that have reproducible neuronal signatures. During conscious perception, groups of neurons begin to fire in a coordinated manner, first in local specialized regions, then in the vast expanses of our cortex. Ultimately, they invade much of the prefrontal and parietal lobes, while remaining tightly synchronized with earlier sensory regions. It is at this point, where a coherent brain web suddenly ignites, that conscious awareness seems to be established.”

But what is consciousness? Here’s Dehaene’s answer:

When we say that we are aware of a certain piece of information, what we mean is just this: the information has entered into a specific storage area that makes it available to the rest of the brain. Among the millions of mental representations that constantly crisscross our brains in an unconscious manner, one is selected because of its relevance to our present goals. Consciousness makes it globally available to all our high-level decision systems. We possess a mental router, an evolved architecture for extracting relevant information and dispatching it. The psychologist Bernard Baars calls it a “global workspace”: an internal system, detached from the outside world, that allows us to freely entertain our private mental images and to spread them across the mind’s vast array of specialized processors [e.g., language, memory, the motor system, etc.].

So according to this theory, consciousness is just “brain-wide information sharing.” He goes on to suggest that this was likely an evolutionary adaptation: in the harsh natural environment, it was necessary to have a mental map of space, a visual recognition of landmarks, a recall of past successes or failures at finding water or food, and so on. “Long-term decisions of such a vital nature, leading the animal through an exhausting journey under the African sun [for example], must make use of all existing sources of data. Consciousness may have evolved, eons ago, in order to flexibly tap into all the sources of knowledge that might be relevant to our current needs.”
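For readers who think better in code, here is a minimal toy sketch of the “mental router” idea as described above. To be clear, this is purely my own illustration, not anything from Dehaene or Baars: the module names, the goal string, and the way “relevance” is scored are all invented for the example. The point is only to show the shape of the theory: many specialized processors generate candidates unconsciously, the one most relevant to the current goal is selected, and that winner is broadcast to every processor, becoming globally available.

```python
# Toy caricature of the "global workspace" idea -- my own illustration, not
# code from Dehaene or Baars. Module names and relevance scores are invented.

from dataclasses import dataclass, field

@dataclass
class Processor:
    name: str
    inbox: list = field(default_factory=list)  # what the workspace broadcasts to this module

    def propose(self, goal: str):
        # Each specialized module unconsciously produces a candidate
        # representation with some (made-up) relevance to the current goal.
        relevance = (hash((self.name, goal)) % 100) / 100
        return (f"{self.name} representation for '{goal}'", relevance)

def workspace_cycle(processors, goal):
    # 1. Unconscious competition: every module proposes a candidate.
    candidates = [p.propose(goal) for p in processors]
    # 2. Selection: the candidate most relevant to the current goal wins.
    winner = max(candidates, key=lambda c: c[1])
    # 3. Broadcast ("ignition"): the winner is made globally available to all modules.
    for p in processors:
        p.inbox.append(winner[0])
    return winner[0]  # the momentary "conscious" content in this caricature

modules = [Processor(n) for n in ("vision", "memory", "language", "motor")]
print("Broadcast to every module:", workspace_cycle(modules, "find water"))
```

Obviously nothing like this runs in the brain; the sketch just makes the selection-and-broadcast structure of the theory easy to see at a glance.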

***

In truth, we should all just be constantly gaping at each other and ourselves, gasping at the miraculousness of everything. Trillions of neural impulses and unconscious computations going on at every moment of our life. All a result of... billions of years of blind evolution, matter coming together in ever-more-complex clumps to replicate itself in a harsh and unforgiving natural environment. To take a mundane example from David Eagleman’s The Brain: The Story of You:

Imagine we’re sitting together in a coffee shop. As we’re chatting, you notice me lift my cup of coffee to take a sip... The field of robotics still struggles to make this sort of task run without a hitch. Why? Because this simple act is underpinned by trillions of electrical impulses meticulously coordinated by my brain.
My visual system first scans the scene to pinpoint the cup in front of me, and my years of experience trigger memories of coffee in other situations. My frontal cortex deploys signals on a journey to my motor cortex, which precisely coordinates muscle contractions – throughout my torso, arm, forearm, and hand – so I can grasp the cup. As I touch the cup, my nerves carry back reams of information about the cup’s weight, its position in space, its temperature, the slipperiness of the handle, and so on. As that information streams up the spinal cord and into the brain, compensating information streams back down, passing like fast-flowing traffic on a two-way road. This information emerges from a complex choreography between parts of my brain with names like basal ganglia, cerebellum, somatosensory cortex, and many more. In fractions of a second, adjustments are made to the force with which I’m lifting and the strength of my grip. Through intensive calculations and feedback, I adjust my muscles to keep the cup level as I smoothly move it on its long arc upward. I make micro-adjustments all along the way, and as it approaches my lips I tilt the cup just enough to extract some liquid without scalding myself.
It would take dozens of the world’s fastest supercomputers to match the computational power required to pull off this feat. Yet I have no perception of this lightning storm in my brain. Although my neural networks are screaming with activity, my conscious awareness experiences something quite different. Something more like total obliviousness. The conscious me is engrossed in our conversation...

And it's all automatic, the automatic result of billions of cells interacting according to the laws of nature! (Scientists have no idea, and probably never will, how a person makes a decision to do something, how decisions, even trivial ones, well up out of the continuous “lightning storm” of deterministic neural activity.) The universe itself is no more wonderful than our own brains, our own bodies, which are themselves a magnificent, completely unfathomable universe.[1]

[1] To get a sense of how utterly in the dark we are about nearly everything pertaining to the brain, think of this: we still have no idea why it is that one person is talented at music, another at math, another at painting, etc. We don't know what it is about neurons that specializes some of them in speech, others in music, others in spatial reasoning, and so on. It's hard even to imagine what it could be that allots 'specialties' to particular groups of neurons. Nor is it clear how we'd go about discovering what these mysterious mechanisms are. The extent of our knowledge is that when you do a certain activity, some region of the brain lights up, and with another activity a different region lights up. That's it! The primitiveness of this kind of knowledge is embarrassing.

Published on May 25, 2021 18:47

May 24, 2021

Critiques of Richard Rorty (and postmodernism)

[Here are notes I took many years ago while reading Richard Rorty’s famous book Philosophy and the Mirror of Nature. As you’ll see from the mockery, I wasn’t impressed. The postmodernist polemic against the notion of objective truth, or the correspondence theory of truth, is wholly unconvincing and indeed self-refuting, as many writers have argued. It's also pernicious, intellectually and politically, for the assumption that objective truth exists is a strong pillar of leftist analyses of society. Chomsky's Enlightenment-derived position is, as always, more defensible.]

Rorty, chapter 1: “I hope I have said enough to have incited the suspicion that our so-called intuition about what is mental may be merely our readiness to fall in with a specifically philosophical language-game. This is, in fact, the view I want to defend. I think that this so-called intuition is no more than the ability to command a certain technical vocabulary—one which has no use outside of philosophy books and which links up with no issues in daily life, empirical science, morals, or religion.” My intuition of the difference between body and mind is merely an ability?? How can an intuition be the same thing as an ability? The first is a specific event, the second a descriptive extrapolation, something vaguely like a disposition. (This is of course one of the problems with behaviorism. Mental events are not behavioral dispositions or abilities.) Granted that Rorty expressed himself badly, his position still doesn’t make sense. When I intuit the body/mind dichotomy, I’m not using concepts. Indeed, that’s the distinguishing mark of an intuition: it isn’t discursive. Furthermore, is it likely that, for example, people in the Kapauku tribe in Papua New Guinea are “commanding a certain technical [philosophical] vocabulary” when they intuit the mind-body dualism?[1] It may—possibly—be true that in some cultures the dualism is unintuitable because of primitive social conditions and a corresponding unsubtlety of thought (John Searle remarks that in a certain African language it’s impossible to communicate the mind-body problem—which, incidentally, doesn’t imply that the people can’t intuit [or at least ‘vaguely understand’] the issue), but given the sophistication of our culture, the mind-body division is most definitely intuitable outside philosophical language-games, and most definitely links up with issues in daily life, science, morals and religion.

***

Here’s Rorty on the ontological dualism implicit in the mind-body problem (specifically on the question of why we think of qualia as immaterial, to which he gives the silly answer “because we think of pains as universals (and universals are immaterial)”):

As long as feeling painful is a property of a person or of brain-fibers, there seems no reason for the epistemic difference between reports of how things feel and reports of anything else to produce an ontological gap. But as soon as there is an ontological gap, we are no longer talking about states or properties but about distinct particulars, distinct subjects of predication. The neo-dualist who identifies a pain with how it feels to be in pain is hypostatizing a property—painfulness—into a special sort of particular, a particular of that special sort whose esse is percipi and whose reality is exhausted in our initial acquaintance with it. The neo-dualist is no longer talking about how people feel but about feelings as little self-subsistent entities, floating free of people in the way in which universals float free of the instantiations. He has, in fact, modeled pains on universals. It is no wonder, then, that he can “intuit” that pains can exist separately from the body, for this intuition is simply the intuition that universals can exist independently of particulars. That special sort of subject of predication whose appearance is its reality—phenomenal pain—turns out to be simply the painfulness of the pain abstracted from the person having the pain. It is, in short, the universal painfulness itself. To put it oxymoronically, mental particulars, unlike mental states of people, turn out to be universals.

This is the sort of sophistical trash that passes for philosophy. Trying to argue that a pain isn’t the particular feeling of being in pain! Whatever Rorty might say, the fact is that Descartes was right: matter is extended and sensations, as such, are not. This is the source of the dualistic intuition.

In a passage on the Greek image of the Eye of the Mind as a metaphor for knowledge—“knowledge [is interpreted] as looking at something”—Rorty says he doesn’t know how the image originated or why it caught on. He doesn’t even offer a suggestion. Being Richard Rorty, he’s content to admit it’s a mystery to him. In a book whose main purpose is to criticize the “mirror” image of philosophy—i.e., representationalism and the correspondence theory of truth—this is quite a lacuna. I’m a nice guy, though, so I’ll do Rorty’s thinking for him. He should have been struck by the parallel between Plato’s metaphor and Wittgenstein’s picture theory of meaning. And then he should have realized that the essence of both is correspondence (both a picture and sight are of something). And then he should have noted that language has a representational, self-transcending character,[2] that every statement (even an imperative) posits a state of affairs.[3] And then he should have noted that knowledge itself is always knowledge of something, or knowledge that something is the case: it implies correspondence with something external to it. And finally he should have seen that the ocular metaphor is the best possible one, related as it is to pictures (of states of affairs), to representations of an external world and thus to knowledge and the correspondence theory of truth. At this point he should have stopped writing his book.

I get the feeling that Rorty is succumbing—or is going to succumb later in the book—to the common mistake made by readers of Thomas Kuhn and postmodernists in general, that because certain modern ideas are (arguably) products of “gestalt switches” that occurred between the sixteenth and twentieth centuries, they have only relative validity. This conclusion doesn’t follow from the premise, unless you add the further premise that no revolutionary idea that partly bypasses previous questions rather than directly answering them can be true outside the culture in which it arises. But this is absurd. It implies, for example, that the theory of general relativity, if true, is true only in Western culture. (Alternatively, it implies that general relativity can’t be true, for the simple reason that it resulted from a so-called gestalt switch.) In any case, the rejection of an old “vocabulary” and adoption of a new one whose categorial framework is, to an extent, incommensurable with the old one[4] and thus leads to very different questions may after all signify a more adequate conceptualization of the subject-matter and thus an advance towards truth. The whole point of philosophy is to find the subtlest, most fine-grained conceptualizations of phenomena so that our “representations” of them can have the most explanatory power. This is also, of course, the goal towards which Rorty and all postmodernists are (despite themselves) looking: they want to represent their subject-matter in the truest way possible.

The farthest I’m willing to go down the road of relativism is to say that there are degrees of truth. But I’ve always said this, and anyway it isn’t relativistic. The implicit goal of theoretic progress remains correspondence with that towards which the given utterance transcends itself (i.e., that which it represents).

Ultimately I have to agree with Rorty that “‘sensation’ and ‘brain process’ are just two ways of talking about the same thing”—which isn’t a third thing (in addition to mind and body), since ‘sensation’ is just an emergent aspect of bodily processes [see my paper on the mind-body problem]—but his claim doesn’t make sense except on the basis of my arguments [in that paper]. For he doesn’t suggest any way to conceptualize it. He simply says, “It would be better at this point to abandon argument and fall back on sarcasm, asking rhetorical questions like ‘What is this mental-physical contrast anyway? Whoever said that anything one mentioned had to fall into one or other of two (or half-a-dozen) ontological realms?’” (That isn’t sarcasm, by the way.) He does admit that that tactic seems disingenuous, since “it seems obvious that ‘the physical’ has somehow triumphed”, but in the end he leaves it at that. His final argument is that “if knowledge is not a matter of accuracy of representation, in any but the most trivial and unproblematic sense, then we need no inner mirror, and there is thus no mystery concerning the relation of that mirror [viz., the mind] to our grosser parts [viz., the body]”. Oh-so-cleverly he thus connects the mind-body problem with epistemology, which is conveniently the subject of the next part of his book. (He also has cleverly obscured the fact that his solution to the mind-body problem is the prosaic materialistic one, namely by arbitrarily and inexplicably declaring that we have to drop the “whole cluster of images” inherited from the seventeenth century. If we just “drop the images”, the problem will be solved![5])

Rorty: “I shall try to back up the claim (common to Wittgenstein and Dewey) that to think of knowledge which presents a ‘problem’, and about which we ought to have a ‘theory’, is a product of viewing knowledge as an assemblage of representations—a view of knowledge which, I have been arguing, was a product of the seventeenth century. The moral to be drawn is that if this way of thinking is optional, then so is epistemology, and so is philosophy as it has understood itself since the middle of the last century.” It’s optional? Granted. There is indeed such a thing as history, congratulations on the discovery. But it’s a non sequitur to go from “optional” to “wrong”.

The crucial premise of [my] argument is that we understand knowledge when we understand the social justification of belief, and thus have no need to view it as accuracy of representation. Once conversation replaces confrontation, the notion of the mind as Mirror of Nature can be discarded. Then the notion of philosophy as the discipline which looks for privileged representations among those constituting the Mirror becomes unintelligible. A thoroughgoing holism has no place for the notion of philosophy as “conceptual”, as “apodictic”, as picking out the “foundations” of the rest of knowledge, as explaining which representations are “purely given” or “purely conceptual”, as presenting a “canonical notation” rather than an empirical discovery, or as isolating “trans-framework heuristic categories”. If we see knowledge as a matter of conversation and of social practice, rather than as an attempt to mirror nature, we will not be likely to envisage a metapractice which will be the critique of all possible forms of social practice.[6] So holism produces, as Quine has argued in detail and Sellars has said in passing, a conception of philosophy which has nothing to do with the quest for certainty.

I agree with him that the quest for certainty, for foundations of knowledge, should be abandoned. But I think he sets up a false dichotomy between an implausible realism and an anti-representationalist pragmatism. The notion of truth as correspondence is implicit in all theorizing, but our philosophical self-understanding can incorporate the fact that the social sciences are less exclusively ‘representational’ or ‘mirror-like’ than the natural sciences, in that the reality they purport to describe, lacking a physical aspect, doesn’t clearly and directly confront us in the way physical things do. We have more freedom, so to speak, in the social sciences; we can’t constantly test our theories in the crucible of physical nature. Still, in the end even our self-understanding has to make room for ‘truth-as-correspondence’, because, after all, we are trying to describe an external reality, albeit an invisible one. (It asserts its presence in the form of logical and inductive persuasiveness, regularities in human behavior, empirical evidence for theories, etc.)

***

Sorry to be repetitive, but I can’t help it: Rorty refutes himself. The methods he uses contradict his conclusions. Like any theorist, he makes use of rational argumentation, logic, induction and so on to adjudicate between competing positions. He necessarily approaches his material from a “common ground”; the problem is that he’s arguing for the impossibility of a common ground. Even someone arguing for a paradigm shift, as he’s doing, adopts a meta-level perspective outside the two competing paradigms. To quote Scheffler (who’s commenting on Kuhn): “the comparative evaluation of rival paradigms is quite plausibly conceived of as a deliberative process occurring at a second level of discourse....regulated, to some degree at least, by shared standards appropriate to second-order discussion”. Rorty himself “deliberates”—necessarily—“at a second level of discourse”. This is the trap into which every relativist falls. Even anti-correspondence theorists and anti-representationalists fall into the trap of self-contradiction, though less obviously than Rortyan pragmatists. (In the latter case the contradiction is between the form of argument and the content of the theory; in the former case it’s between the form of language and the content of the theory. Language, and our use of it, itself implies correspondence and representation.)

Oh great, here we go with the “winner writes history” argument for relativism:

We are the heirs of three hundred years of rhetoric about the importance of distinguishing sharply between science and religion, science and politics, science and art, science and philosophy, and so on. This rhetoric has formed the culture of Europe. It made us what we are today. We are fortunate that no little perplexity within epistemology, or within the historiography of science, is enough to defeat it. But to proclaim our loyalty to these distinctions is not to say that there are ‘objective’ and ‘rational’ standards for adopting them. Galileo, so to speak, won the argument, and we all stand on the common ground of the ‘grid’ of relevance and irrelevance which ‘modern philosophy’ developed as a consequence of that victory. But what could show that the Bellarmine-Galileo issue ‘differs in kind’ from the issue between, say, Kerensky and Lenin, or that between the Royal Academy (circa 1910) and Bloomsbury?

Good question. Maybe the fact that certain practices, called “scientific”, give us the technical ability to manipulate nature, to predict events, and can be evaluated for truth by standards shared by thinking people across cultures and paradigms, while other practices, called “political”, are concerned mainly with the power-relations between individuals in society.

‘Power-relations are reality itself! There’s nothing outside power-relations! (—except for that sentence, which posits its own objective truth).’ Foucault and Rorty, sitting in a tree, k-i-s-s-i-n-g, first comes love, then comes marriage, then comes Rorty’s mental miscarriage.

***

One of the reasons I can’t help having disdain for relativists like Foucault and Rorty, who in this respect seem no more intelligent than the average politically correct relativistic American, is that anti-relativist objections to their theories are brain-numbingly obvious.

[From a different book:] If Foucault is giving a generalized critique of general theories [i.e., of theories that purport to be “objectively true,” or true even outside the discursive practices in which they’re embedded], then he may be open to a familiar objection to all such attempts, namely, that they are self-defeating. If all attributions of truth and falsity are relative to discursive practices [and so are not valid outside a particular discursive practice], the objection would go, then the truth-claims of Foucault’s own thesis will be equally relative to a particular discursive practice. That is, he will not be able to claim that his own thesis is objectively true, or such that it ought to be accepted by any rational being. Foucault could, however, defend himself against such objections by, for example, presenting his enterprise as not so much a general theoretical critique of theories as a piecemeal liberation of his readers from the harmful influence of the belief in the need for general theories to be accepted by any rational being.

But what is meant by “general theory” here? Every hypothesis is general, inasmuch as it is put forward as being true—not true “relative” to anything, but simply true. Therefore, by writing his painstakingly dense books, Foucault is trying to free his readers from the temptation to take seriously theories such as his.

Of course, broad, unscientific worldviews like “Enlightenment-humanism” or “Marxism-Leninism” deserve to be dissected and aren’t “objectively true” (partly because they incorporate a system of values). But either the Foucauldian, Rortyan position is obvious in this way or it’s unrigorous and ultimately self-refuting.

[1] See Leopold Pospisil, The Kapauku Papuans of West New Guinea.

[2] (Suggestive remark: in this respect it’s similar to consciousness.)

[3] The order “Go over there!” posits the state of you being over there.

[4] I say “to an extent” because, as Andrew Sayer says in Method in Social Science: A Realist Approach, there is always some continuity between two given “vocabularies”, some way of arguing between them. This is because, insofar as people follow the most basic rules of logic (which they have to if they speak a language), their methods of finding truth are essentially similar. Newtonians can argue against relativists; Western scientists can demonstrate to Papuans that the latter’s explanatory myths (of, say, an eclipse) are incorrect. (Insofar as people are rational, they must accept and utilize evidence in support of beliefs. And the scientist’s evidence—his whole framework, in fact—is more compelling than the Papuan’s, for obvious reasons. If despite overwhelming evidence the Papuan persists in his mythical beliefs, he is to that extent irrational. (An example of a ‘framework-neutral’ criterion for deciding between theories is predictive ability. Even Papuans want to be able to predict phenomena, like eclipses and weather-patterns. To the extent that the scientist is better able to predict these things, his theories are better than the Papuan’s.))

[5] He’s right that in a sufficiently different language and conceptual framework the problem wouldn’t arise, but it doesn’t make sense to assume that therefore the problem is a verbal illusion. For it may be that our language and conceptual system are subtler and more faithful to differences in the Sachen selbst than the other language is. For example, the Greek idea that sensation belongs to the body rather than the mind is indeed an interesting alternative to our Cartesian intuitions, but it’s probable that philosophical argument would conclude that our intuitions are more faithful to differences in the ‘raw material’—differences that are merely “potential” (for us) until they’re made “actual” in our awareness. How else would our intuitions, our “whole cluster of images”, have persisted for centuries if they didn’t correspond to distinctions in the raw material of experience? –Anyway, all this is academic, because my solution to the mind-body problem [in the above-linked paper] accommodates both the Greek and the Cartesian intuitions. It explains in what sense mental phenomena are physical and in what sense they aren’t.

[6] Here he overlooks the fact that knowledge can be both “a matter of conversation and of social practice” and “an attempt to mirror nature.” The two are not mutually exclusive.

Published on May 24, 2021 20:11