Samir Chopra's Blog
April 2, 2018
April Fool’s Day And The Cruelest Month
The contemporary comedic take on April Fool’s Day is that it is the one day of the year that we direct some skepticism at what we encounter on the Internet; it is the one day of the year that we exercise discretion and judgment over the content of the material we read and deign to ‘share’ with our ‘friends’ online. We are treated to news of disasters, absurd decisions, and perhaps even cosmic catastrophes; we hasten to spread the word, but our hands are stayed by a quick glance at the calendar, electronic or otherwise. If only we were so discriminating for the rest of the year–when we are busily and happily helping rumors, conspiracy theories, and, even worse, destined-to-be-stale ‘hot takes’ go ‘viral.’
Comedy always being paired with tragedy, it is but inevitable that we should peer at the dark side of such an understanding of April Fool’s Day. For it is on this day–thanks to the many faux news bulletins that make the rounds–that many, many of our hopes are raised, only to be cruelly dashed against the unforgiving rock of the cruelest month’s first day. We are told that corporations have acquired common sense and stopped polluting the world that contains their consumers, present and future; we are assured the substances we would most like to consume, but which medicine claims are slowly killing us, are actually harmless and might even be good for us; we learn of windfalls and literally unbelievable strokes of fortune; the list goes on. (The science-fictionish April Fool’s announcement has its own pride of place here: we are informed that cold fusion will soon power our cellphones, perhaps setting us free from the onerous business of recharging batteries, or that, as Google once claimed, we could even recall the emails we had sent.) Lies, all of them.
We smile ruefully when we learn we have been duped. We had dared to hope against hope, dared to believe that this promise-breaking world had changed its ways. All in vain. That little flight of fantasy, borne aloft on wings beating powerfully and hopefully, has been called back to earth; the gravity of the real world has proved irresistible. April Fool’s Day animated it; April Fool’s Day renders it moribund too.
April Fool’s Day is the perfect symbol of our civilization; it isn’t enough that we have been systematically duped into imagining salvation lies around the corner in some religious, political, or intellectual dispensation. No, we must also set aside a special day of the year when we can reenact, worldwide, a human simulation of the cosmic prank. Ours is a species that dares to hope in the face of hopelessness; it is on this day that those hopes can be made to look even more forlorn.
Note: On this second day of April, residents of the East Coast received their own particular notification from the cruelest month: a spring snowfall in the morning that snarled commutes and, for a few hours, made it clear that winter hadn’t gone anywhere just yet.
April 1, 2018
Rereading Native Son
I’ve begun re-reading a book (with the students in my philosophical issues in literature class this semester) which, as I noted here a while ago, made a dramatic impact on me on my first reading of it: Richard Wright’s Native Son. Thus far, I’ve read and discussed Book One with my students (on Wednesday last week); we will resume discussions on April 8th once spring break is over. But even on this brief revisitation I’m struck by how my reading has changed. I’m now twenty-six years older than I was on my first reading. Then, I was thinking about returning to graduate school; now, I’m a tenured professor assigning the same text to my undergraduates. Then, I read Native Son in the anticipation of discussing it with my girlfriend, who had gifted it to me; I think I subconsciously hoped to impress an older and wiser woman with my sensitive and nuanced take on Bigger Thomas’ fate. Now, I read Book One (Fear) of Native Son in anticipation of discussing it with my students, many of whom have already shown themselves capable of sensitive and nuanced readings of the novels I have assigned them thus far; I therefore look forward to their understanding of this classic novel, daring to hope that they will bring a new interpretation and understanding of this material to my attention. For my part, I’m far more attentive to many plot details and devices on this reading; I’ve become, I think, a more careful and sensitive reader over the years, looking for more, and often finding it, in the texts I read.
Before we began class discussions I subjected my students to a little autobiographical detail: I informed them of my prior reading, of the book’s influence on me, of the passage of time since then, how I would be re-reading the text with them, and so on. I did not detail the full extent of Native Son’s impact on me; that discussion will have to wait till Bigger’s trial and his defense by Max. But I cannot wait to do so; I wonder if I will be able to capture the sense I had twenty-six years ago of suddenly seeing the world in a whole new light. One part of that anticipation also fills me with dread; what if my students simply do not ‘get’ from it what I was able to? What if, indeed, as I read on, I find myself disappointed by Native Son?
But if the first class discussion last week was any indicator, I needn’t entertain such fears. My students ‘came through’: they had read the first book closely; they had responded to Wright’s dramatic evocation of a fearful, angry, and violent Bigger, living in a ‘black world’ disjoint from a ‘white world,’ destined to run afoul of those forces that had conspired to make him who he was, to drive him to kill, negligently and willfully alike, onwards to his fatal rendezvous with America, his home and his graveyard. Bigger’s story endures; it does so because much else–like the forces that harried him–endures too.
March 29, 2018
Contra Corey Pein, Computer Science Is A Science
In this day and age, sophisticated critique of technology and science is much needed. What we don’t need is critiques like this long piece in the Baffler by Corey Pein, which, I think, is trying to mount a critique of the lack of ethics education in computer science curricula but seems most concerned with asserting that computer science is not a science–relying, as far as I can tell, on the premise that “Silicon Valley activity and propaganda” = “computer science.” I fail to understand how a humanistic point is made by asserting the ‘unscientific’ nature of a purported science, but your mileage may vary. Anyway, on to Pein.
Here is a choice quote:
It’s now painfully clear that computer science is not actually a science, by the simplest definition of that word—a method of obtaining, organizing, and analyzing knowledge about the universe. Granted, computers may assist with the tasks of obtaining, organizing, and analyzing. But “computer science” as a specialized field of gadget-enabled inquiry is not concerned with the natural universe—it is, rather, engaged in exploring an entirely fabricated universe that exists inside the computer…Because it is devoted to the creation of systems that limit choice, “computer science” is something more pernicious than a non-science—it is an outright enemy of scientific reasoning.
The ignorance of basic computer science principles and history, and, most gallingly, of internal debates in the discipline, is painful to behold (the status of computer science as a science has long been discussed by those who work in this discipline). Pein, I’m quite sure, has never heard of the theory of computation, nor, possibly, ever read an article on the history of computer science. I’m unwilling to write a lengthy refutation of this piece, and will rest content with excerpting the section titled ‘Computer Science as a Science’ from Chapter Four of my Decoding Liberation: The Promise of Free and Open Source Software (please email me to ask for a PDF copy of the book or the chapter):
Though we can think of hackers and hobbyists as practicing a “naturalist” computer science in its early days, it did not acquire all the trappings of a scientific discipline until the establishment of university computer science departments some twenty years after ENIAC. Through the 1960s, computer science had to struggle to be recognized as a discipline within the academy as it competed with older applied sciences such as electrical engineering and applied mathematics. In particular, it had to combat the perception that computers were just clerical tools to be used for mundane administrative chores. Aiding in the recognition of computer science as an academic discipline, George Forsythe, a mathematician at Stanford University, created a Division of Computer Science within the Mathematics Department in 1961, which split off in 1967 to become the first computer science department (Ceruzzi 2003, 102). At that time Forsythe defined the “computer sciences” as “the theory of programming, numerical analysis, data processing, and the design of computer systems” (Knuth 1972).
Other stalwarts in the field had already made a formal definition of computer science. Herbert Simon, Alan Perlis, and Allen Newell wrote a letter to Science in 1967, defining computer science as the “study of computers” (Newell, Perlis, and Simon 1967). Their letter defended their definition and argued for the legitimacy of computer science as a science in response to a varied set of objections, including one claiming that computers as man-made artifacts were not a legitimate object of study for a “natural science.” The trio argued, “[Computers] belong to both [engineering and science], like electricity (physics and electrical engineering) or plants (botany and agriculture). Time will tell what professional specialization is desirable . . . between the pure study of computers and their application.” Newell and Simon went on to remark, in their 1975 Turing Award acceptance lecture:
Computer science is an empirical discipline. We would have called it an experimental science, but like astronomy, economics, and geology, some of its unique forms of observation and experience do not fit a narrow stereotype of the experimental method. None the less [sic], they are experiments. Each new machine that is built is an experiment. Actually constructing the machine poses a question to nature; and we listen for the answer by observing the machine in operation and analyzing it by all analytical and measurement means available. Each new program that is built is an experiment. It poses a question to nature, and its behavior offers clues to an answer. But as basic scientists we build machines and programs as a way of discovering new phenomena and analyzing phenomena we already know about. Society often becomes confused about this, believing that computers and programs are to be constructed only for the economic use that can be made of them. . . . It needs to understand that the phenomena surrounding computers are deep and obscure, requiring much experimentation to assess their nature. (Newell and Simon 1976)
In 1996…Frederick Brooks argued that computer science was grievously misnamed, that its practitioners are not scientists but rather engineers (specifically, “toolsmiths”) (Brooks 1996). As he described the distinction, “the scientist builds in order to study; the engineer studies in order to build” (emphasis in original). Or, “sciences legitimately take the discovery of facts and laws as a proper end in itself. A new fact, a new law is an accomplishment, worthy of publication. . . . But in design, in contrast with science, novelty in itself has no merit.” Brooks might be construed as asserting that computer science does not employ the traditional scientific method and should therefore be thought of as an engineering discipline. But computer scientists do frame hypotheses — say, a conjecture about the resource consumption of a distributed implementation of a new pattern-matching algorithm; then design experiments, perhaps implementing the algorithm and its experimental scaffolding; observe phenomena by gathering data from executing this implementation; support or reject hypotheses, for example, when the software performs unexpectedly poorly on large data sets; and formulate explanations, such as, “Network latencies had an unexpectedly severe impact on load-balancing.” These hypotheses and experimental designs may be refined and repeated.
Brooks…significantly undercounts situations in which computer scientists “build to study.” In addition to obvious practical applications, computer scientists study computation by building models of computation, whether physical computers, software simulations, or design elements of programming languages. Brooks himself agrees that there is a significant distinction between computer science and the engineering disciplines: “Unlike other engineering disciplines, much of our product is intangible: algorithms, programs, software systems.” These products eventually take tangible form as the internal states of a running computer change during the course of executing a program….
The practices of computer science have a close kinship, both historically and currently, with those described by Giambattista Vico in La Scienza Nuova: scientists understand the phenomena they study by making and constructing models that validate their theories (Verum et factum convertuntur — the true and the made are convertible) (Miner 1998). Artificial intelligence, which grew out of Norbert Wiener’s cybernetics program, a direct inheritor of Vico’s principle (Dupuy 2000), is a model-making discipline par excellence; many of its practitioners improve our understanding of feats of cognition such as vision and hearing by striving to create machines that replicate them (Brooks 1991). More broadly, computer science is plausibly viewed as the use of computers and programs as models to study the properties of information rather than of energy and matter, the traditional objects of study for the natural sciences. This use of models is constitutive of computer science: the study of computational complexity, for example, would be impoverished were it limited to theoretical analysis without access to the practice of writing and running programs….
[M]any subfields [are] predominantly concerned with uncovering new facts and laws, and [many] practitioners of the discipline comport themselves as scientists. For example, informatics is broadly and succinctly characterized as “the science of information. . . . the representation, processing, and communication of information in natural and artificial systems. . . . [I]nformatics has computational, cognitive and social aspects” (Fourman 2002). Similarly, algorithmics has enormous applications in computer programming, but also sports vibrant experimental and theoretical branches.
[T]heoretical computer science….has uncovered “facts and laws” about the limits and applicability of computation that are not only meritorious in their own right but also critically inform nearly every design problem….Alan Turing’s discovery of uncomputable problems was both a scientific triumph and an early voice in an ongoing dialogue, in both theoretical and applied communities, about the nature of computation itself (Turing 1936).
The Encyclopedia of Computer Science describes computer science as “the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application.” The core skills of a computer scientist, therefore, are “algorithmic thinking, representation, programming, and design,” though “it is a mistake to equate computer science with any one of them” (Denning 2000). Design and engineering are crucial, as is scientific methodology. We can reasonably view every computer as a laboratory that tests and validates theories of algorithmics, language design, hardware–software interaction, and so on. Every time a piece of code runs, it generates data points that confirm or disconfirm hypotheses. Computer science is no more and no less a science than any other natural science.
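The hypothesis-driven cycle the excerpt describes is easy to make concrete. Here is a minimal sketch in Python–my illustration, not anything from the book–that frames a conjecture about an algorithm’s resource consumption, runs the implementation as the ‘experiment,’ and collects the data that would support or reject the hypothesis:

```python
# A toy instance of the experimental cycle described above (an illustration,
# not from Decoding Liberation): frame a hypothesis about an algorithm's
# running time, execute the implementation, observe, then accept or reject.
import random
import string
import timeit

def naive_search(text: str, pattern: str) -> int:
    """Return the first index of pattern in text, checking every alignment."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:
            return i
    return -1

# Hypothesis: for a fixed short pattern and random text, running time
# grows roughly linearly with the length of the text.
random.seed(0)
pattern = "xyzzy"  # all but certain to be absent from random lowercase text
for n in (10_000, 20_000, 40_000):
    text = "".join(random.choices(string.ascii_lowercase, k=n))
    elapsed = timeit.timeit(lambda: naive_search(text, pattern), number=20)
    print(f"n={n:6d}  elapsed={elapsed:.4f}s")  # does time double as n doubles?
```

If the printed times roughly double as the text doubles, the data support the hypothesis; unexpectedly poor performance on large inputs–the very situation the excerpt mentions–would send us back to refine both hypothesis and experiment.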
Note: As an irrelevant aside, I designed and implemented a class on ‘computer ethics’ during my stint as a member of the Brooklyn College Computer Science department from 2002 to 2010.
March 22, 2018
Space Exploration And The Invisible Women
Yesterday being a snow day in New York City–for school-going children and college professors alike–I spent it with my daughter at home. Diversion was necessary, and so I turned to an old friend–the growing stock of quite excellent documentaries on Netflix–for aid. My recent conversations with my daughter have touched on the topic of space exploration–itself prompted by a discussion of the Man on the Moon, which had led me to point out that actual men had been to the moon, by rocket, and indeed, had walked on it. A space exploration documentary it would be. We settled on the BBC’s ‘Rocket Men’ and off we went; I wanted to show my daughter the Apollo 11 mission in particular, as I have fond memories of watching a documentary on its flight with my parents when I was a five-year-old myself.
As the documentary began, I experienced a familiar sinking feeling: my daughter and I were going to be watching something ‘notable,’ ‘historical,’ a human achievement of some repute, and yet again, we would find few women featured prominently. Indeed, as the title itself suggests, the documentary is about men: the astronauts, the rocket scientists, the mission control specialists. The only women visible are those watching rockets blast off or worrying about the fates of their family members in them. This used to happen in our watching of music videos too, as I introduced my daughter to ‘guitar heroes’ as a spur to her guitar lessons. After a couple of weeks of watching the likes of Neil Young, Jimi Hendrix, Jimmy Page et al., my daughter asked me, “Don’t girls play the guitar?” Well, of course they do, and so off we went, to check out Joan Jett, Nancy Wilson, Lita Ford, Chrissie Hynde, the Deal sisters, and many others.
It had been an easy trap to fall into. In the case of music, I had a blind spot myself. In the case of space exploration the problem lay elsewhere: there were no women pilots qualified for the astronaut program, as the initial selection of the astronaut corps came from the armed forces. Both instances, though, were united by their embedding in a culture in which women were less visible, less recognized, less likely to be promoted to the relevant pantheon. After all, as in literature and art and philosophy, women have been present in numbers that speak to their ability to surmount the social barriers placed in their paths, and yet they are still rendered invisible because of our failure to see them and their contributions to their chosen fields of endeavor.
As I watched a video of the first seven American astronauts being introduced at a press conference, I felt I had to say something to my daughter, to explain to her why no women were to be seen in this cavalcade of handsome crew cut men wearing aviator sunglasses. So I launched into a brief digression, explaining the selection process and why women couldn’t have been selected. My daughter listened with some bemusement and asked if things were still that way now. I said, no, but there’s work to be done. And then we returned to watching the Gemini and Apollo missions. Afterwards, I walked over to my computer and pulled up the Wikipedia entries for Valentina Tereshkova and Sally Ride and Kalpana Chawla and showed them to my daughter, promising her that we would watch documentaries on them too. She seemed suitably enthused.
March 21, 2018
Leaving Facebook: You Can Run, But You Can’t Hide
I first quit Facebook in 2010, in response to a talk Eben Moglen gave at NYU about Facebook’s privacy-destroying ways; one of his most memorable lines was:
The East German Stasi used to have to deploy a fleet of undercover agents and wiretaps to find out what people did, who they met, what they ate, which books they read; now we just have a bunch of Like buttons and people tell a data monetizing corporation the same information for free.
That talk–in which Moglen referred to Mark Zuckerberg as a ‘thug’–also inspired a couple of young folk, then in attendance, to start Diaspora, an alternative social network in which users would own their data. I signed up for Diaspora soon after it kicked off; I also signed up for Google+. I returned to Facebook in 2012, a few months after starting my blog, because it was the only way I could see to distribute my posts. Diaspora and Google+ never ‘took off’; a certain kind of ‘first-mover’ status, and its associated network effects, had made sure there was little social networking on those alternative platforms.
Since then, I’ve stayed on Facebook, sharing photos, bragging about my daughter and my various published writings, and so on. I use the word ‘bragging’ advisedly; no matter how much you dress it up, that’s what I’ve been doing. But it has been a horrible experience in many ways: distraction, lowered self-esteem, and envy have been but its most prominent residues. Moreover, to have substantive discussions on Facebook, you must write. A lot. I’d rather write somewhere else, like here, or work on my books and essays. So, I desperately want to leave, to work on my writing. But, ironically, as a writer, I feel I have to stay on. Folks who have already accomplished a great deal offline can afford to stay off; those of us struggling to make a mark, to be noticed, have to stay here. (Consider that literary agents now want non-fiction writers to demonstrate that they have a ‘social media presence’; that they have a flourishing Facebook and Twitter presence, which will make the marketing of their writings easier.) I know, I know; as a writer, I should work on my craft, produce my work, and not worry about anything else. I know the wisdom of that claim; reconciling it to the practical demands of this life is an ongoing challenge.
So, let’s say, ‘we,’ the user ‘community’ on Facebook, decide to leave, and we find an alternative social network platform. I’m afraid little will have changed unless the rest of the world also changes: the world in which data is monetized for profit, governed by a social and moral and economic principle that makes all values subservient to the making of profit. The problem isn’t Facebook. We could migrate to another platform; sure. They need to survive in this world, the one run by capital and cash; right. So they need to monetize data; ours. They will. Money has commodified all relationships, including the ones with social network platforms. So long as data is monetizable, we will face the ‘Facebook problem.’
March 19, 2018
Dear Legal Academics, Please Stop Misusing The Word ‘Algorithms’
Everyone is concerned about ‘algorithms.’ Especially legal academics; law review articles, conferences, symposia all bear testimony to this claim. Algorithms and transparency; the tyranny of algorithms; how algorithms can deprive you of your rights; and so on. Algorithmic decision making is problematic; so is algorithmic credit scoring; or algorithmic stock trading. You get the picture; something new and dangerous called the ‘algorithm’ has entered the world, and it is causing havoc. Legal academics are on the case (and they might even occasionally invite philosophers and computer scientists to pitch in with this relief effort.)
There is a problem with this picture. ‘Algorithms’ is the wrong word to describe the object of legal academics’ concern. An algorithm is “an unambiguous specification of how to solve a class of problems”–a step-by-step procedure that terminates with a solution to a given problem. These problems can be of many kinds: mathematical or logical ones are not the only ones, for a cake-baking recipe is also an algorithm, as are instructions for crossing a street. Algorithms can be deterministic or non-deterministic; they can be exact or approximate; and so on. But, and this is their especial feature, algorithms are abstract specifications; they are distinct from their concrete implementations.
Computer programs are one kind of implementation of algorithms, but not the only one. The algorithm for long division can be implemented by pencil and paper; it can also be automated on a hand-held calculator; and of course, you can write a program in C or Python or any other language of your choice and then run the program on a hardware platform of your choice. The algorithm that implements the TCP protocol can be programmed to run over an Ethernet network; in principle, it could also be implemented by carrier pigeon. Different implementation, different ‘program,’ different material substrate–for the same algorithm. There are good implementations and bad implementations (the algorithm, faithfully followed, gives the right answer for any particular input, but a flawed implementation incorporates errors and does not); some implementations are incomplete; some are more efficient and effective than others. Human beings can implement algorithms; so can well-trained animals. Which brings us to computers and the programs they run; the sketch below makes the point concrete.
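To make the distinction vivid, here is a minimal sketch in Python–my illustration, not the post’s–of one abstract algorithm, schoolbook division with remainder, given two different implementations. Both satisfy the same specification; they differ in procedure, in efficiency, and in their resemblance to the pencil-and-paper method:

```python
# One abstract algorithm (division with remainder), two implementations.
# An illustrative sketch; not code from the original post.

def divide_by_digits(dividend: int, divisor: int) -> tuple[int, int]:
    """Schoolbook long division: bring down one digit at a time."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("sketch assumes non-negative dividend, positive divisor")
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)  # bring the next digit down
        q_digit = remainder // divisor           # 'how many times does it go?'
        quotient = quotient * 10 + q_digit
        remainder -= q_digit * divisor           # subtract, carry the rest
    return quotient, remainder

def divide_by_subtraction(dividend: int, divisor: int) -> tuple[int, int]:
    """The same specification, implemented by repeated subtraction."""
    if divisor <= 0 or dividend < 0:
        raise ValueError("sketch assumes non-negative dividend, positive divisor")
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend

# Same answers, different procedures.
assert divide_by_digits(1234, 7) == divide_by_subtraction(1234, 7) == (176, 2)
```

The second implementation is correct but hopelessly inefficient for large dividends; the first mirrors the manual procedure. Neither is the algorithm itself; each merely realizes it in a particular substrate.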
The reason automation and the computers that deliver it to us are interesting and challenging–conceptually and materially–is because they implement algorithms in interestingly different ways via programs on machines. They are faster; much faster. The code that runs on computers can be obscured–because human-readable text programs are transformed into machine-readable binary code before execution–thus making study, analysis, and critique of the algorithm in question well nigh impossible. Especially when protected by a legal regime as proprietary information. They are relatively permanent; they can be easily copied. This kind of implementation of an algorithm is shared and distributed; its digital outputs can be stored indefinitely. These affordances are not present in other non-automated implementations of algorithms.
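The source-to-binary obscuring just described can be glimpsed even without access to proprietary machine code. Here is a rough illustration in Python–mine, not the post’s, and with a deliberately toy ‘scoring rule’ that is purely hypothetical–showing the readable source of a function alongside the machine-oriented bytecode that is what actually runs; native compilation, as with C, yields something more opaque still:

```python
# An illustration (mine, not the post's): human-readable source versus
# the machine-oriented form the Python interpreter actually executes.
import dis

def toy_score(income: float, debt: float) -> float:
    """A deliberately simple, hypothetical 'credit scoring' rule."""
    return 0.7 * income - 1.3 * debt

print(toy_score(50_000.0, 10_000.0))  # 22000.0 -- auditable at a glance
dis.dis(toy_score)                    # the bytecode a human rarely reads
```

A clerk could audit the two-line function above; auditing the executed form–let alone a stripped native binary guarded as a trade secret–is a different matter entirely.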
The use of ‘algorithm’ in the context of the debate over the legal regulation of automation is misleading. It is the ‘automation’ and ‘computerized implementation’ of an algorithm for credit scoring that is problematic; it is so because of specific features of its implementation. The credit scoring algorithm is, of course, proprietary; moreover, its programmed implementation is proprietary too, a trade secret. The credit scoring algorithm might be a complex mathematical algorithm readable by a few humans; its machine code is only readable by a machine. Had the same algorithm been implemented by hand, by human clerks sitting in an open office, carrying out their calculations by pencil and paper, we would not have the same concerns. (This process could also be made opaque, but that would be harder to accomplish.) Conversely, a non-algorithmic, non-machinic process–a human bureaucracy, say–that was just as opaque, as fast, and as insulated from scrutiny would be subject to the same concerns.
None of the concerns currently expressed about ‘the rule/tyranny of algorithms’ would be as salient were the algorithms not being automated on computing systems; absent that automation, our concerns would be significantly attenuated. It is not the step-by-step solution–the ‘algorithm’–to a credit scoring problem that is the problem; it is its obscurity, its speed, its placement on a platform supposed to be infallible, a jewel of a socially respected ‘high technology.’
Of course, the claim is often made that algorithmic processes are replacing non-algorithmic–‘intuitive, heuristic, human, inexact’–solutions and processes; that is true, but again, the concern over this replacement would not be the same, qualitatively or quantitatively, were these algorithmic processes not being computerized and automated. It is the ‘disappearance’ into the machine of the algorithm that is the genuine issue at hand here.
March 10, 2018
Iris Murdoch On Interpreting Our Messages To Ourselves
In Iris Murdoch’s The Black Prince (1973), Bradley Pearson wonders about his “two recent encounters with Rachel and how calm and pleased I had felt after the first one, and how disturbed and excited I now felt after the second one”:
Was I going to “fall in love” with Rachel? Should I even play with the idea, utter the words to myself? Was I upon the brink of some balls-up of catastrophic dimensions, some real disaster? Or was this perhaps in an unexpected form the opening itself of my long-awaited “break through,” my passage into another world, into the presence of the god? Or was it just nothing, the ephemeral emotions of an unhappily married middle-aged woman, the transient embarrassment of an elderly puritan who had for a very long time had no adventures at all? [Penguin Classics edition, New York, 2003, p. 134]
Pearson is right to be confused and perplexed. The ‘messages’ we receive from ‘ourselves’ at moments like these–ethical dilemmas being but the most vivid–can be counted upon to be garbled in some shape or fashion. The communication channel is noisy; and the identity of the communicating party at ‘the other end’ is obscure. Intimations may speak to us–as they do to Pearson–of both the sordid and the sublime, for we are possessed, in equal measure, by both the devilish and the divine; these intimations promise glory but they also threaten extinction. What meaning are we to ascribe to them? What action are we to take at their bidding? A cosmic shrug follows, and we are left to our own devices all over again. ‘Listen to your heart’ is as useless as any other instruction in this domain, for ‘the heart’ also speaks in confusing ways; its supposed desires are as complex, as confusing as those of any other part of ourselves. Cognitive dissonance is not an aberration, a pathological state of affairs; it is the norm for creatures as divided as us, as superficially visible to ourselves, as possessed by the unconscious. (Freud’s greatest contribution to moral psychology and literature was to raise the disturbing possibility that it would be unwise to expect coherence–moral or otherwise–from agents as internally divided, as self-opaque as us.)
We interpret these messages, these communiques, from ourselves with tactics and strategies and heuristics that are an unstable mixture of the expedient, the irrational, the momentarily pleasurable; we deal with ‘losses’ and ‘gains’ as best as we can, absorbing the ‘lessons’ they impart with some measure of impatience; we are unable to rest content and must move on, for life presses in on us at every turn, generating new crises, each demanding resolution. Our responses can only satisfice, only be imperfect.
The Clash were thus right to wonder, to be provoked into an outburst of song by the question of whether they should ‘stay or go.’ We do not express our indecision quite as powerfully and vividly as they do, but we feel the torment it engenders in our own particular way.
March 7, 2018
‘Reciprocity’ As Organizing Principle For The Moral Instruction Of Young Women
I’ve often wondered how best to provide moral instruction to my daughter as she grows up, what principles and duties to keep front and center in the course of my conversations with her as she begins to grow into an age where her interactions with other human beings start to grow more complex. Over the past year or so, I’ve strived to organize this ‘instruction’ around the concept of ‘reciprocity,’ around a variation of the Golden Rule and the altruism it implies: do good unto others; but only continue with the good if it is reciprocated; do not feel obligated to respond to unkindness with kindness; indeed, you shouldn’t respond to unkindness with kindness; if good is done to you, then you must reciprocate with good. There is one conditional duty in here: that of doing good to others, whose obligation continues to hold only if your acts are met with good done to you in turn. There is no duty to do good in response to bad being done unto you; and there is an absolute duty of doing good to others when they do good unto you. I’ve tried to provide this instruction by way of simple examples: we should not litter because in doing so we would make our neighborhoods dirty for ourselves and our neighbors; they should do the same for us; if some kid in school is nice to you, you should be nice back to them; if someone in school is not nice to you when you have been so to them, then don’t feel the need to continue being nice to them; acknowledge people’s generosity and kindness in some fashion, even if with a simple ‘thanks’; and so on. I’ve tried to make the claim that society ‘hangs together,’ so to speak, because of reciprocity. Without it, our social arrangements would fall apart.
Reciprocity is not as generous and self-sacrificing as pure altruism. I chose reciprocity as an organizing principle because I believe a commitment to altruism can hurt people, and moreover, in our society and culture, altruism has proved to be largely harmful to women. I was, and am, especially worried about a girl growing up–as too many in the past have–to believe that her primary duty is to make others happy, to do good to others even if good is not being done to her in turn. I believed that stressing reciprocity as an organizing moral principle would point in the direction of some positive obligations to make others happy but it would also place some limitations on those obligations. Aristotle wrote of the need to maintain a mean of sorts as we ‘practiced’ the virtue of generosity, between wastefulness and stinginess–the altruist gives too much in this reckoning. A moral agent guided by the principle of reciprocity aims to find a mean in the generosity of their benevolent or good actions: by all means be generous, but pick the targets of your generosity wisely.
I realize that the injunction to only do good if it is reciprocated in some way sounds vaguely unforgiving or unkind and perhaps self-defensive; but again, as I noted above, some such measure of protection is necessary for women, who for too long have been crushed by the burden of unfair or unrealistic expectations of their conduct, to the detriment of their well-being. I want my daughter to do good unto others, but I also want good to be done to her.
My daughter, to her credit, seems to have listened; she can now use the word ‘reciprocity’ in conversation, and sometimes to describe a plan of action; I wait to see how well she will internalize the ‘lessons’ it forms the core of. (She likes that it rhymes with ‘gravity’; as I say to her, gravity makes the world of things work, reciprocity makes the world of people work!)
Note: ‘reciprocity’ enjoys two entries in Wikipedia: one drawn from social psychology, the other from social and political philosophy.
March 6, 2018
Anger, Melancholia, And Distraction
Anger is a funny business; it’s an unpleasant emotion for those on the receiving end, and very often, in its effects, on those who are possessed by it. And yet there is no denying that it affords a pleasure of sorts to those consumed by it; it would not have the fatal attraction it does if it did not. That kind of anger, of course, is a righteous anger; we feel ourselves possessed by a sense of rectitude as we rail against those who have offended us; we are in the right, they are in the wrong, and the expression of our anger acts as a kind of confirmation of that ‘fact.’ But anger’s hangover is very often unpleasant, and among its vivid features is a crippling melancholia. We became angry because we had been ‘wounded’ in some shape or fashion, and while the expression of our anger is often a powerful and effective palliative against the pain of that injury, it is almost always a temporary one. What is left in its wake is a complex welter of emotions: we are sad, of course, because the hurt of the injury is still with us; we are fearful too, because we dread the same kind of injury again; our anger might have fatally wounded an important personal relationship or friendship; we might well have ventured out into unknown territory, fueled by anger, trusting it to guide us, but instead find ourselves at an unknown pass, one whose contours we do not yet know how to navigate. (I used the word ‘possessed’ above deliberately to indicate a kind of capture or hijacking of the self; to describe a person ‘suffering’ from anger might be equally accurate in terms of describing the sense of being a patient, one helpless in the face of an emotion running wild.)
I write about anger and distraction and anxiety here because I suffer from all of them; they are my psychic burdens, my crosses to carry. On one view, anger and distraction both bottom out in a kind of anxiety and fear. As I noted recently, I do not think I will ever rid myself of anxiety; it is a state of being. Because of that, I do not think I will rid myself of anger either. Anger cripples me–not just in personal relationships but in another crucial domain as well; it corrodes and attenuates my ability to do creative work. This failure induces its own melancholia; my sense of self is wrapped up quite strongly in terms of not just my personal relationships and social roles and responsibilities–like being a husband and father and teacher–but also my reading and writing. (I’m loath to describe this activity of mine as ‘scholarship’ and am quite happy to describe my intellectual status as ‘someone who likes to read and write.’) Reading and writing are well-nigh impossible in the febrile states induced by anger; among the terrible costs of anger this one brings with it an especially heavy burden for me. When I sought out a meditation practice a few years ago, one of my primary motivators was to ‘tame’ or ‘master’ this terrible beast somehow; it remains an ongoing struggle, one not helped by a falling off in my commitment to my meditation ‘sits.’ I write here, of course, as part of a process to try to reintroduce that in my life. More on that attempt anon.
March 2, 2018
A Complex Act Of Crying
I’ve written before, unapologetically, on this blog, about my lachrymose tendencies: I cry a lot, and I dig it. One person who has noticed this tendency and commented on it is my daughter. She’s seen ‘the good and the bad’: once, overcome by shame and guilt for having reprimanded her a little too harshly, I broke down in tears as I apologized to her; my daughter, bemused, accepted my apology in silence. Sometimes, my daughter has noticed my voice quiver and break as I’ve tried to read her something which moved me deeply; the most recent occurrence came when I read to her a children’s book on Dr. Martin Luther King Jr.–as I began to tell my daughter about the first time I, as a teenager, had experienced the King legend in a televised documentary, I had to stop reading, hand over those duties to my wife, and watch as my daughter heard the rest of the book read to her. And, of course, because my daughter and I often listen to music together, my daughter has seen me respond to music with tears. On these occasions, she is convinced that I’m crying because I’m ‘so happy!’
In recent times the song that has served to induce tears in me almost immediately is Chrissie Hynde’s cover of Bob Dylan’s ‘I Shall Be Released’ at the 30th Anniversary Concert Celebration in 1992. (Here is a music video of the performance; the audio can be found, among other places, on Spotify.) No matter what, whenever my daughter and I have sat down some evening–in between dinner and bath and story time–to watch and listen to Chrissie Hynde put her unique and distinctive touch on Dylan’s classic (ably backed up by one of the best house bands of all time–GE Smith and Booker T and the MGs among others), tears spring to my eyes. I’m not sure why; the lyrics are powerful and speak to release, redemption, deliverance, and salvation; it is almost impossible for me, at this stage, not to read the song’s message as a promise of a kind directed my way, at my particular ‘prison’–of the self and its seemingly perennial, unresolvable crises and challenges. Something in those lyrics–and their singing by Hynde–seems to offer reassurance, kindly and gently, and with, dare I say it, an existential love for all fellow human sufferers.
So I cry. And my daughter notices. She is both delighted and ever so slightly perplexed; this is her father, a fount of both affection and discipline, a man who struggles at the best of times to find the right balance between gentleness and firmness. She is curious, and so lately, when we play the song, she takes her eyes off the screen to look at me instead; she is waiting for me to cry; and on every occasion, I have ‘come through.’ Now, the song has acquired another dimension for my daughter; she wants to play it so she can see her father cry because he is ‘so happy.’ I don’t have the heart to tell her that my feelings are a little more complicated, and besides, it is true, I’m almost ecstatic as I begin to cry, to feel a little more, and to see my daughter break out in a huge smile.
And so now, if I listen to this song by myself, either on video or audio, I cry again, but something has been added to the song: my daughter’s reaction to it, to my crying. Its emotional texture is richer, more meaningful now; now when I listen to it, I see her turn to gaze into my eyes, looking for the first hint of moisture that will tell her that Papa’s reserve is no more. And I know that years from now, when I listen to this song again, I will cry again, because its lyrics will not just carry their original emotional resonance but also the memory of those days when I used to watch and listen to it with a five-year old girl, now grown older, wiser, and perhaps less inclined to spend such time with her father. That knowledge makes these moments even more powerfully emotionally informed; and yes, even more tear-inducing. A welcome situation.