Samir Chopra's Blog

February 23, 2014

Tom Friedman Has Joined Google’s HR Department

Tom Friedman is moonlighting by writing advertising copy for Google’s Human Resources Department; this talent is on display in his latest Op-Ed, titled, appropriately enough, “How To Get a Job at Google”. Perhaps staff at the Career Services offices of the nation’s major universities can print out this press release from Google HR and distribute it to their students, just in time for the next job fair.


Friedman is quick to get to the point (and to let someone else do the talking):


At a time when many people are asking, “How’s my kid gonna get a job?” I thought it would be useful to visit Google and hear how Bock would answer.


True to his word, the rest of the Op-Ed is a series of quotes from “Laszlo Bock, the senior vice president of people operations for Google — i.e., the guy in charge of hiring for one of the world’s most successful companies.” Let us, therefore, all fall into supplicant mode.


The How To Get a Job With Us press release is, of course, as much an advertisement for the corporation’s self-imagined assessment of its work culture as anything else; how obliging, therefore, of Friedman to allow Bock to tell us that Google so highly values “general cognitive ability”, “leadership — in particular emergent leadership as opposed to traditional leadership”, and “humility and ownership”. (In keeping with the usual neoliberal denigration of the university, Friedman helpfully echoes Bock’s claim that “Too many colleges…don’t deliver on what they promise. You generate a ton of debt, you don’t learn the most useful things for your life. It’s [just] an extended adolescence.” Interestingly enough, I had thought Google’s workspaces, with their vending machines, toys and other play spaces, contributed to the “extended adolescence” of its coders. The bit about the “ton of debt” is spot-on though.)


The use of opinion pages at major national newspapers for corporate communiques, to advance business talking points, to function as megaphones for the suppressed, yearning voices of the board-room, eager to inform us of their strategic perspectives, is fast developing into a modern tradition. This process has thus far been accomplished with some subterfuge, some stealth, some attempt at disguise and cover-up; but there isn’t much subtlety in this use of the New York Times Op-Ed page for a press release.


Friedman’s piece clocks in at 955 words; direct and indirect quotes from Bock amount to over 700 of those. There are ten paragraphs in the piece; paragraphs one through nine are pretty much Bock quotes. Sometimes, I outsource my writing here on this blog to quotes from books and essays I’ve read; Friedman, the Patron Saint of Outsourcing, has outsourced his to Google’s VP of “people operations.”


The only thing missing from this Friedman piece is the conversation with the immigrant cabbie on the way to Google’s Mountain View campus, in the course of which we would have learned how his American-born children were eager to excel in precisely those skills most desired by Google. Perhaps we’ll read that next week.



February 22, 2014

Steven Soderbergh’s Contagion: Portrait of the Apocalypse

If you find speculation about post-apocalyptic situations interesting, then you should find speculation about the progression of an apocalypse interesting too. Steven Soderbergh’s Contagion is a fine cinematic take on just such a progression.


The movie’s plot is simple: a deadly new virus jumps the animal-human barrier, and is transmitted quickly by contact. The virus’ first appearance occurs in the Far East, and then, thanks to its method of transmission and modern international travel, it quickly acquires a global presence. Its rate of progression is geometric, and as disease control centers struggle to study its molecular biology–”a mix of genetic material from pig and bat viruses”–and devise a vaccine, the virus spreads, killing dozens, then hundreds, then thousands, then millions. This is a global pandemic, one that could terminate civilization as we know it.


As the pandemic progresses, successive scenes in the movie–often sustained by Cliff Martinez’ excellent soundtrack–ratchet up the tension, leading, finally, to the dreaded scenes of a possibly irreversible breakdown in social order. It is all here: the run on food at supermarkets; the hoarding; the looting; the spreading panic; the evacuations and the exodus. Conspiracy theories make the rounds; the unscrupulous find ways to profit; the principled find new occasions for bravery; the diligent die; there is space aplenty for displays of love, cowardice, and fear.


Many entries in the post-apocalyptic genre leave the apocalypse unspecified; Contagion details it quite carefully.


It would be too simplistic to suggest, as many are tempted to do when confronted with such portrayals of social degeneration in response to catastrophe, that these are occasions when “true human nature”, inevitably described as “selfish” and “cruel”, is on display. Instead, as I have argued before,


There is an alternative moral to be drawn…the human nature revealed to us in these depictions of an apocalypse’s aftermath is not the ‘true’, ‘real’ or ‘natural’ one at all. Instead what is shown in post-apocalyptic art are traumatized human beings whose responses–to their environment, to each other–are pathological precisely because of the nature of the changes undergone. The death, disease and pestilence of the apocalypse, for one. Post-apocalyptic visions are thus indeed revelatory, not because they show us how we were ‘before’ we ‘became civilized’ but because they show what our response would be to the dramatic, traumatic loss of our political and social orders.


What makes Contagion as compelling as it manages to be is ultimately its commitment to scientific and political fidelity: the genetics, the virology, the epidemiology, and the development of a vaccine are all carefully and knowledgeably described and deployed in the storytelling, as are the machinations of state and federal officials and of national and international public health authorities. This is a cerebral thriller, whose slickness of production artfully complements its keen eye for detail.


Mankind makes it back from the brink, but it has been a narrow escape, and it will not bring back to life the twenty-six million who lost theirs. This is fiction, but the perils it depicts are not too far from actuality.



February 21, 2014

History as Chronicle of the Inevitable

From Philip Roth’s The Plot Against America:


[A]s Lindbergh’s election couldn’t have made clearer to me, the unfolding of the unforeseen was everything. Turned wrong way round, the relentless unforeseen was what we schoolchildren studied as “History,” harmless history, where everything unexpected in its own time is chronicled on the page as inevitable.  The terror of the unforeseen is what the science of history hides, turning a disaster into an epic.


When I first studied history as a subject of formal instruction, one equipped with formal syllabi and textbooks, it was presented as a very particular narrative, the classic dates-and-kings-and-battles kind. There was no increase in the sophistication of our studies as we rose from the sixth to the eighth grades, merely a chronological transition from the Ancient to the Medieval to the Modern, eras that neatly terminated as centuries did. Our understanding of the events we studied remained pegged to the same paradigms in each grade in school: here was a cavalcade of occurrences, flowing through time, each bringing forth in neat succession the one after it; history was just one damn thing after another. (Later, military histories introduced me to the notion of history written by victors, to the selective presentation of those narratives that best validate and justify and tidy up the messiness of something as chaotic and morally problematic as war. These histories at least made clear their writing was a creative act.)


The contingencies Roth’s narrator finds missing in school history are implicit there, but their presence is easily masked by the fact that history looks back at the past, at events already transpired, now fixed and unalterable. Viewed from this perspective, it is all too easy to imagine the collective weight of all that went before forcing into existence the events of historical narratives, turning them from potential to actual. When this is done, the indeterminate garden of forking paths that is the future suddenly and mysteriously becomes as determinate and necessary as the past–the transition through the present has this magical effect. We forget, all too soon, every moment’s pregnancy with possibility. This forgetting is facilitated, of course, by the selective narrow focus that ignores entire classes of humans, entire domains of human activity. Small wonder that the events of histories which do not reflect the rich diversity of historical forces in action at any given moment appear so inevitable.


Some kinds of histories–especially of those human activities considered to be conceptually divorced from social, political and economic contexts–are especially susceptible to acquiring an air of inevitability about them. Histories of science are sometimes understood in this fashion; the edifices of scientific knowledge are imagined built up in step-wise fashion, each stone rising ever higher on the basis of the logical and evidential support afforded by the ones beneath it; no alternative developments appear visible. It is only the closer look at the social embedding of the laboratory that enables the revelation of its many contingencies. Keeping the ‘terror of the unforeseen’ hidden here serves the ideological purpose of portraying scientific knowledge as untainted by even the slightest hint of subjectivity; the practitioners of science appear free of any distinctively human bias.



February 20, 2014

A Stutterer and His Cure

In the seventh grade, at the age of eleven, I began to stutter. It began without apparent reason; all too suddenly, I found myself tripping over consonants and unable to begin speaking words that began with vowels. When asked to speak up in class, I found I needed a visible act of physical exertion to get the words rolling; often, I would have to step out from behind my desk with a little skip or hop, an act that never failed to provoke giggles in my classmates and sometimes even my teachers, who would look at me with expressions part amused, part quizzical. I had never stuttered before; I was mortified and humiliated and crushed.


My stuttering was plain for all to see; my audience included my parents. My mother was intelligent and sensitive enough to realize this affliction had a psychological provenance, though she could not begin to guess at what it was. Perhaps because I had changed schools the previous year; perhaps because I was still struggling to adjust to life as a ‘civilian’ after my father’s retirement from the air force. I had never been particularly gregarious or extroverted, but now, some other barrier to social interaction had arisen from deep within me and laid a formidable roadblock in front of me. I showed no signs of being able to negotiate it.


My mother sought help. She was directed to a child psychologist–reputed to be of sympathetic temperament and disposition–whose offices were conveniently located near our home, a short bus-ride away. When she told me she planned to take me there for a consultation, I was agreeable. I liked the idea of being ‘treated’ and, more to the point, I was curious about what a ‘psychologist’ did. How would she ‘cure’ me? What was the ‘treatment’ like?


Our first meeting with the psychologist went pleasantly enough; my mother and I met her together and provided her with some elementary details on our family, my school life, my friends, my daily activities, and of course, my immediate history preceding the outbreak of stuttering.


This intake meeting out of the way, my sessions with my therapist began. Twice a week, after school, my mother and I traveled by bus to her office, and then, while my mother waited for me, I went into the therapist’s office for an hour. This was a talking cure for talking; so we talked.


It is now almost thirty-five years since those sessions, so I can remember little of them. I do remember my therapist’s gentleness, her curiosity. I think her diagnosis of my stuttering, such as it was, was that a shy boy had become even more so; that my inability to come out of my shell in my new school, to make friends in my neighborhood, my constant retreat into my books, had driven even my spoken expression back into me, repressed and suppressed it.


In the end, the ‘cure’ was effectuated by the simplest of means; she was a stranger, and she was kind, and she spoke to me, and listened to me and humored me. Those conversations, by themselves, drew me out of my shell and encouraged me to speak. She did not discipline me; she was not harsh; she did not rebuke me or mock me; she listened a great deal. I spoke, I complained, I bemoaned the changes in my life, I spoke of what I felt was missing in my life.


After every session, my mother would ask me how it had gone, and I would always have the same answer: It went well. I grew to like my therapist and looked forward to my twice-weekly conversations with her.


A few months later, my therapist told my mother I was ‘cured.’ Indeed, I was. I had stopped stuttering; or at least, the most noticeable forms of my affliction were now gone. I do not remember if we did any follow-ups, or if I was upset at having ended the treatment. In any case, soon thereafter, I left home for boarding school. Nothing convinced me of how valuable my sessions with her had been quite like my time in boarding school; dealing with its feral residents while suffering from a stutter would have been misery.


Traces of my stutter still survive; when I am angry, stressed out, unhappy, or otherwise not quite psychically comfortable, I notice myself tripping over words, unable, again, to begin words with vowels. At those times, the only remedy I can seek is to simply slow down, stop speaking, retreat, and then try again.


I wonder where my therapist is; I never found out her name, never met her again. Here is a belated thank you.



February 19, 2014

How Best to Introduce Scientific Reasoning

A couple of days ago on Facebook, by way of crowd-sourcing syllabi preparation for an undergraduate critical thinking course that includes a unit–three to six class sessions–on scientific reasoning, David Grober-Morrow threw out the following query:


What do you most wish that undergraduates (science and non-science majors) understood about scientific reasoning?


This is a very good question. I would suggest that for the indicated demographic and unit-length, the most valuable instruction would be in the centrality of ampliative inference in the practice of science. That is, students should learn that scientific reasoning, besides relying on deductive inference of the consequences of scientific laws, also utilizes, in large part, induction and abduction.


Both these forms of inference ‘go beyond the data’; they enable the bridging of the gap between observations and the conclusions drawn on their basis. In inductive reasoning, the scientist infers statements about future observations after having made a finite set of observations of empirical phenomena. The classic ‘All ravens observed thus far are black; therefore, all ravens, even those unobserved at this point in time, are black’ formulation of this kind of reasoning leads, once suitably contrived predicates are admitted, to Nelson Goodman’s famous riddle of induction; it is a form of prediction, an inference made about the future. In abductive reasoning, in making inferences to the best explanation, the scientist infers backwards, to the past, about the kinds of events that might or must have occurred to make true the observations recorded. A bridge has collapsed; what must have happened to make this event occur? This might thus be termed postdiction.


Thus, after making observations at one point in time, the scientist can use these forms of inference to look backward and forward along the timeline.
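
A compact way to fix the contrast for students (this schematic gloss is my own, not anything proposed in Grober-Morrow’s query) is a pair of inference schemas, with the horizontal line marking an ampliative, non-deductive leap:

\[ \frac{F(a_1) \wedge G(a_1),\ \ldots,\ F(a_n) \wedge G(a_n)}{\therefore\ \forall x\,(F(x) \rightarrow G(x))} \qquad \text{(induction)} \]

\[ \frac{E \text{ is observed};\quad H \text{ would, if true, best explain } E}{\therefore\ H} \qquad \text{(abduction)} \]

In neither schema does the conclusion follow deductively from the premises above the line; that gap is precisely what makes the inference ampliative.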


Introducing students to these forms of inference leads quite naturally to an introduction and explanation of the centrality of probabilistic forms of reasoning in science, the nature of admissible and inadmissible evidence and the confirmations they permit, the formulation of scientific hypotheses and theories, and the nature of scientific laws. It also shows how deductive inference is a relatively minor part of scientific reasoning, one that follows on the heels of these two forms of inference.
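
Here too a single formula can anchor the discussion; again, this is my suggested illustration, not part of the original query. Bayes’ theorem,

\[ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} \]

captures in one line how evidence E bears on a hypothesis H: the more strongly H predicts E, and the less probable E is otherwise, the greater the boost that observing E gives to H.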


It would be a mistake, I think, to introduce students, in a class like the one described above, to the ‘gruesome’ Goodman puzzle of induction. The fairly sophisticated concepts involved in its clearest explication and resolution are likely to confuse students in the limited time available. (It also has the unfortunate feature of seeming a bit like a parlor trick, a sure-fire method of turning off a student already convinced that philosophers’ examples are a kind of intellectual sandbaggery.) Instead, I would rely on as many colorful examples as possible to show how science is not the mere routine noting down of data in notebooks, how creative and inventive induction and abduction allow scientists to be, and how much of the impressive and awe-inspiring edifice of scientific knowledge is built on the seemingly tenuous foundations provided by these forms of reasoning.



February 18, 2014

Hagiography as Biography: Turning Writers into Saints

Tim Parks wonders why biographies of writers flirt with hagiography, why they are so blind to their subjects’ faults:


With only the rarest of exceptions…each author is presented as simply the most gifted and well-meaning of writers, while their behavior, however problematic and possibly outrageous…is invariably described in a flattering light…special pleading is everywhere evident, as if biographers were afraid that the work might be diminished by a life that was less than noble or not essentially directed toward a lofty cause.


[B]iographers apparently feel a need to depict their subjects as especially admirable human beings, something that in the end makes their lives less rather than more interesting and harder rather than easier to relate to their writing. It is so much clearer why the books were written and why they had to be the way they are if the life is given without this constant positive spin.


[O]ne can only assume that they are satisfying a general need to reinforce a positive conception of narrative art, thus bolstering the self-esteem of readers, and even more of critics and biographers, who in writing about literature are likewise contributing to the very same good causes.


The habit of imagining the writer as more well-meaning than he or she probably was is even more curious when we turn to academe. Usually hostile to any notion that knowledge of a writer’s life illuminates his work—“Biographical Fallacy!” one professor of mine would thunder—academic critics nevertheless tend to assume that the author is a solemn soul devoted to profound aesthetic enquiries and invariably progressive narratives. [emphasis added]


I would have thought the answer to Parks’ puzzlement was staring him right in the face (he flirts with it in the line emphasized above). Biographies of writers are written by, er, writers. To write quasi-hagiography rather than biography, to suggest that the personal and the artistic can be so divorced, is also to give oneself a free pass: judge me on my writing, and my writing alone. Here, the personal is not treated as political; instead, it becomes an autonomous sphere, one whose influences on a writer’s writing are not permitted to be viewed and whose consideration is not allowed to enter into any judgment of the writing, now viewed as an act radically divorced from the life that led to it. Writers are not embedded in their actions and circumstances and relationships; they are merely conduits for the expression of their art, which they bring to life by dint of their unstinting labors.


This is an exalted view to hold of others; it becomes even more pleasurable to profess such views when they lead to an exalted vision of oneself. Writing quasi-hagiographies of writers is then best understood as equal parts self-glorification and anticipatory protection of oneself against future critiques. To suggest the writer is essentially noble and virtuous despite well-known personal failings is to act to ensure a similar view of one’s own life. It is an act of writerly solidarity, an insurance policy taken out against any criticism that peeks under the hood.



February 17, 2014

Random Searches on the New York Subway: A User’s Story

Today’s post will simply make note of an interesting (and alarming) email I’ve received from a reader. Please do share this widely.


Some time ago I was researching the random bag check policy for the NYC subway system and stumbled across your blog posting [on random searches on the New York subway].


Until today I had never been singled out for a random bag check, nor had I ever been arrested.  When I entered the subway at 58th Street/Columbus Circle today at around 1pm, a police officer approached me and asked to search my backpack.  I thought about it for a moment and then declined.  He told me that since I declined I would not be allowed to enter the subway.  I told him that that was fine with me and that I would simply exit and take a taxi.  I exited and began to make my way down Eighth Avenue on foot to flag a taxi.  Along the way, instead of researching the matter on my phone more extensively as I should have done, I pondered the logic and fairness of the situation.


Even though I had nothing to hide, for some reason I did not feel like having my privacy invaded.  I also questioned the efficacy of the search strategy.  I wondered what exactly the officer meant when he told me I could not enter the subway.  Did he mean I could not enter at the exact spot where he was conducting the search?  Did he mean I could not enter that particular line at any other entrance?  Did he mean that since I had declined the search I could never ride the subway ever again, on any other day and on any other line?  The vagueness of his statement puzzled me.  Surely as a MetroCard-carrying resident of NYC I would not be required to suspend all access to this vital means of public transportation simply because I had declined this one bag check.  Following this train of thought, I figured that if I entered at another station where no bag searches were being conducted I might be able to lawfully enter, since I would be doing so without declining a bag check.


Remembering your story and some other information I had recently read about the legality of declining bag searches in public spaces, I felt compelled to put my theory to the test.  I proceeded to head back a block north to 54th Street and entered the subway from a different station.  When I made it to the turnstiles there was no bag search being conducted.  I swiped my card and entered the station.  Roughly thirty seconds after I entered the station I was approached by a different officer.  It immediately became clear that the original officer had put out an A.P.B. on me.  I was arrested and taken to the 58th Street/Columbus Circle subway police station.  The arresting officer instructed me to stand and face the entry counter, where a duty officer and his sergeant were sitting.  As I waited there patiently and silently, the sergeant and duty officer began discussing a strange smell that they detected in the air.  They continued by directing sarcasm my way and eventually asked me if I had been smoking marijuana.  I said no and told them that the reason I had declined the search was not because I had anything to hide but rather that I did not feel like having my privacy invaded.  They laughed and suggested that I was lying.  I was then put in a cell with Steve, a man of multiple prior arrests who had decided earlier this morning to enter the station without paying.  I spent the next four hours learning all about Steve’s life story while waiting to be processed.  Finally, after having my mug shot and fingerprints taken, I was released.  I had a brief, courteous discussion with the booking officer about the charges and my court date.  He informed me that because the NYC subway station is owned by a private company and because I had entered the station after declining the search, I was being charged with trespassing (in fact, the subway is a publicly owned system that is leased to the New York City Transit Authority).  Furthermore, because I had entered the station after having been told not to, I was also being charged with disobeying a lawful order.  He further stated that both charges are violations, lesser than misdemeanors.


That said, I am baffled by the vagueness of this law.  Why did the arresting officer arrest me rather than simply insisting on searching my bag?  Even though I was located, it stands to reason that the ease with which I could have entered elsewhere renders the system contradictory and ineffectual.  Your personal experience is a testament to this very idea.  The fact that I was arrested does not support the theory that the system works.  It simply seems to illustrate that much time was wasted and that I was arrested without probable cause.  I was not arrested because I was suspected of being a terrorist.  Instead I was arrested because I declined to have my privacy invaded.


I’m not totally sure what compelled me to enter the subway so quickly and so near to where I had declined the search.  For sure, curiosity played a major role.  Before I decided on that course of action, I did consider the wisdom of waiting a little longer, walking a little further, or simply taking a taxi as I had originally intended.  On the one hand, I am glad that the police force exists and that they are actively trying to avert another disaster.  On the other hand, if I were a terrorist or a drug trafficker or anything else unsavory, I certainly would not have been so stupid as to enter the train so close and so quickly after my initial brush with the law.  I am simply a law-abiding resident of this great city who was trying to make my way home.


Just some food for thought as you ponder entering the subway system so soon after and so near to your next declined bag search.


From: Matthew Akers



February 16, 2014

Can An Adult Read a Book Like a Child?

In ‘The Lost Childhood’ (from The Lost Childhood and Other Essays, Viking Press, New York, 1951), Graham Greene writes:


Perhaps it is only in childhood that books have any deep influence on our lives. In later life we admire, we are entertained, we may modify some views we already hold, but we are more likely to find in books merely a confirmation of what is in our minds already: as in a love affair it is our own features that we see reflected flatteringly back.


But in childhood all books are books of divination, telling us about the future, and like the fortune teller who sees a long  journey in the cards or death by water they influence the future. I suppose that is why books excite us so much. What do we ever get nowadays from reading to equal the excitement and the revelation in those first fourteen years….it is in those early years that I would look for the crisis, the moment when life took a new slant in its journey towards death.


As my posts here on Richard Wright’s Native Son and Toni Morrison’s writing in Sula would indicate, I’m inclined to disagree with Greene: I do think it’s possible for even adults to read books that they consider to have had a ‘deep influence on their lives.’


However, I think, too, that I have a sense of what Greene is getting at. The ‘distance’–between one point of emotional and imaginative maturity and another–that a childhood book helps you traverse is perhaps far greater than any distance a book read in adulthood could take you. The books we encounter in childhood find us having barely commenced many mental journeys; the first steps they help us take are often gigantic and accompanied by a kind of thrill we only rarely encounter in adult life. We are not yet jaded, not yet cynical. All of this is implied in the claims Greene makes above.


The reason a book read in adult life can have the ‘deep influence’ Greene speaks of is related to the childhood reading experience. We might grow up with our selves developed unevenly; we might find ourselves possessed of great accomplishment and maturity in one domain and yet utterly lacking in sophistication and edification in another. Our formative years might have been biased by particular sorts of influences that drove out yet others; we are, so to speak, only well done on one side. Pockets of callow superficiality lurk within us.


At these moments, then, thanks to fortuitous discovery, we are set up for an encounter like our childhood reading: we find ourselves experiencing an epiphany of sorts; the same giddiness that so thrilled us as children comes upon us again. And we speak of our newly found intellectual companion in the same breathless fashion as we did of our teenage crushes.


So, I think focusing on chronological age is a mistake. As long as–thanks to previous immaturity–we bear the potential for radical growth within us, we will continue to experience these ‘books of divination.’ Adults can read books like children.



February 15, 2014

Facebook and Writers’ Status Messages

My last post on Facebook led me to think a bit more about its–current and possible–integration into our lives, especially those conducted online.


As ‘net users are by now aware, almost any site you visit on the ‘net features a Facebook button so that you can indicate whether you ‘Like’ the page and thus share it with your ‘Friends.’ Of course, in so doing, you also leave a digital trail of sorts, indicating what you have read, what music you have listened to, which videos you have viewed, which jokes you found funny, and so on. As Eben Moglen put it rather memorably at a talk at NYU a few years ago (and I quote from memory):


In the old days, the East German Stasi used to have to follow people, bug them, intimidate their friends to find out what they read, what they got up to in their spare time. Now, we have ‘Like’ buttons that do the same for us.


The surveillance, the generation of data detailing our habits, our inclinations, our predilections, is indeed quite efficient; it is made all the more so by having outsourced it to those being surveilled, by dint of the provision of simple tools for doing so.


I personally do not get very creeped out by the notion of hitting ‘Like’ on an article that I enjoyed reading–though, struck by Moglen’s remark, I have not done so even once since returning to Facebook in 2010. I do, however, find it very creepy that Netflix asks me if I would like to share my movie-viewing preferences with my friends on Facebook; that seems excessively invasive.


In any case, I do not think the limits of this kind of ‘integration’ of Facebook with the information we consume and the software we use have yet been reached.


Here is at least one more possible avenue for Facebook’s designers to consider. Many ‘net users enjoy ‘always-on’ connections. Thus, even when they are not actively using an Internet application, but are instead working in, say, a word processor or a spreadsheet, they are still connected to the ‘net. In the not so distant future, these programs could be designed–by close cooperation between Facebook and the software vendor in question–to supply information about our usage of these applications to our ‘Friends.’ On a real-time basis.


Thus, for instance, when I opened a file in my word processor, my ‘Friends’ would be so informed; they would then learn how long I had continued editing, how many breaks I took (and of course, if those breaks were spent online, they would be told which pages I had opened, and how long I had spent there), and so on. Our software would come with this feature turned on; you would have to opt out or customize your sharing.
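
To make the speculation concrete, here is a minimal sketch, in Python, of the kind of event hook such an integration would require. Every name in it is my invention; no such Facebook API exists, so a printing stand-in plays the part of the ‘Friends’ feed.

import time

class ConsoleFeed:
    """Stand-in for the (hypothetical) social feed; it just prints each status."""
    def post(self, message):
        print("STATUS:", message)

class StatusReporter:
    """Hypothetical auto-sharer: reports word-processor events to one's 'Friends'."""
    def __init__(self, feed, opted_out=False):
        self.feed = feed              # the sharing destination
        self.opted_out = opted_out    # sharing is on by default; users must opt out
        self.session_start = None

    def on_file_opened(self, filename):
        if self.opted_out:
            return
        self.session_start = time.time()
        self.feed.post(f"Opened '{filename}' for editing.")

    def on_file_closed(self, filename, words_written):
        if self.opted_out or self.session_start is None:
            return
        minutes = (time.time() - self.session_start) / 60
        self.feed.post(f"Closed '{filename}' after {minutes:.0f} minutes; "
                       f"{words_written} words written.")

# Usage: the word processor itself would invoke these hooks.
reporter = StatusReporter(ConsoleFeed())
reporter.on_file_opened("first-draft.doc")
reporter.on_file_closed("first-draft.doc", words_written=500)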


This way, all those status messages we are often treated to on Facebook: ‘Hooray, first draft complete!’ or ‘Finally got five hundred words written today’ or ‘I just can’t seem to get anything written today’ could be automated. Extremely convenient, don’t you think? Examples like this–for other kinds of applications–can be readily supplied, I’m sure.



February 14, 2014

The Curious Irony of Procrastination

Do writers procrastinate more than other people? I wouldn’t know for sure, if only because I have no idea how much procrastination counts as the norm, or what depths practitioners of other trades sink to. But I procrastinate a great deal. (Thank you for indulging me in my description of myself as a ‘writer’; if you prefer, I could just use ‘blogger.’) At any given moment, there are many, many tasks I can think of–not all of them writerly–that I intend to get around to any hour, day, week, month, year or life now. (I procrastinate on this blog too; I’ve promised to write follow-ups to many posts and almost never get around to doing so.) This endless postponement is a source of much anxiety and dread. Which, of course, is procrastination’s central–and justifiably famous–irony.


You procrastinate because you seek relief from anxiety, because you dread encounters with the uncertainty, frustration, and intractability you sense in the tasks that remain undone. But the deferment you seek relief in becomes a source of those very sensations you sought to avoid. The affliction feared and the putative relief provider are one and the same. It is a miserable existence to suffer so.


One of my longest-running procrastinations is close to the two-year mark now; this period has been particularly memorable–in all the wrong ways–because it has been marked by a daily ritual that consists of me saying ‘Tomorrow, I’ll start.’ (I normally go through this in the evening or late at night.) And on the day after, I wake up, decide to procrastinate again, and reassure myself that tomorrow is the day it will happen. As has been noted in the context of quitting vices, one of the reasons we persist in our habits is that we are able to convince ourselves that quitting, getting rid of the old habit, is easy. So we persist, indulging ourselves once more and reassuring ourselves of our imagined success in breaking out of the habit whenever we finally decide we are ready to do so. (But habits are habits for a reason: because they are deeply ingrained, because we practice them so, because we have made them near-instinctual parts of ourselves. And that is why, of course, new habits are hard to form, and old habits are hard to break.)


Similarly for procrastination; we continue to put off for the morrow because we imagine that when the morrow rolls around, we will be able to easily not put off, to get down to the business at hand. All that lets us do, of course, is continue to procrastinate today. The only thing put off till the morrow is the repetition of the same decision as made today–the decision to defer yet again.


Now, if, as Aristotle said, we are what we repeatedly do, I’m a procrastinator; I’m an irrational wallower in anxiety, condemning myself to a long-term affliction for fear of a short-lived one. That is not a flattering description to entertain of oneself, but it is an apt one, given my history and my actions.

