Lily Salter's Blog, page 964

November 1, 2015

“My little war-porn addiction”: David Shields on how the New York Times made the Iraq and Afghanistan wars look “really cool, really glamorous, really bloodless”

The idiosyncratic Seattle-based writer David Shields was startled as he followed the wars in Afghanistan and Iraq through the New York Times’ visual representation. “At least once a week I would be enchanted and infuriated by these images, and I wanted to understand why,” he writes in the introduction to his new book. So he spent months going over every front-page war photo since the wars began — more than 1,000 images. The result of his inquiry is “War is Beautiful: The New York Times Pictorial Guide to the Glamour of Armed Conflict.” It’s a twisted kind of coffee-table book: Most of “War is Beautiful” reproduces the newspaper’s images, one per page, in between brief pieces by Shields and art critic Dave Hickey, who argues that “combat photographs today are so profoundly touched in the process of bringing them out, that they amount to corporate folk art … They are no longer ‘lifelike,’ but rather ‘picture-like.’” Shields also sets up the images with brief quotations. For the most part, though, he lets the images speak for themselves. Salon asked Shields — whose acclaimed book, “Reality Hunger,” is a collage made up mostly of other people’s quotes — to show his hand a bit. The interview has been lightly edited for clarity.

You talked, in your introduction, about feeling a mixture of “rapture, bafflement and repulsion” when you looked at these photographs. How did those emotions all come together for you? What provoked that strange brew?

You know, as I say in the front of the book, I’ve subscribed to the paper for decades, over the last 20 years … I find myself eagerly awaiting the paper, my little war-porn addiction, and that just seemed to me fundamentally problematic, fundamentally wrong. What was the problem in me, in my head, in my psyche? Was there a problem in the paper, was it in the exchange between the paper and me? And I thought of that law, you know — if you see something, say something. I was seeing something; my eye was noticing this overwhelming pattern of impossibly beautiful photographs that conveyed not the war itself but a war that is a kind of heck — you know, according to the Times, war is heck. What was going on? Was I under-reading the pictures, misreading the pictures? Demanding the pictures be more gritty than they can possibly be? I wanted to investigate that question. I was open to course correction, to being proven wrong, and I hoped that I would be proven wrong; not many American writers are printing and publishing a full book that is a takedown of the New York Times. It tends not to be the wisest career move. So I was hoping, frankly, that I’d be course-corrected; I’d find hundreds or dozens of photographs to prove me wrong. But I ended up, with the help of photo assistants, looking at 4,500 cover photographs from October ’97 to essentially now. I found 1,000 war photos, 700 that fit my criterion of, you might say, wrongly beautiful photographs. Virtually no photograph conveyed the horror of war. For the paper of record, the first draft of history, all the news that’s fit to print, it seems to me that that was problematic and worth pointing out. I think the pictures are ravishingly beautiful; they’re just Gerhard Richter-level, Jackson Pollock-level, Mark Rothko-level gorgeousness. But, first of all, to me they seem problematic representations of the Afghanistan and Iraq wars. Secondly, they were almost never balanced by [anything] that conveyed anything other than: war’s really cool, really glamorous, really bloodless.
They feel to me kind of scarily like military recruitment posters. So, so beautiful. I don’t know if you have the book there in front of you, but the picture on page 58 has been cropped from a much larger photograph and has a sort of Richter-like beauty. The picture on page 50 looks like an Andrew Wyeth. I think your question was how did that exact cocktail of ravishment and fury and resentment come about. I would be hard-pressed to pull them all apart; I found myself loving these photographs, then coming to realize they were oddly tied to war. This was a series of wars the Times was ostensibly covering but was, to me, problematically promoting, and if this had been, say, USA Today or the Wall Street Journal or the New York Post, it would make sense. But this is a paper that, you know, is thought to be neutral, the Fourth Estate, even center-left, and these pictures, to me, just did not fit what strikes me as responsible journalism. They’re essentially jingoistic flag-waving. It’s essentially war cheerleading under the guise of photojournalism. That’s how the book came to be.

Has that quality you described been part of American war photography all the way back? Do we see images like this from Vietnam or World War I or II, or is this a different kind of aesthetic that’s come in with 21st-century wars?

It is. I would definitely say it has changed, and I think that Dave Hickey’s afterword in the back of the book explains it awfully well. From Mathew Brady and the Civil War through, say, Robert Capa in World War II to people like Malcolm Browne and Tim Page in Vietnam, there was, it seems to me, a kind of war-is-hell photography where the photographer is actually shooting from life. I think what Hickey argues, pretty persuasively, in the afterword is that swipe photography, which in a way gave rise to the abstract expressionism of Rothko, Pollock, Diebenkorn, Richter, has completely tilted the field, so that now, to me, you have photographers who, first of all, have been embedded by the military. They’re sending pics digitally, but above all the photo editors and photographers, in my view and Hickey’s view, have gone to school on neo-realist painting, on photorealist painting, on pop art, on abstract expressionism. And picture after picture is a kind of footnote to art-history tropes. As Hickey said, the pictures are no longer being taken from life observed; they’re being sort of pulled from art-history archives. To me, there’s almost no lived life, there’s almost no grit, there’s almost no blood, there’s almost nothing of the horror of war. Instead there’s a kind of beautiful but extremely empty composition. And so, to me, there’s a whole series of factors — the rise of hyper-digital culture, the decline of print journalism, embedded journalists. But above all, photographers and photo editors, in my view, are less observing the war in front of them — which is probably a complete bloody mess — and instead trying to produce photographs that, first of all, are beautiful footnotes to, say, abstract expressionism. And secondarily, perhaps more importantly, are palatable to the American, and specifically the New York Times-reading, public. There’s a kind of compromise going on between us as readers, the Times as publisher and the U.S. government. I think we, as viewers, as readers, as subscribers, are complicit as well. These are staggeringly beautiful photographs, and I find myself complicit for having gobbled them up for too many years.

Right, if we enjoy the photographs, if we respond to their aestheticism, we’re complicit as well.
It took me way too long. It took me from ’97 to, say, 2007, 2008, 2010 to finally say I’ve got to actually study these pictures. I wish I had produced this book 10 years ago, whereas now it’s slightly after the fact. I’m sort of hoping people will learn, including myself: This is how we are lambs that are bred to slaughter; this is how we buy war propaganda. I think it’s an open question. It’s not like the Times was saying … I don’t know very much about the inner workings of the Times; it’s not like a kind of socio-political text. Here are 64 photographs that are disarmingly beautiful, like, “What are they doing on the front page of the New York Times as war photos?” As to whether the Times was fully conscious of what it was doing, or unconscious — whether it’s the result of hyper-digital culture, of pics being sent digitally from the battlefield, or of the Times essentially currying favor with the U.S. government — it’s all those things combined into a very problematic cocktail. That’s the book’s argument.

You have a short essay in the beginning, a shortish essay by Dave Hickey at the end; otherwise it’s all photographs by other people. And the book of yours that a lot of people know, “Reality Hunger,” is primarily made up of quotes from other people. I wonder if your sense of the traditional literary essay and nonfiction book is exhausted, or at least exhausted for you. You’re using these alternate ways of making an argument or building an essay.

You trace those things nicely. It seems a little grandiose of me to track the links between this and my previous book, or books, in a way that’s better left for someone else to do. But I clearly see the connection — in a way, I feel as if what these photographs suffer from is precisely a lack of reality hunger. They seem to have a kind of style hunger. There’s that wonderful line, I forget who said it: “The enemy of art is taste.” And these pictures are just so tasteful, they’re very tasteful; you can look at the pictures while you have your morning coffee and croissant. They make you feel like, “Oh yeah! I’ve become educated about war.” I am exhausted by traditional memoir. I am exhausted by the architecture of the conventional novel. I’m really interested in the new nonfiction. I think the hyper-digital culture has changed our brains in ways we cannot begin to fathom. So, whether it’s a book like, say, “That Thing You Do With Your Mouth” — it came out in June with McSweeney’s — or another book I did, “Life is Short,” in April, or another book I did with my former student, “I Think You're Totally Wrong: A Quarrel,” that came out in January — here is a whole series of books I’ve done trying to collapse the distinction between fiction and nonfiction, to overturn the laws regarding appropriation, and sort of suggest new ways writers can turbo-charge contemporary nonfiction. I just can’t read the way other people can, these tediously elaborated books. I could’ve written a 300-page critique explaining how every photograph is related … I just think we’re smarter than that. I really like books that don’t patronize the reader but rather give the reader tools to work with, and then let the reader make the connections you help them make. I’m very fond of this phrase: “Collage is not a refuge for the compositionally disabled.” If you put together the pieces in a really powerful way, I think you’ll let a thousand discrepancies bloom.
I’m very interested in a book as IED; I think this book is an improvised explosive device, and I just want to generate conversation. I want to create trouble, and I want to prick our consciousness, including my own. I think a noble definition of art is to create trouble. Flaubert said the value of a work of art can be measured by the harm that is spoken of it. And I think a lot of my critique of the Times is that those pictures show no harm. And I am arguing for a photojournalism, a literary art, and a visual art that faces human harm more straightforwardly. War is hell — not war is heck — to repeat that particular trope.

Published on November 01, 2015 15:30

How the GOP almost forced a Social Security disaster: Everything you need to know about their disgraceful hostage-taking

The comprehensive budget deal that passed Congress last week involved the temporary release of three hostages: One hostage was the debt limit, which will not need to be raised before March 2017. A second hostage was the federal government, which might not, depending on who you ask, experience another shutdown for two fiscal years. The third hostage was Social Security. Like the hostage-taking over the debt limit and funding the government, the Social Security hostage involves something that should be routine. But these days, with this Congress, nothing is routine.

To understand the Social Security hostage-taking, it is important to understand a technical aspect of Social Security with which no one but experts should have to be concerned. When workers have Social Security contributions deducted from their wages, those deducted monies are premiums for Social Security’s insurance against the loss of wages in the event of death, disability or old age. Those deducted funds are sent by their employers to the U.S. Treasury Department. Unbeknownst to most people, the money is then divided between two separate trust funds. This is a quirk of history. Two parts of Social Security’s wage insurance -- protection against the loss of wages in the event of death and old age -- were enacted in the 1930s. That is when the Old Age and Survivors Insurance Trust Fund (“OASI”) was established. The designers of Social Security wanted disability insurance enacted in the 1930s as well, but it didn’t happen until 1956. When it did, its own trust fund (“DI”) was established.

Nothing tells those making Social Security contributions what percentage goes to DI or OASI -- nor should it. The benefits are intertwined and seamless, all generated from the same benefit formula. Policymakers, experts, and analysts have generally treated the funds as combined, because of how interconnected all parts of Social Security are. Because all aspects of Social Security are so intertwined, the annual Trustees Reports provide projections as if the two trust funds were combined, and these are the projections most cited and most useful. As just one example of the interconnectedness, the funds that cover the costs of disability benefits are drawn from DI only until the disabled worker receiving them reaches full retirement age, at which point the payments are drawn from OASI. The amount is the same; the beneficiary likely is unaware of the change, since it has no impact whatsoever, but, by law, the monies can only be drawn from the specified fund.

Not surprisingly, from time to time the percentage of Social Security contributions going to each of the two funds must be adjusted to keep both funds in balance with each other. For something so routine, something having no impact on what workers pay or what benefits they receive, you would think the Managing Trustee, who happens to be the Secretary of the Treasury, would have the authority simply to rebalance the funds. What would make even more sense is to simply combine the two funds into one. Many experts have recommended combining them, including the 1979 Social Security Advisory Council, which unanimously recommended a unitary trust fund, arguing that requiring legislative reallocations “is cumbersome and can cause needless public worry about the financial integrity of the Social Security system.” But there remain two separate trust funds, and the Managing Trustee does not have the authority to engage in simple rebalancing. Instead, it takes an act of Congress.
The goal of the hostage-takers

When Congress has been responsible and done its job, this has been no problem. Congress has reallocated the percentage of Social Security contributions going to the two funds 11 times in the past. About half the time, it increased the share going to OASI, and about half the time, it increased the share for DI. Indeed, because of a projected shortfall in OASI in the early 1980s, Congress over-allocated from DI back then and has never restored that over-allocation -- and the fund was projected to run dry next year, at the height of election season. This was too valuable a hostage to let go with no ransom.

Unlike the other hostages, involving funding the government and raising the debt limit, the Social Security hostage directly affects some of our most vulnerable fellow Americans. Had Congress not acted before the end of 2016, benefits would have been cut at that point by 20 percent. The commissioner of Social Security said recently that if Congress did not act, the automatic cuts would be a “death sentence” for those beneficiaries. The average disabled-worker benefit is $1,165 a month, or about $39 per day. These modest benefits keep millions of men, women, and children from deep poverty, even homelessness, and serve as the sole or main source of income for about 80 percent of beneficiaries. Without these benefits, about one out of two beneficiaries would live in poverty; even with benefits, most have low incomes. And one reality bears pointing out here: These beneficiaries could be any of us. Even the healthiest among us can find ourselves in a disabling car accident or stricken with a life-threatening, disabling illness.

The Republican hostage-takers assert, against all evidence, that the disability insurance part of Social Security is broken, overrun with fraudulent payments. But the Inspector General of the Social Security Administration found that only one-third of one percent of the cases his office sampled involved fraud. Despite their protestations to the contrary, those who threatened to hold Social Security hostage were not trying to improve Social Security. They are seeking to cut it while avoiding political accountability. They want to dismantle it, brick by brick. This is not new.

The battle against Social Security

In 2009, a congressional effort to hold the debt limit hostage to force a fast-track deficit commission was described by then-chairman of the Senate Finance Committee Max Baucus (D-MT) as what it would have been: “a Social Security-cutting machine.” That effort was defeated, but President Obama set up the commission by executive order. Not surprisingly, that commission, chaired by two opponents of Social Security, former Senator Alan Simpson (R-WY) and businessman Erskine Bowles, recommended deep cuts to Social Security along with a poison pill that would have slowly and inexorably destroyed it. Fortunately, the commission did not get the requisite number of votes, and so Congress never had to address its recommendations in an up-or-down vote.

The failure of Bowles-Simpson was followed by once more using the need to raise the debt limit to force the formation of the so-called Supercommittee, which had essentially the same charge as the Bowles-Simpson committee, but which this time had a stick: If the Supercommittee failed, there would be mandatory sequestration so draconian that it would force Congress to reach agreement. The Supercommittee failed to reach an agreement, and we are now stuck with sequestration.
The sequestration caps were loosened a bit for the next two years as part of last week’s deal. But sequestration should be repealed outright; it should not be traded for the dismantling of Social Security, as Republican Leader Mitch McConnell and others occasionally, in candid moments, suggest.

The re-allocation in this deal -- the release of the hostage -- should have put both Social Security funds on equal footing, so that all benefits could be paid in full and on time through 2034. Instead, the re-allocation is designed to set up a new false crisis and a new hostage down the road, in 2022 -- but that is certainly better than having the must-pass legislation acted on in next year’s lame-duck session, after the election, which is what would have happened without this deal. The end of 2016, the lame-duck session just after the 2016 election, probably in December, when most normal people are distracted by the holidays, is the most dangerous time for must-pass legislation. Some who are voting will not be returning to Congress, either because they are retiring or have just been defeated. The end of 2016 is as far away from the next election as possible. Everyone is scrambling to finish and get home for the holidays, and the American people are distracted. It is the perfect time for those who want to cut Social Security to do their undemocratic dirty work.

Fortunately, thanks to the comprehensive deal just enacted, that bullet has been dodged. The ransom price included a diversion of Social Security resources toward rooting out virtually nonexistent fraud. Those provisions will likely require some workers with disabilities to wait longer to receive their earned benefits. That is wrong. But no benefits were cut, no eligibility rules were changed, and gambling on a better outcome in the lame duck would have been extremely risky.

Time to expand Social Security

Now that the false crisis is behind us and the hostage is free, we should expand Social Security’s modest benefits and require the wealthiest among us finally to pay their fair share. That is what most Americans want, and that is the right policy, given the nation’s looming retirement income crisis and rising income inequality. Over a half dozen bills have been introduced in Congress this year alone to do just that. Senator Bernie Sanders (I-VT) has a bill, for example, that not only would expand benefits substantially but would assure that all benefits, including the expanded disability insurance benefits, could be paid in full and on time for the next half century. Rep. John Larson (D-CT) has a bill that expands benefits while ensuring that all benefits, including disability insurance benefits, could be paid for the next three-quarters of a century.

Social Security should never be part of a comprehensive deal, as a matter of principle. It should certainly not be part of any deal whose goal is to reduce the deficit or raise the debt limit, since Social Security doesn’t add a penny to the federal debt. Social Security legislation should go through regular order, in the light of day. If that were done, Social Security would be expanded. No hostage-taking, no ransom, no behind-the-scenes deals released and voted on in the wee hours of the morning. Just straightforward, old-fashioned legislating. That is what has brought us Social Security, among the most successful domestic programs in the nation’s history.
That is what will bring us expansion, if our elected leaders do their job and follow the will of the people they are elected to represent.

Published on November 01, 2015 14:30

This is what a demagogue sounds like: The patriotic appeal which can’t be trusted

It’s the most frequently invoked phrase in American political life. “The American people have a right to know ...” “The American people are asking where their hard-earned tax dollars are going.” “The American people are waiting for the Secretary of State to come clean about what she knew.” So, when we get right down to it, what, if anything, does this greatly overused phrase mean? What do the American people actually want to know?

Let’s begin with something concrete. In “founder-speak” (referring colloquially to our national beginnings), “the American people” was a loose term to describe the formation of public opinion. This was a moment in history when “sovereign people,” a largely theoretical construct, contested the crowned sovereigns who then ruled most of the planet. It was not yet the tool of the average demagogue (say, Ted Cruz) or the average attack ad (“The American people have had enough of….”). Let’s see if we can travel from then to now, and make better sense of the history of a phrase.

In July 1775, not long after the battles of Lexington, Concord and Bunker Hill, the Second Continental Congress petitioned King George III. In the interest of reconciling the United Colonies with the parent state, even after the occasion of much bloodshed, the earnest petitioners bade him (who was yet their king) to make “such arrangements as your Majesty’s wisdom can form for collecting the united sense of your American people.” If he ceased thinking of them as rebels, he would soon understand that their very reasonable desire was a return to the mild government they had previously enjoyed under their sovereign. The patient members of Congress who signed the petition presumed that a sympathetic connection could yet be found if a reliably objective representative of the crown somehow took the pulse of these distant subjects en masse.

In Federalist 37, James Madison wrote: “Stability in government is essential to national character and to the advantages annexed to it, as well as to that repose and confidence in the minds of the people, which are among the chief blessings of civil society.” As the Constitution was being framed, “national character” doubled for “the American people,” and was typically linked to the informed (moral as well as intellectual) choices they made in identifying the public interest with those individuals they considered worthy of office. National character was meant as an expression of the unity of interest and rights in America; it bespoke a desire to be collectively honest, introspective and visible to the outside world as a character-driven republic. Even as we acknowledge all partisan dysfunction, this ideal about who we are has never left our vocabulary.

The airwaves draw us to all manner of “Breaking News”: verbal gaffes, inaccurate comparisons to Hitler, hints of scandal. Campaign chatter threatens to burst the collective eardrum. Amid this unfiltered noise, the ultimate authority, “the American people,” lies in reserve, and we expect to hear it invoked as reassurance that a comprehensive viewpoint exists, and that we matter.

Readers might be surprised to learn that in the early years of the republic, “the American people” was not used nearly so much as it is now, nor as all-embracingly. Each Fourth of July, newspapers dutifully took note of the general spirit of camaraderie that “distinguishes the American people as one family.” Beyond national celebration, politicians did not rely on this terminology.
That is not to say campaign season was ever truly a time of rational discourse, or that presidential wannabes were uniformly acknowledged as the best men for the job. In some ways, the 1848 election was like 2016 in the questions being asked about candidate qualifications and concerns aired with respect to voters’ suggestible minds. In that election year, a newspaper in east Texas — which state had only joined the Union three years earlier — took issue with Whig candidate Zachary Taylor, a victorious general who rode to fame in Mexico but had an ambiguous political identity. “The American people are notorious for their good common sense, and practicality,” went the editorial, “yet they are as easily humbugged as any people on the face of the earth.” To see General Taylor elected “would prove that the people were liable to be humbugged” — in other words, made the victims of a hoax, their political innocence imposed upon. Did the presidential hopeful even understand “the fundamental principles, and complicated machinery by which this great Union is held together?” The editorialist delivered a stern warning. No one actually knew whether Taylor’s success in the late war was due to “good Generalship, the insignificance of his enemy, the chivalry of his own troops, the act of Providence, or a combination of these.” The desire to believe the best about a supposed savior had overtaken reasoned evidence. A political unknown’s “influence over the American people” was, the columnist insisted, “dangerous to their liberties, and degrading to their character as intelligent people.” The year 1848 is not remote, in terms of the perils inherent in democracy. The Texas critic was an early incarnation of today’s questioners who wonder whether a real estate mogul or a neurosurgeon with no experience representing the public interest deserves to be seriously considered for high office. The American people continue to be “humbugged,” only now it comes courtesy of an instantaneous, visually aggressive format. As many have observed, we are glued to our screens, and the message is never neutral. Part of the reason why “the American people” sounds so good dates to the latter part of the nineteenth century, when the phrase was often invoked to describe a superior race. “We are emphatically a people of nerves,” declared the American Magazine in 1888. “Visitors from other lands are astonished at the fierce activity that pervades our most insignificant actions.” The energy embodied in the American character was due in some measure to the “American climate, which teaches in a vigorous and obtrusive manner that quiet and rest do not form part of natural law in this country.” We remained young and wakeful. “Scarcely out of swaddling clothes,” ran this generally upbeat article, “we are called upon to stand squarely in competition with a thousand years of past, and show the old fogies a new thing or two.” The country was in the midst of an immigrant deluge — the Statue of Liberty had just been dedicated. The writer predicted a future of “diluted” Americanness, the national essence weakened through the infusion of questionable foreign blood. Here we see shades of the nativist critique launched by Donald Trump, warning energetic Americans that they have to do something about Mexico’s “worst people” and urging voters to steer clear of “low energy” presidential candidates. The American people were under attack. We hear the same catch-all out of politicians’ mouths day after day after day. 
Take Ted Cruz in Congress at the end of September: “There is a reason the American people are fed up with Washington. There is a reason the American people are frustrated. The frustration is not simply mild or passing or ephemeral. It is volcanic. Over and over again, the American people go to the ballot box, over and over again, the American people rise up and say the direction we’re going doesn’t make sense. We want change … and yet, nothing changes in Washington.” Almost every sentence is dominated by the same subject: the American people. So, who are “We the People”? It depends, of course, on whom a politician sees as likeminded, or poised to benefit most from the policies he (or occasionally she) espouses. Generation by generation, the popular, ill-defined term has been cheapened, so that it’s now taken as a throwaway line — except that we are supposed to agree when poll-tested. But is that precisely true? It is arguable that its meaning became clearer in 2012, when GOP nominee Mitt Romney was widely seen as the candidate of the one percent, a greed-inspired corporatist oblivious to the life of the average voter. If a president is meant to intuit the just needs and wants of “the American people,” then candidate Romney failed to wear “the people’s” mantle comfortably. President Obama has tended to use the phrase where it seems appropriate, as in promoting affordable healthcare, e.g., “I believe it’s time to give the American people more control over their health care and their health insurance.” He also likes to use the term to describe the essential warmth and neighborliness and resilience of his countrymen. In his First Inaugural Address, he said: “For as much as government can do and must do, it is ultimately the faith and determination of the American people upon which this nation relies. It is the kindness to take in a stranger when the levees break, the selflessness of workers who would rather cut their hours than see a friend lose their job ...” But Rick Perry of Texas had the same take when he announced his abortive candidacy last June: “The spirit of compassion demonstrated by Texans is alive all across America today. While we have experienced a deficit in leadership, among the American people there is a surplus of spirit.” President Obama tends to avoid invoking “the American people” to inflame passions when he is defining himself in opposition to Republicans, but he can be a bit snarky, too. In the first debate with Romney, the one in which Obama fared so poorly, the Republican held forth: “The American people don’t want Medicare, don’t want Obamacare.” Confronting Romney’s call for reliance on private markets, and doubting the content of his “secret plan” to replace Obamacare, the incumbent said: “I think the American people have to ask themselves, is the reason that Governor Romney is keeping all these plans to replace secret because they’re too good?” Snarky, yes, but he legitimately defines the American people as those who must make up their minds before they vote. He also uses the hallowed term as a synonym for the broad middle class: “I also promised that I’d fight every single day on behalf of the American people, the middle class, and all those who were striving to get into the middle class.” “The American people,” in recent Democratic parlance, have most often been job seekers. Running in 2015, Hillary Clinton has made a curious coupling in promoting economic fairness as her theme. 
It is: “President Obama and the American people’s hard work” that “pulled us back from the brink of depression.” Now let’s compare. Announcing for the presidency at his old high school, Chris Christie of New Jersey proclaimed that government entitlement programs were “lying and stealing from the American people.” Bobby Jindal of Louisiana, in announcing his candidacy, also went negative with the phrase: “It’s time to level with the American people. This president, and his apprentice-in-waiting Hillary Clinton, are leading America down the path to destruction.” Marco Rubio, on the occasion of his announcement, intimated — devoid of specifics — that once Obamacare was history and the tax code reformed, “the American people will create millions of better-paying modern jobs.” We know political rhetoric is, almost by design, evasive. When they raise the term “the American people,” newsmakers have, for well over a century now, credited them with “dispassionate judgment.” Those who draw on the familiar construct operate on the pretense that “the people” have a marked, well-coordinated personality. But that’s much harder to accept today — America is simply too heterogeneous. Romney’s tone-deafness was not strictly a function of his extraordinary net worth; it was a reminder that one who possesses great wealth must credibly prove accessibility. The billionaire Trump, with his petulance, his childish antics, is still less remote from average folks than Romney. He thrives on crowds — all he cares about is that they cheer him. They may or may not be a good cross-section of the electorate, but they have as much claim to the designation “American people” as any more learned, respectful, policy-wonkish audience. This is our problem. “The American people” are always presumed correct in their allegiances, but, in truth, they have never really been a reliable barometer of good government. Let’s not forget that they are regularly painted as victims, dupes — and have been ever since they were called upon to oust, at the ballot box, the “grasping and unscrupulous railway corporations” that were buying state legislatures in the early 1870s, ever since they ostensibly fell prey to the hyped-up warrior Zachary Taylor. It is odd (and a little frightening) that those in whom the protection of the principles of our republic is lodged — voters — are also adjudged the most susceptible members of the population. We have the Internet, but “the American people” do not reside there either. Political blogs may draw fanfaronade, but few sites truly constitute a forum where ideas are calmly and usefully debated. Similarly, one has to wonder about the GOP itself, a political party that claims to speak for the American people and yet exhibits little interest in finding consensus — “consensus” being the founders’ ideal formula for expression of the people’s sovereignty. It’s an all-too-obvious dilemma: We can’t police our elected representatives so that they think twice before claiming knowledge of what that fictive “We the People” think, and it’s unlikely that there are a sufficient number of discerning voters able to separate the personal ambition of a candidate from their own best interests. Minimally, every candidate for public office should be asked by someone at every campaign event: “Why should I trust your judgment?” The answer should always be specifically focused. And it should not contain the words, “the American people.”

Published on November 01, 2015 13:00

Climate change claims a new victim: The slow demise of California’s golden trout

Driving through the Owens Valley, a scenic 75-mile-long, U-shaped cul-de-sac on the east side of the Sierras, confirmed extremely dry conditions. California’s prolonged drought wasn’t just visible in the low stream flows, charred hillsides and snowless Sierra Nevada Mountains; I could hear it, too. In the small town of Lone Pine, I overheard a man say: “I guess we won’t get to shower until next winter.” The water situation for local fish isn’t much better. For my journey – a day’s drive followed by four days of backpacking – I wanted to see firsthand how one particular “local fish” was doing. At the Whitney Ranger Station, I laid out my trip route into the Golden Trout Wilderness. “You should have the whole place to yourself,” Rene Marshall, the Forest Service ranger, told me. She mentioned water would be available in both of my overnight stops – Big Whitney and Tunnel Meadows. Wilderness permit in hand, I drove up to Horseshoe Meadow, packed my camping and fishing gear, and hiked out the next morning over Trail Pass. With the single exception of a packer named Billy and his five-mule pack train, Rene’s prediction was right.

The Canary in the Creek

The freezing mornings aside, camping in the Golden Trout Wilderness in early fall had its rewards. Listening to a coyote sing just after sunset was one. But the high point came after a grueling hike to reach the South Fork of the Kern River – catching and photographing a California golden trout. The golden trout is commonly called “the most beautiful trout in the world.” But in the new era of climate change, golden trout are more than the beautiful state fish of California. These native trout are ecological sentinels. A changing climate poses new risks for California golden trout and intensifies existing stressors from a long history of cattle grazing in their range. New risks resulting from more intense droughts, wildfires, and smaller snowpack will test the resilience of forest and freshwater ecosystems throughout the Sierra. The heavy snowfall on the Sierra Nevada Mountains typically serves as a natural water storage system. But this year, the snowpack hit its lowest level in 500 years. Reduced snowpack will shift stream flow and water temperature during the critical spring and summer months when golden trout spawn and do most of their feeding. Shouldering a 35-pound pack over a 10,000-foot pass, I quickly came to learn that California golden trout swim in rarefied water. They live in narrow streams at around 9,000 feet where flows are thin. The tiny headwater streams of the South Fork Kern River and Golden Trout Creek on the Kern Plateau have been their only home for 70,000 years. As cold-blooded creatures, these trout completely depend on water to regulate their body temperatures. Water temperatures in the range of 3°C to 20°C are ideal for them to breathe, feed, metabolize food, evade predators, and spawn. When stream temperatures exceed 20°C, they get stressed. Long exposure to temperatures above that proves lethal. Here’s the problem – their tiny streams are warming in the summer.

The Cattle Conundrum

As a research scientist for the US Forest Service, Dr. Kathleen Matthews has been studying California golden trout for over 20 years. Since 2008 she has been monitoring the temperatures of the streams they live in. The 90 temperature probes she stationed throughout three meadow streams reveal a harsh new reality for these trout. 
Over a five-year study, the probes recorded temperatures in two of the streams reaching 26°C, with daily temperatures exceeding 20°C for almost two months of one study year. As water warms, it holds less oxygen, and trout naturally seek out cooler water. But because California golden trout inhabit small headwater streams, there is little cooler water for them to find. Lacking an escape route isn’t the only downside to their home in the High Sierra. Turns out, the streams with the warmest temperatures in Matthews’ study have been trampled over by domestic cattle. Livestock have a long history of grazing within the Golden Trout Wilderness, beginning around 1860. As I hiked the trail along Mulkey Meadows, I saw clear scarring of the meadow, and of course, cow pies strewn everywhere. These are worse than eyesores: overgrazing reduces shade-producing vegetation and breaks down fragile meadow stream banks. What remains are streams without willows to block the sun and a shallow stream channel that turns warmer during the long days of summer. “We haven’t been able to keep golden trout streams in really excellent condition in the presence of cattle grazing,” Matthews says. A healthy golden trout stream, she says, would have “lush streamside vegetation, the water is clear and cool, there is adequate food in the stream, the banks are stable and contain areas of undercutting for shade and hiding, and there are deeper pools.” The Inyo National Forest administers the wilderness area holding California golden trout. Since the late 1980s, the Forest Service, with the help of volunteers from Trout Unlimited, California Trout and local fly fishing clubs, constructed fencing around key golden trout habitat. This fencing removed livestock from badly damaged locations and allowed some recovery of stream habitat. But fences break, and in remote wilderness areas without any road access, maintenance is difficult at best. A far better solution to keeping cattle out of meadow streams came in 2001, when Inyo National Forest decided to remove cattle completely from two grazing allotments for at least 10 years. Witnessing the recovery of a meadow rested for over a decade, Matthews sees the elimination of grazing as a major step in the right direction. The payoff comes with a less compacted meadow, more undercut stream banks and greater shade-producing riparian cover. And ultimately, the cooling that can help offset the effects of climate change.

Seeking a Cool, Shaded Undercut Bank

Climate change is reducing snowpack in the Sierra Nevada due to warmer air temperatures. The snow melts earlier on the Kern Plateau and the high meadows become drier by the end of summer. Climate models predict Sierra Nevada streams and lakes will experience significant warming over the next 100 years. Matthews’ water temperature study sounds a clear warning – streams impacted by overgrazing may not be resilient in the face of future warming. The Forest Service holds the key to reversing the damage caused by over a century of cattle grazing in these meadows. Native California golden trout live completely within their namesake wilderness. Many conservationists, including Matthews, would like the Golden Trout Wilderness managed as a refuge for golden trout (i.e., the freshwater equivalent of a marine preserve). Stunning color aside, these trout are amazing survivors. The California golden trout represents thousands of years of adapting to a changing climate in one of the most extreme environments. 
Helping these ancient fish survive without domestic cattle in their wilderness home is the least we can do.

Published on November 01, 2015 12:00

When Texas fell to the wingnuts: The secret history of the Southern strategy, modern conservatism and the Lone Star State

From the vantage point of most Dallas Republicans in early 1963, Barry Goldwater represented the brightest hope for national conservative Republicanism since the death of Robert Taft in 1953. Annoyance with the New Deal, particularly the National Industrial Recovery Act’s wage and price controls, which interfered with the management of his family’s department store, led to Goldwater’s first foray into politics as a member of the Phoenix city council. A successful candidate for the United States Senate in 1952, Goldwater assailed President Truman’s New Deal. Campaigning for reelection in 1958, he attacked “labor bosses” and unions with even more ferocity than in 1952. The Arizona senator’s views echoed those of many North Texas businessmen. Enclosing a thousand-dollar check, Fort Worth oilman W. A. Moncrief wrote to Goldwater that Walter Reuther, the president of the United Auto Workers, was “the most powerful and dangerous man in America today.” Seven months later, Goldwater made a similar point. Reuther, he said, was “a more dangerous menace than the sputniks or anything that the Russians might do.” Goldwater’s 1960 book, Conscience of a Conservative, ghostwritten by Brent Bozell, had a powerful impact on many Dallas Republicans, including Catherine Colgan. “Many of us were very impressed with Barry Goldwater,” she recalled. His “line of thinking” and “personal values” “made a lot of sense to us.” Goldwater’s message and worldview also inspired numerous Democrats, many of whom attended “resignation rallies,” where they renounced their old party affiliation and declared their allegiance to the Republican Party. In Texas the “Goldwater phenomenon” originated in Dallas County, where GOP leaders like Peter O’Donnell, John Tower, and Harry and Rita Bass galvanized the drive to elect the Arizona Republican. In 1960 and 1961, Goldwater had stumped in Texas for Tower, whose own book, A Program for Conservatives, while somewhat academic, contained many of the same themes and positions as Goldwater’s book. In July 1963 Harry Bass said, “We’re working now toward the goal of replacing one-term Governor Connally and one-term President Kennedy with well-qualified fiscally responsible men who will be, of course, Republicans.” Bass epitomized the optimism that many Dallas conservatives felt: “With Goldwater heading the ticket, we can expect to elect a Republican senator to Ralph Yarborough’s seat, at least three more congressmen, and thirty-five or forty more representatives in Texas.” Goldwater’s most fervent champion, however, was O’Donnell, who injected his infectious enthusiasm and trademark organizational mastery into the movement. Moreover, he contributed significantly to Goldwater’s use of the Southern Strategy and his consequent victory in five Deep South states in 1964. As the newly elected Republican state chairman in 1962, O’Donnell publicly encouraged Goldwater to run for president and secured the Texas Republican state committee’s passage of a measure praising Conscience of a Conservative as “an affirmative philosophy and program.” As early as the fall of 1962, Texas was firmly ensconced in the Goldwater camp. So sure was O’Donnell that Texas, as one campaign sign read, was “wild about Barry,” that he left Dallas in February 1963, accepted the position of chairman of the Draft Goldwater Committee, and organized its headquarters on Connecticut Avenue in Washington, DC. O’Donnell was “the natural choice” and “the ideal man,” according to F. 
Clifton White, the former chairman of the Young Republicans. “I couldn’t see how Barry Goldwater—or any other leading Republican in his right mind—could possibly thumb his nose at Peter O’Donnell.” Rita Bass accepted O’Donnell’s invitation to join him in Washington and became the national canvassing director. With White serving as national director of the committee, O’Donnell and John Grenier began the task of rounding up potential Southern delegates for Goldwater. By the time the committee held its first press conference, O’Donnell and White had already lined up Draft Goldwater chairmen in thirty-three states and had nationwide Republican support from precinct chairmen to national committeewomen. Part of what made Goldwater so appealing to O’Donnell was his early affinity for the Southern Strategy. In the early 1960s, the national Republican Party stood at a crossroads on racial issues. George Hinman, Governor Nelson Rockefeller’s advisor, called race the “Great Republican issue,” one that divided the party. He noted that segregationist discourse was on the rise among party leaders, and “Barry has been falling increasingly for it.” Hinman outlined the reasoning of these new converts: “Their theory is that by becoming more reactionary than even the Southern Democratic Party, the Republican Party can attract Southern conservatives who have been Democrats, and by consolidating them with the conservative strength in the Middle West and Far West, the Republicans can offset the liberalism of the Northeast and finally prevail.” Rockefeller himself bemoaned as “completely incredible” the Southern strategists’ plan of “writing off” the black vote. Hinman was correct in his perception: Following Nixon’s defeat in 1960, Goldwater told Atlanta Republicans that the GOP, despite receiving 36 percent of the African-American vote that year, was “not going to get the Negro vote . . . so we ought to go hunting where the ducks are.” With that, Goldwater headed South in search of some ducks. Goldwater empathized with the South because his own philosophy drew on the argument that the Constitution protected property rights and restricted democracy in order to preserve privilege. From John C. Calhoun and other slave-owning politicians of the antebellum Old South to their conservative disciples of the New South, the region emphasized the role of the Constitution in curbing federal power. Goldwater subscribed to Calhoun’s understanding of the Constitution as a restrictive document that protected property rights and sanctified the power of the states over the federal government. “Our right of property,” Goldwater said, “is probably our most sacred right.” Goldwater’s narrow definition of liberty, which applied mainly to property owners, allowed him to embrace “freedom” even as he ignored the plight of African-Americans at midcentury and railed against the 1964 Civil Rights Act for ineluctably giving rise to “a federal police force of mammoth proportions.” It was therefore no accident that, in historian David Farber’s words, “Goldwater played in the South.” His vision of liberty distinctly paralleled that of slave owners who had regarded slaves as their property. Skeptical that every individual would be able to comport himself responsibly, Goldwater consistently supported rich over poor, employer over employee, and white over black. His vision of a restrictive Constitution caused him to attack the existing Supreme Court and champion a return to the antilabor, pro-business, segregationist courts of the Gilded Age. 
He revered freedom yet attacked the Brown v. Board of Education decision, arguing that it was “not based on law” because it represented a direct violation of Southern traditions of white entitlement and black exclusion. Defending his conception of the protections the Constitution offers against this kind of court interference, Goldwater said, “I am firmly convinced—not only that integrated schools are not required—but that the Constitution does not permit any interference whatsoever by the federal government in the field of education.” From assailing stronger labor laws to rejecting federal aid for education to battling colonial independence movements, he vehemently took on any reform that promoted egalitarian causes or what he perceived as the redistribution of wealth and property from privileged whites to the underprivileged and nonwhites. During the first press conference for the Draft Goldwater Committee, O’Donnell addressed the media and declared that the national Republican Party ought to pursue an intentional Southern Strategy. Because Goldwater was the only candidate who could successfully execute such a strategy, the Arizona senator ought to be the party’s nominee. “The key to Republican success,” O’Donnell argued, “lies in converting a weakness into a strength and becoming a truly national party.” The phrase “converting a weakness into a strength” meant securing the once solidly Democratic South for a Republican candidate. In his book about Goldwater’s campaign for the presidency, Suite 3505, F. Clifton White cleared up any doubt over what O’Donnell meant by including after that crucial phrase this parenthetical remark: “(the paucity of Republican votes in the South).” At this revealing moment in political history, O’Donnell had based his argument on a striking admission. The Southern Strategy was an intentional maneuver on the part of the party to win elections, and Goldwater, with his ability to appeal to racist sentiments in the South, was seemingly the only candidate who could deliver enough Southern votes to ensure a Republican victory. Republican gains in the South in the 1962 midterm elections—achieved largely through Republican opposition to the Department of Urban Affairs—only bolstered O’Donnell’s conviction that Goldwater could win the presidency with appeals to race. William Rusher, publisher of the National Review and a former assistant counsel under Robert Morris, agreed that Republicans could beat Kennedy by selecting a candidate opposed to civil rights. Rusher brushed aside the charge that racial politics was immoral by observing that Southern Democrats had been making appeals to segregationists for decades. While civil rights activists faced off against intractable segregationists, party-builders like O’Donnell were planning a racial strategy for Goldwater and making Republican institutions throughout the South lily-white. The Republican National Committee had dropped all pretense of appealing to minorities when it disbanded its division for minority outreach and established Operation Dixie, which recruited white Southerners to the party. To be sure, Goldwater’s appeal to middle-class Southerners in the burgeoning Sunbelt drew on class as well as race. Rather than appealing to the Ku Klux Klan, Goldwater and the GOP tailored their message for moderate Sunbelt suburbanites, who supported “right to work” labor laws, militantly opposed Communism, and assailed welfare policies. 
Attesting to the excitement that greeted Goldwater’s potential candidacy, nine thousand Americans from forty-four states converged on Washington, DC, on July 4, 1963, and filled the Washington Armory for a rally encouraging the senator to jump into the race. It was Peter O’Donnell’s job as the primary organizer and master of ceremonies to pump up the capacity crowd: “We are embarking on a great crusade . . . to put Goldwater in and Kennedy out!” One Washingtonian later declared, “This town’s never seen anything like it.” Conservative strategists’ increasing optimism and commitment to the Southern Strategy were buoyed by the American public’s growing disenchantment with President Kennedy’s unequivocal defense of civil rights. In June 1963, President Kennedy had addressed the nation on civil rights, called it a “moral issue,” and introduced a substantial civil rights bill to Congress. That summer, his approval rating dove from 70 percent to 55 percent. “Our people are tingling with excitement. I have been receiving long distance calls from all over the nation,” O’Donnell declared. “The South will take the lead in making Kennedy a 1-term president.” “A year ago it was said that Kennedy was unbeatable. But people are not thinking that way now.” With glee, O’Donnell predicted, “if Goldwater can carry the same states that Nixon carried in 1960, and then carry the balance of the Southern States, he will have 320 electoral votes—more than enough to win.” Goldwater himself was less than cooperative. He expressed little enthusiasm for running against Kennedy and throughout 1963 declined to commit himself to the presidential race. Although he never attempted to defuse the grassroots operation by flatly refusing to run, Goldwater remained unaffiliated with the committee that often met furtively in Suite 3505 in New York’s Chanin Building. It was “their time and money,” he said, although he was reportedly “furious” over the efforts of White and O’Donnell to seek out press coverage. “If Goldwater doesn’t want to make up his mind,” O’Donnell said, “we will draft him. And because he might say ‘No,’ we’ll tell him what we’re going to do. Won’t ask his permission to do it!” O’Donnell was well aware that time was of the essence, that the candidate would need to build the campaign’s financial and institutional infrastructure to run competitively against a Kennedy machine that had strong union support. O’Donnell grew increasingly impatient and frustrated with the presumptive candidate’s aloofness, but Goldwater refused to sanction fundraising on his behalf and stonewalled even John Tower, who served as O’Donnell’s primary intermediary with the Arizona senator. “We’re like a wet noodle,” O’Donnell complained. “This thing will surprise people if it ever gets started, but right now it isn’t started.” O’Donnell grew weary of working through Goldwater’s aides, who lionized their boss and, like the senator, showed no sense of urgency about announcing for president. After visiting New Hampshire in December 1963, O’Donnell lamented to a Goldwater staffer that “there are serious weaknesses in organization, finance, public relations and advertising, and in my opinion, we stand a great chance of being clobbered.” In addition to these organizational problems, O’Donnell saw Goldwater’s extemporaneous speaking style as an issue that might imperil a presidential run. 
O’Donnell advised Goldwater in a memo to prepare his remarks and avoid “shooting from the hip.” This unsolicited advice only further alienated O’Donnell from the senator’s inner circle. When Goldwater formally announced his intention to run in January 1964, O’Donnell and White were passed over for all senior positions on the Goldwater for President Committee staff. Rejecting John Grenier’s recommendation that O’Donnell be made director of political operations, the campaign offered the job to Lee Edwards, the editor of Young Americans for Freedom’s magazine, New Guard. Goldwater did, however, refer to O’Donnell as the “efficiency expert” during public remarks in June 1964 and thanked him for his efforts, saying, “I wouldn’t be standing here tonight as a possible nominee of our party for president if it weren’t for you.” Although the Goldwater campaign excluded O’Donnell, it nevertheless followed the strategy that had become his trademark and targeted the South with carefully coded appeals to white supremacy. Like some other conservatives, Goldwater exploited white anxieties in the face of the social change and upheaval fomented by the civil rights movement, which many perceived as a “Second Reconstruction.” In theory, Goldwater lauded liberty, but in reality, he allied himself with agents of racial separation. Martin Luther King accused Goldwater of “[giving] comfort to the most vicious racists and most extreme rightists in America.” Brooklyn Dodger great Jackie Robinson, himself a Republican, remarked that any black person voting for Barry Goldwater “would have a difficult time living among Negroes.” William P. Young of Pennsylvania, a black delegate to the Republican Convention in San Francisco’s Cow Palace, charged that Goldwater’s platform was “attempting to make the party of Lincoln a machine for dispensing discord and racial conflict.” There were three components to Goldwater’s version of the Southern Strategy. First, he demonized the Civil Rights Act, which became law in the summer of 1964. Abandoning an earlier attack that claimed the law was unconstitutional, Goldwater now insisted it “dangerously [tread] in the private affairs of men.” His opposition earned swift congratulations from O’Donnell, who called the law “vicious,” argued that it would create “a federal police state,” and declared that “President Johnson has turned his back on Texas to court the liberal extremists and Negro bloc in the North and East.” The second component, the film Choice, was a much more explicit type of appeal. Although Goldwater prohibited screenings of the film, which he himself called “racist,” its production demonstrated that winning the South remained the campaign’s chief preoccupation. The third component was an effort to conflate civil rights and civil disorder. Goldwater’s subtle argument was that “crime in the streets” resulted from disrespect for authority and waning morals, which in turn derived from liberalism’s welfare state. This disregard for authority and social mores crystalized in the civil rights movement’s strategy of civil disobedience. By invoking the phrase “law and order,” then, Goldwater launched a coded attack on civil rights, playing to the fears of many whites and implicitly promising strong retaliatory measures against those who appeared to threaten white people (particularly women).

The Politics of Law and Order

The politics of law and order had been brewing since at least the summer of 1963.
In a memo that June, one Goldwater advisor wrote, “The hostility to the new Negro militancy has seemingly spread like wildfire from the South to the entire country.” The president had failed to grasp “the political implications of such a change.” So long as the “tide of rebellion” continued and Goldwater invoked states’ rights and argued that “private property must remain inviolate,” he had a “serious chance” to beat John Kennedy. The memo suggested that any given category of crime be treated “as a prong of a single fork—a fork labeled ‘moral crisis.’” Goldwater, the memo argued, must jab the fork “relentlessly from now until election day.” Goldwater rolled out the discourse of “law and order” in March 1964 in New Hampshire, where he faced a closely contested primary against Nelson Rockefeller and Henry Cabot Lodge. The president, Goldwater declared, ought to “turn on the lights of moral leadership” and the “lights of moral order.” His “light-switch” reference identified morality with lightness, whiteness, and civic order (and, by extension, depravity with darkness and the civil rights struggle), connections he made even more explicit that June in Dallas. There, Goldwater specifically identified as criminal behavior the nonviolent resistance campaigns of the civil rights activists. Before a crowd of eleven thousand at the Dallas Memorial Auditorium, Goldwater declared his allegiance to “the principles that look upon violence in the streets, anywhere in this land, regardless of who does it, as the wrong way to resolve great moral questions—the way that will destroy the liberties of all the people.” Goldwater employed the language of law and order to appeal to fears of crime and black militancy while simultaneously blaming social ills on liberalism. He often spoke in terms calculated to evoke fears of black-on-white crime and sexuality, as in the statement “Our women are no longer safe in their homes.” In describing Washington, DC, with its high crime rate, as “a place of shame and dishonor,” he called into play public awareness of the city’s sizable African-American population. Goldwater placed the blame for threats to order squarely with the civil rights movement and the Great Society, President Johnson’s set of social and economic reforms. Civil rights, he averred, engendered permissiveness and moral laxity. American liberalism, reaching its crescendo with the Great Society, had banished God from schools and rewarded indolence with social programs. “Government seeks to be parent, teacher, doctor, and even minister,” Goldwater lamented. “Rising crime rates” evidenced the “failure” of the liberal strategy of social change. This strident, racist rhetoric, which originated with the Right, influenced moderates as well. Even temperate public figures like Dwight D. Eisenhower adopted the rhetoric of “law and order,” as in this remark at the 1964 Republican Convention in San Francisco: “Let us not be guilty of maudlin sympathy for the criminal . . .
roaming the streets with switchblade knife.” Roy Wilkins, executive director of the National Association for the Advancement of Colored People, underlined the racist implications of Eisenhower’s remark in a strongly worded rebuke: “The phrase ‘switchblade knife’ means ‘Negro’ to the average white American.” In his own 1964 convention speech, Goldwater tied together Democratic corruption scandals and “violence in our streets” with his plea that law and order “not become the license of the mob and jungle.” John Tower also courted segregationist voters by appealing to law and order. At the convention, he observed, “We’ve come to the point when people can be mauled and beaten and even killed on the streets of a great city with hundreds of people looking on, and doing nothing about it.” Placing the blame for this lawlessness with liberal policies, he continued, “We have come to the point where, in many cases, the lawbreakers are treated with loving care . . . while those who uphold and champion the rule of law and order are looked upon in some quarters as suspect.”

Rout

In 1964, the country as a whole was not ready for the brand of conservatism that Barry Goldwater embodied and Dallas County voters embraced. Conservative Republicans would have to wait for “future Novembers,” as William F. Buckley Jr. put it. But the South was ready. Five of the six states Goldwater won were in the Deep South: Alabama, Georgia, Louisiana, Mississippi, and South Carolina. He took Mississippi with 87 percent of the vote. Whereas Eisenhower had won 40 percent of the nationwide black vote in 1956 and Richard Nixon had garnered 36 percent in 1960, Goldwater took a meager 6 percent in 1964. As historian Michael Flamm concluded, neither perceptions of black violence and crime nor reactions to the rapidity of desegregation in the cities of the Northeast and Midwest were yet strong enough to produce the white voter backlash Goldwater would have needed to win in 1964. The politics of law and order failed to carry the day in 1964 because the discourse was premature. The assassination of President Kennedy in Dallas in November 1963 had revolutionized the political landscape, and both Goldwater and Congressman Bruce Alger were defeated. The assassination cast a long shadow over both campaigns and over Dallas’s identity, reinforcing the city’s reputation as a haven for extremism. An incident in which Adlai Stevenson, the US ambassador to the United Nations, was physically abused by an angry mob of Dallasites on October 26, 1963, recalled the day in 1960 when Lyndon Johnson and his wife were accosted at the Adolphus Hotel. A study by Peter O’Donnell conducted a month before the assassination concluded that “neither Republicans nor Democrats identify Goldwater as part of the radical right.” That was not the case soon after. By demonstrating that extremism was a problem in the body politic, the assassination, although perpetrated by a Marxist, made the identification of Goldwater as a trigger-happy warmonger much more convincing to the public. Some rank-and-file Republicans grew despondent immediately after the tragedy. As Dallas Republican activist Sally McKenzie said, “We all worked our souls out” for Goldwater. “Every bit of that went down the tubes the day that Jack Kennedy was killed in Dallas. I had just finished a door-to-door canvass in my precinct. I went in that night, not that I was being disrespectful of a deceased president, and just tore up the records.
It was futile after that.” Goldwater’s propensity for “shooting from the hip” provided further fodder for those characterizing him as “trigger-happy.” If his promise to grant jurisdiction over tactical nuclear weapons to American commanders in the field did not scare away voters, his assertion that such weapons could be used to defoliate the jungles of Vietnam did. In the final weeks of the campaign Goldwater attempted to remove what O’Donnell called the “atomic thorn in his heel” with more appeals to law and order. But the “trigger-happy bit,” one Dallas conservative Republican noted, hurt Goldwater among American voters. “We had a public relations image hung on us like a dead cat.” With Kennedy’s assassination, Lyndon Johnson, a master politician, ascended to the presidency, armed with both a singular understanding of Congress and a mandate to secure his fallen predecessor’s legislative program. While Johnson’s legislative record had been the most liberal in the nation’s history, many Dallasites, Texans, and Americans in the fall of 1964 still regarded the tall Texan as more moderate than his slain predecessor. In a shrewd gesture calculated to garner broad bipartisan support, reinforce his image of steady moderation, and avoid backlash, Johnson identified the 1964 Civil Rights Act as more of a legislative priority for the slain president than for himself. Goldwater, now facing a popular president from Texas instead of an incumbent from Massachusetts, was never fully able to execute the Southern Strategy in 1964. Moreover, the assassination had dampened his enthusiasm for the campaign. Goldwater liked Kennedy personally and had relished the opportunity to run against him. The decision to exclude F. Clifton White and Peter O’Donnell from the campaign also proved unwise. Denison Kitchel, Dean Burch, and Richard Kleindienst—the “Arizona Mafia”— lacked their predecessors’ experience, discretion, and organizational wizardry. In the final analysis, Goldwater’s running mate, William Miller, probably summed up the election results best: “The American people were just not in the mood to assassinate two Presidents in one year.” Kennedy’s assassination contributed to the debacle of the Dallas Republican Party in 1964. All eight Dallas Republicans in the Texas legislature were ousted, and Bruce Alger lost his bid for reelection to Congress to Democrat Earle Cabell. To be sure, Cabell was a well-financed candidate, a popular mayor who rode the coattails of a president from Texas. Moreover, Cabell made a strong case against Alger’s effectiveness as a congressman. Ultimately, the revival of Alger’s ultraconservatism—what many regarded as extremism, especially with Goldwater on the ballot—combined with his Dallas constituents’ concern for the city’s shattered image, were the most important factors in his defeat. Alger had modulated his ultraconservative image following the Adolphus incident, but his flirtation with distancing himself from far-right organizations and ideas did not last long. Alger’s speeches in 1962 and 1963 contained secular apocalyptic overtones. In his self-proclaimed “one-man campaign against John F. 
Kennedy,” he attacked the administration’s distribution of federal money to the nation’s cities, calling it a “sure step toward the end of free elections.” Aping Senator Joseph McCarthy’s 1950 speech in Wheeling, West Virginia, Alger addressed the Petroleum Engineers’ Club of Dallas and declared that he held in his hand fifty-five indictments charging the Kennedy administration with coddling Communists. On other occasions, Alger averred that the president was moving the country “closer to dictatorship” and that “the nation cannot survive another four years of the New Frontier policies.” This renewed, more militant ultraconservatism, manifested just as Dallas’s image throughout the world was tarnished by the murder of a president, contributed to Alger’s loss in 1964. Murdered by Jack Ruby, Lee Harvey Oswald never had his day in court, but Dallas, as A. C. Greene observed, promptly went “on trial.” In the days, weeks, and months after the assassination, newspaper and magazine editors descended upon Dallas to dissect its identity and often drew hasty and simplistic conclusions. The outside appraisals, on the whole, concluded that the city of Dallas was culturally bereft, politically autocratic, and socially bankrupt. Although President Kennedy’s killer was a Marxist who had lived in Dallas for only two months, many columnists concluded that the city and its right wing had created an environment that contributed to the assassination. An article in Fortune referred to Dallas as the “hate capital of the nation,” “a place so steeped in violence and political extremism that school children would cheer the president’s death.” One newspaper observed, “The hatred preachers got their man. They did not shoot him. They inspired the man who shot him.” Another noted that “Mr. Kennedy had prepared a speech which . . . reminded the people of Dallas that . . . America’s leadership must be guided by the lights of learning and reason. . . . Dallas’s answer, even before that speech was delivered, was to shoot John F. Kennedy.” Along with resurrecting the Adolphus and Stevenson incidents, some journalists concluded that the centralized structure of Dallas’s Citizens Council inhibited discussion, discouraged dissent, and restricted the intellectual and cultural activity essential to a thriving metropolis. Although many city leaders argued that the assassination “could have happened anywhere,” Congressman Alger was the most doctrinaire and hostile in attacking the news media for suggesting that Dallas itself was to blame. With the city’s image under siege, the business community divided its support between Cabell and Alger. Alger garnered support from the oilman Jake Hamon, Dresser Industries’ H. N. Mallon, Sun Oil’s Tom Hill, and Lone Star Steel’s E. B. Germany, while Cabell had the solid backing of the downtown Dallas establishment, including Robert L. Thornton (who founded the Citizens Council), the retailer Stanley Marcus, and John M. Stemmons. In the final analysis, enough business leaders came to the following conclusion: since the federal government already meddled in the life of the city—from civil rights to defense appropriations— Dallas might as well benefit and secure federal money to connect the Trinity River to the Gulf of Mexico, construct a downtown Federal Center, and undertake other projects that would move the city forward. 
Given concerns over the effect of the city’s image on future growth, it made little sense for Dallas leaders to stick with an intractable libertarian ideologue who had come to personify an extremism that frightened the country and appeared to bring out the worst in people. The assassination also revivified the Democratic Party in Dallas County. Within three months of the tragedy, the North Dallas Democrats, the first local organization working for Democrats on all rungs of the party hierarchy since 1948, was formed. Bill Clark, chairman of the Dallas County Democrats, adopted many of the organizational strategies that Peter O’Donnell had pioneered. Enthusiastic Democratic volunteers went door-to-door and called from numerous phone banks urging Dallasites to vote Democratic, “from the White House to the Court House.” Another important factor in Alger’s defeat was African-American turnout, which reached 85 percent in some precincts. With Lyndon Johnson committed to the cause of civil rights more vigorously than any predecessor (or successor), thirty-two thousand Dallas black voters chose a straight Democratic ticket; the Democratic margin of victory in some black precincts was 119 to 1. Indeed, between 1952 and 1964 the flight of African-Americans from the Republican Party amounted to a seismic shift, and the Dallas Republican Party illustrated that trajectory in microcosm. In 1952, 44 percent of African-American voters nationwide supported Dwight Eisenhower for president, and two years later 67 percent of African-American voters in Dallas County supported Bruce Alger for congressman. But in 1964, only 6 percent of African-American voters nationwide supported Barry Goldwater for president, and locally only 2.4 percent supported Alger. The Dallas Republican Party’s loss of the African-American vote was no fleeting anomaly. Jim Collins, the son of Carr P. Collins and an unsuccessful 1966 GOP congressional candidate from Dallas, performed about as well as Alger had in African-American precincts two years earlier. Despite national Republican chairman Ray Bliss’s optimistic appraisal that Collins’s support among blacks was “sensational,” that it had “exceeded his fondest expectations,” and that it showed that blacks were returning to the Republican Party, the actual results in Dallas were nothing for Republicans to celebrate, rising an infinitesimal 0.8 percentage points to 3.2 percent. Speaking to a Dallas audience in 1968, Richard G. Hatcher, the newly elected black mayor of Gary, Indiana, said that “the Republican Party has in effect turned its back on the black people of this country.” The GOP simply did not want black votes, he concluded. The Reverend Ralph Abernathy echoed Hatcher, adding that the 1968 Republican platform and the ticket of Richard Nixon and Spiro T. Agnew “are not an inspiration to black voters.” Despite Alger’s and Goldwater’s thumping at the polls, they left an important legacy: they had made the case that there was a place for segregationists and states’ rights advocates in the Republican Party. Foreshadowing a bright future for the conservative movement, over a million men and women contributed money to Goldwater’s campaign in 1964, whereas Richard Nixon had received contributions from only forty-four thousand in 1960. After signing the 1964 Civil Rights bill into law, President Lyndon Johnson told an aide, “I think we just gave the South to the Republicans for your lifetime and mine.” Yet Johnson’s prognostication was only partially correct.
Johnson had given the Deep South another reason to vote against the Democratic Party, but Goldwater gave the region a candidate who was on their side. One Republican from South Carolina expressed the view of many in the region when he observed that although Barry Goldwater was a Westerner, he “could pass for a great Southerner any time, any place.” But along with the discovery of a candidate, it took the precedent of Dallas-based, segregationist ultraconservatives like Bruce Alger, John Tower, Jack Cox, Maurice Carlson, and Peter O’Donnell to lay the groundwork for Goldwater’s run in 1964 and to demonstrate that national Republicans could finally “whistle Dixie.”

Excerpted from "Nut Country: Right-wing Dallas and the Birth of the Southern Strategy" by Edward H. Miller. Published by the University of Chicago Press. Copyright 2015 by the University of Chicago. Reprinted with permission of the publisher. All rights reserved.

Published on November 01, 2015 11:00

The curious case of Ben Carson: How a black neurosurgeon soared to the top of the GOP primary

In a primary season that has seen the most unlikely of candidates, Donald Trump, surge to the head of the GOP pack, perhaps the least surprising development is the ascendance of another political outsider: Dr. Ben Carson. But to understand why Carson's recent success in Republican polls makes so much sense, one must first take a closer look at the nature of the modern Republican Party. In the Age of Obama, GOP elites routinely bloviate about their need to expand outreach to people of color, especially Hispanics and Latinos, given the United States’ changing ethnic and racial demographics. Yet, the Party has consistently failed to leverage opportunities to that end. This could be a function of incompetence. Alternatively, such a lack of substantive efforts could simply be a reflection of a political party that is dedicated to white racial resentment and white identity politics—and thus suppressing the votes of non-whites—as its primary electoral strategy. Given these dynamics, how does one make sense of the curious case of Ben Carson? How then does his surging popularity compute? Carson’s popularity points out the tension between what is known as “substantive” and “descriptive” politics. Substantive politics centers on a belief in a person’s values and policy positions as overriding other identity-based concerns about governance and political behavior. Yes, the body that an individual is born into matters. But, substantive politics presumes that almost any person can effectively represent a given constituency and its values. Descriptive politics, on the other hand, is the belief that a person’s life experiences and identity, especially if they are an outsider or Other in a given socio-political system -- in the United States and West this would be women, people of color, gays and lesbians, and members of other marginalized groups -- will lead them to challenge the system or be transformative and somehow resistant. Ultimately, the tension here is between individuals and systems. Do our racialized, ethnic, gendered, and other identities provide gifted insight and leverage for those we represent in government? Or is it best to vote for and support candidates based on their ideas alone, with an understanding that the system exercises constraints on all actors? (Stated differently: A white man may do a better job of representing his black and brown constituents’ interests than a brother or sister who “sells out” to Power. The latter is a “token”; the former can be a true and effective representative.) White conservatives love Ben Carson, the black face in a high place, in a sea of white candidates, because his symbolic presence provides cover for the white supremacist politics endorsed by the post-civil rights era Republican Party. Despite his popularity, Ben Carson is actually an example of the worst case of weak, symbolic, petty, token descriptive politics, where the fact of his presence as a black person is somehow supposed to win over non-white voters to the Republican Party, and demonstrate that the latter is “inclusive” and “not racist.” Yet Ben Carson’s policy proposals are not significantly different from those of his 2016 Republican primary peers. He wants to end the Affordable Care Act, do the bidding of the National Rifle Association against the will of the American people, take away women’s reproductive choices, usher in an American theocracy, and prevent the plutocrats of the 1 percent from paying their fair share in taxes. 
In many ways, Carson is actually worse than the white conservatives he shared the stage with at the debate the other night. He has repeatedly channeled ugly and grotesque anti-black sentiments and beliefs about the agency, freedom, and intelligence of the African-American community. This is his assigned role as a black conservative; his politics are no less noxious for his expertly performing the assigned script. By contrast, Bernie Sanders and Hillary Clinton have done a far better job of responding to the concerns of black and brown Americans -- even though the 2016 Democratic presidential primary field does not include a person of color. The Republican Party props up its black conservative human mascots and flavors of the month during the presidential campaign season because, on a basic level, white conservatives misunderstand non-white voters. People of color have rejected the Republican Party not only because of questions of representation, but also because its policies are anathema to the well-being, safety, security, and prosperity of Black and Brown America. The Republican Party is facing demographic suicide in an America that is increasingly black and brown -- where the GOP’s policies have savaged the poor, working, and middle classes. When a person is lost in the desert, they tend to walk in circles because they instinctively follow their dominant hand. He or she will eventually die from dehydration. The 2016 Republican presidential primary candidates are an example of a political organization in a death spiral. Black conservatives like Ben Carson will not save them. Together with his co-frontrunner Donald Trump, they are mirages that will lead the Republican Party to its doom.

Published on November 01, 2015 09:00

They’re the politically correct: Ben Carson and Bill O’Reilly are the real intolerant speech police

That leftist “social justice warriors” are suppressing speech that makes people “uncomfortable” is the dominant media narrative about free speech on college campuses. There are, of course, examples after examples of the opposite — people with actual administrative authority on campus (not student protesters) both upholding free speech in the face of left-leaning student demands for censorship, and denying free speech to left-leaning activists. The myths persist not because they are true, but because they are pervasive and under-scrutinized. It's also classic conservative concern-trolling. The dominant narrative of students paradoxically coddled and terrified by leftist “SJWs” pretends to be about student wellbeing and protection of free speech, but was always fodder for partisan politics. For example, Fox News frets over the implications of a liberal professoriate because 96 percent of Cornell University faculty donations went to Democrats, but Bill O’Reilly has no plans to send Jesse Watters to the Chamber of Commerce to pose a series of inelegantly leading questions about whether their disproportionate donations to Republicans might be a source of indoctrination. It’s no secret, in other words, that colleges and universities are among the very few influential U.S. institutions that also tend to question received wisdom about the free market and conservative ideology; so colleges and universities (and not workplaces, or the Motion Picture Association, which sponsored CPAC last year) are predictable targets for conservatives who ostensibly care about free speech. The trouble for Republicans — and for adherents of the dominant narrative that the left is singularly or particularly illiberal on matters of speech — is that explicit challenges to free speech on campus continue to come from the right. Last week, viable Republican presidential candidate Ben Carson unveiled a plan to use the Department of Education to monitor and police speech on college campuses, a system by which the federal government collects speech reports from campuses and then decides whether the speech in the reports qualifies as propaganda. If it does, per Carson’s plan, the institution in question would lose federal funding. In short, Carson proposes to encourage people on campus to police each other’s speech, to report to the federal government any speech that might be propaganda, and then to have the federal government determine what is and is not propaganda. Critics of leftist attempts to shut down “insensitive” speech point to the “chilling” effect of an environment that encourages self-policing, watching what we say to avoid causing offense. Carson’s proposal certainly includes that, but goes a step further by inviting one of the most powerful governments in the world to decide for college students and faculty if their speech constitutes propaganda. You could say that Carson’s expressed disregard for the First Amendment is an outlier or a “cherry-picked” example, but then you’d have to contend with the fact that the Republican National Committee, which held the most recent GOP presidential debate on campus at University of Colorado, Boulder, has made arrangements with the University to prevent UC Boulder students from attending the debate. Of the 11,000 seats provided by the venue, the RNC initially made available only 50 tickets for students (facing some pressure, the RNC generously increased student ticket availability to 150 out of 11,000). 
If you support the Republican agenda, it becomes difficult to talk straight-faced about the left-wing, “SJW” assault on free speech while also claiming to care about college students and free speech on campus. Particularly if you’re concerned that colleges and universities suffer from a lack of exposure to conservative ideas, it makes little sense to hold a Republican presidential debate on a liberal college campus but keep liberal college students from attending. Putting aside that the Republican presidential candidates this year are unlikely to produce much of value in the way of ideas or policy proposals, what better opportunity than a Republican presidential debate to expose liberal students to conservative ideas that matter? However, between the RNC’s handling of student access to the debate, and Ben Carson’s plan to police the speech of liberal professors, the mask of concern for free speech is beginning to melt away. Carson can’t help but admit that he’d rather have state-sponsored censorship than tolerate just one of many U.S. institutions — higher education — where free-market and conservative orthodoxies are likely to face scrutiny. The RNC would rather use a university campus (and all the lofty things it symbolizes) as a debate venue, but surround the venue with fences and keep out actual students, a gesture that puts audience control and exclusivity above the supposed goal of bringing conservative ideas to campus and attracting students to “big tent” (not ring-fenced) conservatism. These examples from national-scale conservative politics add to a growing list of high-profile free-speech issues and controversies at Duke University, Wesleyan University, the Community College of Philadelphia, American University and University of Illinois, Chicago, all of which contradict the dominant narrative of leftist “SJWs” suppressing speech and getting away with it. From this expanding list we learn three very important things about freedom of speech in the U.S. One, freedom of speech faces real challenges that we must not dismiss; but those challenges come from both the left and the right. Two, the actual enforcement of value judgments about speech is a matter not of the political content of speech, but of the power differential between the speaker and the censor; and leftist students and faculty are still far less powerful than the institutions that house and employ us all, or the outside lobbies and politicians who take opportunistic, partisan interest in campus affairs. Three, with employers monitoring employees’ social media accounts, political donations and affiliations, enforcing “workplace happiness” protocols and employing people in unpaid internships and precarious contracts, the gravest threats to free speech are happening not on campus, but in the workplace. When will free-speech advocates muster the courage to police and interrogate our most powerful institutions the way they police college campuses? I’m speaking of the corporations, the corporate lobbying firms and the hip-pocketed Congress that so nakedly serves corporate interests above all else.

Published on November 01, 2015 08:59

In “Burnt,” the brilliant white douche wins again: We love to excuse bad behavior from successful creeps — how else do you explain Donald Trump?

If "Burnt" were being honest, it would have been called “Everybody Wants to Fuck Bradley Cooper.” The John Wells-directed film, in which Cooper plays a bad boy chef, labors under the misapprehension that no matter how horrible, narcissistic or borderline violent a guy cinema’s second favorite Brad plays, everyone is dying to bone him: the talented sous chef (Sienna Miller) who can’t stand the sight of him; the gay, Elmer Fudd-inflected maître d’ (Daniel Bruhl) willing to sacrifice his entire career for just one kiss and even the restaurant critic who is exclusively into women but willing to make an exception for white dudes whose names rhyme with “Madly Boop-her.”   This is despite the fact that (to reiterate) Cooper is playing the worst person in the world. While displaying almost no redeeming qualities — outside of those piercing baby blues — Cooper’s Adam Jones manipulates Miller’s character (whose character may as well be named “Girl Chef”) into working with him and then repeatedly bribes her not to quit because of his abusive behavior; in one scene, he violently grabs her by the shirt for daring to question his authority. Jones forces the Elmer Fudd-ish maître d’ to give him the head chef job at his dying father’s restaurant — because Jones knows Fuddy is in love with him (hey, who isn’t?). And in a telling moment, he facetiously suggests that a shy cook should work for him for free — all to prove a point: You have to be a jerk to get what you want. It might seem like "Burnt" is "Wall Street" with a frying pan, but the film’s “narcissism is good” ethos is less a philosophy than a love letter to insufferable white golden boys and the people who find them dangerously irresistible. "Burnt" validates and redeems Jones' numerous misdeeds not by having him ever apologize for his behavior or make amends but by repeatedly reminding us a) that he’s so brilliant b) that he’s so damaged (bad childhood: check!) and c) that he’s not overtly horrible to children. In Adam’s redemptive “he’s not so bad after all!“ scene, he deigns to bake Girl Chef’s daughter a cake — and even eats it with her. This is, of course, after he wouldn’t let Girl Chef have the actual day off to throw her daughter a party. If little separates Adam Jones from Patrick Bateman aside from a knife and an adult poncho, Adam Jones actually exemplifies a time-honored type in American cinema: In real life, you would file a restraining order against him, but unfortunately, he’s attractive, Caucasian and the protagonist of the film. The Hot White Douche is as old as the history of the cinema itself: In the film "His Girl Friday," you’d throw a pie in Cary Grant’s face if he weren’t, well, Cary Grant. In a memorable scene from "The Philadelphia Story," Grant actually facepalms Katherine Hepburn, pushing her to the ground — and then they end up together. And James Bond has made a 50-year career off being a debonair dickbag; a recent review of "Spectre" called 007 a “violent misogynist,” and even Daniel Craig agreed. Sexism just looks better in Tom Ford. The current cinema has given us famous examples of the Hot White Douche like Edward Cullen and Christian Grey, both of whom teach horny teenage Mormons and middle-aged hausfraus that there’s nothing sexier than abusive relationships. 
But even less controversial characters have distinct HWD tendencies: In "Reality Bites," Winona Ryder ends up with Ethan Hawke — a pretentious, greasy-haired jerk who spends the whole movie being mean to her — for no other reason than the script says so (they “belong together” or something). "Bridget Jones’ Diary" and "The Ugly Truth" respectively give us Daniel Cleaver (Hugh Grant) and Mike (Gerard Butler), whose workplace behavior should be used as a cautionary tale at sexual harassment seminars. The problem with a movie like "The Ugly Truth" isn’t just that the screenplay (which was somehow written by three women) lets the misogynist get the girl but that it, like "Burnt," proves his worldview correct. "The Ugly Truth" offers a Cyrano de Bergerac scenario in which Mike helps Abby (Katherine Heigl) get laid by teaching her what men like — which is Cool Girls who wear tight dresses and find all their jokes hilarious. Of course, Abby finds that his methods work — until she and Mike inevitably fall in love. "Jurassic World" offers the same thesis: For women, there’s nothing sexier than a scruffy asshole (this time Chris Pratt) with a six-pack ordering you around. What’s particularly troubling about this type is how many actors have made quite a cottage industry off them: Matthew McConaughey launched his second career as cinema’s most prolific Hot White Douche — from "How to Lose a Guy in 10 Days" to "Ghosts of Girlfriends Past," his inevitable HWD swan song. Han “I Know” Solo is a famous example of the type, but even Harrison Ford’s non-"Star Wars" characters (see: Indiana Jones, Rick Deckard) have HWD elements. And Pratt is becoming the biggest star in America by picking up where Ford left off: In "Guardians of the Galaxy," he’s yet another rakish cad who doesn’t have time for feelings — he has his Walkman. The problem with the Hot White Douche — on top of, well, everything — is that he gets the privilege of being bad without sacrificing our sympathy or his status as the hero. And this extends beyond Hollywood's fictional heroes — isn't Donald Trump just a natural extension of this type, albeit on a less-handsome scale? His financial success and confident swagger insulate him from backlash against his terrible public behavior. "Sure, he's a crude jerk, but don't you love the way he speaks his mind?" In a particularly interesting scene in "Burnt," it’s revealed that Omar Sy’s Michel, a chef whose life Adam Jones ruined, only agreed to work for him as an act of revenge. During a pivotal moment, Michel (aka: the only character of color) throws cayenne pepper into a dish to ruin it, thus sabotaging the restaurant’s shot at a three-star Michelin rating. After his vengeance is complete, Sy walks off the screen and is never seen again. The difference between the two men is that because Jones is white, the movie has the privilege of being about him; his shades of grey are the only ones the film considers noteworthy. You might argue that many of these films are aware their male leads aren’t role models or the “Man of the Year,” and a movie like "Burnt" is more interested in making its protagonist complicated than likable. I understand that in principle, but how often are Omar Sy or Sienna Miller allowed to play characters who are liked not in spite of their flaws but precisely because of them? 
While cinema has taken tentative steps forward in giving us female anti-heroes (see: "The Girl with the Dragon Tattoo"'s Lisbeth Salander, "Gone Girl"'s Amy Dunne), it’s a lot harder to think of a woman whose bad behavior is eminently fuckable. In "Young Adult," you don’t want to shack up with Mavis Gary; you want to get her into rehab. Amy Schumer's character in "Trainwreck" has to clean up her act before she can make her relationship last. For all the praise heaped on "Gone Girl"’s depiction of the female anti-hero, what I found most refreshing was Gillian Flynn’s overt deflation of the Hot White Douche. Nick Dunne’s failings as a husband (his affair, sinking all their money into a failed bar) don’t make him a charming cad, and unlike Cooper’s libidinous chef, the female characters don’t spend the whole movie polishing his ego. Even the women closest to him, like his twin sister, Margo, come to detest the sight of him when they see Nick for who he really is: just a pathetic jerk. Sure, he didn’t kill his wife — but it doesn’t make him a hero. I wish "Burnt" had the same courage. For all the film’s seeming promises of lessons learned and redemption found, the movie is more interested in giving Adam Jones his great comeback than challenging him to think about why he may have been denied it to begin with. The Hot White Douche might get what he wants, but at what cost?

Published on November 01, 2015 07:30

The GOP can’t escape football: Republicans’ long love affair with America’s most brutal sport

Chris Christie might have thought it was ludicrous for the CNBC debate moderators to ask about the booming yet ethically murky "daily fantasy football" industry, but they had defensible reasons for doing so. Websites such as Draft Kings and FanDuel now generate tens of billions of dollars, by way of a product that bears striking similarities to online gambling. That daily fantasy football should have become such a booming enterprise — and that it might merit consideration during a presidential debate — might surprise some. It shouldn't, and especially not on the conservative end of the spectrum. While baseball may be America’s official pastime, football has been its most popular sport for more than thirty years. Over that same period, more and more Republican voters, presidential candidates and even presidents have been strongly identified with the game. Indeed, although he may not have realized it, when Christie expressed outrage at the debate’s disproportionate attention to football, he was in part attacking the values of his own party’s base. As writer Neal Gabler noted in an editorial for ESPN, “Of people who identified themselves as part of the NFL fan base 83 percent were white, 64 percent were male, 51 percent were 45 years or older, only 32 percent made less than $60,000 a year, and, to finish the point, registered Republicans were 21 percent more likely to be NFL fans than registered Democrats.” In addition, football is particularly popular in the South, one of the Republican Party’s regional strongholds in America today. Richard Nixon was the first president to be widely viewed as a “football freak,” even though his personal experience playing the sport was limited to serving as a tackling dummy for the small football program at the Quaker college he attended. Despite this inglorious beginning, Nixon was openly passionate about the game, impressing even harsh critics like Hunter S. Thompson with his encyclopedic knowledge of college and professional football statistics. This same enthusiasm, though, also got him into minor trouble, such as when he asked Miami Dolphins’ coach Don Shula to run a play he suggested during Super Bowl VI (the play failed and the Dallas Cowboys ultimately won) or annoyed anti-Vietnam War protesters at the Lincoln Memorial by trying to talk to them about college football instead of their political concerns. Nixon’s penchant for unflattering associations with football apparently set a precedent for his party’s subsequent commanders in chief. Gerald Ford’s opponents often liked to insult his intelligence by referencing the multiple concussions he suffered while playing the game in his youth, quoting Lyndon Johnson’s old observation that Ford “spent too much time playing football without a helmet.” Ronald Reagan allowed his second inauguration to be postponed by a day so as not to conflict with Super Bowl XIX, instead ushering in the first day of his second term by performing the ceremonial coin flip at the Big Game. George W. Bush, meanwhile, allowed football metaphors to become so commonplace among members of his administration that they sometimes bungled their language when navigating delicate foreign policy situations in North Korea or Afghanistan. 
This brings us to the meat of Christie’s criticism — in his own words — that “we have ISIS and al Qaeda attacking us and we’re talking about fantasy football.” Despite occupying very different points on the ideological spectrum, Christie’s outrage was in the same spirit as the anger expressed by the Lincoln Memorial protesters who thought Nixon had come to them to discuss the Vietnam War, only to be regaled with tales about the University of Syracuse’s football program. For many Americans, football is more than just a form of entertainment or recreation, but a way of life deeply embedded in their cultural institutions and held in reverence by the bulk of society. As a result, it is easy to allow the bread-and-circus spectacle of football to distract them from issues that have serious real-world consequences — or, like Nixon, to understand why others wouldn’t feel the same way. This overzealous passion for football can also cause serious ideological inconsistencies. Take Paul Ryan, the newly minted Speaker of the House, who as the Republican vice presidential candidate in 2012 stood on a staunchly anti-labor platform that he had helped design — but one which he temporarily abandoned by siding with the NFL referees’ union over the league during a strike, because the inexperienced replacements refs had cost his beloved Green Bay Packers a victory over Seattle Seahawks. (Ryan was joined in this by Wisconsin governor Scott Walker, who had built his political career on his anti-union bona fides, but called for the NFL to resolve the strike so that the quality of the game he loved would not be impaired.) Even Christie’s own rebuke of the notion that Republican presidential candidates should discuss fantasy football regulation touches on this problem; when he argued that the focus should be on how to “get the government to do what they’re supposed to be doing” — and declared, “Enough on fantasy football. Let people play, who cares?” — he reminded the audience of the GOP’s ostensible “small government” principles — ones that should make the state’s stance on something as frivolous as fantasy football self-evident. That said, it’s important that Republicans learn a lesson from the first president to have a meaningful relationship with football — future president Herbert Hoover, who managed Stanford University’s budding football team in the 1890s and famously forgot to bring the ball during their first Big Game with the University of California. While Hoover may have been embarrassed for neglecting his football duties, the party he later represented has often been guilty of the opposite sin, ranking football too highly among its priorities. The daily fantasy football controversy is perhaps a partial exception. Nonetheless, hopefully last night’s debate can start the process of offsetting this phenomenon.Chris Christie might have thought it was ludicrous for the CNBC debate moderators to ask about the booming yet ethically murky "daily fantasy football" industry, but they had defensible reasons for doing so. Websites such as Draft Kings and FanDuel now generate tens of billions of dollars, by way of a product that bears striking similarities to online gambling. That daily fantasy football should have become such a booming enterprise — and that it might merit consideration during a presidential debate — might surprise some. It shouldn't, and especially not on the conservative end of the spectrum. 

Published on November 01, 2015 07:00

October 31, 2015

Halloween is my nightmare, but not for the usual reasons

Each year I dread Halloween. While my neighbors are impersonating Ben Carson, Jeb! Bush and other spine-chilling creatures, Halloween fills me with grief. My 46-year-old brother lay dying on All Hallows’ Eve, 25 years ago. The trick was on us.

Ten years older than me, Jay was my surrogate dad. My workaholic father was rarely around, busy supporting three children, his widowed mother and his mother-in-law. A blue-eyed boy with a photographic memory, Jay was my mother’s favorite, and I was his. My first childhood memory is of Jay climbing into my crib, making me giggle uncontrollably. He protected me from bullies and thrilled me with risks as we pretended to be Olympic slalom riders down the legendary Suicide Hill nearby. Whenever I fell and skinned my knee, my usually gentle brother would hit my arm rather roughly. “Ouch!” I’d protest. “Does your knee hurt anymore?” Jay would respond playfully. He even invited me on dates with his high school sweetheart. She was not always enthusiastic about his baby sister munching popcorn next to her at the movies.

When he married her after his first year of dental school, I felt jealous and abandoned, but had perfect material for therapy sessions. He left me again, for Vietnam, where he filled soldiers’ teeth dangerously close to the front lines, and later for faraway Florida, with his wife and three kids.

When he was 44, Jay was diagnosed with lung cancer. A non-smoker, he was so far outside any risk group that the doctors missed the signs for a year, believing the pain he felt was a pulled muscle. After his first MRI, he confessed how eerie it was going into the tube, knowing his grim diagnosis. “It was like being in a coffin,” he said. One night his wife called me, sobbing. “Only 30 percent survive beyond 18 months,” she said. I consoled her, sent him his favorite chocolates and books to read during treatment, and cried privately, 1,500 miles away, angry at such an unfair fate.

Two years later, Jay was taking his last breaths in his bedroom, his wife and kids by his side, as trick-or-treaters relentlessly rang the doorbell all night. “I couldn’t answer the door,” Jay’s wife frantically told me. “But they just wouldn’t stop coming, wanting candy. Finally I disconnected the doorbell.”

One Halloween when I was 8, my friends and I heard that a family down the block was doling out a different kind of sugar: money. In the days when parents didn’t escort their kids, we marched down to the house, whose residents we didn’t know, and leaned on the doorbell. Again and again. No answer, but we persisted. Finally an angry, weary dad answered the door. “Stop disturbing us,” he barked. “Our baby’s sick.” We retreated, guilty and remorseful. His baby recovered from the flu, and we avoided walking past his house for a long time. My brother died the morning after relentless ghouls and goblins failed to understand that his grieving soon-to-be widow had nothing sweet to spare.

Jay’s tragic death made me question my faith in God. I focused my energy on nurturing his kids, who were in high school and college. Each year, Halloween was particularly painful for me, and I avoided any kind of celebration. Instead, my husband and I lit a candle in memory of Jay, recalling his warmth, humor and jokes.

After my daughter was born, it became difficult to hide out on Halloween. Her best friend’s mother made elaborate outfits from scratch as if she were a Broadway costume designer, which made my own parenting skills feel inadequate. But it also felt impossible to go gleefully into the spooky night when I was sadly missing my brother. Only once did I succeed in piecing together a cool Halloween costume for my daughter. We enjoyed watching Marx Brothers movies, the way Jay had once introduced me to “A Day at the Races.” As an homage to my brother, I transformed my only daughter into Harpo. She wore my raincoat, which nearly reached the floor on her. I found a blonde curly wig in a store frequented by transvestites in the East Village, and a friend donated a horn to complete her outfit. Instead of saying trick-or-treat, she honked. Through all this, I kept my Halloween grief to myself, not wanting to ruin her fun and excitement.

Ironically, All Hallows’ Eve originated as a time to remember the dead, but today it has morphed into a festive Mardi Gras. Holidays that emphasize a collective celebration of joy make us feel compelled, even pressured, to be gleeful along with everyone else — rather than be identified as the only stick-in-the-mud in the crowd. On a night of gaiety when it’s de rigueur to transform into something else, I’m cloaked behind an invisible mask of sadness. Sometimes I tried to ease my sorrow by stuffing myself with my daughter’s overstock of candy. I was relieved when she was old enough to piece together her own costumes. When the new generation of trick-or-treaters arrived at my door, I put on a big smile, a clown with a sad interior.

Halloween never ends when your kids grow up. This year an email arrived from close friends inviting us to their annual party, ending with the line “costumes are a must!” My husband has always hated dressing up, and for years agreed to show up in a Mr. Spock T-shirt, until he was typecast. Once I convinced him to reprise Harpo for a party; he put on the wig for half an hour, then tossed it off. I never feel like turning myself into someone else on the anniversary of my brother’s death, but only my husband knows my secret. I don’t want to cast a pall when everyone else is flying high, riding broomsticks.

Of course, Jay wouldn’t have wanted me to stay home and mourn him on Halloween, decades later. So I will show up among the grown-up pirates and witches in a Mets hat and a blue shirt, as usual deserving the award for Worst-Dressed-of-the-Night. On the way to the party I plan to hit myself in the knee so my heart hurts less. I’ll greet my friends with a pretend smile and force my laughter, realizing I’m wearing a costume after all.

Published on October 31, 2015 16:30