Helen H. Moore's Blog, page 844

March 5, 2016

Homework is wrecking our kids: The research is clear, let’s ban elementary homework

“There is no evidence that any amount of homework improves the academic performance of elementary students.” This statement, by homework research guru Harris Cooper, of Duke University, is startling to hear, no matter which side of the homework debate you’re on. Can it be true that the hours of lost playtime, power struggles and tears are all for naught? That millions of families go through a nightly ritual that doesn’t help? Homework is such an accepted practice, it’s hard for most adults to even question its value.

When you look at the facts, however, here’s what you find: Homework has benefits, but its benefits are age dependent. For elementary-aged children, research suggests that studying in class gets superior learning results, while extra schoolwork at home is just . . . extra work. Even in middle school, the relationship between homework and academic success is minimal at best. By the time kids reach high school, homework provides academic benefit, but only in moderation. More than two hours per night is the limit. After that amount, the benefits taper off. “The research is very clear,” agrees Etta Kralovec, education professor at the University of Arizona. “There’s no benefit at the elementary school level.”

Before going further, let’s dispel the myth that these research results are due to a handful of poorly constructed studies. In fact, it’s the opposite. Cooper compiled 120 studies in 1989 and another 60 studies in 2006. This comprehensive analysis of multiple research studies found no evidence of academic benefit at the elementary level. It did, however, find a negative impact on children’s attitudes toward school.

This is what’s worrying. Homework does have an impact on young students, but it’s not a good one. A child just beginning school deserves the chance to develop a love of learning. Instead, homework at a young age causes many kids to turn against school, future homework and academic learning. And it’s a long road. A child in kindergarten is facing 13 years of homework ahead of her.

Then there’s the damage to personal relationships. In thousands of homes across the country, families battle over homework nightly. Parents nag and cajole. Overtired children protest and cry. Instead of connecting and supporting each other at the end of the day, too many families find themselves locked in the “did you do your homework?” cycle.

When homework comes prematurely, it’s hard for children to cope with assignments independently—they need adult help to remember assignments and figure out how to do the work. Kids slide into the habit of relying on adults to help with homework or, in many cases, do their homework. Parents often assume the role of Homework Patrol Cop. Being chief nag is a nasty, unwanted job, but this role frequently lingers through the high school years. Besides the constant conflict, having a Homework Patrol Cop in the house undermines one of the purported purposes of homework: responsibility.

Homework supporters say homework teaches responsibility, reinforces lessons taught in school, and creates a home-school link with parents. However, involved parents can see what’s coming home in a child’s backpack and initiate sharing about school work--they don’t need to monitor their child’s progress with assigned homework. Responsibility is taught daily in multiple ways; that’s what pets and chores are for. It takes responsibility for a 6-year-old to remember to bring her hat and lunchbox home. It takes responsibility for an 8-year-old to get dressed, make his bed and get out the door every morning.

As for reinforcement, that’s an important factor, but it’s only one factor in learning. Non-academic priorities (good sleep, family relationships and active playtime) are vital for balance and well-being. They also directly impact a child’s memory, focus, behavior and learning potential. Elementary lessons are reinforced every day in school. After-school time is precious for the rest of the child.

What works better than traditional homework at the elementary level is simply reading at home. This can mean parents reading aloud to children as well as children reading. The key is to make sure it’s joyous. If a child doesn’t want to practice her reading skills after a long school day, let her listen instead. Any other projects that come home should be optional and occasional. If the assignment does not promote greater love of school and interest in learning, then it has no place in an elementary school-aged child’s day.

Elementary school kids deserve a ban on homework. This can be achieved at the family, classroom or school level. Families can opt out, teachers can set a culture of no homework (or rare, optional homework), and schools can take time to read the research and rekindle joy in learning. Homework has no place in a young child’s life. With no academic benefit, there are simply better uses for after-school hours.

Published on March 05, 2016 15:00

“Lying to make life bearable”: Cheryl Strayed interviews memoirist Rob Roberge

I became fast friends with Rob Roberge in 2008, when we were both on the faculty at Antioch University’s low-residency MFA program in creative writing in Los Angeles. His kindness was matched only by the intensity of his intelligence. The students he mentored all raved about his exceptional talents at both nurturing and schooling them. Not long after meeting him I read the first of his books with a feeling that was equal parts curiosity and dread; one always hopes to love the books written by friends. I’m grateful to say I didn’t have to fake a thing. I’m a true blue Rob Roberge fan. With each book, my admiration has only grown. Sentence after sentence, page after page, Rob never fails to astonish me with his dark humor and clear eye, his palpable vulnerability and his beautifully perceptive mind. He’s the kind of writer about whom one senses there is a great amount at stake, and never has that been more apparent to me than when I read his latest book, his first memoir, "Liar." He was good enough to answer my questions about the book here.

How did "Liar" come to be? All of your other books are fiction. What compelled you to write a memoir?

Yes, all of the other ones were fiction, and I think I'll go back to it next time and stay on that side of the fence. Memoir's a tough gig for me (not to say fiction isn't). But, in a way, I'm still not quite sure how this book came to be. I had tried to write a memoir starting maybe in 2010, when I was in a deep depression while recovering from an opiate relapse (it takes a long time for the dopamine receptors to go back to normal), and I found I couldn't write fiction, which didn't help the depression I was already in. So, by default, I guess (hardly the best reason), I started a memoir. And I got about 35 pages in and thought it sucked. But it kind of taught me to write again, and after that I wrote my most recent novel. And it was my most autobiographical novel yet—and I tend to be pretty autobiographical as it is. Then, without thinking about a new book, I started doing these factual little snippets/fragments in second person, and published a few on-line. And some people—most persistently the writer/editor Gina Frangello—told me I had a book there. And my initial reaction (well, and my reaction several times throughout the process and, at times, still) was "who the fuck wants to read about me for 250 pages?" So, in a weird way, I wasn't compelled to write a memoir so much as I was compelled to write my next book, which just happened to be a memoir. I know some projects are ones that writers always know they are going to write, while others kind of drop in your lap. You're like a radio, tuned to the right frequency. Once I got into it, I found it a really interesting, if scary, form to work in. It was new. Something I'd never done before—which is always attractive to me.

Also—and this is the kind of thing only writers themselves tend to care about, not something readers care about—I had a sort of breakthrough moment where I realized that, to some degree, I'd been writing pretty much about myself and my experiences for my whole career in the first three novels, increasingly so. And I thought maybe it was time to cop to it and just say it was me. It seemed, and seems, like the end of a phase of my career. I'm done writing about me. It's time to move on and write about characters who don't share a ridiculous amount of their life story and history with mine. So, the memoir was a goodbye to a certain type of writing for me.

What was it like for you to delve into this form for the first time? Was your experience writing memoir vastly different than writing fiction?

It turned out to be a lot different, which I totally didn't expect. I figured it would be just like writing a novel, only that the material all had to be true. You know—they're both long form narratives, they both seemed like they would have the same types of obstacles and issues, but I figured they'd be more or less the same. And I was very, very wrong. And most of the issues were personal and ethical. On the one hand, no matter how autobiographical a novel was, I could always hide behind the veil of fiction and say I made something up. With memoir, if there was a scene that made me embarrassed or made me cringe or tighten with regret (and those were sort of my litmus test scenes) I figured they were the ones that most needed to be in there. So there was a level of personal exposure that was—and still is—enormously uncomfortable for me. Plus, while I knew going in that I'd be writing my story, I didn't realize I'd be writing other people's stories as well. And that brought in a whole ethical side to it that I hadn't considered. That I had to be honest and honor the story, yet still be aware of other people and treating them with much more respect and care than I treated myself in the book.

It's also just incredibly hard to share some of your worst, toughest moments of your life in print. On some level, I admire writers who do it. On another level, I wonder why any of us do it. It's a question that troubles me, actually. Why do we do this? The book was just released and I already feel some dread that people are actually going to read it, which is of course a ridiculous fear for a writer.

For what it’s worth, I think you wrote about difficult things with an enormous amount of sensitivity. So let’s talk about you calling yourself a liar. I love that "Liar" is the title of this book in part because I think it’s a direct response to the phenomenon of memoirists often being suspected of lying, but I also think it’s a particularly apt title because in the book you examine the ways lies have functioned in your life. In the course of telling us about the times you’ve lied, you tell us the truth. Writers, especially creative nonfiction writers, are constantly asking themselves about the relationship between truths and lies. What did you learn about that while writing this book?

Tough, good question. On the one hand, I wanted it to be a literal examination of the lies I have told over the course of my life and try to examine why I told them. Maybe, to some degree, that is as simple as this great Chekhov quote (actually, it's footnoted as Chekhov misquoting Pushkin, which I always kind of loved): "The lie which elates us is dearer than a thousand sober truths." So, there was that. Lying to make life bearable. But, it was important to me that the book sort of set the record straight and that I wanted to force myself to cop to my own lies, which I try my best to do throughout the text. Everything that happens in the book is true to my memory, unless I tell the reader I lied about it.

But then there's the gray area of memory and invention and how much we can trust our own versions of the narratives we've told to make our lives make some sense to us. There's the great Didion line: "We tell ourselves stories in order to live." And that was a big part of what I was trying to show. But/and, I also use the line in the book that was a guiding principle throughout—Nabokov's line that "memory is a revision." I wanted the book to be something of an indictment on memory itself. And to try and find if there was a way to find the "real" story behind the stories that have shaped my life. I'm not sure if that's even possible, but I tried.

In "Liar," you write about your entire life in a series of vignettes that range from your earliest childhood through recent times. When your book spans so many years it can be hard to know what to include and what to leave out. What parts of your life did you omit from the book and why? On the contrary, what did you include that surprised you?

I omitted (myself—before my editor got his very talented hands on it) things that felt too easy. Where I felt I was playing to my strengths. Where there was a lot of witty dialog (one of my big weaknesses—I can have two people talking funny dialog for five pages that goes nowhere). Beyond that, I, at first, left out some very personal and painful stuff about my marriage—because I simply didn't want to share it. It was too close to the bone, and too personal, I thought, about someone who never asked to marry a writer. I kind of felt everything in there had to be the truth, but not that everything had to be in there. My editor asked and cajoled and pressured me to add more. And I reluctantly did. But that was hard. And there are still lies of omission—the book is a piece of art. A memoir, not a diary. So, it's not all in there. And I guess I omitted, for the most part (except for some of the humor that balanced the darkness of the book), anything that didn't make me squirm. I figured if I was pretty comfortable with it being there, I either hadn't written it correctly, or it shouldn't be there.

I don't know, ultimately, if anything surprised me. Once I decided I wasn't going to stop if I flinched, I figured I was opening myself up to some hard stuff. So, when it came, I kind of expected it. Maybe some of the beautiful moments of my life surprised me. Because I knew it was going to be a dark book. But I've been blessed with a lot of luck and love and beauty. And those came at me, too. Maybe that was a surprise that I should never let be a surprise. Something I should remember every day.

Being surprised by beauty is one of the things I love the most about writing about painful experiences. Almost always what you find when you dig down into that sorrow is that there is also beauty there. You’re right that "Liar" is a dark book. As your friend, there were so many times as I was reading that it pained me to know you were suffering. I wanted to go back in time and help you somehow, but the other thing I noticed--and I think this sense accumulated over time as I got deeper into the book--is how much love has always been in your life; not just romantic love, but friends and family too. It struck me that seeking and finding love is your greatest survival skill. Do you agree?

Yes. I don't know that I could have articulated it exactly that way. I probably would have said something more simply (if still true): that love has saved my life more times than I can count over the years. And, as you point out, not just romantic love, but the love of friends and family, as well...even, at times, of strangers—especially in recovery. There's a selflessness in recovery of helping someone who's at their bottom, and you don't know them, but your only agenda is to help them. Which is a form (a different one, to be sure, but one nevertheless) of love. But the way you put it—that seeking and finding love was/is my greatest survival skill? As I said, I wouldn't have thought to put it that way, but I think it may be true. It wasn't a conscious choice on my part, but maybe a lot of our survival skills/coping mechanisms aren't conscious choices. They are simply how we've learned to adapt and make it through our days and nights. But without having had a plan for it, seeking and finding love may, in fact, be my greatest survival skill. Surely the love of others has saved my life over the years. And I guess I managed some way to find that love.

The book’s structure is what I’d call a collage—we get stories from all parts of your life, told out of chronology, vivid portraits of a single moment, experience, or era. Why did you choose this structure?

Collage—I like that. I thought of it, as I was going along, like a resonant chamber that works the way memory works. Things matter in relation to other things. But, I chose the form for a couple of reasons. One was purely a craft reason—I tend to love nonlinear narrative. There's a story in "The Things They Carried" where the character Lemon dies. And then, because the book is told in a non-chronological way, three (or something—going from memory) stories later, Lemon is alive. And the reader knows he's going to die very soon. But the character has no idea. So, even a scene where he talks about what he's having for lunch or something like that is incredibly moving. If it had been told straight, it would have been dull dialog about lunch. But because the reader is privileged to know the future, it takes on the weight of tragedy. I've always loved that about nonlinear narrative. The way it lets the reader know more than the character at certain parts of the story. I like privileging the reader—never holding back from them.

The other (and this was probably the bigger of the two) reason was that I wanted the book to be something of a structural mimesis of the way my brain works. I have rapid-cycling bipolar with occasional psychotic episodes. I wanted the structure of the book to sort of mimic the way my brain fires when I'm in a manic state. Not a psychotic one, because that would have made it pure gibberish. But I wanted to give a sense of how my brain fires. The structure—for it to be an honest memoir—seemed as important as the content. Form mattered a great deal to me in the honesty of the text. I could have told the same events in a beginning/middle/end narrative and it would have been a lie for me, as far as how I experience the world. Form is content, often.

The opening section of the book is incredibly compelling. In it, you write about a girl you knew when you were 10 who was murdered. Throughout the book you write about others who’ve died by homicide or suicide. Some of them are people you knew, others are strangers. Why did you decide to include their stories in "Liar"?

Well, the murders of the people I knew were ones that were extremely formative and life-changing ones for me. So, they seemed to need to be there. The ones of people I didn't know kind of ties in to the previous question (or answer): they were indicative of the way my mind works. Associating events with one another. And the first murder—the one you mention—when I was 10, shaped my life in an enormous way. I think the fact that my friend was murdered was enough by itself to make a mark on me. But that it went unsolved really altered the way I looked at the world. That something that big—that hideous—could not be solved, really changed me for life. In my youth, it made me not trust men—it felt like any man I saw could be the one who killed her. Later in life—and even as a writer—I don't tend to trust resolve or closure. My narratives don't tend to conclude in the most conventional ways. Some things just stay ugly mysteries with no wrap-up. They don't, as Gordon Lish once said, "reduce themselves to meaning." So, I guess all the murders and suicides of the people I didn't know were in there to show that my life's narrative has an obsession with those things.

The book is written in second person, which I found quite effective. When done well, as it is here, I think it feels strangely more intimate than first person would. Why did you decide to write it this way?

For one, I like second person (though I had never used it before this project). But, when I tried the memoir the first time, it was in first person. And it just wasn't working. And then I just thought, coming back to it three years later, that I would try it in second. And I like—as you mention—the intimacy it has with the reader. It's very intimate—and, interestingly, second person is almost always present tense, so I wanted to put the reader very much into the scenes. But it had a second benefit for me as a writer. A lot of times where I was too uncomfortable to write that "I" did something, there was a distance for me in saying "you" did something. And that distance allowed me to tell the truth more. So, it was weird. It gave me a distance, but it gave the narrative an intimacy.

You write about so many difficult things in "Liar"—about your struggles with drug and alcohol addiction, bipolar disorder, suicidal thoughts, shame, anxiety, and feeling "like an interruption your whole life.” How does writing in general and writing this book in particular interact with those things? Is writing healing for you or does it exacerbate your suffering?

I’ve been asked in interviews, and on the first leg of my tour, if the book was cathartic in any way. I like your word—"healing"—better. And I guess I kind of hoped it might. I don't think of writing as therapy. It's not why I do it. But I thought it might be a by-product of this one. But no. Dredging up some of the most difficult times of my life—and then putting them out there for friends and strangers to read—has really brought up a lot of difficult memories and emotions in me. So I guess I just made things worse. Go figure.

You’re also a musician. Does making music feel different to you than writing?

Making music feels a lot different. If I had to make a choice, I'd keep writing and reluctantly lose music. But, back to music… you make it (when you're not playing solo—which is a nice thing, too) with friends. It's a communication. Even when it's sad, music is often joyous to play. And, for me, to listen to. You're, in general, able to feed off the energy of the audience. And sometimes they dance, which never happens at readings. But, even when you're alone in the room (or when I am, at any rate), music is always a comfort. But then, so is writing. But there are days you just can't write. If you're alone in a room with a guitar and your own stuff isn't doing it for you, you can always play a cover of something/someone you love. You can't play covers in writing. Plus music has melody, which is a beautiful thing. Even the most musical prose has elements of rhythm and pace and flow, but it can't, by its nature, have melody. You can't hum a book. But both forms, at their best, make people feel less alone—which is what art's pretty much supposed to do, I think. You know, that great Malraux quote that art's job is "to comfort the disturbed and disturb the comfortable." Which I have always loved. But the older I get, the more I think we are all haunted. All disturbed. So, art is something that makes people feel less alone in the world. A comfort. A connection. I hope, anyway.

What sentence or passage in the book means the most to you and why?

Wow. Great question. Probably the line where one of my exes, who at the time was only a friend, says the line, "it's the bad parts that make you realize how good the great parts are." It was just one of those tossed-off lines when she said it, and it stayed with me for life. And, the book's not a very cheery or optimistic one, but I hope that line resonates through it. It always has for me.

Published on March 05, 2016 14:00

“Idiocracy’s” curdled politics: The beloved dystopian comedy is really a celebration of eugenics

AlterNet

Last week Etan Cohen, the screenwriter for the 2006 satirical science fiction comedy "Idiocracy," came out and said his film’s nightmare vision of a country run by improperly bred morons had indeed come true. This was predictably followed by a series of editorials, think pieces and takes that seemed to confirm this theory: that Trump was a product of an indefinable “dumbing down” of our political environment:

Is Donald Trump the Herald of ‘Idiocracy’?

The Many Signs That Mike Judge’s ‘Idiocracy’ Is Upon Us

‘Idiocracy’ at 10: Mike Judge’s Cult Film Saw America Run by Imbeciles. Well…

The idiaccuracy of Idiocracy: When life imitates art for better or for the actual worst

It’s no surprise Cohen’s comments would go viral. They fit neatly into a superficially appealing notion that Trump, and the GOP at large, are animated by toothless rednecks and science-denying idiots. While there certainly are both of those, as well as outright white supremacists, in Trump’s constituency, wielding "Idiocracy" as a kind of political shorthand for a new, and therefore meaningful, shift in our political climate is both inaccurate and politically toxic for the left.

First of all, there’s the issue of the film’s pro-eugenic premise: The idea that some future world would be populated by dumb people, while inherently smug, isn't necessarily right-wing. What makes the movie reactionary is the reason for this stupidity: that dumb people are breeding too much, a concept steeped in eugenics, one of the nastiest strains of elitism ever invented by humanity. This is the idea that society incentivizes the wrong people—"idiots"—to have more children, and by the laws of “evolution” this results in more idiots and fewer smart people. This is defined in the opening sequence by the pseudoscience of IQ and given a distinctly classist framing.

While the movie is savvy enough to avoid overt racism, it dives head first into gross classism. The problematic breeders all have hillbilly accents and live in trailers, while those whose eggs we presumably want fertilized epitomize WASP-y stereotypes.

The film's legions of defenders call it satire. Well, the overt argument of the film is that good breeding prevents social problems. The so-called satire proceeds from there, presenting the ridiculous consequences of what will happen if we don't rethink how society breeds. Satire isn’t a get-out-of-jail-free card for all vulgar and illiberal ideas; it has to be pointed and aimed at the powerful, not at vague notions of idiocy illustrated by Appalachian accents and trailer parks, with no consideration of what caused the idiocy in the first place.

The message is cheap and easy and doesn’t require us to meaningfully challenge power, much less ourselves. Instead, we direct our disdain at the pseudo-problem of not being adequately intelligent, as if such a problem operates independent of material factors.

This sentiment is a common thread in left discourse. While nowhere near as reactionary or mean-spirited, being smarter than the other guy was a feature of the Jon Stewart era of political comedy. Snark mattered more than ideology, hypocrisy was the only unforgivable sin, and throwing clips together to make right-wingers look like morons, rather than like people with sinister politics, was the point of "The Daily Show" fan base's political enterprise.

This was also seen in the left’s mockery of the Tea Party, often painted as illiterate boobs, despite the fact that those identifying as Tea Party members have, on average, higher income and education levels than the population in general. While education certainly doesn’t equate to intelligence, to say nothing of worldliness or wisdom, the fact that the Tea Party—and Trump’s voting base—are actually more educated than the general voting base affirms, once again, that the problem isn’t "intelligence" but rather toxic ideology that operates independent of people’s IQ. As Michael Tracy notes, Trump actually won voters with post-graduate degrees in the state of Massachusetts, the "crown jewel of American higher education." How many Nazis had anthropology or psychology degrees? How many were renowned physicists and musicians? Stupidity is not what created the rise of Trump; a deliberate poisoning of the discourse by the wealthy over decades, combined with the left’s inability to offer a clear class-based alternative, has.

Smugness and irony are the intellectual run-off of a left incapable or unwilling to speak clearly in the language of class and class conflict. When we can’t, or won’t, direct our ire at those responsible for the vast majority of the world’s problems, namely the superwealthy and the capitalist system that props them up, we are left with nowhere to aim. Instead, we highlight the problem—in this case political ignorance—without addressing its primary culprits: the consolidation of media into large corporations, a PR-fueled think tank industry fed by billionaires and designed to promote toxic right-wing canards, a sprawling Islamophobia industry, a corrupt campaign financing system, and a decades-long corporate assault on K-12 and postsecondary education.

The idea that a corrosive intellectual and political climate (for which Trump is the current avatar) can be chalked up to too many dumb people having kids or some vague, guiltless notion of "dumbing down," rather than to the deliberate policy directives of the wealthy and their far-right media machinery--to say nothing of the inability of the left to adequately combat this machinery--is one of the more reductionist and politically useless ideas to populate our discourse. We are not living in an idiocracy; we are living in an oligarchy, in which political stupidity is one of many symptoms caused by the large, malignant cancer of inequality and runaway capitalism.

Published on March 05, 2016 13:00

This is how religion fails: Why the biggest religions never live up to their ideals

Over three thousand years ago, the fertile basin of the Middle East gave birth to a new idea that altered human and religious history. The idea was of one God, creator of the world, who is both singular in number and unique in quality; who is independent, self-sufficient, and transcendent, but at the same time profoundly interested in and concerned for the world and humanity; who is loving and forgiving, as well as judging and wrathful; who commands and challenges humanity to be loyal and faithful to the divine and compassionate and just with our fellow human beings. This idea in turn gave rise over the millennia to Judaism, Christianity, and Islam, and their countless denominations and affiliations, each with a distinct take on how life with the one God should be lived. As these religions entered the world stage, alongside their charge to love God and love humanity, they began to wage war with those who preceded or followed them. Wherever monotheism developed, it was accompanied by the belief that the one God could be truly represented or correctly understood by only one faith community. Love of God, or more accurately being loved by God, was perceived to be a zero-sum game—the more one was loved, the less another could be. And so, together with the love of neighbor came the hatred of the other. Together with kindness to those in need came the murder of those who disagreed. Monotheism became a mixed blessing and a double-edged sword. Why have monotheistic religions produced such a checkered past? More important, what type of future do they have in store for us? These questions are particularly pressing since the last two decades have seen religion—particularly the monotheistic Abrahamic faiths of Judaism, Christianity, and Islam—emerge from the twentieth-century quasi-hibernation imposed on it by a coalition of secular nationalism, fascism, communism, and liberal democracy. In this time, we have seen religion arise as a central force in world politics and frequent instigator of global conflict. The majority of the great conflicts and conflagrations of the twentieth century were clashes of a predominantly national and secular political nature. In the last decade of the twentieth century, however, this geopolitical picture began to shift. We witnessed the first stirrings of what would become the multiple manifestations of global Islamic terror, as the Middle East, Africa, and Asia became sites of pitched battles both within Islam and between Christianity and Islam. Semisecular dictators have been replaced by Islamic parties, and Muslim and Jewish religious ideologies are increasingly mainstreamed into political governance in ways that tend to fuel and exacerbate conflicts. Europe has become a frontline in the struggle between secular nationalism and Islam. In the United States, religion is playing an increasingly influential and often contentious role in political discourse and public policy. It is no understatement to say that the last two decades have been painting the twenty-first century in strongly religious hues. THE “GOD DELUSION” DELUSION: FAITH AND ITS CONSEQUENCES The reemergence of God as a dominant force in world affairs, shaping both the fates of nations and the daily existence of ordinary individuals, poses fundamental questions about the role of religion in human life. One of the most significant of these, and the one that guides this book, is this: What does faith in God do to a person? 
That is, when God enters the conversation and dictates human ethical and social norms, is it a force for good or evil? For action or complacency? For moral progress or moral corruption? To ask what faith in God does to a person is not the same as asking what faith in God gives to a person. This second question holds different answers for different types of religious personalities. For the spiritually attuned, the mere experience of God’s presence can fill one’s life with joy, awe, and love. For the more average person of faith, the religious “folk,” faith in God offers access, if not a substantive claim, to God’s power and grace, guidance and forgiveness, in this world and perhaps in the next. Both groups share a clear and profound intuition that something is fundamentally flawed in the notion of a world without God. For the spiritually attuned, it is the flatness of a life without transcendence. For ordinary religious believers, it is the emptiness of a life without hope for order, and the crushing sense of helplessness in confronting the daily challenges of pain and chaos without recourse to a transcendent source of power and agency. Those whose lives entail many journeys through the “valley of the shadow of death” quite understandably prefer to “fear no harm, for you are with me” (Psalm 23:4), rather than to tread such treacherous paths unguided, unprotected, and alone. For the person of faith, to believe is most often not a decision but an outcome of the search to fill a void that is experienced with palpable immediacy in everyday life. As a result, this faith is by and large impervious to critical analysis and counterclaims. The human species may or may not be under the spell of “the God delusion,” as Richard Dawkins claims, and Christopher Hitchens may or may not have been right that “God is not great.” The fact remains that no argument for either of these propositions will be likely to move the person of faith. For the spiritually attuned, the reality and intensity of the religious experience is its own self-validating confirmation. For the average believer, a deep existential need makes faithlessness unimaginable. As Clifford Geertz rightfully posits in "The Interpretation of Cultures," chaos does not undermine faith in God; rather, chaos makes faith in God necessary. Even after the Enlightenment and the Holocaust, the number of atheists in foxholes hovers steadily near zero. My concern with the question of what faith in God does to a person is not meant to question the validity of faith nor undermine the legitimacy of the enterprise. Rather, it is meant to be an internal exploration of the practical and conceptual consequences of faith. The questions of whether God exists or is merely a human fabrication, of whether or what we ought to believe, already fill countless volumes, and belong to a different conversation from those that I explore in this book. I am more interested in examining how our beliefs, and the life path faith sets us on, affect our identities—the way we see ourselves and others, and the way we treat people. How does faith change us? Does it make us better people—kinder, gentler and more compassionate to others? Does it alter our perspective on things like violence, war, and suffering? In light of religion’s resurgence as a significant power in shaping the world, it is critical that the faithful take an honest look at the types of people and communities our systems are producing and evaluate the results according to our own self-described values and aspirations. 
The broad geopolitical and socioeconomic impact of religion in the world today demands that people of faith take ownership of the consequences of their ideologies.

“WHO ASKED THIS OF YOU?”: RELIGION’S NOBLE FAILURE

Based on some of the most oft-quoted verses in monotheistic scriptures—their “greatest hits,” if you will—it might seem surprising that religion could be anything other than an ennobling force in human life. One common feature of all the monotheistic traditions is that their God aspires to create kind, gentle, and compassionate people. Faith in God is not meant merely to inspire one to worship but to change those who worship, and to be a force for generating care and concern for all of God’s creatures, in particular those over whom one holds power. Here are a few prominent examples:
“You shall not wrong a stranger or oppress him, for you were strangers in the land of Egypt. You shall not ill-treat any widow or orphan. If you do mistreat them, I will heed their outcry as soon as they cry out to Me, and My anger shall blaze forth.” (Exodus 22:20–23)

“It is not righteousness that you turn your faces towards the East or West (in prayer). But it is righteousness to believe in Allah and the Last Day and the Book and the Messengers. To spend of your substance out of love for Him, for your kin, for orphans, for the needy, for the wayfarer, for those who ask, and for the ransom of slaves.” (Quran 2:177)
Against the backdrop of these sources, and thousands of similar others, the failure of religion to produce individuals and societies that champion the values advocated in them is both puzzling and deeply unsettling. Even more troubling is that often religious faith itself is the catalyst that emboldens individuals and governments to murder, maim, harm, and control others in the service of “their” God. While it is not credible to suggest that people of faith are definitively worse than those who do not believe, the fact that a life with God does not seem consistently to make people better is a failure of religion on its own terms, and ought to be a source of consternation for any serious believer. This problem is not new, nor does it reflect an outsider’s critique of religion. In fact, it has hovered around monotheistic traditions since their inception, formulated and addressed by the very first carriers of the one God’s word, the biblical prophets:
Cry with full throat, without restraint, raise your voice like a horn and declare unto My people their transgression, and to the house of Jacob their sins. Yet they seek Me daily, eager to learn My ways, as a nation that did righteousness and forsook not the ordinance of their God, they ask of Me righteous ordinances, they delight in drawing near to God. “Why have we fasted, and yet You do not see? Why have we afflicted our soul, and You pay no attention?” Behold, in the day of your fast you pursue your business, and perform all your labors. Behold, you fast for strife and contention, and to smite with the fist of wickedness. You fast not this day so as to make your voice to be heard on high; is this the fast I desire? The day for a man to afflict his soul? Is it to bow down his head as a bulrush, and to spread sackcloth and ashes under him? Will you call this a fast, and an acceptable day to the Lord? Is not this the fast that I have chosen: to loosen the fetters of wickedness, to undo the bands of the yoke, and to let the oppressed go free, and that you break every yoke? Is it not to distribute your bread to the hungry, and bring the poor that are cast out to your house? When you see the naked, that you cover him, and that you hide not yourself from your own flesh? (Isaiah 58:1–7)
Isaiah’s admonitions evoke a rare moment in Jewish antiquity. Idolatry is the prevalent deviance of the biblical era, culminating in divine rejection and the Babylonian Exile. Indeed, for most of biblical history Jews rejected God and opted for idolatry. The Bible can be effectively summarized as the history of a Creator yearning to create a holy people who seek the divine and commit themselves to walking in its ways, but who regularly choose instead to ignore it and walk in the way of the idolatrous Ba’al. Isaiah, however, addresses a scenario in which people actually seem to be turning to God, expressing the desire for relationship through ritual devotion. At first glance, this ought to be one of the great moments in the Bible. At long last, the Jewish people and God are on the same page: “They seek me daily, eager to learn my ways.” Is this not precisely the thing for which God has so long yearned? Yet it is at this very moment of rigorous ritual commitment that God must angrily intervene to let them know they have fallen far astray from the path; that they are lost. God tells them, in essence, that while claiming to be a people who want to follow the divine path, they have abandoned it by ignoring their moral responsibility to others. “Did you not hear Me,” God asks through the prophets, again and again. “There is something else that I want from you.”

Hear the word of the Lord, you chieftains of Sodom. Give ear to our God’s instruction, you folk of Gomorrah. What need have I of all your sacrifices? I am sated with burnt offerings of rams and suet of fatling and bloods of bulls. I have no delight in lambs and he-goats. That you come to appear before me, who asked this of you? Trample my courts no more. Bringing oblations is futile. Incense is offensive to me. New moon and Sabbath, proclaiming of solemnities, assemblies with iniquity, I cannot abide. Your new moons and fixed seasons fill me with loathing. They have become a burden to me. I cannot endure them. And when you lift up your hands, I will turn my eyes away from you. Though you pray at length, I will not listen. Why? Because your hands are stained with crime. Wash yourselves clean. Put your evildoings away from my sight. Cease to do evil. Learn to do good. Devote yourselves to justice. Aid the wronged. Uphold the rights of the orphan. Defend the cause of the widow. (Isaiah 1:10–17)

The people are eager for intimacy with God through the offering of sacrifices. They finally show up with passionate ritual devotion, and God’s response, in essence, is to say, Go away! Why is it that your religious life is completely defined by ritual, by devotion to me, to the exclusion of everything I said about how to treat others? Why are you ignoring the other part of what I have commanded? Why does a life with God—a God who so clearly commands “Love your neighbor as yourself, I am the Lord your God” (Leviticus 19:18)—so consistently fail to achieve its own stated goals?

ASSIGNING BLAME: “THE DEVIL QUOTES SCRIPTURE”

Advocates of religion tend to answer this question by ascribing religious failure exclusively to human weakness and ignorance. It is not a consequence of faith or tradition but of a flawed humanity consumed by a form of original sin. 
Contrary to the gospel of Woody Allen, who posited in his film Love and Death that God is an underachiever, defenders of the Almighty counter that people are the real underachievers, incapable of true commitment to perfect divine directives and to meeting the obligations that, if only followed correctly, would remake them, their families, and their communities. The Bible echoes this tradition when describing humanity in the aftermath of the Flood: “And the Lord said to Himself: Never again will I doom the earth because of man, since the devisings of man’s mind are evil from his youth.” (Genesis 8:21) God may charge us with a mission, to live a life of righteousness and justice . . . but the flesh is weak, and the bar perhaps unrealistically high. From this perspective, God is a romantic, perennially yearning for us to reach for standards of moral sensitivity that will require us to open our eyes and respond to the suffering surrounding us, but we cannot seem to muster the inner fortitude required to live up to those aspirations. Conversely, religion’s critics locate the primary blame for the moral failure of religious people in religion itself. For them, this failure is not the consequence of ignoring the divine command but of fulfilling it. For such critics, religion itself is the original sin that “poisons everything,” as per Christopher Hitchens in "God Is Not Great." They argue that surrounding the scriptures’ advocacy of moral sensitivity and compassion are a multitude of sources commanding holy war, religious discrimination and persecution, and triumphalism, to say nothing of gender inequality, racism, and homophobia. These, they claim, are in fact the dominant themes of these traditions, far outweighing the others, and history seems to bear this reading out. It is no wonder, they argue, that religion has been the driving force behind so much bloodshed and oppression. When the advocates of religion, on the one hand, and critics of God, on the other, make their claims and counterclaims, it is evident that they are reading completely different books. Confronting morally difficult or disturbing texts, advocates tend to rationalize, apologize, minimize, reinterpret, or otherwise divert attention away from them. Conversely, critics who claim that religion is inherently corrupt and corrupting either ignore these traditions’ powerful moral insights or marginalize them as insignificant, clearly outweighed by contradictory imperatives. The interpretive moves of the advocates help to assuage the cognitive dissonance of the enlightened believer, but they do nothing to relieve the profound impact these texts have on the great many others who take their messages at face value. As Shakespeare sharply observed, “The devil can cite Scripture for his purpose,” and it is important to emphasize that the devil does not misquote scripture. He has no need, for the tradition provides him with all the ammunition he requires. Where religion serves to fuel injustice, it comes armed with chapter and verse. On the other hand, the claims of the critics ignore the experienced reality of religious people, for whom these verses that enshrine positive ethics and values are a central, driving component of their religious consciousness, prompting intense moral striving and achievement. To trivialize or gloss over them is to overlook the positive impact that religion has on the lives of countless people and communities, inspiring and compelling them to compassion, charity, justice, and good deeds. 
The picture, ultimately, is more complex than either side tends to recognize.

RELIGION’S AUTOIMMUNE DISEASE

The truth is that monotheistic religion is neither perfectly good—and thus its failures the exclusive result of human weakness—nor perfectly evil, poisoning the character of all who adopt it with a crippling spiritual disease. The central argument of this book is that religion’s (and religions’) spotty moral track record cannot be written off to either a core corruption in human nature or an inherently corrupt scripture. Rather it is my contention that a life of faith, while obligating moral sensitivity, also very often activates a critical flaw that supports and encourages immoral impulses. These impulses, given free rein to flourish under the cloak of religious piety, undermine the ultimate moral agendas of religions and the types of communities and societies they aspire to build. The argument of this book is that this critical flaw, when recognized, can be overcome.

This frequently overlooked phenomenon that accounts for the moral underachievement of our monotheistic traditions is what I term religion’s “autoimmune disease”—a disease in which the body’s immune system, which is designed to fight off external threats, instead attacks and destroys the body’s own healthy cells and tissues. This diagnosis is meant to help conceptualize the dynamics through which religions so often undermine their own deepest values and attack their professed goals. While God obligates the good and calls us into its service, God simultaneously and inadvertently makes us morally blind. The nature of monotheism’s autoimmune disease is that God’s presence, and the human religious desire to live in relationship with God, often distracts religion’s adherents from their traditions’ core moral truths. Such a presence can so consume our field of vision that we see nothing other than God (a recipe for ethical bankruptcy); can lead to claims of chosenness that encourage self-aggrandizing reflexivity (transforming us into people who see only ourselves); or can cause us to see scripture as morally perfect, despite the failures embedded within it (thereby sanctifying the morally profane).

Ultimately, I believe that religion’s record of moral mediocrity will persist as long as communities of faith fail to recognize the ways in which our faith itself is working against us. In other words, only when we are able to discern, within ourselves and our traditions, the symptoms of religion’s autoimmune disease will we be able to begin developing remedies that enable religion to heal itself and reclaim its noble aspirations.

Excerpted from "Putting God Second: How to Save Religion from Itself" by Rabbi Donniel Hartman (Beacon Press, 2016). Reprinted with permission from Beacon Press.

Published on March 05, 2016 11:00

The American inferno behind our election obsessions: How we’re ignoring that millions of lives are in ruin

We know whom the voters cast ballots for on Super Tuesday, and thanks to exit polls have some sense of what their rationale was for their choices. But as we extrapolate these results to see what they portend for the general election, what do we really know about the lives of the people who voted? Sad to say, in the midst of our horse race obsessions, really very little.

All told, the dozen Super Tuesday states are a pretty big sample group, with a total population of 85 million, almost a quarter of the U.S. population. Five of the states are more diverse than the country. Five of the states have a higher percentage of college graduates than the nation as a whole, which has 29.3 percent of adults with at least a bachelor's degree. Four have college graduation rates that are significantly lower than the U.S. as a whole. Across America 13.1 percent of those living here are foreign born, but in Texas and Massachusetts that percentage is 16.5 percent and 15.3 percent, respectively. But in five states the portion of immigrants now calling the U.S. home is less than half the national percentage of foreign born.

Pundits this year are describing primary voters in general as being cranky, but when pressed for an explanation, the TV experts usually get quite vague about just what has voters upset -- which, in a way, kind of trivializes popular sentiments. "How about those wacky, 'fired up' voters!" If you're lucky, what you’ll get from these high-paid groupthink prognosticators will be something about how the people who are turning out to vote think the "fix is in" in our politics, with the super-wealthy and multinationals corrupting the process by way of billions in dark money and campaign cash. Another commonly expressed rationale for voter discontent is disappointment over what has been the weakest and most anemic recovery since World War II. But that sugarcoats what the data shows, which is that tens of millions of Americans are idle and that several million desperately need full-time work but are being kept permanently in part-time status. In every one of the dozen states that voted Tuesday, the ranks of those not working have grown since the start of the Obama tenure.

And while this decline in labor force participation has deep historical roots that go back to the 1970s, this mega-trend accounts for the increasing irrelevance of the top-line unemployment number that is the gold standard of central bankers and the clueless business press. Demographers blame the long-term shrinking of our labor force on an aging population, but that doesn’t account for it all, especially when we continue to see historically high youth unemployment rates that in some neighborhoods of color can approach 50 percent for young men.

In a state-by-state analysis of these Super Tuesday populations, we see a country in economic stagnation, and even subtle decline. But you won’t pick up on this if you just look at the national aggregation of economic performance data that Wall Street uses to spin its roulette wheel and central bankers use to guide monetary policy. The Great Recession and the Wall Street rape of the national economy took a much greater toll on the American economy and family than our national leadership wants to admit, and it becomes more apparent when you look at specific jurisdictions, like counties and states. Now, as we come up on what will be close to a decade since the crash, the generational consequences of what transpired will increasingly become apparent and perhaps compel more radical action. 
We see, based on comparative U.S. Census data put out on the occasion of Super Tuesday, that in two-thirds of the states that cast ballots, poverty is up. In a state like Texas -- considered a "success story," where total poverty was down slightly -- 56.3 percent of single mothers with children under 5 were living in poverty, up significantly from 2010. Only in Colorado does the census data show real progress in poverty reduction. This tracks with other recent U.S. Census statistics that show that over the Obama years, poverty has gone up in a full third of America’s counties, and only actually declined in 4 percent of them. On the home front, this ongoing stagnation and decline continues to cause social dislocation, which largely flies under the radar. Consider that in seven of the 12 Super Tuesday states, the number of grandparents who have had to step up and take responsibility for raising their grandchildren has increased since President Obama’s first term. All told, close to 900,000 grandparents in Super Tuesday states are standing in for a missing generation, roughly a fifth of those doing so across the country. No doubt this is a profile of an America that is a "stucknation," one you won’t see depicted in our corporately controlled broadcast news media, which still scratches its head over just what all the fuss is about.

Published on March 05, 2016 08:59

The latest Zika scare: Virus linked to rare neurological condition

Scientific American LONDON - French scientists say they have proved a link between the Zika virus and a nerve syndrome called Guillain-Barré, suggesting countries hit by the Zika epidemic will see a rise in cases of the serious neurological condition. Guillain-Barré is a rare syndrome in which the body's immune system attacks part of the nervous system. It usually occurs a few days after exposure to a virus, bacterium or parasite.

In a retrospective study analysing data from a Zika outbreak in French Polynesia during 2013 and 2014, researchers led by Arnaud Fontanet of France's Institut Pasteur calculated the estimated risk of developing Guillain-Barré syndrome at 2.4 for every 10,000 people infected by Zika. "This work is significant because it allows for the confirmation of the role of Zika virus infection in the occurrences of the severe neurological complications that constitute Guillain-Barré syndrome," said Fontanet, Pasteur's head of emerging diseases epidemiology. "The regions which are affected by the Zika virus epidemic are likely to see a significant increase in the number of patients with serious neurological complications, and when possible, should increase the capacity of health-care facilities to receive patients needing intensive care."

The World Health Organization (WHO) has declared an outbreak of the mosquito-borne Zika virus spreading from Brazil an international health emergency. This declaration was largely based on evidence linking Zika to a birth defect known as microcephaly, marked by a small head and underdeveloped brain, but the WHO is also concerned about rising reports of cases of Guillain-Barré syndrome in countries hit by Zika. It is not yet clear whether the Zika virus actually causes microcephaly in babies, but experts say the evidence of a link is growing.

Fontanet's team analysed data from 42 patients who developed Guillain-Barré syndrome at the time of the French Polynesian epidemic and found that every one had evidence of a previous infection with Zika. Tests also showed 93 percent of them had been infected with Zika recently -- within three months prior to developing Guillain-Barré syndrome. Jeremy Farrar, an infectious disease specialist and director of the Wellcome Trust global health charity, said the study, published in The Lancet medical journal, "provides the most compelling evidence to date of a causative link" between Zika and Guillain-Barré syndrome. "The increase in reported cases of Guillain-Barré in Brazil and other South American countries seems to suggest that a similar situation may be occurring in the current outbreak, although the link here is yet to be proven definitively," he said in an emailed statement.

According to WHO, even with the best healthcare services available, some 3 to 5 percent of Guillain-Barré syndrome patients die from complications, including blood infection, lung clots, cardiac arrest and paralysis of the muscles that control breathing. 

Published on March 05, 2016 08:00

March 4, 2016

My geriatric “catfishing” cautionary tale

OK. Let’s first agree on this: We’ll change names to protect the innocent. And the guilty, too. I’ll change his name just because I’m feeling generous, although I don’t know why.

We are all too conscious of the perils the Internet, particularly its social networks, poses for young people—inexperienced and vulnerable, teenagers and twenty-somethings can be gullible and prone to rushing enthusiastically into something they later regret. Parents are constantly reminded to monitor their children’s digital activities: Who are they talking to? What are they talking about? Is the person on the other side of the screen really who s/he claims to be on a Facebook profile?

Amid the hysteria, it’s easy to forget that any of us, no matter how old and “experienced,” no matter how savvy we like to think we are, can be vulnerable to digital dissembling. No longer shamed into the shadows, the online dating scene thrives, and more and more adults look to it to find love—and to find love again, after a divorce or the death of a spouse. These websites, many emphasizing romance and long-term commitment, hide lies aplenty—about height, weight, education, the real year a profile picture was taken—underneath their veneer of respectability. Your motives for signing up with Match.com might be honorable, but there’s no way to know if the same is true for everyone else. Yet just because the possibility exists, is it reasonable for you to assume that your divorced mother’s new boyfriend, or your widowed grandfather’s new companion, might not be who they seem? Even if something about a person doesn’t sit quite right, who are you to step into an older adult’s life and spoil their new chance at happiness? Surely they are capable of making their own decisions.

Having dismissed the melodrama of “catfishing” as a phenomenon of teenage pop culture, rather than an actual threat in the rational adult world, I never thought about these questions. That is, until I found myself struggling for answers as I faced an elderly man whose story literally didn’t add up.

We begin in a picturesque English country village, where nary a car appears on the narrow road that snakes through it to disturb the peace. Thatched cottages, with roses climbing the walls. Quaint tea rooms serving scones and jam. Ramshackle pubs offering warm local ale. This is the village where Margaret lives.

She’s a second- or third-something, possibly a great-something? I’ve never really grasped exactly what those terms mean. But she is my relative, and I’m very fond of her.

When Margaret’s husband, Derek, died eight years ago, she was devastated. Though English by birth and upbringing, by then I’d moved to New Zealand, and I didn’t have a chance to visit her until I returned in 2011. Then in her mid-seventies, she seemed in reasonable spirits, all things considered; “on the up,” you might say. Margaret is resilient, full of positivity and warmth. As a child, back in the late 1960s and early 1970s, I remember her gathering as many family members as she could for long summer lunches, for which she would dress in vibrant, flowing robes and put the Rolling Stones or Led Zeppelin records on her oversize hi-fi stereo. We younger children played in the overgrown garden of the large house in which she and Derek lived while the adults boozed away the afternoon.

She didn’t mention to me that she was actively seeking companionship through the Internet. But why should she? It was none of my business. We walked a while around her charming village and chatted over a cup of tea before I set off on my journey back to Yorkshire.

It was a couple of months later when Margaret called me, sounding slightly sheepish.

“Well….” A pause. “I’ve, um, met someone.”

Why on earth not? I thought. “That’s great,” I said. “Wonderful!” Though, as I spoke, some uncertainty entered my head. Margaret is a smart cookie, but her beloved Derek’s death had hit her hard. Even though she was lonely, was this—starting all over again with someone new—really what she wanted?

“Who is this someone?” I asked cheerfully.

“Name of Roy,” said Margaret. “We met up in London the other day. I think you’d like him.”

“How did you meet him?”

“Through the Internet!” Imagine my surprise upon learning that Margaret is a first-league silver surfer, far more proficient online than I ever could be.

“I look forward to meeting him,” I said, sincerely.

With the demands of my own work and family, it was another five weeks before I could make it down to Margaret’s village. By that time, as I discovered to my surprise when I walked through her front door, Roy had moved in.

Margaret ushered me through her small, dark hallway into the main room of her cottage. Roy, sitting in what had been Margaret’s favorite armchair, stood unsteadily, his cold, wary blue eyes fixing upon me. He’s nervous, I thought. Understandable.

“Hello,” I said, full of artificial bonhomie, offering my hand. “I’m Nicholas.”

He examined my hand for a short moment, then he shook it.

“Roy,” he said.

Already, I didn’t like him. But I said, “Very pleased to meet you.” This, after all, is the English way.

We sat down to eat in Margaret’s cramped little kitchen. As she strove anxiously to keep our plates filled and our glasses topped up, Roy and I eyed each other beneath the veneer of polite conversation. Before moving into Margaret’s cottage, I learned, Roy had enjoyed a comfortable, well-heeled pensioner’s life in the Kent town where he’d spent most of his adulthood. He’d grown up in North London, and served his country in the Second World War. His career had taken several turns; at one point he had been a financial services consultant, and, later, he owned a market garden and sold vegetables to a major supermarket chain. I found him interesting, but when pressed on detail, Roy became vague.

The discomfort in the air was palpable, but we parted on amicable terms, promising to meet again shortly. On my drive back north, I felt oddly uneasy at leaving Margaret alone with Roy. I’d thought she’d said he was 79, but a quick calculation showed that, by the end of the war, he would have been 14 or 15 years old. Had I got it wrong? Or had Margaret made a mistake?

The following weekend I traveled down again to Margaret’s village. By now, Roy seemed to feel fully at home; having sized me up the previous week, he’d clearly decided I was no threat to him. He was more affable and voluble over lunch, offering anecdotes about his time in the Royal Air Force in 1944 and his Islington childhood. He told me about his son and his wife, who lived only a few miles from his flat in Kent. Later, he sombrely described his wife’s painful and protracted battle with cancer.

“Things weren’t so advanced back then,” Roy said. “I had to nurse her through the last few months more or less single-handed.”

Before I left that evening, I confirmed his age with Margaret. “Seventy-nine,” she repeated. I didn’t pursue the question of her arithmetic, but now I knew something about Roy literally didn’t add up. And I was concerned.

Margaret believes strongly in family. Even though we have all dispersed, making those glorious lunches of my childhood impossible, she is meticulous in remembering our birthdays, sending lavish gifts and cards that she illustrates herself. So, when it came to Roy, it was natural that she should want to meet his family.

She persuaded him to invite his son and daughter-in-law over for a weekend. I don’t know how she got him to agree. He must have understood the danger of her request, but I suppose he somehow thought he could contain it. My wife and I were also invited for dinner on the Saturday. We checked into a local hotel, where Graham and Janice were also staying, so we gave them a lift to Margaret’s cottage.

Graham and Janice were pleasant company, and dinner was congenial. We “youngsters” spent most of the time talking, with Margaret making several interjections to keep things moving nicely. Roy watched us, leaning his bulk back in his chair, slowly blinking like a lizard.

On the way back to the hotel, I decided to cross-check some of my facts with Graham and Janice.

“So you’re an only child, Graham,” I said by way of conversational opener.

“No. There’s my sister, of course,” he said.

“Sorry. Of course. Remind me about her.”

“He’s probably hardly ever mentioned her because she doesn’t get on with him. For a long while I didn’t, either.”

“Oh?”

“No. We didn’t speak for years after what he did to my mother.”

“Oh. I didn’t know about that.” I was beginning to feel very anxious for Margaret.

“No. He wouldn’t have said anything. When I was about 10, he left us in the lurch. My sister was only 7. We had no money. Mum had to rely on welfare. It was a struggle.”

“Where did Roy go?”

“He went off with his fancy woman. Nottingham, I think it was? They had two kids. Then, after five years or so, he came back. Mum accepted him, she was that desperate. I wouldn’t speak to him for a while afterwards. But, you know. Time and all that.”

“A great healer.” Had he told the truth about anything?  “You must have all been so upset when your mother….”

“Pardon?”

“When your mother died. At least he tended to her in her final months. It must have brought you closer.”

“What’s he been telling you? He’s been at it again, hasn’t he?”

It seemed that Roy had once again left his wife after he retired. She’d had a number of “episodes” requiring psychiatric care, and he’d had enough. She remained in the family home, happier and better-adjusted than she ever was while she was with Roy.

And yet Graham still remained in contact with his father.

“Hard to explain. I’ve little to do with him really. Just the normal pleasantries. My sister doesn’t speak to him at all. But when Margaret invited us…. I suppose it was a chance to see whether he was up to his old tricks. This is what he does. He’ll be cruising the Internet just like he once would cruise bars and cafés. He can be very charming.”

Did Margaret suspect anything? Some of Roy’s lies she simply believed, that much was clear, but I wonder if some she simply accepted as the price she had to pay for companionship. I was paralyzed. What should I do? Margaret is, after all, many years my senior. It would have been impertinent for me to interfere, to tell her that she was wrong to trust the man who’d moved into her home. She seemed to be happy, and not in the path of any immediate harm.

I concentrated on being supportive, keeping in close touch with Margaret and visiting as often as I could. I took care not to appear judgmental of Roy in Margaret’s hearing, and was pleasantness personified with him directly. He was not difficult to catch out, but I was careful not to put him on the hook. I simply waited. It was for Margaret to make her choice. All I could do was to be ready if things changed.

About three months later, Margaret rang me, somewhat out of breath. My immediate thought was what has he done? But I didn’t need to be concerned. About that, at least.

Margaret had become wise to Roy. The accumulation of lies had grown so large that something between them had snapped. The old Margaret, sharp as a tack, was back—and annoyed at having been such an old fool. Angry at Roy for tricking her, but mainly at herself for letting him delude her. She wanted to end the relationship.

By this time, Roy had apparently sensed something was awry and was keeping close tabs on her. “I’m afraid of him,” she said. “Physically afraid. I don’t know what to do!”

Conflict avoidance is something of a way of life for me, often miring me in situations that could have been avoided had I been more direct from the start. I shun physical violence. I don’t get into “alpha male” one-upmanship. But a wimp’s gotta do what a wimp’s gotta do. Margaret needed my help.

She and I arranged covert meetings. She’d let herself out of the house for an hour or so on some pretext, and we’d meet in a local coffee shop to plan our next move in hushed tones. Margaret ran through the mental script of what she would say, and I tried out a couple of ideas about the part I would play. We talked timing and fallbacks. At home, I researched the legal position of the situation, and agonized over safety issues—for Roy’s sake as much as for Margaret’s.

Eventually, the day arrived. My wife and I turned up at Margaret’s cottage at a pre-appointed time, and she let us in. Roy looked at us suspiciously over the top of his newspaper.

Margaret delivered her short speech. It was over, she said. She’d enjoyed meeting Roy. It’d been fun getting to know him. But she’d grown tired. No, she’d become exhausted with worry, her blood pressure was sky high and she regularly had palpitations. She’d been foolish to let him come and live with her. If they’d had an arm's-length relationship, maybe they would still be friends. But suddenly she’d realized that almost everything he told her about himself was a lie, that he was sponging on her good nature, that he was idle and manipulative and cared nothing for her. There was no going back; this was the end of it.

I helped Roy pack, ordered a cab, paid the driver several hundred pounds, and Roy was on his way back home. All we had to do now, for Margaret’s peace of mind, was change the locks on the house and her telephone number.

Success, finally. I felt about as tired and down as I could possibly feel. Confronting an 86-year-old man and turning him out of his home is not an activity I’d recommend to anyone. But it had to be done, and it was.

I’ve no idea what’s become of Roy. If he’s still around, he’ll be well into his nineties. Despite everything, I honestly wish him no ill. I still wonder what his motive was. Very basic, I suspect: he simply wanted someone to look after him so that he could live out his final years comfortably. The easy life.

Margaret, meanwhile, is thriving. She regards Roy as a blip in her life, a temporary madness. I see her regularly and she remains as delightful as ever.


The following weekend I traveled down again to Margaret’s village. By now, Roy seemed to feel fully at home; having sized me up the previous week, he’d clearly decided I was no threat to him. He was more affable and voluble over lunch, offering anecdotes about his time in the Royal Air Force in 1944 and his Islington childhood. He told me about his son and daughter-in-law, who lived only a few miles from his flat in Kent. Later, he somberly described his wife’s painful and protracted battle with cancer.

“Things weren’t so advanced back then,” Roy said. “I had to nurse her through the last few months more or less single-handed.”

Before I left that evening, I confirmed his age with Margaret. “Seventy-nine,” she repeated. I didn’t pursue the question of her arithmetic, but now I knew something about Roy literally didn’t add up. And I was concerned.

Margaret believes strongly in family. Even though we have all dispersed, making those glorious lunches of my childhood impossible, she is meticulous in remembering our birthdays, sending lavish gifts and cards that she illustrates herself. So, when it came to Roy, it was natural that she should want to meet his family.

She persuaded him to invite his son and daughter-in-law over for a weekend. I don’t know how she got him to agree. He must have understood the danger of her request, but I suppose he somehow thought he could contain it. My wife and I were also invited for dinner on the Saturday. We checked into a local hotel, where Roy’s son, Graham, and his wife, Janice, were also staying, so we gave them a lift to Margaret’s cottage.

Graham and Janice were pleasant company, and dinner was congenial. We “youngsters” spent most of the time talking, with Margaret making several interjections to keep things moving nicely. Roy watched us, leaning his bulk back in his chair, slowly blinking like a lizard.

On the way back to the hotel, I decided to cross-check some of my facts with Graham and Janice.

“So you’re an only child, Graham,” I said by way of conversational opener.

“No. There’s my sister, of course,” he said.

“Sorry. Of course. Remind me about her.”

“He’s probably hardly ever mentioned her because she doesn’t get on with him. For a long while I didn’t, either.”

“Oh?”

“No. We didn’t speak for years after what he did to my mother.”

“Oh. I didn’t know about that.” I was beginning to feel very anxious for Margaret.

“No. He wouldn’t have said anything. When I was about 10, he left us in the lurch. My sister was only 7. We had no money. Mum had to rely on welfare. It was a struggle.”

“Where did Roy go?”

“He went off with his fancy woman. Nottingham, I think it was? They had two kids. Then, after five years or so, he came back. Mum accepted him, she was that desperate. I wouldn’t speak to him for a while afterwards. But, you know. Time and all that.”

“A great healer.” Had he told the truth about anything?  “You must have all been so upset when your mother….”

“Pardon?”

“When your mother died. At least he tended to her in her final months. It must have brought you closer.”

“What’s he been telling you? He’s been at it again, hasn’t he?”

It seemed that Roy had once again left his wife after he retired. She’d had a number of “episodes” requiring psychiatric care, and he’d had enough. She remained in the family home, happier and better-adjusted than she ever was while she was with Roy.

And yet Graham remained in contact with his father.

“Hard to explain. I’ve little to do with him really. Just the normal pleasantries. My sister doesn’t speak to him at all. But when Margaret invited us…. I suppose it was a chance to see whether he was up to his old tricks. This is what he does. He’ll be cruising the Internet just like he once would cruise bars and cafés. He can be very charming.”

Did Margaret suspect anything? Some of Roy’s lies she simply believed, that much was clear, but I wonder if some she simply accepted as the price she had to pay for companionship. I was paralyzed. What should I do? Margaret is, after all, many years my senior. It would have been impertinent for me to interfere, to tell her that she was wrong to trust the man who’d moved into her home. She seemed to be happy, and not in the path of any immediate harm.

I concentrated on being supportive, keeping in close touch with Margaret and visiting as often as I could. I took care not to appear judgmental of Roy in Margaret’s hearing, and was pleasantness personified with him directly. He was not difficult to catch out, but I was careful not to put him on the hook. I simply waited. It was for Margaret to make her choice. All I could do was to be ready if things changed.

About three months later, Margaret rang me, somewhat out of breath. My immediate thought was: What has he done? But I didn’t need to be concerned. About that, at least.

Margaret had become wise to Roy. The accumulation of lies had grown so large that something between them had snapped. The old Margaret, sharp as a tack, was back—and annoyed at having been such an old fool. Angry at Roy for tricking her, but mainly at herself for letting him delude her. She wanted to end the relationship.

By this time, Roy had apparently sensed something was awry and was keeping close tabs on her. “I’m afraid of him,” she said. “Physically afraid. I don’t know what to do!”

Conflict avoidance is something of a way of life for me, often miring me in situations that could have been avoided had I been more direct from the start. I shun physical violence. I don’t get into “alpha male” one-upmanship. But a wimp’s gotta do what a wimp’s gotta do. Margaret needed my help.

She and I arranged covert meetings. She’d let herself out of the house for an hour or so on some pretext, and we’d meet in a local coffee shop to plan our next move in hushed tones. Margaret ran through the mental script of what she would say, and I tried out a couple of ideas about the part I would play. We talked timing and fallbacks. At home, I researched the legal aspects of the situation and agonized over safety issues—for Roy’s sake as much as for Margaret’s.

Eventually, the day arrived. My wife and I turned up at Margaret’s cottage at the appointed time, and she let us in. Roy looked at us suspiciously over the top of his newspaper.

Margaret delivered her short speech. It was over, she said. She’d enjoyed meeting Roy. It’d been fun getting to know him. But she’d grown tired. No, she’d become exhausted with worry; her blood pressure was sky-high and she regularly had palpitations. She’d been foolish to let him come and live with her. If they’d had an arm’s-length relationship, maybe they would still be friends. But suddenly she’d realized that almost everything he told her about himself was a lie, that he was sponging on her good nature, that he was idle and manipulative and cared nothing for her. There was no going back; this was the end of it.

I helped Roy pack, ordered a cab, paid the driver several hundred pounds, and Roy was on his way back home. All we had to do now, for Margaret’s peace of mind, was change the locks on the house and her telephone number.

Success, finally. I felt about as tired and down as I could possibly feel. Confronting an 86-year-old man and turning him out of his home is not an activity I’d recommend to anyone. But it had to be done, and it was.

I’ve no idea what’s become of Roy. If he’s still around, he’ll be well into his nineties. Despite everything, I honestly wish him no ill. I still wonder what his motive was. Very basic, I suspect: he simply wanted someone to look after him so that he could live out his final years comfortably. The easy life.

Margaret, meanwhile, is thriving. She regards Roy as a blip in her life, a temporary madness. I see her regularly and she remains as delightful as ever.

Published on March 04, 2016 16:00

Trump really does stand for B.S.: “Trumpery,” an old-fashioned word that’s proving useful today

2015’s Word of the Year was singular they, according to the American Dialect Society, and it’s far too early to speculate on this year’s top word. But it would be easy to pick the word of the Presidential race so far: trumpery. This old word for various types of bullshit has gotten new life thanks to bullshit maestro Donald Trump. It’s become a common pastime to link Trump with definitions of trumpery on social media such as Twitter and publications such as Mother Jones. Here in Salon, Randy Malamud suggested we pull a Santorum on Trump by reviving an obscure meaning of trump: to fart loudly. But we hardly need a lexical campaign when trumpery exists. Trump’s name is already as stuffed with horseshit as his words. People mean a lot of different things when they call bullshit, and there is a bullshit spectrum, which includes lies, flim-flams, boasts, nonsense, and other rubbish (as I discuss in my book "Bullshit: A Lexicon"). Trumpery has been used to cover a large part of that spectrum, starting with deceit in the 1400s when it was borrowed from the French tromperie. Trumpery, like the verb to trump, involved some sort of cheating, swindling, hornswoggling, or bamboozlement. Early examples from the Oxford English Dictionary (OED) find the word in revealing pairs: “tromperyes and deceytes” and “trumperies and double dealings.” From the start, trumpery was not to be trusted. By the 1500s, trumpery was being applied to a different sort of crap: trifling, insignificant objects. Shakespeare used the term with this sense in "The Winter’s Tale" (“I haue sold all my Tromperie”) and "The Tempest" (“The trumpery in my house, goe bring it hither For stale to catch these theeues.”). That second use is the first known example of the term applying to stuff that’s not just worthless, but worthless and showy. A 1789 example in some travel writing by Hester Lynch Piozzi gives a sense of trumpery’s non-value: “A heap of trumpery fit to furnish out the shop of a Westminster pawnbroker.” Anything that comes by the heap, bunch, or load isn’t worth much. As these meanings took hold, trumpery was also being used as it is today: for nonsense, malarkey, and bunk. Today, trumpery can refer to just about any sort of balderdash, but it used to refer specifically to religious or woo-woo ideas. Given his recent spat with the Pope, Trump himself might appreciate this 1731 use from travel writer Joseph Pitts: “They blame the Papists for having so many Trumperies in their Churches.” Trumpery has had a few other obscure meanings. Misogynistic Trump might be pleased to learn that a trumpery can sometimes be a strumpet. Another meaning refers to weeds or anything else gunking up a garden. Rarely, trumpery has been an adjective. Though it sounds odd today, the OED has examples of trumpery brooch, trumpery performance, trumpery new house, trumpery rhetorician and trumpery quarrel. All those uses involve the insignificant or worthless meaning. An extremely rare variation popped up in 1886: “How these things impress the lover of Gothic who dwells in a country of churches of inexpressible trumperiness and shabbiness!” Trumperiness doesn’t exactly roll off the tongue like truthiness, but then again neither does President Trump. In these dark days, we need all the lexical ammunition we can find. Much like when Joe Biden used malarkey in a debate with Paul Ryan, the lexical stock of trumpery has risen thanks to Trump. Writers find it hard to resist linking Trump to this older noun. For example:
“The Trumpery Before Trump” — Feb. 17, Huffington Post
“It’s a non-evidence-based approach to politics, what you might call Trumpery. It’s terribly dangerous.” — Jan. 25, Evening Standard
“‘Trumpery’ is defined as showy but worthless nonsense or trickery. On Tuesday in Iowa, trumpery was on display as Republican presidential front-runner Donald Trump was endorsed by former half-term Alaska governor and 2008 GOP vice presidential candidate Sarah Palin.” — Jan. 24, Online Athens
“Trumpery is coming to Louisiana. Naming his state campaign team in Louisiana, The Donald says he will soon honor us with a visit.” — Jan. 23, The Advocate
“People keep lauding Trump's business success. I don't find five bankruptcies business success. It is a terrible record. Someone who calls him a successful business man is just spouting trumpery.” — Jan. 22, PJ Media
Word evolution never stops, and it’ll be interesting to see if Trump gives an already disparaging word a whole new odor—and perhaps a revised origin. Sometimes, trumpery gets a new spelling as Trumpery—to emphasize that it’s not just any trumpery, but trademarked Trumpery from the orange blowhard himself. It would be fitting if some people began to wonder if Trumpery had been coined for Trump, as the capitalization implies. That wouldn’t be the first time a word evolved due to misunderstanding. Eggcorns—so named because of a misspelling of acorn—are spelling changes that are incorrect but logical, and sometimes they stick. That’s why chaise lounge is probably more common than the original chaise longue, and it’s why free reign may someday overtake free rein. Logical mistakes tend to multiply, and I can’t imagine a more logical whoopsie than assuming Trump begat trumpery. If ever there was a time to throw accurate etymology out the window and go with a good story, this is it. But as scary (and racist and misogynistic and Islamophobic) as candidate Trump is, we should be thankful for the lexical synchronicity of his name. We may never see the like again, unless future elections feature candidates named Marla Malarkey, Ed Twaddle, Jennifer Gibberish and Bobby Bullshit.

Published on March 04, 2016 16:00

Relic hunter: A missing Christian relic, the fall of Nazi Germany and a mystery that flummoxed historians for centuries

As locked-room mysteries go, this one was a doozy. A famous Christian relic disappeared from a locked vault beneath the bomb-blasted streets of Nuremberg, just as the Second World War was ending and the famous trials of the Nazi elite were set to begin. The Holy Lance, or Spear of Destiny, was the iron pilum used by the Roman legionnaire Longinus to pierce Christ’s side as he hung on the cross, to see if he had died. According to the Gospel of John (19:34) and apocryphal sources, “water and blood” flowed from Christ’s wound and into Longinus’ eyes, and he was immediately converted. Through history, four different spearheads have been claimed as the original (two of which are actually pieces from the same head). The most famous of these candidates had been part of Charlemagne’s treasury, and had ended up in the Imperial Treasury at the Hofburg Palace in Vienna. When Hitler annexed Austria, he had the Holy Lance and other treasures that were part of the Reichskleinodien, the Crown Jewels of the Holy Roman Emperors, brought to Nuremberg. Hitler and Himmler were fascinated with the occult, and saw Nuremberg as the spiritual center of the Nazi Party—Hitler planned to have himself crowned there as a latter-day Holy Roman Emperor at the war’s end. When the Allies stormed Nuremberg, a German-born Monuments Man, Walter Horn, was called in to locate the Crown Jewels—which had gone missing from the locked subterranean vault in which they were kept, and for which only three people had the key. It was a hunt for stolen treasure, but a treasure with an enormous symbolic significance for Nazis and Christians alike. If, that is, it was authentic. A cult fascination with religious relics and their supposed supernatural properties began shortly after the establishment of Christianity. The purported tomb of Saint Peter, located in an ancient Roman necropolis hidden directly beneath the high altar of Saint Peter’s Basilica in Rome, was initially found to be empty, and the Vatican archaeologist, Monsignor Ludwig Kaas, who found it in 1939, feared that he had arrived too late—that early Christian relic hunters had stripped away Peter’s bones for their sacred value and the supposed power of the remnants of saints to work miracles and heal the sick. But investigation led to an astonishing discovery: the tomb had been built expressly to trick relic hunters. The bones were actually entombed in a hollow in one of the four walls of the tomb, not in the cavity of the tomb itself. This demonstrates an awareness that the bones of saints would be sought, traded and sometimes stolen by the faithful, even within a few years of Christianity’s foundation. Throughout the Middle Ages, a lively trade in relics flourished, including numerous fakes. With thousands of soldiers streaming home from the Holy Land after the Crusades, it was not hard to believe that a fragment of bone might indeed have come from some saint, whether or not it did. Famous relics, or objects claimed as authentic relics, made their way from Palestine through Europe, from the Crown of Thorns (currently enshrined at Notre Dame de Paris) to the Shroud of Turin (a suspected forgery as early as the 13th century, and proven as such by scientific testing in the 1980s). Which of these objects, if any, were real relics, actually involved in the life of Christ? There are currently three lance-heads that are claimed by their owners to be the original Holy Lance of Longinus. Which, if any, is the real one? The Echmiadzin Lance is currently displayed in Armenia.
It was discovered during the First Crusade in 1098 by a soldier called Peter Bartholomew, after he had a vision of Saint Andrew telling him the location of the lance, in the cathedral of Saint Peter in Antioch. This lance is oddly diamond-shaped and flat, rather than sharply pointed, and has a conical iron base that would fit at the end of a pole—most scholars think it looks more like the fitment that would stand atop a Roman standard, and would not make for a serviceable weapon. There is no reason to doubt that it was found by crusaders in Antioch, but similarly there is no reason to associate this with Longinus, beyond wishful thinking on the part of those who discovered it. The Vatican Lance is said to rest in a reliquary beneath a sculpture of Longinus by Bernini, in a niche beneath the dome of Saint Peter’s (though it is not visible to the public). This lance was first recorded by Antoninus of Piacenza (570 AD), who wrote that he saw “the crown of thorns with which Our Lord was crowned and the lance with which He was struck in the side” at the Basilica of Mount Zion in Jerusalem. Numerous other early Christian writers likewise described having seen it. In 615 a Persian king, Chosroes II, took it from Jerusalem and is said to have broken off the tip to give as a gift, which ended up in Hagia Sophia in Constantinople—it was eventually sold by King Baldwin II to King Louis IX of France, along with the Crown of Thorns, and was displayed in Sainte-Chapelle, until it disappeared during the French Revolution. The Nuremberg or Vienna Lance, which the Nazis venerated as the original, is now on display in the Imperial Treasury at the Hofburg Palace in Vienna. It has a long, documented history, as it was part of Charlemagne’s Crown Jewels, guarded as sacred by Holy Roman emperors for centuries. It had been kept in Nuremberg for a millennium, before being moved to Vienna to keep it out of the hands of Napoleon’s thieving troops. Hitler brought it back and locked it away in a vault, from which it was stolen. Walter Horn eventually found it, along with the rest of the Crown Jewels, and they were returned to Austria (the story is grippingly told in Sidney Kirkpatrick’s book "Hitler’s Holy Relics"). Will the real Holy Lance-head please stand up? The best way to determine which might be the real one is by scientific testing, but it is rare for permission to be given for the forensic testing of relics. Faith may be defined as blind belief, without the need for tangible “proof,” and most would say that religious relics are for the faithful, not for the scientists. In the unusual instances when an object has undergone tests, as in the case of the Shroud of Turin, it has not come out well for the object. In that case, four independent scientific laboratories all determined that the Shroud was actually a painting made in the 13th century, a forgery meant to be passed off as an ancient relic. The Vatican has never tested the bones that they believe to be those of Saint Peter, beyond determining that they came from a man of around 60 years of age, with a muscular build. Testing can rarely definitively prove what something is; more often it rules out possibilities. Carbon-14 testing can give the age of an object, so the bones could indeed date to the first century AD, but that does not prove that they are Saint Peter’s—merely that they could be. And if they date to a later period, then the precious relic has been deposed, and faith in it shattered.
It was a surprise, then, that the Vienna Holy Lance was subjected to scientific testing, which demonstrated that it could be authentic—at least part of it. In 2003, metallurgist Robert Feather was given special permission to examine the lance for a BBC special. He determined that the iron of the lance-head itself was from the 7th century AD, but that inside the lance he found a 1st-century Roman nail, which tradition holds was one of the nails used in the crucifixion, and is itself a venerated relic, the Holy Nail. As part of this lance’s intricate history, it is thought to have been displayed in the cathedral of Lombard Milan in the 7th century, at which point the Holy Nail was inserted into the lance-head, to create a sort of double-strength relic, nail and lance. That insertion would have required some recasting, which could explain the 7th-century dating of the iron. In short, this version of the Holy Lance could be the original, and could actually incorporate a different relic of the Passion. But, in the end, it is up to the faithful to decide.

Published on March 04, 2016 15:59