Helen H. Moore's Blog, page 984

October 10, 2015

“Condoms are pretty much awful”: So why isn’t there something better for men?

Three state-of-the-art birth control methods for women have annual pregnancy rates below 1 in 500, and the user doesn’t have to think about them for years at a time. By contrast, the best option available to men (short of sterilization) has an annual pregnancy rate of about 1 in 6 and has to be rolled onto an erect penis during each sexual encounter. A new generation of researchers would like to change that — but change takes money.

Why the Neglect?

During the last 70 years, billions of dollars have gone into research on female contraception. While pharmaceutical companies are reluctant to invest in innovation, public health dollars and private philanthropists have funded research at university medical schools or nonprofits like the Population Council to fill the gap. As a consequence, family planning options and outcomes have improved, sometimes dramatically. Pills contain lower doses of hormones, IUDs now protect against disorders like endometriosis, and several methods offer lighter, less frequent periods or other bonus health benefits.

But during the same time, research into male-controlled methods received paltry attention, largely because funders believed men weren’t interested. That may have been true 70 years ago. But many men today say they are perfectly willing to share the responsibility of family planning and in fact want the means to manage their own fertility.

Young Men Feel It

In the 1970s the typical man was about 21 years old when his first child was born. Since then, that age has shifted by almost a decade — 10 years in which many young men prefer to focus on education and launching a career before launching into parenthood. The more a young man cares about being a good partner and father (someday), and the more he cares about his own future, the more likely he is to feel stressed about the potential for unwanted pregnancy. Consider some of the following comments:

Being self-reliant is important to me. Condoms are pretty much awful, and it's just not good enough for me to have to trust a partner to consistently & effectively address the birth-control question. I know of too many firsthand accounts of instances in which doing so just didn't work out. It's time for men to have reliable options like women do. –Adam

It's absolutely terrible to have no other option as a male. I'm just 17, and thinking that I'll have to spend 50 years using the condom with no other possibility? That would be awful. I want something new! –Samuel

I'd like something with the convenience and reliability of The Pill. Pop one in and don't worry about it. The key word here is choice and being free of worry. –Alan

I simply want to be able to take control into my own hands instead of relying on my partner, who doesn't like having synthetic hormones coursing through her. –Adam

I've been active in discussing and promoting the legalization of abortions, and it's a real pain to see the dismissal of male reproductive rights from the very same people who fight for women. It saddens me to hear the very same arguments the antiabortion camp uses against women. Like "If they're afraid of having unwanted kids, men should get vasectomies," or "Just keep it in your pants." –Facundo Cesa

Males put their financial future in their hands as we have to pay child support for any pregnancy we father, no matter the circumstance. –Gustavo

Normally a man just can't know, and he doesn't even have a right to know what's the fertility status of a woman he has sex with.
He doesn't know when she had her last period, if she's on the pill, if she has been taking it regularly, if she had antibiotics that might interfere, if she's on some other kind of birth control, if she had a hysterectomy, if she's trans, and what's her stance on abortion, or even the morning-after pill (she's under no obligation to tell him ANY of that). That being the case, he just can't make an informed decision when he has sex. –Facundo Cesa

I want reliable control over my fertility, so I can control the timing and spacing when I have children. At this time in human history, it is a basic right. –Derek

Gender Injustice

Derek’s comment about family planning as a basic human right strikes home for many. And yet, an upcoming (and much welcome) conference on the Human Right to Family Planning has not a single session devoted to better birth control for men. In an ideal world, every child would be created with the mutual consent of both biological parents. State-of-the-art “set-and-forget” contraceptives coupled with safe abortion make that future a reality for the female half of the human population, at least for those privileged females who have access to the best money can buy. But guys who have sex (that’s most of them) are still rolling the dice. A woman today may be using a method with a high failure rate. She may not be using contraception consistently enough. She may lie. And if a pregnancy occurs, she has the final say in whether to carry it forward.

Better birth control for men is a moral imperative and a matter of gender justice, as well as making good practical sense. We don’t often talk about it this way, primarily because we’re accustomed to females being the underdogs in the justice equation. For millennia, girls and women had little right to decline either sex or pregnancy, and in much of the world that is still the case. And yet, a young man’s dreams, hopes, and family plans can be shattered by a surprise pregnancy. For those who care deeply about being good fathers, an ill-conceived pregnancy can be devastating. If we are going to help young men become loving, engaged parents, we need to ensure that they father a child only when they feel ready for partnership and nurturing.

The Male Contraception Initiative

Frustrated by the lack of progress on male birth control, attorney Aaron Hamlin founded the Male Contraception Initiative at the beginning of 2015. The mission of MCI is to inform and mobilize the public and secure funding in order to accelerate better birth control for men. The MCI website features and follows a wide range of birth control methods in various phases of research. "We have to cast a wide net," says Hamlin. Even so, almost immediately, the MCI board and staff got excited about the molecular research of a small biotechnology company, Spacefill Discovery, which became the focus of their first, exploratory crowd-funding campaign. After raising $12,000 from 200 donors this summer, MCI has been working with Spacefill Discovery to secure additional grant and venture funding.

Innovation Depends on Small Biotech

Early stage contraceptive research is a wide funnel, meaning many projects need to be funded for one to achieve success. In the past, a pharmaceutical company might have invested hundreds of millions of dollars over literally decades — mostly in dead ends — before taking a new drug to market.
One male contraceptive, a bimonthly hormone injection, got as far as international clinical trials before being canceled abruptly in 2011 due to mood changes, depression and increased libido in some trial participants. Long lead times, high financial risks, complex regulations and liability make R&D of this type unattractive to institutional investors and big pharma. Consequently, contraceptive innovation has largely been abandoned by established pharmaceutical companies, which mostly opt instead to make minor tweaks to existing lucrative franchises in order to extend patent protections. Today, 14 products drive 80 percent of revenue in the female contraceptive market, all of them using hormone formulations that were identified and synthesized a generation ago.

But fundamental changes in bench science have made early stage biotech research cheaper and faster, allowing small players to enter the field, particularly if they can garner support from mission-driven investors. While Johnson & Johnson, Bayer and Merck dominate the market for branded contraceptives (currently all female-focused), the roll call of those involved in early research is largely a list of unknowns: Ligand, Bio Pro, ASKA, Orient Europharma, Hydra Biosciences, Pantarhei ... and of course, the one that caught the eye of the Male Contraception Initiative, Spacefill Discovery.

"It is through science that we prove, but through intuition that we discover." –H. Poincaré

Technological breakthroughs often happen when the unexpected is explored — when the unfettered curiosity of academics and the compulsive tinkering instinct of inventors collide, between the public good and the profit motive, between the joy of discovery and the need to commercialize inventions. Spacefill Discovery is no exception. I spoke with co-founder Dr. David Brandt to learn more about their male contraception project. “This is a story of how science can happen when you are open to it and not putting blinders on and being too focused on a specific goal,” Dr. Brandt said.

Dr. Brandt was literally minding his own business — cancer research — when he got a call from an old colleague, semi-retired medicinal chemist Dr. Gary Flynn. “I think I’ve discovered something new,” said Flynn. "Medicinal chemist" means Flynn designs medicines at the molecular level. I’m not sure what "retired" means in this context, because Dr. Flynn spends his waking hours designing compounds and then, using 3-D modeling techniques, docking them into proteins. Dr. Flynn’s “something new” was a kind of chemical scaffold that binds to a class of proteins known as kinases. Kinases are proteins that control virtually all cell activities, and they are key drug targets. Flynn had found a way, at least in theory, to block them in a new manner.

One Thing Leads to Another

At this point, Drs. Flynn and Brandt had no intention of embarking on the elusive quest for male birth control; they were interested in possible new treatments for cancer. So they started tinkering with the scaffolding, trying to figure out what kinds of cancer processes Flynn’s compounds could inhibit. The work went slowly; it was a side project. In the meantime, Flynn began excavating the electronic library stacks, searching for related content. He came across a proposal by Dr. James Chen, a Stanford professor who was researching one of the kinases that Flynn’s compounds could block very specifically. That’s when they got excited. I’ll have to leave the technical description in Brandt’s words:
We reached out to Dr. Chen and learned he was working on mechanisms inside the developing sperm cell that this kinase may control. He got excited when he learned that we had very potent and selective inhibitors of this kinase from our cancer research. Why the excitement? Because the presence of this kinase is 10,000 times higher in the testes than anywhere else in the body. That is an ideal situation that we almost never find in drug discovery. Usually a drug’s action on its protein target will affect several tissues and organs in the body, and that is one reason drugs have side effects.

One of the big challenges of developing a male pill is that ... with the female reproductive system, if you perturb it, the system shuts down. With men, the primary hormone regulating sperm production is testosterone, which also plays a key role in normal behavior and metabolic activity — it affects sugars, lipids and muscle tone — so you don’t want to block it. What’s really exciting about this kinase is that, if you look at the entire pathway of spermatogenesis, it is found only in the later stages, not at the germ cell line stage. In mice, if you knock out this gene, they are healthy but sterile. So it has a very narrow function, and hitting late stages makes reversibility more probable. And it’s non-hormonal!
Fizz or Fizzle?

Have Drs. Brandt, Flynn and Chen found a way forward? I remind myself of the cautionary mantras: Early stage biomedical research is a big funnel. Most start-ups fail. That’s why angel investing and venture capital are high risk. But the few that succeed have the power to change individual lives and shape the future. That’s why angel investing and venture capital are also high reward.

Brandt himself is champing at the bit. “We have been working on it at a very low level out of our own pockets. Now we are looking for funding to ramp it up from a side research project. If it is properly funded — the time frame is really driven by funding — we could be in the clinic in three years,” he says. That’s clinical trials, not your local Planned Parenthood, and that’s only if all the ducks line up, but it would be huge.

Will it happen? That’s hard to say. Spacefill Discovery’s funders are focused on cancer. Drs. Flynn and Chen submitted a grant proposal to the National Institutes of Health, but didn’t get it. They came out at the top of the Male Contraception Initiative’s competition for crowd-funding, but MCI is barely off the ground itself. Even so, to Brandt’s mind — as to mine — the imperative is clear. A 2012 survey of 40,000 men who were asked about their desire for better male birth control came back with a resounding "Yes!" Dr. Brandt conducts an informal survey of his own among friends who ask about his work. “I get dramatic responses,” he says. “‘No way’ or ‘Where can I get it?’ Nothing in between. About 70 percent of guys in my network say ‘Where can I get it?’, and the other 30 percent say ‘No way.’ That’s only my social network. I would like to see the data in the 18- to 35-year-old demographic, because right now the only really effective protection is a vasectomy, which is permanent. What if your life changes and later you want to have a child?”

Dr. Brandt recalls the stress of being a single man, the lack of options, the edge of worry — even after marriage — that life might be dramatically altered by an unanticipated child. He talks about his two daughters and what it’s like as a dad to hope your daughter’s dreams and ambitions won’t get derailed by a surprise pregnancy, and how parents of young men feel the same way. We need more conversation about male birth control, he says. It would change the dynamic between men and women. It would change lives.

Advocate Facundo Cesa agrees. “A friend of mine just got a vasectomy. He chose to forget all about ever having children. His peace of mind was worth it, he says.” Worth it, maybe, but why should young men have to make that hard choice? Surely we can do better.

Published on October 10, 2015 15:00

Freedom, on the oligarchs’ terms: What a billionaire’s nuisance suit reveals about American plutocracy

The Supreme Court’s Citizens United ruling is a consummately extreme and ideological document, the product of the Federalist Society mindset through and through. So if you’re trying to single out one specific part of the ruling as especially disconnected from reality, you’ve got a lot to work with. In fact, it’s one of those questions for which there is no “wrong” answer. If I had to choose just a single piece of the decision’s underlying argument as the most unhinged, though, I’d go with Justice Anthony Kennedy’s assertion that “independent expenditures do not lead to, or create the appearance of, quid pro quo corruption.”

I’ve written about this previously, so I’ll try not to belabor the point. But the key thing here is that Kennedy isn’t just saying super PACs don’t corrupt. He’s saying they don’t even create the appearance of corruption. Because guarding against even the mere appearance of corruption is seen as a legitimate and constitutional government interest, Kennedy, if he wanted to make Citizens United stick, didn’t have much of a choice but to argue as he did. Yet even if you give Kennedy “credit,” in a sense, and assume his argument stemmed from Machiavellian cynicism rather than libertarian dogmatism, the idea is still self-evidently absurd. (And subsequent polling has confirmed it to be empirically wrong, as well.)

Again, Kennedy’s is an especially radical view. That said, the general assumption that allows Kennedy to stray so far off the reservation — that economic inequality does not create political inequality — is widely accepted by conservative elites. A cursory knowledge of history and simple common sense should make you reject the idea out of hand. But if you need a real-world example of why economic and political power aren’t so easily decoupled, look no further than a district court ruling that came out of Idaho earlier this week.

The case in question involved the liberal magazine Mother Jones and Frank VanderSloot, a GOP mega-donor (who, judging by his name, may also be a villain in a long-lost Charles Dickens story). You can read MoJo’s recap of the case here, but the quick-and-dirty summary goes like this: In 2012, VanderSloot sued MoJo for defamation over a piece about his company’s support for a pro-Romney super PAC that also happened to mention his history of opposing gay rights. MoJo fought back. VanderSloot lost.

Well, he lost technically, that is. Because before MoJo was vindicated by the judge (who made clear in her decision that she was not a fan of the magazine, might I add), it had to spend, in its telling, “at least $2.5 million defending ourselves.” For a guy like VanderSloot, who reportedly is worth something north of $1 billion, that’s chump change. But that is a lot of money not only for normal people but also for most small or mid-size publications, especially ones that focus on somewhat niche topics like public policy.

There’s another element of the story that you should know, too. At first, VanderSloot’s response to MoJo’s post was simply to have his lawyers send a huffy letter that pointed out some minor errors. MoJo made the appropriate corrections and figured that was that. But then MoJo broke Romney’s infamous “47 percent” comment, which many people believe sank the GOPer’s chances of becoming president. It was only then that VanderSloot, who was a national finance chair for the Romney campaign, decided to sue. As they say, you can draw your own conclusions.
Remember, VanderSloot’s lawsuit was completely legal (as were the lawsuits he subsequently opened against individual journalists involved in the story). He may have had an enormous advantage in terms of the financial resources he could muster, and the political connections he could leverage; but at least nominally, he and MoJo stood in the courtroom on equal footing. They had equal access to the law and equal access to justice. Provided, of course, they could pay for it.

This is just one small example. What’s more, it could have turned out much worse. It’s easy to imagine a scenario in which these kinds of bullying lawsuits, these attempts to stifle speech by roping journalists into a war of attrition, have their intended effect. It’s easy to imagine an editor or a publisher deciding not to publish an article that criticizes a politically active billionaire, or an influential corporation. And thanks to the Supreme Court, a vindictive 1 percenter could have a cash-strapped politician on his side, too.

That’s the kind of ersatz egalitarianism that increasingly defines American politics. But don’t worry too much; according to Justice Kennedy, this state of affairs doesn’t even appear to be corrupt. And I’d hazard a guess that VanderSloot is right there with him.

Published on October 10, 2015 14:56

America wants you to suffer: The staggering ways we punish our college graduates

In a recent issue of Psychology Today, Peter Gray joined the chorus of Millennial-bashers, making the case that college kids lack "resilience." He says that more college students depend on counseling services and offers up anecdotes of woefully pathetic kids. The basic gist of the piece — common to this genre of article — is that society is doomed because young people can't handle "everyday" challenges. Other commentators have been all too happy to bash Millennials. In fact, everywhere Millennials turn they are told that they’re lazy, entitled, narcissistic and clueless. They have even been called “the lamest generation.”

This conclusion is wrong, and it's damaging. When critics accuse Millennials of lacking resilience, they fail to appreciate the very real pressures young people face. This line of argument is especially damaging because it transforms major public issues into a problem of character. Blaming "helicopter" parents and an "overprotective" society for failing to inculcate kids with coping skills misses the point. Young people do live in a helicopter society, but the helicopters that ominously hover over them are much larger socioeconomic forces that threaten their safety and success. They live in a world that is fundamentally hostile to their future.

Gray’s argument goes like this: parents have not allowed children as much time to freely play and explore, and this reduction in time for adventure has produced fearful, coddled losers who can’t cope in the world. He claims parents have solved their kids’ problems, leaving them unable to deal with everyday challenges without calling mommy — or, if in college, a counselor or faculty member — to figure things out for them.

Some of this should come as no surprise. Older generations have always demonized the young. Generational theorists William Strauss and Neil Howe remind us that at the outset of World War II, army psychiatrists complained that their GI “recruits had been ‘over-mothered’ in the years before the war.” According to Russell Dalton, the younger generation is constantly blamed for all that is wrong in our nation. He explains that Millennials may be the most denounced generation ever.

The thing is, though, these attacks are unfounded. Sure, it may well be true that counseling services are used more frequently on campus. But for some reason, it never occurs to Gray that students’ need for support makes sense. He tells of “a student who felt traumatized because her roommate had called her a ‘bitch’ and two students who had sought counseling because they had seen a mouse in their off-campus apartment.” Gray never considers, as would many mental health professionals, the possibility that the reactions to the mouse and the name-calling are symptoms of a deep anxiety that may be well-founded rather than a product of immature hysteria. By contrast, I’d suggest that students are anxious and depressed for a range of good reasons. Millennials have inherited one of the most difficult social situations to face a generation of college students in decades. Let’s look at some facts:

- About one-third of college students are first-generation Americans. This means that many of their parents might be limited in their ability to give them advice on college life in the United States, since they have no direct experience. It is also important to note that 60 percent of these students do not complete their degree.

- Forty-three percent of Millennials are of color. This is the most diverse generation in U.S. history.
While college campuses attempt to attract a diverse student body, students of color often find a lack of inclusion and support once they arrive. It’s also worth noting that 79 percent of faculty are white, a factor that researchers suggest can influence the success of non-white students.

- Approximately 25 percent of Millennials were raised by single parents, and about 66 percent of single moms work outside the home — a factor that greatly limits a parent’s ability to solve a child’s problems for him or her.

- Twenty-six percent of undergraduates are raising dependent children.

- There are twice as many openly LGBT students on college campuses as there were in 2011. A recent study shows that this group suffers disproportionate sexual harassment (73 percent) and violence (44 percent).

- One in five women is sexually assaulted in college.

- Many students face food insecurity. Hunger among college students is on the rise: 121 college campuses have food banks for students and, in one example, 60 percent of students at Western Oregon University reported suffering from hunger and poor nutrition.

- About 4 percent of all undergraduates are veterans or military service members. Statistics show that this population has higher rates of mental disorders.

- Eleven percent of college students have learning disabilities — a reality that naturally makes coping with college workloads more challenging.

- Thirty to 40 percent of all graduates have double majors, a trend designed to offer students more job prospects, and one that also brings far more stress, as students have to overload to complete their degrees. (I have one student this term enrolled in 28 credits so that she can graduate on time.)

- Seventy percent of college students have school debt, and the average owed is $28,400. The total amount of student debt today is $1.2 trillion.

- Eighty percent of students work part-time while in college, and 18 percent pay their way through college. Twenty percent of working students work 35 hours a week or more. Only 22 percent of college students get their bills paid by their parents. Sixty-two percent of students manage a budget.

- Millennials account for 40 percent of the nation’s unemployed. If they do have jobs, they earn less, relative to the nation’s median income, than those of the same age did a decade ago. When they do get meaningful work, they toil away at unpaid internships that may never become full-time job offers.

These statistics belie the argument that this generation's biggest problem is overattentive parenting. But it’s worse than that: most of the arguments that charge Millennials with being coddled are based on anecdotes that really refer only to a highly select segment of Millennials who might be coddled by their parents — let’s call them the “1 percent” of Millennials. Despite the hype, a 2012 APA study found that only 12 percent — at most — of college students consult counseling services.

Given the reality-based stresses so many college students face, we might conclude that today’s students are potentially the most resilient generation we have seen in decades. Even though they inherit a tough economy, a fractured political system, and a news media dominated by hype and fear, today’s young adults remain overwhelmingly optimistic. They might suffer from depression and anxiety, but they also have a strong positive attitude. They vote in record numbers. They volunteer more than any other generation. They value many of the same things older generations do, like being a good parent and having a home.
They prefer to buy from companies that support social issues. A 2010 Pew research study characterized them as “confident, self-expressive, liberal, upbeat and open to change.”

But here’s the real problem. Aside from being the most unfairly demonized generation of young people in decades and inheriting a world in crisis, Millennials have been raised in an era of neoliberalism. Think about it. The most significant social change to face this nation in the last 25 years is the shift towards a neoliberal, market-oriented society. If anything, the rise of the “helicopter parent” is a direct consequence of the fear-based, competitive society that neoliberalism fosters. The move to neoliberalism meant the complete retreat from our social contract. We began to speak of “entitlements” rather than social “security,” we vilified people who need help as "welfare queens," and we substituted high-stakes testing for teacher support.

Neoliberalism is more than a market economy; it's an ideology and social practice that refuses social obligation to others, including our youth. Neoliberalism has brought about not just a ruthless economy that privileges the 1 percent; it has also ushered in a way of life that threatens notions like the common good. It destroys a sense of care and compassion. It is an ideology that holds that all problems are personal and that if you are in crisis it's your fault (or your parents’). It is the ideology of those like Donald Trump, who advocate a cutthroat, cruel, survival-of-the-fittest world governed by market principles. As Zygmunt Bauman puts it, “The plight of being outcast may stretch to embrace a whole generation.” There now seems to be conclusive evidence that the Millennial generation will suffer the hardships of neoliberalism at a rate that far exceeds that of older generations.

One of the key consequences of neoliberalism, as Henry Giroux explains, is the privatization of all problems. We see not just the privatization of public services, but also a tendency to explain social crises as personal problems. According to neoliberal logic, students clamoring for counseling are a consequence of overprotective parents. Millennial-bashers like Gray pin the blame on parents for not giving kids the freedom to play, learn and grow. But no amount of independent play is going to fix the economy or the broader social ideology these kids inherit. It's never just personal. Paul Verhaeghe writes in the Guardian that “[w]e tend to perceive our identities as stable and largely separate from outside forces. But over decades of research and therapeutic practice, I have become convinced that economic change is having a profound effect not only on our values but also on our personalities. Thirty years of neoliberalism, free-market forces and privatisation have taken their toll, as relentless pressure to achieve has become normative.”

When you buy into the logic that the problem is in the home or in the classroom or in the student's mind, you neglect the link between individual problems and a socioeconomic process that puts extreme pressure on individuals. Furthermore, "just toughen up" mantras are eroding our ability to support and strengthen the younger generation. The young live in a tough world, and when they crack, we tell them they are needy. Rather than support students, we now have what Giroux describes as a society that governs youth “through a logic of punishment, surveillance, and control.” Our schools are filled with security cameras, metal detectors and other forms of surveillance.
This is a system that arrests kids who build clocks and that shoots young black men wearing hoodies and holding Skittles. Nearly half of black males and 40 percent of white males have been arrested by the age of 23. That is the real “helicopter society” that Millennials have been raised in. Only 19 percent of Millennials agreed with the statement "most people can be trusted." Eighty-three percent of Millennials agreed with the statement "there is too much power concentrated in the hands of a few big companies.” Nearly half of Millennials believe the U.S. justice system is unfair. They are acutely aware of the fact that the system is stacked against them. Make no mistake: Millennial bashers are part of that same system.

If we really care about the mental health of today’s college students, then we need to start rejecting the narratives that blame them for feeling the social stresses they have inherited. And if we really, really care, we will stop sharing memes about self-involved students who don’t read the syllabus; we will stop complaining about how young people are needy and lame; and we will start finding meaningful ways to work with them to improve the society we share.

Published on October 10, 2015 14:00

An algorithm might save your life: How the Amazon and Netflix method might someday cure cancer

Machine learning plays a part in every stage of your life. If you studied online for the SAT college admission exam, a learning algorithm graded your practice essays. And if you applied to business school and took the GMAT exam recently, one of your essay graders was a learning system. Perhaps when you applied for your job, a learning algorithm picked your résumé from the virtual pile and told your prospective employer: here’s a strong candidate; take a look. Your latest raise may have come courtesy of another learning algorithm. If you’re looking to buy a house, Zillow.com will estimate what each one you’re considering is worth. When you’ve settled on one, you apply for a home loan, and a learning algorithm studies your application and recommends accepting it (or not). Perhaps most important, if you’ve used an online dating service, machine learning may even have helped you find the love of your life.

Society is changing, one learning algorithm at a time. Machine learning is remaking science, technology, business, politics, and war. Satellites, DNA sequencers, and particle accelerators probe nature in ever-finer detail, and learning algorithms turn the torrents of data into new scientific knowledge. Companies know their customers like never before. The candidate with the best voter models wins, like Obama against Romney. Unmanned vehicles pilot themselves across land, sea, and air. No one programmed your tastes into the Amazon recommendation system; a learning algorithm figured them out on its own, by generalizing from your past purchases. Google’s self-driving car taught itself how to stay on the road; no engineer wrote an algorithm instructing it, step-by-step, how to get from A to B. No one knows how to program a car to drive, and no one needs to, because a car equipped with a learning algorithm picks it up by observing what the driver does.

Machine learning is something new under the sun: a technology that builds itself. Ever since our remote ancestors started sharpening stones into tools, humans have been designing artifacts, whether they’re hand built or mass produced. But learning algorithms are artifacts that design other artifacts. “Computers are useless,” said Picasso. “They can only give you answers.” Computers aren’t supposed to be creative; they’re supposed to do what you tell them to. If what you tell them to do is be creative, you get machine learning. A learning algorithm is like a master craftsman: every one of its productions is different and exquisitely tailored to the customer’s needs. But instead of turning stone into masonry or gold into jewelry, learners turn data into algorithms. And the more data they have, the more intricate the algorithms can be.

Homo sapiens is the species that adapts the world to itself instead of adapting itself to the world. Machine learning is the newest chapter in this million-year saga: with it, the world senses what you want and changes accordingly, without you having to lift a finger. Like a magic forest, your surroundings—virtual today, physical tomorrow—rearrange themselves as you move through them. The path you picked out between the trees and bushes grows into a road. Signs pointing the way spring up in the places where you got lost.

These seemingly magical technologies work because, at its core, machine learning is about prediction: predicting what we want, the results of our actions, how to achieve our goals, how the world will change. Once upon a time we relied on shamans and soothsayers for this, but they were much too fallible.
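To make that idea of "generalizing from your past purchases" concrete, here is a minimal sketch in Python of one simple way it can work. This is a toy item-to-item co-occurrence recommender, not Amazon's actual algorithm, and every customer and purchase history in it is invented for illustration:

```python
# A toy recommender: suggest items that other customers frequently
# bought together with the items already in your purchase history.
# Invented data for illustration only -- not Amazon's system.
from collections import Counter
from itertools import combinations

purchases = {
    "ann":  {"tent", "stove", "lantern"},
    "ben":  {"tent", "stove", "sleeping_bag"},
    "cara": {"novel", "lantern", "sleeping_bag"},
}

# Count how often each ordered pair of items appears in the same basket.
co_bought = Counter()
for basket in purchases.values():
    for a, b in combinations(sorted(basket), 2):
        co_bought[(a, b)] += 1
        co_bought[(b, a)] += 1

def recommend(history, k=2):
    """Score unseen items by how often they co-occur with owned items."""
    scores = Counter()
    for (owned, candidate), n in co_bought.items():
        if owned in history and candidate not in history:
            scores[candidate] += n
    return [item for item, _ in scores.most_common(k)]

print(recommend({"tent"}))  # ['stove', 'lantern'] -- learned, not programmed
```

No one told the program that campers who buy tents tend to want stoves; it inferred that from the data, which is the whole point. A real system differs mainly in scale and statistical care, not in kind.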
Science’s predictions are more trustworthy, but they are limited to what we can systematically observe and tractably model. Big data and machine learning greatly expand that scope. Some everyday things can be predicted by the unaided mind, from catching a ball to carrying on a conversation. Some things, try as we might, are just unpredictable. For the vast middle ground between the two, there’s machine learning.

Paradoxically, even as they open new windows on nature and human behavior, learning algorithms themselves have remained shrouded in mystery. Hardly a day goes by without a story in the media involving machine learning, whether it’s Apple’s launch of the Siri personal assistant, IBM’s Watson beating the human Jeopardy! champion, Target finding out a teenager is pregnant before her parents do, or the NSA looking for dots to connect. But in each case the learning algorithm driving the story is a black box. Even books on big data skirt around what really happens when the computer swallows all those terabytes and magically comes up with new insights. At best, we’re left with the impression that learning algorithms just find correlations between pairs of events, such as googling “flu medicine” and having the flu. But finding correlations is to machine learning no more than bricks are to houses, and people don’t live in bricks.

When a new technology is as pervasive and game changing as machine learning, it’s not wise to let it remain a black box. Opacity opens the door to error and misuse. Amazon’s algorithm, more than any one person, determines what books are read in the world today. The NSA’s algorithms decide whether you’re a potential terrorist. Climate models decide what’s a safe level of carbon dioxide in the atmosphere. Stock-picking models drive the economy more than most of us do. You can’t control what you don’t understand, and that’s why you need to understand machine learning—as a citizen, a professional, and a human being engaged in the pursuit of happiness.

This book’s first goal is to let you in on the secrets of machine learning. Only engineers and mechanics need to know how a car’s engine works, but every driver needs to know that turning the steering wheel changes the car’s direction and stepping on the brake brings it to a stop. Few people today know what the corresponding elements of a learner even are, let alone how to use them. The psychologist Don Norman coined the term conceptual model to refer to the rough knowledge of a technology we need to have in order to use it effectively. This book provides you with a conceptual model of machine learning.

Not all learning algorithms work the same, and the differences have consequences. Take Amazon’s and Netflix’s recommenders, for example. If each were guiding you through a physical bookstore, trying to determine what’s “right for you,” Amazon would be more likely to walk you over to shelves you’ve frequented previously; Netflix would take you to unfamiliar and seemingly odd sections of the store but lead you to stuff you’d end up loving. In this book we’ll see the different kinds of algorithms that companies like Amazon and Netflix use. Netflix’s algorithm has a deeper (even if still quite limited) understanding of your tastes than Amazon’s, but ironically that doesn’t mean Amazon would be better off using it. Netflix’s business model depends on driving demand into the long tail of obscure movies and TV shows, which cost it little, and away from the blockbusters, which your subscription isn’t enough to pay for.
Amazon has no such problem; although it’s well placed to take advantage of the long tail, it’s equally happy to sell you more expensive popular items, which also simplify its logistics. And we, as customers, are more willing to take a chance on an odd item if we have a subscription than if we have to pay for it separately.

Hundreds of new learning algorithms are invented every year, but they’re all based on the same few basic ideas. These are what this book is about, and they’re all you really need to know to understand how machine learning is changing the world. Far from esoteric, and quite aside even from their use in computers, they are answers to questions that matter to all of us: How do we learn? Is there a better way? What can we predict? Can we trust what we’ve learned?

Rival schools of thought within machine learning have very different answers to these questions. The main ones are five in number, and we’ll devote a chapter to each. Symbolists view learning as the inverse of deduction and take ideas from philosophy, psychology, and logic. Connectionists reverse engineer the brain and are inspired by neuroscience and physics. Evolutionaries simulate evolution on the computer and draw on genetics and evolutionary biology. Bayesians believe learning is a form of probabilistic inference and have their roots in statistics. Analogizers learn by extrapolating from similarity judgments and are influenced by psychology and mathematical optimization. Driven by the goal of building learning machines, we’ll tour a good chunk of the intellectual history of the last hundred years and see it in a new light.

Each of the five tribes of machine learning has its own master algorithm, a general-purpose learner that you can in principle use to discover knowledge from data in any domain. The symbolists’ master algorithm is inverse deduction, the connectionists’ is backpropagation, the evolutionaries’ is genetic programming, the Bayesians’ is Bayesian inference, and the analogizers’ is the support vector machine. In practice, however, each of these algorithms is good for some things but not others. What we really want is a single algorithm combining the key features of all of them: the ultimate master algorithm. For some this is an unattainable dream, but for many of us in machine learning, it’s what puts a twinkle in our eye and keeps us working late into the night.

If it exists, the Master Algorithm can derive all knowledge in the world—past, present, and future—from data. Inventing it would be one of the greatest advances in the history of science. It would speed up the progress of knowledge across the board, and change the world in ways that we can barely begin to imagine. The Master Algorithm is to machine learning what the Standard Model is to particle physics or the Central Dogma to molecular biology: a unified theory that makes sense of everything we know to date, and lays the foundation for decades or centuries of future progress. The Master Algorithm is our gateway to solving some of the hardest problems we face, from building domestic robots to curing cancer.

Take cancer. Curing it is hard because cancer is not one disease, but many. Tumors can be triggered by a dizzying array of causes, and they mutate as they metastasize. The surest way to kill a tumor is to sequence its genome, figure out which drugs will work against it—without harming you, given your genome and medical history—and perhaps even design a new drug specifically for your case.
No doctor can master all the knowledge required for this. Sounds like a perfect job for machine learning: in effect, it’s a more complicated and challenging version of the searches that Amazon and Netflix do every day, except it’s looking for the right treatment for you instead of the right book or movie. Unfortunately, while today’s learning algorithms can diagnose many diseases with superhuman accuracy, curing cancer is well beyond their ken. If we succeed in our quest for the Master Algorithm, it will no longer be. Excerpted from "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World" by Pedro Domingos. Published by Basic Books, a division of the Perseus Books Group. Copyright © 2015 by Pedro Domingos. Reprinted with permission of the publisher. All rights reserved.

Published on October 10, 2015 13:00

When the media had enough: Watergate, Vietnam and the birth of the adversarial press

“The media”: these two words, yoked together, name something new in the world. They turn the many organizations that constitute what was once more commonly called “the press” into a single looming, forbidding entity, implicitly offering it criticism, even disdain. The press had long offended, of course. But something about the media got worse in the 1970s, even in the eyes of journalists. “The media,” wrote Washington-based British journalist Henry Fairlie in 1983, was a term whose current meaning could not be found in Webster’s Third New International Dictionary of 1966. But soon thereafter the term came to be widely used. The media were not, Fairlie argued, “just an extension of journalism.” The media were (or “was”—there remains even now complete confusion about whether the term should be treated as a plural noun or not) somehow new and different, and Fairlie was unsparing in his contempt: “The more dangerous insects who infest Washington today are the media: locusts who strip bare all that is green and healthy, as they chomp at it with untiring jaws; those insatiable jaws that are never at a loss for a word, on the screen or on the platform, and occasionally, when they can spare a moment for their old trade, in print.” Fairlie returns to his entomological sneer at the end of his piece: “The media settle on the White House and Congress to strip them like locusts, for the purpose of advancing themselves on television and the lecture circuit, and year by year they complain at the debility of the political system.” William Safire, in remembering his time as a White House speechwriter for Richard Nixon, provided a more conspiratorial account of the term “the media.” He recalled, “In the Nixon White House, the press became ‘the media,’ because the word had a manipulative, Madison Avenue, all-encompassing connotation, and the press hated it.” For Nixon, journalism was an “enemy” to be defeated; Safire heard Nixon declare “the press is the enemy” a dozen times. So the Nixon White House insistently used “the media” because to refer to journalists as “the press” handed to these miscreants an aura of rectitude and First Amendment privilege that gave them an emotional advantage, while to call them “the media” took it away. We have “the media” not only for the reason Fairlie focuses on—the mutation of (a few) reporters into obnoxious and ever-jabbering TV celebrities—but also because powerful politicians sought to paint journalists as the misleadingly human faces of an impersonal and insatiable monster. With the rise of the media, Fairlie argued, the primary activity of Washington switched from governing the country through legitimate political institutions to “the sustaining of the illusion of government through the media and in obedience to the media’s needs and demands.” This position or something like it, shared by many other distinguished figures both inside and outside journalism, is an important clue to a social change that, among other things, decisively promoted a culture of disclosure in American politics. But in the end, the claim that public officials in the 1960s and 1970s shifted from governing to public relations is not credible. It would be convenient if there were a sharp break between the (good) old journalism and the (bad) new media, the (good) old politics of men dedicated to public service and the (bad) new politics of men and women devoted to seeing their own faces on television, but what happened in the 1960s and 1970s was far more subtle than critics of the day acknowledged. 
One part of Fairlie’s observation is surely correct—that the media became more central to the operation of Washington. Only this is not proof of what he assumes to be an unquestionably “debilitating” impact. It is proof of impact. It is proof that journalism was taking a more independent, less deferential stance toward power. Journalists would have to be reckoned with. This has had effects both good and ill. No specific moment, case, or condition made all the difference. Not the rise of network television news, important as that was for giving the media a unified national identity. Certainly not Watergate; Watergate was a capstone to a journalism that had become increasingly assertive in the Vietnam years. Vietnam is not sufficient explanation, either. Journalists grew disillusioned with the war in Vietnam in the mid- to late 1960s, but a growing critical edge arose at the same time, if less intensely, in European journalism. An increased media presence was not uniquely American. It was a generational change, an educational change, a cultural change. And while the media were very much agents of that change, it is likewise true that they were responding to something in the air, something that, in the American case, began to take shape in the late 1950s, gathered momentum in the early 1960s in the usual sites of political power— Congress, the White House, and the Supreme Court—and was reinforced and extended by popular action in the streets by the late 1960s as well as by a new sophistication, a new capacity, and a new arrogance in journalism. Consider the following tale: Peter Buxtun, a young employee working for the U.S. Public Health Service in the 1960s, learned about an experiment the service was conducting on the long-term effects of syphilis on African American men if left untreated. This study had begun in the 1930s and more than a generation later was still in operation. One of the mysteries about this is that a cure for syphilis had become available in the interim with the discovery of penicillin and its wide availability after World War II. Buxtun contacted his superiors in the Public Health Service, convinced they would shut the study down if only it was brought to their attention. But they did no such thing. Instead, they treated Buxtun as a troublemaker and successfully stalled him. Buxtun put the matter aside for several years, but he could not put it out of his mind. He tried again to sound the alarm with the Public Health Service’s Communicable Disease Center (later the Centers for Disease Control, or CDC). Again he made no headway. At that point he went to the news media. He contacted an AP reporter in San Francisco, Edie Lederer. Lederer was leaving for a trip to Europe but promised Buxtun she would pass on the materials he provided to another AP reporter. En route to Europe she stopped in Miami, where her colleague Jean Heller was covering the 1972 Republican National Convention. Lederer provided Heller the materials Buxtun had sent to the CDC and the CDC’s reply to them. Heller and her husband, Ray, also an AP reporter, thought the CDC response indirectly confirmed Buxtun’s charges—or at least did not flatly deny them. She decided this was well worth following up. Heller had grown up in Ohio and attended the University of Michigan, studying history and English, but transferred in her junior year to Ohio State, where Ray, her high school sweetheart, was studying. There she minored in journalism and fell in love with it. 
The dean of the School of Journalism, a former AP bureau chief, helped her get a position in New York with AP Radio in 1954. She went to Washington in 1968 when AP created—for the first time in its century-long history—an investigative reporting team, to which she was assigned. Thanks to Peter Buxtun and Edie Lederer, the story of the Tuskegee syphilis experiment had just fallen into Heller’s lap. It did not require her investigative skills. The CDC told her she could see any records she wanted. The story she produced was sensational. She knew it would be. She and her colleagues wanted to be sure that, for maximum impact on Washington policy makers, it would appear on the front page of a Washington newspaper—either the Post or the Star. They chose the Star because at the time the Post “was just consumed with Watergate” and they were not confident it would give the story a front-page spot. So they promised the Star they would release the story “on the p.m. cycle if they could guarantee page one.” Heller herself opposed the deal because “I figured if it was page one in the Star it would never be page one the next day in the Post.” She was wrong about that. The story appeared the next day all over the country, including in a bylined story in the New York Times. Heller was on the phone that night with the Times because that paper wanted to do its own reporting and rewrote the story, cutting out, among other things, the potent phrase Heller had used to describe the black Alabama men who were the subjects of the study: “human guinea pigs.” It was July 25, 1972, when Heller’s story ran in the Star, telling the world that some 600 African American residents of Macon County, Alabama, in a study begun in 1932, had become human guinea pigs. While suffering from syphilis, they were told only that they had “bad blood,” and though they were treated for other everyday medical complaints, they were not treated for syphilis. About one hundred people died from this deliberate decision to leave their syphilis untreated. Heller went to visit her parents in Ohio not long after. “My folks’ best friend was a doctor—his response was ‘That’s not true.’ . . . That’s how I had felt, too. I had this pedestal the medical and legal fraternity stood on. . . . It was quite a rude awakening for me. The scales fell from my eyes. It’s a terrible cliché, but—this was an evil I couldn’t comprehend. What were these people thinking? It was the end of naivete.” For the next two years, Heller said, “I wrote about nothing else.” She followed the lawsuit, dozens of meetings of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and related topics. One more story. In Wheaton, Illinois, 1961, a top student at the local high school gave a graduation speech largely devoted to attacking the federal government. The high schooler’s father was the town’s leading attorney, a conventional Republican and a pillar of the community. Inspiration for the speech came from Barry Goldwater, the conservative Republican senator from Arizona, whose The Conscience of a Conservative was published in 1960. The young student himself had favored Richard Nixon in the Kennedy-Nixon contest for president in 1960. Eleven years later, this onetime Nixon and then Goldwater fan, Bob Woodward, would start working on a story for the Washington Post about a burglary in the Watergate apartment and office complex. The rest of that tale is the best-known story in the history of American journalism. 
Watergate did not simply influence journalism; it galvanized the journalistic imagination. Investigative journalism became the definition of great journalism. Of course, reporters prized the “scoop” long before Watergate, but journalists can get scoops with little more than a well-placed and well-timed interview. But at the moment Woodward and Carl Bernstein began to cover Watergate, there was already a lively new interest in investigative work. Newsday had put three reporters, an editor, and a researcher on an investigative “team” in 1967. The Chicago Tribune began an investigative task force in 1968, and so did the Associated Press with its “special assignment team.” The Boston Globe began its “spotlight” group in 1970 on the Newsday model. But this was not the beginning of a new mood in journalism, either, not the point when journalism began to be more open, more inquisitive (and even inquisitorial), more aggressive, more negative. There is no definitive point of origin. Even in 1953–1954 and 1960, when Bernard Cohen interviewed foreign correspondents, he found them attached not only to a role of neutral observer but also to a role of “participant.” The latter, however, was still a “bootleg” journalism that, “like illicit liquor . . . is found everywhere” without being publicly acknowledged. But disquiet among journalists grew, a sense that the country’s leaders were not leveling with them, either on the record or in confidence. The support that reporters and editors provided John Moss for his efforts to pry public information out of the executive branch of government is one indicator. Another was the public scandal over President Eisenhower’s initial, embarrassing lies about the downing of Francis Gary Powers’s U-2 spy plane over Russia in 1960. Administration spokesmen at first declared that the U-2 was a weather plane and denied Soviet charges that the plane was engaged in espionage. The Soviets, however, were correct. Roger Mudd, later a prominent national correspondent for CBS, was then a reporter for a local television station in Washington. He recalled later that veteran Washington correspondents were shaken that the government had straight-out lied to them. Most journalists in 1960, he said, were “trusting and uncritical of the government; they tended to be unquestioning consumers and purveyors of official information.” Just a few months after the U-2 incident, the first of the televised Kennedy-Nixon debates took place. Although presidential debates would not be repeated until 1976, they were an important symbol of a new media power and at the same time a novel pressure for a new transparency—staged transparency, to be sure, but nonetheless a site available for surprise and spontaneity. As media scholars Daniel Dayan and Elihu Katz have argued, these first TV presidential debates were among those “media events” that “breed the expectation of openness in politics and diplomacy.” Media events “turn the lights on social structures that are not always visible, and dramatize processes that typically take place offstage.” As people around the world took up television, expectations of openness spread globally, almost as if bundled into the technological package. A presumption of openness carried on into the Kennedy administration. Kennedy’s critics charged, and many of his friends conceded, that this was more style than substance. 
But, as historian Cynthia Harrison writes about the Kennedy administration, “style and substance are not unrelated phenomena.” Most of Lyndon Johnson’s impressive success in domestic legislation grew out of Kennedy administration initiatives, including, notably, both civil rights legislation and engagement in Vietnam; in Harrison’s words, “in both cases the ‘style’ was an authentic political event. It encouraged national energies that continued beyond Kennedy’s life, through the 1960s, facilitating movements for women’s rights, consumer rights, ecology, and mental health services.” As indicated in the work of Esther Peterson and Philip Hart, the hand of the Kennedy administration was visible in encouraging consumer reforms and women’s rights, and it also played a significant role in abetting environmental awareness and environment-centered legislation. As virtually all accounts by journalists and historians attest, news coverage of government, politics, and society opened up in the 1960s and 1970s. It was not the jousting on Capitol Hill before the Moss Committee, nor the U-2 incident, nor the TV debates; it was not any specific skirmish or even the sum of the confrontations between the press and U.S. military spokesmen in Vietnam as the war there dragged on; and it was not the rise in the 1960s of irreverent underground publications or the growing respect for maverick reporter I. F. Stone, who became a hero for politically committed young reporters of the day. It was all of this and more. There was a generational change, and there was a broad cultural change that made the news media a chief constituent of the opening up of American society and not simply its transcriber (although “transcribing” is never as simple as it sounds). The change in the media’s role was the joint product of several closely connected developments: government— especially the federal government—grew larger and more engaged in people’s everyday lives; the culture of journalism changed and journalists asserted themselves more aggressively than before; and many governmental institutions became less secretive and more attuned to the news media, eager for media attention and approval. As the federal government expanded its reach (in civil rights, economic regulation, environmental responsibility, and social welfare programs such as food stamps, Medicare, and Medicaid), as the women’s movement proclaimed that “the personal is political,” and as stylistic innovation in journalism proved a force of its own, the very concept of “covering politics” changed, too. News coverage became at once more probing, more analytical, and more transgressive of conventional lines between public and private, but this recognizes only half of the influence of a changing journalism. The other half is perhaps even more important, if harder to document: not only did the news media grow in independence and professionalism and provide more comprehensive and more critical coverage of powerful institutions, but powerful institutions adapted to a world in which journalists had a more formidable presence than ever before. 
Of course, politicians had resented the press much earlier—President George Washington complained about how he was portrayed in the newspapers; President Thomas Jefferson encouraged libel prosecutions in the state courts against editors who attacked him and his policies; critics in the 1830s bemoaned that the country had become a “pressocracy”; and President Theodore Roosevelt, one of the great manipulators of journalists, famously castigated the negative tone of reporters he dubbed “muckrakers.” Even so, Washington politics remained much more exclusively an insiders’ game than it would be later. The Washington press corps was more subservient to the whims and wishes of editors and publishers back home than to official Washington, and in any event, politicians in Washington kept their jobs less by showing themselves in the best light in the newspapers than by maintaining their standing among their party’s movers and shakers in their home state. Members of the U.S. Senate were not popularly elected until 1914; before then, a remoteness from popular opinion was a senator’s birthright. And while in the early twentieth century a small number of writers at the most influential newspapers and a small number of syndicated political columnists came to be influential power brokers, the press as a corporate force did not have an imposing presence. Presence is what the media acquired by the late 1960s. Presence meant not a seat at the table but an internalization in the minds of political decision makers that the media were alert, powerful, and by no means sympathetic. In a shift that was partially independent of how journalists covered Washington (and other centers of political power), those who held political power came to orient themselves in office or in seeking office to public opinion and to their belief that the media both reflected and influenced it. The story of a transformed journalism has been told many times before, but it has generally failed to specify what exactly the transformation looked like in the pages of the newspapers. Much attention has focused on the very important growth of investigative reporting. But the quantitatively more significant change between the 1950s and the early 2000s has been the rise of what I call contextual reporting, following research Katherine Fink and I have conducted. In contextual reporting, the journalist’s work is less to record the views of key actors in political events and more to analyze and explain them with a voice of his or her own. More than other concurrent changes, this one altered the front page, putting a premium on the stories behind the story. This shift, like that toward investigative reporting, made the news media a more assertive presence in American public life, and helped make the press implicitly an evangelist for openness, through its own vigor. The press became an explicit advocate for practices premised on a cultural or philosophical, if not legal, right to know in promoting FOIA and later in editorializing on behalf of “sunshine” rules in Congress. But much more generally, the move from writing down what political leaders said to contextualizing what they said and did, and why, offered a new model of journalism. The new model seeped into the work of journalism with little fanfare, barely even notice. Journalists continued to defend their work as “objective” or “balanced” while, in the newsroom, the new model transformed what they meant by such terms. 
Excerpted from "The Rise of the Right to Know: Politics and the Culture of Transparency, 1945-1975" by Michael Schudson. Published by Belknap Press. Copyright 2015 by Harvard University. Reprinted with permission of the publisher. All rights reserved.

Published on October 10, 2015 11:00

Drunk in love: The fine line between infatuation and intoxication

Scientific American

Many studies trumpet the positive effects of oxytocin. The hormone facilitates bonding, increases trust and promotes altruism. Such findings earned oxytocin its famous nickname, the “love hormone.” But more recent research has shown oxytocin has a darker side, too: it can increase aggression, risk taking and prejudice. A new analysis of this large body of work reveals that oxytocin's effects on our brain and behavior actually look a lot like another substance that can cut both ways: alcohol. As such, the hormone might point to new treatments for addiction. Researchers led by Ian Mitchell, a psychologist at the University of Birmingham in England, conducted the meta-analysis, which reveals that both oxytocin and alcohol reduce fear, anxiety and stress while increasing trust, generosity and altruism. Yet both also increase aggression, risk taking and “in-group” bias—favoring people similar to ourselves at the expense of others, according to the paper published in August in Neuroscience and Biobehavioral Reviews. The scientists posit that these similarities probably exist because oxytocin and alcohol act at different points in the same chemical pathway in the brain. Oxytocin stimulates release of the neurotransmitter GABA, which tends to reduce neural activity. Alcohol binds to GABA receptors and ramps up GABA activity. Oxytocin and alcohol therefore both have the general effect of tamping down brain activity—perhaps explaining why they both lower inhibitions. Clinical trials have uncovered further interplay between the two in demonstrating that a nasal spray of oxytocin reduces cravings and withdrawal symptoms in alcoholics. These findings inspired a new study, published in March in the Proceedings of the National Academy of Sciences USA, which suggests oxytocin and alcohol do more than just participate in the same neural pathway: they may physically interact. The researchers showed that oxytocin prevented drunken motor impairment in rats by blocking the GABA receptor subunit usually bound by alcohol. Mitchell speculates this interaction is specific to brain regions that regulate movement, thereby “sparing the usual motor deficits associated with alcohol but still influencing social and affective processes.” These findings suggest getting “love drunk” may impede a person from getting truly drunk—or at least make getting drunk less appealing. They also offer a possible biological explanation for why social support is so effective at helping people beat addictions. The researchers' biggest hope for now is that in the near future, the similarity between these two chemicals will allow scientists to develop oxytocin-based treatments for alcoholics.

Published on October 10, 2015 10:00

The Republican suicide ballad: The party that can’t govern, and the country that hates its guts

It is time once again to ponder the question of whether the Republican Party can be saved from itself – and if so, what exactly there is to save and why anyone should care. The GOP’s current struggle to find someone, or indeed anyone, who is willing to serve as Speaker of the House of Representatives, the position once held by Henry Clay and Sam Rayburn and Tip O’Neill – the president’s most important counterbalance and negotiating partner, and traditionally the second most powerful job in Washington – is of course a tragic and/or hilarious symptom of much deeper dysfunction. It’s always time for this question in the cracked crucible of 21st-century American politics, and when considered in full it reaches beyond the arena of Machiavellian power struggle into the abstract theological realm favored by Church scholastics of the Middle Ages. How many angels can dance on the head of a pin? Whatever the number is, it’s infinitely larger than the number of Republicans who want to pick up John Boehner’s poisoned gavel. How large are Heaven and Hell, measured in cubits and ells? Not large enough, it appears, to encompass the pride and arrogance of the House Freedom Caucus, the group of 40-odd far-right Jacobins who first sabotaged Boehner’s speakership and then torpedoed the candidacy of his chosen replacement, Kevin McCarthy. In the great tradition of doomed revolutionaries, the Freedom Caucus prefers death, or at least political annihilation – which will be theirs one day, and sooner than they think – to the dishonor of compromise. It’s easy to make fun of the vainglory and self-importance embodied in the group’s name, but it strikes me as accurate enough. They have declared themselves free of all the responsibilities of government, free from the need to discuss or negotiate or pass any legislation that has the slightest chance of being enacted. They represent freedom in precisely the same sense that death represents freedom from being alive. They could just as well be called the Suicide Caucus – or the Satanic Caucus, in the grandiose spirit of Milton’s fallen angel, who fights on with no hope of victory: “To do ought good never will be our task,/ But ever to do ill our sole delight.” I imagine that when we all come back to work on Monday we will find that Paul Ryan has been arm-twisted into filling the role, at least for the next 14 months. But you can’t really blame Ryan, who is devious and intelligent and would like to be president someday, for feeling reluctant to commit political hara-kiri in this fashion. Even before the Republican Party constructed an alternate universe around itself and blotted out political reality, the speaker’s gavel was a final destination, not a pathway to anything larger. As numerous historical articles have now informed us, the last (and only) former House Speaker to move on to the White House was James K. Polk in 1844, and that happened after he had left Congress and served a term as governor of Tennessee. In any event, the fiscal whiz kid of Janesville, Wisconsin, is no better than a Band-Aid applied to the GOP’s gaping wound. Once upon a time, not so very long ago, the Republicans were boring and small-minded but not especially crazy. They pursued a disastrous foreign-policy agenda during the Cold War, but they were not alone in that, and one could argue that marked the first stages of betraying the tradition of Edmund Burke-style conservatism. 
On fiscal and social issues, they stood with country-club middle management and small-town Presbyterians and the affluent families who owned the third-largest bank in Indiana or a chain of hardware stores in and around San Diego. I’ve written previously that Richard Nixon and Dwight Eisenhower resemble Leon Trotsky on a crack bender compared to today’s Republicans. On a more personal level the GOP past will always be embodied for me by Mrs. Supinger, who was the postmistress in the tiny California town where I spent much of my childhood. A severe but polite lady whose hair was always immovably styled and gelled and set in the mode of roughly 1956 (even though there was no beauty parlor within 15 miles), Mrs. Supinger was seen to weep bitterly after Nixon resigned in the summer of 1974, and refused to remove his official portrait from the post-office wall. Maybe that act of civil disobedience – which, in all honesty, I still find perversely admirable – was the beginning of the Republican embrace of "No," the nihilistic or Satanic refusal that has poisoned the contemporary political climate. But if Mrs. Supinger is watching us now, from a high-backed armchair whose lace-covered armrests never need mending, I can’t imagine that she’s pleased to see her beloved party in ruins. Of course I believe that the Republicans have brought their gruesome predicament upon themselves and that they richly deserve their fate, although they have certainly been nudged toward the precipice by Democratic cowardice and incompetence. Some degree of liberal Schadenfreude is irresistible, and I too cackled when Nancy Pelosi was asked why nobody wanted to be speaker and responded, “You’ll just have to ask nobody.” But this ugly spectacle could have dire consequences for the country, in the near future and for a long time to come. Whoever the GOP shoves to the podium, whether it’s Ryan or Darrell Issa or Jason Chaffetz or someone even dumber than them, will either have to default on the national debt in November and shut down the government in December or face yet another enraged right-wing revolt. Either way, this Congress (and most likely the next one too, regardless of who is elected president) is a lost cause, and the future viability of bipartisan politics is very much in doubt. Dear Mrs. Supinger, I will try to explain: Even though the pundit class keeps on telling us that order will eventually be restored in the Republican presidential campaign – in the form of Jeb Bush or Marco Rubio, presumably – I suspect they’ve been smoking that stuff that President Nixon told you was sapping the national morale. At this writing, the leading candidates are still a deranged billionaire who makes impossible promises, a retired black doctor who says outrageous things in a quiet voice, and a likable woman who was a spectacular failure in the business world before she became a spectacular failure in politics. No, I’m not kidding! And none of those people has ever been elected to anything, including church deacon, yearbook president or assistant treasurer of the Ayn Rand fan club. How did that happen? And what does the crazy-time presidential race have to do with the fact that the Republicans hold their largest congressional majority since 1931 – 247 of the 435 House members – but still can’t find anybody who can win the speakership in a floor vote? Well, Mrs. S, the secret is that nobody actually likes the Republican Party, as it currently exists. 
The American public is divided between those who hate the Republicans for being irrational and intransigent and racist, and those who hate them for not being irrational and intransigent and racist enough. The only people who still feel affection toward the “Republican brand” are you and a tiny handful of extremely rich people, and since you’re dead you technically don’t count. That big Republican victory in the 2014 midterms was a masterfully engineered work of fiction – an artifact of voter suppression, voter apathy and the intensive gerrymandering imposed by GOP-dominated state legislatures after the 2010 census. Republican candidates won barely 51 percent of the vote, but thanks to the imaginative redistricting plans imposed in numerous states, that modest margin was dramatically overrepresented in the final result. Even more important, voter turnout fell to 36.6 percent, the lowest in any national election for more than 70 years. Just under 40 million people actually voted for Republicans, while 35.4 million voted for Democrats. That was the lowest Democratic total in 12 years – and also the lowest GOP total in eight years. Compare those numbers to the Obama re-election year of 2012, when Democratic congressional candidates attracted almost 60 million votes (and still lost seats thanks to the gerrymander), or to the electoral bonanza of 2008, when they got more than 65 million votes. So on one hand, nearly 30 million Democratic votes have vanished into thin air over the course of the last six years, which has been catastrophic to the party’s dreams of a semi-permanent electoral majority. But that doesn’t mean those people switched sides. The Republicans lost voters by the bucket-load too, just on a less epic scale. Their supposedly glorious 2014 victory required 18 million fewer votes than their record high total of 2012 – which was not enough to win the overall popular vote. In other words, Mrs. Supinger, the Republicans won their gigantic majority by poisoning and paralyzing the government like a Boehner-headed giant scorpion, and now they must face the consequences. They convinced enormous numbers of Democrats and lots of moderate Republicans to give up and stay home because American politics had become worthless and terrible, a conclusion that is difficult to fault. They appealed almost exclusively to their angriest, most zealous and most overtly racist base voters, a loud but relatively small minority of the general population who cannot be called “conservative” in any sense of the word. These are the people who constantly yammer for more tax cuts, in a country where income-tax rates, especially for the wealthiest citizens, are far below those our grandparents paid in the 1950s. They want all the Mexicans deported, even though illegal immigration is clearly an economic benefit (and is declining on its own). They want to slash social spending to levels that would make the 2008 recession look like a nationwide beach vacation. They want to defund Obamacare and Planned Parenthood, and make steep cuts to Medicare and Medicaid and Social Security – and the only good thing about their agenda is that since they can’t have all of it, they don’t want any of it. Now the Republicans in Congress, along with the “mainstream” or “establishment” Republican presidential candidates, are discovering what should have been obvious all along: The Frankenstein voter base they bred and nurtured with so much money and so much cunning does not like them or trust them. 
The fanatics of the Satanic Suicide Caucus and their supporters do not want the current Republican leadership to govern anything, or even try to. They have devoured the old Republican Party of Mrs. Supinger’s day from within, like an alien parasite. When they repeat its catchphrases about fiscal responsibility and social order in their metallic parasite voices, what they really mean is fiscal holocaust, social anarchy and class war against poor women, black people and immigrants. They dream of conquest, but whatever they can’t conquer – starting with their own political party – they will happily destroy.

Published on October 10, 2015 09:00

Amy Schumer’s “SNL” genius: Tonight’s challenge, topping herself

It's been Amy Schumer's year — a critically acclaimed comedy show, an Emmy, and a hit movie that helped debunk the myth that no one wants to watch comedies with female leads — so of course she's hosting "Saturday Night Live" this weekend. Considering that she's the host and star of her own sketch comedy show, this one has a good chance to be a stronger episode than those that have to work around people who aren't comedy actors, or even actors at all. Of course, the real question is whether or not "Saturday Night Live" can or will even bother to come close to some of the searing satire that Schumer brings to her own show, "Inside Amy Schumer." Like most sketch comedy shows, "Inside Amy Schumer" is hit-or-miss, but when it hits, most frequently on the topic of gender relations, it's often some of the funniest and most insightful stuff on TV. Also now one of the most innovative. The stand-out episode of the season was an episode-long sketch that parodied the '60s-era drama "12 Angry Men," except instead of debating whether a defendant is guilty, the men were debating whether Amy was hot enough to be on basic cable. It really shouldn't have worked, between the outdated reference and the fact that most sketches are too long at 5 minutes. But somehow it came together to be one of the most memorable moments in Comedy Central's history, up there with the best "Chappelle Show" episodes and some early "South Park." It worked for the reason that a lot of Schumer's best comedy bits about gender and sexism work, because Schumer really gets, and is totally unafraid of targeting, the culture of toxic masculinity. Watching the "12 Angry Men" episode, it quickly becomes apparent that the ostensible topic — Schumer's looks — isn't really the point of it at all. Instead, Schumer is targeting the way that men talk about women's bodies as a way to bolster their own egos and try to impress other men. Deigning a woman "hot or not" has little to do with actual sexual attraction and everything to do with trying to make other men think of you as studly and powerful, powerful enough to wave away a woman's body as if you're a king on a throne, declining gifts from your simpering supplicants. But what makes the sketch really next level is that it when it turns, it does so in a humanizing, though still hilarious way. One by one, the men start to crack, admitting in turn that they don't actually think that Amy is some kind of hideous troll. And while none of them end up coming across as some kind of prince, it cleverly revealed the core of vulnerability and fear that often lays behind this kind of masculine bravado. Months before the #MasculinitySoFragile hashtag took off on Twitter, Schumer had created the masterpiece on the theme. Many of her best sketches send up male entitlement in just this way. "Football Town Nights," a parody of "Friday Night Lights" with Josh Charles as the football coach, perfectly nailed the way that so many men, usually seen overwhelming the comment sections on any online article on rape, try to find some kind of exception to the don't-have-sex-without-consent rule. "Last F*ckable Day" zeroed in on way that male-run Hollywood wants to put women out to sea for daring to age in the same way men are allowed to do. 
My personal favorite, however, might be "Hello M'Lady," an ad for a fake smartphone app to help women manage those self-proclaimed "nice guys" that linger around, hoping that if they do you enough favors, you'll end up feeling guilty and repay the debt by reluctantly starting a relationship with them. Clever for noticing how often this happens, genius for showing that such guys aren't nice at all, but have an overwhelming sense of entitlement that leads them to think they are owed a relationship with a woman just because they put in some time, regardless of what she actually wants for herself. Bonus points because the sketch drew a number of angry comments from men who saw themselves in the sketch and were defensive about it, and angry at women for not wanting to play their game. Not that Schumer reserves her mockery only for men and their delusions. She teases women all the time on her show. She especially takes pleasure in sending up women that are, for lack of a better term, basic: Unimaginative and incurious, obnoxious and privileged, hungry for male attention and callous to the feelings of others. She does this by playing a character, who is usually named "Amy", that exhibits all these traits and who does things like uses the occasion of a bridesmaid's toast to remind everyone that she's had sex with the groom before. (This character is on full display in the promos for "Saturday Night Live," where Schumer pretends to forget Vanessa Bayer's name.) Playing a character with the same name as yourself to explore some of your ugliest urges and desires is hardly unknown in comedy, of course. Louis CK does it on "Louie" and Larry David mastered the form on "Curb Your Enthusiasm." But Amy's spin on the form is a specifically feminine one, exploring the specific ways the urge to be a boor manifests itself in women, who aren't allowed the same cultural room for open aggression as men. Hopefully, the writing staff at "Saturday Night Live" will channel a little of the sharp insight of "Inside Amy Schumer" about gender and culture. But even if they don't, it'll still be worth turning in to see this rising star return to the live comedy format where she cut her teeth.It's been Amy Schumer's year — a critically acclaimed comedy show, an Emmy, and a hit movie that helped debunk the myth that no one wants to watch comedies with female leads — so of course she's hosting "Saturday Night Live" this weekend. Considering that she's the host and star of her own sketch comedy show, this one has a good chance to be a stronger episode than those that have to work around people who aren't comedy actors, or even actors at all. Of course, the real question is whether or not "Saturday Night Live" can or will even bother to come close to some of the searing satire that Schumer brings to her own show, "Inside Amy Schumer." Like most sketch comedy shows, "Inside Amy Schumer" is hit-or-miss, but when it hits, most frequently on the topic of gender relations, it's often some of the funniest and most insightful stuff on TV. Also now one of the most innovative. The stand-out episode of the season was an episode-long sketch that parodied the '60s-era drama "12 Angry Men," except instead of debating whether a defendant is guilty, the men were debating whether Amy was hot enough to be on basic cable. It really shouldn't have worked, between the outdated reference and the fact that most sketches are too long at 5 minutes. 
But somehow it came together to be one of the most memorable moments in Comedy Central's history, up there with the best "Chappelle Show" episodes and some early "South Park." It worked for the reason that a lot of Schumer's best comedy bits about gender and sexism work, because Schumer really gets, and is totally unafraid of targeting, the culture of toxic masculinity. Watching the "12 Angry Men" episode, it quickly becomes apparent that the ostensible topic — Schumer's looks — isn't really the point of it at all. Instead, Schumer is targeting the way that men talk about women's bodies as a way to bolster their own egos and try to impress other men. Deigning a woman "hot or not" has little to do with actual sexual attraction and everything to do with trying to make other men think of you as studly and powerful, powerful enough to wave away a woman's body as if you're a king on a throne, declining gifts from your simpering supplicants. But what makes the sketch really next level is that it when it turns, it does so in a humanizing, though still hilarious way. One by one, the men start to crack, admitting in turn that they don't actually think that Amy is some kind of hideous troll. And while none of them end up coming across as some kind of prince, it cleverly revealed the core of vulnerability and fear that often lays behind this kind of masculine bravado. Months before the #MasculinitySoFragile hashtag took off on Twitter, Schumer had created the masterpiece on the theme. Many of her best sketches send up male entitlement in just this way. "Football Town Nights," a parody of "Friday Night Lights" with Josh Charles as the football coach, perfectly nailed the way that so many men, usually seen overwhelming the comment sections on any online article on rape, try to find some kind of exception to the don't-have-sex-without-consent rule. "Last F*ckable Day" zeroed in on way that male-run Hollywood wants to put women out to sea for daring to age in the same way men are allowed to do. My personal favorite, however, might be "Hello M'Lady," an ad for a fake smartphone app to help women manage those self-proclaimed "nice guys" that linger around, hoping that if they do you enough favors, you'll end up feeling guilty and repay the debt by reluctantly starting a relationship with them. Clever for noticing how often this happens, genius for showing that such guys aren't nice at all, but have an overwhelming sense of entitlement that leads them to think they are owed a relationship with a woman just because they put in some time, regardless of what she actually wants for herself. Bonus points because the sketch drew a number of angry comments from men who saw themselves in the sketch and were defensive about it, and angry at women for not wanting to play their game. Not that Schumer reserves her mockery only for men and their delusions. She teases women all the time on her show. She especially takes pleasure in sending up women that are, for lack of a better term, basic: Unimaginative and incurious, obnoxious and privileged, hungry for male attention and callous to the feelings of others. She does this by playing a character, who is usually named "Amy", that exhibits all these traits and who does things like uses the occasion of a bridesmaid's toast to remind everyone that she's had sex with the groom before. (This character is on full display in the promos for "Saturday Night Live," where Schumer pretends to forget Vanessa Bayer's name.) 
Playing a character with the same name as yourself to explore some of your ugliest urges and desires is hardly unknown in comedy, of course. Louis C.K. does it on "Louie," and Larry David mastered the form on "Curb Your Enthusiasm." But Schumer's spin on the form is a specifically feminine one, exploring the ways the urge to be a boor manifests itself in women, who aren't allowed the same cultural room for open aggression as men.

Hopefully, the writing staff at "Saturday Night Live" will channel a little of the sharp insight about gender and culture that "Inside Amy Schumer" brings. But even if they don't, it'll still be worth tuning in to see this rising star return to the live comedy format where she cut her teeth.

Published on October 10, 2015 08:59

October 9, 2015

“Death doesn’t come like it does in the movies”: What my mother’s last days taught me about our right to die

My mom died three weeks ago. She had ovarian cancer. Or what they suspect started as ovarian cancer. By the time the CT scan was finally ordered, the disease had spread throughout her pelvis.

When my friends ask how I’m doing, I say, “I am full of grilief.” Grief for what my mom endured. Relief that it’s finally over. Grilief.

During the 99 days between Mom’s diagnosis and death, she was either undergoing chemotherapy, dealing with the side effects of chemo or seeking ways to relieve her pain. Her hair fell out. She couldn’t eat. She grew weaker and weaker. A tube was placed in her abdomen to drain the fluid. She looked like a cruel science experiment. On her 77th birthday, her doctor told her the chemo had not worked; the cancer had spread. There was nothing more they could do.

That’s when my dad, two sisters and I began caring for her at home. It was easy at first. We helped her sit up, squirted Roxanol under her tongue. “I’m soooooo happy,” she’d say as the drugs kicked in and we tucked her back in bed. 

Baby Bird, we called her.

We thought this was how it would all go down. We’d keep giving her drugs until she died of happiness. But that, we came to learn, is the easy road to death. There is another road, a road euphemistically called “the difficult road.” 

For reasons we may never fully understand, Mom began to suffer from something called terminal agitation, which sounds like something you might experience at an airport when your flight is delayed. But it was nothing like that. 

Mom begged to be put to sleep, then begged for her old life back. She stayed awake for 48 hours straight, talking to herself. She complained that my dad smelled like tomato soup. She told a hospice nurse that we were withholding food from her. She flailed her arms and spouted premonitions about the daughter of a local meteorologist. 

Our mom had been a sweet, well-mannered woman known for countless acts of kindness. This was not our mom. 

We called this person Dark Mom.

The change in Mom’s behavior was shocking and bewildering. And, we later learned, not uncommon. According to the National Hospice and Palliative Care Organization and Hospice Pharmacia, 42 percent of dying patients experience some form of terminal restlessness in the final 48 hours of life. Mom’s agitation lasted five days.

When we could no longer care for her at home, Mom was taken by ambulance to a hospice facility. She insisted she had arrived by airplane. She slung insults, called the nurses “novices.” She tried to hitch her bed sheet to the ceiling fan and climb to freedom. She asked for a gun.

Kimmy, one of the nurses, told me I would miss this phase of Mom’s journey. 

“There is no way,” I told her.

But Kimmy was right. 

By the time the hospice staff got Mom’s agitation under control, she became unresponsive. She lay with her mouth open, gazing into space. Her breath grew shallow, irregular. She ran fevers, her fingertips turned blue, the skin of her face turned orange, then white, then gray. Her left ear lost its familiar shape.

While we waited for Mom to die, my sisters and I whispered to the nurses. “Isn’t there something you can give her to help her along?”

“Everyone asks us that,” they said.

The nurses did everything in their power to make Mom comfortable. They administered her meds, bathed her, swabbed her dry mouth with a tiny blue sponge, and worked in tandem to gently move Mom from her side, to her back, to her other side. 

Despite their efforts, Mom’s brow was often furrowed. Her exhale had that sharp sound you make after a hard cry. When Dad kissed her and stroked her cheek, tears pooled in the corners of her eyes. 

On Monday, we learned that the governor of California, Jerry Brown, had signed a measure to help terminally ill people end their lives.

Before Mom’s cancer took her down the “difficult road,” my sisters and I had never given assisted dying much thought. If I’d heard about a proposal to give doctors the right to prescribe lethal doses of painkillers to terminally ill patients with less than six months to live, I wouldn’t have opposed it, but I might not have considered it an essential and deeply humane piece of legislation, as I do now. I still recognize the medical and religious reasons many hold for opposing it, but having watched someone I love suffer at the end the way my mother did, I could not in good conscience feel anything but gratitude toward this measure. We don’t know if Mom would have wanted a physician-assisted death if it had been available in our state. But watching her die — knowing that there are so many others out there who suffered longer and without hospice support — made us wish that everyone at least had the option.

One of the great accomplishments of 21st-century medicine is our ability to mitigate and abbreviate pain, to spare patients needless anguish. When we were caring for Mom at home, a hospice nurse brought us a “comfort pack” of medications, which we stored in the refrigerator and used to ease her pain, delirium and anxiety — common symptoms in terminally ill patients. Once she was taken to the hospice facility, the nurses administered these medications under a doctor’s supervision. But because her symptoms were constantly shifting and magnifying, the dosages changed too, and it soon became impossible for my family to distinguish which of Mom’s symptoms were due to the disease process and which were due to side effects of the meds. We agonized over her agony, and I wondered: if there were medications that could “cure” Mom, completely eliminating her suffering and transporting her with love to her ultimate destination, why shouldn’t she have the right to them if she wanted them? Having watched what she went through, I would certainly want that right for myself.

Everyone knows that death is inevitable, but we don’t spend enough time talking about the reality of it. Death doesn’t come like it does in the movies. You don’t always say something profound, close your eyes, and drift away. Death can be protracted, ugly and painful, and we can’t remove grief from the process of dying and letting loved ones go. But surely we can pass laws to give people the option to die without suffering needlessly. 

My older sister and I had the honor of holding Mom’s hand when she finally passed from this world. A wave of grief washed over us, followed by a wave of relief.

Published on October 09, 2015 16:00