Oxford University Press's Blog

November 8, 2015

International security and foreign affairs in 2014 [interactive map]

What was happening in the world last year? Events such as the devastating protest-turned-conflict in Ukraine, or the maritime disputes between states in the South China Sea, have wide-reaching repercussions – from the amount a country spends on its military to the direction whole regions’ foreign policies take. The SIPRI Yearbook, published annually in print and online, tracks the global developments of the previous year and provides authoritative data and expert analysis on the most important issues facing the global community, offering a deeper and more nuanced understanding of the headlines you’ve been reading over the past year.


Did you know, for example, that Sweden became the first EU state to recognize Palestine as a sovereign country? In response, Israel recalled its ambassador to Sweden in October 2014. Or that Africa continued to be the main focus of peace operations in 2014, with almost half of global peace operations taking place on the continent?


With snippets taken from the SIPRI Yearbook 2015, which analyses significant events across the globe in the previous year, the map below helps you explore the global state of affairs in 2014:



Featured image credit: Berkut Riot Police by the Cabinet of Ministers of the Ukraine by Ivan Bandura. CC BY 2.0 via Wikimedia Commons




Published on November 08, 2015 03:30

Does news have a future?

Journalism is in trouble and democracy is at risk. Countless editorialists have sounded this ominous warning in the past few years in Britain and the United States.


For over two centuries, newspapers were the dominant news medium. Yet today ‘dead tree’ media, like stamp collecting, is, well, so twentieth century. Now that millions of Americans get their news from social media online, newspapers have been in free fall, prompting many pundits to wonder aloud if journalism has a future.


A crisis is a terrible thing to waste, and for some time a small battalion of historians has been rewriting the history of our journalistic past to meet the needs of the day. The history of news, it turns out, is not the same thing as the history of newspapers, while the history of newspapers has often been misunderstood.


Contrary to what seems to be a common assumption, newspaper journalism has long been funded mostly not by subscribers but by advertisers. Not even the street-wise ‘newsies’ of Broadway musical fame could guarantee the revenue that Joseph Pulitzer or William Randolph Hearst needed to balance their books. Advertisers footed the bill.


The rise of the Internet has dried up much of this revenue, confronting the journalistic profession with an existential crisis. Yet this is not the crux of the problem. At the heart of the current crisis of journalism is a simple fact that we ignore at our peril: news reporting has almost never paid for itself. To provide the citizenry with the information it needs to remain well informed, journalists for centuries designed market-limiting institutional arrangements that ranged from government subventions to legally enforced cartels. Market failure has gone hand in hand with journalistic success.


Can the unregulated Internet dreamed of by technophiles solve our problems? If history is any guide, the answer is a resounding no. The history of journalism provides little warrant for those who assume that market forces will somehow provide the citizenry with the news it needs. ‘Net neutrality’ is the Social Darwinism of the digerati. The marketplace of ideas has been, paradoxically, the most robust when market forces have been held in check.


For journalism to thrive, as it must, and if democracy is to endure, new institutional arrangements must be devised to enable journalists to build and sustain the enduring organizations that have always been necessary for the creation of high-quality news. Faddish stopgaps like citizen journalism can do little to solve the fundamental problem: high-quality journalism is expensive, and some way must be found to foot the bill. By understanding how journalism worked in the past, we can help ensure that it has a future.


Featured image credit: Newspapers B&W (4) by Jon S. CC-BY-2.0 via Flickr.




Published on November 08, 2015 02:30

Preparing for AMS Louisville

We’re getting ready for the annual American Musicological Society Conference, beginning 11 November 2015 in Louisville, Kentucky. From panels to performances, there’s a lot to look forward to. We asked our past and present attendees to tell us what makes AMS and Louisville such exciting places to be this month.


Program Highlights

If you’re attending the AMS Louisville Conference, keep your eye out for these exciting events on AMS’s schedule. Here are some quick highlights selected by our editors.


Thursday


3:00     New Music Festival: Kaija Saariaho, Convocation Lecture, University of Louisville, Comstock Hall


8:00     New Music Festival: Louisville Orchestra, University of Louisville, Comstock Hall


Friday


12:00    John Schneider, guitar, Frazier History Museum


12:15    Michael Beckerman discusses Louisville’s “unconscious composers”, Nunn


8:00     Kentucky Opera Presents: Jake Heggie, Three Decembers, Brown Theater


8:00     New Music Festival: University of Louisville Orchestra, University of Louisville, Comstock Hall


Saturday


12:30    Improvisation in Beethoven’s Violin Sonatas, Katharina Uhde (Valparaiso University), violin, R. Larry Todd (Duke University), piano, Nunn


8:00     Louisville Orchestra, Music of Led Zeppelin, Whitney Hall, Kentucky Center


8:00     New Music Festival: Electronic Music, University of Louisville, Comstock Hall


Tips from OUP Staff

“One I can answer right away: I’m a big fan of the restaurant Hillbilly Tea in Louisville. Delicious eats, all locally sourced and strongly recommended!”


–Norm Hirschy, Editorial


“I always enjoy AMS as a place to make new friends and catch up with old ones in the musicology community. My favorite AMS memory is attending an early Sunday morning panel on women in rock music featuring Annie Randall, author of Dusty: Queen of the Post Mods.”


–Richard Carlin, Editorial


“I love AMS because I get to talk in person to the people I work with all year. I meet with the editorial boards for Grove Music, Oxford Bibliographies, and Oxford Handbooks. It’s the only time all year where we get to sit down in a room together to talk about those thorny issues that benefit from a prolonged discussion rather than a phone call or an email. But I also love meeting with scholars one-on-one to talk about what they’re working on. It’s one of the great pleasures of my job – listening to people talk about the things they are really passionate about. I’ve never been to Louisville before. I don’t usually have too much time for exploring while at AMS, but I’m hoping to visit the Louisville Slugger Museum, just a few blocks from the hotel, on behalf of my son, who’s an avid baseball player.”


–Anna-Lise Santella, Editorial


“Back in 2013, I filmed a video during the AMS conference with Deane Root and Charles Hiroshi Garrett. They sat down and talked about the second edition of The Grove Dictionary of American Music, and it was really interesting to hear about the past, present, and future of the project in both print and online.”


–Victoria Davis, Marketing


“This is my first AMS conference and my first time in Louisville, so I’m excited to learn more about the local culture while getting to know my coworkers. It’s a great opportunity to meet authors, musicians, and teachers who love music and music books as much as I do.”


–Celine Aenlle-Rocha, Marketing


“This is my first time traveling to a conference with Oxford so I’m looking forward to seeing all of our texts displayed after knowing all the hard work and preparation that goes into our exhibits.”


–Erin Stanton, Marketing


AMS has a fantastic list of dining suggestions to check out in between events.


If you’re attending the conference, stop by the Oxford University Press booth. You’ll have the chance to check out our books, including our new and bestselling titles on display at a 20% conference discount, and get free trial access to our suite of online products. To learn more about the AMS conference, check out their official website and follow along on Twitter at @OUPMusic and the #amslouisville hashtag.


Featured image: Louisville Skyline. Photo by The Pug Father. CC BY 2.0 via Flickr.




Published on November 08, 2015 01:30

Paradox and self-evident sentences

According to philosophical lore, many sentences are self-evident. A self-evident sentence wears its semantic status on its sleeve: a self-evident truth is a true sentence whose truth strikes us immediately, without the need for any argument or evidence, once we understand what the sentence means (and a self-evident falsehood wears its falsity on its sleeve in a similar manner). Some paradigm examples of self-evident truths, according to those who believe in such things at least, include the law of non-contradiction:


No sentence is both true and false at the same time.


which was championed as self-evidently true by Aristotle, and:


1+1 = 2


Note that if a claim is self-evidently true, then its negation is self-evidently false.


Now, we seem to have good reason to find the following claim at least initially plausible:


No self-referential statement is self-evident (whether true, false, or otherwise).


One thing that becomes somewhat obvious once we look at self-referential sentences like the Liar paradox:


This sentence is false.


and the self-referential sentences discussed elsewhere in this series, is that determining whether a particular self-referential sentence is true or false (or paradoxical, etc.) usually involves a lot of work, typically in the form of careful and complicated deductive reasoning.


Surprisingly, however, we can show that some self-referential sentences are self-evident. In particular, we will look at a self-referential sentence that is self-evidently false.


Of course, anyone who has read even a single installment in this series will likely guess that some sort of trick is coming up. Thus, in order to highlight exactly what is weird, and what is logically interesting, about the apparently self-evidently false self-referential sentence that we are going to construct, let’s first look at a more well-known self-referential puzzle.


The puzzle in question is the paradox of the knower (also known as the Montague paradox). Consider the following self-referential sentence:


This sentence is known to be false.


Now, we can easily prove that this sentence, which I shall call the ‘Knower’, is false: Assume that the Knower is true. Then what it says must be the case. It says that it is known to be false. So the Knower must be known to be false. But knowledge is what philosophers call factive: for any sentence P, if you know that P is the case, then P must be the case. So, since the Knower is known to be false, the Knower must be false. But then the Knower is both true and false. Contradiction. So the Knower is false. QED.


Further, a little trial and error will show that you can’t give a simple proof like the one above to show that the Knower is true. If you assume it is false, all you can conclude is that it is false but not known to be false, and that is not contradictory at all.


But wait! Two paragraphs earlier we gave a proof that the Knower is false. Proofs generate knowledge, however: if you read through that paragraph and were paying attention (and I hope you were!) then you know that the Knower is false. So the Knower is known to be false, since you know it to be so. But that’s just what the Knower says. So it is true after all. Now we have a genuine paradox!


Notice, however, that the two pieces of reasoning that we used to generate the paradox – the reasoning used to conclude that the Knower is false, and the reasoning used to conclude that the Knower is true – are of very different types. The first bit of reasoning is just a straightforward deduction about the sentence we are calling the Knower (well, as straightforward as such reasoning about self-referential sentences gets). The second bit of reasoning is different, however: in order to conclude that the Knower is true, we didn’t reason directly about the sentence we are calling the Knower, but instead carried out a second bit of reasoning about the first bit of reasoning.


In other words, we have a proof that plays two roles: First, it shows that the Knower is false, since its conclusion just is that the Knower is false. Second, it shows that the Knower is true, since our recognition of the existence of such a proof is enough to ensure that we have knowledge of the truth of the Knower.
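

For readers who like to see this structure set out symbolically, here is one compressed rendering; the notation is mine and not part of the original puzzle, with λ standing for the Knower and Kφ for ‘φ is known’:


\begin{align*}
&(1)\quad \lambda \leftrightarrow K\lnot\lambda && \text{what the Knower says of itself}\\
&(2)\quad K\varphi \rightarrow \varphi && \text{factivity of knowledge}\\
&(3)\quad \text{assume } \lambda\text{: then } K\lnot\lambda \text{ by (1), so } \lnot\lambda \text{ by (2), a contradiction}\\
&(4)\quad \lnot\lambda && \text{discharging the assumption in (3)}\\
&(5)\quad \text{recognizing (3) and (4) as a proof yields } K\lnot\lambda\text{, hence } \lambda \text{ by (1): paradox}
\end{align*}


Steps (3) and (4) play the first role; step (5), which reasons about the proof itself rather than directly about the sentence, plays the second.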


Something like this is also going on in the example to which we now turn: the paradox of self-evidence. Consider the following sentence:


This sentence is false, but not self-evidently false.


Let’s call this sentence the Self-evidencer. Now, we can prove that the Self-evidencer is self-evidently false.


First, we prove that the Self-evidencer is false: assume that the Self-evidencer is true. Then what it says must be the case. It says that it is false, but not self-evidently false. So the Self-evidencer is false, but it is not self-evidently false. But this means that the Self-evidencer is both true and false. Contradiction. So the Self-evidencer is false.


Now, we can prove that it is self-evidently false: given the previous paragraph, we know that the Self-evidencer is false. So what it says must not be the case. The Self-evidencer says that it is false, but not self-evidently so. So it must not be the case that the Self-evidencer is both false and not self-evidently false. So (by a basic logical law known as De Morgan’s law) either the Self-evidencer is not false, or it is self-evidently false. But the Self-evidencer is false. So it must also be self-evidently false. QED.
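

Purely as an illustration, and again in notation that is mine rather than the post’s (σ for the Self-evidencer, E¬σ for ‘σ is self-evidently false’), the two proofs just given compress to:


\begin{align*}
&(1)\quad \sigma \leftrightarrow (\lnot\sigma \wedge \lnot E\lnot\sigma) && \text{what the Self-evidencer says of itself}\\
&(2)\quad \lnot\sigma && \text{first proof: assuming } \sigma \text{ yields a contradiction via (1)}\\
&(3)\quad \lnot(\lnot\sigma \wedge \lnot E\lnot\sigma) && \text{from (1) and (2)}\\
&(4)\quad \sigma \vee E\lnot\sigma && \text{De Morgan (plus double negation) on (3)}\\
&(5)\quad E\lnot\sigma && \text{from (2) and (4)}
\end{align*}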


Again, as with the first proof about the Knower, there is no obvious contradiction or paradox lurking in the above argument – we have merely proven that the Self-evidencer is self-evidently false, similarly to how we might prove that the following sentence is true:


This sentence is either true or false.


But herein lies the problem. It seems like the only way that we can come to know that the Self-evidencer is self-evidently false is via a complicated bit of reasoning like the one we just gave. It seems unlikely that anyone will think that the falsity of the Self-evidencer is obvious, or forces itself on us, immediately once we understand the sentence.


Thus, we again have a proof that plays two roles. On the one hand, it seems to provide us with knowledge that the Self-evidencer is self-evidently false, since that is its conclusion. On the other hand, however, the fact that we can only come to this knowledge via this rather complicated proof (or some bit of reasoning equivalent to it) seems to be indirect evidence that the Self-evidencer is not self-evident after all. Contradiction.


Featured image: Paradox by Brett Jordan. CC BY 2.0 via Flickr.




Published on November 08, 2015 00:30

November 7, 2015

10 academic books that changed the world



What is the future of academic publishing? We’re celebrating University Press Week (8-14 November 2015) and Academic Book Week (9-16 November) with a series of blog posts on scholarly publishing from staff and partner presses. Today, we present Oxford’s list of ten academic books that changed the world.


Oxford University Press has a rich publishing history which can be traced back to the earliest days of printing. The first book was printed in Oxford in 1478, just two years after Caxton set up the first printing press in England. Since 1478, Oxford University Press has published thousands of academic books that have changed the world. With the help of our colleagues, we pick our top ten.


Which academic book do you think had the biggest impact on the world? Let us know in the comments below.



Featured image credit: Bookshelf by David Orban, CC BY 2.0 via Flickr.




Published on November 07, 2015 23:30

Debunking ADHD myths: an author Q&A

With the rise in the number of ADHD diagnoses, fierce controversies have emerged over the mental disorder—how we should classify it, how best to treat it, and even whether it exists at all. We have only recently (within the past century) developed our understanding of how it affects those diagnosed, with the number of papers on “attention deficit” exploding within the past decade. But with the sheer amount of information on ADHD that’s out there, it’s easy for anyone these days to be completely overwhelmed. What do we believe? Who should we believe? Psychologist Stephen P. Hinshaw and Pulitzer Prize-winning journalist Katherine Ellison, authors of ADHD: What Everyone Needs to Know, answered a few questions for us in hopes of debunking some myths about the disorder.


Isn’t ADHD just an excuse for bad parenting, lazy, bratty kids, and pill-poppers?

This is a prevalent myth—and one we spend a lot of time debunking in our book, in interviews, and in our public talks. Despite the skepticism and the stereotypes, substantial research has shown that ADHD is a strongly hereditary neurodevelopmental disorder. The quality of one’s parenting doesn’t create ADHD—although it can influence a child’s development—and children with this condition are not lazy but instead handicapped in their capacity to focus attention and keep still.


It’s not? Well, isn’t it just a plot by pharmaceutical firms that want to sell more stimulants?

Pharmaceutical firms have worked hard to expand awareness of ADHD as they pursue profits in a global market last estimated at $11.5 billion. But they didn’t create the disorder. Moreover, studies have shown that stimulant medications—the most common treatment for ADHD—can be quite helpful for many people with the disorder and are generally safe, when used as prescribed. Our position on medication boils down to this: there is no “magic bullet,” and medication should be used with caution, due to potential side-effects and valid concerns about dependency. But you shouldn’t let Big Pharma’s sometimes remarkably aggressive tactics dissuade you from trying medication, if a doctor says you need it.


But aren’t we all getting a little ADHD because of how much we’re all checking Facebook and Twitter?

Everyone in modern society is facing a new world of devices, social media, and demands for rapidly shifting attention. It’s quite possible that the evolution of technology is moving faster than our brains’ capacity to adapt. Still, it’s important to make a distinction between distraction that can be controlled by turning off your email versus genuine ADHD, which arises from the brain’s inefficient processing of important neurochemicals including dopamine and norepinephrine. While most of us today are facing environmentally-caused problems with distraction, people with ADHD are at a significant disadvantage.


How fast have US rates of ADHD been increasing, and why?

The short answer is: really fast. US rates of ADHD were already high at the turn of the millennium, but since 2003, the numbers of diagnosed children and adolescents have risen by 41%. Today, more than six million youths have received diagnoses, and the fastest-growing segment of the total population with respect to diagnosis and medication treatment is now adults, particularly women.


The current numbers are staggering. For all children aged 4-17, the rate of diagnosis is now one in nine. For those over nine years of age, more than one boy in five has received a diagnosis. Among youth with a current diagnosis, nearly 70% receive medication.


Why are US rates so much higher than anywhere else?

Epidemiological studies show that ADHD is a global phenomenon, with prevalence rates ranging from five to seven percent, even in such remote places as Brazil’s Amazon River basin. Outside the United States, however, diagnosis rates are much lower, for a range of reasons that include simple lack of awareness, cultural differences, and resistance to US-style “medicalization” of behavioral problems. Rates of diagnosis and treatment are now rising, in some cases dramatically, throughout the world, even as they still lag considerably behind US rates. One major factor in this trend is increasing pressure for performance in schools and on the job.


What might be causing some of the high rates in the United States?

One issue that seriously concerns us is the likelihood of over-diagnosis in some parts of the country. The danger of over-diagnosis is heightened by the fact that determining whether someone has ADHD remains a somewhat subjective process: as with all mental disorders, there is no blood test or brain scan that can decisively establish it.


“Gold-standard” clinical processes, which include taking thorough medical histories and gathering feedback from family members and teachers, can guard against over-diagnosis, but all too often the diagnosis is made in a cursory visit to a doctor.


What danger might there be of under-diagnosis?

The same quick-and-dirty evaluations that fuel over-diagnosis can also lead to missing ADHD when it truly exists. That is, the clinician who insists that he or she can detect ADHD in a brief clinical observation may overlook the fact that children and adults may act quite differently in a doctor’s office than they do at school or in the workplace. This is equally concerning, because whereas over-diagnosis may lead to over-treatment with medication, under-diagnosis means children who truly need help aren’t getting it.


I keep hearing that ADHD is a “gift.” What does that mean?

Celebrities including the rapper will.i.am and business superstars such as Jet Blue founder David Neeleman have talked about the advantages of having ADHD in terms of creativity and energy, and many ADHD advocates have championed the idea that the condition is a “gift.” We support the idea of ADHD as a kind of neuro-variability that in some contexts, and with the right support, can offer advantages. But do look this gift-horse in the mouth; ADHD can also be a serious liability, and needs to be managed throughout a lifetime. Consider the Olympic swimmer Michael Phelps, who rose to stardom only to be embarrassed by drug and alcohol problems. Longitudinal studies show that people with ADHD on average suffer significantly more problems with addiction, accidents, divorces, and academic and employment setbacks than others. ADHD is serious business.


Is ADHD really more common in boys than girls?

Just like all other childhood neurodevelopmental disorders (e.g., autism, Tourette’s, severe aggression), ADHD truly is more common in boys, at a rate of about two-and-a-half to one. But too many clinicians still don’t seem to understand that ADHD can and does exist in girls. One issue here is that girls—and women—often manifest the problem differently than boys and men. Whereas males may be more hyperactive, females may be more talkative or simply daydreamy. Although girls and women have historically been under-diagnosed, the rates are catching up in recent years, which is a good thing, given that the consequences of the disorder, when untreated, can be serious.


Can adults have ADHD?

Adult rates of ADHD are real and quickly growing. One reason is that as awareness has spread about childhood ADHD, many parents are starting to confront the reasons for their own lifelong and untreated distraction. There is debate about whether children ever “grow out of” their ADHD, or whether some merely learn how to cope so well that it is indistinguishable by adulthood. But the best estimates are that close to 10 million adults—about 4.4% of the population—are impaired to some extent by the disorder. That’s a prevalence rate of about half of the childhood rate.


Image Credit: Derivative work by Connie Ngo and released under CC BY-SA 4.0.  (1) Photo by HebiFot. Public Domain via Pixabay. (2) Image by ClkerFreeVectorImages. Public Domain via Pixabay.





Published on November 07, 2015 05:30

Shakespeare the Classicist

The traditional view of Shakespeare is that he was a natural genius who had no need of art or reading. That tradition grew from origins which should make us suspect it. Shakespeare’s contemporary Ben Jonson famously declared that Shakespeare had ‘small Latin and less Greek’. (Although what he actually wrote, ‘Though thou hadst small Latin and less Greek’, could be interpreted as a counterfactual statement—‘even if it were the case that you had’—rather than a simple statement of truth.) John Milton called Shakespeare ‘fancy’s child’, who would ‘warble his native woodnotes wild’. Both of these writers wanted to be thought of as classically learned, and both of them were effectively inventing Shakespeare as their own opposite. Neither gives simply reliable testimony about the historical Shakespeare.


Shakespeare read widely in the vernacular. Almost all of the big, fashionable books which were printed during his working career—John Florio’s translation of Montaigne’s Essays, Thomas North’s translation of Plutarch, and Raphael Holinshed’s Chronicles—are major sources for his plays. But he was also extremely well read by present-day standards in classical literature. We can be pretty sure he attended Stratford grammar school, where Latin literature was the main subject of study. At school and after he read a great deal of Ovid—who informs the gruesomeness of Titus Andronicus (1593-4), the playfulness of Venus and Adonis (1593), and the seriousness of the later plays Cymbeline (1609-10) and The Tempest (1611-12). There is good evidence that he read and learned from Latin handbooks of rhetorical theory. His works also display knowledge of several tragedies by Seneca, and at least the first half of Virgil’s Aeneid. If you compared his knowledge of Latin literature with that of a recent classics graduate today, the chances are that Shakespeare would win the contest.


Why then was Shakespeare not regarded as a learned writer by his contemporaries? There are two main reasons. The first is that he did not have a university degree. Other writers from the Elizabethan period who did have degrees—or who, like Jonson, wanted to appear as though they did—often made a great show of their learning: they might quote in Latin, or make their readers know that they were using recent editions of classical texts. They also had a significant cultural investment in representing provincial grammar school boys as unlearned. So Shakespeare has been traditionally regarded as unlearned for one simple reason: cultural snobbery.



“The Plays of William Shakespeare” by Sir John Gilbert (1817-1897). Photo by Sofi, CC BY-SA 2.0 via Flickr.

The second main reason why the extent of Shakespeare’s classical reading was not fully appreciated until the twentieth century is that he chose to display the learning that he had in very distinctive ways. Before around 1600 he could sometimes allude to classical texts onstage in a deliberately clumsy or archaic style. So in A Midsummer Night’s Dream the play of Pyramus and Thisbe is a retelling of a story from Ovid’s Metamorphoses. But instead of artfully displaying his knowledge of that classical text Shakespeare has a group of rustics enact it in a deliberately topsy-turvy low style. In Hamlet the player recites a speech about the death of Priam and the grief of Hecuba, which is based on Book 2 of Virgil’s Aeneid. This is composed in an almost excessively ‘high’ style, with antiquated diction and heavy alliteration. In those two works Shakespeare seems to be exaggerating the distance between his own dramas and the classical past, and to be underplaying his intimacy with the classics.


But Shakespeare’s classical learning also went unappreciated for so long for a further reason. He tended to learn from what he read rather than simply echoing it. This means that the traditional method of identifying ‘sources’ and ‘borrowings’ by looking for precise verbal parallels is a very unreliable means of determining which texts mattered to Shakespeare. Classical comedy, for instance, clearly influenced how Shakespeare constructed plots and how he thought about the human imagination, even if there are not many direct allusions to specific lines by Plautus and Terence in his plays. The early work The Comedy of Errors (1592-3) does draw very directly on a play about twins and confusion called the Menaechmi by Plautus. It doubles up Plautus’s set of twins in order to multiply the comic confusions, but it also complicates Plautus in other ways. The Menaechmi was principally concerned with material losses and confusions, but Shakespeare made from it a play in which people become confused about who they are and what they know. A few years later, in Twelfth Night (1599-1600), questions about the psychology of love and identity become such pronounced elements in the play that the material confusions of Plautus seem to have been left far behind—although at least one member of Shakespeare’s early audiences, John Manningham of the Middle Temple, did record in his diary that the play was ‘much like the comedy of errors, or Menaechmi in Plautus’.


In his later tragedies and comedies of love Shakespeare continued to address a series of questions which had been provoked by his reading of Plautus: ‘who am I?’, ‘what do I know?’, ‘am I part of an illusion?’. Those questions are explored in Troilus and Cressida (1601-2), and take on a tragic dimension with the delusions of Othello (1604-5). They are central to his depiction, throughout his career, of human beings as subject to illusion, imagination, and desire. And those questions Shakespeare was first prompted to ask by his reading in classical comedy.


There is, though, a curious irony here. It was Shakespeare’s ability to see beneath his source material, extract principles from it, and transform those principles, that made him a great writer. But his ability to conceal and transform his reading had a secondary consequence: it made generations of readers fail to appreciate quite how learned Shakespeare actually was.




Published on November 07, 2015 01:30

‘Death with Dignity’: is it suicide?

Five states have now legalized “Death with Dignity”: Oregon, Washington, Montana, Vermont, and most recently, the most populous state in the US, California. By 2016, when the California “End of Life Option Act” goes into effect, 1 in 10 Americans will live in jurisdictions in which it is legal for a physician to prescribe a lethal drug to a terminally ill patient who, under a variety of safeguards, requests it. Opponents label this “physician-assisted suicide”; proponents use terms like “self-deliverance,” “physician-assisted dying,” “aid-in-dying,” and “hastened death.”


But what’s the right term, really? After all, much of the political disagreement and legal wrangling over this issue is rooted in this fundamental conceptual question, is “physician-assisted suicide” really suicide?


Let’s see if we can figure it out.


Which of these cases is suicide? A heartsick teenager slits her wrists, stung by rejection from a boyfriend. A middle-aged businessman, shamed by failure in an economic downturn, hangs himself in the basement. An older man shoots himself with the gun he’s had in his home for years to protect himself and his family.


These are all suicides, we say.


Again, which of these following cases is suicide? A Buddhist monk immolates himself to protest a repressive political regime; he has trained himself for this moment for years. A Jehovah’s Witness, heeding her religion’s teaching, refuses a blood transfusion after a serious accident; she dies. A sea captain goes down with his ship. Socrates drinks the hemlock after he is convicted by the Athenian court, even though he could easily have avoided conviction and though he could easily have escaped from jail.


Are these suicides? We’re a little less certain about what to call them.


So what about these? A soldier falls on a grenade, saving his buddy. Samson pulls the temple down on himself as well as on the Philistines. The pilot of a small jet plane whose engine is failing deliberately crashes it into an open field, rather than bailing out and letting it hit the crowded schoolyard ahead; he dies, but the children survive. Jesus is nailed to the cross.


Are these suicides? No, we insist, they are absolutely not suicides, they are clearly acts of heroism, of goodness, of faith.



Well what about these, the very kinds of cases that are at issue in Death with Dignity, or, as it is also called, physician-assisted suicide? An ALS patient refuses a ventilator when the disease finally paralyses his diaphragm and he can no longer breathe on his own, and he requests medication to prevent air hunger, pain, and anxiety as he dies; these, as he knows, will make his death come sooner than otherwise. A terminal cancer patient, who lives in a state where assisted dying is legal under its Death with Dignity Act, asks her physician for a prescription for a lethal medication and, some time later when the symptoms of disease progression are more pronounced, takes it and dies.


How do these compare with the cases we’ve just been exploring? What makes something suicide?


Is it about causation? The lovesick teenager, the shamed businessman, the Buddhist monk, Samson, Socrates, the soldier who fell on the grenade—all of them caused their own deaths. We count some as tragedies, some as acts of courage, and some as religious or philosophical heroism. Just the same, all of these people took deliberate actions that resulted directly in their deaths.


Or is it about intention? Did any of these people want to die? Did the lovesick teenager really want to die, or rather to get even with the rejecting boyfriend, even if she knew she could hurt him worst by dying? What about the failing businessman—did he want to die, or was dying the only way he could see to avoid humiliation? The sea captain? He acted to honor a seafaring tradition, surely, but did he do so in order to die? The self-immolating monk? His difficult, public act isn’t about wanting to die, but wanting to see the political regime changed. The jet pilot didn’t want to die either, but he wanted to kill the children even less: hence he knowingly, voluntarily, indeed heroically acted in a way that spared them but caused his own death. Jesus? John Donne, the 17th century poet and dean of St. Paul’s, argues—contrary to the entire Christian tradition—that Christ was a suicide, albeit a quintessentially admirable one, done for the glory of God.


This brings us to the present: what about the ALS or the cancer patient? Like these others, neither wanted to die—they had been making every effort to avoid dying as they went through the medical treatment available. But now, given that they were already dying anyway, what they chose was to avoid the more difficult ways of dying that were clearly in their futures.


One noisy objection heard over and over again is that we cannot tolerate this practice: it is wrong because it is suicide.


So what should we call Death with Dignity? If you focus on the hand lifting the glass to the mouth and then swallowing the lethal medication, that is, on the mechanics of physical causation, Death with Dignity may look like suicide. If, on the other hand, you focus instead on the patient’s intention, it looks like self-preservation, self-protection from a future that could be much worse. Partisans on one side of the issue, the opponents of legalization, tend to focus on the mechanism of causation—this is what provides the rationale for calling it suicide; proponents in contrast tend to focus on the intention under which the patient acts, to protect oneself from a worse fate, to be able to preserve one’s cognitive capacities, to bring a life to a bearable close in the presence of those one loves. Seen in this way, it isn’t suicide at all.


This duality explains much of the way we label life-ending acts: when we focus on mechanism we call them suicide, but when we focus on intention we give those whose intentions we approve of much more favorable labels. The lovesick teenager? We focus on mechanism—she slit her wrists—and so label it a suicide. The failing businessman? He hanged himself, and so he too is a suicide. The Buddhist monk? We see it both ways: he lit the match, but he did it for a noble reason. Some might call him a suicide of social protest; Thich Nhat Hanh insists the self-immolating monks and nuns were not suicides at all. Was Samson a suicide? That is surely not the scriptural interpretation; we would call it self-sacrifice for a religiously important reason, and call him a champion of his people. And Jesus? To call him a suicide, as John Donne well knew, was tantamount to heresy. But it’s the same for the soldier falling on the grenade—we don’t focus on the mechanism of his death, but primarily on his reason for doing it: he counts as a hero, saving his buddy. It’s the same too for the pilot of the failing jet: even though he knowingly, deliberately pitched the nose down and banked away from the school, it is his reason for doing so, sacrificing his own life to save the children, that makes us count him a hero. To label him a suicide, which, if we were to speak in terms of mechanism, he clearly would be, would deeply offend our moral senses.


So how should we see the patient dying of ALS or cancer who takes a medication that ends his life: in terms of mechanism (swallowing the pills), or in terms of intention, as an already-dying human being making a difficult but understandable, even praiseworthy choice to spare himself and others around him? To call him a suicide would invite some to call him a sinner; if we avoid this pejorative label, we could call him courageous, self-sacrificing, even saintly instead.


The word “suicide” wasn’t invented until the middle of the 17th century. But earlier thinkers reflected on whether it could ever be acceptable to end one’s own life. Plato thought yes in the specific situations of judicial execution (like Socrates’ drinking the hemlock himself), disgrace, and the “stress of cruel and inevitable calamity,” but he thought no in cases of “sloth” (what we’d probably call depression) and “want of manliness” or cowardice. Aristotle thought no, suicide was wrong: it is an injury to the state. The Stoic philosophers thought that reflective, responsible suicide was the act of the wise man; Roman generals expected themselves to fall on their swords if they could no longer defend their causes. The early Church fathers debated whether a virgin might kill herself to avoid rape or a Christian witness to avoid apostasy; some held yes, some argued no. Augustine and Thomas Aquinas solidified the Church’s position against suicide, arguing that it was a sin worse than any sin that could be avoided by it. David Hume argued in defense of suicide; Immanuel Kant argued against it, though he acknowledged the possible exception of Cato. Nietzsche insisted that the sick man was a “parasite” of society and ought to “die at the right time.”


In the West, the discussion of the ethics of suicide effectively ended with Durkheim and Freud, who argued respectively that suicide was the product of social organization or of psychopathology, and so not an ethical issue at all. That silenced things for a while. Now, however, we are faced with a practical question of labeling, as the ethical issues erupt again in the tension over the role we may play in our own dying, and what we should call it when we play an active role in how our deaths go.


Death with Dignity? Aid-in-dying? Is it suicide? What we call it makes an enormous political difference in the debates about the legal change. Terminally ill people would not have survived very long before the advent of modern medicine; we cannot know what the great earlier thinkers would have said about cases like these and we cannot appeal to the long history of our cultures: they are too varied in their views to give us a unique answer. This is a contemporary problem: it is the very existence of modern medicine that gives rise to the dilemma of whether to endure the very, very long downhill slope that may lead to what can be a difficult death, or take earlier steps to bring our lives to an easier end. This is a personal and societal choice to be taken reflectively, not decided in advance on the basis of slanted language.


As in the debates about abortion, with “little tiny babies” on one side and “fetal tissue” on the other, slanted language often hijacks our thinking. The next, most important step in resolving these controversies is to move beyond the issue of whether physician aid-in-dying is suicide and think instead about intentions, about choices, about what range of options we want, what roles we want to be able to play in our own eventual deaths. We know how somebody else’s political or religious beliefs can hijack our options; we need to recognize that somebody else’s language can hijack our options too. If we call it by the most neutral term, “physician-assisted dying,” we can reduce much of the tension over matters so important to our own personal futures.


Featured image credit: American Army Cemetery. CC0 Public domain via Pixabay.




Published on November 07, 2015 00:30

November 6, 2015

Game on – Episode 28 – The Oxford Comment

Listen closely and you’ll hear the squeak of sneakers on AstroTurf, the crack of a batter’s first hit, and the shrill sound of whistles signaling Game on!  Yes, it’s that time of year again. As fall deepens, painted faces and packed stadiums abound, with sports aficionados all over the country (and world) preparing for a spectacle that is more than just entertainment. Which leads us to the following questions: What is the place of sports in our modern lives? And how should we understand it as part of our history?


In this month’s episode, Sara Levine, Multimedia Producer for Oxford University Press, sat down to discuss the evolution of our favorite pastimes with Chuck Fountain, author of The Betrayal: The 1919 World Series and the Birth of Modern Baseball; Julie Des Jardins, author of Walter Camp: Football and the Modern Man; Dr. Munro Cullum, a Clinical Neuropsychologist who specializes in the assessment of cognitive disorders; and Paul Rouse, author of Sport and Ireland: A History.



Image Credit: “Baseball” by Anne Ruthmann. CC BY NC 2.0 via Flickr.






Published on November 06, 2015 05:30

Charles West and Florence Nightingale: Children’s healthcare in context

At the dawn of the children’s hospital movement in Europe and the West, best epitomised by the opening of London’s Great Ormond Street Hospital for Sick Children (GOSH) on 14 February 1852, the situation of sick children was precarious at all levels of society.


After a long campaign by Dr Charles West, Great Ormond Street Hospital became the first establishment in England to provide in-patient beds specifically for children. In a rather telling sign of attitudes at the time, the hospital originally opened with just ten beds. Despite this, it soon grew into one of the world’s leading children’s hospitals, helped by the patronage of several wealthy and high-profile individuals, most notably Queen Victoria and the famous author Charles Dickens, a personal friend of Dr West.


In 1854, after three years of activity, West (who was also the first physician of the hospital) could describe its initial success. He was also quick to point out the major shortcomings plaguing children’s health care:


The poor now flock to it [Great Ormond Street], sick children from all parts of London are brought to it. […] But still the want of funds limits the numbers who are received into it, and only thirty beds can be kept open for in-patients. Thirty beds! When more than 21,000 children die every year in this metropolis under ten years of age; and when this mortality falls thrice as heavily on the poor as on the rich! But alas, the tables of mortality do not tell the whole of the sad tale. It is not only because so many children die, that this Hospital was founded; but because so many are sick; because they languish in their homes; a burden to their parents who have no leisure to tend them, no means to minister to their wants.



‘Florence Nightingale from Carte de Visite, circa 1850s’, by H. Lenthall, London. Public Domain via Wikimedia Commons.

In the mid-nineteenth century, nurses had little access to textbooks and journal articles, and what little was available was written solely by medical doctors. The first English paediatric textbook for nurses, How to Nurse Sick Children (1854), was also written by West. Like the hospital, it was a great success and brought him considerable fame. Ironically, though, later in the decade West corresponded with Florence Nightingale – asking for advice on how to nurse sick children! This was particularly ironic because Nightingale had no experience of nursing sick children. Despite this, in her famous Notes on Nursing, published in 1859, she stated that “children; they are affected by the same things (as adults) but much more quickly and seriously.”


Given the aforementioned lack of nursing texts for children, Nightingale’s text became a cornerstone of the curriculum at nursing schools across the country. It also sold well amongst the general reading public (West’s “parents with no leisure”), as it was written specifically for the education of those nursing at home – and is still considered a classic introduction to nursing.


In the introduction to the 1974 edition, Joan Quixley of the Nightingale School of Nursing wrote:


The book was the first of its kind ever to be written. It appeared at a time when […] hospitals were riddled with infection, when nurses were still mainly regarded as ignorant, uneducated persons. The book has, inevitably, its place in the history of nursing, for it was written by the founder of modern nursing.


Today, we all know that the assessment of the deteriorating sick child is difficult. Children’s nurses rely on clinical instruments such as PEWS (Paediatric Early Warning Scores) to measure acuity, and on SBAR (a structured method for communicating critical information that requires immediate attention). But they also need access to quick, reliable, evidence-based protocols to provide optimum care delivery. The Care Quality Commission (the health care regulator for England) expects that care delivery will be safe, effective, caring, responsive, well-led, and, importantly, predicated on best evidence. Even at Great Ormond Street today, the home of paediatric care, there can still be problems. Systemic issues still get raised, including incident reports for things such as notes unavailable for clinic, children not booked in in a timely way, waiting times, and lack of follow-ups. Ensuring these problems are not repeated necessitates attention to those who have gone before us.


Whilst it would be impossible to foresee the shape or configuration of children’s nursing in the future, we recognise that the frontiers of health care for children are changing at a phenomenal rate. Building on the precedents set by pioneers such as Charles West and Florence Nightingale, the field is continuing to progress rapidly. We still face significant challenges, however. Issues such as the prevalence of childhood obesity, the onset of type-2 diabetes, the increase in mental ill-health during childhood, and the re-emergence of diseases such as rickets all demand that we as children’s nurses be resilient and adaptive to children’s health needs.


It is for reasons such as these, that it is so important to put clinical care into context – helping us to understand the origins of the field, and thus provide child policy recommendations for the future.


Featured image credit: ‘Great Ormond Street, Hospital for Sick Children, 1949’, from geograph.org.uk, by Ben Brooksbank. CC BY-SA 2.0, via Wikimedia Commons.




Published on November 06, 2015 04:30
