Oxford University Press's Blog
May 17, 2013
American psychiatry is morally challenged
The fundamental problem with American psychiatry is American psychiatrists. It seems every few months there’s fresh news about some well-known academic psychiatrist paid boatloads to endorse a new treatment that doesn’t work or, worse, causes harm. Among the 394 US physicians who received over $100,000 from the pharmaceutical industry in 2010, 116 were psychiatrists, well out of proportion to the percentage of psychiatrists in medical practice. The American Psychiatric Association is also heavily supported by the drug industry. Its annual meetings, once efforts to educate members, are now basically week-long infomercials for Big Pharma. This influence has seeped into clinical trials as well, where study design is carefully manipulated by industry representatives to favor their new product. In turn, companies analyze their data out of view of academics, sequestering data unfavorable to their product and ghostwriting journal articles for academics.
In similar fashion, fancy devices have been introduced with claims of wondrous benefits, none of which have materialized. Light-emitting boxes, for example, were supposed to be the next great psychiatric advance, preventing winter depressions, but the evidence for this claim is still weak. Similarly, vagal nerve stimulation (an implanted electronic pacer in the chest with electrodes attached to a nerve in the neck) was supposed to relieve treatment-resistant depressions. Yet it offers no demonstrated benefit and costs the poor soul subjected to it about $20,000 out of pocket. Transcranial magnetic stimulation, a ring-shaped magnet that delivers a magnetic pulse to the head, was going to replace electroconvulsive therapy. At best it has a placebo effect. And yet these treatments continue because of their support by psychiatrists, many of whom have a vested interest in the success of the products. Integrity, it seems, is the thing in shortest supply in psychiatry these days.
Just as with the new antidepressant and antipsychotic drugs introduced over the past three decades, the idea behind these new treatments was simply to make money. In 2006, US sales of these new products alone topped $289 billion, and they continue to rise. Between 1998 and 2006, the industry spent $855 million on lobbying—more than any other industry—to keep that momentum rolling.
You can’t fault the desire to make money; it’s the American way. But when treatments are equated to widgets, profits will always trump concerns of efficacy and safety. Can you think of an industry in which that has not been the case? Sadly, this was not always the situation with psychiatry. The early psychiatric drugs were developed by industry and psychopharmacologists working in concert, striving toward the production of effective and reasonably safe agents. And they succeeded. The older and less expensive antidepressants and antipsychotics are still just as good as or better than the new agents. In fact, the cost to patients drops from 18% to 6% of their medical dollar when they switch from patented to generic medications.
The new psychiatric drugs and novel treatments are frauds. The evidence that they work is weak and is often distorted to the point of fabrication. Studies show that the new antidepressants (e.g., Prozac, Paxil, and citalopram) achieve remissions at only slightly better rates than a placebo. The widely prescribed anticonvulsant valproic acid (Depakote) outpaces lithium in prescriptions as a mood stabilizer, and yet it’s not as effective. That’s because the guidelines for psychiatric drug treatments are written by academics paid out of the pocket of Big Pharma. These guidelines are required reading in residency training and dictate the diagnostic and treatment decision-making of most psychiatrists, but really they’re just cookbooks, following the bottom line, not the data. The most recent version of the DSM, for example, was drafted by academics, many of whom continue to receive substantial financial support from the industry. This clear conflict of interest in part accounts for why the thresholds for illnesses in the manual continue to get lower and lower: if more people are “ill,” it justifies the prescription of more psychotropic medication, thus perpetuating the whole corrupt cycle.
Over the past half-dozen years, academic psychiatry has started to wean itself from the pharmaceutical milk-cow. Drug “reps” are restricted at most medical centers now, and direct payments to departmental activities are increasingly limited. These are good first steps, but financial support to departments still occurs. Multisite clinical trials are still industry affairs. The well-known psychiatrists and experts crafting treatment guidelines and new versions of the DSM are still industry supported. Despite the financial pain that might ensue, the only solution is to end the relationship. No academic responsible for the training and mentoring of medical students and young physicians should accept any industry money. They already receive adequate financial support from their institutions. If the industry wants its products tested, unrestricted grants can be given to the institution, which can then monitor the use of the funds for a small overhead fee as is done in the case of other funding sources. No more industry-designed and analyzed research. No more hidden unfavorable data. No more industry-supported lectures. No more direct industry support of any kind. This way, even if we make mistakes, our medicine will at least have integrity.
Michael A. Taylor, MD, is the author of Hippocrates Cried: The Decline of American Psychiatry. He works as an adjunct clinical professor of psychiatry at the University of Michigan Medical School. He was founding editor of the peer-reviewed journal Cognitive and Behavioral Neurology, and served as professor, chairman, and director of the Department of Psychiatry and Behavioral Sciences at the Chicago Medical School. He established and directed the psychiatry residency-training program at the State University of New York at Stony Brook.
The OUPblog is running a series of articles on psychiatry and the DSM-5 in anticipation of its launch at the American Psychiatric Association meeting on 18 May 2013. Stay tuned for a view from Joel Paris. Read previous posts: “DSM-5 will be the last” by Edward Shorter, “The classification of mental illness” by Daniel Freeman and Jason Freeman, and “Personality disorders in DSM-5” by Donald W. Black.
Image credit: Human brain function grunge with gears. Image by Francesco Santalucia, iStockphoto.



Dust off your flags … it’s Eurovision time!
Love it or hate it, you can’t deny that the Eurovision Song Contest has a unique appeal. It is often seen as tacky, extravagant, and occasionally politically controversial, but that doesn’t stop around 125 million people around the world from watching it each year! It has helped to launch careers, as in the cases of ABBA and Bucks Fizz, as well as destroy them (cast your mind back to Jemini, aka ‘nul points’).
To celebrate the 58th contest, which takes place tomorrow night, we’ve put together a playlist of the best and worst entries in Eurovision history, as well as some interesting (and occasionally bizarre) facts about the competition.
Fun facts about Eurovision
The first Eurovision Song Contest took place in Switzerland, with only 7 countries competing.
This year’s competition takes place in Malmö, Sweden’s third largest city. Did you know that Malmö’s football team, Malmö FF, is where footballer Zlatan Ibrahimović began his professional career?
Ireland is the most successful country in the Contest, winning 7 times, 3 of which were in consecutive years (1992, 1993 and 1994).
Portugal has competed since 1964 and has yet to finish in the top 5. The highest it has placed is 6th, in 1996.
Norway’s Alexander Rybak holds the record for the most points ever scored, with 387 in 2009, closely followed by last year’s winner, Loreen from Sweden, who won with 372 points.
The maximum duration of each performance is 3 minutes.
A Eurovision song must always have vocals; purely instrumental music is not permitted.
No live animals are allowed on stage during a performance.
However, the costume options are pretty much limitless…
Annie Leyman is Marketing Executive for Music books at Oxford University Press.
Image credits: (1) Photo of ABBA. By AVRO (FTA001019454_012 from Beeld & Geluid wiki) [CC-BY-SA-3.0], via Wikimedia Commons (2) Photo of Lordi performing at ESC 2007. By Indrek Galetin (http://nagi.ee/photos/sAgApO/824612/i...) [see page for license], via Wikimedia Commons (3) Photo of Verka Serduchka performing at ESC 2007. By Indrek Galetin [see page for license], via Wikimedia Commons (4) Photo of Jedward at ESC 2011. By Frédéric de Villamil (Flickr: DSC_9298) [CC-BY-SA-2.0], via Wikimedia Commons



The Trojan War: fact or fiction?
The Trojan War may be well known thanks to movies, books, and plays around the world, but did the war that spurred so much fascination even occur? The excerpt below from The Trojan War: A Very Short Introduction helps answer some of the many questions about the infamous war Homer helped immortalize.
By Eric Cline
The story of the Trojan War has fascinated humans for centuries and has given rise to countless scholarly articles and books, extensive archaeological excavations, epic movies, television documentaries, stage plays, art and sculpture, souvenirs and collectibles. In the United States there are thirty-three states with cities or towns named Troy and ten four-year colleges and universities, besides the University of Southern California, whose sports teams are called the Trojans. Particularly captivating is the account of the Trojan Horse, the daring plan that brought the Trojan War to an end and that has also entered modern parlance by giving rise to the saying “Beware of Greeks bearing gifts” and serving as a metaphor for hackers intent on wreaking havoc by inserting a “Trojan horse” into computer systems.
But, is Homer’s story convincing? Certainly the heroes, from Achilles to Hector, are portrayed so credibly that it is easy to believe the story. But is it truly an account based on real events, and were the main characters actually real people? Would the ancient world’s equivalent of the entire nation of Greece really have gone to war over a single woman, however beautiful, and for ten long years at that? Could Agamemnon really have been a king of kings able to muster so many men for such an expedition? And, even if one believes that there once was an actual Trojan War, does that mean that the specific events, actions, and descriptions in Homer’s Iliad and Odyssey, supplemented by additional fragments and commentary in the Epic Cycle, are historically accurate and can be taken at face value? Is it plausible that what Homer describes actually took place and in the way that he says it did?
In fact, the problem in providing definitive answers to all of these questions is not that we have too little data, but that we have too much. The Greek epics, Hittite records, Luwian poetry, and archaeological remains provide evidence not of a single Trojan war but rather of multiple wars that were fought in the area that we identify as Troy and the Troad. As a result, the evidence for the Trojan War of Homer is tantalizing but equivocal. There is no single “smoking gun.”
According to the Greek literary evidence, there were at least two Trojan Wars (Heracles’ and Agamemnon’s), not simply one; in fact, there were three wars, if one counts Agamemnon’s earlier abortive attack on Teuthrania. Similarly, according to the Hittite literary evidence, there were at least four Trojan Wars, ranging from the Assuwa Rebellion in the late 15th century BCE to the overthrow of Walmu, king of Wilusa in the late 13th century BCE. And, according to the archaeological evidence, Troy/Hisarlik was destroyed twice, if not three times, between 1300 and 1000 BCE. Some of this has long been known; the rest has come to light more recently. Thus, although we cannot definitively point to a specific “Trojan War,” at least not as Homer has described it in the Iliad and the Odyssey, we have instead found several such Trojan wars and several cities at Troy, enough that we can conclude there is a historical kernel of truth — of some sort — underlying all the stories.
But would the Trojan War have been fought because of love for a woman? Could a ten-year war have been instigated by the kidnapping of a single person? The answer, of course, is yes, just as an Egypto-Hittite war in the 13th century BCE was touched off by the death of a Hittite prince and the outbreak of World War I was sparked by the assassination of Archduke Franz Ferdinand. But just as one could argue that World War I would have taken place anyway, perhaps triggered by some other event, so one can argue that the Trojan War would inevitably have taken place, with or without Helen. The presumptive kidnapping of Helen can be seen merely as an excuse to launch a pre-ordained war for control of land, trade, profit, and access to the Black Sea.
In 1964, the eminent historian Moses Finley suggested that we should move the narrative of the Trojan War from the realm of history into the realm of myth and poetry until we have more evidence. Many would argue that we now have that additional evidence, particularly in the form of the Hittite texts discussing Ahhiyawa and Wilusa and the new archaeological data from Troy. The lines between reality and fantasy might be blurred, particularly when Zeus, Hera, and other gods become involved in the war, and we might quibble about some of the details, but overall, Troy and the Trojan War are right where they should be, in northwestern Anatolia and firmly ensconced in the world of the Late Bronze Age, as we now know from archaeology and Hittite records, in addition to the Greek literary evidence from both Homer and the Epic Cycle. Moreover, the enduring themes of love, honor, war, kinship, and obligations, which so resonated with the later Greeks and then the Romans, have continued to reverberate through the ages from Aeschylus and Euripides to Virgil and thence to Chaucer, Shakespeare, and beyond, so that the story still holds broad appeal even today, more than three thousand years after the original events, or some variation thereof, took place.
Eric H. Cline is Professor of Classics and Anthropology and chair of the Department of Classical and Near Eastern Languages and Civilizations, as well as director of the Capitol Archaeological Institute at George Washington University. He is Co-Director of the ongoing excavations at Megiddo (biblical Armageddon) in Israel and the author of Biblical Archaeology: A Very Short Introduction, winner of the 2011 Biblical Archaeology Society Publication Award for the Best Popular Book on Archaeology. His recent addition to the Very Short Introductions series is The Trojan War: A Very Short Introduction.
The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with Very Short Introductions on Facebook, and look out for a new VSI post on the OUPblog every Friday!
Image Credit: The Procession of the Trojan Horse in Troy 1773. Giovanni Domenico Tiepolo. Via Web Gallery of Art. Public domain via Wikimedia Commons.



May 16, 2013
Personality disorders in DSM-5
Those of us in the mental health professions anxiously await the release of the fifth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Others may wonder what the fuss is about, and may even wonder what the DSM-5 is. In short, it is psychiatry’s diagnostic Bible. While some imbue it with the reverence given a religious tract, it is not inerrant and only reflects the collective wisdom of those entrusted with the charge of revising it. The current manual, DSM-IV, came out in 1994 with a text revision in 2000, so in some ways the march to DSM-5 has been a 19 year journey.
As a psychiatrist, I am interested in classification, but I am particularly interested in how antisocial personality disorder, or ASP, has been classified over time. Over the past 200 years, ASP has been consistently recognized as one of the most identifiable and important of the psychiatric disorders, whether called manie sans délire, moral insanity, or even psychopathic personality. These terms all describe, at their most fundamental, bad behavior unconnected to medical illness or psychosis. During the DSM-5 deliberations, I and others became concerned that the committee dedicated to discussing personality disorders (the Personality and Personality Disorders Work Group) might decide to ditch the current diagnostic criteria and replace them with a combination of new diagnostic criteria and a “dimensional,” rather than categorical, evaluation of various personality traits.
The DSM-5 deliberations, for the most part, took place quietly and behind closed doors, conducted by clinicians and researchers who devoted many hours to their work. They were tasked with considering the literature, research advances, and the needs of users and patients when recommending changes to a diagnosis. Having watched the process as an interested observer, I can say that it was — for the most part — open, transparent, and free of conflicts of interest, despite loud and strident complaints from some quarters. Yet the Personality and Personality Disorders Work Group still produced a plan deemed by many as unworkable and overly complicated. This new plan was rejected by the leadership of the American Psychiatric Association in December 2012. The Personality and Personality Disorders Work Group was the only committee involved with the DSM-5 revision process from which two members openly and publicly resigned. No other work group had its many years of work rebuked.
So what went wrong? My own belief is that the work group overreached. In response to researchers on the committee whose life’s work was to understand and test dimensional schemes for describing personality traits, the committee wed itself to developing a scheme to replace the existing criteria for personality disorders. They came up against considerable pushback. I believe they never fully grasped that psychiatrists and many other clinicians tend to think categorically (is trait ‘x’ present or not?), rather than dimensionally (how much of trait ‘x’ is present?), and are very concerned with insurance reimbursement (would an insurer pay for the care of someone with some, but not all, of these traits?). The scheme itself appeared overly time consuming to busy practitioners; instead of simply deciding on a diagnosis, they might have to rate up to 5 personality ‘domains’ and 25 trait ‘facets’. Many clinicians, too, were concerned that some of the personality disorders that are well-researched and whose criteria were known to be valid (antisocial and borderline personality disorders, for example) would be changed for no good reason. In my view, the committee members have only themselves to blame for what proved to be an embarrassing turn of events. To preserve comity, the American Psychiatric Association leadership agreed to place the new scheme in the appendix of DSM-5 so as to be available to researchers and clinicians.
So, to those who wonder what has happened with antisocial personality disorder in DSM-5: the answer is nothing. After all those hours of deliberation and discussion, the criteria set for ASP, and all the other personality disorders, in the DSM-5 is exactly the same as it was in DSM-IV.
Donald W. Black, MD, is the author of Bad Boys, Bad Men: Confronting Antisocial Personality Disorder (Sociopathy), Revised and Updated Edition. He is a Professor of Psychiatry at the University of Iowa Roy J. and Lucille A. Carver College of Medicine in Iowa City. A graduate of Stanford University and the University of Utah School of Medicine, he has received numerous awards for teaching, research, and patient care, and is listed in “Best Doctors in America.” He serves as a consultant to the Iowa Department of Corrections. He writes extensively for professional audiences and his work has been featured in television and print media worldwide. Read his previous blog posts.
The OUPblog is running a series of articles on the DSM-5 in anticipation of its launch on 18 May 2013. Stay tuned for views from Michael A. Taylor and Joel Paris. Read previous posts: “DSM-5 will be the last” by Edward Shorter and “The classification of mental illness” by Daniel Freeman and Jason Freeman.



A different approach
I recently travelled with the band Victoire for a brief residency at the music school of a large university. As well as performing a concert, we spoke to the music majors there on the topic of “alternative career paths” in classical music. By “alternative” I mean career paths other than playing in an orchestra or teaching at an academic institution. In our case, the musicians of Victoire all work predominantly in the performance and composition of contemporary classical music.
During the workshop one of the school’s composition students asked me how I approach playing the clarinet in Victoire differently from how I approach playing clarinet in Newspeak, another contemporary music ensemble I perform with and co-direct. It was a good question, and showed that the asker had done enough background research to know how much these two ensembles differ. It was the kind of question that might lead to long and interesting discussions. But it stumped me; I simply hadn’t thought about my playing in these terms before.
In some ways the question made no sense to me. All I could answer was, “I don’t.” As far as I was concerned my approach to these two projects was the same as my approach to any piece of music. I put the music on my stand, figure out the technical requirements and stylistic characteristics, and play it. Does that count as an approach? If so, I approach all music in the same way. Compare these two excerpts from Newspeak and Victoire:
[Embedded video: Newspeak performing “B & E (with aggravated assault)” by Oscar Bettison, from the album Cathedral City.]
It’s true that these two excerpts sound different from one another. Oscar Bettison’s work is louder and more aggressive (as well as being played on bass clarinet). The Victoire track (written by Missy Mazzoli) is less accented, more mellifluous. But I don’t put on vastly different hats when I perform with these two groups. Over the next few weeks the question continued to bother me. Did I have different approaches? Should I have different approaches?
Perhaps, I thought, I would have answered differently if the question had mentioned projects I’ve worked on that were stylistically further from one of these excerpts — like playing works by Matthias Spahlinger with Wet Ink or Oliver Knussen with Signal Ensemble. If I moved between more widely separated styles — like classical music or jazz or Klezmer — then perhaps I would switch “approaches” between styles (a question I look forward to discussing with colleagues). Or perhaps it would have made more sense if I had been asked if I approach playing something older like Mozart differently to the more contemporary music I usually play. In that case, having considered the piece stylistically, I would try to use Mozart-appropriate timbres, phrasing and articulations. But it’s all the same process — choosing techniques and stylistic elements that are appropriate — that I would follow for any piece. The specifics of the end result are different, but it doesn’t seem like an entirely different approach.
Most musical instruments (and I might with bias say especially the clarinet) have the potential to make an enormously wide range of sounds. This is one of the underpinnings of the explosion of modern music in the twentieth and twenty-first centuries. In the classical tradition for various reasons — acoustic and aesthetic (enough for another post) — instrumentalists have tended to stay within a smaller range of possible sounds. However, from the 1950s onwards, composers and performers, perhaps spurred on by the infinite sonic possibilities of electronic music, experimented a lot more with sounds that in the past had been rejected as incorrect — so-called extended techniques: multiphonics, air sounds, squeaks, different articulations, etc. These days as a performer it is pretty much de rigueur to learn to use and control at least some of these extended techniques.
The compositional landscape we inhabit now is, happily, stylistically diverse, with composers taking inspiration from any and all past streams of classical music, as well as from other kinds of music and from pure sound. So instead of always having exactly the same set of tones and articulations, an instrumentalist might at times use not just “extended” techniques but timbres and techniques borrowed from other kinds of music or even other instruments.
The result is that one player can be equipped with a huge range of sound possibilities. Each piece, or situation, involves the choice of a range of sounds, like colors from a paintbox: for Mozart a particular sound world; for Spahlinger another; still others for Knussen or Mazzoli or Bettison. Of course there is almost always overlap, as many of the basic sounds and techniques will be the same. So, to answer the original question: instead of “approach” I would say that each piece has a different “palette” and within that are different techniques and timbres that are achieved in various ways (which is perhaps what the question was intended to be about). The important thing is to “approach” each piece as being open to a full range of possibilities, so that a piece by Lachenmann doesn’t necessarily have to sound like one by Mozart, or Mazzoli like Bettison.
Clarinetist Eileen Mack grew up in Australia and is now based in New York. She is a member of post-minimalist band Victoire and the amplified ensemble Newspeak (which she also co-directs), and has performed with many other New York new music groups including Wet Ink, Alarm Will Sound, Signal Ensemble, the Bang on a Can All Stars and the Wordless Music Orchestra.
Oxford Music Online is the gateway offering users the ability to access and cross-search multiple music reference resources in one location. With Grove Music Online as its cornerstone, Oxford Music Online also contains The Oxford Companion to Music, The Oxford Dictionary of Music, and The Encyclopedia of Popular Music.
Image credit: Clarinet. © THEPALMER via iStockphoto.



The missing children of early modern religion
I’ve been working on the ‘lived experience’ of early modern religion: what it was actually like to be a Protestant in 16th or 17th century Britain. And I’ve become more and more convinced there’s a crucial element of the story almost completely missing from the standard accounts: children.
Read most histories of early modern religion and you could be forgiven for concluding that there were no children in this period. But we are dealing with huge numbers of people: perhaps a third of the population of early modern England was under 12. And while every adult had of course been a child at some point, large numbers of children never became adults.
The sources are very thin. Most early modern Protestants saw childhood as a period of mere depravity, needing only correction. The period’s most popular devotional work, Lewis Bayly’s The Practice of Piety, asked, “what is youth but an vntamed Beast? … Ape-like, delighting in nothing but in toyes and baubles?” But a few patterns do emerge. Saying grace at table was, almost routinely, a child’s role in a family. Children’s patterns of prayer can be glimpsed sometimes – learning prayers by rote, or making vows. And we do have occasional testimonies of children’s actual religious experience – a seven year old finding “unexpressible joys” in reading and prayer, a four year old stargazing and meditating on God’s power.

A unique image of a Protestant family at prayer, from Auckland Castle, County Durham. As usual, the children are there only as an afterthought.
But we would be stuck with these glimpses if it were not for two extraordinary accounts written in the 1630s. Richard Norwood and Elizabeth Isham had both read Augustine’s Confessions, newly translated into English, and had learned from it that it was worth paying close attention to how God had worked in their lives before their actual conversions. So Norwood described his schoolboy psalm-singing, and how, aged seven or eight, he was “taken with great admiration of some places” in the Bible. He remembered (and counted as a sin) “at several times reasoning … about whether there were a God”. Adults assured him that God loved him, but he was not sure “how they could know it was so”. And when he tried to share his enthusiasm for Scripture with his parents, “they made me little answer (so far as I remember) but seemed rather to smile at my childishness”. This made him wonder whether what the preachers taught was true, “or whether elder people did not know them to be otherwise, only they were willing that we children should be so persuaded of them, that we might follow our books the better and be kept in from play.” Norwood was that rare thing: an adult who could remember what it was really like to be a child.
Or again, the Northamptonshire gentlewoman Elizabeth Isham described how her religion took shape in counterpoint to her mother. She was taught to pray from infancy, but when she was eight years old, “I came to a fuller knowledge of thee”, through praying earnestly “to avoyde my mothers displeasure”. Her mother’s wrath was no joke: in her rages, Judith Isham had a servant hold her daughter down, the better to beat her. Elizabeth recalled that “in these dayes feareing my parents I had no other refuge but to flie unto thee”.
It was her grandmother who showed her another way. When the old lady was ill, and the nine year old Elizabeth was caring for her, she was struck by the delight her grandmother took in her devotional reading. For Elizabeth, as for so many other children before and since, books were her liberation. As her reading accelerated from her tenth year, her religion blossomed. It also brought greater peace with her mother, who took advice from a clergyman friend and developed a new way of dealing with her daughter. When she saw Elizabeth misbehave, instead of flying into a rage, she would “holde her fan afore her face”, praying for patience and judgement. This gave Elizabeth time to reflect on her error, so that as soon as the fan was lowered she would go and ask forgiveness, and would be set a penitential task, “which I performed with the more dilligence she having delt so well with mee”. We rarely come so close to a happy ending.
These are very individual stories, and that is part of the point: children are individuals, and neither happy nor unhappy families all resemble one another. But they do remind us that children take their own lives, including their religion, immensely seriously, and can be very finely attuned to managing the loving, unpredictable, condescending, inattentive and sometimes incomprehensibly punitive adult world.
They also suggest to me that there is much more to be done here. We have long learned the importance of gender to any serious historical analysis. It is time to pay attention to this equally pervasive division, and to this even more forgotten slice of humanity.
Alec Ryrie studied History and Theology at the universities of Cambridge, St Andrews, and Oxford. He is now Head of Theology and Religion and Professor of the History of Christianity at Durham University. His most recent book, Being Protestant in Reformation Britain, was published in April 2013. His previous books include The Age of Reformation (2009), The Sorcerer’s Tale (2008), The Origins of the Scottish Reformation (2006) and The Gospel and Henry VIII (2003).
Image credit: Courtesy of Alec Ryrie. Do not use without permission.



May 15, 2013
Musings on the Eurovision Song Contest
When the first Eurovision Song Contest was broadcast in 1956, the BBC was so late in entering that it missed the competition deadline, so it was first shown in my native England in 1957. Nonetheless, it seems as if this curious example of pan-European co-operation, which started with seven countries and is now up to 40, has been around forever. Certainly as the 1950s gave way to the ’60s, the contest created a degree of national fervour in Britain, and I suspect in most other parts of Europe. At its peak, it’s estimated to have drawn in around 600 million viewers worldwide.
The competition has only seldom been part of the pop mainstream, and at the time when the Beatles and Rolling Stones were becoming world famous in the 1960s, Britain entered the bland sounds of Kathy Kirby and Matt Monro instead. It took Britain’s first two wins, by Sandie Shaw in 1967 and Lulu in 1969, to bring about a convergence of pop culture and the more mainstream vocal entertainment of the contest. Meanwhile 1950s heart-throb and subsequent film star Cliff Richard was controversially beaten into second place in 1968 with “Congratulations” — a song that has stood the test of time rather better than Spain’s winning “La La La” (sung in Spanish by Massiel after the original Catalan entry by Joan Manuel Serrat was withdrawn by the Franco regime). Abba’s success with “Waterloo” in 1974 marks one of the few genuine moments when the contest reflected wider international taste. They aimed squarely at winning and did so, bringing their distinctive sound and utter professionalism to a vastly greater audience through their success in the competition. Some other acts were successfully launched on the world stage as a result of first being seen by an international audience during the finals, including early appearances by Julio Iglesias and Céline Dion.
Yet that is one of the reasons the contest is so fascinating. At a time when European monetary and political convergence is a burning question for governments, the Eurovision contest demonstrates just how varied approaches are to popular songs and entertainment across the continent, from Portugal to Azerbaijan, and from Norway to Israel. Dance moves, costumes, gestures, lyrics, and language convey insights into how other European countries go about the business of entertainment far more vividly than almost any other television spectacular. Ukrainian drag queen Verka Serduchka’s antics and lyrics upset Russia in 2007, but in 2006 Finnish heavy metal band Lordi took the world by storm in an over-the-top performance with latex masks, prosthetic beards and horns. Amazingly, they managed to convey rock and roll as a religion without alienating too many special interest groups.
Even back in the 1960s, as we crouched round the flickering image of our black and white televisions, the voting system seemed arcane. It still does. The results can sometimes be skewed by blocs of countries that vote together for, one suspects, not entirely artistic reasons. The results are announced first in French and then in English, and the underdogs who score “nul points” often become popular with the viewing audience for that very reason. Poor old Jemini gave the UK its first “nul points” in 2003, but in 1997 Portugal and Norway shared the ignominy of no votes at all, and in 1983 the same fate befell Turkey and Spain. Norway still holds the record for the greatest number of “nul points”. The term has entered the vernacular in many countries, describing a competitor who tries hard but has no hope of winning.
So now that this year’s contest is under way in Malmö, Sweden, what can we expect? The sheer number of competing countries now means two nights of semis before the final, which takes place this Saturday, 18 May 2013. The bookies are backing Denmark and Norway to triumph in this very Nordic contest, but I have a hunch that after Engelbert Humperdinck’s not entirely satisfactory entry last year, the Scandinavians will be given a run for their money by British entry Bonnie Tyler. A legend of 80s pop with her great hit “Total Eclipse of the Heart,” Tyler is a Welsh singer who has the rare distinction of also topping the charts in France. She has also had hit records in Norway, Austria, Switzerland and Germany. When it comes to tactical voting, she’s potentially got a lot of different countries on her side! At least the title of her entry is a little more modest than Cliff Richard’s from 1968: it’s called “Believe In Me”.
Alyn Shipton is the author of Nilsson: The Life of a Singer Songwriter, to be published on July 18. He is also a critic for The Times in London and presents jazz programmes on BBC Radio.



The oddest English spellings, part 20: The letter “y”
I could have spent a hundred years bemoaning English spelling, but since no one is paying attention, this would have been a wasted life. Not every language can boast of useless letters; fortunately, English is one of them. However, it is in good company, especially if viewed from a historical perspective. Such was Russian, which once overflowed with redundant letters. To a small extent, such is Modern German with its ß (Swiss German does very well without it). In the Germanic and Romance languages, x, where it has not been abolished, is a needless luxury (sex would be as appealing in the form seks, and ax ~ axe would cut as nicely if it were spelled aks). Another luxury (luksury), or rather a great nuisance, is the letter y.
In old manuscripts, i occupied very little space (the dot did not help), and its smallness, inherited from the Greek iota, became proverbial. The English continuation of the word iota, via Latin, is jot, noun (not a jot), and possibly jot, verb (to jot something down means “to write something briefly”; compare jottings). When the personal pronoun (Old Engl. ic) lost its consonant and was reduced to a single vowel, it had two options: to attach itself to the adjoining word (I said and said I would then have become isaid and saidi respectively) or make itself more visible. Little words appended to the beginning of longer ones are called proclitics. Those glued to the end are known as enclitics. Medieval Frisian and Dutch are full of “clitical” forms (which makes texts in those languages sometimes hard to decipher), but English scribes chose another way: they capitalized the midget, and that is the reason for the modern spelling of I. Foreigners often wonder why the English aggrandized themselves by capitalizing the first person pronoun. The opposite is true. They were afraid of disappearing in texts and elevated the status of the letter of the alphabet, not of their personality.
For the same purposes of visibility, at the end of words scribes replaced i with y; hence any, busy, and their likes (in “pet forms,” y sometimes varies with ie: Johnny ~ Johnnie). Every new rule produces complications. Once you decide that y is a substitute for i in word final position, you have to learn how this position can be recognized. It looks like a trivial task, but appearances should not be trusted. Dry ends in y, which is fine (that is, we take the traditional spelling for granted). Nor do the comparative and the superlative drier, driest raise objections: the dangerous letter (i) is now in the middle. But we spell dryly with two y’s! To understand the rationale for this spelling, one has to distinguish inflectional suffixes (such as -er) from word-forming ones (such as -ly: dryly is a word different from dry, while drier is a form of dry). There is the noun dries “drought,” which coexists with its homophone drys “prohibitionists” or “dry places” (plurals). Drys looks unfamiliar and ugly, but it is correct. If someone decided to add the suffix -ism to bully, the result would be bullyism, not bulliism. Likewise, bullyrag is not bullirag. Dries “drought” is wrong.

Wyverns have no wives. Why don’t they?
A few words have y in the middle for all kinds of arcane reasons. Such are dye, rye, and lye. The Old English for rye was ryge (pronounced approximately rüye). Its spelling does not seem to have changed much since the days of King Alfred. Dye is a different case. In many languages, non-identical spellings are used to differentiate homophones in writing. In English, dye has the letter y to distinguish it from die. Seeing that dye and die can hardly be confused, this measure is a waste. But you never know. Perhaps the owner of some failing hair salon decides to ruin the reputation of the competitor and to this end disfigure the wall of the more successful establishment with the graffiti “Never say dye!” To this ruffian the redundant letter will come in handy. (No doubt, I was not the first to perpetrate this feeble pun. People devoid of the sense of humor always use the verb perpetrate in this context and call all puns feeble.) The same principle that explains the difference between die and dye has been used in flier ~ flyer. I discovered the existence of the program called frequent flyer (and I still remember when it started) from a flier distributed to the passengers. I assume that lye is spelled with a y to prevent its confusion with lie. If so, we witness another exercise in futility because lie (“tell falsehoods”) and lie (as opposed to sit and stand) are still spelled alike. Shakespeare puns, and puns very cleverly (that is, not feebly), on the two verbs in a bitter sonnet addressed to the Dark Lady.
Then there is goodbye, with its incongruous ye. And while I am dealing with by, I may mention that by- or byelaws have nothing to do with the preposition or adverb by (this is a well-known fact, but it may be new to someone). Bylaw, in one of its meanings, goes back to the concept of a local law (from the Old Scandinavian word for “place of residence”). It is the same by as in Crosby (cross + by), Whitby (“white settlement”), and so forth. Dickens chose to spell the name of his character Dombey, but it is still Dom-by. Even Frisbee traces to Frisby, originally “a Frisian town.”
Most learned words with y in the middle are of Greek origin. Regrettably, English has never shaken off its classical heritage in spelling. Cycle, cypress, cyst, dynasty, etymology, lyre, myopia, nymph, syllable, style, and many others — not necessarily bookish nouns, adjectives, and verbs — bear witness to this pedantry (a list of my-words is especially sizable: myth, mystery, etc.). There is still some controversy surrounding the coining of the name nylon, but in any case, the word is not Greek. Why do we spell distemper but dyslexia? An etymological reason for that exists: two prefixes are indeed involved here, but modern English-speakers hardly sense the difference between them. Dystopia is the opposite of utopia, and displace is the opposite of place. The necessity to learn the written image of every new word beginning with dis- in pronunciation will turn the sweetest individual into a disgruntled customer or cause dyspepsia. Are you sure it is disharmony but dysfunction? Look them up or search for them. However, the process of writing need not become a game of constant riddle solving. If I were king, with due apologies to the Wylds, Wyldes, Smyths and Smythes, I would abolish the letters x and y, except in their family names, and let lynx and Styx become homographs of links and sticks! Who will be stymied by my desire to make life easier? (Stymie is a late word of unknown origin.) My plan has little practical value, for the chance of my achieving the status of an absolute monarch, an enlightened despot, a benign (benevolent?) dictator, let alone king is remote. However, the die is not cast.
Those who enjoy reading dictionaries will discover gyves, lychgate, lykewake, wych-elm (along with wych-hazel), and many other nice-looking words. They will wonder why tryst, which is probably related to trust, is not trist. They will get entangled among tireless tyros (or tiros: a Latin word for “novice, recruit” of unknown origin: military slang, something like rookie?), British tyres, and American tires (from attire?). It remains to say that y is the first letter of numerous words, yes and (New) York among them. It allows dogs to yap and yuppies to flourish.
Anatoly Liberman is the author of Word Origins…And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears on the OUPblog each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.”
Image credit: Ouroboros by Lucas Jennis. An etching of a wyvern eating its own tail. Public domain via Wikimedia Commons.



The classification of mental illness
According to the UK Centre for Economic Performance, mental illness accounts for nearly half of all ill health in the under-65s. But this raises the question: what is mental illness? How can we judge whether our thoughts and feelings are healthy or harmful? What criteria should we use?
This month sees the publication of the latest version of the psychiatrist’s bible: the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM is arguably the definitive reference work on mental illness, used by health services worldwide (though the World Health Organisation’s International Classification of Diseases and Related Health Problems is widely used in the UK). Sales of the previous edition, DSM-IV, are estimated at about a million copies — not bad for a book that runs to almost 1000 densely packed pages and retails for around £80.
What’s changed in DSM-5 — apart from the move from Roman to Arabic numerals in the title? Well, terms have been revised (“mental retardation” has become “intellectual disability”, for example). New disorders have been introduced. For instance, “premenstrual dysphoric disorder” has been added to the list of depressive disorders. And, perhaps most controversially, some professionals have worried that the threshold for diagnosis of certain disorders appears to have been lowered — meaning that more people may be classified as mentally ill. Indeed there is organised opposition to the new edition, exemplified by the International DSM-5 Response Committee.
The DSM’s basic approach, on the other hand, has remained consistent for more than 30 years: a painstaking enumeration of symptoms, designed to make the clinician’s task of diagnosis easier and more consistent. This is an objective that it has undoubtedly achieved. But are those diagnoses scientifically valid?
Take clinical depression, for example. Nine possible symptoms are listed in DSM-IV, and you’d need to report at least five of them to warrant a diagnosis. These symptoms must be sufficiently intense to really interfere with a person’s life and they must have lasted for a while.
One effect of this approach is to emphasize the severe end of a spectrum that also includes relatively mild psychological problems. So the DSM criteria won’t capture everyday fluctuations in mental health. And they won’t pick up people with, say, four symptoms rather than five.
Implicit here is a debate about the nature of mental illness. The DSM uses a medical model of psychiatric illness. It thinks in terms of separate, discrete disorders, just like physical medicine. The approach is binary: either you meet the criteria for a particular condition, or you don’t.
Many would argue that this kind of all-or-nothing attitude, with hundreds of separate conditions, doesn’t fit well with people’s real-life experience of psychological problems. Better instead to think of psychological experience as being dimensional — that is, encompassing a wide variety of experiences, from the unproblematic to the severely distressing. The further along that dimension, the more symptoms a person is likely to have and the more upsetting and disruptive those symptoms will be.
This is the psychological model of mental illness. It argues that there’s no binary opposition between disorder and ‘normality’. Psychological disorders are simply the extreme manifestation of traits that we all possess to varying degrees. For example, almost everyone experiences occasional feelings of anxiety. People who develop what the DSM classes as an anxiety disorder aren’t experiencing something qualitatively different. They’re simply undergoing a more intense version of the same thing.
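The difference between the two models is easy to see in miniature. Below is a minimal sketch in Python; the five-of-nine threshold echoes the DSM-IV depression example above, while the symptom labels and the 0–3 severity ratings are invented purely for illustration.

```python
# Toy contrast between the categorical (medical) and dimensional (psychological)
# models of diagnosis discussed above. Symptom labels and severity ratings are
# invented for illustration only.

SYMPTOMS = [
    "low mood", "loss of interest", "weight change", "sleep disturbance",
    "agitation", "fatigue", "worthlessness", "poor concentration",
    "thoughts of death",
]

def categorical_diagnosis(reported, threshold=5):
    """Medical model: a binary yes/no based on counting reported symptoms."""
    return len(set(reported) & set(SYMPTOMS)) >= threshold

def dimensional_score(severities):
    """Psychological model: a position on a continuum from 0 (none) to 1 (severe)."""
    max_total = 3 * len(SYMPTOMS)  # each symptom rated 0-3
    return sum(severities.get(s, 0) for s in SYMPTOMS) / max_total

# Four marked symptoms fall below the categorical threshold,
# yet sit well along the dimensional continuum.
reported = ["low mood", "fatigue", "sleep disturbance", "poor concentration"]
print(categorical_diagnosis(reported))                        # False
print(round(dimensional_score({s: 3 for s in reported}), 2))  # 0.44
```

The point is not the arithmetic but the shape of the output: the first function can only say yes or no, while the second places everyone, from the untroubled to the severely distressed, somewhere on the same scale.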
There is a third approach to understanding mental illness: the sociological model. Proponents argue that psychological disorders aren’t illnesses at all. They’re a label used to stigmatize and control behaviour society deems objectionable — such as homosexuality, which featured in the DSM until 1980.
Our view is that psychological problems aren’t illusory. They are real expressions of distress, for which most people — understandably — want help. However there is variability in the validity of individual diagnoses. Therefore it is often wisest not to focus on particular diagnoses. Better instead to adopt a dimensional approach, and to concentrate on the key problems and day-to-day symptoms that lead people to seek assistance. To help us understand these problems, we can look at epidemiological information to see which experiences occur together, and therefore may share common causes. Psychologists call this a data-driven approach.
We can also be guided by our knowledge of how the brain works. For example, basic emotions such as fear or unhappiness are powered by relatively distinct circuits in the brain. So we can understand certain psychological problems as what follows when these emotional circuits don’t function properly. We can match up the emotion and the problem: sadness and depression, fear and anxiety disorders, for example. This is what we might call a theory-driven approach, though given the complexity of brain activity it may — at least at present — be a little optimistic.
Importantly, even such a psychological, evidence-based approach doesn’t get around the need to classify problems. Mental health professionals must still make decisions about how to label the problems people describe to them. Without some kind of classificatory system, we can’t communicate about, research, or evaluate treatments.
But the problems inherent in the current systems arguably constitute the greatest obstacle to that work. Given the extent of the burden on society and individuals alike, improving the scientific understanding of psychological disorders remains a priority. And that means DSM-5 certainly won’t be the last word on the classification of mental illness.
Daniel Freeman is a Professor of Clinical Psychology in the Psychiatry Department at the University of Oxford. Jason Freeman is a writer and editor. Their latest book is The Stressed Sex: Uncovering the Truth about Men, Women, and Mental Health (Oxford University Press).
The OUPblog is running a series of articles on the DSM-5 in anticipation of its launch on 18 May 2013. Stay tuned for views from Donald W. Black, Michael A. Taylor, and Joel Paris. Read yesterday’s post “DSM-5 will be the last” by Edward Shorter.
Image credit: Thinker, created by Auguste Rodin at the end of the 19th century. San Francisco Legion of Honor. © Rafael Ramirez Lee via iStockphoto.



An Oxford Companion to surviving a zombie apocalypse
Sons are eating their mothers’ brains. Brothers are eating each other’s brains, and the baby is eating the brain of the pet cat. It has finally happened. The zombie apocalypse is here. It’s time to put your survival instinct to the test. Tie your hair back, do some stretches, pick up your bloody machete, and join us as we go over the front-line into zombie-occupied territory, armed only with some of Oxford University Press’s finest online products and a ferocious temper. As May is International Zombie Awareness Month, I offer my bloodied hand to guide you through the five things you need to know to survive a zombie apocalypse. Are you ready? Let’s go!
1. Know your enemy
The term ‘zombie’ has seeped into our lexicon and bled into multiple areas of popular culture. For example, a ‘zombie’ can refer to a drink — a cocktail consisting of several kinds of rum, liqueur, and fruit juice. Alternatively, it could refer to a computer controlled by another person without the owner’s knowledge, or a ‘zombie’ could be a pejorative term for a Canadian soldier conscripted during the Second World War for service in Canada. However, the original meaning of the term ‘zombie’ came from nineteenth century West Africa and means “a corpse said to be revived by witchcraft, especially in certain African and Caribbean religions.” This is the entity that you have to fear in order to survive the zombie apocalypse.
According to The Oxford Companion to Consciousness, a zombie is “the living dead, a living creature indistinguishable in its physical constitution and in terms of its outward appearance and behaviour from a normal human being, but in whom the light of consciousness was completely absent.”
Therefore, in order to distinguish between the living and the living dead, you need to be able to spot the sentient from the senseless. Use Oxford Dictionaries to identify symptoms: those with a shuffling, lumbering, Neanderthal gait, faintly lugubrious facial expressions, and a habit of letting out guttural roars are most likely zombies. Also, if they appear soulless and are hell-bent on devouring your brain, it’s best to run as fast as you can…
2. Prepare your cardio
You now know what these harbingers of death look like, but how can you get away from them if you can’t run? So long as you stay fit and exercise as much as you can during the zombie apocalypse, you will have a head start on the creatures known as the walking dead. Actually, the clue is in the name. They’re called the walking dead for a reason. They can’t jog and they certainly can’t sprint, so provided you stretch before you attempt to replicate Mo Farah or Paula Radcliffe, you should be able to outrun these brain-thirsty zombies.
However, as Chris Cooper explains in Run, Swim, Throw, Cheat, there are other, less honest ways of improving your running ability. It may be unnatural, and cause you to exceed the normal limits of human endurance, but performance-enhancing drugs may help you outrun your supernatural enemy. Still, you’ll need more than running shoes to keep you safe…
3. Plan your resources
It may have sounded foolish to your neighbours but who’s laughing at your ‘Zombie Apocalypse Emergency Supplies’ now? Certainly not Martin, your overly friendly neighbour: he’s a re-animated zombie and desperately trying to devour Marjorie, the cat-lady next door. Failure to prepare is not an option. Using The Oxford Companion to Food as your guide, you’ve established what foods are the longest-lasting. Now equipped with a lifetime supply of canned meats, you barricade yourself in a DIY fort comprised of SPAM and canned tuna. Fun fact about SPAM: George A. Hormel, the inventor of tinned pork and the reason for its introduction to the food market in 1937, described the shelf-life of SPAM as ‘indefinite’. As you regard the desiccated daemons closing in around you, this might be the only time in your life you would trade places with a can of SPAM for its ‘indefinite’ shelf-life.
If the zombie attack becomes too much for you and all you want to do is sit in a corner, weeping silently and trembling with fear, then perhaps The Oxford Companion to Beer could help you through the dark times.
4. Pick your Weaponry
Don’t deny it; you’ve seen the films. The only way to kill a zombie is to remove the head or destroy the brain. It’s a lesson as old as time (it isn’t). If you’re thinking of a machine gun or a shotgun right now then you’re lucky to still be alive. Not only would the noise ring out like a dinner bell to the zombies, but ammunition would quickly run out and you’d be left with no means of self-protection. Your best bet is a machete, or anything that you can wield around. Reading the section entitled ‘Hand-to-Hand Weapons’ in The Oxford Encyclopedia of Medieval Warfare and Military Technology is an excellent way to understand how to build your arsenal. I’m not sure if you can buy a samurai sword in your local newsagents, but it would be worth a try.
5. Write about your experience
‘Combat Gnosticism’ was a term coined by the First World War academic James Campbell, who argued that ‘legitimate war literature’ is literature produced exclusively out of combat experience; that soldiers have a kind of ‘gnosis’, a secret knowledge that makes writers such as Wilfred Owen, Robert Graves, and Siegfried Sassoon the exemplars of First World War literature. You, yes YOU, could be the Wilfred Owen of the zombie apocalypse. All you need is a working laptop and you could become the voice of a generation of half-dead souls, documenting your experiences on the front-line. If your very own ‘Combat Gnosticism’ isn’t inspiration enough, Timothy Kendall’s Poetry of the First World War is due to be published in October 2013. Let’s just hope the zombies don’t attack until then!
Congratulations brave soldier, you’ve done it! Fearlessly fighting your ferocious foe, you’ve stumbled out of the zombie apocalypse with all your limbs attached. We look forward to guiding you through the next ‘Zomb-pocalypse’!
Daniel Parker is a Publicity Assistant for Oxford University Press and fully prepared to fight off those seeking to eat his brains. You can find more about the Oxford resources mentioned in this article in Oxford Reference, Oxford Index, ODNB, Who’s Who, and Oxford Dictionaries.
Image credits: (1) Haiti Zombie. Work of art by Jean-noël Lafargue. Free Art License via Wikimedia Commons (2) Blue Eyed Zombie. Photo by Josh Jensen. Creative Commons license via Wikimedia Commons



