Oxford University Press's Blog

July 30, 2013

The Treaty of Box Elder

By Colin G. Calloway




The thirtieth of July 2013 marks the 150th anniversary of the Treaty of Box Elder between the United States and the Northwestern Shoshones. At first glance it’s not much of a treaty, just five short articles. Unlike the treaty that a month before defrauded the Nez Perces of 90% of their land, or the Treaty of Fort Laramie with the Sioux in 1868, which the United States broke to steal the Black Hills, the Box Elder treaty gets little attention. Most historians who have written about treaties, myself included, do not even mention it. But like all treaties, it had its own story and mattered immensely to the Indian people involved. Like many treaties its terms were still being debated, and its repercussions felt, generations later.


The Shoshones had assisted Lewis and Clark in 1805 and their friendship with the United States continued through the era of the fur trade. But increasing traffic along the Oregon and California trails and the influx of Mormon emigrants into Utah Territory in the 1850s destroyed game, deprived the Northwestern Shoshones of land, and disrupted their seasonal subsistence. Hungry Shoshones resorted to stealing, violence escalated, and in January 1863 General Patrick E. Connor and a force of 700 volunteers from California—where exterminating Indians had become routine business—massacred more than 200, and perhaps as many as 400, Shoshone people at Bear River in Idaho.


In the wake of the massacre, Governor James Doty of Utah Territory made peace treaties with the Shoshones. The Treaty of Fort Bridger with Chief Washakie and the Eastern Shoshone on 2 July 1863 stipulated that emigrant roads would be made safe and that the Shoshones agreed to let the government establish posts, telegraph, overland stage routes, and railroad rights of way through their territory. At Box Elder (present-day Brigham City) on 30 July, Chief Pocatello and ten bands of Northwestern Shoshones agreed to the Fort Bridger treaty and their tribal boundaries were briefly defined (“on the west by Raft River and on the east by the Porteneuf Mountains”). The final article of the treaty declared: “Nothing herein contained shall be construed or taken to admit any other or greater title or interest in the lands embraced within the territories described in said treaty in said tribes or bands of Indians than existed in them upon the acquisition of said territories from Mexico by the laws thereof.” In October Doty made treaties with the Western Shoshones at Ruby Valley, with the Gosiutes, and with the mixed bands of Shoshones and Bannocks. He then delineated Shoshone territory on a map.


Chief Washakie of the Shoshone

Washakie, a Shoshone Chief who signed the Treaty of Fort Bridger, a precursor to the Treaty of Box Elder. From War Department (1789-09/18/1947). Photographer unknown, ca. 1881-ca. 1885. U.S. National Archives and Records Administration. Public domain via Wikimedia Commons.


The Treaty of Box Elder provided provisions and goods for immediate relief, “the said bands having been reduced by the war to a state of utter destitution,” and promised an annuity of $5,000 to provide clothing and rations. But the United States failed to honor the commitment and the Northwestern Shoshones were steadily edged off their lands.


In the 1920s, with the assistance of three white attorneys (one of whom, Charles J. Kappler, compiled the collection of federal Indian treaties still used by scholars as a basic reference), the Northwestern Shoshones brought a lawsuit against the United States government, claiming $15 million in damages for the unlawful taking of some 15 million acres of lands they held by aboriginal title, which they contended was recognized in the Box Elder treaty. The case began in the Court of Claims and made its way to the Supreme Court (Northwestern Shoshone v. United States, 324 U.S. 335). There it generated a much longer document than the brief treaty that gave rise to it.


“Asked to go back over three quarters of a century to spell out the meaning of a most ambiguous writing made in 1863,” the Court addressed federal Indian policy, indigenous land rights, and the history of dispossession. Since Johnson v. McIntosh in 1823, the Court explained, the United States had invoked the so-called “doctrine of discovery” to extinguish Indian land title “without any admitted legal responsibility in the sovereign [i.e. the U. S.] to compensate the Indian for his loss.” So the question was not whether the Northwestern Shoshones had rights of aboriginal occupancy to their territory (aboriginal title was not compensable), but whether the Box Elder treaty recognized their title to the land (treaty title was compensable).


In a 5-4 split decision, the Court ruled that Box Elder was a treaty of friendship only and did not constitute recognition by the federal government of any right, title, or interest in the territorial claims of the tribe. Since any rights the Shoshones had to the lands did not grow out of the treaty, no recovery was possible.


Justice Hugo Black (later famous for his admonition that the United States uphold Indian treaties because “great nations, like great men, should keep their word”) presented a frank assessment of the history of dispossession of Indian peoples, who were no match for the more aggressive and vicious whites. The whites were educated and shrewd; the Indians at the treaty were not literate, had no concept of individual land title, and no sense of property. “Here we are asked to attribute legal meanings to subscribers of a written instrument who had no written language of their own,” Black said. “Acquisitiveness, which develops a law of real property, is an accomplishment only of the ‘civilized.’”


Dissenting justices stressed the moral obligations inherent in treaties and argued that the very act of asking permission to cross Indians’ land implied recognition of their title to that land. Justice Douglas reminded the Court of the canon of construction (established in Jones v. Meehan, 175 U.S. 1, 11, 20 S.Ct. 1, 5, 44 L. Ed. 49) that treaties should be construed as understood by the Indians. “When that standard is not observed, what the Indians did not lose to the railroads and to the land companies they lose in the fine web of legal niceties.” The United States had not taken the Northwestern Shoshones’ land by force or purchase but instead obtained rights in those lands “for a meager consideration.” As a result, the Indians had been reduced to destitution. Under the circumstances, Douglas concluded, “to attempt to deny petitioners’ title is unworthy of our country. The faith of this nation having been pledged in the treaties, the honor of the nation demands, and the jurisdictional act requires, that these long unsettled grievances be settled in this court in simple justice to a downtrodden people.”


They were not. However, just a year later Congress established the Indian Claims Commission, and the Shoshones pressed their claims for compensation there. The claims of the various bands became consolidated and in 1968, the Commission entered a final judgment in the amount of $15,700,000, of which $1,375,000 was awarded to the Northwestern Band of Shoshones (minus deductions for attorneys’ fees and other expenses). In 1972, the final amount was distributed on a per-capita basis to 221 tribal members. In 1987, the Northwestern Shoshones won federal recognition as an Indian tribe.


Box Elder was a small treaty with a big story. Blow the dust off virtually any of the almost four hundred Indian treaties made by the United States, and you find a case study in federal-Indian relations and colonialism. Look beyond the formal language and the listed terms, and you find human stories of tragedy and endurance. Follow some of them to the Supreme Court and you find the judges who represent the conscience of the most powerful nation on earth confronting the often shameful record of its dealings with indigenous peoples and struggling to extend—or deny—a measure of justice to their descendants.


Colin G. Calloway is Professor of Native American Studies and John Kimball Jr. Professor of History at Dartmouth College. He has most recently published Pen and Ink Witchcraft: Treaties and Treaty Making in American Indian History with Oxford in May 2013. His other books include One Vast Winter Count: The American West before Lewis and Clark, for which he won the Merle Curti Award and the Ray Allen Billington Prize, The Shawnees and the War for America, The Scratch of a Pen: 1763 and the Transformation of North America, and New Worlds for All. He recently won the 2011 American Indian History Lifetime Achievement Award.



Anthems of Africa

By Simon Riker




I would love to visit Africa someday. I think it would settle a lot of curiosity I have about the world. For now, my most informed experience regarding the place is a seminar I took this past semester, called “Sacred and Secular African American Musics”. It wasn’t in an ivory tower (actually, it was a basement rehearsal hall) but it’s fair to say that my current views are products of the limited—yes, even detached—experience of academia. Still, what I learned there led me to form a perspective on music and history which is ultimately sensitive to violence, yet focused on beauty.



Map of Africa, scanned from Hammond’s Atlas of the Modern World, 1917. Public domain via Wikimedia Commons


I’m writing this piece now because August was the month in 1960 when eight African nations were able to gain their independence from European powers, and I wanted to share what I know about the place of music—particularly national anthems—in this historical narrative. Celebrating these victories of the past century means grappling with great injustices, but, in my opinion, whatever strange or uncomfortable realities history may have left for us, there’s always solace in the continued march of progress.


Right now, I want you to take a moment, pause your reading, and use your imagination to conjure the sound of African music in your mind. Did you hear that? Now, of course, we all have completely individual musical histories, and it would be impossible for our thoughts in this little exercise to be the same. But I would guess that what you just experienced (if you humored me) probably involved some combination of the marimba, xylophone (or other idiophone), drums, shakers, bells, woodblocks, maybe a flute, or some singing or chanting. None of this would be unreasonable; every instrument I just listed is directly tied to the musical history of Africa and they represent just the tip of a rich musical tradition. But I would also guess that whatever you were thinking did not involve violins, snare drums, trumpets, trombones, or other familiar products of the Western canon. If my guesses were correct, then you’ll probably find the topic of African national anthems perplexing and thought-provoking, like I do.


So that you can really understand where I’m going with this, here’s a quick list of those countries which will be celebrating their independence next month, each followed by the name of its national anthem (recordings of each are easy to find on YouTube):



Benin (1 August 1960) L’aube nouvelle

Niger (3 August 1960) La Nigérienne

Burkina Faso (5 August 1960) Une Seule Nuit

Cote d’Ivoire (7 August 1960) L’Abidjanaise

Chad (11 August 1960) La Tchadienne

Central African Republic (13 August 1960) La Renaissance

Congo (15 August 1960) La Congolaise

Gabon (17 August 1960) La Concorde






You won’t have to listen to all of them to see what I’m talking about; one or two should do the trick. Without the context I’ve provided so far, these anthems might seem royal, impressive, majestic, stately, etc. But they are also distinctly Western in style, and the fact that they function as celebrations of African pride makes the entire situation somewhat baffling.


Consider that each country adopted its new national anthem shortly after attaining independence. What’s more, if you look up the lyrics to these songs, you’ll encounter themes of freedom, rights, solidarity, and sovereignty — themes which are expressed in French lyrics. How do we make sense of this all? What could it mean that these anthems, which so many nations chose to assert their freedom and strength, were set in the language and musical forms of the very people who had systematically conquered the continent and exploited it for centuries?



A griotte at a Burkinabè engagement celebration. Photo by Roman Bonnefoy. Creative Commons License via Wikimedia Commons


There are, of course, practically unlimited ways to interpret this all. Let me say at this point that this is my subjective conjecture. For obvious reasons, I was not there for those conversations or decisions, and I may never find out what the people behind these exciting choices were thinking at the time. Perhaps the contradictions I’ve described appear perfectly clear to you as indicative of the permanence of the cultural erosion caused by Europe’s influence in Africa. You could say that these people did not choose the French tongue, and they did not ask for Western instrumentation, rhythmic structure, or tonality — these things chose them, and forcefully, at that. No doubt, a globalized world which had never seen a colonized Africa would have produced drastically different anthems—if any at all. Personally, I don’t see a whole lot of merit in thinking of things in such hypotheticals. What happened in Africa happened, and next month eight independent countries will celebrate what they have become, in spite of—or perhaps, because of—it all. If they ever want to change their tunes, they will. Because, after all, isn’t that what independence is all about?


Simon Riker grew up in Rye, New York, and has spent his entire life so far trying to get a job in a skyscraper. When Simon isn’t interning at Oxford University Press, he can be found studying Music and Sociology at Wesleyan University, where he expects to graduate next year.


Oxford Music Online is the gateway offering users the ability to access and cross-search multiple music reference resources in one location. With Grove Music Online as its cornerstone, Oxford Music Online also contains The Oxford Companion to Music, The Oxford Dictionary of Music, and The Encyclopedia of Popular Music.



July 29, 2013

Post-DSM tristesse: the reception of DSM-5

By Edward Shorter




We’re all suffering from DSM-5 burnout. Nobody really wants to hear anything more about it, so shrill have been the tirades against it, so fuddy-duddy the responses of the psychiatric establishment (“based on the latest science”).


But now the thing’s here, and people have been opening the massive volume — where descriptions of depression are repeated almost verbatim seven times! — and asking what all the shouting was about. Post-DSM-tristesse.


There are a few pluses, and several huge minuses, worth calling attention to, just before everybody goes to sleep again over the question of psychiatric “nosology.” Oh, how the average newspaper reader is turned off by stories on “nosology.”


You have to remember that a lot of smart, well-informed, scientifically up-to-date people were involved in the drafting. So the final result can’t be all horrible.


On the plus side:




The new DSM separates the diagnosis “catatonia” from schizophrenia and makes it a label you can pin on lots of different things, notably depression. Catatonia means movement disorders, such as stereotypies (repetitive movements, common in intellectual disabilities and autism), together with some psychological changes, such as negativism: don’t wanna eat, don’t wanna talk, etc.


Why should anyone care about catatonia? Because it’s treatable. There are effective treatments for it, such as high-dose benzodiazepines and electroconvulsive therapy. If your autistic child has a catatonic symptom such as Self-Injurious Behavior (constantly hitting his head so that he detaches his retinas), he can be effectively treated. This is huge news.


Unfortunately, the pediatric section of the new DSM doesn’t use the term catatonia (apparently it’s only an adult disease) but refers to “stereotypic movement disorder,” without breathing a word about catatonia. But it’s the same thing. This is a real embarrassment for pediatric medicine and it’s a shame that the head disease-designers didn’t take control of the pediatric section as well.


So this is a big plus: Owing to DSM-5, many patients with catatonic symptoms will now be accurately diagnosed and effectively treated. What else? The anxiety disorders section of the DSM has always been something of a dog’s breakfast, heavily politicized and subject to constant horse-trading. But in DSM-5:


Obsessive-compulsive disorder (OCD) is removed from the anxiety section and made a disease of its own. This may encourage the development of new treatments, now that it’s no longer part of the anxiety package. OCD is recognized as responsive to antianxiety meds, but hey, maybe there’s something more specific out there for it. So this would be the way that psychopharmacology progresses — moving together in synch with improvements in diagnosis. It’s the way science is supposed to work.




On the minus side:




In several other key areas, science has not only failed to work, it has been knocked down, kicked bloody, and thrown over the side of the pier.


The core mood diagnoses are still intact: “major depression” and “bipolar disorder.” Major depression is a mix of highly variegated depressive illnesses and should be taken apart.


How about bipolar depressions, involving mania and hypomania? It’s true that they are more serious than garden-variety non-melancholic depressions. But bipolar depressions don’t seem to have different symptoms (different “psychopathology”) than unipolar depressions, even though they may have a more chronic course. And that course doesn’t make them separate diseases. The whole “bipolar” concept is one of those things in science that is true but uninteresting. Yeah, true that some depressions have mania and hypomania tacked onto them. But so what?


Well, there are hundreds of millions of dollars in pharmaceutical profits riding on “bipolar disorder.” That’s what!


“Schizophrenia” is still in there as an undifferentiated entity. Hard to believe, after all this famous “research” that the DSM wonks have been bellowing about, that it remains in the nosology. The old German and French alienists from around 1900 knew there were different forms of chronic psychotic illness. Many of them, especially the French, cast a bleary eye at Emil Kraepelin’s new diagnosis of “dementia praecox” (Eugen Bleuler called it “schizophrenia” in 1908). But the concept was so towering in its majesty — one main psychotic illness with one predictable (downhill) course — that it won out over all the fussy differentiation that other psychiatrists had been attempting. A century of research says that lots of different forms of chronic psychosis exist — all forgotten. Of course the pharmaceutical industry has hyped the single-psychosis-called-schizophrenia line, because antipsychotic agents have been real blockbusters. The money that quetiapine and olanzapine and all the other “second generation” antipsychotics have made is unbelievable.


I could go on but I won’t. You see that the results are mixed, but it’s a tragedy in a way because the results could have been brilliant if: (1) the American Psychiatric Association had not been so determined to ensure continuity from one DSM edition to the next; (2) the patient groups had stopped yowling after their favorite diagnoses; and (3) Big Pharma had realized that the way to future profits runs through new diagnoses, and only after they are in place, through new drugs.


OK. Now everybody can go back to sleep.


Edward Shorter is Jason A. Hannah Professor in the History of Medicine and Professor of Psychiatry in the Faculty of Medicine, University of Toronto. He is an internationally-recognized historian of psychiatry and the author of numerous books, including How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown,  A Historical Dictionary of Psychiatry and Before Prozac: The Troubled History of Mood Disorders in Psychiatry. Read his previous blog posts.


Image credit: Teenage Girl Visits Female Doctor’s Office Suffering With Depression. © monkeybusinessimages via iStockphoto.



Sources of change in Catholicism

By Peter McDonough




Vocation directors report a 10% bump in applications to the Society of Jesus since the ascension of a Jesuit to the papacy. The blip reflects a certain relief. The personable contrast that Pope Francis offers to his dour predecessor shifts the motivational calculus.


Just as plain is the fact that upticks are not trends. The Jesuit-affiliated Marquette University suffered a temporary drop in applications once it became known that Jeffrey Dahmer, the serial cannibal, had lived in a Milwaukee neighborhood by the school. The Jesuit-sponsored Boston College enjoyed a boom in applications because of the Doug Flutie factor. This lasted roughly through the tenure of the school’s star quarterback.


The demographics of American Catholicism are hard to decipher because mini-trends cut across one another. Catholics remain ascendant in the United States mainly due to an influx of immigrants who outpace departures from the church. For now, these immigrants are more observant and more fertile than the comparatively laid-back majority of the faithful, but there is no guarantee that this subculture will hold up. The historical record indicates that the reverse will turn out to be true.


Photo by © Mazur/catholicnews.org.uk, CC BY 2.0, via Flickr.



The cumulative shuttering of the parishes and parish schools that served immigrant Catholicism in the heyday of the American church, from about 1850 through 1950, is one trend that seems clear-cut and irreversible. Even so, some dioceses are booming in areas like the Southwest.


What does seem incontrovertible are two things: (1) The church’s “taxing authority”—its capacity to elicit contributions from the folks in the pews—is slipping. It has never approached the levels shown by its Protestant counterparts and tithing continues to be rare in Catholicism. Sexual abuse scandals have also aggravated shortfalls in financial support. (2) Attendance at Sunday services has fallen with assimilation and secularization, apart from the “exogenous shock” of the scandals.


Another downward trend is in recruitment to the priesthood and congregations of nuns. Some conservative orders receive more new members than their centrist and progressive peers. The descending spiral shows signs of leveling off, but the new level is low compared to what it was during boom times. The pre-sixties flow of young men and women into lives of celibacy will not come back. Decline is the long-term scenario for the priesthood even when candidates from Asia and Africa, for whom ordination entails upward mobility, are factored in.


These economic and demographic undercurrents don’t make the American Church the religious equivalent of a failed state. But chronic low achievement in its financial growth and a shrinking of the recruitment pool do make it resemble a weak state. It survives, with diminished influence.


The facts are one thing, interpretation another. We lack theories that can generate useful ideas when it comes to understanding how churches change. In contrast to markets, there are no supply-and-demand curves for forecasting religious equilibrium. Nor is there anything like “the demographic transition,” familiar to students of population dynamics, that predicts a fall-off in mortality, as well as a lagged rise, then peaking, of fertility as societies move from agrarian to predominantly industrial profiles. Religious change ignites periodically, in incandescent outbursts, then ossifies. But that regularity, such as it is, is hard to predict or explain.


For centuries, Catholicism thrived in Counter-Reformation mode. In the United States, the immigrant enclaves of the Going My Way church approximated defensive worlds unto themselves. The liberalization of Vatican II, along with gradual assimilation into the American lifestyle, changed that. The restorationist neoconservatism promulgated by John Paul II and his right-hand man and eventual successor, Benedict XVI, stressed ressourcement, a return to the classical sources, above aggiornamento, or updating. But that too has foundered in the face of implacable cultural shifts. Family patterns and sexual values are not going back to what they used to be. The church suffers depletion, just as species are threatened by habitat loss.


The Counter-Reformation mentality could provide a defensive sense of solidarity but not a theory of religious transformation. The “revolution from above” it represented was undone in part by some of the modernizing measures (like Jesuit education) that it promoted but whose effects it could not control. As Eamon Duffy points out in a recent issue of the New York Review of Books, the Counter-Reformation mindset lived with, and was undermined by, contradictions. The strategy was at once less coherent than strict adherence to tradition implies and more arbitrary and irrational in its outcomes than the plans of rigorous militancy foresaw. “The best laid plans” produced some unintended consequences.


There are no clean slates in Catholicism. On the flip side, tradition as understood by church officials does not spell out mechanisms for change. We are left with slowly building seismic forces punctuated by apparently random eruptions. The longevity of the church looks strangely erratic. Not quite as bizarre as Bolivian politics perhaps, where coups rather than elections have long been recognized as the normal mode of change. But the process seems adventitious nonetheless.


Close to where I am summering in Portugal there is a chapel dating from the fourteenth century that parishioners routinely fix up and use for festivals of their own devising. A newly arrived priest complained that several members of the ladies auxiliary were living in a variety of extra-marital arrangements. He disapproved of the raucous music, drinking, and assorted vulgarities that prevailed during feasts nominally commemorating one saint or another. The ladies and their friends have politely ignored him.


This is the way a lot of the church evolves, like an intricate ecosystem rather than a formal organization. The stories of everyday life wander away from the official symbolism, rather as the Romance languages grew out of Latin. Without a vision that grabs souls, institutional strategy veers off course, this way and that.


Peter McDonough has written two books on the Jesuits and others on democratization in Brazil and Spain. His most recent book is The Catholic Labyrinth: Power, Apathy, and a Passion for Reform in the American Church. He lives in Glendale, California. Read his previous blog posts.



Homicide bombers, not suicide bombers

By Robert Goldney




To some this heading may seem unexpected. The term ‘suicide bomber’ has entered our lexicon on the obvious basis that although the prime aim may have been the killing of others, the individual perpetrator dies. Indeed, over the last three decades the media, the general public, and sometimes the scientific community have uncritically used the words ‘suicide bomber’ to describe the deaths of those who kill others, sometimes a few, usually ten to twenty, or in the case of 9/11, about two thousand, while at the same time killing themselves.


Like many areas of human behaviour, these actions have been subjected to rigorous investigation and it is timely to reflect on the findings. Detailed studies have generally shown that those who carry out such acts have little in common with those who die by suicide, using ‘suicide’ in its historically and clinically accepted sense.


For example, in an early review in 2007, Ellen Townsend, a psychologist from the University of Nottingham, concluded that available evidence demonstrated that so called ‘suicide bombers’ had a range of characteristics which on close examination were not truly suicidal, and that attempting to find commonalities between them and those who died by suicide was likely to be an unhelpful path for any discipline wishing to further understand suicidal behaviour.


September 11 Memorial, New York, USA


Furthermore, in 2009 the psychiatrist Jerrold Post and his colleagues at the George Washington University referred to the ‘normality’ and absence of individual psychopathology of suicide bombers. Most other researchers have reported similar findings, although it is fair to acknowledge that the psychologist Ariel Merari of Tel Aviv University has expressed contrary views which have stimulated spirited and at times acrimonious debate, and there is the recent polemical work of Adam Lankford, an English Literature graduate and Associate Professor of criminal justice at the University of Alabama.

From the point of view of experienced clinical psychiatrists, the usual feelings of hopelessness and unbearable psychic pain, along with self-absorption and restriction of options, in those who are suicidal are the antithesis of terrorist acts, and mental disorders do not appear to be a prominent feature. In fact, suicidal intent is usually specifically denied by ‘suicide bombers’, as it is proscribed by most religions, including Islam and Christianity. Indeed, Islam condemns suicide as a major sin, with those who commit it denied entry to heaven, and, as the act of a ‘suicide bomber’ is implied to offer a shorter path to heaven, that reward would not be achieved if suicidal intent were present.


Is focusing on the words an academic distraction, or could it be important?


It is pertinent to recall the saying ‘the pen is mightier than the sword’, attributed to Cardinal Richelieu by Edward Bulwer-Lytton in his 1839 play, which has entered our everyday language. My colleagues, fellow psychiatrist Murad Khan of the Aga Khan University and sociologist Riaz Hassan of Flinders University, who has collated the largest database in the world of such acts, and I are mindful of that saying and believe that the words do matter. We discussed this in more detail in 2010 in the Asian Journal of Social Science.


It has long been recognized that inappropriate publicity promotes further suicide, and the psychiatrist Seb Littman of the University of Calgary noted in 1985 that the more suicide is reported, the more there is a tendency for it to be normalized as an understandable and reasonable option. That being so, the repeated use of the term ‘suicide bomber’ runs the risk of normalizing such behaviour, simply because of the frequent use of the words.


That the term ‘homicide bomber’ was more appropriate was probably first suggested by the White House press secretary, Ari Fleischer, in 2002. However, perhaps because of that provenance, it has been all but ignored.


It is time to address this again. Although the word ‘homicide’ is not entirely accurate because of the political/military context in which these deaths occur, it is more appropriate than the continued use of the word ‘suicide’. Furthermore, it has the potential to modify this behaviour. Whereas suicide can be portrayed as altruistic, there is nothing glamorous or idealistic about homicide.


Clearly there is no simple answer to what has occurred increasingly over the last decades. However, by the use of the words ‘homicide bomber’ a gradual change in the worldwide interpretation and acceptability of these acts may occur. Representatives of the media are urged to consider this change.


Robert Goldney AO, MD is the author of Suicide Prevention, Second Edition (OUP). He is Emeritus Professor in the Discipline of Psychiatry at the University of Adelaide. He has researched and published in the field of suicidal behaviors for 40 years and has been President of both the International Association for Suicide Prevention and the International Academy of Suicide Research.


Image credit: September 11 Memorial, New York, USA via iStockPhoto.



July 28, 2013

Do dolphins call each other by name?

By Justin Gregg




If you haul a bottlenose dolphin out of the water and onto the deck of your boat, something remarkable will happen. The panicked dolphin will produce a whistle sound, repeated every few seconds until you release her back into the water. If you record that whistle and compare it to the whistle of another dolphin in the same predicament, you’ll discover that the two whistles are different. In fact, every dolphin will have her own “signature” whistle that she uses when separated from her friends and family.




This was a discovery made back in the 1960s by Melba and David Caldwell, who also learned that dolphins are not born with a signature whistle, but develop it over the first year of their life. Young dolphins cobble their unique whistle pattern together from sounds they hear in their environment – perhaps basing it on their mother’s whistle, or even incorporating aspects of their trainers’ whistles if they happen to find themselves born in captivity.


Once created, a signature whistle typically remains stable throughout the dolphin’s life, making it easy for the whistler to convey her identity to anyone within earshot. So is it correct to suggest that these whistles are equivalent to “names” as we would understand them in human terms? That’s still a matter of debate.


To function like name-use in human language, dolphins would need to not just repeat their own name, but call out the names of other individuals in order to address them. Figuring out if dolphins do this has been the goal of Vincent Janik and his research group at the Sea Mammal Research Unit at the University of St. Andrews. Their latest peer-reviewed article, with lead author Stephanie King, has provided evidence that dolphins might in fact be capable of labeling each other in this manner.


For this study, boat-based researchers followed a group of bottlenose dolphins in the Moray Firth and St. Andrews Bay. Once it was established that the “owner” of one of the signature whistles the researchers had on file was in the group, that whistle was played back under water and the animals’ response was recorded. In eight out of the twelve cases where this was attempted, the “owner” responded by calling back with his/her own signature whistle. The conclusion was that the individual must have recognized his/her name, and that signature whistles therefore could “serve as a label for that particular individual when copied.”


This is an important finding for a few reasons. It is extremely rare for non-human animals to learn and then use a vocal label for things in their environment outside of an experimental setting. There are plenty of examples of animals like vervet monkeys or chickens that produce alarm or food calls that refer to objects and events, but these are hard-wired vocalizations the animals are born knowing. Dolphins, on the other hand, create their own vocalizations from scratch. They retain the ability to imitate sounds throughout their life – a skill that has been demonstrated both in the lab and in the wild. Only parrots have shown similar skill when it comes to acquiring vocal labels under experimental conditions. A handful of bird species are also quite flexible when it comes to developing and learning to use contact calls in the wild, and might occasionally imitate the unique calls produced by other birds. But where dolphins are concerned, everything seems to be in place to support the idea that a dolphin could learn their friends’ signature whistles, and shout out the “name” of another dolphin in order to get their attention.


But the results of this study are not all we need to establish that dolphins call each other by name. In experimental contexts, we know dolphins can learn to label objects. In the wild, we now know that dolphins are capable of reacting to the use of their signature whistle by others. But for this scenario to truly be similar to name-use in human language, we will need reliable observations wherein a dolphin appears to intentionally shout out the name of one of its friends, followed by that friend responding in some relevant way (e.g., swimming over, or repeating his or the other dolphin’s signature whistle). This would be the next piece of the puzzle.


There is one finding that throws a small wrench in the works, however. At the moment, we know that dolphins rarely ever produce the signature whistles of their friends; they produce their own signature whistles almost 100% of the time. Habitually saying your own name is an odd behavior if indeed signature whistles are used by dolphins to address each other. It could be that they do regularly call each other’s names, but that scientists have just not had the right tools – or the right opportunity – to observe it. The St. Andrews group is certainly not slowing down when it comes to providing us with new and innovative research techniques. So will they be the ones to find examples of dolphins addressing each other by name in the wild? I really hope so.


But if they don’t, I offer the following reason as to why not. Language is more than just labels for things. It is primarily a means for individuals to share their thoughts with other individuals. In order for language to function correctly, language users need to be acutely aware of what others are thinking – what they might know and what they might need to know. There is a lot of debate in the sciences as to the extent to which non-human animals are aware of both the contents of their own minds, and the contents of other minds. As smart as dolphins are, we still don’t know how advanced their own minds are where this is concerned. If it turns out that dolphins don’t in fact use signature whistles to get each other’s attention, it will not be because they don’t have signature whistles. They most certainly do. And it’s not because they aren’t intelligent. They most certainly are. It might instead be because they don’t fully grasp the kind of hierarchical understanding that goes into this kind of language-like exchange. For me to call your name in the context of a human conversation, I have to know that you know that Jim is your name. I also have to guess that you know that I know that you know that Jim is your name, which is why I am calling you Jim in the first place. This kind of understanding is not trivial. It is deeply complicated. And it is the key to transforming a signature whistle into a full blown name.


Justin Gregg is the author of Are Dolphins Really Smart? The Mammal Behind the Myth. He is a research associate with the Dolphin Communication Project. Follow him on Twitter @justindgregg or at his blog justingregg.com.


Image credit: Dolphin photo by tpsdave [in Public Domain via Pixabay]



World Hepatitis Day 2013: This is hepatitis. Know it. Confront it.

By John Ward, MD




On the 28th of July, countries around the globe will commemorate the third annual World Hepatitis Day. One of only eight health campaigns recognized by the World Health Organization, this health observance raises awareness of the silent yet growing epidemic of viral hepatitis worldwide. Globally, each year, 1.4 million persons lose their life to viral hepatitis (as reported in the Global Burden of Disease Study 2010), approaching the number of deaths from HIV/AIDS (1.5 million) and surpassing mortality from tuberculosis and malaria (1.2 million each).


Deaths from viral hepatitis are caused by one of five hepatitis viruses. Hepatitis A and hepatitis E, which spread through person-to-person contact and contaminated food or water, are major causes of acute hepatitis, particularly in areas of the developing world suffering from poor hygiene. Hepatitis B virus (HBV) and hepatitis C virus (HCV) cause chronic infection that can result in liver disease and death. Hepatitis D requires hepatitis B for replication, and co-infection with both viruses increases the risk of severe liver disease. Worldwide, 240 million people have chronic hepatitis B infection, and 180 million are living with hepatitis C.


Hepatitis B vaccination has been a profound and enduring achievement in prevention of viral hepatitis. Since 1982, more than one billion doses of hepatitis B vaccine have been administered worldwide, preventing an estimated 3.7 million deaths. Because HBV can be transmitted from mother to child at birth, vaccinating newborns, preferably within 24 hours of delivery, can prevent 85% of HBV infections transmitted from infected mothers to their children. Unfortunately, several barriers prevent universal provision of a birth dose of hepatitis B vaccine globally, including gaps in the education of healthcare workers, the difficulty of reaching newborns delivered at home in a timely way, and lack of coordination between immunization and maternal and child health care providers. As a result of these obstacles, as few as one in four infants worldwide receive hepatitis B vaccine within 24 hours of birth. However, in China, which shoulders 50% of the global burden of hepatitis B mortality, these challenges have been overcome through increasing the number of infants born in birthing facilities and through obstetrical staff acceptance of the responsibility to vaccinate newborns against hepatitis B. As a result, birth dose coverage increased from 40% in 1992-1997 to 74% in 2002-2005 (Vaccine. 2010 Aug 23;28(37):5973-8).




The benefits of vaccination will soon be joined by the benefits of viral hepatitis care and treatment particularly for hepatitis C. Effective, oral antiviral medications are now available to treat chronic HBV infection; 90% of HBV patients treated with one of these antivirals achieve viral suppression. Although a vaccine for hepatitis C remains elusive, antiviral therapies for HCV play a critical role in preventing disease progression and can lead to virologic cure. New all-oral therapeutic options anticipated for the near future are expected to require shorter durations of treatment (e.g. 12-24 weeks rather than the current 24-48 week regimens), improve rates of viral clearance, and cause fewer serious adverse events. The benefits associated with medical management and therapy can only be realized when persons with chronic viral hepatitis infections are identified through testing and linked to needed care. Unfortunately, barriers to viral hepatitis testing and referral persist worldwide. They include a low level of HBV- and HCV-related knowledge among the public and providers, inadequate screening policies, the affordability of treatment, inequitable access to care for marginalized populations, and limitations in health infrastructure that compromise testing and disease surveillance. As a result, many if not most persons infected with HBV or HCV are unaware of their infection, and a smaller proportion of infected persons receive proper care and treatment.


Timely changes in national policies can elicit significant changes in the way countries approach viral hepatitis prevention and control. For instance, to reflect current hepatitis C epidemiology, HCV testing recommendations were recently expanded in the United States. In 2013, CDC and the US Preventive Services Task Force called for both risk-based screening and one-time screening of persons born from 1945 through 1965, among whom the burden of HCV infection is highest. The new recommendations are expected to improve identification of people living with chronic HCV infection and enhance linkage to care and treatment for those found to be infected.


These recommendations must be accompanied by improvements in care and treatment. For example, professional training regarding viral hepatitis can be improved and best practices for delivering these services identified. Further, countries can expand laboratory capacity to include appropriate diagnostics (e.g. laboratory-based or point-of-care tests) and virologic supplemental testing to confirm current infection and quantify viral load. Policy directives also can highlight opportunities for cost-effective integration of services, including incorporating HCV and HBV prevention and care services into other services and encouraging the use of telemedicine to expand access to testing, care, and treatment.


The global burden of viral hepatitis is tremendous. With new opportunities for improved viral hepatitis prevention and control coming to light worldwide, increased communication and collaboration between policymakers, community organizations, providers, and the public will become even more critical to overcoming significant barriers to prevention and control. Annual commemoration of World Hepatitis Day creates an impetus for such collaboration by raising viral hepatitis awareness and eliciting the actions needed to prevent new infections and improve health outcomes for those already infected.


John Ward, MD, is Director of the CDC’s Division of Viral Hepatitis and Clinical Assistant Professor at Emory University School of Medicine.


To raise awareness of World Hepatitis Day, the editors of Clinical Infectious Diseases and the Journal of Infectious Diseases have highlighted recent, topical articles, which have been made freely available throughout the month of August in a World Hepatitis Day Virtual Issue. Oxford University Press publishes the Journal of Infectious Diseases and Clinical Infectious Diseases on behalf of the HIV Medicine Association and the Infectious Diseases Society of America (IDSA).


Image credit: HBV (Hepatitis B virus) positive by using HBsAg strip test. © jarun011 via iStockphoto.



July 27, 2013

When science may not be enough

By Louis René Beres




We live in an age of glittering data analysis and complex information technologies. While there are obvious benefits to such advancement, not all matters of importance are best understood by science. On some vital matters, there is a corollary and sometimes even complementary need for a deeper, more palpably human, kind of understanding. A critical example is the study of terrorism.


Statistics don’t bleed. Often, even the most elegant and persuasive science can illuminate only partial truths. What is missing in these cases is the special sort of insight that philosopher Michael Polanyi calls “personal knowledge,” an unsystematic means of grasping reality often associated with the openly “subjective” epistemologies of phenomenology and existentialism.


Sigmund Freud had already understood that meaningful psychological analysis could not afford to neglect innately private feelings. Therefore, he wisely cautioned, whatever the incontestable limitations of subjective investigation, we must nonetheless pay close attention to the human psyche, or soul. Oddly, Freud’s plainly “unscientific” view is applicable to our present-day investigations of terror-violence. How so? Freud himself probably never imagined the application of this humanistic warning to world politics.


Following terrorist attacks, we routinely learn of the number of fatalities and then the number of those who were “merely wounded,” whether such tragedies occur in Israel, Iraq, the United States, or anywhere else. What we can’t ever really seem to fathom are the infinitely deeper human expressions of victim suffering. As much as we might try to achieve a spontaneous experience of unity with the felt pain of others, these attempts are inevitably in vain. They lie beyond science.


In essence, the most important aspect of any terror attack on civilian populations always lies in what can’t be measured: the inexpressibility of physical pain. No human language can even begin to describe such pain, as the boundaries that separate one person from another are immutable.


Here, everyone will readily understand that bodily anguish must not only defy ordinary language, but must also be language-destroying. This inaccessibility of suffering — or the irremediable privacy of human torment — has notable social and political consequences. For instance, in certain foreign policy venues, it has repeatedly stood in the way of recognizing terror-violence as innately wrong and resolutely inexcusable. Rather than elicit universal cries of condemnation, these crimes have often elicited a chorus of enthusiastic support from those who are most easily captivated by self-justifying labels. Most conspicuous are self-righteous claims of terror-violence as legitimate expressions of “revolution,” “self-determination,” or “armed struggle.”


Photo by Robert Johnson, CC BY 2.0, via Flickr.



But why do certain terrorists continue to inflict grievous pain upon innocent persons (“noncombatants”) without at least expecting some reciprocal gain or benefit? What are the real motives in these irrational cases? Are these particular terrorists narrowly nihilistic, planning and executing distinct patterns of killing for killing’s sake? Have they managed to exchange one murderous playbook for another, now preferring to trade such classical military strategists as Sun-Tzu and Clausewitz for Bakunin, Fanon, and even De Sade?


Why? Terrorism is often a twisted species of theatre. All terrorists, in the same fashion as their intended audiences, are imprisoned by the stark limitations of language. For them, as for all others, the unique pain experienced by one human body can never be shared with another. This is true even if these bodies are closely related by blood, and even if they are tied together by other tangible measures of racial, ethnic, or religious kinship.


Psychologically, the distance between one’s own body and the body of another is immeasurably great. This distance is always impossible to traverse. Whatever else we may have been taught about empathy and compassion, the determinative membranes separating our individual bodies will always trump every detailed protocol of formal ethical instruction.


This split may allow even the most heinous infliction of harms to be viewed objectively. Especially where a fashionably popular political objective is invoked, terror bombings can conveniently masquerade as justice. Because this masquerade often works, consequent world public opinion can easily come down on the side of the tormentors. Such alignments are made possible by the insurmountable chasms between one person and another.


For terrorists and their supporters the violent death and suffering so “justly” meted out to victims always appears as an abstraction. Whether inflicted by self-sacrificing “martyrs” or by more detached sorts of attackers, these harms are very casually rationalized in the name of “political necessity,” “citizen rights,” “self-determination,” or “national liberation.” Nothing else must ever be said for further moral justification.


Physical pain can do more than destroy ordinary language. It can also bring about a grotesque reversion to pre-language human sounds; that is, to those guttural moans and primal whispers that are anterior to learned speech. While the victims of terror bombings may writhe agonizingly, from the burns, nails, razor blades and screws, neither the world public who are expected to bear witness, nor the mass murderers themselves, can ever truly understand the deeper meanings of inflicted harms. For the victims, there exists no anesthesia strong enough to dull the relentless pain of terrorism. For the observers, no matter how well-intentioned, the victims’ pain will always remain anesthetized.


Because of the limits of science in studying such matters, terrorist bombers — whether in Boston, Barcelona, or Beer Sheva — are almost always much worse than they might appear. Whatever their stated or unstated motives, and wherever they might choose to discharge their carefully rehearsed torments, these murderers commit to an orchestrated sequence of evils from which there is never any expressed hope of release. Whatever their solemn assurances of tactical necessity, terrorists terrorize because they garner personal benefit from the community-celebrated killings. The terrorists terrorize because they take authentically great delight in executions.


It is not enough to study terrorism with reams of carefully-gathered data and meticulously analyzed statistics. Rather, from the start, it is essential to acknowledge the substantial limitations of science in such critical investigations, and also the deeply human meanings of terrorist-created harms. Although sometimes still not possible to fully appreciate, due to literally immeasurable victim pain and suffering, these complex meanings can ultimately be apprehended by more intensely subjective efforts at “personal knowledge.”


Louis René Beres is a professor of Political Science at Purdue who lectures and publishes widely on terrorism, national security matters, and international law. He is the author of some of the very earliest major books on nuclear war and nuclear terrorism, including Apocalypse: Nuclear Catastrophe in World Politics (1980) and Terrorism and Global Security: The Nuclear Threat (1979).  He is a regular contributor to OUPblog.


If you are interested in this subject, you may be interested in Psychology of Terrorism, edited by Bruce Bongar, Lisa M. Brown, Larry E. Beutler, James N. Breckenridge, and Philip G. Zimbardo. The first comprehensive book on the psychology of terrorism, it is particularly relevant to those interested in current events, and appeals broadly to many different professionals involved in medicine, public health, research, government, and non-profit fields.


Subscribe to the OUPblog via email or RSS.


Subscribe to only current affairs articles on the OUPblog via email or RSS.


The post When science may not be enough appeared first on OUPblog.




Published on July 27, 2013 03:30

Korea remembered

By Steven Casey




It is sixty years since the Korean War came to a messy end at an ill-tempered armistice ceremony in Panmunjom’s new “peace” pagoda. That night, President Dwight Eisenhower made a brief and somber speech to the nation. What the American negotiators had signed, he explained to his compatriots, was merely “an armistice on a single battleground—not peace in the world.”


Eisenhower’s sobering words set the tone for the US reaction. Many Americans found it difficult to come to terms with a war that had ended, not with the unconditional surrender of its enemy, but with a ceasefire that left the North Korean aggressor intact. So on the day the guns fell silent, there was no rousing party around Times Square, no impromptu revelry outside the White House. In subsequent years, there was not even any rush to commemorate those who had so bravely served. In Arlington Cemetery, instructions initially stipulated that grave markers not read “Korean War,” and only after a protest was “Korea” added—though still without the “war.”


At least Arlington gave Korea a mention. Across the Potomac the war did not get its own memorial for more than four decades. It was as if the country was gripped by a bout of willful amnesia. As the historian Charles S. Young has observed, “It was like a terminally ill relative whose family had lived with the loss so long the actual moment of death was anticlimactic. The trial could now be put behind them.”


The Korean War had certainly been traumatic for the American people. In the starkest reckoning, it took the lives of more than 30,000 US fighting men, with another 103,000 wounded. But during the three long years of conflict, many Americans had come to believe that the cost had been even higher.


Signing of the armistice

Image credit: UN delegate Lieut. Gen. William K. Harrison, Jr. (seated left), and Korean People’s Army and Chinese People’s Volunteers delegate Gen. Nam Il (seated right) signing the Korean War armistice agreement at P’anmunjŏm, Korea, July 27, 1953. From the U.S. Department of Defense. Public domain via Wikimedia Commons.


In the early days, when General Douglas MacArthur, commander of UN forces, refused to censor the press, many war correspondents played up the human toll. Some recounted interviews with GIs who complained that fighting the North Koreans “was a slaughterhouse.” Later, others produced graphic stories of the long American retreat after China’s intervention. This gruesome episode, one correspondent wrote, would be “irrevocably etched in the mind—and the conscience—of the American people. The etching will show frostbitten boys slipping, falling, and dying—but fighting, though facing a vastly greater foe, dragging out their dead and wounded, by hand, by jeep, by oxcart.”


Back in the United States, the Republican Party—desperate to regain power after five straight presidential election defeats—eagerly exploited the large death toll, while adding an extra twist. They accused Harry Truman’s administration of deliberately concealing the true human cost. One common Republican complaint was that the government refused to count non-battle casualties in its totals, especially those boys suffering from frost-bite in the North Korean mountains—a deliberate omission, Republicans charged, that meant the official statistics were 60,000 lower than the “real” figure. Another common Republican allegation was that, despite the mounting casualty lists, Truman had no ideas for how to end the bloodshed.


In the spring of 1951, MacArthur powerfully amplified this argument, and by so doing got himself fired by Truman. On returning home to a tumultuous welcome, MacArthur then launched a sustained attack on the government’s handling of the war, ending with a powerful rhetorical question: “Where does the responsibility for that blood rest? This I am quite sure—It is not going to rest on my shoulders.”


MacArthur need not have worried. As the war dragged on, many Americans placed the responsibility squarely with Truman and the Democrats. Parents whose sons had been drafted, observed Samuel Lubell, the writer and pollster, “were bitterly resentful of the administration.” For others, even bread-and-butter economic issues seemed increasingly subordinate to the war. “Surprising numbers of voters came to resent the prevailing prosperity as being ‘bought by the lives of boys in Korea,’” Lubell concluded in the context of the 1952 election campaign. “The feeling was general that the Korean War was all that stood in the way of an economic recession. From accepting that belief, many persons moved on emotionally to where they felt something immoral and guilt-laden in the ‘you’ve never had it better’ argument of the Democrats.”


When Americans voted in 1952, Eisenhower finally ended the Republican Party’s election drought, recapturing the White House largely on the basis that he was the man to end Truman’s bloody war in Korea. For the political elite, this electoral outcome was stunning: the end, no less, of Franklin Roosevelt’s all-conquering New Deal coalition that had dominated the ballot box for the past twenty years.


Because this election was so consequential, the political elite initially refused to banish Korea from its collective memory. Instead, it intensively pondered the war’s lessons. For the next decade, powerful players on both sides of the partisan divide reached the same conclusion: the political cost of waging a protracted Asian ground war was too high. During the Dien Bien Phu crisis of 1954, for instance, when the Vietnamese communists threatened to overrun French forces, Eisenhower shied away from sending American troops. In Congress Democratic leaders applauded his restraint. There must be “no more Koreas,” insisted Senator Lyndon B. Johnson, “with the United States furnishing 90 percent of the manpower.”


Of course, when Johnson was himself president ten years later, memories of Korea’s human and political cost were no longer strong enough to deter him from launching another Asian ground war. In this sense, Johnson was as guilty as his fellow Americans of Korean War amnesia—and with disastrous results. During the Vietnam War, the American death toll was even higher, and without the compensation of leaving behind a southern ally that would endure. The domestic repercussions were even more intense, as protests spread beyond the halls of Congress and onto the nation’s streets and campuses. And like Truman in 1952, Johnson bequeathed to his party an electoral defeat in 1968—although this time the Republican successor was unable to achieve a quick and durable armistice.


Korea therefore offers a cautionary tale of what can happen when people turn a blind eye to the past. History, to be sure, is often a treacherous teacher. Leaders need to be particularly careful not to glean glib lessons from a cursory knowledge of facts that reinforce their pre-existing assumptions. But neither, for that matter, should they ignore past parallels altogether. When the trauma of war is forgotten, the consequence can be calamitous. When the central political lessons of the Korean War were neglected too soon after its messy and painful end, the outcome was especially tragic.


Although it is now sixty years since the armistice was signed at Panmunjom, Korea deserves to be remembered still. For one thing, it reminds us how the reality of war is refracted to the public through partisan politicians and scoop-seeking reporters, often with traumatic results. But above all, we should never forget the war’s actual human cost—the lives curtailed on the battlefield, the families left grieving, the communities made emptier. These are the tragic consequences that should always be part of our collective memory.


Steven Casey is Professor in International History at the London School of Economics. He is author of Selling the Korean War: Propaganda, Politics, and Public Opinion (OUP, 2008), which won the Truman Book Award. His new book, When Soldiers Fall: How Americans Have Confronted Combat Losses from World War I to Afghanistan, will be published by OUP at the end of 2013.


Subscribe to the OUPblog via email or RSS.


Subscribe to only history articles on the OUPblog via email or RSS.


The post Korea remembered appeared first on OUPblog.




Published on July 27, 2013 00:30

July 26, 2013

Oxford authors and the British Academy Medals 2013

We don’t often discuss book awards on the OUPblog, but this year the inaugural British Academy Medals were awarded to three authors and their titles published by Oxford University Press: Thomas Hobbes: Leviathan, edited by Noel Malcolm; The Organisation of Mind by Tim Shallice and Rick Cooper; and The Great Sea: A Human History of the Mediterranean by David Abulafia (USA only). Created to recognise and reward outstanding achievement in any branch of the humanities and social sciences, the British Academy Medals are awarded for a landmark academic achievement. On this occasion, we asked the Oxford University Press (OUP) editors of the three books to reflect on what these texts bring to scholarly publishing today.


Thomas Hobbes: Leviathan, edited by Noel Malcolm




“Thomas Hobbes’s Leviathan, originally published in 1651, is one of the classics of Western thought. It is the first great philosophical work written in English. Except it’s written in Latin as well. And as well as covering a very broad range of philosophical issues, it’s also a landmark in the history of political thought. So the task of preparing the standard scholarly edition is a unique challenge and responsibility: after many years, cometh the moment, cometh the man. This moment actually occurred back in the 1980s, when Noel Malcolm, a young college lecturer at Cambridge, was identified as the person not only to tackle Leviathan but also to take a more general responsibility for planning and overseeing OUP’s edition of the complete works of Hobbes.


“Along the way Dr Malcolm has also been a prominent political journalist and a Balkan expert who has done as much as anyone to improve international understanding of the history and politics of the region. He paved the way for his edition of Leviathan with an acclaimed edition of Hobbes’s correspondence in 1994. The ultimate fulfilment of his Hobbesian destiny will be a biography, but we may have to be patient in waiting for that.


“Leviathan will still be read in the next century and the one after. And I have no doubt that Malcolm’s edition will even then continue to be a source of illumination and a subject of awe, such is his achievement.”


– Peter Momtchiloff, Commissioning Editor, Philosophy, Oxford


The Organisation of Mind by Tim Shallice and Rick Cooper




“As a psychology student in the early 90s, I encountered and cited the name ‘Shallice’ frequently during my studies. I became a student of Tim in 1995 during further studies at UCL (though I suspect my unremarkable essays are long forgotten, assuming they registered in his memory in the first place).


“Years later, in 2001, at a meeting of the Cognitive Neuroscience Society in the World Trade Center, I was first informed about his ambitious new project, one that would attempt to unify two fields: cognitive science and cognitive neuroscience, which, though having similar names, had taken quite different trajectories. Though I continued pestering Tim over the years, and we collaborated on an edited volume, it was not until eight years later that I was thrilled to hear that Tim was considering OUP for his new book, co-written with Rick Cooper.


“The book was published in 2011 with glowing endorsements from two giants in cognitive neuroscience and cognitive science: Michael Posner and Jay McClelland.


“Books as broad and ambitious as this are rare. There are few people qualified to write them for one thing! I am sure that the many years invested by the authors in this book will pay off. It is likely to have an influence on the field for many years to come.”


– Martin Baum, Commissioning editor, Psychology, Oxford


The Great Sea: A Human History of the Mediterranean by David Abulafia




“I would love to take credit for bringing The Great Sea to port, but truthfully it was acquired by my predecessor, Peter Ginna, who left OUP’s US office before the rewards of his acumen drifted in to our shores. If anything, the project was seen as posing a daunting challenge for me in my new job—just how big could the market in the United States be for a very, very long book offering a multi-millennial exploration of the Mediterranean? Quite honestly, given the size of the financial risk, we all fretted for a few years. But then the chapters began coming in and they buoyed our hopes; beautifully written, dramatically constructed, and convincingly detailed, they made for fascinating reading. Anyone could see that here was wide-sweep history at its best. In short, The Great Sea grew greater. I’m pleased for Oxford that I did acquire David’s next project, A Maritime History of the World, which will employ the same navigational-narrative lines as The Great Sea on an even grander scale. It should be capacious and distinguished enough to win all three British Academy medals.”


– Tim Bent, Executive Editor, Trade History, New York


The British Academy is the UK’s national body which champions and supports the humanities and social sciences. It is an independent, self-governing fellowship of scholars, elected for their distinction in research and publication. Created ‘for the Promotion of Historical, Philosophical and Philological Studies’, it was first proposed in 1899 in order that Britain could be represented at a meeting of European and American academies. The organisation received its Royal Charter from King Edward VII in 1902.


Please join us in congratulating Noel Malcolm, Tim Shallice, Richard Cooper, and David Abulafia on their British Academy Medals. Oxford University Press is proud to be publishing such exceptional scholars.


Noel Malcolm is a Senior Research Fellow of All Souls College, Oxford, and General Editor of the Clarendon Edition of the Works of Thomas Hobbes. He is the editor of Thomas Hobbes: Leviathan.


Tim Shallice was the founding director of the Institute of Cognitive Neuroscience, part of University College London, where he is an emeritus professor. Richard Cooper originally studied mathematics and computer science at the University of Newcastle, Australia. Together they are the authors of The Organisation of Mind.


David Abulafia is Professor of Mediterranean History at Cambridge University and the author of The Great Sea: A Human History of the Mediterranean and The Mediterranean in History.


Subscribe to the OUPblog via email or RSS.


Subscribe to only media articles on the OUPblog via email or RSS.


The post Oxford authors and the British Academy Medals 2013 appeared first on OUPblog.




Published on July 26, 2013 07:30
