Oxford University Press's Blog

June 14, 2014

Post-Hay Festival blues

By Kate Farquhar-Thomson




It was down to the trustworthy sat nav that I arrived safe and sound at Hay Festival this year; torrential downpours meant that navigating was tougher than usual and being told where to go, and when, was more than helpful.


Despite the wet and muddy conditions that met me at Hay, and stayed with me throughout the week, the enthusiasm of the crowd never dwindled. Nothing, it seems, keeps book lovers from their passion: hearing and meeting their favourite authors, and having their books signed. But let’s not ignore the fact that festival-goers at Hay not only support their favourite authors, they also relish hearing and discovering new ones.




My working holiday centres on our very own creators of text, our very own exponents of knowledge, our very own Oxford authors! Here I will endeavour to distil just some of the events I was privileged to attend in the line of duty!


Peter Atkins was an Oxford Professor of Chemistry and fellow of Lincoln College, Oxford until his retirement in 2007 – many of us, including myself, studied his excellent textbooks at ‘A’ level and at university. What Peter Atkins does so well is make science accessible to everyone, not least an attentive Hay audience. Peter puts chemistry right at the heart of science. ‘Chemistry has rendered a service to civilization,’ Atkins says; ‘it contributes to the cultural infrastructure of the world’. And from there he took us through just nine things we need to know to ‘get’ chemistry.




Ian Goldin’s event on Is The Planet Full? addressed global issues that are affecting, and will affect, our planet. So, is the planet full? Well, the Telegraph tent for his talk certainly was! Goldin, whose lime green sweater brought a welcome brightness to the stage, is Professor of Globalisation and Development and Director of the Oxford Martin School at the University of Oxford. His words brought clarity and insight: “politics shapes the answer to this question,” said Goldin.


Hay mixes the young with the old, academics with us mere mortals, and what we publishers call the ‘trade’ authors with the more ‘academic’ types. This was demonstrated aptly by Paul Cartledge, who right from the start referenced an earlier talk he had attended by James Holland. Cartledge is A.G. Leventis Professor of Greek Culture at the University of Cambridge, and James (who is an ex-colleague and friend) is a member of the British Commission for Military History and the Guild of Battlefield Guides, but a non-academic. The joy of Hay is that it brings everyone together. Paul Cartledge was speaking about After Thermopylae, events of a mere 2,500 years ago and rather a more tricky period to illustrate with the props and pictures that Holland had used so aptly in his presentation.




OUP had 15 authors at the Hay Festival, but the festival also had other visitors, such as Chris Evans, whose show was broadcast live from the site for the 500 Words competition announcement, and I was lucky enough to be there.


So what does Hay mean to me? It’s a unique opportunity to get up close and personal with heroes in literature and culture, as well as academia. It’s a week of friends, colleagues, and drinking champagne with Stephen Fry whilst discussing tennis with John Bercow – and wearing wellies every day!


Kate Farquhar-Thomson is Head of Publicity at OUP in Oxford.




Image credits: Stephen Fry, Ian Goldin, and 500 Words competition at the Hay Festival. Photos by Kate Farquhar-Thomson: do not reproduce without permission.



June 13, 2014

Climate change and our evolutionary weaknesses

By Dale Jamieson




In the reality-based community outside of Washington D.C. there is a growing fear and increasing disbelief about the failure to take climate change seriously. Many who once put their faith in science and reason have come to the depressing conclusion that we will only take action if nature slaps us silly; they increasingly see hurricanes and droughts as the only hope.


This helps to explain why two articles published recently in scientific journals garnered such attention. Their message: It may already be too late to save the West Antarctic Ice Sheet. The slap is on the way. As glaciologist Richard Alley put it, “we are now committed to global sea level rise equivalent to a permanent Hurricane Sandy storm surge.” This sea level rise of 4-16 feet may be the “new normal,” and on top of that there will still be additional Hurricane Sandy style surges. Daniel Patrick Moynihan anticipated such a sea level rise in a 1969 memo he wrote to President Nixon’s White House Counsel, John Ehrlichman: “Goodbye New York. Goodbye Washington…” He might have added, “goodbye Shanghai, London, Mumbai, and Bangkok. Goodbye South Florida and goodbye to the California coast.”


Photo by NASA. Public domain via Wikimedia Commons.


Nature’s slaps have begun and they may soon become punches, but as any parent knows, slaps do not always help. Those who reject decades of climate science will not be swayed by two new scientific papers, while those who care about climate change may come to see their actions as increasingly futile. We need to get out of this cycle of denial and depression and get on a road to recovery.


The first step to take is to recognize that climate change is the most difficult problem that humanity has ever faced. Climate change deniers, greedy corporations, and opportunistic politicians deserve all the blame they get and more, but they are not the only problem. The most difficult challenge in addressing climate change lurks in the background: evolution did not design us to solve or even recognize this kind of problem. We are built to respond to sudden, dramatic movements of middle-sized objects in our visual fields, while climate change consists of the gradual build-up in the atmosphere of an invisible, odorless, tasteless gas. If the threats that climate change posed were immediate and proximate, action would all but be assured; if carbon dioxide were sickly green in color and stank to high heaven, we would have done something about it by now.


Another feature of climate change that makes it difficult for us to respond is that its causes and effects are geographically and temporally unbounded. Earth system scientists study the earth holistically and think on millennial timescales and beyond, but this perspective is foreign to most people. Most of us pay little attention to events that occur beyond national boundaries, unless they are “one-off” disasters. The idea that turning up my thermostat in New York can contribute to affecting people living in Malaysia in a thousand years is virtually beyond comprehension to most of us.


The challenge is obvious once we see the problem in this way. We need to design institutions and policies that can help us to overcome our natural frailties in addressing climate change, and we need to make the threat as immediate and sensible as possible. The presentation and rollout of the US National Climate Assessment was a welcome attempt to do this. The report’s message was that climate change is here to stay and will only get worse. Some cities and states are already starting to take action, and administration officials fanned out across the country to make sure that local opinion leaders understood what climate change means for their communities.


We also need to strengthen and create institutions that provide credible knowledge of such long-term threats. Life in a large-population, high-consumption, high-technology world brings new risks, especially when nature is starting to wake up from the relatively stable period that it has been in for the last 10,000 years. We need the kind of knowledge that will enable us to anticipate and adapt to these unprecedented challenges. This was part of the thinking behind President Lincoln’s establishing the National Academy of Sciences in 1863, and Congress’s creation of the Office of Technology Assessment in 1972 (which was shut down in 1995). The media, educational establishments, and the general public have important roles to play in supporting and creating these institutions. All of us need to become more critical consumers of information. Reports from Washington “think tanks,” for example, are often highly partisan, and yet they are still treated as having the same authority as scientific assessments. What should matter when it comes to information is credibility, not insider influence, and this should be reflected in our airwaves as well as our scientific journals.


Finally, to address climate change we need new political and legal institutions that are specifically designed to restrain our tendency towards short-sighted behavior. There are many proposals and experiments from around the world designed to support us in addressing long-term threats, including various mechanisms for representing future generations in governmental decision-making, creating an atmospheric trust, and reforms in statistical, accounting, and decision-making procedures so that they better reflect the future effects of our present actions.


Climate change is not a single problem. It presents us with a wide range of challenges that will only become more severe as time passes. One of the most important steps to take is realizing how ill-equipped we are to deal with climate change and reforming our institutions and policies accordingly, but we should not lose sight of the need to mitigate the emissions and land-use practices that are bringing it about. No matter what we do, we are in for a rough ride, but by taking simple actions at present and recommitting ourselves for the long haul, we can preserve what we most value about the world that our ancestors have given us, and provide a livable future for our descendants.


Dale Jamieson is the author of Reason in a Dark Time: Why the Struggle Against Climate Change Failed — and What It Means for Our Future (Oxford University Press). He teaches Environmental Studies, Philosophy, and Law at New York University, and was formerly affiliated with the National Center for Atmospheric Research.



Eight facts about the gun debate in the United States

By Philip J. Cook and Kristin A. Goss




The debate over gun control generates more heat than light. But no matter how vigorously the claims and counterclaims are asserted, the basic facts are not just a matter of personal opinion. Here are our conclusions about some of the factual issues that are at the heart of the gun debate.



Keeping a handgun to guard against intruders is now a Constitutional right. In District of Columbia v. Heller (2008) and McDonald v. Chicago (2010), the US Supreme Court ruled by a 5-4 majority that the Second Amendment provides a personal right to keep guns in the home. States and cities are hence not allowed to prohibit the private possession of handguns. However, the Court made it clear in these decisions that the Second Amendment does not rule out reasonable regulations of gun possession and use.


Half of gun owners indicate that the primary reason they own a gun is self-defense. In practice, however, guns are used in only about 3% of cases where an intruder breaks into an occupied home, or about 30,000 times per year. That compares with over 42 million households with guns. To see just how rare defensive gun use in the home is, look at it this way: on average, a gun-owning household will use a gun to defend against an intruder once every 1,500 years.
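A quick check of that arithmetic, using the post’s own round figures (the division below is our gloss on the authors’ numbers, not their own working):

\[
\frac{42{,}000{,}000\ \text{gun-owning households}}{30{,}000\ \text{defensive uses per year}} \approx 1{,}400\ \text{household-years per use},
\]

which is in line with the authors’ rounded figure of roughly once every 1,500 years.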


As many Americans die of gunshot wounds each year as in motor-vehicle crashes (around 33,000). In the last 30 years, over 1 million Americans have died in civilian shootings — more than all American combat deaths in all wars in the last century.
Non violence sculpture by Carl Fredrik Reuterswärd, Malmö, Sweden. Photo by Francois Polito. CC-BY-SA-3.0 via Wikimedia Commons.


Most gun deaths in the United States are suicides. There are 20,000 gun suicides per year, compared with 11,000 gun homicides and 600 fatal accidents. While there are hundreds of thousands of serious suicide attempts each year, mostly with drugs and cutting instruments, half of completed suicides are with guns. The difference is in lethality: the case-fatality rate with guns is 90%, far higher than for the other common means. Availability influences choice of weapon; states with the highest prevalence of gun ownership have four times the gun suicide rate of the states with the lowest prevalence.


The homicide rate today is half of what it was in 1991. That is part of the good news in the United States — violent crime rates of all kinds have plunged since the early 1990s, and violent crime with guns has declined in proportion. (Two thirds of all homicides are committed with guns.) Still, homicide remains a serious problem — homicide is the second leading cause of death for American youths. Our homicide rates remain far higher than those of Canada, the UK, Australia, France, Israel, and other wealthy countries. We are not an exceptionally violent nation, but criminal violence in America is much more likely to involve guns and hence be fatal.


The widespread availability of guns in America does not cause violence, nor does it prevent violence. Generally speaking, there is little statistical relationship between the prevalence of gun ownership in a jurisdiction and the overall rate of robbery, rape, domestic violence, or aggravated assault. But where gun ownership is common, the violence is more likely to involve guns and hence be more deadly than in jurisdictions where guns are scarcer. In place of the old bumper sticker, we’d say: “Guns don’t kill people, they just make it real easy.”


The primary goal of gun regulation is to save lives by separating guns and violence. Federal and state laws regulate who is allowed to possess guns, the circumstances under which they can be carried and discharged in public, certain design features, the record-keeping required when they are transferred, and the penalties for criminal use. The goal is to make it less likely that criminal assailants will use a gun. The evidence is clear that some of these regulations are effective and do save lives.


Gun violence can also be reduced by reducing overall violence rates. Gun violence represents the intersection of guns and violence. Effective action to strengthen our mental health, education, and criminal justice systems would reduce intentional violence rates across the board, including gun violence (both suicide and criminal assault). But there is no sense in the assertion that we should combat the causes of violence instead of regulating guns. The two approaches are quite distinct and both important.

Kristin A. Goss is Associate Professor of Public Policy and Political Science at Duke University. She is the author of Disarmed: The Missing Movement for Gun Control in America. Philip J. Cook is ITT/Terry Sanford Professor of Public Policy and Professor of Economics and Sociology at Duke University. He is the co-author (with Jens Ludwig) of Gun Violence: The Real Costs. Kristin A. Goss and Philip J. Cook are co-authors of The Gun Debate: What Everyone Needs to Know®.



In praise of Sir William Osler

By Arpan K. Banerjee




In May this year, the American Osler Society held a joint meeting with the London Osler Society and the Japanese Osler Society at the Randolph Hotel in Oxford. The Societies exist to perpetuate the memory of arguably one of the most influential physicians of the early twentieth century, and to discuss topics related to Sir William Osler’s interests. It is fitting that the meeting was held in Oxford, where Osler spent his later years as Regius Professor of Medicine, having transferred from another great seat of medical learning, Johns Hopkins Medical School in the United States.


William Osler. CC-BY-4.0 via Wikimedia Commons.


Osler was interested in medical education (he produced a classic textbook, which ran to several editions) and set about trying to improve the education of future doctors. Osler’s other great legacy was his combination of superb clinical skills, honed by experience not only on the wards but also in the laboratories, with a deep interest in the humanities. Osler always tried to combine these two approaches in his work, and many of his writings and aphorisms are as relevant today as when they were first written. Medical students could read Aequanimitas today, more than a century after it was written, and would profit from much of the advice to students within this volume of essays and addresses.


Osler had a great interest in the History of Medicine and helped found the history section of the Royal Society of Medicine in London. This scientific section has continued to flourish for over a century. He believed physicians should be well rounded and well read, and that medicine was a calling of both art and science. Although Osler was not against the idea of specialisation in medicine, he was a superb generalist and could manage both adult and child patients. He believed that doctors owed it to themselves to be well versed in the range of disease and illness afflicting mankind. His early interest in comparative pathology during his time at the Montreal Veterinary College prepared him well for dealing with infectious diseases, which in the pre-antibiotic era were the scourge of the day; in today’s Western world, by contrast, degenerative diseases, cancers, and diseases of longevity have overtaken infections as the major killers.


2014 marks the centenary of the Great War; it was Osler who, in 1914, started a campaign for the compulsory vaccination of soldiers against typhoid, publishing letters in The Times and The British Medical Journal on the topic. That year his literary output also included Incunabula Medica, a study of 214 of the earliest printed medical books, dating from 1467 to 1480. Although finished, it was not published until 1923, four years after his death.


Throughout the late twentieth century medicine continued to super-specialize at an alarming pace around the world, driven by rapid advances in medical diagnosis and treatment. X-rays were only discovered in 1895, and the early part of the twentieth century saw the introduction of chest x-rays into clinical practice. This was still a world away from the CT, ultrasound, and MRI scans that are now de rigueur in the management of patients. Yet in spite of all this progress, disaffection with the medical profession seems rife. Could it be that general physicians are going to make a comeback? Perhaps a more humanitarian approach to the patient is what is required again, combined perhaps with the inexorable technical progress that will undoubtedly continue. Osler would have been amused to see how the wheel of medical fashion has turned full circle.


Arpan K Banerjee qualified in medicine from St Thomas’s Hospital Medical School in London, UK, and trained in radiology at Westminster Hospital and Guy’s and St Thomas’ Hospitals. In 2012 he was appointed Chairman of the British Society for the History of Radiology, of which he is a founder member and council member. In 2011 he was appointed to the scientific programme committee of the Royal College of Radiologists, London. He is the author/co-author of six books, including the recent The History of Radiology.



Derrida on the madness of our time

By Simon Glendinning




In 1994 Jacques Derrida participated in a seminar in Capri under the title “Religion”. Derrida himself thought “religion” might be a good word, perhaps the best word, for thinking about our time, our “today”. It belongs, Derrida suggested, to the “absolute anachrony” of our time. Religion? Isn’t it that old thing we moderns thought had gone away, the thing that really does not belong in our time? And yet, so it seems, it is still alive and well.


Alive and well in a modern world increasingly marked by the death of God. How could this be?


A revival of religion is particularly surprising, perhaps even shocking, for those who thought it was all over for religion, for those who “believed naively that an alternative opposed religion”. This alternative would be the very heart of Europe’s modernity: “reason, Enlightenment, science, criticism (Marxist criticisms, Nietzschean genealogy, Freudian psychoanalysis)”. What is modernity if it is not an alternative opposed to religion, a movement in history destined to put an end to religion?


Derrida’s contribution to the seminar attempted to re-think this old “secularisation thesis”. He attempted to outline “an entirely different schema”, one which would be up to thinking the meaning and significance of a return of religion in our time, and capable of making sense of the new “fundamentalisms” that are, he suggested, “at work in all religions” today. And here, in 1994, Derrida drew special attention to what he called “Islamism”, carefully disassociating it from Islam: Islamism is not to be confused with Islam – but is always liable to be confused with it since it “operates in [its] name”.


Before making further steps Derrida noted that the group of philosophers he was in discussion with at the Capri seminar might themselves share a commitment thought to be opposed to religion: “an unreserved taste, if not an unconditional preference, for what in politics, is called republican democracy as a universalizable model.”


This taste or preference in politics is itself inseparable from “a commitment…to the enlightened virtue of public space. [A uniquely European achievement which consists in] emancipating [public space] from all external power (non-lay, non-secular), for example from religious dogmatism, orthodoxy or authority.” And hence, this commitment – the commitment to making decisions without recourse to religious revelation or religious authority – might itself seem to be part of the “modernity” that the revival of religion would seem to challenge.


But Derrida refused to present this commitment as one belonging to “an enemy of religion”. It does not have to be understood as a commitment opposed to religion. In fact, and surely to the surprise of many believers and non-believers alike, he argued for seeing how the preference for republican political secularity is essentially connected to a thesis in Kant on the relation between morality – what it means to make decisions and conduct oneself morally as a human being – and, precisely, religion. A link that will make this European public space both secular and (specifically) Christian.




It is a thesis in Kant that Derrida attempted to use as an astonishing interpretive key to the question of religion and the religious revival today, a key also to the character of radicalised fundamentalisms which, in 1994, he already saw developing in the geo-political relations between this European Christianity and the other great monotheisms, Judaism and Islam.


The Kantian thesis could not be more simple, but Derrida asks us to “measure without flinching” the implications of it. If we follow Kant we will have to accept that Christian revelation teaches us something essential about the very idea of morality: “in order to conduct oneself in a moral manner, one must act as though God did not exist or no longer concerned himself with our salvation.” The crucial point here is that decisions on right conduct should not be made on the basis of any assumption that, by acting in a certain way, we are doing God’s will. The Christian is thus the one who “no longer turns towards God at the moment of acting in good faith”. In short, the good Christian, the Christian acting in good faith, is precisely the one who must decide in a fundamentally secular way. And so Derrida asked, regarding Kant’s thesis, “is it not also, at the core of its content, Nietzsche’s thesis”: that God is dead?


Derrida does not understate it: this thesis – the thesis that Christians are those who are called to endure the death of God in the world – tells us “something about the history of the world – nothing less.”


“Is this not another way of saying that Christianity can only answer to its moral calling and morality, to its Christian calling, if it endures in this world, in phenomenal history, the death of God, well beyond the figures of the Passion?… Judaism and Islam would thus be perhaps the last two monotheisms to revolt against everything that, in the Christianising of our world, signifies the death of God, two non-pagan monotheisms that do not accept death any more than multiplicity in God (the Passion, the Trinity etc), two monotheisms still alien enough at the heart of Greco-Christian, Pagano-Christian Europe that signifies the death of God, by recalling at all costs that “monotheism” signifies no less faith in the One, and in the living One, than belief in a single God.”


And what is the effect of this conflict among the monotheisms? With the Christianising of our world – globalization as “globalatinization”, as Derrida put it – we are beginning to see nothing less than “an infinite spiral of outbidding, a maddening instability” in the dimension of revolt and mutual strangeness between these religions of the book. This scene is, Derrida suggests, the focal point of “the madness of our time”.


Simon Glendinning is a Reader in European Philosophy at the London School of Economics and Political Science and the author of Derrida: A Very Short Introduction.


The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.



June 12, 2014

Behind-the-scenes tour of film musical history

As Richard Barrios sees it, movie musicals can go one way or the other: some end up as cultural touchstones, others as train wrecks. In his book Dangerous Rhythm: Why Movie Musicals Matter, Barrios goes behind the scenes to uncover the backstories of these fabulous hits and problematic (if not exactly forgettable) flops. In the slideshow below, take a tour through some of the great movie musicals — and gain some insight into life on set.





"As Sentimental as Little Women"

Such was Time’s assessment of the final scene of The Wizard of Oz. For sure, many Ozians have been known to weep when Judy/Dorothy says that last line. Screen still from The Wizard of Oz, Warner Brothers



Little Women"">


Can't Stop the Music

Can’t or won’t? The wonder that is Can’t Stop the Music, with the Village People, Valerie Perrine, Bruce Jenner, Steve Guttenberg, and way too much badly used supporting talent. In an awful way, however, it sort of was the movie music of the ’80s. Film poster for Can't Stop the Music, Associated Film Distribution.






The Sound of Music cast

An informal portrait of the Von Trapp family, in the persons of Kym Karath, Debbie Turner, Angela Cartwright, Duane Chase, Heather Menzies, Nicholas Hammond, Charmian Carr, and proud sort-of-parents Julie Andrews and Christopher Plummer. Yes, it’s as relentless as it is cheery—and, for many, resistance will be futile. Publicity photo for The Sound of Music, Twentieth Century Fox.






“It’s Gershwin! It’s Glorious!”

So said the ads for Porgy and Bess—even as this stiff and rather stagy shot of Dorothy Dandridge and Sidney Poitier reveals the other part of the equation. The tin roof and peeling plaster look way calculated, everything’s spotless, and the camera isn’t willing to get too close. Screen still of Porgy and Bess, Samuel Goldwyn Films.






Hello, Dolly!

Not all of the massive quantity of the marathon “When the Parade Passes By” sequence in Hello, Dolly! lay in its cost, nor in the number of people, of which only a tiny fraction is seen here. It also came musically, with Barbra Streisand singing (or syncing) what the publicity department called “the longest note of any movie musical.” Anybody got a stopwatch? Screen shot from Hello, Dolly!, Twentieth Century Fox.






The Four Stars of Guys and Dolls

On the screen and in the photo studio, the four leads frequently seemed like they had all been compartmentalized in some fashion. Brando seemed a tad offhand, Simmons gorgeous and radiant, Sinatra disjunct, Blaine working it. So they are seen here, and so they are through the film. Screen shot from Guys and Dolls, Samuel Goldwyn Films.






Astaire and Crawford in Dancing Lady

In Dancing Lady, Fred Astaire spends a fair amount of his first film working hard to be a proper partner to Joan Crawford. Here, in “Heigh-Ho the Gang’s All Here,” the strain almost shows. Screen shot from Dancing Lady, Metro-Goldwyn-Mayer.






Gene Kelly in Cover Girl

Gene Kelly, as dogged by Gene Kelly, performs the “Alter Ego” sequence in Cover Girl. This is a photographically tricked-up evocation, yet it still shows the scene for what it is—one of the most striking moments in 1940s musical cinema. Screen shot from Cover Girl, Sony Pictures Entertainment.






My Fair Lady

The singularly formal stylization of My Fair Lady on film is adored by some and irksome to others. Here, an on-the-set shot of Audrey Hepburn and Rex Harrison gives a good representation of many of Fair Lady’s components—the style, the stiffness, the wit, the calculation. Publicity photo from My Fair Lady, Warner Brothers.




















Richard Barrios worked in the music and film industries before turning to film history with the award-winning A Song in the Dark and his recent book on the history of movie musicals Dangerous Rhythm: Why Movie Musicals Matter. He lectures extensively and appears frequently on television and in film and DVD documentaries. Born in the swamps of south Louisiana and a longtime resident of New York City, he now lives in bucolic suburban Philadelphia.



Three objections to the concept of family optimality

By Carlos A. Ball




Those who defend same-sex marriage bans in the United States continue to insist that households led by married mothers and fathers who are biologically related to their children constitute the optimal family structure for children. This notion of family optimality remains the cornerstone of the defense of the differential treatment of LGBT families and same-sex couples under the law.


There are three main objections to the family optimality claim. The first is a logical objection that emphasizes the lack of a rational relationship between means and ends. Even if we assume that the optimality claim is empirically correct, there is no connection between promoting so-called family optimality and denying lesbians and gay men, for example, the opportunity to marry or to adopt. It is illogical to think that heterosexual couples are more likely to marry, or to accept the responsibilities of parenthood, simply because the law disadvantages LGBT families and same-sex couples.


The second objection is one of policy that questions whether marital and family policies should be based on optimality considerations. The social science evidence shows, for example, a clear correlation between parents who have higher incomes and more education, and children who do better in school and have fewer behavioral problems. And yet it is clear that neither marriage nor adoption should be limited to high-income individuals or to those with college degrees. This is because such restrictions would exclude countless individuals who are clearly capable of providing safe and nurturing homes for children despite the fact that they lack the “optimal” amount of income or education.


Image Credit: Gay Pride Parade NYC 2013 – Happy Family. Photo by Bob Jagendorf. CC-BY-2.0 via Flickr.


It is also important to keep in mind that judges and child welfare officials do not currently rely on optimality considerations when making custody, adoption, and foster care placement decisions. Instead, they apply the “best interests of the child” standard, which is the exact opposite of the optimality standard because it is based not on generalizations, but on individualized assessments of parental capabilities.


Finally, the optimality claim lacks empirical support. Optimality proponents rely primarily on studies showing that the children of married parents do better on some measures than children of single parents (even when controlling for family income) to argue that (1) marriage, (2) biology, and (3) gender matter when it comes to parenting.


The “married parents v. single parents” studies, however, do not establish that it is the marital status of the parents, as opposed to the number of parents, that accounts for the differences. Those studies also do not show that biology matters, because the vast majority of the parents who participated in the studies — both the married parents and the single ones — were biologically related to their children.


As for the notion that parental gender matters for child outcomes, it is the case that most single-parent households in the United States are headed by women. This does not mean, however, that it is the absence of a male parent in those households, as opposed to the absence of a second parent, that accounts for the differences in child outcomes found by some studies comparing children raised in married households with children raised in single-parent ones.


In short, the family optimality claim does not withstand logical, policy, or empirical scrutiny. Family optimality arguments, whether in the context of same-sex marriage bans or any other, should be rejected by courts and policymakers alike.


Carlos A. Ball is Distinguished Professor and Judge Frederick Lacey Scholar at the Rutgers University School of Law. His most recent book on LGBT rights is Same-Sex Marriage and Children: A Tale of History, Social Science, and Law.



Eighteenth-century soldiers’ slang: “Hot Stuff” and the British Army

By Jennine Hurl-Eamon




Britain’s soldiers were singing about “hot stuff” more than 200 years before Donna Summer released her hit song of the same name in 1979. The true origins of martial ballads are often difficult to ascertain, but a song entitled “Hot Stuff” can be found in print by 1774. The 5 May edition of Rivington’s New York Gazetteer attributes the lyrics to Sergeant Edward Bothwood of the 47th Regiment during the Seven Years War (1756-1763).


This text leaves little doubt that “hot stuff” held similar sexual connotations for its eighteenth-century crooners as it does today. Alluding to the famous generals on the battlefields of Quebec, the final verse describes the soldiers invading a French convent (or possibly a bawdy house, since the terms were synonymous among soldiers). The sexual element in “hot stuff” is abundantly clear:


With Monkton and Townshend, those brave Brigadiers,
I think we shall soon knock the town ’bout their ears;
And when we have done with the mortars and guns,
If you please, madam Abbess, — a word with your Nuns:
Each soldier shall enter the Convent in buff,
And then, never fear, we will give them Hot Stuff.


The Oxford English Dictionary has not previously recognized this mid-eighteenth-century use of “hot stuff” as a term denoting sexual attractiveness; the earliest such usage claimed by the current edition dates back only to 1884, and I have alerted the editors to this earlier example.


William Hogarth, The March of the Guards to Finchley (1749-1750). Oil on canvas. Public domain via Wikimedia Commons.

It should not be surprising that the expression “hot stuff” had its origin in military circles. Britain’s common soldiers were immersed in a counter-culture of which language was an important signifier. Men in uniform have long been known for a greater propensity to swear, for example. This is borne out by the literature of the time. As early as 1749, Samuel Richardson referred to the popular expression of swearing “like a trooper” in his novel Clarissa. Characters in Robert Bage’s 1796 novel, Hermsprong, held profanity to be “as natural to a soldier as praying to a parson,” and worried that “if soldiers and sailors were forbidden it, their courage would droop.” The habit transcended the boundaries of rank and gender.

Folklore anthologist Roy Palmer uncovered a reference to a pensioner’s wife who swore compulsively, yet was considered a good soul whose coarse language was simply an indelible imprint of army life. One of the most famous of these military wives, Christian Davies — who followed her husband disguised as a soldier and later traveled with the troops as a sutler — commented on officers’ ability to “curse,” noting one particular lieutenant who “swore a round hand.”



Martial language went beyond swearing, however. Francis Grose proudly named “soldiers on the long march” as one of the “most classical authorities” in the preface of his Classical Dictionary of the Vulgar Tongue (first published in 1785). Having served in the army himself, Grose had first-hand knowledge of military slang. His dictionary referred to terms such as “hug brown bess” meaning “to carry a firelock, or serve as a private soldier;” “fogey” for “an invalid soldier;” and “Roman” for “a soldier in the foot guards, who gives up his pay to his captain for leave to work.”


Though Grose arguably provides the best evidence of military slang in the eighteenth century, other records offer hints. One soldier testified at the Old Bailey in 1756 that it was common for military men to use the term “uncle” to mean “pawnbroker,” for example. The contemporary resonance of terms like “hot stuff” and “fogey” are evidence that some, though not all, eighteenth-century soldiers’ patter eventually found its way into the civilian lexicon.


Francis Grose, by D. O. Hill (Prof. Wilson, Land of Burns, 1840). Public domain via Wikimedia Commons.

Historians who have studied military slang for other armies tend to have a narrow scope that stresses the distinctive nature of the time and place under observation. Thus, a scholar of the American Civil War theorizes that the “custom of independently making up words” came at least in part from the fact that “the Civil War was fought by Jacksonian individualists.”



Tim Cook’s exploration of the colourful idioms of the Canadian troops in the First World War suggests that they served simultaneously to distinguish the Canadians from the other British forces and to help a disparate body of recruits develop a unified identity that separated them from their civilian counterparts. Although many of his insights could be applied to other armies in other wars, Cook limits his observations of language to its role in helping soldiers “endure and make sense of the Great War.”



I would suggest, instead, that linguistic liberties are a common characteristic to all Anglo armies from the eighteenth century onward. More needs to be done to determine whether the phenomenon is broader in geographic and temporal scope, and to understand precisely why military culture tends to take this particular shape.


At the very least, the British soldiers singing bawdily about “hot stuff” in the mid-eighteenth century probably found that their shared slang helped to bond them to one another. Language operated, much like the uniform, to separate military men from civilians and transform them into objects of fascination (both positive and negative). Set beside Donna Summer, these raucous soldiers take their proper place at the forefront of popular culture.


Jennine Hurl-Eamon is associate professor of History at Trent University, Canada. She has published several articles and book chapters on aspects of plebeian marriage and the interactions between the poorer classes and the lower courts. She is the author of three books: Gender and Petty Violence in London, 1680-1720 (2005), Women’s Roles in Eighteenth-Century Europe (2010), and Marriage and the British Army in the Long Eighteenth Century (OUP, 2014).




Image credit: William Hogarth, The March of the Guards to Finchley. (1749-1750); Oil on canvas. Public domain via Wikimedia Commons. (2) Francis Grose By D. O. Hill (Prof Wilson. Land of Burns. 1840). Public domain via Wikimedia Commons



June 11, 2014

A globalized history of “baron,” part 1

By Anatoly Liberman




Once again we are torn between Rome, the Romance-speaking world, and England. The word baron appeared in English texts in 1200, and it probably became current shortly before that time, for such an important military title would hardly have escaped written tradition for too long. One incontestable thing is that baron arose in Old French and through Anglo-French reached Middle English. At present, baron is the lowest rank in hereditary peerage, but “[t]he original meaning of baron in feudal times was one of a class of tenants holding his lands by military service from the king, or other superior lord. The term was soon restricted to king’s barons who were summoned by writ to the council. The practice grew up that those once summoned had a right to attend, and the honour and privilege became hereditary” (The Universal Dictionary of the English Language by Henry Cecil Wyld). The question is how this title happened to get the meaning recorded in Old French.


Early lexicographers were bold people: they formulated hypotheses and fearlessly proclaimed them, for nothing worse could happen to them than running afoul of a different politely formulated conjecture: no ridicule, no rebuke for violating phonetic laws (those had not yet been discovered) or missing an important publication (the few main books on the subject were widely known and always consulted). A look at the guesses by our distant predecessors is not devoid of interest, for some of them had a long life and are still with us.


The syllable bar occurs in many languages and not infrequently has a meaning that fits, at least to a certain extent, the meaning of baron. The first lexicographers noticed Hebrew bar “son,” recognized today even by those who have no knowledge of any Semitic language from bar mitzvah. Since for some time people traced all words to Hebrew, the alleged language of Paradise before Adam and Eve were banished from it, the tie between bar and baron seemed obvious. Then there was Old Irish bar “wise man, sage; leader; overseer.” For some reason, it frequently occurred in glossaries but did not turn up in any text, literary or legal. Such words occur in many old languages and look like learned concoctions. Still this bar, whatever its origin, has been attested, so probably it is not a figment, as James Murray suspected. Charles Mackay, whose etymologies are fanciful but forms invariably correct, mentioned the obsolete Irish Gaelic bar “a man, a learned man” and baran “a great man.” He hardly knew them from living speech.


Then there is Old Engl. beorn “man; hero; warrior,” which may be the same word as one of the Old Germanic names of the bear (this is uncertain; yet the alternative derivation from the verb bear is less likely). Bestowing the names of ferocious animals (bears and boars, for instance) on doughty fighters and esteemed chiefs was common practice. Old Germanic poetry is full of relevant examples. Next to it we find Old Engl. bearn “child, bairn,” an unquestionable cognate of the verb beran “to bear.” Beorn and bearn suggest a Germanic origin of baron, even though the details of the development are unclear.


We can now turn to Latin vir “man, husband,” often proposed as the source (etymon) of baron. Vir has respectable cognates in Old English and Gothic (nearly the same form and the same meaning). The alternation v ~ b poses problems, but they are not insurmountable. It is the suffix (or what looks like a suffix) -on that defies an explanation if we begin with vir. However, some of the best etymologists of the first half of the nineteenth century ignored the “suffix” and had no doubts about vir being the etymon of baron. Vir is not the only v-word that surfaced in the etymological explanations of baron. Latin varus “knock-kneed, bow-legged” and vara “a forked pole,” a cognate of varus, have also been referred to. The connection between them and baron is tenuous at best.


More promising is the Latin noun baro (genitive baronis, accusative baronem), which looks like a possible source of baron. However, the history, and not only the etymology, of baro is another hornets’ nest. The most baffling fact is that there seemingly were two Latin words baro. One had length on both vowels and is usually glossed as “fool; simpleton.” This is the meaning Cicero and at least one more author knew. The other baro, which is given in the most authoritative dictionaries of Latin with a short root vowel, meant “a free man” (that is, not a serf), but it emerged late, in a law code known as Lex Salica “Salian Law.” The code was put together at the beginning of the sixth century, in the reign of Clovis I, though no manuscripts antedating the eighth century have come down to us. The code regulated the life of the Salian Franks. The etymology of the name Salian is debatable and should not concern us. We only need to know that the Salian Franks were different from the so-called Ripuarian Franks and that later the same laws governed all of them. The Franks were a conglomeration of Germanic tribes.


Although Lex Salica was written in Latin, the word baro could be a Latinized German word. Untranslatable native terms regularly appeared in medieval Latin texts unchanged (occasionally -us would be added to them, and Alemannic barus has been recorded). If the word is German, we find ourselves on familiar ground (compare bearn and beorn mentioned above), but if it is Latin, we have to decide whether it has anything to do with baro “fool; simpleton” and ideally account for its origin. Baro “fool” has a well-known continuation in the Modern Romance languages. Italian barone means both “baron” and “rogue,” and many similar-sounding nouns with various suffixes have related meanings, “urchin” among them. “Simpleton,” let alone “fool,” could not develop into “a king’s man” or something similar. Most modern dictionaries state that baro1 and baro2 have nothing to do with each other, but the German linguist Franz Settegast thought differently and made an attempt to overthrow this conclusion.


Settegast showed that in some Latin books baro designated a strong (muscular) or an unpolished man, a hillbilly, a man from the boondocks, as we might say. His findings have never been refuted, but the question remains which sense is original and which is derived, that is, whether the path was from “fool” to “a strong man” or from “a strong man” to “fool.” Also, some etymologists say that Italian barone “rogue” and barone “baron” are different words (homonyms) and cite plausible sources for both, while others try to connect them. As could be expected, the definitive answer does not exist, but the situation may not be quite hopeless, and next week I’ll say what I think about it.


Anatoly Liberman is the author of Word Origins And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears on the OUPblog each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.” Subscribe to Anatoly Liberman’s weekly etymology articles via email or RSS.




Image credit: Manuscrit de la loi salique datant de 793, bibliothèque de l’abbaye de Saint-Gall. Public domain via Wikimedia Commons.



Finding opportunities in risk management

By Torben Juul Andersen, Maxine Garvey, and Oliviero Roggi




For decades, the press has been full of fascinating and colorful stories about prominent and heralded enterprises ending up in scandal and bankruptcy. These include the diversion of corporate funds in the Maxwell Group in the early 1990s, the trading losses that made Barings Bank extinct in the mid-1990s, the accounting frauds at WorldCom in the late 1990s, and the spectacular collapse of Enron in the early 2000s. One could think and hope that these stories were exceptions, and that we learn from them, but this is not quite the case, as history seemingly repeats itself over and over again.


In early 2008, the board of Société Générale learned that a trader, Jérôme Kerviel, had lost $7.2 billion for the company. He had the authority to put $183 million at risk, yet had been able to build up exposures of as much as $73 billion, exceeding the market value of the bank. The internal control systems, and the managers who used them, did not react, and this failure of risk governance cost both money and reputation. Later Bernard Madoff was charged with investor fraud as his Wall Street firm, Bernard L. Madoff Investment Securities LLC, was engaged in a major Ponzi scheme, paying returns to investors with proceeds from newly invested money. The associated losses were estimated in excess of $50 billion, hitting investors around the globe.


There are many examples in emerging markets as well, for example, in Brazil, where pulp producer Aracruz and meat processor Sadia both suffered huge losses on foreign exchange derivatives. Similarly, Ceylon Petroleum Corporation (CPC) in Sri Lanka lost millions on commodity contracts. In these cases, the boards, and the main shareholders including the Sri Lankan Government in the case of CPC, asserted that the executives acted without proper authorization but the final responsibility remained with the board.




Risk-taking is fundamental in business, where owners take a view on the future and commit their organizations accordingly. Taking risks and dealing with uncertainty in the competitive environment is part of doing business, and is the basis for entrepreneurial initiatives. Consequently, the effective oversight of risk-taking is a fundamental part of corporate governance, and a key responsibility of the board.


The board of directors and the executive management team must both protect and enhance profitable business activities in the face of risks and potential disasters that can arise in an unpredictable world. Formal risk management approaches can facilitate some of this, but proactive risk-taking activities are necessary to create new solutions that can deal with unexpected changes in the environment. That is, a business must be willing to take risks and be innovative in order to develop effective responses for future changes.


Effective risk governance consists of three important elements of practice: corporate governance, enterprise risk management, and strategic decision-making. Corporate governance considers the role of the board in its fiduciary obligations towards the shareholders to fend off major losses and optimize the value-creating potential of the enterprise. Enterprise risk management is a formal framework for structured risk management processes, which applies analytical tools to identify, assess, manage, and monitor major risks that may influence future performance. Strategic decision-making looks at risk analysis in forward-looking action plans and ongoing project investments carried out to execute the strategic aims. Proper practices to guide these aspects of risk governance can lead to better risk management outcomes.


It is imperative to consider both the downside and the upside of risk. Looking at the future, we see a changing risk landscape where environmental events are more frequent, intertwined, and complex, which will lead to higher uncertainty and unpredictability. Because of this, proactive risk-taking is essential, as is developing effective responses to unpredictable conditions. This is why the structural elements of the organization are also essential for effective risk management, including a risk aware corporate culture, decentralized decision processes, open information and communication systems, interactive management controls, and incentives commensurate with proactive risk-taking.


History suggests that the human factors behind adverse risk events and corporate scandals are the same no matter where in the world they happen, the United States, Europe, or different emerging markets. For the same reason, we also believe that these proposed remedies are universal. Managers, executives, and board members around the world can learn from the mistakes of others and take heed of suggestions for good risk governance practice.


Torben Juul Andersen is Professor of Strategy and International Management at Copenhagen Business School. Maxine Garvey is a Program Leader for The World Bank. Oliviero Roggi is Professor of Finance at the University of Florence and Visiting Professor of Finance at the NYU Stern School of Business. They are authors of Managing Risk and Opportunity: The Governance of Strategic Risk-Taking.




Image: Business Risk Luck Chart Arrow Wealth Finance. Public Domain via Pixabay.


