Oxford University Press's Blog, page 1027

August 30, 2012

What’s so super about Super PACs?

By Katherine Connor Martin



Back in January we published a short glossary of the jargon of the presidential primaries. Now that the campaign has begun in earnest, here is our brief guide to some of the most perplexing vocabulary of this year’s general election.


Nominating conventions


It may seem like the 2012 US presidential election has stretched on for eons, but it only officially begins with the major parties’ quadrennial nominating conventions, on August 27–30 (Republicans) and September 3–6 (Democrats). How can they be called nominating conventions if we already know who the nominees are? Before the 1970s these conventions were important events at which party leaders actually determined their nominees. In the aftermath of the tumultuous 1968 Democratic convention, however, the parties changed their nominating process so that presidential candidates are now effectively settled far in advance of the convention through a system of primaries and caucuses, leaving the conventions themselves as largely ceremonial occasions.


Purple states, swing states, and battleground states


These three terms all refer to more or less the same thing: a state which is seen as a potential win for either of the two major parties; in the UK, the same idea is expressed by the use of marginal to describe constituencies at risk. The term battleground state is oldest, and most transparent in origin: it is a state that the two sides are expected to actively fight over. Swing state refers to the idea that the state could swing in favor of either of the parties on election day; undecided voters are often called swing voters. Purple state is a colorful metaphorical extension of the terms red state and blue state, which are used to refer to a safe state for the Republicans or Democrats, respectively (given that purple is a mixture of red and blue). Since red is the traditional color of socialist and leftist parties, the association with the conservative Republicans may seem somewhat surprising. In fact, it is a very recent development, growing out of the arbitrary color scheme on network maps during the fiercely contested 2000 election between George W. Bush and Al Gore.


Electoral vote


What really matters on election day isn’t the popular vote, but the electoral vote. The US Constitution stipulates that the president be chosen by a body, the electoral college, consisting of electors representing each state (who are bound by the results of their state election). The total number of electors is 538, with each state having as many electors as it does senators and representatives in Congress (plus 3 for the District of Columbia). California has the largest allotment, 55. With the exception of Maine and Nebraska, all of the states give their electoral votes to the winner of the popular vote in their state on a winner-takes-all basis, and whichever candidate wins the majority of electoral votes (270) wins the election. This means it is technically possible to win the popular vote but lose the election; in fact, this has happened three times, most recently in the 2000 election when Al Gore won the popular vote, but George W. Bush was elected president.
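
For readers who like to see the arithmetic, here is a minimal sketch in Python of how winner-takes-all allocation works and how it can split the popular and electoral votes. The three states and all the vote totals are made up for illustration; they are not real data.

```python
# Hypothetical three-state example of winner-takes-all electoral vote tallying.
# A candidate can win the electoral vote while losing the nationwide popular vote.

# (state, popular votes for A, popular votes for B, electoral votes) -- invented numbers
states = [
    ("State 1", 51, 49, 20),   # A wins narrowly and takes all 20 electors
    ("State 2", 52, 48, 20),   # A wins narrowly and takes all 20 electors
    ("State 3", 20, 80, 15),   # B wins by a landslide but takes only 15 electors
]

popular = {"A": 0, "B": 0}
electoral = {"A": 0, "B": 0}

for name, votes_a, votes_b, electors in states:
    popular["A"] += votes_a
    popular["B"] += votes_b
    winner = "A" if votes_a > votes_b else "B"
    electoral[winner] += electors   # winner-takes-all allocation

print(popular)    # {'A': 123, 'B': 177} -- B wins the popular vote
print(electoral)  # {'A': 40, 'B': 15}   -- A wins the electoral vote
```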


Veepstakes

The choice of a party’s candidate for vice president is completely in the hands of the presidential nominee, making it one of the big surprises of each campaign cycle and a topic of endless media speculation. The perceived jockeying for position among likely VP picks has come to be known colloquially as the veepstakes. The 2012 veepstakes are, of course, already over, with Joe Biden and Paul Ryan the victors.


Super PAC


If there is a single word that most characterizes the 2012 presidential election, it is probably this one. A super PAC is a type of independent political action committee (PAC for short), which is allowed to raise unlimited sums of money from corporations, unions, and individuals but is not permitted to coordinate directly with candidates. Such political action committees rose to prominence in the wake of the 2010 Supreme Court ruling in Citizens United v. Federal Election Commission and related lower-court decisions, which lifted restrictions on independent political spending by corporations and unions. Advertising funded by these super PACs is a new feature of this year’s campaign.


501(c)(4)


It isn’t often that an obscure provision of the tax code enters the general lexicon, but discussions of super PACs often involve references to 501(c)(4)s. These organizations, named for the section of the tax code defining them, are nonprofit advocacy groups which are permitted to participate in political campaigns. 501(c)(4) organizations are not required to disclose their donors. This, combined with the new super PACs, opens the door to the possibility of political contributions which are not only unlimited but also undisclosed: if a super PAC receives donations through a 501(c)(4), then the original donor of the funds may remain anonymous.


The horse race


As we’ve discussed above, what really matters in a US presidential election is the outcome of the electoral vote on November 6. But that doesn’t stop commentators and journalists from obsessing about the day-to-day fluctuations in national polls; this is known colloquially as focusing on the horse race.


The online magazine Slate has embraced the metaphor and actually produced an animated chart of poll results in which the candidates are represented as racehorses.


This article originally appeared on the OxfordWords blog.


Katherine Connor Martin is a lexicographer in OUP’s New York office.


Oxford Dictionaries Online is a free site offering a comprehensive current English dictionary, grammar guidance, puzzles and games, and a language blog; as well as up-to-date bilingual dictionaries in French, German, Italian, and Spanish. The premium site, Oxford Dictionaries Pro, features smart-linked dictionaries and thesauruses, audio pronunciations, example sentences, advanced search functionality, and specialist language resources for writers and editors.


Subscribe to the OUPblog via email or RSS.

Subscribe to only language, lexicography, word, etymology, and dictionary articles on the OUPblog via email or RSS.




Published on August 30, 2012 05:30

The Friday before school starts

By Alice M. Hammel and Ryan M. Hourigan



While standing at the local superstore watching my children choose their colorful binders and pencils for the upcoming school year, I saw another family at the end of the aisle. Their two sons had great difficulty accessing the space because of the crowd and they were clearly over-stimulated by the sights and sounds of this tax-free weekend shopping day. One boy began crying and the other soon curled into a ball next to the packets of college-ruled paper. My daughter, empathic to a fault, leaned down and offered her Blue’s Clues notebook in an effort to make the boy happier. When we finally walked away, I saw the same pain and embarrassment in the eyes of the parents that I have often seen at parent-teacher conferences and IEP meetings.


For many families, the start of a new school year is exciting and refreshing. The opportunity to see old friends and meet new ones, and the ease of settling into a fall routine, can be comforting. For families of students with special needs, however, the start of a school year can be a time of anxiety and frustration, filled with reminders of the deficits (social and academic) of their children. This dichotomy is clear and present as some children bound off the school bus with their shiny new backpacks hanging from their shoulders, while others are assisted off different buses as their eyes and bodies prepare for what sometimes feels like an assault on their very personhood.



These differences are apparent to parents as well as teachers and administrators at schools. Professionals often ask: “What can we do to be the best teachers for these students?”


Consider what school can mean for students who are different and how to create ways to welcome everyone, according to their needs. Before the school year begins, these longstanding suggestions still resonate as best practices for parents and students:


(1) Contact the student before the school year begins to be sure the student and family are aware that you are genuinely looking forward to working with them and have exciting plans for the school year! Everyone learns differently and wants to be honored for their ability to contribute. In the Eye Illusion not everyone is able to see the changes in the dots as they move around the circle. What you see isn’t better or worse — just different. When we think of students and children in the same way, by removing the stigma of labels and considering the needs of all, we become more of a community and less of a hierarchy.


(2) Be aware of all students in the classes you teach. Know their areas of strength and challenge, and be prepared to adapt teaching strategies to include them. We cannot expect students and children all to be the same. Use a fable to illustrate that everyone has strengths and can become an integral part of the learning experience.


(3) Review teaching practices: modalities, colors, sizes, and pacing. All students enjoy learning through various modalities (visual, aural, kinesthetic), love colors in their classroom, appreciate sizing differences to assist with visual concepts, and can benefit from pacing that is more applicable to them. Find ways to include these practices in an overall approach. Universal design (applied to the classroom) means that all students receive adaptations to enhance their learning experience, and no one is singled out as being different because of the adaptations applied.


(4) Create partnerships with all professionals who work with special needs students. A team approach is a powerful way to include everyone effectively. When we work as a team, everyone benefits and the workload is shared by all. This community of professionals creates a culture of shared responsibility and joy.


(5) Provide a clear line of communication with parents of students with disabilities. Often children cannot come home and tell their parents about events, assignments, announcements, and other important parts of their school day. Parents may not be able to gauge whether their child had a good day or if there are concerns. A journal between teacher and parent(s) can be a comforting and useful tool. This communication may also be done electronically through a secure Google or Yahoo group. Reading Rockets provides other useful tips in this area.


(6) Leave labels out of the conversation when communicating with parents. Parents can be sensitive to their child being known only by their diagnosis. In addition, some parents may still be processing the life change that comes with raising a child with special needs. When entering into a conversation with a parent, focus on your classroom and the needs of the student. If there is a concern, try to put the concern in as positive a light as possible. The Parent-Provider network at Purdue University also offers some great tips for communicating with parents.


(7) Let parents know of student accomplishments even if they are small. Students with special needs often encounter failure. Parents attend countless meetings that remind them of all the challenges their children face. A note home when something goes well can make all the difference.


(8) Allow the parent and the child to visit prior to the start of school if the child is new. Students who are enrolling in a new program or a new school may have difficulty with this transition. Often this transition can cause anxiety that will hinder a child from seeing school as a comfortable, safe place. Walk them through the routines: where they sit, where materials are, etc. Social stories (short stories written in third person to illustrate an everyday situation) can also be useful in this circumstance. When read prior to beginning school, these stories help them move through their transition.


A culture of acceptance and compassion must permeate our educational institutions. By categorizing, labeling, and noting differences, we are often putting children in boxes that can then, unfortunately, define them for the rest of their lives. Every child wants to be part of the school experience and seeks to participate to the best of his ability. When the class and school culture are created to honor the personhood of every child, and each child is considered valuable to the success of every school experience, all children begin to enjoy the same childhood experiences.


Alice M. Hammel and Ryan M. Hourigan are the authors of Teaching Music to Students with Special Needs: A Label-Free Approach. Alice Hammel teaches for James Madison and Virginia Commonwealth Universities, and has years of experience teaching instrumental and choral music. Ryan Hourigan is Assistant Professor of Music Education at Ball State University and a recipient of the Outstanding University Music Educator Award from the Indiana Music Educators Association. The companion website to Teaching Music to Students with Special Needs provides more resources.


Subscribe to the OUPblog via email or RSS.

Subscribe to only music articles on the OUPblog via email or RSS.

Subscribe to only education articles on the OUPblog via email or RSS.



Image credit: Having fun in a music class. Photo by SolStock, iStockphoto.




Published on August 30, 2012 03:30

Unfit for the future: The urgent need for moral enhancement

By Julian Savulescu and Ingmar Persson



First published in Philosophy Now Issue 91, July/Aug 2012.


For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.


But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?


With great power comes great responsibility. However, evolutionary pressures have not developed for us a psychology that enables us to cope with the moral problems our new power creates. Our political and economic systems only exacerbate this. Industrialisation and mechanisation have enabled us to exploit natural resources so efficiently that we have over-stressed two-thirds of the most important eco-systems.


A basic fact about the human condition is that it is easier for us to harm each other than to benefit each other. It is easier for us to kill than it is for us to save a life; easier to injure than to cure. Scientific developments have enhanced our capacity to benefit, but they have enhanced our ability to harm still further. As a result, our power to harm is overwhelming. We are capable of forever putting an end to all higher life on this planet. Our success in learning to manipulate the world around us has left us facing two major threats: climate change – along with the attendant problems caused by increasingly scarce natural resources – and war, using immensely powerful weapons. What is to be done to counter these threats?


Our Natural Moral Psychology

Our sense of morality developed around the imbalance between our capacities to harm and to benefit on the small scale, in groups the size of a small village or a nomadic tribe – no bigger than a hundred and fifty or so people. To take the most basic example, we naturally feel bad when we cause harm to others within our social groups. And commonsense morality links responsibility directly to causation: the more we feel we caused an outcome, the more we feel responsible for it. So causing a harm feels worse than neglecting to create a benefit. The set of rights that we have developed from this basic rule includes rights not to be harmed, but not rights to receive benefits. And we typically extend these rights only to our small group of family and close acquaintances. When we lived in small groups, these rights were sufficient to prevent us harming one another. But in the age of the global society and of weapons with global reach, they cannot protect us well enough.


There are three other aspects of our evolved psychology which have similarly emerged from the imbalance between the ease of harming and the difficulty of benefiting, and which likewise have been protective in the past, but leave us open now to unprecedented risk:



Our vulnerability to harm has left us loss-averse, preferring to protect against losses rather than to seek benefits of a similar level.
We naturally focus on the immediate future, and on our immediate circle of friends. We discount the distant future in making judgements, and can only empathise with a few individuals based on their proximity or similarity to us, rather than, say, on the basis of their situations. So our ability to cooperate, applying our notions of fairness and justice, is limited to a small circle of family and friends. Strangers, or out-group members, in contrast, are generally mistrusted, their tragedies downplayed, and their offences magnified.
We feel responsible if we have individually caused a bad outcome, but less responsible if we are part of a large group causing the same outcome and our own actions can’t be singled out.



Case Study: Climate Change and the Tragedy of the Commons

There is a well-known cooperation or coordination problem called ‘the tragedy of the commons’. In its original terms, it asks whether a group of village herdsmen sharing common pasture can trust each other to the extent that it will be rational for each of them to reduce the grazing of their own cattle when necessary to prevent over-grazing. One herdsman alone cannot achieve the necessary saving if the others continue to over-exploit the resource. If they simply use up the resource he has saved, he has lost his own chance to graze but has gained no long term security, so it is not rational for him to self-sacrifice. It is rational for an individual to reduce his own herd’s grazing only if he can trust a sufficient number of other herdsmen to do the same. Consequently, if the herdsmen do not trust each other, most of them will fail to reduce their grazing, with the result that they will all starve.
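
To make the herdsman’s reasoning concrete, here is a rough sketch in Python. Every number in it (the size of the group, the value of the saved pasture, the gain from extra grazing, the trust levels) is an illustrative assumption, not a figure from the argument above.

```python
# Restraint pays for one herdsman only if enough of the others can be trusted to restrain too.
from math import comb

def p_enough_others(n_others, p_restrain, needed):
    """Probability that at least `needed` of the other herdsmen also restrain."""
    return sum(
        comb(n_others, k) * p_restrain**k * (1 - p_restrain)**(n_others - k)
        for k in range(needed, n_others + 1)
    )

def restraint_is_rational(n_others, p_restrain, needed,
                          pasture_value=10.0, grazing_gain=1.0):
    """Restraining is rational only if the expected value of the saved pasture
    outweighs the sure gain from continuing to over-graze."""
    expected_benefit = p_enough_others(n_others, p_restrain, needed) * pasture_value
    return expected_benefit > grazing_gain

# With little trust in the others, restraint is not individually rational:
print(restraint_is_rational(n_others=9, p_restrain=0.2, needed=7))  # False
# With high trust, it is:
print(restraint_is_rational(n_others=9, p_restrain=0.9, needed=7))  # True
```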


The tragedy of the commons can serve as a simplified small-scale model of our current environmental problems, which are caused by billions of polluters, each of whom contributes some individually-undetectable amount of carbon dioxide to the atmosphere. Unfortunately, in such a model, the larger the number of participants the more inevitable the tragedy, since the larger the group, the less concern and trust the participants have for one another. Also, it is harder to detect free-riders in a larger group, and humans are prone to free ride, benefiting from the sacrifice of others while refusing to sacrifice themselves. Moreover, individual damage is likely to become imperceptible, preventing effective shaming mechanisms and reducing individual guilt.


Anthropogenic climate change and environmental destruction have additional complicating factors. Although there is a large body of scientific work showing that the human emission of greenhouse gases contributes to global climate change, it is still possible to entertain doubts about the exact scale of the effects we are causing – for example, whether our actions will make the global temperature increase by 2°C or whether it will go higher, even to 4°C – and how harmful such a climate change will be.


In addition, our bias towards the near future leaves us less able to adequately appreciate the graver effects of our actions, as they will occur in the more remote future. The damage we’re responsible for today will probably not begin to bite until the end of the present century. We will not benefit from even drastic action now, and nor will our children. Similarly, although the affluent countries are responsible for the greatest emissions, it is in general destitute countries in the South that will suffer most from their harmful effects (although Australia and the south-west of the United States will also have their fair share of droughts). Our limited and parochial altruism is not strong enough to provide a reason for us to give up our consumerist life-styles for the sake of our distant descendants, or our distant contemporaries in far-away places.


Given the psychological obstacles preventing us from voluntarily dealing with climate change, effective changes would need to be enforced by legislation. However, politicians in democracies are unlikely to propose such legislation. Effective measures will need to be tough, and so are unlikely to win a political leader a second term in office. Can voters be persuaded to sacrifice their own comfort and convenience to protect the interests of people who are not even born yet, or to protect species of animals they have never even heard of? Will democracy ever be able to free itself from powerful industrial interests? Democracy is likely to fail. Developed countries have the technology and wealth to deal with climate change, but we do not have the political will.


If we keep believing that responsibility is directly linked to causation, that we are more responsible for the results of our actions than the results of our omissions, and that if we share responsibility for an outcome with others our individual responsibility is lowered or removed, then we will not be able to solve modern problems like climate change, where each person’s actions contribute imperceptibly but inevitably. If we reject these beliefs, we will see that we in the rich, developed countries are more responsible for the misery occurring in destitute, developing countries than we are spontaneously inclined to think. But will our attitudes change?


Moral Bioenhancement

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species. We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities. We have provided ourselves with the tools to end worthwhile life on Earth forever. Nuclear war, with the weapons already in existence today could achieve this alone. If we must possess such a formidable power, it should be entrusted only to those who are both morally enlightened and adequately informed.


Objection 1: Too Little, Too Late?

We already have the weapons, and we are already on the path to disastrous climate change, so perhaps there is not enough time for this enhancement to take place. Moral educators have existed within societies across the world for thousands of years – Buddha, Confucius and Socrates, to name only three – yet we still lack the basic ethical skills we need to ensure our own survival is not jeopardised. As for moral bioenhancement, it remains a field in its infancy.


We do not dispute this. The relevant research is in its inception, and there is no guarantee that it will deliver in time, or at all. Our claim is merely that the requisite moral enhancement is theoretically possible – in other words, that we are not biologically or genetically doomed to cause our own destruction – and that we should do what we can to achieve it.


Objection 2: The Bootstrapping Problem

We face an uncomfortable dilemma as we seek out and implement such enhancements: they will have to be developed and selected by the very people who are in need of them, and as with all science, moral bioenhancement technologies will be open to abuse, misuse or even a simple lack of funding or resources.


The risks of misapplying any powerful technology are serious. Good moral reasoning was often overruled in small communities with simple technology, but now failure of morality to guide us could have cataclysmic consequences. A turning point was reached in the middle of the last century with the invention of the atomic bomb. For the first time, continued technological progress was no longer clearly to the overall advantage of humanity. That is not to say we should therefore halt all scientific endeavour. It is possible for humankind to improve morally to the extent that we can use our new and overwhelming powers of action for the better. The very progress of science and technology increases this possibility by promising to supply new instruments of moral enhancement, which could be applied alongside traditional moral education.


Objection 3: Liberal Democracy – a Panacea?

In recent years we have put a lot of faith in the power of democracy. Some have even argued that democracy will bring an ‘end’ to history, in the sense that it will end social and political development by reaching its summit. Surely democratic decision-making, drawing on the best available scientific evidence, will enable government action to avoid the looming threats to our future, without any need for moral enhancement?


In fact, as things stand today, it seems more likely that democracy will bring history to an end in a different sense: through a failure to mitigate human-induced climate change and environmental degradation. This prospect is bad enough, but increasing scarcity of natural resources brings an increased risk of wars, which, with our weapons of mass destruction, makes complete destruction only too plausible.


Sometimes an appeal is made to the so-called ‘jury theorem’ to support the prospect of democracy reaching the right decisions: even if voters are on average only slightly more likely to get a choice right than wrong – suppose they are right 51% of the time – then, where there is a sufficiently large number of voters, a majority of the voters (i.e., 51%) is almost certain to make the right choice.
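
The claim is easy to check numerically. The sketch below (in Python) assumes, as the standard statement of the theorem does, that voters decide independently; the electorate sizes are arbitrary.

```python
# Probability that a strict majority of independent voters chooses correctly,
# when each voter is right with probability p.
from math import comb

def p_majority_right(n_voters, p_right):
    majority = n_voters // 2 + 1
    return sum(
        comb(n_voters, k) * p_right**k * (1 - p_right)**(n_voters - k)
        for k in range(majority, n_voters + 1)
    )

for n in (11, 101, 1001):
    print(n, round(p_majority_right(n, 0.51), 3))
# The probability climbs from just above 0.5 towards 1 as the electorate grows --
# and, by the same logic, towards 0 if each voter is right only 49% of the time.
```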


However, if the evolutionary biases we have already mentioned – our parochial altruism and bias towards the near future – influence our attitudes to climatic and environmental policies, then there is good reason to believe that voters are more likely to get it wrong than right. The jury theorem then means it’s almost certain that a majority will opt for the wrong policies! Nor should we take it for granted that the right climatic and environmental policy will always appear in manifestoes. Powerful business interests and mass media control might block effective environmental policy in a market economy.


Conclusion

Modern technology provides us with many means to cause our downfall, and our natural moral psychology does not provide us with the means to prevent it. The moral enhancement of humankind is necessary for there to be a way out of this predicament. If we are to avoid catastrophe by misguided employment of our power, we need to be morally motivated to a higher degree (as well as adequately informed about relevant facts). A stronger focus on moral education could go some way to achieving this, but as already remarked, this method has had only modest success during the last couple of millennia. Our growing knowledge of biology, especially genetics and neurobiology, could deliver additional moral enhancement, such as drugs or genetic modifications, or devices to augment moral education.


The development and application of such techniques is risky – it is after all humans in their current morally-inept state who must apply them – but we think that our present situation is so desperate that this course of action must be investigated.


We have radically transformed our social and natural environments by technology, while our moral dispositions have remained virtually unchanged. We must now consider applying technology to our own nature, supporting our efforts to cope with the external environment that we have created.


Biomedical means of moral enhancement may turn out to be no more effective than traditional means of moral education or social reform, but they should not be rejected out of hand. Advances are already being made in this area. However, it is too early to predict how, or even if, any moral bioenhancement scheme will be achieved. Our ambition is not to launch a definitive and detailed solution to climate change or other mega-problems. Perhaps there is no realistic solution. Our ambition at this point is simply to put moral enhancement in general, and moral bioenhancement in particular, on the table. Last century we spent vast amounts of resources increasing our ability to cause great harm. It would be sad if, in this century, we reject opportunities to increase our capacity to create benefits, or at least to prevent such harm.


© Prof. Julian Savulescu and Prof. Ingmar Persson 2012


Julian Savulescu is a Professor of Philosophy at Oxford University and Ingmar Persson is a Professor of Philosophy at the University of Gothenburg. This article is drawn from their book Unfit for the Future: The Urgent Need for Moral Enhancement (Oxford University Press, 2012).


Subscribe to the OUPblog via email or RSS.

Subscribe to only philosophy articles on the OUPblog via email or RSS.





Published on August 30, 2012 00:30

August 29, 2012

May the odds be ever in your favor, APSA 2012

On Sunday, 26 August 2012, storm clouds were gathering over political scientists in the United States.


The American Political Science Association (APSA) Annual Meeting and Exhibition 2012 was due to begin on Wednesday, August 29th, but Hurricane Isaac had other plans. The #APSA2012 hashtag was blazing like District 12’s costumes as academics and exhibitors pulled out. (Oxford University Press will not be attending APSA 2012, but you can order books with the conference discount.)


Luckily, W. K. Winecoff stepped into the fray and created #APSA2012HungerGames. Inside Higher Ed has the story, but we’ve created a Storify of the 100(!) best tweets to share with you.


View the story “#APSA2012HungerGames” on Storify


And Norton, it’s on!


Subscribe to the OUPblog via email or RSS.

Subscribe to only law and politics articles on the OUPblog via email or RSS.




Published on August 29, 2012 07:30

(Bi)Monthly Etymology Gleanings for July-August 2012

By Anatoly Liberman



Farting and participles (not to be confused with cabbages and kings). Summer is supposed to be a dead season, but I cannot complain: many people have kindly offered their comments and sent questions. Of the topics discussed in July and August, flatulence turned out to be the greatest hit. I have nothing to add to the comments on fart. Apparently, next to the election campaign, the problem of comparable interest was breaking wind in Indo-European. The uneasy relations between German farzen and furzen have been clarified to everybody’s satisfaction, and interesting parallels from Greek and Slavic adduced. Rasmus Rask and Jacob Grimm might have been embarrassed if they had discovered how fart and its cognates are being used to illustrate their law.


The longevity of fart is typical of words describing physiological functions. The grossest names for the genitals can also often boast of great antiquity, which means that at one time they were just names, but ill luck made them “unpronounceable.” This seems to have happened in the history of the English C-word, and at one time even the F-word was less offensive than it is today. That English learners are shocked or amused by the German noun Fahrt “travel” is also well-known. Such anecdotal encounters across languages occur all the time. For example, many an unwary American in Italy wanted to buy a cold beverage and grabbed a glass with the word caldo on the label (caldo means “hot”). All this goes a long way toward showing that learning foreign languages is a profitable occupation and that innocence doesn’t always pay off abroad.


Learning one’s own language can also be recommended. A recent article in the Los Angeles Times discussed the deleterious effects of texting on youngsters’ spelling. It was pointed out that if they text all day long, they forget how “to properly write English.” Surprise, surprise! Accept and except, its and it’s, and even frayed and afraid. I submit that before the texting/sexting epoch they didn’t know how to spell those words either. Everybody must have heard that grammar and math are not “fun.” (On my campus, there are or were in the not too distant past math anxiety courses. Students, also known as kids, were taught how “to not be ‘afrayed’ of algebra and trigonometry.”) A student from Thailand once walked into my office and asked me whether I could hire her as an assistant for my etymological dictionary. She said that she was taking six courses per semester but would like to work twenty hours a week. I didn’t conceal my amazement and asked her what courses they were. I forget the entire list, but the last course she mentioned was trigonometry. She responded to my disbelief by laughing: “This is American trigonometry!” That it should come to this! By the way, the selfsame kids are no longer able to read English classics: too long, too dull, and no one knows those terrible long words. They are foreigners in their native land. OMG!


Hemingway (or H. Bogart?) joins the debate on the present perfect.


I been, I seen, I done. I cannot help reproducing a charming conversation quoted by Ms. Annie Morgan. The following exchange was overheard in Toronto in the thirties: –You done it! I didn’t done it!—You did done it. I seed you done it.


Language certainly evolves and keeps historical linguists busy. In that post, I also said that one can use the present perfect while translating into Swedish a sentence like Dickens died in 1870 and explained in what circumstances this would be possible. Two native speakers of Swedish doubted the truth of my statement. I may recommend the following articles to them:



Rolf Pipping, “Om innebörden av perfektum i nusvenskan.” Bidrag till nordisk filologi tillägnade Emil Olson den 9 Juni 1936. Lund: Gleerup, etc., 1936, 143-54.
Åke Thulstrup, “Preteritalt perfekt. Till belysning av gränsområdet mellan perfekt och imperfekt i svenskan.” Nysvenska studier 28, 1948, 70-101.
Einar Haugen, “The Perfect Tense in English and Scandinavian: A Problem of Contrastive Linguistics.” Canadian Journal of Linguistics 17, 1972, 132-39.
A chapter in my book Word Heath, Wortheide, Orðheiði. Rome: Il Calamo, 313-55 (“The Present Perfect in Old and Modern Icelandic”).



In this blog, I almost never give references, but how else can I defend myself from native speakers except by pitting them against other native speakers? By the way, I was happy to witness a disagreement between two Swedes on the vital subject of fisa and fjärta in Swedish (both mean “fart”). Some of those who read the post on “I been…” misunderstood its purport and stated that in English one never says Dickens has died in 1870. Of course. This was the whole point.


Halibut. Yes, indeed, butt is a fish name; cf. also turbot. It is not improbable that buttock is related to it. Many Germanic and Romance words (cf. button) have the root but, which refers vaguely to things fat, protruding, and so on. Hali- does go back to holy. Apparently, the halibut was good food for holidays.


Smashing. Is it possible that this adjective derives from a phrase in Gaelic? In my opinion, the probability is zero. One can seldom prove that a derivation from a foreign source is impossible, but since, as regards meaning, smashing is equal to smash + the suffix -ing, I wouldn’t go any further for the true origin.


On the Fritz. I keep repeating the same sad dictum: the origin of idioms, especially of slang, is harder to trace than the origin of individual words. Recorded examples of on the Fritz go back to the beginning of the twentieth century. The idiom is, most likely, an Americanism. One can sometimes read that there are several theories of this idiom’s etymology. Those are not theories but wild guesses of no interest whatsoever. If I may add my “theory” to the heap of nonsense already said about this phrase, which, as everybody knows, means “in bad repair,” I would suggest that Fritz here has nothing to do with the proper name. Perhaps there was a word frits (homophonous with Fritz) allied to the verb fritter and meaning “disjointed fragments.” Being on the frits would refer to being broken, but, since frits, unlike fritters, has not been attested, this “theory” is not worth a broken farthing on the market day.


The etymology of hare. Nothing in Dutch or any other modern Germanic language suggests the connection of hare with a color name, but Old Engl. hasu and its cognates did mean “gray.” Perhaps Engl. haze “thick mist” is related to it, and Dutch haar, synonymous with Engl. haze, cited in the letter, is akin to it. In the history of hare, r and s alternate, as evidenced by Engl. hare and Old Icelandic heri versus Dutch haas and German Hase. By the way, the Dutch form with r also exists. Those of our readers who have studied the history of any Germanic language will recognize the workings of Verner’s Law.


Natural and organic once more. Some people say that only products like gas and coal (as opposed to wine) can be called natural. This is a reasonable idea, but the fact remains that some wines have been called natural for a long time. The OED (natural 7a) says:


“Of a substance or article: not manufactured or processed, not obtained by artificial processing, made only from natural products. Also: manufactured using only simple or minimal processes; made so as to imitate or blend with the naturally occurring article.”


A 1991 example runs as follows: “We have recently seen the introduction in the UK of so-called natural paints which are marketed as being based on naturally occurring materials.” The definition highlights the vagueness of the concept natural in today’s life. Note the difference between “not obtained by artificial processing” and “made so as to imitate… the naturally occurring article.” Note also the use of so-called in the example cited. As far as I can understand, natural has become a buzzword competing with the overused, over-advertised, and over-admired organic. (The time seems to be ripe also for replacing sustainable with a flashier adjective.) My recent walk in a huge co-op revealed (alongside of 100% natural hardwood coal, which makes sense), super natural (two words) uncured turkey hot dogs (super natural, though a pun on supernatural, is a joke like super-virgin oil), and natural sour dough bread. Does natural sourdough grow on trees? And I wonder how “natural” hot dogs can be. Our correspondent is interested in the legitimacy of the term natural wine. I think that if sour dough can be natural, so can wine. He refers to the French Wikipedia article, where he found the first occurrence of natural wine. Natural sour dough, super natural hot dogs, and a bottle of natural wine will make a memorable meal.


Errare humanum est. Finally, I would like to assure our correspondents that I always read their comments and use the links they give me with gratitude and genuine interest (one of our friends doubted my enthusiasm). I should also add that mistakes and typos break my heart (temporarily), but here too my gratitude is stronger than my grief, for mistakes can be corrected, while attention is cheap at any price. By the way, comments on pictures are appreciated too, and I was delighted to read that at least one correspondent noticed the double entendre in the ajar post: not only is the woman with a pitcher a “jar woman” but also the window is ajar.


Anatoly Liberman is the author of Word Origins…And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears here, each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.”


Subscribe to Anatoly Liberman’s weekly etymology posts via email or RSS.

Subscribe to the OUPblog via email or RSS.



Image credit: To Have and Have Not (1944 film) poster. Warner Bros. Pictures. Source: Wikimedia Commons.




Published on August 29, 2012 05:30

Presidential nominating conventions matter

By Kate Kenski



In recent years, the value of American presidential nominating conventions has been questioned. Unlike the unscripted days of old, the modern conventions are media events used to broadcast to the nation the merits of the parties’ presidential nominees as the country moves toward the general election campaign. Because of the convention scripting and pageantry akin to a hybrid of the Oscars and a rock concert, some media outlets don’t feel that the conventions are as newsworthy as they once were — a view that is unfortunate. Nominating conventions are valuable events, especially in an era when candidate messages have to compete against messages from independent groups with unrestrained and seemingly unlimited funds.


Over forty years ago, political conventions were the places where party goals and policies were debated and presidential nominees were chosen. At each convention, a party could reaffirm its identity or amend its policies and priorities to reflect its adaptation to historical and environmental changes affecting public opinion. Convention debates were lively, but exposure was limited primarily to those who attended the conventions. In 1972, both the Republican and Democratic parties reformed how presidential nominees were selected. The reforms were designed to bring transparency to the nomination process, which had previously been described as taking place behind closed doors in smoke-filled rooms.


Republican National Convention. Republicans raise "Country First" signs as their leaders speak, St. Paul, Minnesota. Photo by Carol M. Highsmith on 4 September 2008. Source: Library of Congress.


The selection reforms took place amid changes in the media environment. They opened up the process to the extent that citizens, not just party elites, had some say in who became the parties’ nominees. Parties and their candidates came to the conclusion that the safest strategy for a party was to wrap up the primary process early, rather than dragging out the competition. Lengthy primary battles result in competing candidates essentially digging up and airing free opposition research for the other party. That of course means that the previous function of the nominating conventions, to select the party’s nominee, has become obsolete. While parties go through the process of counting up candidate delegates pro forma, the delegate voting is symbolic, not consequential. Media coverage of the state delegate counts is no longer prime time news material.


Reporters, editors, and producers who feel that conventions are not new and therefore are not newsworthy miss an opportunity to educate the general public about the candidates. For viewers, particularly Independents and members of the opposition party, the convention is the first meaningful exposure to a new presidential ticket. While it would be unusual for a candidate to present a completely different set of policies or ideological positions from the previous months of campaigning, the campaigns and parties use conventions to emphasize what they see as the most important issues facing the nation. It is one of the rare occasions in which the candidates get the opportunity to frame themselves and their positions unfettered by spin from their opposition and media, who seek to frame the candidates into the stories that they want to tell.


The broadcast airtime allocated to the national conventions by the networks will be thin this year, as their time allocation has been since 2000. Those interested in wider coverage of the conventions will need to turn to C-SPAN or look for segments on various cable programs. The networks, which once helped create a shared public space that emphasized the importance of politics, don’t see the benefit in continuing that public service. It is not financially profitable for them to do so.


Many political scholars have overlooked the importance of conventions in swaying public opinion. While citizen predispositions and the state of the economy are the strongest factors affecting presidential vote preference, my research with Bruce W. Hardy and Kathleen Hall Jamieson shows that candidate messages matter and that convention speeches matter as a vehicle for presenting those messages. Although convention viewership is highly affected by partisan attitudes, with significant numbers of Democrats limiting themselves to Democratic convention coverage and Republicans restricting their media diet to GOP convention coverage, even taking those partisan faults into account, we have shown that political speeches move people.


Presidential candidate Barack Obama, his wife Michelle, and his children Malia and Sasha wave to the audience at the Democratic National Convention, Denver, Colorado. Photo by Carol M. Highsmith on 28 August 2008. Source: Library of Congress.


During campaigns, presidential candidates have to worry not only about what their opponents say, but also about what their friends say. Well-meaning but ultimately misguided ideological friends often dilute the power of candidate messages by flooding the communication environment with different messages and different sets of priorities. In our post-Citizens United era, candidates need and deserve to control their basic message. Both sides of the political spectrum and hopefully those in-between should allow the candidates from the major parties to have their say, to be given a chance. If we truly believe in the ideals of our democratic republic, that citizens should be informed about where the candidates stand on issues, then providing each candidate some limited reprieve from the onslaught of attacks is warranted.


One of the biggest problems facing our nation is that we are increasingly unwilling to listen to the other side. It is difficult to hear someone utter arguments that one fundamentally disagrees with, but living in echo chambers isn’t healthy. For one moment in time, we should pay heed to political conventions as an American tradition. While the format of party conventions has changed drastically over the years, their fundamental importance to the vitality of the society has not.


Kate Kenski (Ph.D. 2006, University of Pennsylvania) is an Associate Professor in the Department of Communication and School of Government & Public Policy at the University of Arizona where she teaches political communication, public opinion, and research methods. Prior to teaching at Arizona, she was a Senior Analyst at the Annenberg Public Policy Center at the University of Pennsylvania. She was a member of the National Annenberg Election Survey (NAES) team in 2000, 2004, and 2008. She is co-author of the award-winning book The Obama Victory: How Media, Money, and Message Shaped the 2008 Election (2010, Oxford University Press), with Bruce W. Hardy and Kathleen Hall Jamieson, and of Capturing Campaign Dynamics: The National Annenberg Election Survey (2004, Oxford University Press), with Daniel Romer, Paul Waldman, Christopher Adasiewicz, and Kathleen Hall Jamieson.


Oxford University Press USA is putting together a series of articles on a political topic each week for four weeks as the United States discusses the upcoming American presidential election, and Republican and Democratic National Conventions. Last week our authors tackled the issue of money and politics. This week we turn to the role of political conventions and party conferences (as they’re called in the UK). Read the previous blog posts in this series: “Romney needed to pick Ryan” by David C. Barker and Christopher Jan Carman and “The Decline and Fall of the American Political Convention” by Geoffrey Kabaservice.


Subscribe to the OUPblog via email or RSS.

Subscribe to only law and politics articles on the OUPblog via email or RSS.





Published on August 29, 2012 03:30

Understanding ‘the body’ in fairy tales

By Scott B. Weingart and Jeana Jorgensen



Computational analysis and feminist theory generally aren’t the first things that come to mind in association with fairy tales. This unlikely pairing, however, can lead to important insights regarding how cultures understand and represent themselves. For example, by looking at how characters are described in European fairy tales, we’ve been able to show how Western culture tends to bias the younger generation, especially the men. While that result probably won’t shock anyone more than passingly familiar with the Western world, the method of reaching these results allows us to look at cultural biases in a new light. Our study and many others like it are part of a growing trend in applying the power of computing and quantitative analysis toward understanding ourselves.


This is not a new idea. Isaac Asimov’s science fiction Foundation novels, dating back to 1942, explore the repercussions of being able to mathematically predict human activity based on an analysis of history. In the early 20th century, the Annales school of history began crunching historical numbers to learn more about cultures on a large scale. Various groups since then have risen with similar goals, including the cliometricians in the 1960s and the cliodynamicists more recently.


Folklorists, too, have always been interested in tracing large-scale patterns in expressive culture ranging from storytelling to pottery. In one now-classic example of structural analysis, Russian folklorist Vladimir Propp separated fairy tales into plot components based upon the action being performed regardless of the character performing it (hence it doesn’t matter whether a witch or dragon steals the princess; what matters is that the princess has been removed from the civilized sphere, creating the need for a hero and a quest). More recently, folklorists such as Kathleen Ragan and Timothy Tangherlini  have been using statistical analysis and geographical information systems to study gender bias in folktale publications and storytelling diffusion over time and space.


The biggest news to hit the streets recently combined the power of Google, a few Harvard mathematicians, and five million digitized books covering the last two centuries. They dubbed their computational study of culture “culturomics”, and several more research projects have grown in its wake.


This type of research has traditionally been limited by inadequate technology, incomplete data, and the scarcity of scholars well-versed in both computation and traditional humanities research. That scene is now changing, due largely to efforts from both sides of the cultural divide, the humanities and the sciences. It is in this context that we undertook a study of European fairy tales, yielding interesting and occasionally unexpected results.


An analysis of over 10,000 references to people and body parts in six collections of Western European fairy tales can reveal quite a bit. Understanding fairy tales pays off twofold: they reveal the popular culture and beliefs of the past, while simultaneously showing what cultural messages are being transferred to modern readers. There is no doubt that the Disney renditions of classic fairy tales both reflect assumptions of the past and helped shape the gender roles of the present.



One finding from this analysis dealt with the use of adjectives when describing bodies or body parts in the stories. The most frequently-used adjectives cluster around the themes of maturation, gaining and maintaining beauty or wealth, and the struggle for survival, all concepts that still have a prominent place in our culture.


The use of age in these stories is of particular interest. While young people are described more than twice as frequently as old, the word old (and similar words indicating old age) appears more frequently than the word young (and related terms). That means the tellers of these stories rarely find it necessary to mention when someone is young, but often feel the need to describe the age of older people.


In fact, old people tend to attract more adjectives than their younger counterparts in general. If someone is going to be described in any way at all, whether it be about their beauty or their age or their strength, it’s far more likely that those descriptions are attached to the old rather than the young. This trend also holds true with regards to gender; men are described significantly less frequently than women. Combining these facts, it appears that although old women are brought up relatively infrequently, they are described much more frequently than would be expected.
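
As a toy illustration of the kind of counting behind such findings, the Python sketch below tallies how often mentions of each character group come with a descriptive adjective. The tiny “corpus” is invented for the example; it is not the authors’ data or code.

```python
from collections import defaultdict

# Each tuple: (character group, was this mention accompanied by an adjective?)
mentions = [
    ("young man", False), ("young man", False), ("young man", True),
    ("young woman", False), ("young woman", True),
    ("old woman", True), ("old woman", True), ("old woman", False),
]

counts = defaultdict(lambda: {"mentions": 0, "described": 0})
for group, described in mentions:
    counts[group]["mentions"] += 1
    counts[group]["described"] += described

for group, c in counts.items():
    rate = c["described"] / c["mentions"]
    print(f"{group}: {c['mentions']} mentions, {rate:.0%} described")
```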


The fact that women are described more frequently than men fits with a common feminist theory suggesting Western culture treats the male perspective as universal, unmarked, public, and default. Extending that theory further, the fairy tale analysis reveals that the young perspective is also default and unmarked. Older people and especially older women must be described in greater detail and with greater frequency, marking them as old or as women or both, because otherwise the character is assumed as young and masculine, maintaining those traits which are considered defaults.


These results just scratch the surface of what can be discovered using the automated and quantitative analysis of cultural data. As technology and data sources improve, there will be an increasing number of studies which combine algorithms and statistics with traditional humanistic theories and frameworks. The holy grail, which we are drawing ever closer to, is the successful bridging of traditional close reading approaches of humanistic inquiry and the distant reading quantitative methods being developed by researchers like Franco Moretti and the Google Ngrams Team. This is another step on that path.


Scott B. Weingart is an Information Science Ph.D. student at Indiana University studying the history of science, and Dr. Jeana Jorgensen is a recent graduate of Indiana University who specializes in folklore and gender studies. This work is from a paper they co-presented at Digital Humanities 2011, for which they won the Paul Fortier Prize for best young researchers at the conference. The paper ‘Computational analysis of the body in European fairy tales’ is in the journal Literary and Linguistic Computing, and is available to read for free for a limited time.


Literary and Linguistic Computing is an international journal which publishes material on all aspects of computing and information technology applied to literature and language research and teaching. Papers include results of research projects, description and evaluation of techniques and methodologies, and reports on work in progress.


Subscribe to the OUPblog via email or RSS.

Subscribe to only literature articles on the OUPblog via email or RSS.

Subscribe to only technology articles on the OUPblog via email or RSS.


Image credits:

The Oxford Treasury of Fairy Tales, ed. Geraldine McCaughrean & Sophy Williams, Oxford University Press, 2012.

Fairy Tales and Other Stories by Hans Christian Andersen, ed. W.A. & J.K. Craigie, Oxford University Press, 1914: via digitized content at the New York Public Library.




Published on August 29, 2012 00:30

August 28, 2012

What would the ancient Greeks make of London 2012?

By Nigel Spivey



Overheard somewhere near London’s Green Park tube station, amid a throng of spectators for the 2012 Olympic triathlon: “What would those ancient Greeks make of this?”


I had no opportunity there and then to attempt a response, but it still seems worth considering. What indeed? Triathlon, for a start, they should comprehend; an ancient Greek word (meaning ‘triple challenge’), it would seem like some fraction of the ‘Twelve Labours’ (dodekathlon) undertaken by Herakles, and the winner duly heroized. Archaic ‘kudos’ and contemporary ‘celebrity’ elide across thousands of years. Crowds with painted faces, flags and accolades, the winner’s podium – the core gestures and sentiments here are essentially unchanged (though ancient victors, to judge from their commemorative statues, affected a fetching demeanour of downcast modesty). And for ancient athletes, as for today’s, winning was not just about fame and entering the record lists. Substantial material rewards awaited the best performers.


Of course Pierre de Coubertin didn’t recreate the Olympics in a spirit of historical accuracy. His own vision of gentlemen amateurs, tussling in a spirit of muscular Christianity and international bonhomie, seems now a Victorian-Edwardian period piece, much of it outdated. (If Nietzsche had been a founding father, the story would be different.) Ancient Greeks wouldn’t recognize our respect for failure, however plucky (can anyone put the phrase well done for trying into Homeric verse?); at Olympia, there was scant honour in second place.


Many competitors at London — surprisingly many, I thought — made the sign of the Cross prior to their effort. One can’t rule out the possibility that some athletes at the ancient Olympics during the centuries of Roman administration, before closure circa AD 400, did likewise. But even if the specific symbolism of the gesture were obscure to a pagan observer, its intention would be clear. Divine favour plays some part in mortal triumph; piety will have its reward. And how often was the adjective ‘incredible’ deployed by pundits at London 2012? The tally must run into thousands. If it denoted a physical feat beyond the scope of human reason and experience, this too seems attuned to an ancient acceptance of supernatural forces at work in the stadium. Mythical figures — Pelops, Odysseus, Achilles — were athletes; historical Olympic victors, such as Milo of Croton, became the stuff of mythology.


They were idealized as such. The canonization of the hero-athlete by sculptors such as Myron and Polykleitos has left an aesthetic legacy in our notions of what constitutes physical beauty; it also amounts to a sort of ‘body fascism’. With regard to the Paralympics, accordingly, our classical mentors are uninspiring. Physical misfortune was widely derided in the ancient world, with effigies of such unfortunates used to avert the evil eye. As losers in a race were subject to public scorn, so the disabled won no sympathy.


Under the Romans, access to the Games widened; one of the last recorded victors came from Persia. But the Greeks rigorously excluded contestants of non-Greek ethnicity and made no secret of their general disdain for ‘barbarians’.


What of women boxing, and women throwing hammers? Baron de Coubertin would certainly not approve; by contrast, Plato — admittedly, not your typical ancient Greek — might be more open-minded. Whether demonized as Amazons, or heroized as Atalanta, females in action were at least conceptually acceptable to the Greeks. (As a character in Athenian drama observes — not quite in these words — if you can give birth to a child, anything else, including fighting in the front line, is a piece of cake.)


A final question is hard to resist. How far would the prizewinning heroes of ancient Olympia be able to compete against the likes of Usain Bolt? Skeletal analysis tells us that people in antiquity were generally shorter in stature, and their life expectancy tended to be much shorter too. From Taranto, once a Greek colony in southern Italy, we have the excavated grave of a fifth-century BC individual who appears — from the possessions buried with him — to have been a successful competitor, perhaps in the pentathlon. He was just 1.70 metres tall and died in his mid-thirties, but appears to have been robustly built, as he would need to have been for the various disciplines of running, jumping, discus, javelin, and wrestling. We don’t have absolute records from the ancient games, though certain extraordinarily long jumps are alleged, and also some remarkable feats of strength. From Olympia comes a large sandstone boulder, weighing 143 kilos, with an inscription stating that one Bybon threw it over his head with one hand. Gazing at a gallery of athlete-victors, including the formidable ‘Terme Boxer’, my guess is that these ancient athletes would have pulverized us in any of the combat sports, and held their own in many other events. But such is idle speculation. As athletes, perhaps, we have not come a long way from Olympia. From a humanitarian perspective, by contrast, the distance is immense.


Nigel Spivey is Senior Lecturer in Classical Art and Archaeology at the University of Cambridge, where he is also a Fellow of Emmanuel College. He is the author of The Ancient Olympics. As an undergraduate he won honours at the Oxford-Cambridge athletics match, and set the university record for throwing the hammer. He went on to study at the British School at Rome and the University of Pisa. He has written widely on Classical culture and beyond: among his previous publications are the prize-winning Understanding Greek Sculpture (1996) and the widely acclaimed Enduring Creation (2001). He presented the major BBC/PBS television series How Art Made the World in 2005.


Subscribe to the OUPblog via email or RSS.

Subscribe to only classics and archaeology articles on the OUPblog via email or RSS.

Subscribe to only sports articles on the OUPblog via email or RSS.





Published on August 28, 2012 22:30

Knowing it when we see it: ‘Madness’ and crime

By Arlie Loughnan



One of the most high profile court cases concerning ‘madness’ and crime has concluded. In a unanimous decision, the Oslo District Court in Norway has convicted Anders Behring Breivik of the murder of 77 people in the streets of central Oslo and on the island of Utøya in July 2011. Breivik’s conviction was based on a finding that he was sane at the time of the killings. He has been sentenced to 21 years in prison but it is possible that he will be detained beyond that period, under a regime of preventative detention.


As is well known, Breivik faced trial for multiple counts of murder, following gun and bomb attacks that resulted in the mass killing of adults and children. Since his apprehension, Breivik has admitted planning and carrying out the killings, and is on record as saying that they were necessary to start a revolution aimed at preventing Norway from accepting more immigrants. In a strange twist, the court’s verdict is a victory for the defence; they had been instructed by Breivik to argue that he was sane. The prosecution had argued that Breivik was insane.


The issue at the centre of Breivik’s trial was whether he was criminally responsible for the killings. If he was insane at the time of the killings, he was not criminally responsible. Criminal responsibility concerns the capacities — cognitive, volitional, and moral — that an accused is both assumed and required to possess. Legal principles and practices, like a criminal trial and criminal punishment, depend on these capacities.


The Breivik trial brings the complex issues surrounding criminal responsibility into sharp relief. It prompts us to ask how we ‘know’ when someone is not criminally responsible.


Media reports indicate that Breivik had been examined by a total of 18 medical experts. Some of these experts concluded that he met the legal test of insanity, which, in Norway, requires that he acted under the influence of psychosis at the time of the crime. But Breivik himself disputed this diagnosis, claiming it was part of an attempt to silence him and stymie his message about ‘saving’ Norway. Other medical assessments concluded that Breivik was sane at the time of the offences, his actions motivated by extremist ideology rather than mental illness. The judges reached the same conclusion.


Perhaps this difference of opinion among experts is not surprising. Not only is the process of diagnosing a mental disorder complex, but determining whether a disorder had a relevant effect on an individual at a specific point in time is notoriously difficult. At what point, if any, does ideologically-driven fanaticism become ‘madness’?


And legal opinion may differ from medical opinion on such a question. Even if lawyers and medics share a language around mental incapacity, what is medically significant may not map directly onto what is significant in law. A prominent example is personality disorder: a well-recognised mental disorder, but not one that can ground an insanity defence in the criminal law of England and Wales.


The question of evaluating criminal responsibility becomes more complex when we take into account the other player present in some criminal courtrooms — lay decision-makers. These actors in the courtroom drama (archetypally, the jury) bring a lay knowledge of mental incapacity, which, in contrast to expert knowledge, can be defined as the socially-ratified but unsystematised attitudes and beliefs about mental incapacity held by non-experts.


Taking this further, we would need to acknowledge that, when it comes to ‘madness’, legal actors — judges, magistrates, prosecutors, and defence counsel — are lay people too. Even in the absence of a jury (as in the Breivik trial), lay knowledge still forms a component of the mix of knowledges brought to bear on evaluations of mental incapacity. This status as lay vis-à-vis knowledge of mental incapacity is not to deny legal actors their status as experts vis-à-vis legal practices and processes. Rather, it is to acknowledge that both non-expert and expert knowledge are invoked in insanity trials.


It is against this mixed knowledge backdrop that meanings are produced around mental incapacity. Particular aspects of the context in which the evaluation takes place — in Breivik’s case, perhaps his history of involvement with mental health services, the extreme nature of his offence, his idea of performing a ‘duty’ to his country, and perhaps even his own vehement rejection of the label that would have seen him subject to treatment rather than punishment — are weighed against these different knowledges of mental incapacity.


This brief discussion hints at a dilemma for courts and law reformers working in the area of ‘madness’ and crime. Different sets of knowledge are brought to bear on the adjudication of criminal responsibility. Although expert medical knowledge of ‘madness’ may carry the greatest social clout, its dominance in the criminal courtroom is not unchallenged. Indeed, we could say that it is under constant challenge in that context, as evidenced by the whiff of illegitimacy (was he or wasn’t he?) that seems to linger in cases such as that of Anders Behring Breivik.


Dr Arlie Loughnan is a Senior Lecturer in the Faculty of Law, University of Sydney. She is the author of Manifest Madness: Mental Incapacity in Criminal Law (Oxford, 2012).


Subscribe to the OUPblog via email or RSS.

Subscribe to only law and politics articles on the OUPblog via email or RSS.





Published on August 28, 2012 07:30

The political impossibility of the Ryan-Romney budget

By Andrew J. Polsky



Pain has no political constituency.


This fundamental rule of American politics (and democratic systems more generally) points up the difficulty of enacting or sustaining public policies that leave large numbers of citizens worse off. Politicians dread casting votes on legislation that will impose costs on any significant group of constituents, lest the opposition seize on the issue in the next election. Austerity policies typically spell defeat for the political party or coalition that imposes them (see Greece). Given the political consequences of inflicting pain, many of the key budget prescriptions embodied in the budget plan developed by Representative Paul Ryan and now effectively endorsed by Mitt Romney will never be realized in practice.


Political parties that run on a “cod liver oil” platform face a critical obstacle on the campaign trail. They can always be undersold in the competition for votes by other parties that offer voters instead the proverbial spoonful of sugar. The political challenge entailed by recommending policies that promise pain becomes more acute if the danger that the pain is designed to avert lies far off in the future.


In 1984, Democratic presidential nominee Walter Mondale vividly demonstrated the lesson that pain is a losing political proposition. He believed that the American people would accept the hard truth that tax increases were the only solution to the large federal deficit generated by the tax cuts pushed through by the Reagan administration. In his acceptance speech at the Democratic convention, he delivered the bad news directly: “Whoever is inaugurated in January, the American people will have to pay Mr. Reagan’s bills. The budget will be squeezed. Taxes will go up….Mr. Reagan will raise taxes, and so will I. He won’t tell you. I just did.” Mondale’s candor earned him no credit among the American people. With barely 40% of the popular vote, he lost 49 states. (If Reagan had decided to campaign in Minnesota, Mondale’s home state, the Democrat might have lost all fifty.)



Another episode from the Reagan era demonstrates a more palatable approach to allocating pain. In 1981, recognizing that Social Security would soon face a short-term funding shortfall, Reagan appointed a bipartisan group, the National Commission on Social Security Reform (called the Greenspan Commission after its chair, Alan Greenspan), to review the program and its finances. The commission recommended a series of changes that included increased taxes and reduced benefits. Congress in 1983 approved recommendations that yielded $168 billion to ensure that the program would remain solvent. The solution set borrowed from both parties, including an increase in the retirement age and a rise in the payroll tax ceiling for higher-income workers. Importantly, the commission gave both parties political cover, and the bipartisan support effectively removed the issue from the 1984 campaign.


But the conditions that made possible the 1983 compromise have proven harder to replicate over time. Barack Obama sought to lay the groundwork for a similar bipartisan approach when he appointed the National Commission on Fiscal Responsibility and Reform (usually referred to after its co-chairs as Simpson-Bowles). Rather than embrace the report, however, lawmakers in both parties shunned it. Among the obstacles were a more sharply polarized political context and the lack of urgency inherent in the underlying problem. Any long-term debt crisis involves a distant threat, quite unlike the immediate problems facing Social Security in the early 1980s.


If we apply the lessons from these episodes to the Ryan budget, certain conclusions follow. First, so long as the Democrats control one of the main policy branches of the national government (the White House, the Senate, or the House), the plan will go nowhere. Indeed, that is the best of all worlds for the GOP, because then Republicans don’t have to answer for the consequences. Second, were the Republicans to sweep the 2012 elections, they might enact the features of the plan attractive to their core constituents — cutting discretionary expenditures for the poor and lowering taxes. The result would be a larger federal deficit and a worsening of the future debt problem. Third, Republican lawmakers would likely defer proposed changes in Medicare and changes in the tax code (such as eliminating popular deductions) intended to offset tax cuts. These unpopular moves would leave them politically vulnerable in 2014. To enact them could spell a quick farewell to majority status for the GOP.


Republicans know this. Many are already scared to run on the Ryan scheme to replace traditional Medicare for those under the age of 55 with vouchers that cannot possibly cover the same level of services. That Medicare poses a danger to the federal government’s solvency as baby boomers retire may be true, but proposing to slash Medicare spending still makes for bad politics. (And the Republican ticket appreciates the politics, too; witness the Romney-Ryan attack on Obamacare for allegedly cutting Medicare.) Nor will the Republicans’ Orwellian efforts to package the reform as a plan to protect and enhance Medicare succeed. In a contested information environment, efforts to reframe the terms of debate don’t work.


The same holds for the unspecified revenue increases that the Ryan plan expects to realize from reforming the tax code. At a time when the federal government already takes in much less than it spends, the GOP budget formula seeks lower tax rates and an end to taxes on capital gains. The plan in its pure form offers more than $4 trillion in tax cuts over the next decade. Finding the revenue to offset such a loss runs afoul of political reality at every turn. End the home mortgage interest deduction? The one for state and local taxes? How about putting a stop to charitable deductions? These moves amount to political suicide. Yet nothing less could close the gap between revenues and expenditures entailed by the Ryan budget (or the Romney tax plan proposed during the primaries).


In the end, then, the politics of pain mean that anything resembling the Ryan-Romney budget approach will become another exercise in supply-side economics — the discredited faith that cutting taxes sharply enough will generate so much economic growth that total revenues will increase. The Republicans can deliver the tax cuts and some spending reductions targeted at the most vulnerable, who are also the least organized and powerful in our politics. But for those who think the Ryan budget represents a serious approach to the long-term federal debt problem, believing in the tooth fairy is a better bet.


Andrew Polsky is Professor of Political Science at Hunter College and the CUNY Graduate Center. A former editor of the journal Polity, his most recent book is Elusive Victories: The American Presidency at War. Read Andrew Polsky’s previous blog posts.








Subscribe to the OUPblog via email or RSS.

Subscribe to only law and politics articles on the OUPblog via email or RSS.

Subscribe to only American history articles on the OUPblog via email or RSS.



Image credit: Macro shot of the seal of the United States on the US one dollar bill. Photo by briancweed, iStockphoto.




Published on August 28, 2012 05:30
