Oxford University Press's Blog

September 27, 2015

Improving police and public safety: a win-win opportunity?

The month of September marks commemorative services for both the United Kingdom’s National Police Memorial Day on the 27th, and the Australian Police Remembrance Day on the 29th. Since modern professional policing developed in the British Isles in the nineteenth century, many thousands of police officers have sacrificed their lives in the interest of public safety. The rates of death on duty have varied enormously over time and between jurisdictions. For example, police fatalities are rare in the United Kingdom. In contrast, they are very much an occupational hazard in South Africa, where 59 officers were killed in 2014 alone.


Police safety issues share a number of perhaps surprising characteristics across many countries. Policing tends to be much less dangerous than many occupations—such as construction, mining, forestry, and fishing—but police are frequently at or near the top of lists for occupational homicides. Research also often shows that around three-quarters of police deaths result from accidents, with only a quarter attributable to attacks. In addition, research clearly shows that the very large majority of police deaths are preventable, often through simple measures, such as restricting high-speed vehicle pursuits, or pulling back and calling in specialist negotiators when confronting an armed offender in a siege situation.


Police officer safety might be seen as involving an ‘us-against-them’ situation, where improved protection for police requires greater use of force against the public—especially against offenders—with potential for increased injuries. Officer safety at public demonstrations, for example, might be seen as requiring pre-emptive force. Police unions also often criticise restrictive pursuit policies as giving a green light to offenders. However, there is a growing body of research showing that improved policing tactics can reduce injuries to both police and citizens. A prime example is the adoption of ‘verbal judo’ tactics to de-escalate conflict in encounters with aggressive members of the public.


Another example concerns improved training and procedures around firearms usage and the management of situations involving armed offenders. The New York City Police Department provides an instructive example here. In the early 1970s the streets of the famous city resembled the Wild West when it came to shootouts between cops and crooks. In 1971, 12 officers were shot dead and 47 were shot and injured, while police shot dead 93 people and shot and injured 221. The following decades have seen reductions across these categories of more than 90 percent. In 2013, the most recent year on record, no officers were shot dead and three were shot and injured. Police shot dead eight people, and shot and injured 17. Despite some ups and downs over more than four decades, the data show a long-term downward trend, with very low numbers in recent years.


How was this remarkable turnaround achieved? A variety of factors most likely played a part, including greatly reduced crime rates and a focused, often controversial, program to take illegal guns off the streets through aggressive stop-and-search practices. There have also been important legal cases limiting police discretion in the use of deadly force. However, it is also the case that major changes have been made to procedures and training, beginning in 1972 and continuing over the years. The Department introduced its own rules limiting justifiable deadly force, including disciplinary action for rule violations. Refresher training was made mandatory for all officers who discharged their firearm. An intensive program of research was also initiated, involving detailed situational analyses of all shooting incidents, focusing on lessons for improved practice. Restraint was made a procedural norm. Innovations in hardware included bullet-resistant vests, semiautomatic handguns, and conducted energy devices (‘Tasers’).


The results in New York support the proposition made at the beginning of this blog regarding win-win outcomes. Improvements in officer safety can include improvements in public safety, and the strategies should be overlapping and synergetic. This is not to say that cases of large improvements in officer and public safety necessarily represent all that can be done. There have been suggestions, for example, that there is scope for further improvements in the NYPD firearms management system, including findings from a major review by the RAND Corporation. Recommendations include the need for more complex scenario-based training, more demanding testing in pre-service and refresher training, and wider adoption of the less-lethal option of Tasers.


The general lesson here is that willingness to change, and a data-rich, research-driven continuous improvement management model, are the best means of addressing safety and force issues in policing.


Image Credit: “In Memoriam” by L4S. CC BY 2.0 via Flickr.


The post Improving police and public safety: a win-win opportunity? appeared first on OUPblog.



Richard Cobden: hero of the Left or Right?

This year marks the one hundred and fiftieth anniversary of the death of the great Victorian politician and ‘sage’, Richard Cobden, born in 1804, who died on 2 April 1865. Once a name familiar to every school-child, the prophet of ‘free trade, peace, and goodwill’ is now all but forgotten save among professional historians, but he has spawned a diverse political legacy. On the one hand, his name, so strongly associated with free trade and the Repeal of the Corn Laws in 1846, can be vicariously linked with any subsequent free trade movement, for he loudly proclaimed the virtues of the free market, and ‘the unsoundness of any & every action that is incompatible with the most perfect freedom of trade’. From this it is a short step to Cobden’s becoming the prophet of globalization, and linking him with the full panoply of neo-liberal values enshrined in today’s institutional structure of world trade.


Similarly, his negotiation of the fabled ‘free trade’ Anglo-French (‘Cobden-Chevalier’) commercial treaty of 1860 proved a step towards the Victorian ‘Common Market’, so it is not implausible to link Cobden with moves towards the creation of a European Union; indeed in the 1930s, his old home Dunford House in Sussex was the venue for the first conference in Britain to advocate a ‘United States of Europe’ while the ‘father of Europe’, Jean Monnet, significantly perhaps, grew up in rue Cobden in Cognac. Cobden’s economic beliefs have also often been linked with ‘sound money’, ‘balanced budgets’, minimal state expenditure, and the retreat of the state from all areas of the economy, from which perspective he looks a paragon of Thatcherism, and is still admired by the free market Right in the Conservative party and among libertarians in the United States. There too his name survives in protectionist circles, where critics of freer trade policies in the 1990s asked ‘Is Clinton Cobden?’ Cobden may also in this context be seen as the advocate of looser economic arrangements in Europe, favouring economic but not federal links between European states of the sort the 1860 treaty fostered, and as were embodied more widely in the General Agreement on Tariffs and Trade after the Second World War.



Richard Cobden by Anne E. Keeling. Public domain via Wikimedia Commons.

On the other hand, among those who gathered at Cobden’s statue in Camden Town to commemorate the two hundredth anniversary of his birth in 2004 were not only Bruce Kent, leader of CND, but also the new leader of the Labour Party, Jeremy Corbyn. For on the Left, Cobden is still fittingly remembered as an opponent of unnecessary arms expenditure based on inflated dangers of war but used to boost the interests of the arms establishment, what the American sociologist C. Wright Mills called ‘the military-industrial complex’. In the context of the 1860s, when Britain’s aristocratically-officered armed services claimed the biggest slice of public expenditure and ‘welfare’ was devolved to poor law authorities, this puts a different perspective on Cobden’s desire to ‘shrink the state’.


Cobden was also a profound opponent of intervention abroad, the prime antagonist of his contemporary proponent of liberal interventionism, Lord Palmerston; he would, we might plausibly surmise, have been in the leading ranks of opponents of intervention in Iraq and Syria, as he was in his own day against involvement in the Crimean War (1853-56), and, more successfully, intervention in the American Civil War (1861-65). Hence after his death generations of pacifists and internationalists looked back to Cobden’s example for inspiration in their search for a ‘moral’ foreign policy. Beyond this, Cobden opposed Britain’s imperial aggrandisement, including British rule in India, which he believed to be both undesirable and unsustainable.


Nevertheless, between Right and Left, we might also expect to find Cobden still reckoned among the great carriers of those ‘liberal values’ which have inspired the centre of British politics. Like John Stuart Mill, Cobden favoured women’s rights; he spoke consistently in favour of public welfare as the advocate of the ‘People’ against, in his day, the ‘Aristocracy’; as his creation of the Anti-Corn Law League showed, he put his faith in popular activism outside traditional political parties; while for some a ‘Little Englander’ he was also the ‘International Man’, who favoured a global civil society, based on the peaceful interactions of peoples rather than the intrigues of state diplomacy; he remains, in this context, an archetype for liberals of all political varieties, and none.


Headline image: Photo by Henry Hemming. CC BY 2.0 via Flickr.



The post Richard Cobden: hero of the Left or Right? appeared first on OUPblog.



Our exhausted (first) world: a plea for 21st-century existential philosophy

Consider: a lecture hall of undergraduates, bored and fidgety (and techne-deprived, since I’ve banned computers and devices in class) in distinctive too-cool-for-school Philosophy 101 style.—Ah, but today will be different: the current offering is not Aristotle on causation, or Cartesian dualism, or Kant’s transcendental unity of apperception—no. Today the reading is from Kierkegaard’s Concluding Unscientific Postscript, and surely these students are eager to talk about the significance (or lack thereof) of our own fragile, brief lives.


With some anticipatory relish, I give them Kierkegaard’s ontology in broad strokes. “Kierkegaard,” I announce, “is concerned about one thing: meaningfulness, the meaning of life, your life, materially realized.”


Now for the question: what is it that a human being might initially move towards in an effort to realize her or his existential birthright of selfhood? Why, pleasure, yes?


Yes?


Strangely, no one bites. I try again: ‘Think about what it would be like to commit yourself to a life of pleasure—not as an occasional relief, but as a vocation? What would you do? Where would you start? Be specific. If, right now, you were defined as a pleasure-seeker, what would you do, where would you go, right now?’—Aha! Hands in the air.


Dear Reader: what do you think I hear? Sex, you say? Drugs? Sex and drugs on the beach?


No. Here are the first two responses I got:


“Lunch.”


“A Barcalounger and a TV.”


Seriously.


I press them: what kind of lunch? (I wondered if they would approach the fabulous heights of Kierkegaard’s own champagne-soaked aesthetes.) “Grilled cheese,” the student shrugs, “you know, comfort food.”


That’s it? And our Barcalounger? ‘Just ready to zone out.’


Indeed. And, lest you think these students are somehow more dispirited than the usual hormonally wracked/existentially pumped variety (undergraduate and graduate alike), let me tell you that this is a phenomenon I’ve observed over the years, on a number of North American campuses, and it continues to astonish me. They are exhausted. These children of first world everything, every material good and privilege that humans have wrought, fashioned and fought for over millennia, are simply tired of the whole thing.


What does this mean?


It means, in part, that Kierkegaard’s diagnosis of our age (originally of his ‘early 19th-Century backwater of Europe’ age, but plus ça change…) is correct: we are failing to think in the right way about the demands of subjectivity, and indeed what it is to be a subject.


Trained as we are to occupy the objective mode, to quantify, analyze and measure—how much bandwidth? Points scored in the debate? Jelly beans in the jar, or dollars in the bank account?—we forget that all of these facts, the torrent of Googled information ever exponentially increasing, are meant to mean, or to be, something for us. More to the point: something for you, or for me.


Evidently, we need some help in knowing ourselves (to crib a line), or with coming to terms with what it is to be a self. One of Kierkegaard’s pseudonyms, Anti-Climacus, describes a person who ‘lives fairly well’, has a family and a good job, is in all respects honored and esteemed—and yet no one detects that he lacks a ‘self’. Anti-Climacus trenchantly concludes, “The greatest hazard of all, losing the self, can occur very quietly in the world, as if it were nothing at all. No other loss can occur so quietly; any other loss—an arm, a leg, five dollars, a wife, etc.—is sure to be noticed.”


We might observe that the students in the existential ‘check-out’ line are in a different condition from this redoubtable, yet empty, human being: they aren’t even aroused by the prospect of taking up the gleaming mantles of propriety and esteem—those letters after the name! The corner office and the elaborate business card. Oh, they will do all of it, of course, but the point of their activity, any of it, except for indolent repose, has gone missing: their subjective condition is not fully functioning.



Image credit: Drawing of Søren Kierkegaard by Niels Christian Kierkegaard. The Frederiksborg Museum. Public domain via Wikimedia Commons.

Hence Kierkegaard’s often misunderstood, generally misquoted notion of ‘truth is subjectivity’—which does not mean, as someone once said to me at a faculty party, “Oh, yeah, he’s just saying that if you think it’s true then it’s true for you, right?” Not right. (As if the clause ‘true for you’ had any epistemic purchase at all.) In fact, Kierkegaard is merely amplifying the ancient Socratic dictum that there is an absolute, incontrovertible difference between orthê doxa, correct opinion, and epistêmê, knowledge. Two persons holding the same belief can’t be distinguished on the basis of, say, an utterance; both will claim ‘X.’ The difference between the two is how the belief is held. A merely true belief is unreliable, not located in a network of relevant justified true beliefs. Knowledge, of course, is a true belief that is held in the right way, standing in the proper relation to other pieces of knowledge; it can be accounted for, and recognized when approached from multiple epistemic perspectives.


Now the issue is clear: merely holding, and espousing, a true belief is not necessarily to be in the right relation to that truth. Our students utter all manner of truths, but they often seem delivered as material stolen from a construction site: here’s a couple of worthy boards, there’s a bag of nails, and what’s it all about, anyway? Another of Kierkegaard’s voices, Johannes Climacus, provides a trenchant example of just such a subjectively-deficient truth claimant: he imagines a madman who has escaped from an asylum. Of course, the madman doesn’t want to be captured and taken back: what to do? He must convince everyone around him that he is in fact sane, and how better to do this than to speak the objective truth? He finds a ‘skittle ball’ on the ground, and he secures it in the hem of his coat, vowing to utter a true sentence every time it bumps his bottom. What does he choose? ‘Boom! The world is round.’ He visits friends in order to convince them of his renewed sanity and paces the floor, uttering this sentence at each posterior prompt.


But surely the earth is round?


Of course it is: the fault of the madman’s recitation doesn’t lie in the truth of the utterance, but in his relation to it. His objectively true remark is subjectively empty, an incantation to ward off the asylum supervisor, not a meaningful remark about the world through which he moves.


‘Truth is subjectivity’ is thus, in part, a meditation on the way in which a truth is held. And indeed, those who teach—any subject at all, but particularly philosophy—have an intersubjective obligation to underscore not only why the lesson at hand is important to understand, but how each student should consider what that text, or argument, or historical account, might mean particularly for them. The task of education is not (again, borrowing from Socrates) to pour external facts and claims down their willing (or unwilling) gullets, but to set a challenge, that of developing what Kierkegaard calls ‘interiority,’ the aduton or sanctuary that is an established self, one committed to a task in the world.


Or: getting a life, you might say.


Back to the lecture hall: I try once more to peddle the life of pleasure—that first and ultimately hopeless attempt in the Kierkegaardian quest for selfhood—and ask the question again. One of them sits forward to ask: ‘what do you mean by “defined as a pleasure seeker”? You mean that’s my identity?’


Indeed. Seeing that human beings have, and are responsible for, an identity—ethical, sexual, religious, racial, political—is a good place to start the subjective conversation.


Image credit:  Lecture theatre fills up by portableantiquities, CC BY 2.0 via Flickr.


The post Our exhausted (first) world: a plea for 21st-century existential philosophy appeared first on OUPblog.



Shakespeare’s encounter with Michel de Montaigne

Some people sign their books but never read them. Others devour books without bothering to inscribe their names. Shakespeare falls in the latter category. In fact we don’t truly know whether he owned books at all; just six Shakespearean signatures are considered authentic, and they appear exclusively in legal documents.


But given Shakespeare’s profound reliance upon such works as Ovid’s Metamorphoses, Plutarch’s Lives, and Holinshed’s Chronicles of England, Scotland, and Ireland, it’s overwhelmingly probable that he acquired at least a small collection of books during his career as a poet and playwright. Where these books are now is anyone’s guess. Some may have crumbled to dust or served as fuel for fires. A few, however, are probably still extant, perhaps resting on the shelves of rare book rooms or moving through the hands of private collectors. Imagine how their “market value” would soar if they were known to have belonged to the author of Macbeth and King Lear. Yet they remain inert objects of no value whatsoever until they come to life through the attention of alert and imaginative readers.


Of all the books that Shakespeare encountered – whether he owned them, borrowed them, or flipped through their pages in a bookstall near St. Paul’s – the most original and engrossing may well have been the Essays of Michel de Montaigne as translated by the scholar John Florio. Published in 1603, this work was probably known to Shakespeare even before it appeared in print. Florio, after all, had obtained the patronage of the Earl of Southampton in the early 1590s – the same Earl to whom Shakespeare had dedicated Venus and Adonis in 1593 and The Rape of Lucrece a year later. So there’s every likelihood that the two writers met and talked shop within the Southampton circle. Florio also mentions that half a dozen other scholars had attempted to translate Montaigne, but that none were sufficiently adept in French to succeed at the task. Montaigne, in other words, was something of a sensation in late sixteenth-century London. And Shakespeare, a voracious and opportunistic reader, would have been curious to know whether this was a writer from whom he might learn, take pleasure, or steal.


He probably did all three. But we can only demonstrate the thefts. Shakespeareans have long recognized, for example, that a passage in The Tempest borrows extensively from a lengthy Montaignian paragraph in an essay called “Of the Caniballes.” And why shouldn’t it? Elizabethan playwrights were constantly lifting the words of other writers – “filching” them, as Florio puts it – and who wouldn’t be tempted to draw material from a blog-like meditation on a topic as scandalous as cannibalism in the New World? Never mind that Montaigne eventually concludes that Europeans are more barbaric than Americans inasmuch as they roast people alive rather than eating them after they’re dead. The topic is inherently fascinating. And due to Montaigne’s penchant for examining a given subject from multiple perspectives, writers have always found a treasure-trove of fresh perceptions and striking opinions in his prose.



l’Histoire au bout des doigts by Pierre (Rennes), CC BY 2.0 via Flickr.

Consider the titles of his essays as rendered by Florio: “How we Weepe and Laugh at one selfe-same Thing”; “That our Desires are Encreased by Difficulty”; “Of the Affection of Fathers to their Children”; “Of Physiognomy”; “Of Crueltie”; “Of Thumbs.” How could any reader with an active mind fail to be intrigued? Or consider some of his characteristic conclusions: “Both male and female are cast in one same mold: instruction and custome excepted, there is no great difference betweene them”; “It is an overvaluing of one’s conjectures, by them to cause a man to be burned alive”; “Of all the infirmities we have, the most savage is to despise our being.” Montaigne is often singled out as the most forward-looking writer of the Renaissance, and it’s not hard to see why. His skeptical predisposition combined with his penetrating intelligence must have seemed irresistibly attractive to many English readers. Shakespeare was likely among them.


In the end, though, it was probably Montaigne’s style of thought rather than his arguments that left the deepest impression on English literary culture. Florio captures his inquisitive, meandering style with astonishing verbal exuberance. Apart from Shakespearean drama itself, there’s scarcely another work from Elizabethan England that offers a similar display of lexical brio. Hundreds of words make their first appearance in English, including “criticism,” “masturbation,” “judicatory,” and “dogmatism.” Florio experiments with verbs such as “fantastiquize,” “attediate,” and “dis-wench”; he serves up nouns like “profluvion,” “codburst,” “ubertie,” and “supputation”; and he coins dozens of compound terms, among them “cup-shotten,” “ninny-hammer,” “sinnewe-shrunken,” “wedlocke-friendship,” “greedy-covetous,” and “wit-besotting.” Shakespeare himself was a lover of words and a prolific neologist, so it’s difficult to imagine that he didn’t enjoy perusing Montaigne in Florio’s ebullient vernacular.


Dr. Johnson, in his Life of Milton, famously claims that Paradise Lost is a poem that the reader “admires and lays down, and forgets to take up again.” The same could never be said of Florio’s Montaigne. It’s true that few people read it from cover to cover, but the book is relentlessly interesting, and one can open it anywhere – as Augustine did with his Bible – and find oneself immediately caught up in Montaignian introspection. My guess is that Shakespeare had sustained access to a copy of this book, and that he ventured into it repeatedly, soaking up the language and the free-form contemplation without ever feeling short-changed by the essayist’s proclivity for self-contradiction.


In the end, Montaigne is less a source for Shakespeare than a catalyst, a provocation, a spur. Had his book never seen print, the great plays would still have been composed. But the works of Shakespeare are richer for Montaigne’s existence – and for Florio’s long labor in Englishing the Frenchman’s extraordinary “register” of his “live’s-essayes.”


The post Shakespeare’s encounter with Michel de Montaigne appeared first on OUPblog.



How much do you know about Hannah Arendt? [quiz]

This September, the OUP Philosophy team have chosen Hannah Arendt as their Philosopher of the Month. Hannah Arendt was a German political theorist and philosopher best known for coining the term “the banality of evil.” She was also the author of various influential political philosophy books. In addition to her scholarly work, Arendt had a fascinating personal life, including surviving an internment camp in Southern France.


Test your knowledge of Hannah Arendt in the quiz below.



 


Feature image credit: Bacharach Germany, by Jiugang Wang. CC-BY-SA-2.0 via Flickr.


Quiz image credit: Brandenburg Gate, by Arne Huckelheim. CC-BY-SA-3.0 via Wikimedia Commons.


The post How much do you know about Hannah Arendt? [quiz] appeared first on OUPblog.



September 26, 2015

Istanbul, not Constantinople

Throughout history, many cities changed their names. Some did it for political reasons; others hoped to gain an economic advantage from it. Looking at a modern map of the world, you’d probably have a hard time finding Edo, Istropolis, or Gia Dinh. That is because these places are today known as Tokyo, Bratislava, and Ho Chi Minh City respectively. With this interactive map, you can explore a few notable examples of city name changes, and the history behind them.




Image Credit: “Istanbul” by Robert S. Donovan. CC BY NC 2.0 via Flickr.


The post Istanbul, not Constantinople appeared first on OUPblog.



Substance, style, and myth in the Kennedy-Nixon debates

On the evening of September 26, 1960, in Chicago, Illinois, a presidential debate occurred that changed the nature of national politics.


Fifty-five years ago, debates and campaign speeches for national audiences were relatively rare. In fact, this was the first live televised presidential debate in U.S. history.


The two presidential aspirants were both youthful but seemed to present a contrast between substance and style: Richard M. Nixon, from working-class origins, appeared to represent the former. He had spent the previous eight years as vice president and had served in the Senate and the House of Representatives. John F. Kennedy signified the latter. A single-term senator, previously in the House, he was from a wealthy, Catholic, New England political family.


A whopping seventy million people tuned in to witness the confrontation (the U.S. population was 180 million, so roughly 60 percent or more of the adult population watched). Scholars (and observers at the time) agree that this debate signified a turning point in the election. Kennedy essentially won by showing up. Wearing a form-fitting dark suit, Kennedy looked directly into the camera. He performed confidently and thereby came across as presidential. He had practiced extensively and rested up for the event. Nixon, on the other hand, effectively lost by showing up. He appeared un-presidential: he wore a loose gray suit, looked pale, and was sweating profusely. (He had been campaigning hard, and had been sick and lost weight before the debate.) Nixon, trying to be less combative, was also less adept at looking right into the camera.


Kennedy began the debate nervously but resolutely gazed into the lens as he delivered an eloquent opening statement: “In the election of 1860, Abraham Lincoln said the question was whether this nation could exist half-slave or half-free. In the election of 1960, and with the world around us, the question is whether the world will exist half-slave or half-free, whether it will move in the direction of freedom, in the direction of the road that we are taking, or whether it will move in the direction of slavery.”


Nixon did well with his opening content but not in delivery. He, too, began hesitantly. Eyes drifting, he ceded ground with even his very first words, saying: “The things that Senator Kennedy has said many of us can agree with. There is no question but that we cannot discuss our internal affairs in the United States without recognizing that they have a tremendous bearing on our international position.” Nixon remained agreeable throughout the debate. He sought to “erase the assassin,” as advised by running mate Henry Cabot Lodge. Indeed, some political insiders, such as pro-Kennedy journalist Joe Alsop, thought that this was a good tactic for Nixon. For more on the debate see the John F. Kennedy Presidential Library.


What is usually cited as exceptional about this first of four debates was the formative impact of image. As Frank Stanton, president of CBS at the time, put it bluntly: “Kennedy was bronzed beautifully . . . Nixon looked like death.” Don Hewitt, who produced the debate, agreed. Upon seeing the candidates together on screen before the event began, Hewitt pushed Nixon’s advisers to mop off the already melting Lazy-Shave powder (a drug-store pancake makeup an aide had applied to cover Nixon’s permanent five o’clock shadow) and have a professional makeup artist make him look less sweaty and pale. Nixon, a seasoned performer on television and in debates and press conferences, and his team declined.


This was clearly a mistake. One glance at an image of the two men side-by-side reveals the obvious. Kennedy looked like “a matinee idol,” as observers opined; Nixon paled in comparison.


Post-debate newspaper coverage and surveys immediately suggested not a landslide but a slight positive turn toward Kennedy. The New York Times was characteristic, reporting on September 27th: “For the most part, the exchanges were distinguished by a suavity, earnestness and courtesy that suggested that the two men were more concerned about ‘image projection’ to their huge television audience than about scoring debating points.”


But a closer look at surveys, such as by Sindlinger & Co., seemed to suggest a different case: those who identified as watching the debate on television deemed Kennedy the clear winner, while those who said they listened on the radio gave the edge to Nixon. It was a classic case of style trumping substance. Or was it?


First, most such polls and surveys did not control for crucial variables such as pre-debate preferences (party, religion, etc.), making it exceedingly difficult to determine how much the debate changed preexisting views.


Second, by 1960 radio listeners were by no means a random sample. Roughly 88 percent of households in the U.S. had televisions (up from 11 percent in 1950). Listeners were more likely to be in rural areas, tended to be Protestant, and skewed against Kennedy. How representative these studies were is also very much in doubt. (Sindlinger, for example, seems to have sampled only 282 radio listeners, far fewer than peers would have considered a plausible random sample.) As political scientist James Druckman clarified, “relative to television viewers, radio listeners may have been predisposed to favor Nixon over Kennedy.” Thus the role of image (if it is singular at all) needs to be tested more rigorously; these anecdotal surveys simply cannot be relied on.


In a 2003 article in the Journal of Politics, Druckman documented a range of new tests with fresh subjects watching and listening to the 1960 debate, and found that, at least for contemporary viewers, image did matter centrally in shaping perceptions of the “winner” and thus the effects of this first debate. But Druckman also indicated that it is significant that Kennedy performed so well in articulating his policies. As historian David Greenberg persuasively explained, “the notion that Nixon won on radio but lost the debate—and, in some tellings, the presidency—‘only’ because Kennedy looked better on the tube turns out to be lacking in much support.”


So, yes, image was crucial, but it also seems that what Kennedy had to say and how he approached the issues—most notably his apparent lack of experience, which he parried by discussing his work in Congress and his philosophies of “effective government” and anti-communism in contrast to the past Administration’s period of “stagnation”—was significant as well. Indeed, the very split between substance and style is arbitrary and may not be particularly illuminating. A blended recognition of the intertwined role of—and recognition of the limits of—substance as well as style helps us to better see what made the charismatic, fresh Kennedy such a revelation in 1960 but also reveals why Nixon continued to poll so strongly as well. So, what was new about September 26, 1960 and why does it matter today?


Until the election of 1960, the medium of television had not been central to politics and vice versa. In 1952, for instance, Republican Dwight Eisenhower ran presidential ads featuring a format of real Americans “asking Ike” called “Eisenhower Answers America,” but Democrat Adlai Stevenson refused to appear in televised advertisements and frowned on candidates being “marketed like soap.”


Communications scholars and historians note that until the early 1960s television was much more of an entertainment medium. In fact, the Kennedy-Nixon debates had a chilling effect because they seemed to matter so much. The next live televised presidential debate did not occur for another decade and a half. Risk-averse candidates worried about the twin roles of substance and style on TV – incumbents, those with significant experience, or those who were not as telegenic, like Nixon, could generally only lose, while challengers of various types, particularly those whose looks and abilities were well-suited to the medium, would likely benefit disproportionately.


In this way, 1960 marked a break from the old. Elements of today’s info-entertainment form of politicking, where style often trumps substance, are age-old in American politics but their reach into living rooms across the nation and their impact in shaping candidate pools, and thus elections, is new. Televised political stumping developed with widespread ramifications for American politics – connecting with the audience and “winning” were vitally important. How a candidate looked and sounded merged with how audiences felt about the candidates in ways fully recognizable today but new in the 1960s. Until that time most Americans read or saw photos of candidates but never really had the opportunity to experience them in the more personal format offered by television.


Six weeks later, a record number of voters came out for the national election. As predicted, it was a close race. Kennedy secured a narrow popular-vote victory: 49.7 percent to 49.5 percent. Polling by Gallup and others revealed that a narrow majority of voters reported being influenced by the four televised “great debates,” and as many as six percent claimed that the debates were decisive for them.


Whatever the myth of image as the decisive factor in close elections such as Kennedy’s victory over Nixon in 1960, televised presidential debates have since become standard practice—and virtually omnipresent—in American political life. Even those who never watch a debate live are exposed. So, too, the false litmus test of a substance-style divide has become a central feature of how Americans evaluate politicians.


 


The post Substance, style, and myth in the Kennedy-Nixon debates appeared first on OUPblog.



Why does the European Day of Languages matter?

Each year, the European Union celebrates the European Day of Languages on 26 September. To mark this celebration of linguistic diversity, we asked the editors of Forum for Modern Language Studies to tell us why they think people should study some of the major European languages.


French

David Evans, Subject Editor (French)


Where is it spoken? French is an official language in 29 countries around the world, is spoken by over 270 million people across five continents, and is the fifth most commonly spoken language worldwide. In North and sub-Saharan Africa, more than 100 million people in over 30 countries speak French, and it is the official language of communities such as the Canadian province of Quebec, and Haiti, where it co-exists with a creole variety, as it does in many islands of the Antilles and Indian Ocean, such as Mauritius and Réunion.


Why study French? For centuries, French culture has had a huge influence on the world, including Descartes’ ‘Je pense donc je suis’, Enlightenment thinkers Voltaire and Rousseau, the revolutionary ideals of liberté, égalité and fraternité, poets such as Hugo, Baudelaire, Verlaine, and Rimbaud, and philosophers such as Sartre, de Beauvoir, and Camus. France was the birthplace of the déclaration des droits de l’homme et du citoyen of 1789 and the counter-cultural revolution of May ’68, and its culture still fascinates and provokes in equal measure today, thanks to internationally infamous writers such as Michel Houellebecq. To learn French, and to study French culture in its dizzying variety, is to participate in the ever-evolving linguistic and cultural vitality of our modern world.


German

Michael Gratzke, Subject Editor (German)


Where is it spoken? German is spoken in Germany, Austria, large parts of Switzerland, Liechtenstein, and Luxembourg. There are recognised German-speaking minorities in Italy, Belgium, and Denmark. Diaspora communities in Hungary, Romania, Russia, Kazakhstan, the United States, Canada, and many places in Latin America, such as Chile, Paraguay, and Brazil, speak German. Yiddish has its roots in German and there is even a creole variety of German spoken in Papua New Guinea, called Unserdeutsch.


Why study German? Before we even start talking about economic power, political influence, or Germany’s troubled history in the twentieth century, we can ascertain that there is huge cultural variety and richness within the German-speaking sphere. When we add to this picture the recognised language minorities such as Danes and Sorbs in Germany, and Slovenes, Hungarians, and Croatians in Austria, we get a sense of the linguistic and cultural diversity of central Europe. Moreover, 12.3% of all German nationals have some kind of migration background, whilst 7.7% of the population are foreign nationals who reside in Germany. The German-speaking countries therefore sit at the major European crossroads of migration, trade, and cultural exchange. What other reason do you need to study their language(s) and cultures?


Russian

Claire Whitehead, Subject Editor (Russian)


Where is it spoken? Russian is the seventh most-spoken language in the world and is the most geographically widespread language in Eurasia. It is the official language of Russia, Belarus, Kazakhstan, and Kyrgyzstan, as well as being unofficially but widely spoken in Ukraine, Latvia, Estonia, and Moldova.


Why study Russian? Russia today, with a population of some 144 million people, is increasingly influential on the world stage in terms of its politics, economics, art, and culture. It is a nation with a rich artistic and intellectual heritage that includes great writers such as Dostoevsky and Tolstoy, and notable thinkers like Herzen and Berdiaev. Dating back at least as far as the reign of Peter the Great, Russia has enjoyed something of an antagonistic relationship with the rest of the world, Europe especially. And so the study of Russian history, literature, and culture in all of its various iterations provides the opportunity to begin to understand a country that asserts the existence of a multi-polar world and a specifically Russian way of doing things. Current research in the broad area of Russian studies includes but is not limited to literature and art from medieval times to the present; issues in cultural memory and memory studies; reassessments of the structure and functioning of Soviet-era Russian society, especially under Stalin; Russian thought and intellectual history; manifestations of Russian nationalism; environment studies and ecocriticism; comparative studies between Russian literature and other world literatures; film studies; Russian music; and Russian and Soviet orientalism.



Spanish

Fiona Mackintosh, Subject Editor (Spanish)


Where is it spoken? Spanish, as the fourth most commonly spoken language worldwide, is spoken by some 350 million people around the world, principally in Latin America and Spain, but also increasingly in the United States. By 2050, it is estimated that 30% of the population of the United States will be native Spanish speakers.


Why study Spanish? Studying Hispanic cultures worldwide gives a window into the major forces that have shaped and continue to shape our globalized world, with Columbus’s violent irruption into the ancient and mighty native civilizations of the Americas being one of the many symbolic watershed moments of Hispanic history. Current research in Hispanic Studies is going through exciting times. This includes debates about nationhood, identity, and governance within Spain and its autonomous regions; re-examination of the effects of neo-liberalism across Latin America; international literary dialogues between Latin American writers at home and those in European and North American diasporas; the rise of social media and their interaction with new forms of creativity and cyberculture; indigenous movements and issues of territorial and cultural representation; and connections across Hispanic and other language areas on subjects such as ecocriticism, translation studies, and crime fiction, with the novela negra being particularly strong in Spain and Latin America. To learn about Hispanic cultures in all their many manifestations and variations is to gain invaluable insight into the complexity, diversity, and plurality of our contemporary world.


Image Credit: ‘Conversation’ by argaplek. Public Domain via Pixabay.


The post Why does the European Day of Languages matter? appeared first on OUPblog.



Five astonishing facts about women in Shakespeare

What would Macbeth be without Lady Macbeth? Or Romeo and Juliet with only Romeo? Yet there’s an enormous disparity between female and male representation in Shakespeare’s plays. Few great female characters deliver as many lines or as impressive speeches as their male counterparts. While this may not be surprising considering sixteenth-century society and theater, the data reveal a wider disparity than previously thought. Just as cinemetrics and filmonomics examine the gender gap in cinema through screen time, Shakespeare scholars have counted lines and roles to better understand gender on Shakespeare’s stage.


Female characters speak fewer than half as many lines as male characters. Rosalind, from As You Like It, is the largest female role in all of Shakespeare’s plays, yet speaks only 721 lines. Hamlet, the largest male role, speaks a total of 1,506 lines.


The women with the most speeches also have less than half as many as their male counterparts. Cleopatra (Antony and Cleopatra) and Rosalind (As You Like It) deliver 204 and 201 speeches respectively, the most out of all female Shakespeare characters. The male characters with the most speeches are Falstaff (Henry IV, Part 1, Henry IV, Part 2, and The Merry Wives of Windsor) and Richard III (Henry VI, Part 3 and Richard III), with 471 and 409 speeches respectively.


There are seven times as many roles for men as there are for women in Shakespeare’s plays. Of the total 981 characters, 826 are men while only 155 are women; that means that women account for less than 16% of all Shakespearean characters.


Even fewer women actually performed on stage, since most female roles were portrayed by men until the mid-seventeenth century. The first professional female actress recorded on the English stage, Margaret Hughes (c. 1630-1719), initially performed as Desdemona in Othello on 8 December 1660. Conversely, the last notable actor who performed female roles on stage was Edward Kynaston (1640-1712), who later gained popularity in Shakespeare’s Henry IV.


Nearly 40% of the lines in Shakespeare’s As You Like It are spoken by women, an impressively high percentage of female lines compared to his other plays. Romeo and Juliet, despite being about a tragic love story between a man and a woman, is only 31% female lines. Timon of Athens has the smallest share of female lines of all Shakespeare’s plays, a minuscule 0.67%.


Featured Image: “Rosalind and Celia” by William Henry Simmons (1870). Folger Shakespeare Library. CC BY-SA 4.0 via Wikimedia Commons


The post Five astonishing facts about women in Shakespeare appeared first on OUPblog.



A crisis of commitment

A reasonable line of thought can give rise to a crisis of commitment: Many a commitment requires persistence or willpower, especially in the face of temptation. A straightforward example is the decision to quit smoking; another is the promise to be faithful to someone for the rest of one’s life. However, when we consider making such a commitment, we are often in a position to anticipate that we will be exposed to temptation and to realize that following through would require persistence or willpower. But this means that we may not be in a position to predict that we will follow through.


Yet if we are not in a position to predict that we will follow through, then—so the reasonable line of thought goes—we are not entitled to make the commitment. After all, if we cannot say that we will persist, then how could we make the commitment to do it? And what would making the commitment consist in, if not at least in saying that we will persist? This line of thought is especially pressing if in making a commitment we invite others to rely on us. For example, the promise to be faithful might be part of an invitation to share one’s life with another—to have children, a house, and a thirty-year mortgage. It seems reasonable to think that such a commitment would require the prediction that we will follow through.



Image: “Wedding” by kgorz. Public domain via Pixabay.

It might be replied that we often are in a position to predict that we will follow through. Yet simple reflection should lead us to realize that in very many cases, this is not the case. One reason is the prevalent statistical evidence about human failure to uphold commitments of various kinds; another reason is that when we make commitments, we usually don’t gather evidence about our chances of success, or about the success rate of similarly situated people. Thus even if the empirical evidence is not against us, it seems that we don’t bother to gather evidence that speaks for us—but a prediction that is based on little or no evidence is unfounded. Furthermore, if we understand that we will be exposed to temptation, and we understand that temptation exercises a pull on us (as is in the nature of temptation), we will understand that we cannot predict that we will resist it. To make such a prediction would require predicting that we will feel no pull. Finally, experience with others and ourselves simply tells us that we often tend to fail to live up to substantial commitments to resist temptation.


Nonetheless, I hold, such a crisis of commitment is unfounded (though, of course, we might have reasons for a different crisis of commitment). That is because it rests on a mistake: making a commitment does not require being in a position to predict that we will follow through. That is because our commitments concern our future actions. And as agents, we have a distinct view of our future—insofar as it is a future that we determine through our agency. More precisely, to the extent that our future is up to us, to that extent we can settle the question of what we will do as agents: we can settle what will happen in light of reasons that show it worthwhile to make it happen—viz. reasons to act.


But these are practical reasons, not evidence, which would be the basis of a prediction. And we can have practical reasons that show doing something worthwhile, and so make our commitment rational, without thereby being in a position to predict that we will follow through. This implies that being in a position to predict that we will follow through on a commitment is not necessary for making the commitment, nor even for rationally making it. To be an agent consists not only in the freedom to make something happen, but also in the authority to settle the question of whether to do so in light of reasons that show it worthwhile to make it happen.


That is not to say that following through on a commitment is easy, or that we should think that it is easy. If we anticipate temptation we need to be mindful of the possibility of failure. But being mindful means making smart choices—choices that may help us steer clear of temptation—not predicting that there is a good chance that we will fail or even planning for failure. Also, being mindful and steering clear of temptation is not enough. In the end, we must resolutely resist temptation. And resoluteness starts with the right mindset—the mindset of decision rather than prediction.


Featured image credit: “Wedding rings”, by Allan Ajifo. CC BY 2.0 via Flickr.


The post A crisis of commitment appeared first on OUPblog.


