Oxford University Press's Blog

April 25, 2016

A tale of two cities: Anzac Day and the Easter Rising

On 25 April 1916, 2,000 Australian and New Zealand troops marched through London towards a service at Westminster Abbey attended by the King and Queen. One of the soldiers later recalled the celebratory atmosphere of the day:


The thousands cheered us till they were hoarse; they broke through the cordon of police. Girls hugged us, and I don’t remember the number of kisses that we received. From the windows of the different buildings flowers were showered on us as well as cigarettes, flags, handkerchiefs, and other little articles. The Strand had the appearance of a carnival. (Quick March, April 1919, p. 38)


This was the first Anzac Day. A year earlier, Australian soldiers had been the first to land on the Gallipoli peninsula as part of an attempt by the combined forces of the British and French empires to invade the Ottoman Empire. Their unprecedented feat of arms had prompted breathless reporting about this ‘race of athletes’ from the war correspondent Ellis Ashmead-Bartlett, whose despatch told the world how the Anzacs:


[W]aited neither for orders nor for the boats to reach the beach, but, springing out into the sea, they waded ashore and forming some sort of a rough line rushed straight on the flashes of the enemy’s rifles. Their magazines were not even charged. So they just went in with cold steel, and I believe I am right in saying that the first Ottoman Turk since the last Crusade received an Anglo-Saxon bayonet in him at 5 minutes after 5 a.m. on April 25.


This moment formed the basis of the Anzac legend, an understanding of the events at Gallipoli which identified the military prowess of these soldiers as distinctively Australian. Repeated and analysed in historical writings, in poems and songs, plays and films, and above all through commemoration each year, the Anzac legend and Anzac Day have shaped Australian national identity.



Image credit: Australian and New Zealand soldiers marching to Westminster Abbey to commemorate the first Anzac Day, London, 25 April 1916 by National Library of Australia. Public domain via Wikimedia Commons.

What is interesting about the march through London is that the occasion highlighted the role of Australians and New Zealanders even though they formed a relatively minor part of the Mediterranean Expeditionary Force. British and Irish men formed the numerically greater part of the force. There was scarcely a corner of the British Isles that did not send men to Gallipoli: from the Highlands of Scotland and the rugged countryside of Northumberland, to the flat expanses of Lincolnshire and East Anglia or the green fields of Ireland; from the men of the king’s estate at Sandringham in Norfolk and the mounted yeomen of Berkshire, to the industrial heartlands of Glasgow and Lancashire.


In 1915 all of these men were citizens of the United Kingdom. By 1922, the Irish Free State was a separate country. The most important step towards Irish independence was the Easter Rising of 1916. Thus, at the very moment that the Anzacs were marching in London, across the Irish Sea an armed insurrection was underway in Dublin. The day before, Patrick Pearse, one of the leaders of the approximately 1,600 rebels, had stood on the steps of the General Post Office and read a proclamation that declared Ireland an independent republic. On 25 April 1916, the British armed response began to gear up. Amongst the British forces were Irishmen of the Dublin Fusiliers, who succeeded in capturing a key newspaper building in the city but who were also subject to an ambush on Parliament Street in which 23 men died.


These men might otherwise have also been marking the Gallipoli anniversary, for members of the regiment, professional soldiers recruited in Ireland, had also landed on the peninsula on 25 April 1915. Indeed, the landing of the Dublin and Munster Fusiliers at V Beach was among the bloodiest and hardest-fought battles of the day. Yet there are no reports of commemorative events anywhere in Ireland in 1916. This reflects a difficult situation, one that only grew more difficult over time. Irishmen who later wished to mark their participation in Gallipoli did so unobtrusively each year in April. Armistice Day was also marked by thousands of people, but it was fraught with controversy. As the fledgling nationalist government worked to establish the Irish Free State there was an understandable atmosphere of Anglophobia, and the sacrifice and slaughter of Irishmen in the service of the Crown was not a memory it cared to burnish.


Comparing the commemoration of the Gallipoli campaign in Britain, Ireland, Australia, and New Zealand across the century reveals the interchange between commemoration and national identity. The way in which the campaign and the contributions of these various national contingents were perceived and remembered took radically different trajectories. This was a moment when the nature of Britishness was in flux. An all-encompassing imperial identity began to atomize. At the outbreak of the war it was possible to consider oneself to be simultaneously British and Australian or British and Irish, but those identities were later rent asunder. In Australia it happened slowly and peacefully; in Ireland it happened rapidly and violently.


Featured image: Landing at Gallipoli by Archives New Zealand. CC-BY-SA 2.0 via Flickr.


The post A tale of two cities: Anzac Day and the Easter Rising appeared first on OUPblog.



Human rights and the (in)humanity at EU’s borders

The precarious humanitarian situation at Europe’s borders is creating what seems to be an irresolvable tension between the interest of European states in sealing off their borders and respect for fundamental human rights. Frontex, the EU’s external border control agency, in particular has been embroiled in a fair amount of public controversy since its inception in 2004. Described by Human Rights Watch as “EU’s dirty hands” and linked to human rights violations, Frontex has been one of the most visible examples of the militarization of Europe’s borders. However, the organization has in recent years seen remarkable growth in its budget, its personnel, and the size of its joint operations. Frontex is a central element in the EU Commission’s ambitious plans to strengthen European border security and create a European Border and Coast Guard force.


But what is it like to work on the ground as a border guard officer and to feel these political tensions on a daily basis? How do officers, when met face-to-face with the needs of people in extremely precarious life situations, reconcile the objectives of state security with respect for human rights? As part of our research project on border policing we interviewed police officers participating in Frontex operations. Somewhat to our surprise, human rights and humanitarian language seemed to be everywhere. For example, the Frontex Code of Conduct, handed out to all officers on operations, lists the following:



Know and respect the law
Inform those in need of international protection about their rights and relevant procedures
Respect human dignity at all times and be sensitive to cultural differences
Pay particular attention to the need of vulnerable persons
Uphold the highest ethical standards
Act fairly and impartially at all times
Report all violations of the law and the Frontex guide to behavior…


Image credit: shut down frontex (demonstration through warsaw) by Noborder Network. CC BY 2.0 via Flickr.

Officers are expected to achieve the remarkable task (one which has eluded so many European states) of effectively controlling migration, yet with full respect for fundamental rights, including the right to asylum. The question can be asked, of course, whether this is just a smoke screen. Are human rights and the humanitarian discourse simply a discursive tool, employed by EU institutions to justify policies and actions that directly and indirectly contribute to the precariousness of life? After all, humanitarianism is, as Didier Fassin points out, becoming an increasingly salient mode of governance in crisis situations. Are we simply witnessing performative aspects of humanitarianism with little relevance on the ground?


The EU’s recent deal with Turkey may indicate that this is indeed the case. Our analysis of Frontex risk reports and other operational documents also reveals obvious contradictions and disjunctions between the objectives of state security and a concern for migrants’ vulnerability. The loss of life at the borders is, in spite of frequent public and academic critique, still not counted in the official statistics. Vulnerability, in fact, refers to “pressures on the borders” rather than to the humans crossing them. A question can thus be raised about who is considered deserving of protection in such a context.


However, our interviews with Frontex officers on the ground reveal a more complex and paradoxical picture. While human rights and the humanitarian discourse certainly perform a certain kind of political and public relations “work” in the policing of European borders, they also seem to be, to some extent, internalized and appropriated by actors on the ground. Frontex officers thus try to reconcile the conflicting demands to help and to control, to be at the same time humane towards and suspicious of migrants, which can result in self-doubt and soul-searching. As one Norwegian officer, stationed in Greece, put it:


“Are we contributing to something good or are we just helping Greece to do something wrong? /…/ I hope that my children and grandchildren can look back on what their father and grandfather did as something that was right, that he did something good; that this will not be a shadow in European history that I have contributed to. I really hope so.”


His reflections reveal an awareness that simply having good intentions in deeply inhumane conditions may not be enough, and that history may pass a harsh judgement on the present border control measures. Perhaps these are moral questions that we all, including policy makers and the architects of current migration control solutions, should be asking ourselves.


Featured image credit: Refugees on the Hungarian M1 highway on their march towards the Austrian border. 20150904 174 by photog_at. CC BY 2.0 via Flickr.


The post Human rights and the (in)humanity at EU’s borders appeared first on OUPblog.



How well do you know your quotes from Down Under?

“What a good thing Adam had. When he said a good thing he knew nobody had said it before.”


Mark Twain put his finger on one of the minor problems for a relatively new nation: making an impact in the world of famous quotations. All the good lines seem to have already been used somewhere else, by somebody else. For example, when the Australian Prime Minister Malcolm Fraser said ‘Life is not meant to be easy’, he was echoing George Bernard Shaw. And when talking about people who live ‘down under’, even Shakespeare got in on the act (pardon the pun) first: in Richard II the title character talks about ‘wandering with the Antipodes’ to his cousin.


This month we have added some new famous lines said by Australians and New Zealanders to Oxford Essential Quotations. We’ve gathered a selection of these together to test your knowledge – do you know who said what?



Featured image credit: Globe. CC0 public domain via Pixabay.


Quiz image credit: Exchange of ideas. CC0 public domain via Pixabay.


The post How well do you know your quotes from Down Under? appeared first on OUPblog.



April 24, 2016

Anzac Legend

On 25 April 1915, Australian and New Zealand forces landed on Gallipoli. The campaign that followed had a lasting impact on the two nations, with commemorations beginning the following year. The following is an extract from The Australian Imperial Force: Volume V, The Centenary History of Australia and the Great War, edited by Jean Bou, Peter Dennis, Paul Dalgleish, and Jeffrey Grey.


Ever since news of the landing at Gallipoli first reached Australia via the reporting of the British war correspondent Ellis Ashmead-Bartlett, the achievements of the AIF have become embedded in Australian national consciousness. By the end of the war the AIF had come to be regarded as one of the premier Allied fighting forces, and [General Sir John] Monash as one of their most successful generals. Reflecting the widespread militaristic outlook of the early twentieth century, Gallipoli was regarded as the nation’s ‘baptism of fire’, which was understandable given that its only previous military involvement had been in the much smaller-scale South African (Boer) War. What was, and is, less understandable is the suggestion that Gallipoli marked the birth of the nation, as if the very achievement of Federation in 1901 by peaceful means and the introduction of universal suffrage (Indigenous inhabitants excepted) were less significant in the history of the new Commonwealth. One hundred years on from the landing of 25 April 1915, ‘Anzac’ remains a contested concept that attracts vigorous criticism and impassioned defence. The fact that scores of thousands turn out every year at dawn services throughout the country suggests that the AIF as the original Anzacs continues to inspire new generations.


Bean concluded his final volume of the official history by hailing the story of the AIF as a ‘monument to great-hearted men; and, for their nation, a possession forever’. That the AIF was able to achieve what it did truly is a remarkable story. It would have been almost inconceivable in the decade following Federation that Australia could raise a substantial force within a matter of months and dispatch it to fight in distant campaigns. Yet from the moment that Australia entered the war and opened recruiting until the first convoy sailed from Albany, less than three months had elapsed. It was because of the work of countless military and civil officers, and with the support of large sections of the Australian community, that the initial force of 20 000 men—one infantry division and a light horse brigade—was raised so quickly. That was a significant achievement in itself, but the ultimate expansion (and probably over-expansion) of the AIF to a strength of five divisions and the best part of two mounted divisions was, by any measure, an extraordinary effort on the part of a small (and new) nation. By 1918 the AIF was, by any reckoning (and here we can avoid the extravagant claims of some cheerleaders), among the best fighting forces in the empire and, indeed, in the whole of the Allied camp. In the process it produced officers (many from the ranks but also from the pre-war Militia/Citizen Military Forces) who could command at every level. Monash was the outstanding Australian officer that the war produced, and in some circles he was touted as a possible commander-in-chief for the whole of the British Expeditionary Force, but this move to elevate him to the very top was as much a political campaign as it was a sound evaluation of his capability. He was supported by a legion of subordinates, many of whom grew into their positions from a very low base of experience: war was to be the great teacher. The war also produced thousands of soldiers of all ranks who performed their duties efficiently and effectively.


Underpinning these achievements was, first, a training scheme that quickly developed into one that could turn untried civilians into soldiers in a short period, for time was always of the essence. From its arrival in Egypt in December 1914, the AIF had barely four months to create a semblance of a military force from the mass of raw recruits that had embarked in the first convoy. Thereafter as reinforcements arrived in Egypt and, after the failure of the Gallipoli campaign, in Britain for eventual deployment on the Western Front, the training system developed the capacity to keep units at the maximum strength that the flow of recruits would allow. This was no mean feat.


The second, and often overlooked, factor underpinning the exploits of the AIF was the complex yet efficient administrative system that was developed, one that extended from the front lines to the bases in Egypt, France and Britain, all the way back to Australia. It is a source of wonderment a hundred years on to see the level of detail that was recorded on an individual’s file and the efforts that were made to communicate to families the particulars of their loved ones at the front, thereby ensuring public support for the AIF, even when wider questions were increasingly contested. More generally the act of keeping track of movements, equipment and all the support functions necessary to keep the AIF in the field required remarkable administrative abilities across the whole of the AIF and the Department of Defence.


What made all this possible? In the first instance it must be recognised that although the AIF was largely formed from scratch in terms of the bulk of enlisted men, it did not spring from nowhere. The small cadre of officers who formed the tiny pre-war professional army, together with the more robust officers and men of the CMF (a number of officers quickly showed that the rigors of a campaign were beyond their mental and physical capacity and were let go), provided a solid base on which to build. Such men as [Inspector-General, Brigadier-General William] Bridges had honed their skills through experience, in Australia and in South Africa, and on attachment to and working with the British Army. It is fashionable in some ignorant circles to decry the influence of the British Army on the AIF, but the fact is that the AIF fought as part of a larger British formation: the Mediterranean Expeditionary Force at Gallipoli, the British Expeditionary Force on the Western Front and the Eastern Expeditionary Force in Egypt, Palestine and Syria. Besides the indispensable support that the British Army and Britain more generally made available to the AIF, which was far beyond the capacity of Australian industry to provide, the British Army was, for better or worse (decidedly better by war’s end) the source of imperial military doctrine that made it possible for such a formation as the AIF to slot easily into wider operations.



Anzac Day ceremonies became an annual fixture after the war. This one, held around a temporary memorial, was in Mackay, Queensland, in 1929. Credit: Mackay Regional Council.

Similarly, access to British training establishments was critical in enabling the AIF to develop over time its operational skills while, without the resources of the British Army medical system and its supporting network of hospitals, especially in Britain, the AIF could not have sustained the level of medical care that it was able to afford its sick and wounded. British officers who served with the AIF, from Birdwood down, rendered invaluable service, especially in such areas as staff work where the Australian military lacked deep experience. Again, much popular writing denigrates British officers (contemporary cartoons in unit newspapers mercilessly lampooned the monocled ‘toffs’ of the British military establishment), and there were certainly cases of incompetent British officers being posted to Australian units (just as there were incompetent Australian officers), but on the whole the British officers who were attached to the AIF performed well, and the AIF would have been hard-pressed without them.


Nevertheless, although it is essential to acknowledge the inevitable reliance of the AIF on its far larger British counterpart, we should not underestimate the element of self-reliance that eventually made the AIF the force that it was by 1918. We should remember also that the fledgling force of 1914 bore little relation to the AIF of 1917–18. That it should become so highly valued within Allied circles was due in no small part to its officers, whose professional ability grew with experience. Monash, for example, had not done very well at Gallipoli; three years of hard fighting on the Western Front turned him into a leader to rank with the best. Those who held commissions in the CMF more on the basis of their social standing than because of their perceived ability were quickly weeded out in the AIF: demonstrable merit rather than background became the test for commissioning and promotion, an approach that served the AIF well.


Much has been made of the egalitarianism of the AIF, especially compared with what was regarded as the hidebound, class-conscious British Army. Emphasis on the latter can be exaggerated, but it is clear that officer–men relations in the AIF were more relaxed than in the British Army, not least because by the second half of the war many officers had come from the ranks. The AIF became notorious for the ill-discipline displayed by its members on various occasions, not only in comparison with the British Army but also with the Canadian and New Zealand forces. The vast majority of disciplinary cases arose from minor transgressions—drunkenness, overstaying leave, being out of bounds, using obscene language and so on—but there was a significant number of cases of criminal behaviour. Minor lapses in discipline and displays of ‘larrikinism’ could be excused as a release from the stresses of the front line, and in any case they largely escaped the attention of the public in Australia. It was a different matter when troops returned to Australia and engaged in public rowdiness and, in some cases, in such discreditable behaviour that there was danger of a public backlash against them.


‘Mateship’ is often touted as a peculiarly Australian characteristic, but this is a gross exaggeration, as though this tendency to stick together, whatever name is given to it, was not equally to be seen in every other army, especially those from the sister dominions. Australian troops might have been more overt in their demonstration of mateship, but the Diggers were no more concerned about their fellow soldiers than their counterparts from Canada and New Zealand, or indeed from the British Army. Australians did not have a monopoly on small group cohesion. What they shared in particular with their dominion counterparts was the fact that they were away from Australia for exceptionally long periods, and very few got home leave. This naturally focused emotions and a sense of responsibility on the soldier’s immediate surroundings—his platoon and company. The ‘fellowship of the trenches’ was a very real motivating factor that enabled men to endure the rigors of war.


[Charles] Bean was right when he wrote that the AIF became for Australia a possession forever.

The bitterness of the conscription campaign took years to fade and had long-lasting political effects, but the reputation of the AIF remained undiminished. When a second AIF was raised in 1939 it seemed only proper that, following in the footsteps of its famous forebear, it should adopt the names and numbering system of the 1st AIF (thus 2/10th Battalion, 2/12th Battalion and so on), with its divisions following on sequentially from the five divisions of the First AIF. Whatever the prevailing views about the Great War and ‘Anzac’ are—and they regularly change and mutate—the AIF is rightly firmly established in Australia’s consciousness as one of its great achievements.


This article originally appeared on the Oxford Australia blog.


Featured image: The Landing at Anzac, 25 April 1915. Archives New Zealand. CC BY-SA 2.0 via Wikimedia Commons.


The post Anzac Legend appeared first on OUPblog.



World Malaria Day 2016

Over the past few years, the momentum of research and control efforts has dramatically decreased malaria transmission and the number of deaths from this disease. However, in many poor tropical and subtropical countries of the world, malaria continues to be one of the leading causes of illness and death. To avoid a decline in the efforts to prevent and treat this disease, World Malaria Day on 25 April is focusing on the theme “End Malaria for Good.” We recently spoke to Johanna Daily, MD, MS, an Associate Editor of the journal Open Forum Infectious Diseases, who answered some important questions, including why malaria remains such a difficult public health challenge and whether malaria can ever truly be eradicated.


You previously described the state of the fight against malaria for OUP blog readers in 2013. What major advances have there been in the prevention, diagnosis, and treatment of the disease since then?


Let’s recap the progress over the last major period of evaluation, 2001-2015: we saw 1.2 billion fewer malaria cases and 6.2 million fewer deaths globally in this period than would have occurred had rates stayed at the 2000 level. This remarkable reduction resulted from aggressive vector control, bed net distribution, effective treatment, and prevention with antimalarial agents.


Going forward, new priorities have been established. There’s a concerted effort to eliminate the transmissible form of malaria, the gametocyte, which silently circulates in humans, awaiting a mosquito bite that could perpetuate transmission. Adding a single dose of primaquine to artemisinin combination therapy to treat malaria will clear the gametocyte, a strategy that is now recommended.


There is renewed interest in the use of mass drug administration, which is treatment of the entire population in a geographic area with a curative dose of an antimalarial drug without first testing for infection. This would reduce the number of human carriers and in combination with vector control could further reduce transmission.



Young girl receiving treatment. Image provided by World Malaria Day. Used with permission.

Why does malaria, which is preventable and curable, remain such a difficult public health challenge to address today? 


Combating malaria, like any other public health scourge, needs resources. Malaria is generally endemic in poor regions, and these communities often cannot access the tools to prevent and treat this infection. For example, antimalarial stock outages in health clinics are not uncommon in endemic regions. A critical problem is how to provide and sustain funding to allow each country to have robust malaria control programs.


There are “hotspots” of malaria transmission – 80% of malaria deaths are concentrated in just 15 countries. These regions often have ecological conditions favoring large populations of the mosquito vector, combined with a high percentage of the human population infected (either clinically ill or as silent reservoirs). Reducing transmission in these regions will require multipronged, highly organized efforts. Often these countries are handicapped by low gross national incomes and weak health systems, both of which impede effective malaria control, treatment, and eradication programs.


Another looming problem is the emergence of artemisinin resistance in regions of South East Asia. Artemisinin is now the cornerstone drug for malaria combination therapy, and high levels of resistance would be disastrous. While there are new antimalarial compounds under development, they will require clinical trials evaluating efficacy and toxicity. Such trials are time consuming and expensive. Importantly, simple methods to track artemisinin resistance using molecular techniques are available, so we can follow this worrisome trend.


What should the public and policymakers take away from these developments?


The WHO has laid out a plan that addresses many of these issues, providing a strategy toward control and elimination. It sets out an ambitious target of reducing the global malaria burden by 90% by 2030.


Key principles include a tailored, country-by-country approach; enhanced in-country leadership with engagement of communities; improved monitoring and evaluation; health care equity; and further innovation to achieve these goals.


What are the most promising areas of current research? 


Vaccine development remains a very active area of research. The RTS,S/AS01 vaccine demonstrated 56% efficacy against clinical malaria in children aged 5-17 months and 30% efficacy in infants aged 6-12 weeks; it is now undergoing regulatory review. There are additional vaccines under development to provide higher efficacy, including a whole sporozoite vaccine, which is currently in clinical trials.


The theme for this year’s World Malaria Day is “end malaria for good.” In January, US President Barack Obama made a similar pledge in his State of the Union speech to lawmakers. How realistic is the goal of malaria eradication?


I am grateful that both President Bush and President Obama had global elimination of malaria on their radar screens and have supported efforts toward this end. As for how realistic this goal is, remember these key facts: it has widespread support from people all over the world, we have the technology and tools to achieve this end, and the data already show some success – an increasing number of countries are in pre-elimination status or have eliminated malaria completely. For example, the WHO European Region reported zero indigenous cases for the first time last year. In 2014, 16 additional countries reported zero indigenous cases.


We cannot lessen our commitment to fighting this parasite, which has evolved to be a powerful and formidable foe. The success stories I cited above should energize funders, public health workers, and all those involved to maintain a steady and committed focus to continue their work toward “the end of malaria for good.”


Featured Image Credit: Image provided by World Malaria Day. Used with permission.


The post World Malaria Day 2016 appeared first on OUPblog.



Who is “victorious?”: transformed American meanings of war and power

We lost the Vietnam War. There is little reasonable ambiguity about this judgment, nor can there be any apparent consolation. Losing, after all, is assuredly worse than winning. And victory is always better than defeat.


But what if there is no longer a meaningfully determinable way to calculate victory and defeat? What if it should turn out that the Iraq, Afghanistan, and now Syrian wars will have been fought without ever being able to ascertain the victory versus defeat outcomes? With such a future, we would have to abide, inter alia, a pattern of endlessly confused war terminations, a pattern potentially more destabilizing than one that would exhibit endlessly conspicuous failures.


Whatever our current views on the wars in Iraq, Afghanistan, and Syria, one analytic judgment is certain. Going forward, traditional notions of victory and defeat will have diminishing or little relevance in measuring our military operations. This instructive indictment also holds true (but even more so) for the ongoing and increasingly inchoate American “war on terror.” To be sure, this Bush-era term is no longer in fashion, but the underlying concept remains very much the same.


In the past, whenever our country’s wars had more-or-less readily identifiable beginnings and endings, declarations of victory and defeat could still make military and political sense – at least in principle. Today, however, when we are engaged in simultaneous interstate and counterterrorism conflicts that will never close with any ordinary war-terminating (treaty or armistice) agreements, and that are animated by compelling promises of power over death (“martyrdom”), such declarations are bound to be hollow or premature. Now, it will be difficult to challenge the conclusion that the core lines of demarcation between conflict and peace have become blurred, merely distracting, meaningless, and very, very gray.


For the future, there will likely be no recognizable enemy surrenders. Instead of parades and flowers, there will only be interconnected plateaus of exhaustion, suffering, and – of course – an exasperatingly empty rhetoric. Always – and this never really changes – we can expect the utterly humiliating and debilitating rhetoric.


What does this all really mean for America? At a minimum, it suggests that we should no longer cling desperately to manifestly outdated and futile strategic expectations. No, at some point, at least, truth will have to have its correct place, and truth, as we must already know, is always exculpatory.


The ritualistic pleas of both politicians and generals that we should always plod on till some glorious “victory” can no longer be grounded in any serious thought. Accepted too uncritically, these grotesquely vain exhortations would only lead the United States to further insolvency, and to a state of more-or-less absolute vulnerability. It’s very nice, of course, to plan to “make America great again,” but any such plan represents little more than a tactical obfuscation. It is always just an unspeakably shallow witticism.


Surprisingly, perhaps, there is also a significant upside to these changing meanings of victory and defeat. Here, what is true for America is also true for its principal enemies. Like us, these assorted foes must now also confront potentially huge homeland vulnerabilities in the absence of any prior military defeat.


Properly understood by our leaders, this largely unforeseen mutuality of weakness could soon be turned to our own critical advantage. Once we can acknowledge that our strategic goals may now have to be far more modest than traditional ideas of “victory,” our indispensable exercise of world power could begin to become much less visceral, and thus far more thoughtful.


In the final analysis, as the ancient Greeks and Macedonians had already recorded in their principal historical texts, war – though a “violent preceptor” – is ultimately an intellectual affair, a calculating contest of “mind over mind,” not one of mind over “matter.” Even if this country is not yet prepared for a more generally reinvigorating blast of Emersonian “high thinking,” our military plans will still need a far more explicit grounding in “mind.”


Image credit: “Iraq Tour 814” by Elliott Plack. CC BY-SA 2.0 via Flickr.


The post Who is “victorious?”: transformed American meanings of war and power appeared first on OUPblog.



Remembering Easter 1916 in 2016

Remembering the Easter Rising has never been a straightforward business. The first anniversary of the insurrection, commemorated at the ruins of the General Post Office on Easter Monday, 1917, descended into a riot. This year its centenary has been marked by dignified ceremonies and the largest public history and cultural event ever staged in Ireland and, in Northern Ireland, by political discord and menacing shows of paramilitary strength. Over the past century, the Rising’s divisiveness has remained its most salient feature.


When the Irish Free State came into existence in 1922, it rooted its legitimacy not in the Anglo-Irish Treaty that established its authority (and led to a bitter Civil War), or in the general election of 1918, which saw a majority in Ireland vote in favour of the Republic, but in the unmandated blood sacrifice of 1916. This did not prevent the widows of the Rising’s executed leaders from boycotting State commemoration of the event throughout the 1920s. Like other militant republicans, they regarded a partitioned State whose leaders swore fealty to the British Empire as an illegitimate entity.


Although Éamon de Valera’s anti-Treaty Fianna Fáil repudiated the Free State’s right to the ownership of the legacy of 1916, his party placed even greater emphasis on commemorating the Rising when it came to power in 1932. Despite their political differences, both regimes constructed a conservative vision of the most revolutionary moment in Irish history, one which played down the influence of radical impulses such as socialism, feminism and secularism. This was achieved by remembering 1916 within a Catholic perspective, with particular emphasis placed on the legacy of Patrick Pearse, whose powerful writings emphasised the idea of martyrdom.


The Irish State’s efforts to construct a usable memory of 1916 have often faltered. Against a background of improving North-South relations, Irish Taoiseach (Prime Minister) Seán Lemass’s efforts to project a modern civic patriotism in 1966 failed to displace decades of anti-partitionist grievance. In Northern Ireland, where the rebellion symbolised unfinished business rather than national sovereignty, the 50th anniversary was exploited by the rising Protestant demagogue Ian Paisley to inflame communal tensions. The troubles that followed further complicated the memory of the Rising as the Provisional IRA positioned itself as the heirs of the physical-force tradition sacralised in 1916. Against a background of sectarian violence in the North, the 75th anniversary in 1991 evoked little enthusiasm in the Irish Republic.



Image credit: The leaders of the 1916 Easter Rising were buried in the old prison yard of Arbour Hill prison. Photo by Domer48. Public domain via Wikimedia Commons.

In terms of its scale, popularity, and the extent of State involvement in the centenary, the parallels with 1966 seem the most apparent. There is one important difference, however: the desire to balance a celebratory remembrance of 1916 with a more sophisticated acknowledgement of the complexity of the historical event. Commemorations, as Roisín Higgins noted in her study of the fiftieth anniversary of the Easter Rising, tend to celebrate ‘the dominance of one historical narrative and the defeat of another.’ For most of the past century, remembering 1916 in the Irish Republic meant forgetting Home Rule, the alternative future that most Irish people took for granted prior to Easter 1916, and overlooking Irish nationalist participation in the First World War. In contrast, the present Irish government’s decision to remember 1916 as part of a wider Decade of Centenaries, incorporating the Home Rule crisis, the First World War, and the revolutionary violence that followed, has seen the political losers of the era reintegrated into the national narrative. Wider social developments, particularly the liberalisation of society following the collapse of Catholic authority, have contributed to this shift. Previously neglected fatalities of 1916 — such as civilians (who accounted for the majority of deaths during Easter week), policemen and Crown forces (many of them Irish) — are now being remembered. The involvement of women in the Rising has also received unprecedented attention, as have the radical social ideals of the revolutionary generation.


The most obvious continuity with earlier commemorations remains the State’s desire to fashion a usable memory of 1916. Remembering a revolutionary act of violence that aimed to destroy British rule in a manner intended to consolidate, rather than destabilise, the Peace Process has brought its own tensions, with the State’s commemorative programme initially criticised for evading the radicalism at the heart of the rebellion. In Northern Ireland, Unionists have declined to commemorate the Rising, while its remembrance within the nationalist community is dominated by Sinn Féin rather than civic organisations or the State.


Nor, unsurprisingly, has the shift from a republican to a pluralist memory of 1916 gone uncontested in the South. A memorial at the iconic Glasnevin Cemetery, which lists without distinction the names of the republican dead and British soldiers, has met with opposition from 1916 relatives’ groups and republican organisations, including Sinn Féin, notwithstanding its opposition to a hierarchy of victims of the Troubles. The response of the government to such controversies provides a striking indication of the extent to which the times are changing. Asserting that ‘all lives are equal’, the minister responsible for commemoration, Heather Humphreys — a Presbyterian from an Ulster Unionist background whose grandfather signed the Ulster Covenant — confirmed the government’s intention to host a State event at Grangegorman Cemetery to commemorate the British soldiers killed in the course of suppressing the Rising.


Who knows what the signatories of the Proclamation would have made of all this? Many Irish people will see it as a testament to the self-confidence and maturity of an Irish State which now looks to its closest neighbour as an equal partner rather than former oppressor.


The post Remembering Easter 1916 in 2016 appeared first on OUPblog.



Twenty-first-century Shakespeare

Forever demanding new performers to interpret them for new audiences under new circumstances, and continuing to elicit a rich worldwide profusion of editions, translations, commentaries, adaptations and spin-offs, Shakespeare’s works have never behaved like unchanging monuments about which nothing new remains to be said. The histories of important theatre companies need almost continuous rewriting. Both Shakespeare’s Globe and the Royal Shakespeare Company, for example, carried out some significant architectural self-reinvention between 2001 and 2015, and a fair few changes of artistic policy too. Current critics and artists move from bylines to obituaries in a sort of permanent melancholy background knell. (I remember adding ‘d.2000’ to the entry about Sir John Gielgud when the first edition of The Oxford Companion to Shakespeare was just going to press; Alan Howard’s ‘d.2015’ just missed the deadline for the second.) Yet neither Stanley Wells nor I could have anticipated the extent of changes in the culture at large, and to Shakespeare’s place within it, over the last fifteen years.


While the last two decades have seen a boom in biographies of Shakespeare, there have been no major archival discoveries about his life, though if the Cobbe portrait (brought to fresh public attention in 2009) is genuinely a likeness of Shakespeare we may be able to make some new inferences about his interactions with the aristocracy (more of which were convincingly teased out of his poem ‘The Phoenix and Turtle’ by James Bednarz in 2012). Recent theatre archaeology, meanwhile, notably at the site of the Theatre in Shoreditch (the main playing place of the Lord Chamberlain’s Men until they dismantled it and recycled its timbers to build the Globe in 1599), has tended to confirm earlier hypotheses rather than to overturn them. More attention is now devoted to new Shakespearean theatres than to old ones. These have included not just the remodelled Royal Shakespeare Theatre in Stratford (2010) and the indoor, neo-Jacobean Sam Wanamaker Playhouse in London (2014) but more novel structures elsewhere in the world. The remarkable Teatr Szekspirowski in Gdansk, Poland (2014), for example, a distant dream in 2001, can convert from being an indoor, seated, proscenium-arch venue to being an outdoor yard-based one (its shape based on that of the Fortune in London) thanks to its magnificent self-opening roof.



Winedale Shakespeare Festival by AJ LEON. CC BY 2.0 via Flickr.

The worldwide attention given to the opening of Gdansk’s new Shakespearean playhouse, itself designed to serve a well-established and growing international Shakespeare festival, is one minor symptom of a much larger change in Shakespeare’s status. In 2015, to think about Shakespeare primarily as the cultural property of the British, or even to regard him solely as the supreme literary figure of Anglophone culture, seems parochial as never before. Although Shakespeare continues to benefit from the dominance of English as a world language, more people now speak English as a second or third language than as a first, and globally at least as many performances of Shakespeare are given in translation as are offered in his native tongue. (German-language productions in Germany, for instance, alone outnumber English-language productions in Britain and Ireland.) Geopolitically, this has produced a definite shift in the balance of power in Shakespearean performance and scholarship. The years since 2001, for instance, have seen the formal establishment of the European Shakespeare Research Association (2007), the foundation of the Asian Shakespeare Association (2014), and the inauguration of the Asian Shakespeare Intercultural Archive (2010), a Singapore-based digital resource which provides online access to video recordings of a whole new wave of Shakespearean performances from Korea, Japan, Singapore, Taiwan, and mainland China. (In 2015, tellingly, even the loyally Stratford-focused RSC is embarking on a project to foster a new, more actor-friendly translation of the plays into Mandarin.) These developments became visible even to those British theatregoers who never venture beyond London in 2012, when the Cultural Olympiad that accompanied the London Olympics chose a World Shakespeare Festival as its central feature, and Shakespeare’s Globe contributed by hosting visiting productions from all over the world – memorably advertised as ‘36 plays in 36 languages.’


If Anglophone live performance has had its centrality to the world’s engagement with Shakespeare challenged over the last two decades, then so has live performance itself. Film and television have continued to adapt and appropriate Shakespeare (in Britain, in Hollywood, and ever more visibly in India and the Far East too), while theatre audiences at mainstream performances by large companies are more and more likely to find themselves in the company of television cameras, as more productions are digitally streamed in real time to screens around the world in a curious 2-D hybrid between live theatre and the cinema. The physical book, and even the library, meanwhile, have been equally decentred. Nowadays ‘digital Shakespeare’ is more likely to mean BuzzFeed quizzes, satirical memes, movie trailers, and live-performance tweets than specialist-content subscription sites in libraries.


Older scholars, then, have been fortunate to recruit colleagues who, among their many other qualifications for the job, are significantly younger than we are. It isn’t exactly that we feel ourselves to be closer to the Shakespeare of Richard Burbage and the First Folio than to that of Tom Hiddleston and YouTube, but it has been a definite advantage to have as colleagues brilliant commentators on 21st-century Shakespeare.


The post Twenty-first-century Shakespeare appeared first on OUPblog.



Is Buddhism paradoxical?

Buddhist literature is full of statements that sound paradoxical. In Mahāyāna sūtras, for instance, we repeatedly find claims of the form, “x is not x, therefore it is x.” This has led to the widespread idea that Buddhism, like some other religions, wants to point us in the direction of a reality transcending all intellectual understanding. But while this view of Buddhist thought may be common, it is rejected by most Buddhist thinkers. For it puts Buddhist teachings perilously close to Advaita Vedānta, the Indian school that claims that ultimately all is One. It also calls into question the idea that the Buddha taught the truth when he said that the cause of suffering is ignorance about impermanence and non-self. So for the Madhyamaka school, for instance, the point of the paradoxical-sounding statements is just to get us to stop engaging in metaphysical theorizing.


I have long favored this anti-metaphysical (or “semantic”) interpretation of Madhyamaka. This is what I had in mind when I said that for Madhyamaka, the ultimate truth is that there is no ultimate truth. Of course this itself sounds paradoxical. But this is paradox used ironically (Tom Tillemans calls this the “head-snapper” use): paradox as a rhetorical device that invites us to work out an ambiguity and resolve the seeming contradiction. In the case at hand this goes as follows: the first “ultimate truth” refers to whatever realization brings about final cessation of suffering, the second refers to the idea that there can be a theory that corresponds to the mind-independent nature of reality. And I’ve thought a similar strategy can be used to discharge all apparent contradictions in Madhyamaka.


Recently, though, there have been challenges to this way of resolving paradoxes in Madhyamaka, with Jay Garfield and Graham Priest proposing a dialetheist reading of Madhyamaka. Dialetheists hold that there can be contradictions that are true; they use a paraconsistent logic to prevent the “explosion” that results from the presence of a contradiction when our thinking is governed by the rules of classical logic. Now there is ample evidence that Indian Mādhyamikas accept the laws of classical logic. Indeed Candrakīrti says somewhere that anyone who accepts a contradiction is “crazy” (unmattaka). But Robert Sharf recently told me that Chinese Madhyamaka and its successor schools may be more friendly to dialetheism. I don’t read classical Chinese, so I’ll have to limit my discussion to Indian Buddhist philosophy. Could it be that Indian Mādhyamikas were wrong to reject the possibility of true contradictions, and that their arguments actually show reality to be, at heart, paradoxical in nature?
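

For readers meeting the term for the first time, ‘explosion’ (ex falso quodlibet) is the classical result that a single contradiction entails every proposition whatsoever. A standard derivation, for arbitrary statements p and q, runs:

1. p and not-p (the assumed contradiction)
2. p (from 1, conjunction elimination)
3. p or q (from 2, disjunction introduction, for any q at all)
4. not-p (from 1, conjunction elimination)
5. q (from 3 and 4, disjunctive syllogism)

Since q was arbitrary, everything follows and the system collapses into triviality. Paraconsistent logics such as Priest’s ‘Logic of Paradox’ block this collapse by rejecting the universal validity of disjunctive syllogism, which is what allows a dialetheist to accept some contradictions without thereby accepting everything.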


Here is a test case. Not just Madhyamaka but Mahāyāna in general holds that conceptualization is central to the ignorance that keeps us stuck in saṃsāra. This can be expressed as the claim that all conceptualization falsifies. Is this claim true? If it is, then since it uses concepts it is also false. So if these Buddhists are right about the roots of ignorance, then we are faced with a paradox. Is there any way they might get around this?



Seated Buddha Gandhara, by PHGCOM. CC BY-SA 4.0 via Wikimedia Commons.

Before I retired from Seoul National University, our department brought François Recanati from Paris to give an intensive seminar on contextualist semantics. There I learned of an approach to the theory of meaning that he calls radical contextualism. We all know that some statements don’t have a determinate meaning apart from a context in which they are asserted. The statement, “It’s raining here today” only succeeds in saying something – in picking out some state of affairs in the world – when it is spoken at a particular place and time. Radical contextualist semantics says this is true for all statements, not just those containing words like “here” and “now.” While we may think we can understand the meaning of a statement without knowing the context in which it is uttered, this is only because we imaginatively fill in a background in which someone might say it. This is not the place to go into the evidence supporting radical contextualist semantics. But suppose it were true. It offers a way to resolve the paradox involved in trying to say that all conceptualization falsifies.


When we utter a statement, we presuppose that there is some mind-independent truth-maker for our assertion. Just what would count as such a truth-maker will vary, however, depending on the interests and cognitive limitations of speaker and audience in a particular context. Madhyamaka claims that no known theory about the nature of those truth-makers really works. This is what stands behind their view that conceptualization falsifies. The difficulty comes when we try to say this. To say that all conceptualization falsifies, one must presuppose a mind-independent fact that makes it true – in which case there would be at least one such truth-maker. One may be able to demonstrate that what counts as doing the grounding in one context stands in need of further grounding in some other. But we cannot look for a demonstration that this holds universally. Radical contextualism explains why this should be. To ask for such a demonstration is to ask for an assertion that holds across all possible contexts – something the radical contextualist claims there cannot be. That there are no ultimate truth-makers is something that perhaps can be shown, but it cannot be said.


I once decided to call work that brings two philosophical traditions into dialogue in order to help solve philosophical problems “fusion philosophy.” Others don’t like “fusion,” and use “confluence” instead. In most cases of fusion or confluence philosophy, there’s an attempt to show that ideas from an Asian philosophical tradition can be used to help solve a problem arising in current philosophical thought. The present case is different: radical contextualism is a modern theory being recruited to help solve a problem arising in Buddhist philosophy. This sort of confluence is something we should look to see more of in the future.


Featured image credit: Confluence of the Tolminka and Zadlaščica, by Paul Asman. CC BY 2.0 via Flickr.


The post Is Buddhism paradoxical? appeared first on OUPblog.



April 23, 2016

The shambolic life of ‘shambles’

You just lost your job. Your partner broke up with you. You’re late on rent. Then, you dropped your iPhone in the toilet.


“My life’s in shambles!” you shout.


Had you so exclaimed, say, in an Anglo-Saxon village over 1,000 years ago, your fellow Old English speakers may have given you a puzzled look. “Your life’s in footstools?” they’d ask. “And what’s an iPhone?”


Some centuries later, had you cried out your despair in Chaucer’s London, your Middle English-speaking compatriots may have given you some sympathy: “Yup, the meat market is a tough trade”.


See, the word shambles has really changed over the years.


Chaos, omnishambles, and chairs

Today, shambles conveys a state of ‘confusion’ or ‘chaos’ – or a ‘hot mess’, more colloquially. The word enjoyed some special attention back in 2012. Then, Oxford Dictionaries named omnishambles – first used by Malcolm Tucker in the BBC’s The Thick of It – as its UK Word of the Year. The coinage later inspired the Twitter hashtag #RomneyShambles, which mocked 2012 US presidential candidate Mitt Romney after his gaffe about London’s Olympics preparations.


But for all its recent wordplay, the English language has long been creative with shambles.


The Oxford English Dictionary (OED) finds evidence of shambles all the way back in Old English. Back then, a shamble – or sceamel, among its many other spellings – named a ‘stool’ or a ‘footstool’. Anticipating shamble’s modern spelling, the Old English sc sounded more like the Modern English sh. And as far as I can tell, the b in shambles is what linguists would call ‘excrescent’ or ‘epenthetic’; that is, a non-etymological sound speakers add to the middle of certain words, just as many pronounce hamster as hampster.


Shamble, etymologists explain, was a common West Germanic borrowing of Latin’s scamellum, a diminutive of scamnum, a ‘bench’ or a ‘stool’. Modern German’s Schemel, for example, retains this earlier meaning of shamble, while the word took a much different direction in English – the direction of metaphorical extension.


Butchering shambles

In Old English, shamble was extended from ‘stool’ to a ‘table’ or ‘counter’ where goods were sold. By the 1300s, shambles signified a table or stall where meat, specifically, was sold; later a meat market more generally. In the mid-16th century, the record shows shambles naming not just where meat was sold but where it was butchered: a slaughterhouse. It’s around this time shambles seems to settle into its modern spelling – and its treatment as a singular noun in a plural form.


Butchery is a messy business: it’s no surprise that the OED finds the figurative usage of shambles as a ‘scene of blood’, as it gruesomely glosses it, by the late 16th century. Such a scene can seem chaotic and confused, so, by the start of the 20th century, shambles cleaned up the gore and took on its modern sense of ‘great confusion’.


The noun shamble also inspired the adjective shamble, ‘ungainly’, first cited in 1607 in the phrase shamble legs. One must straddle the legs to sit on a stool, hence the sort of bowleggedness of shamble. (The French bancal, literally ‘bench-legged’, shows a parallel development from its root, banc, ‘bench’.) From the adjective shamble English derives the verb to shamble, with its ‘awkward gait’. Possibly modeled on the form of symbolic, the ‘disorderly’ shambolic appears by the second half of the 20th century.


From ‘footstool’ to ‘chaos’, some might say all these changing meanings of shambles have made its life, well, a shambles – or at least wobbly like shamble legs. But I, for one, think the word has done its job well, illustrating how words are like that original shamble: stools, if you will, that prop up or provide support for all the changing ideas, needs, and realities we use language to express.


A version of this article originally appeared on the OxfordWords blog.


Featured Image: “Destruction in Homs” by Bo yaser. CC BY-SA 3.0 via Wikimedia Commons.


The post The shambolic life of ‘shambles’ appeared first on OUPblog.


