Oxford University Press's Blog

June 9, 2015

Unmasking Origen

“To be great is to be misunderstood”

–Ralph Waldo Emerson, “Self-Reliance”



If the degree of misunderstanding determines the greatness of a theologian, then Origen (c. 185-254 C.E.) ranks among the greatest. He was misunderstood in his own time and he continued to be misunderstood in subsequent centuries, resulting in his condemnation—or the condemnation of distortions of his ideas—at the Fifth Ecumenical Council at Constantinople in 553 C.E. Why has Origen been misunderstood? How do we understand him better? We need not agree with his theology, but we should at least do him the courtesy of trying to understand him before joining the chorus of his detractors or defenders.

First, Origen was misunderstood because of the complexity of his thought. He was a genius, and geniuses rarely operate within the prevailing modes of thought and rarely fit into simple conceptual compartments. They venture into uncharted territory. They experiment. They hold seemingly incompatible concepts in creative tension. Origen, for instance, affirms both free will and divine providence, both the reality of hell and its eventual destruction, as well as other theological and philosophical binaries. His speculative theology, creative exegesis, and ability to perceive “both/ands” where others see only “either/ors” naturally cause confusion and suspicion. Over the centuries the nuances of his thought have frequently been lost or flattened.

Second, Origen was misunderstood because of the complicated reception of his thought. His legacy was tainted by its simplification and misrepresentation by those who drew selectively on Origen for their own purposes without adequate attention to the subtlety of his thought. Given the sheer volume of Origen’s writings and the vastness of the subjects covered therein, it is not surprising that his theology would be appropriated in ways he would have disapproved of, and so rejections of Origen in subsequent centuries are often rejections of Origenism, that is, the ways Origen was selectively redeployed in later theological contexts. These misapprehensions, then, reflect later theological controversies, where Origen finds himself in the cross-hairs.

Third, Origen was misunderstood because of jealousy. Origen was famous for his prodigious intellect from childhood until his death, which came shortly after he was tortured for his faith. He was invited by international dignitaries, both ecclesial and political, to illuminate the scriptures and to teach sacred doctrine. His bishop in Alexandria, Demetrius, did not have Origen’s intellectual prowess and may have felt slighted by or jealous of his international fame. Origen was his catechist, after all, and Origen’s eclipse of Demetrius locally and internationally might have incited contempt. That might have contributed to Demetrius’s willingness to condemn Origen unfairly (in Origen’s eyes, at any rate), although it is difficult to determine his internal motivations.

Origen. Public domain via Wikimedia Commons.

So how do we understand Origen aright? First, we should contextualize his thought historically, geographically, philosophically, and conceptually. Historically, Origen wrote before the codification of orthodoxy in the major ecumenical creeds, so we should not hold him to those standards. Geographically, Origen’s thought shifts in significant ways, from his Alexandrian to his Caesarean phase. Philosophically, we should situate him in his Middle Platonic context without dismissing his theology as purely Platonic, especially since he gives decisive weight to the authority of scripture. Lastly, we will understand Origen better if we distinguish between his speculative theology, dogmatic or confessional theology, and exegetical theology, carefully tracing the areas of continuity and discontinuity between them. We need to understand him on his own terms, as he understood himself and his own work.

Second, we should read Origen rather than read about Origen. Ancient and contemporary perspectives have often been filtered through readings of Origen, with the result that interpretations of Origen are mistaken for Origen’s position. Henri de Lubac expresses the problem perfectly: “To see him at work: this, we must repeat, is what has been most lacking. Many of the allegations we have recalled would have fallen away on their own after reading him. But Origen is rarely read . . . except by fragments and without making an effort sufficient to understand him. Or else he is approached with prejudices.” (Henri de Lubac, History and Spirit: The Understanding of Scripture According to Origen, trans. Anne Englund Nash (San Francisco: Ignatius, 2007 [reprint]), 37-38). Too often evaluations of Origen are based on the judgment of others rather than direct engagement with his corpus.

Finally, we will understand Origen better if we recognize his internal motivations (as far as possible) and self-identification as a man of the church: “But I hope to be a man of the Church. I hope to be addressed not by the name of some heresiarch, but by the name of Christ. I hope to have his name, which is blessed upon earth. I desire, both in deed and in thought, both to be and to be called a Christian” (HomLc 16.6). As the son of a martyr, he was driven, I think, by the desire to honor the memory of his father by carrying on the work of teaching divine truths, and by comporting himself with seriousness and integrity. His prodigious accomplishments were fueled by his love of the Word of God and his love of his deceased father, his hero.

Origen will continue to be misunderstood, of course. Perhaps that is part of his greatness. Those only slightly familiar with his writings will apply the easy labels and familiar charges to him without serious investigation. Once we move behind and beyond the stock indictments, however, we will encounter a beautiful mind aflame with passion for divine truth and committed to teaching those willing to embark with him on the journey to union with God. Hopefully, if enough people see past the invective, Origen will cease to be the most misunderstood theologian in the history of Christianity. We might still disagree with him on many theological issues, but we will not easily dismiss him, for we will appreciate his unflagging devotion to finding meanings “worthy of God” in scripture for the purpose of spiritual transformation.

Image credit: The Secret Church by Trey Ratcliff. CC BY-NC-SA 2.0 via Flickr.

The post Unmasking Origen appeared first on OUPblog.


Giving up control

One of the great joys of classical composing is the plotting and planning of new sounds, harmonies, and rhythms. Many composers delight in working out exactly which instrument will sound when, which voice forms what part of a harmony, or how a motif will be created, twisted, and perhaps developed, morphed, or abandoned. Just like writers or filmmakers, many composers think ahead in order to work out the ebbs and flows of tension that create an arresting musical narrative. In many ways, this method of working stems from the natural impulse to control one’s creation—by writing out a precise score, the composer attempts to control what will happen at any given moment during the performance of the piece.

As an extension of this idea of a pre-planned narrative, it became ever more common in the twentieth century to fill a score with detailed markings. While scores by earlier star composers, such as Mozart or Beethoven, implied rather than explicitly spelled out how they imagined a musical phrase should be played, many scores by such twentieth-century heavyweights as Stockhausen or Nono, and indeed many other serialist composers, indicate very specific dynamics or tempos, and even specify the exact duration of a phrase in seconds. Pages are covered in markings, technical jargon, and footnotes to make the meaning of each phrase and bar crystal clear to the performer. In contemporary music circles the inclusion of such details is often seen as careful craftsmanship and an assertion of the composer’s artistic vision.

Bob Chilcott’s The Miracle of the Spring

However, given that these pieces are expected to be performed by living, breathing performers—who have spent a lifetime practising how to add a personal touch to each piece they play—in reality, such a fine degree of control is nearly impossible to achieve. No matter how detailed the score, a performance will inevitably be unique: each one takes place under different conditions, in different venues with different acoustics, performed by different musicians with different moods, skills, and artistic approaches. In that way, live music differs from recorded music—it can never happen in a vacuum. Each performance represents a unique moment in time that will never be repeated quite the same way. This quality of uniqueness is precisely what can make repeated live listening of a familiar piece so rewarding: the tension of the audience’s expectations, based on previous hearings, clashing with the reality of a new interpretation is an essential driving force of the experience.

In light of the unpredictability inherent in a performance, some composers have given up any pretence of control and officially handed the creative reins to the performers. By designing pieces that include musical ideas for performers to interpret in a uniquely personalised way, the composers can provide a rough framework for the piece but let the performers create something new and unique on the spot. Terry Riley’s seminal piece In C is a prime example of this practice: performers work their way through 53 musical motives, each player repeating these as many times as he or she wishes before moving on to the next one; the resulting overlap of sounds and repeated motives creates an ever-moving carpet of sound that can never be recreated. Today, a whole host of alternative compositional models exist to enable the performers’ spontaneity—such as graphic scores, which may only provide visual images to stimulate the performers’ imagination. Perhaps the most radical solution is expressed in John Cage’s 4’33’’, which presents the musicians with a few simple instructions and then a blank page; for 4 minutes and 33 seconds the performers and audience make their own music by listening to the sounds that occur during this time period.

However, these approaches raise the question of how much creative ownership of the piece the composer can still claim. If the performer actively creates content in response to a stimulus provided by the composer, how is that any different from responding to a story, a landscape, or a moment of human interaction, as any creator does? In an age when drawing on concepts provided by others is seen by many as a valid, even unavoidable way of gaining inspiration, who is the creator and who the performer? Many former members of rock bands have fought epic legal battles over who in the band composed a song and who merely performed it.

Gabriel Jackson’s To the Field of Stars

On the other hand, the opposite extreme can deny a performer’s sensitivity altogether: a pedantically notated score, cluttered with so many detailed instructions that it denies any artistic input from a player, might as well have been written for a computer program. If there is no space for musicians to imbue a piece with their own spirit, then what becomes of that tension, that unique experience of a live performance? Why perform the piece at all? Why not create a definitive recording instead?

Such thoughts are exaggerations of course, but they do highlight a common problem that composers face in the wake of these two composing traditions: how much control should they retain? How much leeway should they give to the performer? There are no right answers to these questions, but it is interesting to see how different composers handle such issues in their own way.

Oxford University Press composers are no exception to this: in Gabriel Jackson’s To the Field of Stars for SATB, percussion, and cello and Bob Chilcott’s The Miracle of the Spring for SATB and percussion, both composers use largely conventional notation methods to communicate the piece to the performers. However, for specific climactic sections, they allow the singers to repeat their lines freely and independently from the still strictly notated and metred accompanying lines—in Jackson’s case, sometimes even independently from each other or a conductor. By allowing some performers to act on their own artistic initiative while others play predetermined music, the composers ensure that these memorable moments of unpredictable shifting sounds will never be performed the same way twice, but still maintain a degree of control over the overall form.

The post Giving up control appeared first on OUPblog.


Six things you didn’t know about Brighton and the law

This coming weekend is the BIALL (British and Irish Association of Law Librarians) conference in Brighton. As always, the event looks set to be an engaging two days, with an excellent selection of speakers talking around the theme of ‘Collaboration, Co-operation and Connectivity.’ But how well do you know the host city? Read on to discover six examples of legal trivia about Brighton.

(1) The West Pier: protected starling roosts

On autumn evenings, the sky above the Brighton waterfront is filled with the black, undulating shape of a starling flock, thousands strong. The natural spectacle is called a murmuration. Starlings are designated as critically endangered in this country and are protected under the Wildlife and Countryside Act 1981. The act, which makes it illegal to intentionally kill or injure a starling, or to damage or destroy an active nest, protects the roosting site on Brighton’s West Pier from interference. The pier burnt down in 2003 and only the fire-damaged skeleton remains.

(2) Caroline Lucas’s seaside memento

When Caroline Lucas affirmed her position as Green Party candidate for Brighton Pavilion, she clutched a pebble from Brighton Beach, which she would carry with her wherever she went, to represent the interests of her constituents. Unfortunately, under the Coast Protection Act 1949 it is illegal to remove stones from a British beach. The erosion of Britain’s sea defences is a serious issue, due in part to members of the public removing stones by the barrel-load to landscape their gardens. Lucas duly returned the stone to its rightful place.

(3) Battle of the piers

The fire-damaged West Pier, built during the pleasure pier boom of the 1860s, was the first British pier to become Grade I listed. In 2001, it seemed the blackened wreck off Brighton’s coast would be returned to its former glory when the National Lottery fund pledged £14m towards the pier’s restoration. Alas, the Victorian structure remains dilapidated today. Brighton Pier, the fully-functioning, 21st-century gambling and gaming attraction off Brighton Beach, claimed unfair competition and subsequently quashed all hopes of rejuvenating the West Pier in the legal battle that followed.

(4) Cannabis culture: an untapped revenue stream

Modern Brighton is awash with shops offering smoking paraphernalia. These shops cater to the large counter-culture movement that wants cannabis legalised, citing repeated studies that show prohibition does more harm than good. In 2014, Caroline Lucas obtained over 130,000 signatures petitioning the government to review the Misuse of Drugs Act. The Green Party supports the ‘Copenhagen Model’ of cannabis legalisation, putting distribution in the hands of local government. If the campaign is successful, introducing European coffee-shop culture could bring a significant new revenue stream to Brighton’s flagging economy.

(5) First to tie the knot

At midnight on Saturday 29th March 2014, the Marriage (Same Sex Couples) Act came into effect. In Brighton, competition to be the first same-sex couple to tie the knot was fierce, with several night-weddings orchestrated perfectly so that the vows were spoken just as the clock struck twelve.

(6) Lewes bonfire night celebrations

The town of Lewes outside Brighton is infamous for the huge Bonfire Night celebrations on 5th November. Each year, 80,000 people descend on the small town, while the seven bonfire societies terrorize tourists with rockets and sinister face paint. The town has celebrated the foiling of the Gunpowder Plot with ale-fuelled revelry for over 400 years. In 1606, An Acte for a publique Thancksgiving to Almighty God everie yeere of the Fifte day of November was passed, proclaiming that the discovery of the Gunpowder Plot should ‘be held in a perpetual Remembrance.’ The townspeople have been true to the declaration ever since, marching through the streets in fancy dress, beating drums, and setting off fireworks with little regard for health and safety. In 2014, 14 people were arrested, four were taken to hospital, and a further 86 were injured.

Headline image credit: CC0 Public Domain via Pixabay.

The post Six things you didn’t know about Brighton and the law appeared first on OUPblog.


June 8, 2015

Before Wolf Hall: How Sir Walter Scott invented historical fiction

Historical fiction, the form Walter Scott is credited with inventing, is currently experiencing something of a renaissance. It has always been popular, of course, but it rarely enjoys high critical esteem. Now, however, thanks to Hilary Mantel’s controversial portraits of Thomas Cromwell (in Wolf Hall and Bring Up the Bodies), James Robertson’s multi-faceted studies of Scotland’s past (in The Fanatic and And the Land Lay Still), and Richard Flanagan’s The Narrow Road to the Deep North, winner of the 2014 Man Booker Prize, the genre has recovered serious ground, shrugging off the dubious associations of bag-wig, bodice, and the dressing-up box.

Part of the difficulty of finding a place for historical fiction lies with our belief that history (being fact) and fiction (being invention) are such completely different entities, when in truth the historian’s techniques (of retrospection, selection, and re-engagement) and tools (narration) are also those of the novelist. All history, however scrupulous its record, is written, like fiction, out of an excess of material, constrained and shaped by hindsight, and from a sense of significance unavailable to those living through it.

Women writers have long found success with historical fiction, and that too has on occasion been another obstacle to the form’s serious valuation. From the 1920s to the 1970s there was a flourishing female tradition. Several of its practitioners in Britain and North America were among the first generation of women university graduates; many saw active lives of public service during the First World War. For women writers like Naomi Mitchison, Hilda Lewis, Carola Oman, Dorothy Kathleen Broster, Eleanor Hibbert (who was both Jean Plaidy and Victoria Holt), Anya Seton, Margaret Irwin, Mary Renault, and Norah Lofts and their readers, the historical novel was a space for mapping alternative histories, for expressing dissatisfaction with History’s orthodox and professional faces, and for supplementing the official record with the lives of the marginalized or those who appear not to have been there at all. Whose story will history tell? How many stories will it tell? The academy — what Catherine Morland, the heroine of Jane Austen’s Northanger Abbey, called ‘real solemn history’ — only caught up around 1975. Under some circumstances, it is because fiction is invented that it can inform us about reality; historical novels can, as these women writers found, expand our understanding of history by means of the story that pretends to be true.

Sir Walter Scott by Henry Raeburn, 1822. Scottish National Portrait Gallery. National Galleries of Scotland. Public domain via Wikimedia Commons.

Ideas about the nature and uses of history, above all about its impact on the lives of men and women, are woven into historical fiction. If the professional historian need not always reflect on the purposes of history, the historical novelist should. In the best examples, a complex sense of the past makes for fiction that engages us critically or ethically with historical issues. Waverley is a novel about war and the pity and waste of war. It is set at precisely the same distance in the past for Scott as Flanagan’s The Narrow Road to the Deep North, about a group of Australian PoWs working on the Thailand-Burma ‘Death’ Railway in the early 1940s. In Scott’s case, the decisive historical event is the 1745-6 Jacobite rising, the last civil war fought on mainland Britain.

Edward Waverley, Scott’s romantic protagonist, is caught up in events that prove beyond his ability to comprehend. He earns our compassion not because he is heroic (he is not) but because in him we see represented what we understand to be our own condition of helplessness and moral inadequacy. Waverley refines our historical intelligence through the failures and education of his own. He makes terrible mistakes, he fails the men under his command, he has their blood on his hands. A fictional character, Waverley travels through history and out the other side. Others stay sealed within history – actual characters from the past, like Charles Edward Stuart, of course; yet others, fictional creations like the MacIvors, in standing for a particular fusion of history and imagination, develop Scott’s enduring fascination with fanaticism. Fanaticism implied for Scott a particularly malign relationship to history – one that refuses to acknowledge that the primary law of history is change.

Re-reading Scott’s novel in the early twenty-first century has been for me a powerful experience. Like Scott in 1814, we in 2015 are caught up in reassessing the past, as we continue the long commemoration of the centenary of the First World War and confront the legacies of twentieth-century conflicts across the globe. We are now exactly the same distance from the events of 1914-18 as those then active were from Waterloo. The events of 1745-6 had been a serious threat to British unity and were followed by terrible reprisals – the brutal suppression of the Highland clan system and the systematic destruction of an older way of life. More than sixty years later, Scott approached his subject under the shadow of another war – Britain’s long campaign against Napoleon, during which the exceptional heroism of the Highland regiments, now a weapon turned against enemies abroad, won them adulation at home. History, Scott knew, reaches long fingers into the future and has a way of returning in unexpected shapes.

Featured image: Old Books, CC0 via Pixabay.

The post Before Wolf Hall: How Sir Walter Scott invented historical fiction appeared first on OUPblog.


Roger Luckhurst’s top 10 vampire films

There are many film adaptations of Bram Stoker’s Dracula; many, of course, that are rubbish. If you need fresh blood and your faith restored that there is still life to be drained from the vampire trope, here are ten recommendations for films that rework Stoker’s vampire in innovative and inventive ways.

1. Vampyr (Carl Dreyer, 1932)

The story of Dracula on film tends to jump from Murnau’s silent masterpiece, Nosferatu (1922), to the invention of the ‘horror’ film with Tod Browning’s Dracula (1931). Interestingly, Bram Stoker’s widow sued the makers of Nosferatu for breach of copyright and set about trying to destroy every print of the film in Europe: luckily for us, she didn’t succeed. Universal at least remembered to pay Florence Stoker for the rights, and safely launched Bela Lugosi in the iconic form of the Count. Amidst all this noise, Dreyer’s wondrous film Vampyr sometimes gets forgotten, and yet it is a delirious, dream-like experience full of striking, unforgettable imagery. Go compare!

2. Dracula (Terence Fisher, 1958)

Hammer Films reinvented itself as a ‘House of Horror’ with this splash of vivid Technicolor gore – shocking riches of gaudy colour in a drab post-war England. Christopher Lee embodies the model of the Count after the war, a relentless menace played off against the febrile and neurotic Van Helsing of Peter Cushing. It feels like an eloquent commentary on England’s decline somehow, full of an odd nostalgia for the life and death struggles of the past.

3. Cuadecuc, vampir (Pere Portabella, 1971)

A rare and subversive miracle of a film. Portabella, one of Luis Buñuel’s producers, had been effectively banned from making films in Spain by the fascist regime. He shot this film on the set of Jess Franco’s Count Dracula, which stars Christopher Lee. It is a scratchy, over-exposed black and white avant-garde poaching of images from a slick Technicolor pulp. Scenes are played out from Franco’s film, yet Portabella’s camera keeps moving after the action stops, gliding into the wings, revealing the rickety wooden sets, the lights and smoke machines that generate all that fake Gothic atmosphere. We see Lee laughing and joking as he is made up as the Count. It was banned in Spain: everyone understood this Count Dracula to be a portrait of the undead fascist dictator General Franco, who finally died in 1975.

4. Martin (George Romero, 1978)

The perennial Pittsburgh outsider George Romero is better known for inventing the modern zombie with Night of the Living Dead (1968), yet his vampire film is a brilliant revision of the whole genre. We are never quite sure if the misfit teenager Martin is actually a vampire, a sexual neurotic, a serial killer, or just a monster created by the cracked religious fantasies of his crazy family. An extraordinary evocation of the collapse of the steel belt in America, too, undeadness a product of post-industrial ruin. After this, it becomes extremely hard to take any Count Dracula seriously, so effectively does it modernize the vampire trope. This is why Coppola’s Bram Stoker’s Dracula gets nowhere near this list.

5. Pure Blood (Luis Ospina, 1982)

Colombian artist and activist Ospina made a short film called Vampires of Poverty in 1977, a rather heavy-handed satire on documentary-makers and photographers who trade on beautiful images of the urban poor. He took this one step further with the film Pure Blood, about the abduction and exsanguination of the urban poor by a couple of serial killers. It is uninterested in the rhythms of tension and release typical of genre horror films, and is deliberately slow and alienating: it insists, however, that B-movie vampires can be a resource for political films.

6. Cronos (Guillermo del Toro, 1993)

Del Toro went on to become a Hollywood blockbuster giant, and his Blade films are amongst the best of their kind. Before that, he made this almost perfect Mexican vampire film, a moving story of the love of a small child for her accidentally vampirized granddad, who has discovered a mysterious alchemical device that feeds on blood. There is another great political story here about the vampirism of North America on Latin America, as a super-rich gringo seeks this device that promises eternal life amongst the barrios. Unlike Ospina’s rather serious and laborious film, Cronos has a lightness of touch and a wonderful mordant wit.

7. Daybreakers (Spierig brothers, 2009)

The B-movie still gets made, and it is still the place where startlingly clever ideas can be worked out in low down and dirty narratives, a long way from twitchy executives looking at the bottom line. Daybreakers is set in a future where 95% of the Earth’s population are vampires and the remaining 5% of humans are farmed and commoditized for their blood. It has a great cast (Willem Dafoe, Ethan Hawke) who clearly enjoy their subversive slumming. Vampirism becomes an allegory of growing inequality: it spoke eloquently to a post-crash world with a sly and cynical wit.

8. Only Lovers Left Alive (Jim Jarmusch, 2013)

Jarmusch’s ultra-hip, slow and deadpan style is not to everyone’s taste. In this film it fits the languid account of two good-looking vampires living out a privileged but dwindling existence in the ruins of Detroit. Their options diminishing, they eventually travel back to the old world, with the last section set in the winding streets of Tangiers. The locations are used to impressive effect and you’ll either love the erotic languor of Tilda Swinton and Tom Hiddleston or hate it. It seems to announce the exhaustion of the vampire genre, a living on after the end of something. That, of course, is a risky thing to propose…

9. What We Do in the Shadows (Jemaine Clement and Taika Waititi, 2014)

A spoof ‘reality’ documentary on the flat-share of four vampires in New Zealand, which shows, to wonderful comic effect and with a judicious dash of CGI special effects, how the vampire can be reinvented. After the tiresome culture wars between the sparkly vampires of Twilight and the sexy vampires of True Blood, this comedy feels like it has found new avenues of vampire life to exploit.

10. A Girl Walks Home Alone at Night (Ana Lily Amirpour, 2014)

Stark, high-contrast black and white images reveal the strange and wonderful story of a young woman vampire in contemporary Iran. Maybe don’t try harassing her as she walks home alone at night? Stoker exploited fantasies about the East, the ‘whirlpool’ from which pollution would infect the West. One of the delights of world cinema is to see how the vampire trope is reworked for different cultures and locales, subverting Stoker’s conservative impulses.

Featured image: Chicago Theatre by gautherottiphaine. CC0 via Pixabay.

The post Roger Luckhurst’s top 10 vampire films appeared first on OUPblog.


Changing languages

In the literature on language death and language renewal, two cases come up again and again: Irish and Hebrew. Mention of the former language is usually attended by a whiff of disapproval. It was abandoned relatively recently by a majority of the Irish people in favour of English, and hence is quoted as an example of a people rejecting their heritage. Hebrew, on the other hand, is presented as a model of linguistic good behaviour: not only was it not rejected by its own people, it was even revived after being dead for more than two thousand years, and is now thriving.

Irish people feel a bit inadequate when confronted with the case of Israel. “Why can’t you be more like the Israelis?” we are constantly being asked, in the same tone that an exasperated parent would use to a difficult child. Usually, we just hang our heads and mutter something about 800 years of colonization, or remind the interlocutor that James Joyce and/or U2 are Irish. I’m going to try a different tack here.

In language, as in life, it sometimes happens that a certain code outlives its use. By that, I don’t mean that the language has no practical value. From a purely linguistic point of view, no language is more or less practical than another. The numeral system of Latin seems unwieldy to us, but it worked fine for the men and women who conquered half of Europe. As somebody who teaches and does research on Irish, I get weary of people telling me with an air of infallible authority that English is logical, or that Irish is a feminine language, or more spiritual than English. What they really mean is that in the world we live in, English is the language of computer programming, and for most of us, our only encounter with Irish is at a folk-festival, or in the picturesque surroundings of the Atlantic seacoast. But that is an accident of history, unconnected to the languages themselves, which are just sequences of arbitrary sounds.

Front cover of first issue of Gaelic Journal. Public domain via Wikimedia Commons.

So what do I mean by a code outliving its use, then? Language does not exist independently of society and culture, and if a community comes under intense pressure from another one, it has to adapt to survive. That may involve changing technology, or religion, or language. In the case of Ireland, the Gaelic population came under such pressure in the 18th and 19th centuries. The Irish language didn’t fail its speakers, but another language, English, offered opportunities that Irish couldn’t compete with. And I’m not just talking about material benefits, such as employment or social advance. English offered the possibility of access to the world of print, and all that that entailed, new forms of cultural, political, and literary expression that could find no outlet at that time in Irish. Maybe English also offered a kind of escape, particularly for those Irish who emigrated after the Great Famine. Sometimes language, like history, becomes a burden.

“Yes,” you might say, “but what about the Jews who revived Hebrew? They didn’t go looking for new forms of self-expression. They didn’t try to throw off the shackles of the past.” Actually, they did. Many of the immigrants to Israel were native speakers of Yiddish. They exchanged their native language for Hebrew. Apart from the sheer inconvenience of this exchange for new arrivals, there was a huge cultural and emotional loss. Yiddish had been spoken for centuries by Jewish communities in Europe. It had a rich literature and folklore. Most of that heritage has been lost in the new, Hebrew-speaking society of modern Israel. There were good reasons for abandoning one language and taking on another; I’m not questioning that. Hebrew offers new forms of self-expression that Yiddish lacked, and it does not carry the negative connotations of the latter language, associated as it was with oppression of Jews in eastern Europe. But there was a huge loss as well. I imagine that most younger Israelis would have to read the works of the Nobel Laureate Isaac Bashevis Singer in translation. At least the Irish get to read W.B. Yeats and Seamus Heaney in the original.

Finally, it’s worth bearing in mind, even if you’re a linguist, that there is more to life than language, and that a lot can be carried over from one language to another. Many of the Gaelic songs of the past survived in macaronic form or in English translation. Hiberno-English, the result of the crossing of two linguistic strains, produced a fine literature in its heyday, although now it too is on the brink of extinction, as it is being replaced by a new, Americanized, global dialect. Change is part of language: you can embrace it, or resist it, but there’s no escaping it.

Header image credit: Cuneiform inscription on the left of the temple door by EvgenyGenkin. CC-BY-SA-4.0 via Wikimedia Commons.

The post Changing languages appeared first on OUPblog.


Vienna and the abolition of the slave trade

In April 1822, sailors from the British warships HMS Iphigenia and HMS Myrmidon, after a brief but fierce fight, captured two Spanish and three French slave ships off the coast of what is now Nigeria. Prize crews sailed the ships to Freetown in Sierra Leone, where the international mixed commission which was competent to hear cases regarding the slave trade decided to liberate the slaves found on the Spanish schooners, as well as those slaves found on a Portuguese ship which the British naval vessels had taken earlier.

With this story, the American international lawyer Jenny Martinez opens her book on the abolition of the slave trade in the 19th century, The Slave Trade and the Origins of International Human Rights Law. The actions of the British navy to fight the international, and particularly the Atlantic, slave trade, the establishment of mixed commissions, and the ensuing liberation of almost 80,000 slaves by those international tribunals were pursuant to a growing web of treaties which prohibited the international slave trade, and eventually slavery itself. In this development, the Vienna Congress of 1814–1815 played a seminal role. Among the treaties which were produced at Vienna was the Declaration of the Eight Courts Relative to the Universal Abolition of the Slave Trade of 8 February 1815 (63 CTS 473).

The Declaration was signed by the seven leading powers of the anti-Napoleonic coalition – Austria, Britain, Prussia, Russia, Portugal, Spain, and Sweden – as well as France. The Declaration was an achievement of British diplomacy, and of its major representative at Vienna, Robert Stewart, Lord Castlereagh (1769–1822).

If, during the early 19th century, Britain had become the champion among European States for the abolition of the slave trade, this was largely the merit of a movement which sprang from civil society. During the final decades of the 18th century, Britain, as well as the American colonies, saw the emergence of a strong and vocal movement that strove for the abolition of slavery. This movement, which was also driven by economic motives and inspired by sensitivities about human dignity flowing from the Enlightenment, had its strongest roots in radical, puritan Protestant circles. After having scored a major success before the courts in Somerset v. Stewart in 1772 (98 English Reports 499), wherein the holding of slaves on English soil was banned, the abolition movement turned its guns against the international slave trade. After Parliament had rejected several proposals to enact legislation to that effect, the fortunes of the movement changed when it allied its cause to the war effort against France and targeted the French slave trade. In 1806, Parliament passed the Foreign Slave Trade Act (46 Geo. 3, c. 52 (Eng.)), which forbade British subjects to trade in slaves with France or its allies. A year later, the Act for the Abolition of the Slave Trade (47 Geo. 3, c. 36 (Eng.)) expanded the prohibition to the slave trade as a whole.

During the Napoleonic wars and the War of 1812 against the United States, the British navy used its rights under the laws of war and neutrality to act against enemy and neutral vessels to stop the slave trade. As this occurred in contravention to established neutrality law and carried dangers for the relations with different neutral countries, British diplomacy endeavoured to conclude bilateral treaties with other powers whereby these powers accepted partial restrictions against trade in certain geographical areas, as in the treaties with Portugal of Rio de Janeiro of 19 February 1810 (61 CTS 41-1) and of Vienna of 21 January 1815 (63 CTS 453).

The prospect of peace, however, forced the antislavery movement to change tack. The First Paris Peace Treaty of 30 May 1814 (63 CTS 171) between the allies and France made this painfully clear. In one of the separate articles, concluded between Britain and France, it was stipulated that France would join Britain in its endeavour to attain a universal prohibition of the slave trade. The same article, however, made allowances for the French to wait for the enactment of national legislation to quell the slave trade for five more years, which in practical terms meant a step back from what Britain had been doing against the French slave trade during the war. This clause triggered a massive campaign within Britain which helped to force the hand of British diplomacy to push the issue at Vienna.

The Slave Trade by François-Auguste Biard. Wilberforce House Museum, Hull. Public domain via Wikimedia Commons.

The British followed a dual strategy. On the one hand, they tried to move towards a general, multilateral convention against the slave trade. This proved to be particularly difficult because of French resistance. On the other hand, success was attained by including prohibitions of the slave trade in bilateral treaties with other countries, including – apart from Portugal – the Netherlands and the US.

In the face of French resistance, the British failed to attain an immediate, general prohibition of the slave trade. But on 8 February 1815 the eight leading powers did sign a Declaration which condemned the slave trade as ‘repugnant to the principles of humanity and universal morality’, making direct reference to public outcry against it in ‘all civilised nations’. Regardless of this strong language, the Declaration remained more at the level of expressing a lofty cause than imposing concrete obligations, except for the commitment to start negotiations about general abolition.

Although the abolition movement felt a lot of disappointment over this result, in the most ironic of ways it served to clear a major obstacle towards general abolition. After Napoleon reassumed power upon his escape from Elba in March 1815, he moved to abolish the slave trade in order to placate the Vienna Congress. Whereas no general treaty of abolition of the slave trade materialised after Vienna, over the next decades many States moved to enact abolition through national legislation. As a more direct result of the Vienna regulation, in 1817 Britain succeeded in concluding several bilateral treaties by which mixed commissions were set up to deal with slave trade issues. It was under two of these treaties that the Freetown mixed commission operated in the above-mentioned case of 1822. On 20 December 1841, the five great powers of Europe concluded a treaty whereby they committed themselves to promote the abolition of the slave trade and recognised a right to stop and search each other’s merchant vessels in certain waters in order to enforce it (92 CTS 437).

The Vienna Declaration has been rightly credited with having introduced abolition of the slave trade as a principle in general international law. As such, it became an inspiration and point of reference for the fight for general abolition.

The post Vienna and the abolition of the slave trade appeared first on OUPblog.


Sexual deception in orchids

Alfred, Lord Tennyson once said “In the spring a young man’s fancy lightly turns to thoughts of love”, but he could have said the same for insects too. Male insects will be following the scent of females, looking for a partner, but not every female is what she seems to be.

It might look like the orchid is getting some unwanted attention in the video below, but it’s actually the bee that’s the victim. The orchid has released complex scents to fool the bee into thinking it’s meeting a female. Why would an orchid do this? It’s because the orchid is hoping to reproduce, and this is a cunning ploy to share pollen with other orchids like itself.

We’re used to the idea that a flower provides food to insects to encourage them to carry pollen to other flowers. It works for many plants, but it isn’t always a good deal. Insects can pick up nectar from many plants, but pollen only works if it’s delivered to a plant of the right species. So a plant keeps having to produce sugars to feed insects in the hope that some of them will eventually find the right plant to feed on.

Some orchids don’t bother with all that effort. Instead of welcoming many insects, they’ve tuned their messages to appeal to just one.

The orchids emit a perfume that smells like a pheromone from a female insect. This can attract males from quite a distance. When they’re close enough they can see the flowers: the orchids provide a landing pad that mimics a female. The combination of sight and smell pulls the males in. It works, though not for every insect, and that might be to the orchid’s advantage.

Everyone likes food, so a flower offering nectar can attract a lot of different kinds of insects. However, to be attractive to thynnine wasps you really have to be a thynnine wasp yourself. The orchid Chiloglottis attracts only thynnine wasps, so it doesn’t get a huge number of visitors, but when a wasp lands it’ll be a thynnine wasp looking for love. When it’s disappointed it’ll fly off again looking for another partner and it’s not so likely to land on a random plant. It’ll either find a female wasp or another Chiloglottis. So while it might not get a lot of pollinators, the ones it does get are more likely to deliver the pollen to the right destination.


You might think this kind of thing would annoy an insect, and you’d be right. Scientists captured and marked wasps that visited sexually deceptive orchids to see how they reacted to the flowers. They found that after a short while, the wasps learned to avoid the area and looked for females elsewhere. However, they also found that the next day the wasps would be back again, showing short-term, but not long-term, learning. Either their brains weren’t capable of long-term learning, or else when they got a whiff of scent the next morning they stopped thinking with their brains.

It’s the appeal to a specific insect that has helped drive diversity among orchids. If an orchid mutates to produce a different scent, it might attract a second pollinator, and if that pollinator hits another orchid with a similar mutation then the two could produce offspring with flowers that appeal to the new insect and not the old pollinator. Once that change in scent happens, the two sub-groups of orchids become genetically isolated from each other, because each pollinator only visits one of them. It seems that even a small shift in chemistry can be enough to create a barrier between species.

It means that specific orchid species can be limited by their pollinators. After all, they cannot be pollinated in places where their pollinators don’t live. So it seems sexual deception ties a plant very closely to the insect it exploits. Despite this, sexual deception must be a good idea as it has independently evolved many times.

It seems that orchids in the Mediterranean were able to sexually exploit insects around ten million years ago, combining chemical lures with more complicated petals, like the labellum, which is effectively the landing pad for an insect. Independently, in Australia, orchids have evolved to exploit quite a few insects. Scientists found that even fungus gnats were good enough pollinators for one orchid to exploit. Such is the success of deception in Australia that some scientists have recently proposed that the combination of isolation and climate makes the continent a ‘perfect storm’ for deception.

But while sexual deception for orchids might be a free ride, it’s not always easy. Making the complex chemicals to lure insects takes a lot of complicated chemistry. Recently scientists found that the visible spectrum wasn’t enough for Chiloglottis orchids. To make their perfume, they need UV-B.

UV-B is in the ultraviolet range of the electromagnetic spectrum. It’s the part of sunlight that gives you a rich tan or, if you skimp on the sunscreen, the red glow of sunburn. A lot of it is blocked by the ozone layer, but the fraction that comes through is essential for orchids to make chiloglottones, the chemicals that attract insects. However, UV-B light is part of sunlight. While we can walk into the shade, a plant cannot. Making the scent needs a lot of resources that the plant cannot afford to waste, so how can the plant avoid making the perfume until it is ready? The answer is in the colour of its petals.

The colour comes from how the petals reflect light. We see the light that gets reflected, so not all light is treated equally. When it comes to UV-B, the petals block it. Tests on wavelengths showed that the petals were particularly good at blocking the short-wavelengths of ultraviolet light. The labellum, the landing pad that makes the scent, only sees ultraviolet when the flower opens up. The plant doesn’t waste any energy making chemicals to attract an insect to a closed bud.

It’s easy to think of plants as simple and passive. However, they are not passive at all; they simply live on a different timescale. Evolutionary biologists love orchids because their diversity shows that plants are as capable of exploiting any niche as anything else alive. They are extraordinarily diverse. There are more species of orchid than there are of birds and mammals combined. Nor are they simple. They’ll use anything they can to get an advantage over their competitors, synthesising fantastically complex chemicals with an ease that an industrial chemist can only envy. The sexually deceptive orchids have combined their traits to create an irresistible lure that ensures their reproductive success. The unfortunate insects, on the other hand, will have to make a bit more of an effort.

Featured image credit: “Bee Orchid – Ophrys apifera”, by Björn S. CC-BY-SA-2.0 via Flickr.

The post Sexual deception in orchids appeared first on OUPblog.


June 7, 2015

Party games: coalitions in British politics

The general election of May 2015 brought an end to five years of coalition government in Britain. The Cameron-Clegg coalition of 2010 to 2015 prompted much comment and speculation about the future of the British party system and the two-party politics that had seemed to dominate the period since 1945. A longer historical perspective, however, throws an interesting light on such questions, I think.

Famously, Benjamin Disraeli declared to the House of Commons in December 1852, against the background of a violent thunderstorm, that “England does not love coalitions”. The Liberal leader and prime minister Herbert Asquith pronounced that “nothing is so demoralising to the tone of public life or so belittling to the stature of public men as the atmosphere of a coalition”. The term coalition has often carried strongly negative connotations. Yet coalitions have not been rare in British politics since the early 19th century. What do they tell us about the party system? What lessons do earlier coalition governments offer to those interested in the nature of the party system?

Let me suggest that there have been two types of coalition governments over the last 160 years. Most obviously, there are those coalitions formed at moments of national crisis or emergency. Most often this has occurred in the context of war. Asquith and Lloyd George’s coalitions of World War One and Winston Churchill’s coalition ministry of 1940-5 clearly fall into this category. Ramsay MacDonald’s “National Government”, a coalition in all but name, was formed in 1931 amidst an economic crisis. Yet there have been other peace-time coalitions of a rather different kind. These coalitions portend or foreshadow fundamental shifts in the alignment of political parties. The Aberdeen coalition of 1852-5 is an example of this. Supposedly bringing together “a distillation of talent”, the Aberdeen coalition contained Whigs, Liberals, and Peelites. William Gladstone, Chancellor of the Exchequer under Lord Aberdeen, called it “a mixed government”. Rather than being the result of war, the Aberdeen coalition was brought down by war, the conflict in the Crimea producing vivid reports of military and logistical mismanagement. Yet the Aberdeen coalition anticipated the merger of party elements that came to form the parliamentary Liberal party in 1859. Similarly, the 1895 coalition government of Lord Salisbury marked a profound process of party realignment, as Conservatives and Liberal Unionists came together to form the Conservative Unionist party. Lloyd George’s peace-time coalition after 1918 also signalled party realignment, as a fractured Liberal party gradually gave way to the Labour party as the major opponent of Conservatism.

The coalition (Aberdeen ministry) of 1854 as painted by Sir John Gilbert (1855). Public domain via Wikimedia Commons.

What historical lessons do such peace time coalitions, foreshadowing party realignment, suggest? First, they show how the prospect of the next election hangs over the coalition like the sword of Damocles. Secondly, they reveal that entering coalition government is far easier than exiting from it gracefully. And finally, they show coalition relations are easier to maintain the closer one is to the centre of power. Coalition in the cabinet is easier to sustain than in parliament, and coalition relations are easier to sustain in Westminster than in the constituencies. In short, retribution for the failures of coalition seeps in from the grass roots. The weight of these lessons was all the greater in 2010 as, unlike the coalitions of 1895, 1918, and 1931, coalition was not endorsed by the electorate. It was the result of hurried private negotiations over a few hectic days. Also, when the Liberals entered coalitions in 1895, 1918, and 1931 it was a section of the Liberal party that combined with other parties. In 2010 it was the Liberal Democratic party as a whole that entered coalition.

What light does all this throw on the Cameron-Clegg coalition of 2010-15? It was formed and presented as a coalition in the context of a national financial emergency: the first kind of peace-time coalition I have described. But it transformed into a coalition of the second kind, a coalition portending profound shifts in party alignment. Retribution from the grass roots fell upon the Liberal Democrats with a vengeance in May 2015. Equally importantly, Scotland became a virtual one-party state, with the SNP eradicating almost all Labour representation north of the border. That UKIP received about 4 million votes signalled a further seismic shift in popular party sentiment.

Two broad historical conclusions might be drawn, I think. First, coalition government is not an abhorrent occurrence in British politics. Indeed, in the sixty-year period between 1885 and 1945, a single party commanded a Commons majority in government for only ten years; for fifty years of that period coalition or minority governments held office. Perhaps only in 1859-1880 and 1945-1979 has a simple binary two-party alignment been dominant. Secondly, what kind of coalition is formed can indicate and then precipitate impending profound changes in the longer-term nature of party alignment.

Headline image credit: Palace of Westminster at dusk, by chensiyuan. CC-BY-SA-4.0 via Wikimedia Commons.

The post Party games: coalitions in British politics appeared first on OUPblog.


The 2015 General Rejection? Disaffected democrats and democratic drift

Political science and journalistic commentaries are full of woe about the abject state of modern politics and the extent of the gap that has supposedly emerged between the governors and the governed. In this context, the general election of 7 May 2015 might have been expected to deliver a General Rejection of mainstream democratic politics, but did this really happen? Is British democracy in crisis? Matthew Flinders argues that although interpretations of ‘crisis’ are exaggerated, there is a serious problem with democracy.

The Hansard Society’s ‘Audit of Political Engagement’ in 2015 was published just weeks before the General Election and offered further evidence of growing public disillusionment with (and disengagement from) mainstream democratic politics. Just 49% of the public said they were certain to vote, and this figure fell to just 16% for those aged 18-24 (only 22% said they had undertaken a political activity in the past year); just 30% of those surveyed claimed to be a strong supporter of a political party; the number of those who believed they were registered to vote declined and the number claiming they had not registered increased. Only 61% of those surveyed thought that Parliament was ‘essential to our democracy’ (a decline of six percentage points), and 68% thought our system of governing needs improvement. Just 18% think that standards of conduct in office are high. Combine this with the surge in support for the ‘insurgent’ parties that were widely seen as fuelling anti-politics, and 7 May might well have been expected to deliver a General Rejection of democratic politics.

10 Downing Street door by robertsharp. CC BY 2.0 via Wikimedia Commons.

But is there evidence that a General Rejection occurred? More specifically, what can we actually learn about ‘the politics of political disaffection’ from the election, in terms of both the campaigning strategies of the parties and the voting behaviour of the public? It would at this point be possible to highlight the impressive turnout, especially amongst the young; or the high levels of electoral volatility — the highest since the Second World War — that resulted in a high number of MPs (92 to be exact) losing their seats. But such answers would be too obvious; far better to focus on quite different issues.

The first issue is contextual and institutional, in the sense that a debate about anti-politics, about disaffection, about disengagement arguably dominated the 2015 election. From Russell Brand’s intervention as the archduke of anti-politics to Ed Miliband’s claims about being the first politician to ‘not over-promise but then over-deliver’, the election was one in which the mainstream parties and their candidates started very much from a position of having to justify ‘the system’ rather than just their policies. The flip-side to this was the rise of the ‘insurgent parties’ and their politicization of anti-political sentiment, based around a condemnation of the Westminster elite and the adoption of ‘outsider status’. And yet even here the insurgent parties had to tread a careful line between, on the one hand, rejecting the actually existing model of politics and, on the other hand, promoting a deeper conviction that democratic politics is not futile.

But let’s be honest about this. The 2015 General Election was not an anti-political election, and the SNP, Greens, and UKIP were not (and are not) anti-political parties. They are — rhetorically at least — anti the ‘actually-existing-model-of-politics’. They are pro-political but anti-Westminster majoritarianism. This is the crux of the issue. In this sense they all promised to ‘do politics differently’, and it would appear that in Scotland and across the rest of the UK a large segment of the public supported them in this endeavour. What did not support them, however, was the electoral system. Or – more specifically – the electoral system ‘worked’ for the SNP but ‘failed’ for UKIP and the Greens. But what is important about the 2015 election is that it demonstrated that the UK is no longer a two or ‘two-and-a-half’ party system but a multi-party system being crudely suppressed by a simple plurality electoral system. And this brings me to my third and final issue: what the 2015 General Election really revealed was a failure by the main political parties to demonstrate a little political imagination.

‘Doing politics differently’ or ‘re-designing democracy’ does not come easily to an official or politician schooled in the British political tradition. As a result we have a Government that is committed to tinkering with the mainframe (i.e. the Westminster Model) rather than thinking anew. The problem is that, as Andrew Marr recently argued in ‘The Centre Cannot Hold’ (New Statesman, 23 March 2015), forces have been unleashed, but the centre has no sense of how to channel or control these dynamics. As the Hansard Society argued in a report in 2010, ‘there is no silver bullet for tackling public distrust and disengagement with politics’, but until British democracy stops drifting and politicians show a little political imagination — and political scientists adopt a solution-focused approach — confidence in the system is unlikely to grow.

Featured image: UK polling booth 2011 by Microchip08. CC0 via Wikimedia Commons.

The post The 2015 General Rejection? Disaffected democrats and democratic drift appeared first on OUPblog.

