Oxford University Press's Blog, page 499
June 16, 2016
Dublin on Bloomsday: James Joyce and the OED
The sixteenth of June is the day on which James Joyce fans traditionally email each other their Bloomsday greetings. And nowadays it has become the focus for a global celebration of Joyce’s work, marked by readings and performances, and many other acts of Joycean homage.
Nutty gizzards, fried hencod roes, and Nora Barnacle
The reason: the action of Joyce’s novel Ulysses (1922) takes place on this day in 1904. During the novel we follow Leopold Bloom–its hero and antihero–from his breakfast of bread, butter, and slightly burnt pork kidneys (despite his longing for nutty gizzards and fried hencod roes), on his passage through the streets of Dublin, his encounters with (and avoidance of) fellow Dubliners, his cab ride out to Glasnevin for Paddy Dignam’s funeral, his lover’s intrigue, and much more, until he retires for the evening (home like Odysseus to Ithaca and Penelope) and–in counterpoint to his own intermittent “interior monologue”–we hear his wife Molly’s astounding and rambling stream-of-consciousness jumble of thoughts, impressions, loves, hates, and sundry inconsequentialities, in the longest sentence ever published in the English language.
And the reason for the reason: Joyce wanted to commemorate the day in 1904 when he first walked out in Dublin with his future wife, Nora Barnacle.
The first Bloomsday: hortensias, white and dyed

Bloomsday isn’t a word from the core vocabulary of English, but we included it in the OED for its “cultural” significance. The first known reference comes from Joyce himself, in a letter of 27 June 1924 – written from Paris, where he lived in self-imposed exile. He tells his patron, Harriet Shaw Weaver, about “a group of people who observe what they call Bloom’s day—16 June. They sent me hortensias, white and blue, dyed” (after the colours of the book jacket: Joyce was in hospital recovering from an eye operation). He scrawled despondently in his notebook: ‘Today 16 June 1924 twenty years after. Will anybody remember this date?’
Joyce anticipated that 16 June could become etched in the minds of the book-buying public. But Bloomsday didn’t immediately assume global importance. Joyce was a writer looking for a reputation. Many readers avoided his banned book, others found it too long and complex, but a hardcore of admiring fans did exist, and gradually they started to grow.
The first Bloomsday was celebrated publicly in Ireland in 1954, its 50th anniversary, when writers Patrick Kavanagh and Flann O’Brien visited the Martello Tower at Sandycove, Davy Byrne’s pub, and Bloom’s “home” at 7 Eccles Street, reading parts of Ulysses and drinking generously along the way (pictured in the featured image). The Times Literary Supplement has little to say about Bloomsday until the late 1950s. This chronology is supported by a Google Ngrams frequency profile of the word.
Absent heroes: the OED waits until 1972 to embrace Joyce
The OED’s reception of Joyce follows a similar pattern. Ulysses was published six years before the completion of the First Edition of the dictionary in 1928. But although the OED was full of references to the literary heroes of the past, it was entirely silent about Joyce. It could have cited him: Dubliners, for example, was published in 1914.
Joyce was still absent from the first Supplement to the OED in 1933. But the situation changed with the second Supplement (1972-86). Here it was hard to avoid Joyce, who leapt onto the leader board of most-cited authors. The vast majority of his 1,709 quotations were provided by a single OED contributor, Roland Auty, a retired English master from Faversham, Kent, and author of Nesfield’s Errors in English Composition (Madras: 1961). OED Editor Bob Burchfield wanted to see modern writers better represented in the dictionary: ‘like a medieval scribe,’ he recalled, ‘[Auty] copied in his own handwriting many thousands of 6×4 inch slips on which he entered illustrative examples for any word or meaning that occurred in Joyce and was not already entered in the Dictionary.’

The emergence of Joyce in the OED followed a global trend in Joyce appreciation. The dictionary has continued to add further examples from his works, and currently includes 2,436 quotations, with 1,728 coming from Ulysses.
Joyce’s double life in the OED
There is a sting in the tale to this story of the rise of Joyce in the OED. In his efforts to reproduce the fading Dublin of his youth, Joyce littered his text with words and phrases extracted from his reading, using a technique resembling that of the OED’s “readers.” He even included personal details of real Dubliners, lightly disguised or explicitly–sometimes even giving their home addresses (“the disorderly house of Mrs Bella Cohen, 82 Tyrone street, lower:” Mrs Cohen lived there from 1888 until 1905).
The aggregate number of his quotations is increasing, in line with his growing popularity, but the actual number of his first usages diminishes with each quarterly update simply because, with the mass of online material available, OED editors are able to find earlier sources, such as those Joyce himself ransacked for verisimilitude.
Eyeslits, prurition, rib steak, and the future
When the Second Edition of the OED was published in 1989, it contained 548 terms first attributed to Joyce. With the revision almost 40% complete, exactly one hundred of those usages have been replaced by other, earlier examples: eyeslit rockets back from Ulysses in 1922 to Noble’s Geodaesia Hibernica (1768); prurition plummets to Claude Lancelot’s Primitives of the Greek Tongue (1748); rib steak migrates to Charles Elmé Francatelli’s Modern Cook (1846). None of this reduces Joyce as a writer, but it allows us to refine our sense of the areas in which his true creativity lies.
Joyce will remain a major source of vocabulary for the OED. In celebrating Bloomsday we are gratifying the author, who saw a marketing opportunity before it occurred to the mass of his reading public, but we are also paying homage to a writer whose engagement with the OED will continue–and doubtless change–for many years to come.
Featured image credit: Bloomsday performers outside Davy Byrne’s pub, Dublin, Bloomsday 2003. Public Domain via Wikimedia Commons.
The post Dublin on Bloomsday: James Joyce and the OED appeared first on OUPblog.

“A dream, which was not all a dream”: dark reflections from June 1816
Two hundred years ago, on 16 June 1816, one of the most remarkable gatherings in English literary history occurred in a villa just outside Geneva. Present at the occasion were Lord Byron, who had left England in April to escape (unsuccessfully, in the event) the scandal surrounding his separation from Lady Byron; John Polidori, whom Byron had engaged as his personal physician; Percy Bysshe Shelley; the eighteen-year-old Mary Wollstonecraft Godwin, with whom Shelley had eloped two years earlier and whom he was to marry in December 1816 (after the suicide of his first wife, Harriet Westbrook); and Godwin’s eighteen-year-old stepsister Claire Clairmont, who was hoping to resume a relationship with Byron that had begun in March (resulting in her pregnancy) and ended with his departure for the Continent. As Mary Shelley recalled in 1831, the group, confined by “incessant rain” to Byron’s rented villa in Cologny, had been reading Fantasmagoriana, a collection of ghost stories translated from German into French, when Byron proposed that each (possibly excluding Clairmont) should write an original ghost story.
The next day Polidori recorded in his diary, “The ghost stories are begun by all but me.” Percy’s story, which Mary remembered to have been “founded on the experiences of his early life”, was soon abandoned and is now lost. Byron’s, about a tormented and dying aristocrat called Augustus Darvell, was also soon abandoned, but the fragment was published as a pendant to his verse tale Mazeppa in 1819. Polidori’s story, begun on 18 June, involved, according to Mary’s later account, “some terrible idea about a skull-headed lady, who was so punished for peeping through a key-hole”. Nothing fitting that description survives, but Polidori was stimulated by his employer’s fragment to write another story, whose sinister protagonist Lord Ruthven takes his name from the thinly veiled Byronic figure of Lady Caroline Lamb’s Gothic roman-à-clef Glenarvon (1816). Published in the New Monthly Magazine on 1 April 1819 with a false attribution to Byron and an anonymous introductory note revealing the basic facts of the ghost-story contest, Polidori’s “Vampyre” marked the first appearance of a vampire in prose fiction, thereby initiating a subgenre of the Gothic that remains popular today. Finally, Mary Godwin’s story, deriving from her dream about a “pale student of unhallowed arts kneeling beside the thing he had put together”, grew over the following months into a novel, which was published anonymously in three volumes in 1818: this was, of course, Frankenstein, now widely considered the first work of science fiction.
Reverberations of the June 1816 discussions of the supernatural and the nature of life, conducted at the Villa Diodati over several days of exceptionally miserable weather, can be discerned in two remarkable poems that Byron composed in July or August of that year, before the Shelley party departed for England on 29 August. Although neither poem refers to Christianity, both indirectly challenge Christian soteriology—the doctrines of Christ’s redemption of humanity from sin and of eternal life—by presenting visions of a world in which and from which there is no possibility of salvation by divine agency.

Like Mary Shelley, the subtitle of whose novel is The Modern Prometheus, Byron appropriated and radically transformed the myth of the Titan god punished by Zeus for giving fire to mankind and thereby depriving the gods of its exclusive possession. Although Victor Frankenstein may represent, as Shelley herself put it in her 1831 introduction to the novel, the “human endeavour to mock the stupendous mechanism of the Creator of the world”, he is punished not by God but by his own creation, and for refusing to create a second creature as its mate—not for transgressing on divine prerogative, in other words, but for being insufficiently transgressive. In Byron’s “Prometheus”, prompted by Percy Shelley’s translating aloud from Aeschylus’s Prometheus Bound, the Titan’s gift loses nearly all its sense of transgressiveness against divinity because humanity already possesses, in the consciousness of its mortality, what Prometheus’s very name means in Greek, foresight: “Like thee, Man is in part divine, / A troubled stream from a pure source” (lines 46–7). Thus Prometheus, whose “Godlike crime was to be kind” (line 34), can do little more than “render with [his] precepts less / The sum of human wretchedness” by teaching us how to suffer with dignity, thereby “making Death a Victory” (lines 36–7, 59).
If “Prometheus” seems gloomy, it can’t hold a candle in that respect to “Darkness”, unusually for Byron an unrhymed poem, which offers a post-apocalyptic vision of the world:
The bright sun was extinguish’d, and the stars
Did wander darkling in the eternal space,
Rayless, and pathless, and the icy earth
Swung blind and blackening in the moonless air;
Morn came, and went—and came, and brought no day,
And men forgot their passions in the dread
Of this their desolation . . . (lines 2–8)
In this condition of unrelieved cold and darkness, social hierarchies dissolve and the trappings of civilization are destroyed (palaces and modest huts alike “burnt for beacons”) as, the earth having become infertile (“a chaos of hard clay”), famine spreads and people turn against one another in a vain struggle to survive. Finally, all light having been extinguished and all life having expired, only Darkness remains: “She was the universe” (line 82).
Various sources of the poem have been suggested, from the Bible (especially Jeremiah 4:23–8) to Lucretius’s account of the Athenian plague of 430 B.C. (De rerum natura 6.1138–286). Recent commentators have noted that the darkness, cold, and barrenness conjured up in the poem corresponded to actual conditions in 1816, which became known as the “Year without Summer”. Ash particles and sulphur dioxide from the eruption of Mt Tambora in Indonesia in April 1815—the most powerful volcanic eruption in recorded history—had spread round the world, dimming the sun for months and reducing average global temperatures. The resulting crop failures led to severe food shortages and civil unrest in some European countries: “Seasonless, herbless, treeless, manless, lifeless” indeed (line 71).
Yet “Darkness” makes no direct reference either to contemporary events or to the biblical Apocalypse, and for exactly that reason it seems at once mysterious and applicable to situations that Byron couldn’t have imagined. In the 1980s, for example, the poem was occasionally interpreted as a prophecy of nuclear winter, and twenty years later (by Jonathan Bate) as a prophecy of “ecocide”. Perhaps, indeed, the Promethean Byron is offering us a vision of the future that awaits us, the burners of fossil fuels, as we begin, in June 2016, what promises to be the hottest summer on record.
Featured image: ‘Castle in thunderstorm’ by Dieter_G. CC0 Public Domain via Pixabay.
The post “A dream, which was not all a dream”: dark reflections from June 1816 appeared first on OUPblog.

On deep learning, artificial neural networks, artificial life, and good old-fashioned AI
In the second part of her Q&A, Maggie Boden, Research Professor of Cognitive Science at the University of Sussex, and one of the best known figures in the field of Artificial Intelligence, answers four more questions about this developing area. At a theoretical level, the concept of Artificial Intelligence has fueled and sharpened the philosophical debates on the nature of the mind, intelligence, and the uniqueness of human beings. Insights from the field have proved invaluable to biologists, psychologists, and linguists in helping to understand the processes of memory, learning, and language.
What are artificial neural networks (ANNs)?
ANNs are computer systems made of a large number of interconnected units, each of which can compute only one (very simple) thing. They are (very broadly) inspired by the structure of brains.
Most ANNs can learn. They usually do this by changing the ‘weights’ on the connections, which makes activity in one unit more or less likely to excite activity in another unit. Some ANNs can also add/delete connections, or even whole units. So ANNs can (sometimes) be evolved, not meticulously built.
Some learn by being shown examples (labelled as being instances of the concept concerned). Others can learn simply by being presented with data within which they find patterns for themselves. Sometimes, the human researchers weren’t aware that these patterns were present in the data. So ANNs can be very useful for data-mining.
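The first kind of learning described here, from labelled examples, can be made concrete with a toy sketch (my own illustration, not drawn from the Q&A): a single unit with two input connections learns logical AND by nudging its connection weights whenever its output is wrong.

```python
# A hypothetical single-unit network (the names and data are my own):
# two input connections, one output, trained on labelled examples.

def step(total):
    """The unit fires (outputs 1) only if its summed input exceeds 0."""
    return 1 if total > 0 else 0

# Labelled training data: inputs paired with the desired output (logical AND).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights on the two input connections
b = 0.0         # bias: the unit's firing threshold
rate = 0.1      # how strongly each error changes the weights

for epoch in range(20):
    for (x1, x2), target in examples:
        out = step(w[0] * x1 + w[1] * x2 + b)
        error = target - out
        # Strengthen or weaken each connection in proportion to its input.
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        b += rate * error

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in examples])
# → [0, 0, 0, 1]
```

Real ANNs have thousands or millions of units and subtler update rules (backpropagation, for instance), but the principle is the same: learning is the gradual adjustment of the weights on connections.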
What is deep learning?
Deep learning (DL) is the use of multilevel neural networks to find patterns in huge bodies of data (e.g. millions of images, or speech-sounds). The system isn’t told what patterns to look for, but finds them for itself.
The theoretical ideas on which it is built are over twenty years old. But it has now sprung into prominence, because recent huge advances in computational power and data-storage have made it practically feasible.
It is called ‘deep’ learning because the pattern that is learnt is not a single-level item, but a structure represented on various hierarchical levels.
The lowest level of the network finds very basic patterns (e.g. light-contrasts in visual images), which are passed on to the next level. This finds patterns at a slightly higher level (e.g. blobs and lines). The subsequent levels continue (finding corners, simple shapes… and finally, visible objects). In effect, then, the original images are analysed in depth by the multilevel ANN.
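That hierarchy can be caricatured in a few lines of Python (a deliberately hand-built toy; a real DL system learns its own detectors rather than having them written for it): level one marks two-pixel line fragments in a binary image, and level two combines a horizontal and a vertical fragment meeting at a point into a higher-level "corner" feature.

```python
# A hand-built two-level feature hierarchy (illustrative only):
# level 1 finds tiny line fragments; level 2 combines them into corners.

image = [  # a binary image containing one corner shape
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

def level1(img):
    """Lowest level: find 2-pixel horizontal and vertical fragments."""
    h, w = len(img), len(img[0])
    horiz = [[bool(x + 1 < w and img[y][x] and img[y][x + 1])
              for x in range(w)] for y in range(h)]
    vert = [[bool(y + 1 < h and img[y][x] and img[y + 1][x])
             for x in range(w)] for y in range(h)]
    return horiz, vert

def level2(horiz, vert):
    """Next level up: a corner is where a horizontal and a vertical fragment meet."""
    return [(y, x)
            for y in range(len(horiz)) for x in range(len(horiz[0]))
            if horiz[y][x] and vert[y][x]]

horiz, vert = level1(image)
print(level2(horiz, vert))  # → [(0, 0)]
```

Each level here consumes only the output of the level below, which is the structural point: the original pixels are analysed in depth, one layer of abstraction at a time.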
DL has had some widely-reported results. For instance, when Google presented a set of 1,000 large computers with 10 million images culled randomly from YouTube videos, one unit (compare: one neurone) learnt, after three days, to respond to images of a cat’s face. It hadn’t been told to do this, and the images hadn’t been labelled (there was no “This is a cat”).
That happened in 2012. Now, there is an annual competition (The Large Scale Visual Recognition Challenge) to increase the number of recognized images, and to decrease the constraints concerned—e.g. the number and occlusion of objects.
‘Successes’ are constantly reported in the media. However, this form of computer vision is not at all the same as human vision. For instance, the cats-face recognizer responded only to frontal images, not to profiles of cats’ faces. Moreover, if there had been lots of cats’ profiles on YouTube, so that a profile-detector eventually emerged, the DL system would not have known that the two images relate to one and the same thing: a cat’s face. In general, DL systems have no understanding (i.e. no functional grasp) of 3D-space, no knowledge of what a profile, or occlusion, actually is.
There are many other things that DL cannot do, including some (e.g. logical reasoning) that no one has the remotest idea how it could do. It follows that its potential for practical applications, although significant, is much less wide than some people imagine.
DL is the latest example in a long line of AI techniques that have been hyped by the press and cultural commentators, and sometimes by AI professionals, who should know better. (The outstanding example of DL is the program AlphaGo, which beat the human Go world champion in March 2016.)

What is Artificial Life?
Artificial life (A-Life) is a branch of AI that models biological, or very basic psychological, phenomena. It studies (for example) reflex responses, insect navigation, evolution, and self-organization.
The type of robotics favoured in A-Life is ‘situated’ robotics. Here, the robot responds automatically to particular environmental cues, when it encounters them in the particular situation. The inspiration is not deliberate human reasoning (as it was in the early AI robots), but the reflex activities of insects. For instance, the behaviour (and neuroanatomy) of cockroaches has been used to suggest ways of building six-legged robots that can clamber over obstacles (not just avoid them), remain stable on rough ground, and pick themselves up after falls.
Self-organization is the characteristic property of living things. Work in A-Life has hugely improved our understanding of this apparently paradoxical concept.
Did GOFAI fail?
GOFAI, or Good Old-Fashioned AI (also called symbolic, classical, or traditional AI), pioneered fundamental ideas that are still crucial in state-of-the-art AI. These include heuristics, planning, default reasoning, knowledge representation, and blackboard architectures.
Today’s AI planners, for example (widely used in manufacturing, retailing, and the military), are much more complex, and significantly less limited, than the GOFAI versions. But they are based on the same general ideas and techniques.
The USA’s Department of Defense, which paid for the majority of AI research until very recently, has said that the money saved (by AI planning) on battlefield logistics in the first Iraq war outweighed all their previous investment.
Some modern planners have tens of thousands of lines of code, defining hierarchical search-spaces on numerous levels. They don’t assume that all the sub-goals can be worked on independently. That is, they realize that the result of one goal-directed activity may be undone by another, and can do extra processing to combine the sub-plans if necessary. Nor do they assume (as the early planners did) that the environment is fully observable, deterministic, finite, and static. The system can monitor the changing situation during execution, and make changes in the plan—and/or its own “beliefs” about the world—as appropriate.
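The core idea (search over whole world states, with actions defined by preconditions, add lists, and delete lists) can be sketched in miniature; this is an illustrative toy in the classical STRIPS style, nothing like a production planner, and the ladder-painting domain is my own example. The two goals interact: painting the ladder from the ground would delete the ladder-dry fact needed for climbing it, so a sub-plan built for that goal in isolation would wreck the other.

```python
# A toy STRIPS-style forward-search planner (an illustrative sketch, not how
# any modern planner is implemented). Each action has a name, preconditions,
# facts it adds, and facts it deletes.
from collections import deque

actions = [
    ("climb-ladder", {"ladder-dry"}, {"on-ladder"}, set()),
    ("paint-ceiling", {"on-ladder"}, {"ceiling-painted"}, set()),
    ("descend", {"on-ladder"}, set(), {"on-ladder"}),
    ("paint-ladder", set(), {"ladder-painted"}, {"ladder-dry"}),
]

def plan(start, goal):
    """Breadth-first search over world states; returns a shortest plan."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:  # all goal facts hold in this state
            return steps
        for name, pre, adds, deletes in actions:
            if pre <= state:  # action is applicable here
                nxt = frozenset((state - deletes) | adds)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan exists

print(plan({"ladder-dry"}, {"ceiling-painted", "ladder-painted"}))
# → ['climb-ladder', 'paint-ceiling', 'paint-ladder']
```

Because the search works over combined states rather than over each goal separately, it avoids the clash automatically; handling such sub-goal interactions explicitly, and in vastly larger search spaces, is exactly what the hierarchical planners described above add.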
GOFAI techniques have been supplemented by other types of AI, such as artificial neural networks and evolutionary programming. So symbolic AI isn’t the only game in town (actually, it never was, not even in the 1950s).
But to say that it has failed is a mistake.
The only sense in which it has, truly, failed is that the pioneers’ dream of building a general artificial intelligence has not been achieved—and, despite current fears about “the Singularity”, is not yet in sight.
You can also read part one of Maggie Boden’s Q&A.
The post On deep learning, artificial neural networks, artificial life, and good old-fashioned AI appeared first on OUPblog.

June 15, 2016
A timeout: the methods of etymology
I expected that my series on dogs would inspire a torrent of angry comments. After all, dog is one of the most enigmatic words in English etymology. Yet the responses were very few. I am, naturally, grateful to those who found it possible to say something about the subject I was discussing for five weeks, especially to those who liked the essays. As I have observed in the past, though I am supposed to love my enemies, I have a warmer feeling for those who are fond of me. At the moment, my dogs sleep in relative isolation, and that is where I’ll leave them. But suddenly my rather trivial old post on strawberry was picked up by MSN News, Hacker News, and Reddit, and thousands of people participated in the “chat.” Who could have predicted those five (three) minutes of evanescent fame? However, I also received a serious private letter. Our correspondent expressed surprise that I constantly refer to onomatopoeia and sound symbolism and asked me to clarify my attitude toward this matter. Summer is a dead season: outside the strawberry patch, in May and June almost nothing happened to me as an etymologist (that is why I even skipped my traditional gleanings last month—for the first time in more than ten years), and here was suddenly a consolation prize in the form of a big question. I am happy to answer it.

Those who are interested in word origins know the basic facts.
For centuries people compared look-alikes and found worthwhile etymologies more or less by chance.
In the nineteenth century, linguists discovered sound correspondences and learned to compare such words across languages that today bear little or even no outward resemblance to one another. The story began with Rasmus Rask (a Dane) and Jacob Grimm (a German) and continued with a group of mainly German scholars called Junggrammatiker in German and Neogrammarians in English.
Quite often, words that violate regular sound correspondences still seem to be related, and no one knows what to do with them.
Equally often the origin of words remains undiscovered despite numerous attempts to reconstruct their past.
First, it should be said that Jacob Grimm did not prove anything. Let us look at the most elementary example. We state that Latin p corresponds to Germanic f (this is “a law”), as evidenced by the pair pater ~ father. Then we formulate the rule that in our search for cognates we should be guided by the notion that if, for example, both the Latin and the Germanic word begin with f, they cannot be related (e.g. Old Engl. fæmne “woman”—long æ—and Latin fēmina). Either fæmne is a borrowing of the Latin noun (which is for many reasons unlikely) or it has an etymology of its own (which has been sought more than once but never found). The circularity of the pater ~ father argument is obvious, and yet it seems that the conclusion is correct. To be sure, if our starting point were fēmina ~ fæmne (that is, always compare the Latin and Germanic words beginning with f and call them allied), the origin of fæmne would have been crystal clear, while the etymology of father would have presented an insoluble riddle. And yet, despite the fatal flaw behind Grimm’s reasoning, it appears that the results drawn from his premise are correct. Granted, they defy the main scientific principle, but they are worth salvaging!

The next example is of a similar type, but the reasoning is more convincing. According to the same law that pairs non-Germanic p and Germanic f, non-Germanic t corresponds to Germanic þ, that is, th as in Modern Engl. thin. But the Gothic for “father” (the oldest Germanic form available to us) was fadar, in which d sounded as ð (that is, as th in Modern Engl. this). Though the difference is minimal, Grimm and the Neogrammarians taught us to deal with sounds with utmost caution. Enter Karl Verner, another great Dane, who noticed that in the Sanskrit word for “father” stress fell on the second syllable (pitár). He checked numerous words and concluded that þ and other similar consonants (that is, fricatives, or spirants, as they are also called) were voiced if they followed, rather than preceded, an unstressed syllable. Consequently, Verner said, at one time even Germanic did not always have stress on the root. This discovery revolutionized Indo-European studies. But alas, dozens of words violate Verner’s Law!

Whence those “exceptions” to seemingly “exceptionless” laws? The answer is easy. Language is not elementary algebra, and hundreds of early and late words have individual histories. The rule of thumb is: try to apply the Neogrammarian principles to etymology. If they do not work, look for other factors, and you will discover sound symbolic and sound imitative formations, baby words, migratory words, taboo, and so forth. Your results will not be a hundred percent convincing, but that does not mean that they are wrong.
In this context, I want to tell a story that probably few people know. Half a century ago Jacques Rosenman, M.D. brought out a two-volume book titled Primitive Speech and English, which was followed by Onomatopoeia and Word Origins (1982). All three were published and distributed by the author. Rosenman noticed the circularity of Grimm’s Law and concluded that everything said by him and the Neogrammarians was nonsense; he treated Verner with special disdain. Rosenman used the worst tactic imaginable, for, ignorant of tons of special literature, he decided that no one had noticed the pitfalls of the “classical” theory. He screened numerous dictionaries and grammars (Noah Webster did the same at one time) but missed the fierce opponents of the Neogrammarians (such as Hugo Schuchardt), the researchers who insisted on the role of non-traditional factors in word formation (for example, Otto Jespersen, to mention the most famous name), and the huge body of literature on expressive sounds. Rosenman also missed Hensleigh Wedgwood’s English etymological dictionary. If he had studied that dictionary and reviews of it, he would have become aware of the many dangers his own approach entailed. Time and again he compared Indo-European and Semitic words, as though the groundbreaking books and articles by Hermann Möller, Alfredo Trombetti, Albert Cuny, and Graziadio I. Ascoli had not existed. His books are full of reinvented wheels.
Rosenman also chose a self-defeating approach to his potential critics. Instead of showing that he was developing a fruitful view of etymology, he presented himself as an iconoclast and told the historical linguists whom he addressed in person that all past work had been a stupid mistake. As could be expected, no one took him seriously, especially because he was an outsider, and no journal agreed to review his work. Apparently, he never made it even to the section “Books Received.” He encountered only snobbery and at best puzzled indifference. I am telling this sad story not to add one more insulting remark to a host of those Rosenman had to endure. On the contrary, I believe that many etymologies he offered are correct and suggest that specialists make use of the three volumes he wrote: his material is rich, and his conclusions are often instructive.
Now to return to the last paragraph of the previous post. I said that every time I deal with words like big, pig, bug, bed, bad, dig, dog, and god (monosyllables beginning with and ending in stops), I end up with sound-imitative or symbolic formations. They indeed sometimes rebel against the Neogrammarian laws, but before one classifies them (or any other “recalcitrant” words) with “freaks,” it is necessary to exhaust all the traditional means of revealing their past. Jacob Grimm and Karl Verner have not gone to the dogs, but they were certainly not gods.
Featured image: Dog Sled Team by skeeze, Public Domain via Pixabay.
The post A timeout: the methods of etymology appeared first on OUPblog.

Researchers use drones and satellite photos to document illegal logging in monarch butterfly reserve
The monarch butterfly has been called “the Bambi of the insect world.” These fascinating insects are famous for their bright colors and their incredible fall migratory route, which can be as long as 2,500 miles.
Starting from as far north as Canada, millions of monarchs take a two-month journey to a mountain range that straddles the border of two Mexican states, Michoacán and México, where they spend the winter. This area, known as the Monarch Butterfly Biosphere Reserve, is sacred ground for entomologists and butterfly enthusiasts. Each year thousands of tourists visit the Reserve, which was declared a World Heritage Site by UNESCO in 2008.
Unfortunately, the Oyamel firs and pines that shelter the butterflies are also valued for their timber, and illegal logging operations frequently occur in the Reserve. The loss is especially damaging because monarch butterflies have a tightly evolved relationship with the Oyamel firs (Abies religiosa), which thrive at high elevations. One of the most important monarch overwintering areas, the Sierra Chincua, has peaks that are 3,400 meters above sea level, and the Oyamel firs that grow on the slopes below the peaks protect the butterflies from the cold mountain air.
“The Oyamel fir forest moderates the temperature and moisture of the butterflies by acting as a blanket, keeping heat in during the night and keeping heat out during the day,” said Dr. Lincoln Brower, a research professor at Sweet Briar College who has studied monarchs for 62 years. “The firs also act as an umbrella during storms, which is important because wet butterflies are less resistant to frost.”
In April 2015, Brower and some colleagues learned that the Oyamel fir habitat was being destroyed in the Sierra Chincua after Mexican environmentalists reported illegal logging was occurring there. In an attempt to see it for themselves, the researchers tried to visit the area but were denied access.
“Neither we nor other individuals were granted permission to visit the area in order to witness the logging,” they wrote in an article appearing in American Entomologist, “but we became aware of its severity when we examined current satellite imagery.”

By comparing a satellite image from November 2013 to an image from November 2015, they were able to determine that logging had occurred on approximately 10 hectares of land – and each hectare can support about 50 million monarchs!

They were also able to determine that most of the logging occurred between April and August after examining Landsat images from April, August, and September 2015. In addition, they were able to determine the severity of the logging by examining high-resolution drone images that were taken in January 2016.
“The Sierra Chincua is a jewel in the crown of overwintering monarch butterflies in Mexico and the severe damage we report here is very disturbing,” the authors wrote. “If the migratory and overwintering phenomenon is to persist, forest protection must be enforced year-round in the entire Reserve. We hope that the [United States] and Canada will join with the people, government, and scientific community of Mexico to provide whatever support is needed to ensure that an effective level of enforcement takes place.”
Featured image credit: A 1.34 hectare monarch butterfly colony is clearly visible in the center of this aerial photograph from 2007. The brownish hue covering the trees shows the dense butterfly population. Image from “Illegal logging of 10 hectares of forest in the Sierra Chincua monarch butterfly overwintering area in Mexico” in American Entomologist.
The post Researchers use drones and satellite photos to document illegal logging in monarch butterfly reserve appeared first on OUPblog.

Where is China going? The history and future prospects of China’s economic reforms
In recent years, numerous phenomena in Chinese society have worried the informed elites and have angered the common citizens. On the one hand, government power has been expanding, the monopolies of state-owned enterprises, especially central enterprises, have grown, and consumption of public funds and official corruption have become rampant. On the other hand, there have been widespread forced evictions and demolitions in the rural areas, soaring house prices in the cities, serious inflation, glaring wealth gaps and inequalities, and all kinds of rising social tensions. What are the reasons for all of this? Is there any way out?
Whereas some may blame the market economy for all the negative features in Chinese society today, others point out that the real source of the current problems is precisely the opposite, that is, a true market economy has not yet been completely established. The fundamental argument is that the so-called “socialist economic system” that exists in China today is in fact a hybrid system, with half-statist and half-market characteristics, and dominant control by the government. The dangers of stagnation, even retrogression, are lurking within this system.
Reviewing the reform process, we discover the historical roots of this system's formation. China initially adopted a strategy of incremental reforms to avoid social turbulence, which involved introducing some market mechanisms into the statist economic system, and allowing the development of a private sector. At that time, such a government-dominated market economy model was a necessary stepping stone, but it resulted in a two-track system within an institutional environment that encouraged rent-seeking. The expansion of the institutional bases for rent-seeking activities in turn led to rampant corruption. That is to say, the half-statist, half-market system facilitated the formation of special interest groups that ultimately became the biggest obstacles to the furthering of the reform and resulted in an inextricable impasse.
During the last decade, this alarming process has intensified as a result of a strengthening of the statist elements that have contributed to substantial stagnation, and even retrogression. In many areas, we have witnessed an advance of the state-owned sector and a retreat of the private sector. In particular, a policy of “expanding demand and maintaining growth” was adopted in 2009. The result was that 4 trillion yuan in investments and 10 trillion yuan in loans all went to government projects and the huge state-owned enterprises, causing a large-scale transfer of wealth from the citizens to the government. Central enterprises now hold powerful monopolistic advantages in industries such as energy, raw materials, transportation, communications, and finance. Furthermore, state-owned enterprises have amassed huge profits due to the power of their administrative monopolies and their commandeering of public resources. In fact, the state-owned assets, which have actually become private assets held by various departments, have encouraged the widespread corruption.
Another much criticized problem is land financing. Since the early 1990s, land has become a major source of rent-seeking. After acquiring cheap land from farmers, local governments have sold it at high prices, thereby exploiting the wealth of farmers on a shockingly large scale. On the one hand, land financing has made the local governments extremely rich, encouraging extravagance and leading to large and wasteful projects. On the other hand, many farmers have become either refugees or potential sources of the rising social discontent. Land financing has also increased housing prices and widened the income gap.
The root of all these problems is the growing government power in this strengthened half-statist, half-market system. Surprisingly, there are some who sing the praises of this so-called “Chinese model,” arguing that the strong government, big state-owned enterprises, and GDP growth driven by massive investments have all been positive successes.
However, a continuation of this model will inevitably lead China along a path to an “Asian drama” of rampant corruption and social disintegration.
In itself, the half-statist, half-market system is a transitional form that must move forward and evolve into a true market economy based on rule of law. The consequences of the petrification of this system will be retrogression, with the government constantly expanding its regulatory powers and reverting to state capitalism. In such a case, a small number of plutocrats will control the rights of disposal of state-owned assets and will easily be able to transform them into private property. In essence, the result will be a form of “crony capitalism.” This could provoke a rise of ultra-leftists who, taking advantage of the popular anger, would mislead the people by their “revolutionary” slogans. They would then call for a return to a completely statist system, hence seriously disrupting the process of modernization.
Going forward, how China chooses to proceed will be critical in determining the country's ultimate fate. If it allows the reform process to stagnate and retrogress, the result will be serious social chaos and disintegration. The only way forward is to readjust the reform agenda by unswervingly promoting market-oriented economic reforms and political reforms in the direction of rule of law and democracy.
Featured Image Credit: Wuxi, Jiangsu, China by Thomas Depenbusch. CC-BY-2.0 via Flickr.
The post Where is China going? The history and future prospects of China’s economic reforms appeared first on OUPblog.

The perks and perils of trespassing
Some eight years ago I sat down to draw up a blueprint for a book that would tell stories about how the chemistry of individual elements of the periodic table had changed, for better or for worse, the courses of ordinary people's lives. Several things motivated me: I was sitting on a number of stories where literature and history intersected with chemistry that I would love to tell to a bigger audience, but I had also found a lack of popular science books in chemistry that actually explained something, as opposed to just telling how things are.
The chemistry was to be the firm ground from where I could make fishing expeditions into history for suitable protagonists and where I could anchor up with a set of characters I had already decided upon. Roughly half of the stories that ended up in The Last Alchemist in Paris are there because of a particular chemistry I thought needed telling, and the other half because of my particular interests in history, literature, and film.
Having no formal education above upper secondary school in any of these latter subjects of course made me somewhat nervous, but I got some very good advice from my editor. One piece was to avoid stories too close in time, as these are much harder to judge. The same goes for more sensational material. In both cases, if you get it wrong the error may become quite visible and distract attention from what you are really trying to say.
Having said that, there were controversial stories that needed including, and where I had to dig deeper into the history discipline than I am perhaps formally qualified to do. Did Seretse and Ruth Khama (subject of Amma Asante’s movie A United Kingdom to be released later this year) get exiled from the Bechuanaland protectorate because of South African blackmail over a uranium contract? Easy to believe from the more popular stories, but more problematic if you looked up what professional historians wrote in more scholarly texts.

Then the Napoleon’s buttons story, where basic fact-checking of the historical literature has been sadly lacking, both in chemistry textbooks and popular science books. Here I hit a dead-end after having dug up as many survivor diaries from the 1812 disaster I could find on the internet. So I had to rely on someone more knowledgeable than me, and the author of 1812 Napoleon’s Fatal March on Moscow, Adam Zamoysk, agreed with my conclusion that this was yet another sensational story that could be filed away as a persistent myth.
In general I found that asking people worked well: scholars of different disciplines were happy to provide help and feedback on anything from the playwright August Strindberg to the 18th-century use of pencils, and even a local CID officer and an ex-prime minister answered my questions.
While The Last Alchemist in Paris on the whole has a lighter tone, it also set off an unexpected side reaction. Since 2015 I have taught and developed a Master's-level course called “Resources and Innovations in a Chemical and Historical Context”, as this, completely unplanned, turned out to be a major theme of the book. The course can be taken to fulfill the required credits in the area “Humans, technology, and society” needed for an engineering degree from Chalmers University of Technology.

The only sad thing about this trespassing activity is that you turn up potential lines of research that you can never follow up. So please, if anyone has the time and resources, do write a biography or make an 18th-century costume drama about Mary Bright, Marchioness of Rockingham, who may once have met the Swedish spy Reinhold Angerstein, but whose life is full of other, far more interesting episodes, among them being a close adviser to the UK Prime Minister.
A version of this blog post was originally published by We The Humanities.
Featured image credit: The neat and tidy kgotla in Serowe where, in 1949, Seretse Khama addressed the Bamangwato tribal court with unforeseen consequences, to be told in Amma Asante’s coming movie, A United Kingdom. Photo © Lars Öhrström.
The post The perks and perils of trespassing appeared first on OUPblog.

Why study English? Literature, politics, and the university, 1932-1965
What is the purpose of studying English? How does language underpin politics? What role, if any, should the subject play within democratic society?
Attempts to understand attitudes towards these questions in the early-to-mid-twentieth century have previously emphasized two hostile schools of thought. The first was an approach to criticism influenced by the Cambridge critic F.R. Leavis, who emphasized both the moral seriousness of literature and the natural hostility of the critic towards most aspects of contemporary society, from mass entertainment to the ephemeral world of most intellectual discussion, highlighted most clearly by C.P. Snow's claims about the Two Cultures. The second was predominant in the work of British Marxist critics such as Alick West, Edgell Rickword, and Christopher Caudwell, who saw all literature as political and believed that the best literature pointed towards a Communist society. For them the goal of the critic was to agitate for this new culture.
However, there was also an overlooked strand of criticism between these two poles: critics who saw the study of English Literature within the university as a tool for forming the kind of citizens capable of creating and administering a better, more socially just society. In the conception of four such critics – L.C. Knights, Bonamy Dobrée, F.W. Bateson, and David Daiches – an English degree, particularly one that taught students to engage with subjects such as history and sociology, was perfectly poised to create a humane and empathetic minority. These four critics taught for the most part at newer institutions – Leeds, Manchester, Sheffield, and Sussex – or, in Bateson's case, from a peripheral position at Oxford University. They attempted to create, through departmental and syllabus reforms, versions of an English school capable of educating humane, democratic citizens.
They each engaged in debates over university reform and the wider role of the university in society. A key part of these reformed syllabuses was to expose English students to subjects that brought wider social and political contexts to literature, such as history or sociology, without reducing the subject to mere ‘cultural history’.
Whilst these manifestoes were slightly too ambitious ever to be fully implemented, each critic did make a significant impact. Knights, a major contributor to Leavis' journal Scrutiny in its early years before drifting away, was a figure in adult education and teacher training, educating a number of tutors including Roy Shaw (later director of the Arts Council) as well as the novelist Anthony Burgess.

Bonamy Dobrée became a prominent and popular voice in Leeds, writing regularly for the Yorkshire Post on cultural matters, such as the expansion of arts education and the unheard-of suggestion of putting cafes in art galleries. Dobrée was also instrumental in establishing the Army Bureau of Current Affairs (which Churchill famously blamed for losing him the 1945 election), and set up the Universities Quarterly in 1946. His Universities and Regional Life (1943) saw the university as a beacon of post-war social reconstruction, acting as a bastion of cultural value in places often lacking cultural and intellectual life. Universities must become ‘propagandists’ for social change and ‘the good life’, creating graduates who were able ‘to mould the new industrial civilization in which the century of the common man will find its being.’
Though Bateson’s career at Oxford was difficult (he failed to become a fellow at his college until 1946), he founded the journal Essays in Criticism and was an influential teacher and mentor to New Left critics including Stuart Hall, Graham Martin, and Raymond Williams, as well as a number of poets and writers including Al Alvarez, Kingsley Amis, Bernard Bergonzi, Robert Conquest, Donald Davie, John Holloway, Philip Larkin, W.W. Robson, and John Wain. He became a rallying point for radical academics who sought to reform the Oxford curriculum after his death and he waged a protracted campaign against the patrician anonymity of the Times Literary Supplement, which had permitted a closed elite of reviewers to monopolise literary life. In fact, many of the important reviewers of the past 40 years, including John Carey, Val Cunningham, and Christopher Ricks were close to Bateson.
David Daiches repudiated the youthful Marxism of the 1930s that had inspired his early works. However, he continued to see the university English department as possessing a vital social role, with the potential to reconstruct society. He was a key figure behind the creation of the University of Sussex (which opened in 1961), where students in the humanities were required to study their subject in conjunction with wider historical and social courses that sought to prepare the undergraduate for the rigours of modern life. In this, he sought to transcend the Two Cultures debate and to foster the kind of literary education that did not recoil from contemporary society but embraced technological modernity.
Whilst these visions of the English School were ultimately limited, and generally failed to withstand critiques from Marxist, feminist, or Post-colonial perspectives, they were important for reshaping the subject in the pre-Robbins era, and in educating a generation of influential students.
Featured image credit: English Literature by gacabo (cropped). CC BY-SA 2.0 via Flickr.
The post Why study English? Literature, politics, and the university, 1932-1965 appeared first on OUPblog.

June 14, 2016
James Madison and Tiberius Gracchus on representative government
In Federalist 63, Madison pointed out that the principle of representation was not exclusive to modern republics. In the Roman Republic, Madison thought, the Tribunes of the plebs were “annually elected by the whole body of the people, and considered the representatives of the people, almost in their plenipotentiary capacity.” Representation was not unknown to the ancients. The “true distinction” between ancient constitutions and American governments, Madison thought, was “the total exclusion of the people” in the latter. Ancient and modern republics both knew representation, but modern republics severely diminished the role of the people.
Historians, ever suspicious of anachronism, are skeptical of Madison’s claim that representation had played a role in classical antiquity. But Madison may have been on to something; there is clearly a sense in which the ancient Romans regarded the ten Tribunes of the plebs as representatives of the Roman People.
In 133 BCE, the Tribune Tiberius Gracchus clashed with a fellow Tribune, Marcus Octavius, who had vetoed Gracchus’ agrarian bill. Tribunes could constitutionally veto legislation by other magistrates. After Octavius’ veto, Gracchus asked the popular assembly to depose Octavius, an unprecedented move that was widely considered unconstitutional. The assembly followed Gracchus and proceeded to depose his colleague Octavius, but there must have been second thoughts: according to Plutarch, the unprecedented dismissal of a Tribune by the People was “very displeasing, not only to the nobles,” but even “to the multitude.”
Gracchus had to justify his course of action before an informal popular assembly. The challenge was to explain why the deposition of a fellow Tribune did not amount to the destruction of the power of the tribunate and of popular rights. Gracchus argued that a Tribune was “sacred and inviolable because he was consecrated to the people and was a champion of the people.” However, if a Tribune should “wrong the people, maim its power, and rob it of the privilege of voting,” he “by his own acts deprived himself of his honourable office by not fulfilling the conditions on which he received it.”

This was a revolutionary theory of representation. By exercising his constitutional veto against Gracchus' agrarian bill, Octavius had employed his power “against the very ones who had bestowed it.” Octavius had “robbed” the People of the “privilege of voting” and had thus forfeited his office. Gracchus claimed that by vetoing the bill, Octavius had effectively ceased to be Tribune. If it was right for Octavius “to be made tribune by a majority of the votes,” it must be “even more right for him to be deprived of his tribuneship by a unanimous vote.” The Tribune is thus conceived as acting on binding instructions from the popular assembly; failing to follow them deposes him. This is of course directly opposed to Madison's “plenipotentiary” view of the tribunate.
It is instructive to compare Gracchus' theory with American ideas of representation. One strand of political thought adheres to a “plenipotentiary,” or “pre-Gracchan,” view (what political scientists call the “trustee conception” of representation). According to this view, representatives are separate from those who elected them and autonomous in their decisions. This “autonomy gap” between people and representatives allows representatives to act according to their personal judgment and conscience. Representatives are not bound by the will of their electors but free to reach the conclusions and compromises they see fit. They enjoy Madison's “plenipotentiary capacity.”
A second strand of political thought is closer to Gracchus. The British thinker Edmund Burke famously called this the “ambassador” idea of representation. Here the representative acts on binding instructions, resulting in a much closer connection between representative and electorate. In recent U.S. politics, Grover Norquist's famous “Taxpayer Protection Pledge” and Tea Party aspirations provide examples of this “Gracchus model” of representation: pledges and oaths are supposed to hold members of Congress on a short leash, committing them to policy and diminishing their bargaining options.
The tension between Gracchus and Octavius – between representatives as agents with a mandate or as plenipotentiary trustees – played a prominent role in the debates of the American Founding. Before the Revolution, colonial legislatures adhered to an “ambassador” conception of representation in which closeness was prized. In the debates between Federalists and Anti-Federalists these opposing conceptions of representation were again at stake. For the Anti-Federalists, the House of Representatives was neither sufficiently like those it represented nor sufficiently close to them: representatives were bound to be too independent from their electorate, separated by an autonomy gap.
The Federalists admitted that their conception of representation opened up such a gap. Indeed, they were adamant that representatives be insulated from the people, and they even thought the Constitution allowed for a “natural aristocracy.” For the Federalists, such insulation would only contribute to the quality of government. In Federalist 57, Madison welcomed representation by an elite. Degeneracy of this elite would be prevented by constitutional constraints such as term limits. During their term, however, representatives would enjoy the plenipotentiary capacities of Tribunes before Gracchus.
In 1788, the Federalists won, but throughout American history, Tiberius Gracchus’ view reasserted itself. It is tempting, but would indeed invite anachronism, to look at this contest of representational ideals as Madison himself did: through the lens of the history of the Roman Republic. One important reason for the Federalists’ take on constitutional representation was of course their interest in the crisis, civil wars, and collapse of that ancient Republic. This crisis was precipitated—or so the Federalists were led to believe by a prominent Roman tradition—by Gracchus’ unconstitutional deposition of Octavius.
Featured image credit: Cicero Denounces Catiline by Cesare Maccari, 1889. Public domain via Wikimedia Commons.
The post James Madison and Tiberius Gracchus on representative government appeared first on OUPblog.

Horace’s pulp fiction? Rediscovering the Epodes
When it comes to Roman poets, most have heard of Horace (Quintus Horatius Flaccus). Horace is the freedman's son who, against all odds – impoverished circumstances, fighting on the losing side at the Battle of Philippi (42 BC) – secured the patronage of Maecenas, Augustus' right-hand man. He is the poet of a multi-book lyric work, the Odes, which gave us the well-known cliché carpe diem! (C.1.11.8) and which inspired its parody, carpe noctem, a name often given to bars, clubs, and other establishments.
But while most people have at least a passing acquaintance with Horace’s most famous work, few have bothered to read, or indeed have heard of, his iambic Epodes. The Epodes are a pugnacious little collection of seventeen poems, at times witty and smutty, but more often than not, a verbal slap in the face. Today all but the most hardened tend to steer clear of this slim yet challenging collection. But this wasn’t always the case. Horace’s version of pulp fiction, or trash aesthetics, not only appealed to the masses of his own times, but spawned characters such as the wicked witch of Rome, Canidia, who remained a well-known figure up until the end of the 19th century. What happened, then, to make Horace’s Epodes fall off the best-sellers’ list?
Horace published his Epodes some time in 30 or 29 BC, some five years after the publication of his first book of Satires (36/35 BC). As a collection of iambic poetry, Horace's Epodes are firmly rooted in the iambic tradition of Greek writers such as Archilochus and Callimachus, whose biting invective poetry aimed at lampooning (directly or indirectly) various figures of their own day. The collection proved popular, at least to Horace. At the start of his Epistles 1.19, published some time in 20 or 19 BC, Horace makes the claim: ‘It was I who first showed Latium Parian iambics (Parios ego primus iambos/ ostendi Latio).’ No matter that Horace is playing fast and loose with the truth here – iambic in fact already had a long history in Rome and was the meter of Comedy and Tragedy – by stating the primacy of his Epodes, Horace was staking his claim to have created something innovative and worthy of attention.

Nevertheless, however worthy of attention the Epodes might be, they have been overshadowed by his more popular works. The Epodes are often viewed by scholars as a warm-up act in Horace's career before he sat down seriously to compose his Odes. In part, this response is due to the position that the Epodes occupies in Horace's literary corpus: caught between his informal hexameter poetry, the Satires (Book 1, 36/35 BC; Book 2, 30/29 BC), and the more elevated tone of his celebrated lyric Odes (Books 1-3, 23 BC), readers of the Epodes have often found themselves looking either forwards or backwards, and rarely giving their undivided attention to this little collection. Recent scholarship has tried to give the Epodes their due, and to focus on this collection of poems as a work worthy of attention in its own right. But it would be wrong to assume that the Epodes languished in obscurity from ancient times onwards until contemporary scholars in a fit of Horatian fervour decided to switch on the light.
True, for the majority of the 20th century few people had heard of the Epodes, but their popularity in the English-speaking world for at least three centuries prior to this has only now come to light. Evidence shows that they featured as an important part of the curriculum in many English schools, albeit in sanitised form with the more scandalous Epodes (notably 8 and 12) removed. From the 17th century onwards, they were often published in translation alongside the Odes, serving as a worthy companion piece.
Finally, there is Canidia, the witchy star of the Epodes, who was so well known in literate circles from the 17th to the 19th centuries that she often appeared in texts without any direct reference to the Epodes. The transformation of this mythological hell-raiser into an obscure nobody during the 19th century is symptomatic of the Epodes' fall from grace during this period too. Both text and witch were probably victims of historical events: the advent of the First World War, which signalled huge societal upheaval, and the loss of ‘Classics’ from the school curriculum. Popular culture continued to perpetuate the legacy of a Hercules and a Spartacus, but the witchy Canidia had lost her invective bite. Thankfully for Canidia, and for the rest of the individuals who populate the world of Horace's Epodes, Horace's smallest collection is staging a comeback. It remains to be seen whether it can ever recapture the attention of writers to the extent that it once did.
Headline image credit: Carpe Diem by pedrik. CC-BY-2.0 via Flickr.
The post Horace’s pulp fiction? Rediscovering the Epodes appeared first on OUPblog.

