Oxford University Press's Blog
October 21, 2016
The irony of gunpowder
Few inventions have shaped history as powerfully as gunpowder. It altered the human narrative in at least nine significant ways. The most important and enduring of those changes is the triumph of civilization over the “barbarians.” That last term rings discordant in the modern ear, but I use it in the original Greek sense to mean “not Greek” or “not civilized.” Historian Kenneth Chase has represented such people as nomads from “the Arid Zone”— the Eurasian steppe and the North African desert. They were often what anthropologists call pre-state communities, usually governed by tribal or kin relationships. Their containment made possible what sociologist Norbert Elias called The Civilizing Process (1939) and what psychologist Steven Pinker has recently captured in his magisterial The Better Angels of Our Nature: How Violence Has Declined (2011). The irony, however, is not that gunpowder reduced violence.
Gunpowder existed in China for centuries without having much impact. When it appeared in Europe in the 13th century, however, it began its slow transformation into an agent of historical change. In the 14th century, Europeans experimented with various powders and guns, seeking to harness the power of this chemical mixture for military purposes. In the 15th century, siege artillery began to systematically destroy the city walls that had harbored civilizations for thousands of years. In the 16th century, small arms appeared on the battlefield to challenge the dominance of the heavily armed and armored mounted knight. In the 17th century, mobile field artillery transformed land battlefields. As guns gained purchase in sieges and on the battlefield, larger changes in war and society followed.
First, powder in guns — the first internal combustion engines — introduced the world to a Chemical or Carbon Age, during which carbon-based fuels transformed everything, from warfare to power generation, transportation, manufacture, and communication. Second, gunpowder reversed the dominance of fortifications over siege technologies, making states more vulnerable to each other but not to the “barbarians.” Third, gunpowder made missile weapons deadlier than stabbing and hitting weapons, instituting modern war at a distance. Fourth, gunpowder dethroned the mounted knight and elevated the gunner, ending a cavalry cycle going back to the Roman empire and eroding the nobility of the sword, whose eclipse Don Quixote so lamented. Fifth, gunpowder added field artillery to the combined-arms paradigm of infantry and mounted warriors that had dominated land warfare in the Eurasian Ecumene since the age of the chariot. Sixth, gunpowder replaced the European feudal order with monarchical states and centralized power, hosts for Elias’s “civilizing process.” Seventh, the ammunition consumed by gunpowder weapons transformed the logistics of war, anticipating modern tooth-to-tail ratios of one-to-ten. Eighth, cannons empowered the European side-gunned sailing ship, which projected Western exploration, discovery, conquest, imperialism, and colonialism over 35% of the world’s landmass.

Ninth, and most importantly, gunpowder permanently settled the ten-thousand-year contest between civilization and the barbarians. This ninth and most significant impact of gunpowder is also the most ironic. At least since the inhabitants of Jericho erected walls to protect their city around 8,000 BCE, human populations have been dividing themselves into more or less “civilized” groups. The more civilized communities formed states based on cities and other infrastructure such as roads, harbors, and fortifications. Less civilized communities held onto the lifestyles of their hunter-gatherer and pastoralist ancestors, governed by tribe and kin. The civilized and barbarian communities often bumped into each other, by accident or design, and fought asymmetrically. The barbarians tended to rely on mobility, stealth, and skills developed in hunting and fighting. Soldiers of the state relied on fortification, armor, heavy weapons, and field engineering. Civilizations had the better record over the millennia, not least because their fortifications deterred many attacks, halting the barbarians at the gates.
But repeatedly in human experience, barbarian forces conquered civilizations. Germanic tribes conquered Rome. Aryans may have conquered Harappan civilization in the Indus River Valley. The Xiongnu defeated Han China in 200 BCE, and the Mongols repeated the achievement more durably in the 13th century. Those same Mongols, still tribally organized, also spread their conquest to the West, controlling at one time the largest land empire the world has ever known. They threatened Europe in the 13th century, but postponed their invasion to return home and elect a new Khan. By the time they returned, Europeans had strengthened their fortifications and begun their adoption of gunpowder, which the Mongols may have brought on their first incursions.
But the slow introduction and assimilation of gunpowder and its weapons in the West eventually robbed the barbarians of the existential threat they had posed for millennia. After the 14th century, the barbarians might challenge Western military forces, they might defeat them in the field from time to time, and they might even adopt gunpowder weapons from the West. But they could not manufacture in quantity the weapons and ammunition that gave Westerners their new military advantage. As Hilaire Belloc put it so pithily during the British imperial adventures in Africa: “Whatever happens, we have got / The Maxim Gun, and they have not” (H.B. and B.T.B., The Modern Traveller, p. 42).
Even when a Maxim gun fell into the hands of the barbarians, they could not replicate the machine in bulk or produce its ammunition. Without the infrastructure of civilization, the barbarians no longer posed an existential threat to the “developed states.” The American Indians who defeated George Armstrong Custer at the Little Bighorn in 1876 probably had more guns than the soldiers they attacked, but none of their own making. Not just gunpowder weapons, but the ability to refine and manufacture them and their ammunition in industrial quantities, allowed the forces of the United States to finally subdue, disarm, and civilize the Native Americans — for better or for worse.
The irony of this historic shift in the world-wide balance of power is that the developed world in the 21st century still faces barbarians at the gates: terrorists, pirates, warlords, and criminals. And the weapons of choice of those barbarians are all based on gunpowder. Be they guns or bombs or explosive vests or “improvised explosive devices” (IEDs), the weapons that continue to empower the barbarians at the gate and in our midst flow from the very technology that immunized civilization against barbarian conquest. Still, the threat of gunpowder abides, 800 years into “the civilizing process.”
Featured image credit: Moscow russia soviet union by Peggy_Marco. Public domain via Pixabay.

October 20, 2016
Learning about lexicography: A Q&A with Peter Gilliver part 1
Peter Gilliver has been an editor of the Oxford English Dictionary since 1987, and is now one of the Dictionary’s most experienced lexicographers; he has also contributed to several other dictionaries published by Oxford University Press. In addition to his lexicographical work, he has been writing and speaking about the history of the OED for over fifteen years. In this two-part Q&A, we learn more about how his passion for lexicography inspired him to write a book on the development of the Oxford English Dictionary.
How did you become interested in lexicography?
I can’t remember a time when I wasn’t interested in language. Both my parents were language teachers, and the family was always discussing English words and usages. And I remember being fascinated by the first dictionary I ever looked at: it was a dictionary for schoolchildren, but it must have been an unusual one in that it was full of strange and intriguing words that a schoolboy was hardly likely to come across in his reading (chalazion is one that sticks in my mind). Later my interest in words found other outlets, like Scrabble and The Times crossword.
But these things are a long way from lexicography as such; and in fact it was only in 1987, when a friend — knowing that I ‘liked words’— drew my attention to an ad for a job on the OED, that I seriously thought about it as an occupation. And that was when I realized that I couldn’t think of a more interesting job. I still can’t, 29 years later.
What was the first word you worked on at Oxford?
Perhaps surprisingly, it was fish. Of course there was already an entry in the OED for the word, as both noun and verb; but it had come to Oxford’s attention that people working in the oil drilling industry had begun to use the noun to refer to the bits of stuff — broken bits of drill and the like — that accumulate at the bottom of an oil well, and have to be ‘fished out.’ This was the first word in the first bundle of new words and meanings that I was given to work on when I started.

What does a day look like for a typical Oxford lexicographer?
Well…there’s really no such thing as a typical Oxford lexicographer! First of all, it depends on which dictionary you’re working on. Even if we’re just talking about dictionaries of English, there are Oxford dictionaries designed specifically for people learning the language, general-purpose dictionaries of current English, and historical dictionaries. The Oxford English Dictionary is what I’ve always worked on, and it’s a historical dictionary — in other words, it tells you not just what words mean today, but how they have changed over the course of time. And each word, and each meaning of each word, is illustrated with quotations showing its history, from the first known occurrence down to the most recent.
I work as part of a team of lexicographers engaged in revising the Dictionary’s existing entries: which means looking at each definition, seeing whether it needs updating, whether the meaning of the word has shifted, and also looking at the illustrative quotations, seeing whether we can now find earlier evidence of the word — or a particular subsense of it — being used, and finding more recent examples to bring the illustrative examples up to date. Some words are pretty straightforward — you can get through several of these in a morning — but sometimes you find yourself working for days, or even weeks, on a word with a long and elaborate history. Generally we work our way in order through an alphabetical sequence of words; just at the moment I’m working on a range of words beginning with au-. I’ve had a string of fairly straightforward ones, like aumoniere (a kind of purse — and possibly also a kind of dumpling!) and au naturel (a French expression which now has a long history of being used in English, meaning various things like ‘plainly cooked’ and ‘naked’)… but I know I’ve got aunt coming up, and that will be a longer piece of work. Quite a lot of variety, as you can see. In some ways alphabetical order is a great randomizer!
For other members of the OED team a day will look quite different. Some people work specifically on the etymologies of words, investigating their origins in various other languages; others deal with pronunciation; others work entirely on new words. And all of these different activities, and many others, need to be carefully coordinated so that we can keep on producing the updates to OED text that get published online every three months.
What is your favourite word, and why?
Now, in my experience that’s not a question that lexicographers are much good at answering. Maybe it’s because one of the characteristics of a good lexicographer is the ability to find something of interest in whatever word they happen to be working on. But…well, it’s a question that I’ve often been asked, and rather than give the rather uninteresting answer ‘I don’t have one’, I now tend to mention a word for which I do in fact have a particularly soft spot: twiffler. It’s a word I learned in the course of my job, and I like it for two reasons: firstly, it has a great etymology; secondly, it’s one of the very few words I’ve learned in the course of my job that really filled a gap in my own vocabulary. (Working on the OED I encounter words I’ve never seen before all the time, but generally I find that — having managed to get by without them for so long — I have no need to start using them.) A twiffler is a plate that’s intermediate in size between a side-plate and a dinner plate. We have a pile of these plates in our sideboard at home, but I never knew that they had a specific name until I learned twiffler. And now I regularly ask my partner things like ‘Shall we serve the first course on twifflers?’ Then there’s the etymology. Like a lot of terms to do with pottery, the word is a borrowing from Dutch, where the equivalent word— twijfelaar — has much the same meaning; and it derives from the verb twijfelen, which means ‘to be unsure’ or ‘to vacillate.’ This is a plate that can’t make up its mind. Which I think is rather charming.
Featured image credit: Oxford English Dictionary by mrpolyonymous. CC-BY-2.0 via Flickr.

In conversation with cellist Evangeline Benedetti
What was it like as one of the few female performers in the New York Philharmonic in the 1960s? We sat down with cellist and author Evangeline Benedetti to ask this and other questions about performance and teaching careers, favorite composers, and life behind the doors of Lincoln Center.
What was your most memorable musical experience?
Playing Mahler’s Second Symphony, the Resurrection, with the New York Philharmonic with Leonard Bernstein conducting. We performed it many times with him, and it continued to move me at each and every performance. This work is a complete and glorious musical experience: the power of symphonic orchestration displayed in all its glory, the soulful aria Mahler thoughtfully placed as the fourth movement, and the inclusion of a chorus in the finale all make it so. Having the most informed and inspirational interpreter of Mahler during our lifetime at the helm made this one of the most memorable musical experiences that I have had.

What were some struggles you encountered when you were first beginning your career?
Moving from my home in Austin, TX and adapting to New York City living was amongst the most trying experiences I have had. The aspiration to attend the Manhattan School of Music and to study cello with Bernard Greenhouse was the driving force that powered my ability to endure the difficulty of settling in New York. Except for the summer that I attended Interlochen Music Camp before my senior year of high school, I had no experience of how to maneuver anywhere outside of Austin, much less New York City. MSM had no housing support at the time, so I moved five times, I believe, staying in each place no longer than a month, until I found a match for my living requirements at the Studio Club, a YWCA residence for single professional women, which no longer exists. Once there, I had a haven where I could live safely and easily, with just enough room to practice and, at last, a place conducive to intense study.
What is your favorite piece to play?
I actually have 18—the complete movements of the six Bach Suites. They are probably the most satisfying of the repertoire in their completeness. They are endlessly intriguing due to their complexity in form, rhythm, harmony and emotional content. They are for solo cello (without accompaniment), therefore affording the player complete freedom of expression as long as the mores of the style are taken into consideration. The content is so jam-packed that one can never cease to mine them. Perhaps they are the cellist’s bible.
Which composer, dead or alive, would you most like to meet?
It would be Bach, because he wrote the Suites. He somehow understood the sensibilities of a cellist; indeed, he seemed to know this for every instrumentalist. I would love to have a discussion with him to discover if he knew how he was able to do this, or if it was by musical intuition alone, which is what I suspect.
What musical movement or type of music do you struggle to appreciate?
In the classical genre, it is Philip Glass, John Adams, and others who are minimalists. I can admire their genius, but I do not have an emotional connection on any level—except for irritation at the repetition!
In the popular vein, I have never had an affinity for much of it. After all, the Beatles came into stardom during my intense years of practicing, and I had no time to listen to them. I do not think that young musicians should be as narrow as I, but that’s where I was during that time.
What inspired you to write this book? What helped you the most with the writing process?
Once I began my studies as an Alexander Technique student, I began to revamp my playing to be more in tune with the principles of the technique. It began a quest for freedom of playing that I so longed for, and it afforded me answers that traditional teaching did not. After working many years to understand playing from this different point of view, I wanted to present a more or less complete picture of my lifetime of exploration, thought and practice. A book seemed to be the way to go.
My writing process was greatly enhanced when Joseph Mace, DM, came into my sphere as my assistant and developmental editor. He helped me turn my ideas into conceptual wholes that formed the structure of the book.
Tell us your favorite story about your time in the New York Philharmonic.
The true story for me is really a novel that extends over the 44 years in which I lived the marvelous musical and personal experience of being in the Philharmonic.
However, when thinking about personal stories, two come to mind. The first is a piece of history. When Philharmonic Hall (now David Geffen Hall) at Lincoln Center was built, they were certain that there would be no women in the orchestra. Fortunately, they were wrong, but they built no dressing rooms for women. Therefore when Orin O’Brien, the first tenured woman, and I, the second, won our respective auditions and were admitted to the orchestra, they originally solved the problem by having us change into our orchestral black costumes in a public women’s bathroom. They just put lockers in there. Finally, after a few years, as more women came aboard, they built a dressing room for us. I suppose they realised women were here to stay.
The other was an incident when playing an outdoor concert in Central Park. An enormous beetle flew down my blouse and got stuck there for what seemed like an eternity. At long last, there was a pause in the cello part when in the presence of thousands of people enjoying the music on the lawn, I was finally able to release it. This was certainly an unexpected hazard of playing the cello!
Featured image: “CELLO” by Robin Zebrowski. CC BY 2.0 via Flickr.

10 myths about the vikings
The viking image has changed dramatically over the centuries. Romanticized in the 18th and 19th centuries, vikings are now alternately portrayed as savage and violent heathens or as adventurous explorers. Stereotypes and clichés run rampant in popular culture. Vikings and their influence appear in various forms, from Wagner’s Ring Cycle to the comic Hägar the Horrible, from the History Channel’s popular series Vikings to the Danish comic-book series Valhalla, and from J.R.R. Tolkien’s Lord of the Rings to Marvel’s Thor. But what is actually true? Eleanor Barraclough sheds light on and dispels ten common viking myths.
1. They wore horned helmets.
Let’s get this one out of the way first. Nope. The first illustration of a viking wearing a horned helmet appeared in a popular edition of Frithiof’s saga, produced in 1825. But in terms of enduring popularity we can probably blame Wagner, or at least his costume designer Carl Emil Doepler, who was responsible for the outfits of the first performance of the Ring cycle at the Bayreuth Festival in 1876 and decided to stick a few jaunty horns onto the helmets. And no, they didn’t wear winged helmets either. Blame the 19th century for that one too. Mysteriously, Viking-Age helmets are almost as rare as hen’s teeth: one was found in Ringerike in Norway, looking rather like a Batman mask but without the pointy ears. But, crucially, no horns.
2. Everyone in the medieval Nordic world was a viking.
Again, we can dispatch this one swiftly. No they weren’t. To the Norse, ‘viking’ was both a verb and a noun: a raid (víking) and a raider (víkingr). The Anglo-Saxons had a very similar word (wicing), which originally just meant ‘pirate’ but in time came to refer to Norse marauders. In any case, most vikings were young men off on their equivalent of a gap year, trying to get rich quick and have a few adventures before they settled down. In the Icelandic sagas, older men still going on summer raids are often presented as disruptive, antisocial elements within the community, who have never quite settled down or made much of their lives (like that single 40-something friend who still wants to stay up all night drinking and playing loud music when everyone else is ready to turn in for the night and the kids are asleep upstairs).

3. They ‘blood eagled’ their enemies.
That horrible thing the vikings were said to do to their victims, when they cut their ribs away from their spine and pulled out their lungs backwards like a bloody pair of eagle’s wings? Probably never happened. Or at least, it’s highly debatable. There are a few obscure references in Norse poetry to eagles being carved on people’s backs, but since such verses are notoriously cryptic and convoluted, the original meaning may well have been less literal than how it was interpreted in later texts. In any case, the details get nastier, bloodier, and more fantastical with every passing century, like a gory game of Chinese Whispers.
4. They burned their dead in ships.
Hardly ever, as far as we know. In the pre-Christian period, the dead could be cremated or buried, often with grave goods such as weapons, jewelry, and tools. If they were burned it was on a pyre, after which a mound might be built over the top. If you were extremely wealthy and important you might be buried in a ship, such as the famous 9th century ship burial from Oseberg in Norway, which contained the remains of two high-status women and countless grave goods. But nothing had been burnt. Our main evidence for the Norse burning their dead on ships is an account by the 10th century Arab diplomat Ibn Fadlan, who witnessed the funeral of a ‘Rus’ chieftain out in Russia. Ibn Fadlan includes details such as the sacrifice of a slave girl to join her master in the afterlife, and a naked man setting fire to the ship whilst covering his anus (for reasons that probably made sense at the time). But even here, we are on shaky ground, because the ethnic identity of the ‘Rus’ is disputed: originally they came from East Scandinavia, but in a few generations had been assimilated into the local Slavic population.
5. They were the only inhabitants of medieval Scandinavia.
Not true. Particularly at northern latitudes, Scandinavia was also inhabited by the ancestors of the people now known as Sámi, a semi-nomadic people who traditionally lived in the far north of Norway, Sweden, Finland, and parts of Russia. To the Norse, these people were known as Finnar, and there was plenty of interaction between the two groups, including high-status marriages and trading. But as far as the Norse were concerned, the Finnar were notorious for uncanny magical talents such as telling the future, out-of-body journeys and shape-shifting. In the Icelandic sagas, you cross the Finnar at your peril…
6. They drank from the skulls of their enemies.
Definitely not, despite what you might see in Asterix and the Normans. This time we can blame Ole Worm, not a geriatric invertebrate but a 17th century Danish antiquarian who in 1636 published a book called Runir seu Danica literatura antiquissima… eller literatura runica (‘Runes or the Most Ancient Danish Literature’). In it, he quoted lines from a Norse poem in which the hero says that in Valhalla he will drink ale ‘from the curved branches of skulls’, a poetic way of describing a drinking horn. But Ole Worm misunderstood the phrase, and translated it into Latin so that the hero was now drinking ale ‘from the skulls of the slain’. A number of other tribes were said to drink from the skulls of their enemies, including the Lombards of Italy and the Pechenegs of the Russian steppes. But it was the poor old vikings who got the bad press yet again.

7. They sailed in dragonhead ships.
Yes and no, but not as often as you might think. As far as the archaeological record is concerned, the evidence is patchy. The only surviving ship with a dragonhead was found at Ladby in Denmark, where it had been buried as part of a high-status funeral. The dragonhead itself doesn’t survive, but the stem is decorated with a ‘dragon’s mane’ of iron curls, and there is room for a dragon’s head to be slotted into the top. There are also smaller pieces of evidence such as Norse graffiti scratched onto the walls of Hagia Sophia in Istanbul, depicting a little fleet of ships with snouty dragonheads. But most references to dragonhead vessels come from later written sources from Iceland, such as the 13th century Book of Settlements, which describes how sailors were required to remove the dragon-heads from their ships when they approached Iceland, so as not to frighten the land spirits.
8. They were lawless, wild, blood-feuders.
Actually, the legal systems throughout the medieval Nordic world were sophisticated and complex, and several law codes contain the phrase ‘with law shall the land be built’. Even today, this is the motto of the Icelandic police force. In fact, the Althingi, Iceland’s national parliament, is one of the world’s oldest parliamentary institutions, having been established in AD 930 at Thingvellir (‘Assembly Plains’). Each year at the Althingi, everyone would gather at the Law Rock and the appointed Lawspeaker would recite the laws off by heart. When literacy reached the country, the laws were the first thing to be written down, in a chieftain’s farmhouse over the winter of 1117-18. (Although yes, there were quite a few blood feuds too.)
9. They wrote in runes.
Sort of, depending on why, when, and what they were writing. Runic inscriptions were brief: carved onto runestones commemorating the dead, or engraved onto smaller objects such as personal items, stone, or pieces of wood. Often the inscriptions were little memos, love tokens, or the name of the item’s owner: the equivalent of a few words scrawled on a post-it note. Occasionally these inscriptions are very rude, such as a piece of graffiti scratched onto the walls of the Neolithic chambered cairn of Maeshowe in Orkney, which reads: ‘Thorni f**ked, Helgi carved’. But the vast majority of Norse manuscripts were written down in Iceland in the later medieval period, and almost all use Latin script just like we do today.
10. Ragnar Hairy-Breeches had hairy breeches, Ivar the Boneless was boneless, and so on…
It’s true that there were some impressively badass/comical/unflattering nicknames knocking around the viking world, including beauties such as Ketil Flat-Nose, Eysteinn Foul-Fart, Thorbjorg Ship-Boobs, and Kolbeinn Butter-Knob. But some of the best-known Norse nicknames only start to appear many centuries later, and not necessarily for the reasons you might think. Typos, misunderstandings, and mis-translations are often to blame. For instance, there have been lots of theories as to why Ivar—leader of the Great Heathen Army that attacked England in AD 865—ended up with the nickname ‘boneless’: impotence, brittle bone disease, lameness, and extreme warrior prowess have all been suggested. But more recently, Elizabeth Ashman Rowe has argued that the word was a misreading of the Latin word exosus (‘detestable’) as exos (‘boneless’). As explanations go, this one is more convincing, but not quite as exciting.
Featured Image Credit: ‘Norsemen Landing in Iceland’. Frontispiece from Guerber, H. A. (Hélène Adeline). Myths of the Norsemen from the Eddas and Sagas. London : Harrap, 1909. Public Domain via Wikimedia Commons.

October 19, 2016
Place of the Year 2016 longlist: vote for your pick
Quite a lot has happened in 2016. The year has flown by with history-making events such as Brexit, the presidential election in the United States, and the blockade of Aleppo, to name a few. To reflect on the year, we are opening up a poll to the public, asking you all to help us choose the one place in the world that truly defined 2016. This is an annual tradition for us, to celebrate the release of Atlas of the World, the only atlas that’s updated annually to reflect current events and politics. 2016 marks the 10th time we are choosing a Place of the Year, and so we’re making it extra special.
We’ve consulted Oxford University Press employees around the world, spanning 4 continents, to ask them what they think belongs on the longlist for Place of the Year. OUP employees pulled through and came up with truly creative and brilliant answers. While submissions like “wherever a honey bee should live” and “Narnia” were appreciated, we narrowed it down to the list below. Vote and let us know what you think is the one place that defined 2016.
Featured image: Globe by Unsplash, Public Domain via Pixabay.

Blessing and cursing part 2: curse
Curse is a much more complicated concept than blessing, because there are numerous ways to wish someone bad luck. Oral tradition (“folklore”) has retained countless examples of imprecations. Someone might want a neighbor’s cow to stop giving milk or another neighbor’s wife to become barren. The fateful formula would be pronounced and take effect. More than one “witch” has been accused of such crimes and burned. Or an evil queen would turn her stepsons into ravens (they are swans in H. C. Andersen), for she too knew some terrible spell. The episode of Jesus’s cursing a fig tree brought to life tons of exegetic literature. A curse could consign one to eternal perdition or to a lighter punishment, and different words might be needed for each action. Compare the images evoked by such words as anathema and excommunication. This is not the place for a disquisition on theology, but we should realize the great difficulty the Anglo-Saxon missionaries had while adapting the basics of the new faith to the conditions of their apprehensive and often hostile audience.

I touched on some such difficulties in the post on the history of the verb bless. “Bless” was not an item the missionaries could find in the vocabulary of the people they strove to convert. If Germanic blōtan stands behind this verb, its original meaning might have been approximately “to honor (a divinity) by sacrifice.” If the root of bless is the same as in the word blood (which seems to me less likely), the result appears to be nearly the same, namely “to redden the altar with the blood of the sacrifice.” In both cases, the action was expected to propitiate the deity (and guarantee a reward). Blōtan is thus close to but not quite the same as “to bless.” The Latin word the missionaries had in mind was benedicere “to speak well.” Yet, as we can see, Engl. bless has nothing to do with speaking, and its etymology (that is, its inner form) was as opaque thirteen centuries ago as it is to us. The other Germanic languages made do with an adaptation of the Latin verb for “to give a sign” (German segnen, etc.). The sign was understood as a gesture bringing about the support of the external forces.

When it came to a word for cursing sinners, the way for borrowing a Latin term, common in the ecclesiastical language, was also open, as evidenced by the existence of the verb “to damn.” But for some reason, no one thought of it, and it had to wait until it was borrowed into Middle English from Old French. Another French verb of the same type is condemn, ultimately con + damn; it also penetrated English only in the Middle period. For worship a compound was coined (“worth” followed by the suffix ship—a peculiarly English formation), but, in dealing with the place of worship, the missionaries never resorted to the names by which pre-Christian sanctuaries and synagogues were designated. Some such old native words are known, for example, Gothic alhs and Old English ealh. They have not continued into Modern English. In other cases, an old word merged smoothly with bookish borrowings. For example, Old Engl. rōd meant “gallows” and came to mean “the cross on which Jesus was crucified,” but in some dialects rood still means “rod, pole, perch” (though not “gallows”; rod and rood are not related).
Church is an adaptation of a Greek word, and it was coined very early; temple, shrine “sanctuary,” fane (now almost forgotten), and their likes are non-native and comparatively recent. Church is a place designated for the Christians, so that calling it ealh was out of the question. Some objects and abstract concepts were occasionally given familiar names. This must have happened when associations did not seem too dangerous. Perhaps sometimes they were even welcome, for the flock would understand the new message without relapsing into heathendom. This would explain the retention of bless alongside blōtan (assuming that this derivation is correct and that bliss helped bless to stay in the language). It is the decision to use the Germanic word god for the god of the new religion that is the hardest to explain. Perhaps the idea of a Supreme Being in Christianity did not appear to those people too different from the idea of the all-powerful divinity of old.

Since the practice and vocabulary of cursing are ancient, there was, in principle, no need to Anglicize Latin maledicere, literally, “to speak badly” (as in Engl. malediction), the antonym of benedicere. Yet the missionaries could have translated maledicere element by element and produced a so-called translation loan, and indeed yfle cweðan (yfle “evilly, badly” and cweðan “to say, speak”: compare the related words quoth and bequeath; ð has the value of th in Modern Engl. this) has been attested. Or they could have followed the example of their German colleagues, who made do with fluohhon. Modern German still has Fluch “curse” and fluchen “to curse.” Perhaps we have a notion of why they did not do so.
The word had cognates everywhere in Germanic but displayed seemingly incompatible meanings: “bewail” in Gothic and “strike” in, for example, Old English. The secure cognates outside Germanic also refer to striking. According to the conjecture by Max Förster, an eminent scholar to whom I owe most of the material presented above, the initial idea was “to lament and beat the breast, while bewailing the misfortune.” If such was the case, the Old English verb retained the primordial sense more accurately than its Gothic congener. In any case, Old Engl. flōcan meant “to strike” and was probably not fit for rendering the idea of maledicere. But Old Engl. wiergan carried the same connotations as maledicere (“consign to perdition, including permanent perdition” and “outlaw,” that is, “excommunicate a person”) and Modern Engl. curse. The word occupied a significant place in the vocabulary of Old English; however, there is no trace of it in the language we now speak. Its total disappearance is a mystery.
Wiergan (also with a prefix) was a perfect match for maledicere. It even seems to have developed some additional meanings under the influence of the Latin verb. Yet in Old English (and only in that language), the verb cursian appeared, as though from nowhere. It was used for profane purposes (“to revile, vilify”) and for excommunication. The noun curs “curse” did not lag behind and displayed the same two senses. Finally, the verbal noun cursung “pronouncing a curse” and “damnation” followed suit. Where did cursian come from, and how did it succeed in ousting its well-established competitor? Definitive answers to those two questions are lacking, though the hypotheses are many, and in the near future we will examine all of them. In the entry curse, most dictionaries follow the OED and say “origin unknown.” Such a verdict, as we have seen more than once, conceals all kinds of nuances, from having no clue to a word’s history to being confronted with the embarrassment of riches: several explanations exist, but we have no way to decide which of them is the best; or all the fantasies look hopeless, so that the word’s origin is indeed “unknown.” We’ll have to decide where we are in this case.
To be continued.
Images: (1) “The Wild Swans” by Arthur Joseph Gaskin, Public Domain via Wikimedia Commons. (2) Blessed/Cursed image by Priscilla Yu for Oxford University Press, edited from Public Domain image by Gerd Altmann via Pixabay. (3) “Chapel Conversion – geograph.org.uk – 215095” by Roger Gilbertson, CC BY-SA 2.0 via Wikimedia Commons. Featured Image: “Lewis Morrison as “Mephistopheles” in Faust!, performance poster, 1887″ Lithograph by Dickman, Jones & Hettrich, Public Domain via Wikimedia Commons.

Australia in three words, part 3 — “Public servant”
‘Public Servant’ — in the sense of ‘government employee’ — is a term that originated in the earliest days of the European settlement of Australia. This coinage is surely emblematic of how large bureaucracy looms in Australia.
Bureaucracy, it has been well said, is Australia’s great ‘talent,’ and “the gift is exercised on a massive scale” (Australian Democracy, A.F. Davies, 1958). This may surprise you. It surprises visitors, and excruciates them.
But in Australia the ubiquity and lustre of bureaucracy are taken for granted. Career public servants daily claim a public profile and prestige that elsewhere only central bankers could hope for. The average salaries of public servants are higher than in all but one of 26 OECD countries. And they have unusual power. Australia is thick with ‘independent statutory authorities’ — states-within-a-state, each with a presiding potentate — which possess prerogatives seldom seen in other democracies.
Australia’s penchant for bureaucracy might be traced to the fact that Australia began as a colony. “The most enduring feature of any colonial regime,” it has been said, “one of the first to appear and the last to leave, is the administrator, the colonial bureaucrat, high, middle and low.” The highest stratum of management of colonial Australia was itself a bureaucracy, the Colonial Office, which was presided over by James Stephen, a “strict legalist” with a “passion for system and uniformity” (Australia: the Quiet Continent, D. Pike, 1962). Beneath it acted the governors, who, too, were public servants, inasmuch as they were accountable to the Colonial Secretary. The governors were eventually reduced to a ceremonial ornament, but Canberra soon replaced Westminster with its own host of Intendants working to achieve uniformity and centralization across the island continent.
The strength of the bureaucratic sphere in Australia may also reflect the weakness of other spheres. The inevitable paucity of her social structure left bureaucracy’s claims of professionality and impersonality only weakly pressed against by other energies. Granted: the same could be said of many New World societies, including the anti-bureaucratic United States. So perhaps more important was the weakness of the market sphere in Australia, which in its frailty yielded so much of the field to bureaucracy.
The strength of bureaucracy surely also reflects the strength in Australia of the sphere of ‘prediction and control’ — or ‘science’ — that is so agreeable to the quantifying and rationalising impulses of bureaucracy. If modern Australia’s foundation in 1788 might be deemed a crazy ricochet of the Battle of Yorktown, it may be equally judged an unexpected precipitate of the Age of Reason. It was an international effort to compute the distance of the earth from the sun that dispatched Captain Cook to the south Pacific in 1768. In the subsequent settlement of Sydney, a completely misapprehended natural resources base drove its governors to resort hopefully to science: fittingly, the first farmstead in Australia was named Experiment Farm. An ample supply of underemployed Scottish scientists made good the need for investigation and measurement.

It is, then, unsurprising that one of the most significant manifestations of the pre-eminence of bureaucracy in Australia has been the creation of massive research monoliths. The Australian Bureau of Statistics (ABS) is one of the most all-embracing national statistical agencies of any democracy. Australia’s CSIRO is a national scientific research body that in its size and general reach has no counterpart in the developed world. And the Productivity Commission — the economic and social counterpart of the CSIRO — has (beyond New Zealand) no equivalent elsewhere.
A more general consequence of the ascendancy of bureaucracy in Australia has been the high quality of her public administration. Thus in 1942 Nelson T. Johnson, the newly appointed US ambassador to Australia, found a panicked, demoralised, and seemingly leaderless country. The one favourable thing he could report to Franklin D. Roosevelt was that “it would be difficult to find a higher type of public servant anywhere in the world” (Australia through American Eyes, 1935–1945, P.G. Edwards, 1979).
Other consequences of bureaucratisation are more doleful. The higher reaches of the Australian public service were just too good. Too much talent was drawn there to waste itself in memoranda in triplicate. And, inevitably, the officiousness of bureaucracy inflamed the authoritarian tenor of Australian society. Thus the ABS in its recent census of 9 August 2016 was content to bandy the threat of fines of $180 per day for any person who did not complete it. (In New Zealand, by contrast, the maximum fine for non-completion of their most recent census was $500, and the media reported that no fine higher than $200 was imposed.) Inflaming the offence, the ABS had decreed, in a characteristically high-handed fashion, that the census must be completed on-line by all those who had not specifically requested a paper form. Predictably, the attempt of 10 million households to log in that August day concluded in ignominious computer failure, and the serious compromise of the census’s integrity. More grimly, and not long before, ‘the worst case of insider trading seen in this country’ — in the words of a judge — was hatched within the Australian Bureau of Statistics.
Most importantly, the prestige of bureaucracy has accommodated the evasion in Australian politics of questions of value. It has suited political actors to pretend to reduce every issue to a spuriously objective bureaucratic assessment. Ideals are slighted, and perish in their neglect.
Bureaucracy, too, needs ideals, and although its apparatus in Australia expands remorselessly, its spirit decays, as it fails to maintain its own ‘Presbyterian’ value system of the ‘high minded and tough minded.’ The rational-legal logic of bureaucracy is sapped by an ethos of pop charismatic leadership, importunately grafted from ‘the market.’ Expertise and experience are discounted, and bureaucracy becomes another managerial playground of the lavishly paid but dubiously competent Australian corporate class. The progeny of Nelson T. Johnson’s exemplary public servants are today the futile officialdom of various policy fiascos.
Australia’s doubtful ‘talent’ has unquestionably curdled.
Featured image credit: Crowd queuing for rationing cards, 1947 by John Oxley Library, State Library of Queensland. Public domain via Wikimedia Commons.

The French Victory at Yorktown: 19 October 1781
The surrender of Lord Cornwallis’s British army at Yorktown, Virginia, on 19 October 1781 marked the effective end of the War of American Independence, at least in North America. The victory is usually assumed to have been Washington’s; he led the army that besieged Cornwallis, marching a powerful force of 16,000 troops down from near New York City to oppose the British. Charles O’Hara, Cornwallis’s second-in-command, surrendered to his American counterpart, Benjamin Lincoln. The presence of the young Alexander Hamilton, one of Washington’s aides-de-camp, who led a light infantry unit in the final stages of the siege, adds to the sense of its being a great American triumph.
In truth, Washington commanded an allied army, in which the French component was very important. A French army expeditionary force had been stationed in New England since 1780, and soldiers from this French contingent (when combined with others brought up from the West Indies) comprised nearly half of Washington’s forces. In a symbolically important gesture, O’Hara had tried to surrender to the Comte de Rochambeau, the French commander, only for Rochambeau diplomatically to insist that he was merely an American auxiliary. The reluctant O’Hara therefore offered his sword to Washington, who in turn insisted that his second-in-command should take the British surrender. Rochambeau followed military etiquette to the letter, but by doing so created a misleading impression of the French contribution.
Not only did French heavy artillery relentlessly pound Cornwallis’s defensive works, but French troops played a key part in capturing an important British redoubt. And even before this, the French navy had sealed Cornwallis’s fate, leaving him trapped and without realistic hope of help.

Ever since the French entered the war as American allies in 1778, Sir Henry Clinton, the British commander-in-chief in North America, had worried about the potential of the French navy to co-ordinate operations with the American army and force a British outpost or army to surrender. In 1778 itself, the French Mediterranean fleet managed to elude the Royal Navy and arrive off New York to the complete surprise of the British garrison. On this occasion, the Americans could not co-ordinate their efforts on land with the operations of the French at sea. The same failure meant that the British garrison of Newport, Rhode Island, escaped capture a few weeks later. In 1779, the French and Americans worked together more effectively, besieging a British force at Savannah, Georgia. A French fleet blockaded the British garrison, but fearing that stormy weather would damage the ships, the French commander pressed for a premature attack on the British lines, with disastrous consequences. In the autumn of 1781, however, the French and Americans finally realized the potential of their alliance in dramatic fashion.
The French fleet that cut Cornwallis off by sea fended off a British attempt to relieve him, forcing the British vessels to retreat to New York to repair and refit. This naval battle proved to be the decisive episode in the siege, not least because the French victory persuaded Cornwallis that the writing was on the wall. Clinton made one desperate last attempt to save his beleaguered colleague, assembling as many men as he could spare from the New York garrison and putting them on board the British fleet. But by the time they set sail, Cornwallis had already opened negotiations and was preparing to surrender.
The vital role played by the French navy at Yorktown became more apparent in retrospect. Washington, flushed with triumph, wanted to go on and force the surrender of the remaining British strongholds at Charleston, South Carolina, and New York City, the British headquarters. But the French had other ideas. Their fleet and army sailed down to the Caribbean, with the aim of delivering a knock-out blow to the British by capturing Jamaica. Their hopes were dashed by Admiral Sir George Rodney’s victory at the Battle of the Saintes, which saved Jamaica and began a revival of British fortunes. Tellingly, in the absence of the French fleet, Washington could make no progress in forcing the surrender of the remaining British outposts in the United States. His troops prevented the British from penetrating far inland from their bases, but they alone could not compel the British to capitulate, as their garrisons could be sustained by British control of the coast. In the end, the British withdrew from Charleston at a time of their own choosing, and remained in New York until after the final peace treaties had been signed. Without the French navy, Washington could not pull off another Yorktown—and without the French navy, Yorktown itself might not have been the important British defeat that it was. Cornwallis would probably have held out until reinforcements sent by Clinton obliged Washington to lift the siege.
Featured image credit: “Surrender of Lord Cornwallis” by John Trumbull, 1820. Public Domain via Wikimedia Commons.

Holy crap: toilet found in an Iron Age shrine in Lachish
In September, the Israel Antiquities Authority made a stunning announcement: at the ancient Judean city of Lachish, second only to Jerusalem in importance, archaeologists have uncovered a shrine in the city’s gate complex with two vandalized altars and a stone toilet in its holiest section. “Holy crap!” I said to a friend when I first read the news. The Daily Mail was more subtle, publishing stunning photographs of the finds under the headline, “The Wrong Kind of Throne.” My social media feeds quickly clogged with toilet humor, but no one was pooh-poohing the discovery.
What’s a toilet doing in a shrine? And why were the shrine’s altars vandalized? Archaeologist Saar Ganor, who directed the dig on behalf of the Israel Antiquities Authority, sets the finds in historical perspective. Iron Age Levantine city gates were large architectural complexes where a wide variety of civic, religious, and judicial activities were carried out. Religious rituals were regularly conducted at other Iron Age city gates, for example at Bethsaida.
It is striking, then, that Lachish’s shrine would contain two cuboid stone altars with the four characteristic protrusions at their top corners—their ‘horns’—broken off. The excavators regard this as an act of sacrilege intended to render the altars unsuitable for cultic activity. They interpret the toilet, which showed no chemical traces of use, as a symbolic act of desecration.
Together, these acts of defilement point to the decommissioning of the shrine. Given the eighth-century date of the finds, the excavators relate this decommissioning to the Bible’s claim that Hezekiah, king of Judah, removed sanctuaries outside of Jerusalem: “He abolished the shrines and smashed the pillars and cut down the sacred posts” (2 Kings 18:4a, NJPS). The palpable excitement generated by this correlation between archaeology and the Bible is captured by Ze’ev Elkin, Minister of Jerusalem Affairs and Minister of Environmental Protection: “Before our very eyes these new finds become the biblical verses themselves and speak in their voice.”
My own work on Iron Age Levantine politics sheds light on three aspects of this lavatorial discovery. First, the finds reflect rituals of desecration rather than random violence. The placement of a toilet in the shrine is quite conspicuous and points to intent. Someone deliberately rendered the shrine unusable.
Although a wide variety of biblical literature regards human waste as unseemly, the conceptual background to this act of desecration is more specific. Deuteronomy 23:13–15 instructs Israelites to dig a hole outside their camp and bury their excrement there. The command is explained, “let Him [i.e., Israel’s god Yahweh] not find anything unseemly among you and turn away from you” (Deuteronomy 23:15b, NJPS).
Within the logic of this verse, the sight of human waste repulses the deity. The placement of a toilet in the Lachish shrine also seems aimed at repulsing the deity, thereby rendering the shrine useless. The logic of the desecration thus accords with a stream of biblical tradition—Deuteronomic tradition—associated with the religious reforms of kings Hezekiah and Josiah.
Second, Iron Age Levantine city gates were contested spaces. A wide variety of biblical literature—for example, Deuteronomy 22:13–21, Joshua 20:1–9, Amos 5:3–5, 10, 12—and Assyrian king Sennacherib’s description of his third campaign (Rassam Cylinder lines 42–58), in which he destroyed Lachish and handed some of Hezekiah’s territory to Philistine rulers loyal to him, suggest that towns were independent units within the geopolitics of the region in the Iron Age. These and other texts make clear that there was some tension between the distributed power of towns and the centralized power of Israelite and Judahite kings.

This tension often played out in city gates. For example, the biblical story of Absalom’s revolt presents him as playing on the strain felt by towns at the hands of centralized power as wielded by the current king (2 Samuel 15:1–6). And several ancient Near Eastern kings are known to have placed royal statues or inscriptions in existing city gates in an attempt to assert their power over the city. For example, a colossal neo-Hittite statue with an inscription by king Suppiluliuma was discovered at Tell Tayinat’s gate.
If the Lachish excavators are right in attributing the desecration of its shrine to Hezekiah, the find serves as a further case study in how the tension between towns and centralized royal power played out in city gates.
Third, Hezekiah was by no means the only ancient Near Eastern king to have decommissioned temples. The Lachish excavators present their find as analogous to Jehu’s desecration of Baal’s temple, described in 2 Kings 10:18–28. And Josiah’s reforms, recorded in 2 Kings 23:4–25, are well-known to readers of the Bible. Cult reforms have also been attributed to other ancient Near Eastern kings, including Akhenaten of Egypt, Muwatalli II of Hatti, Tudhaliya IV of Hatti, Nebuchadnezzar I of Babylonia, and Nabonidus of Babylonia.
One analog in particular brings into focus a feature of the biblical description of Hezekiah’s reform. A fragmentary clay tablet (RIM B.6.14.1) discovered at Uruk describes the cult reforms of Nabû-šuma-iškun, who ruled Babylonia in the eighth century BCE. The text, evidently written sometime after his reign, criticizes his changes to the cultic calendar, his removal of divine images from their temples, his plundering of temple treasuries, and his installation of foreign gods in local temples, among other perceived religious crimes. The text blames these acts of sacrilege for his downfall.
The biblical traditions about Hezekiah, by contrast, celebrate his desecration of shrines outside Jerusalem and praise his fidelity to Yahweh (2 Kings 18:3–6 and 2 Chronicles 31:1). Biblical historiography, this comparison reminds us, is not objective but presents the viewpoint of a circle who advocated the worship of Yahweh only and who sought to support Jerusalem’s unique position of power.
The eighth century residents of Lachish, in other words, would have described the desecration attested in these archaeological finds in different terms than those found in the Bible.
Headline image credit: LachishPalace053011.jpg by Wilson44691. CC-BY-SA 3.0 via Wikimedia Commons.

October 18, 2016
Federalists and Anti-Federalists: the founders of America [infographic]
Between October 1787 and August 1788, a collection of 85 articles and essays was distributed by the Federalist movement. Authored by Alexander Hamilton, James Madison, and John Jay, The Federalist Papers highlighted the political divisions of their time.
The infographic below illustrates some of the key differences between the Federalist and Anti-Federalist movements.
Featured image credit: “Drawing on Parchment” by Hilke Kurzke. CC BY-SA 2.0 via Flickr.

