Oxford University Press's Blog

December 16, 2017

What is the value of rationality, and why does it matter?

Rationality is a widely discussed term. Economists and other social scientists routinely talk about rational agents making rational choices in light of their rational expectations. It’s also common in philosophy, especially in those areas that are concerned with understanding and evaluating human thinking, actions, and institutions. But what exactly is rationality? In the past, most philosophers assumed that the central notion of rationality is a normative or evaluative concept: to think rationally is to think ‘properly’ or ‘well’—in other words, to think as one ‘should’ think. Rational thinking is in a sense good thinking, while irrational thinking is bad. Recently, however, philosophers have raised several objections to that assumption.


First of all, how can it be true that you should never think irrationally, if you sometimes can’t help it?


Secondly, picture a scenario where you would be punished for thinking rationally—wouldn’t it be good to think irrationally in this case and bad to keep on thinking rationally?


And finally, rationality requires that our mental states (in other words, our beliefs, choices, and attitudes in general) are consistent and coherent. But why is that important, and what is so good about it?


Having considered these three objections, we can now ask which side is right. Does thinking ‘rationally’ mean thinking ‘well’ and ‘properly’, or not? Looking at both sides of the issue, however, it becomes evident that considerable philosophical argument and analysis are still needed before we can reach a conclusion. The reason is that the problem itself is not clearly defined: we don’t know the meaning of some of the key terms. So, as a next step in the analysis, we will review some recent work in linguistics, specifically semantics.


Most linguists believe that the key terms—’should’, ‘can’, ‘good’, ‘well’, and so on—are context-sensitive: the meaning of the word depends on the context. For example, ‘can’ sometimes expresses the concept of what a particular person has an ability to do (as when the optician asks, “Can you read the letters on the screen?”). At other times, it expresses the concept of what is possible in a more general sense (as when we say, “Accidents can happen”).


Rationality, in the end, is the feature of your mind that guides you—ideally (if you’re lucky) towards the goal of getting things right.

Most linguists accept that every concept expressed by ‘should’ implies some concept that can be expressed by ‘can.’ But there are many different shades of ‘can.’ So, even if there is a strong sense of ‘can’ that makes it true that you ‘can’t help’ thinking as irrationally as you do, there could still be a weaker sense of ‘can’ that makes it true that you ‘can’ think more rationally than you do. In this way, we may be able to answer the first objection: the sense in which it is true that we ‘should think rationally’ implies one of these weaker senses of ‘can’, which makes it true that we ‘can’ think more rationally than we do.


The same sort of differentiation may help with the second and third objections. The meaning of terms like ‘good’, ‘well’, and ‘properly’ changes in different circumstances. Think about the scenario in which you would be punished for thinking rationally, and rewarded for doing the opposite. In one sense of good, it is good in this case to think irrationally, but in another sense, it remains good for you to think rationally, because rational thinking in itself is always good.


Instead of answering our questions, however, this line of argument raises more, because what we need to do now is define this sense of ‘good’, in which rational thinking is always ‘good.’ But here is a proposal about how to answer these further questions. When you have a belief, or when you choose a course of action, you have a goal—the goal of getting things right. After all, it would be absurd and nonsensical to say, “I know that this is the right thing to believe, but why should I believe it?” To get things right, your beliefs and choices must fit with the external world.


However, your beliefs and choices cannot be directly guided by what is happening in the external world. They can only be directly guided by what is going on in your mind. Rationality, in the end, is the feature of your mind that guides you—ideally (if you’re lucky) towards the goal of getting things right.


Suppose that your belief does get things right in this way. The fact that you succeeded in getting things right is explained in part by the fact that you were thinking rationally. In other words, rationality matters because rationality is the means by which we pursue the goal of getting things right.


Featured image credit: Photograph of a boy in front of a chess landscape by Positive Images. Public domain via Pixabay.


The post What is the value of rationality, and why does it matter? appeared first on OUPblog.


Published on December 16, 2017 00:30

December 15, 2017

5 canon-breaking influences on modern literature

As a discipline, literature is vast, encompassing poetry, prose, and drama from across history, as well as the more modern disciplines of film and media studies. In the modern world, the idea of literature has taken on new meaning as new concepts and technologies have emerged with the changing culture. From internet memes and viral content to ecocriticism and even the occasional zombie—enjoy a wander through five captivating and eclectic topics in the world of literature.


How are internet memes and viral content forms of literature?


The internet is home to endless textual variation. Short texts and clips are uploaded and removed, hyped and forgotten under the ceaseless pressures and incentives of capitalist identity formation, as identities of all kinds are constantly solidifying and liquefying. The web has generated a rich range of “recombinant” appropriations—compiled videos, samplings, remixes, reboots, mashups, short clips, and other material involving text, sound, vision—typically found (and lost) on web-based video databases. Can these remix clips be described as adaptations or appropriations? How do they relate to transmedia storytelling? And what do they tell us about participatory culture and “mashup textualities”? The use of “recombinant” (two strands of DNA combined) suggests that these texts and practices conjoin all sorts of material from multiple sources, much in the way that DNA is a key factor in transforming genetic material and organisms. Evolutionary biologist Richard Dawkins famously initiated the adoption of biogenetic models into cultural studies, comparing genetic and cultural reproduction.


How has American literature been influenced by popular culture?


A relatively young tradition in world letters, American literature matured over a period that coincided with the rise of industrialization and the birth of consumer society. The field of American literature has always had to compete with mass and popular culture in the hierarchy of national tradition. Simply put, no American literary text emerged in isolation. Even among the likes of Edgar Allan Poe, Herman Melville, and Henry James, the business of professional authorship was identified with the need to sell their work to as wide a readership as possible. In that sense, the individuals we now deem authors of classic American literature had more in common with everyday consumers than previous scholars have acknowledged. Indeed, it is among everyday consumers themselves that the study of mass and popular culture has expanded our view of “literature” to a welcome degree. No longer confined to the classics, the field now considers bestsellers, genre fiction, and a range of non-print media to be valid, and necessary, objects of study.


What is Ecocriticism?


Ecocriticism is a broad way for literary and cultural scholars to investigate the global ecological crisis through the intersection of literature, culture, and the physical environment. It originated as an idea called “literary ecology” and was only later coined as an “-ism”. By the early 1990s ecocriticism had expanded into a widely used literary and cultural theory, and it is often used as a catchall term for any branch of the humanities (such as media, film, philosophy, and history) that addresses ecological issues, primarily as a literary and cultural theory.


This is not to say that ecocriticism is confined to literature and culture; scholarship often incorporates science, ethics, politics, philosophy, economics, and aesthetics across institutional and national boundaries. Originally, scholars wanted to employ a literary analysis rooted in a culture of ecological thinking, which would also contain moral and social commitments to activism. As Cheryll Glotfelty and Harold Fromm, editors of The Ecocriticism Reader, famously stated, “ecocriticism takes an earth-centred approach to literary studies,” rather than an anthropocentric or human-centered approach.



Henry Louis Gates, Jr. in 2007. Jon Irons, CC BY 2.0 via Wikimedia Commons.

How are race and racism portrayed in post-colonial literature?


How has racism persisted in the production and maintenance of postcolonial cultural identity? More specifically, how have the notions of race and racism been conceptualized over the past several decades of postcolonial critical theory? The anti-colonial writings of Frantz Fanon and Albert Memmi, Henry Louis Gates Jr.’s poststructuralist turn in race theory (“Race,” Writing, and Difference), and the writings and lectures of Michel Foucault on biopolitics are just a few examples of the massive field of postcolonial theory over the past sixty years. These works have helped to posit racism as a negative and repressive structure, whilst simultaneously exploring racism as an aspect of governance within modern society as a whole.


Why are zombies important to adaptation theory?


Zombie apocalypse narratives represent a fascinating case of transmedia storytelling, since their characteristics as a genre are the result of a series of textual creations, flowing from the works of director George Romero and disseminated through novels, short stories, graphic narratives, videogames, apps for smartphones, and other films.


Modern audiences crave sequels, prequels, unfoldings, reformulations, amplifications, as well as the option to actively participate in the narratives as a reader, a spectator, or a computer game persona, all of which generate new narrative paths.


In this context, Julie Sanders’s accounts of “appropriation” and “adaptation” can be useful. Adaptations signal a relationship with an informing source text or original, and appropriations distance themselves “from the informing source into a wholly new cultural product and domain.” “Zompoc” texts are the perfect example of this adaptation theory, involving multiple elements of imitation, proximation, improvisation, and re-evaluation.


Featured image credit: CC0 via Pixabay.


The post 5 canon-breaking influences on modern literature appeared first on OUPblog.


Published on December 15, 2017 04:30

A composer’s Christmas: Malcolm Archer

We spoke with composer Malcolm Archer about the pleasure of driving his 1964 Austin Healey 3000 on crisp December days, the magic of the Christmas story, and spending Christmas in Chicago.


What’s your favourite thing about the Christmas season?


Being a church musician, I would have to say the choral singing and the services. It is also very gratifying to see the huge pleasure that choristers gain from singing Christmas music. There is something very direct and poignant about the Christmas settings, which goes right to the heart of the gospel message about the birth of Christ.


Is there anything about Christmas that particularly inspires your composing?


The Christmas texts are very special and always an inspiration to composers, both the traditional ones and newly discovered ones. It is always nice to find inspiring words by a living author. Timothy Dudley-Smith is one of my favourite modern writers, and I have set a number of his texts over the years.


What marks the beginning of Christmas for you?


The Christmas season starts for me on Advent Sunday, with the Advent Carol Service. Of course, Advent is not to be confused with Christmas, but there is a wonderful sense of longing and expectation which comes with Advent, and that makes the arrival of Christmas all the more special.


It is so sad when the high streets, the media, and the world in general bring Christmas to us far too early. I like Advent to last as long as possible. That way, Christmas can still retain a special magic, and the first line of ‘Once in Royal David’s City’ from King’s on Christmas Eve really is a moving moment.



Chicago Christmas lights by Casey Allen. Public domain via Pexels.

What’s your favourite Christmas film and why?


I don’t watch many Christmas films as I am usually too busy, but I can remember watching The Polar Express with my children some years ago, and being amazed by the wonderful animation and the atmosphere it managed to create.


What’s your favourite Christmas carol and why?


It would be very hard to pick one. I have a fondness for Pearsall’s arrangement of ‘In dulci jubilo’, Darke’s ‘In the bleak mid-winter’, and John Rutter’s lovely ‘Candlelight Carol’. Bob Chilcott’s ‘The Shepherd’s Carol’ comes pretty near the top of my list too.


Which one of your own Christmas works are you most proud of and why?


This is a difficult one.  I think there are three or four that I would mention: ‘A Child is born in Bethlehem’, because it was written when I was in my twenties, so I have fond memories of composing it. ‘Infant Holy’ and ’Jesus Christ the Apple tree’ because they were lovely texts to set, and my latest carol, ‘A Virgin most pure’; I am hopeful that choirs will really enjoy that.


Are there any seasonal activities that you particularly enjoy?


We always take the Winchester College Quiristers on tour just before Christmas. It is a great time to tour, especially for youngsters. Apart from that, driving my 1964 Austin Healey 3000 with the top down on crisp December days.


What does a typical Christmas day look like for you? 


When I was working in Cathedrals, it would be two services in the morning, followed by Christmas Lunch with the choir, then Christmas Day Evensong and then sleep! Since I work now at Winchester College, it means a rather more leisurely day, seeing the family and friends, and last year, tuning into ‘A Winchester Christmas’ recorded from Winchester College on Classic FM.


Why do you think music is so important to people at Christmas time?


Many people believe they have lost the faith they had in childhood, and the magic of Christmas has gone for them. Christmas music has the ability to re-awaken those beliefs and re-kindle that magic. The Christmas story is a truly remarkable one, and you have to be pretty hardhearted not to be moved by the extraordinary story of a baby boy whose arrival changed the world overnight.


What is the most memorable Christmas you have ever had?


Many have been memorable, but one of the most memorable was in 1995, when I was Acting Director of Music at St. Luke’s Church, Evanston in Chicago. I worked with some wonderful people at the church; the friendships made there have been lasting, and their Christmas services were also very special. Chicago looked amazing with all the Christmas lights everywhere; to enjoy Christmas in a city like Chicago is a very special experience, which I can thoroughly recommend!


Featured image credit: Christmas crib stall Bethlehem by Geralt. Public domain via Pixabay


The post A composer’s Christmas: Malcolm Archer appeared first on OUPblog.


Published on December 15, 2017 01:30

December 14, 2017

Animal of the Month: 12 facts about reindeer

We all know the stories about reindeer; Father Christmas’s magical steeds fly his famous sleigh to the homes of children around the world on Christmas Eve, eat the carrots we leave for them, and are recipients of occasional genetic mutations causing shiny red noses. But where do they come from? How do those not pulling Santa’s sleigh spend their time? And are their noses really that special?


The reindeer, also known as caribou in North America, is a species of deer found in the tundra and subarctic regions of Eurasia and North America. From tales of glowing red noses to debates about the physics behind their annual circumnavigation of the globe, talk of reindeer is at an all-time high at this time of year. But there’s a lot more to this charismatic winter mammal than its sleigh-pulling abilities. Below are twelve facts which you may not have known about reindeer.



Reindeer are thought to have originated around two million years ago in Beringia and moved from there further into both North America and western Eurasia.
Reindeer are extremely cold tolerant. Norwegian reindeer (Rangifer tarandus tarandus) can reduce their critical temperatures from 0 to -30⁰C from summer to winter, and Svalbard reindeer (R.t. platyrhynchus) can do so from -15 to -50⁰C.
The reindeer is the only species of deer in which both males and females grow antlers.
Domesticated reindeer populations have outgrown those in the wild. While we don’t know exactly when Rangifer were domesticated, evidence exists that herds were owned by the chieftain Ottar from Halogoland in 9th-century Norway.
Longer legs on reindeer make it easier for them to move around on hard surfaces and snow, but these long legs cost Rangifer more energy for maintenance and growth. Average leg length becomes shorter the further north the animals live.
Nuclear fallout from the explosion at Chernobyl power station in 1986 seriously impacted Scandinavian reindeer herds, as the reindeer ate vegetation which had absorbed radioactive isotopes. The meat from these animals was rendered dangerous, large numbers were slaughtered, and the Saami cultural system changed as a result of this break in traditional practices.



Strolling reindeer (Rangifer tarandus) in the Kebnekaise valley, Lappland, Sweden by Alexandre Buisse (Nattfodd). CC BY-SA 3.0 via Wikimedia Commons.

Males have different reproduction tactics, including warding off male competitors or sneaking copulation with females in heat. Studies have shown that the most effective manner of reproduction allows for a combination of both tactics.
North American caribou migrate up to 5,000 km annually across a territory of around 1,000,000 km2, farther than any other terrestrial mammal worldwide.
Climate change threatens to reduce the area of the tundra and decrease the space reindeer and caribou have for foraging mosses and lichens, particularly during calving season.
Summer is a tough time for reindeer. If temperatures rise above 10⁰C, this arctic mammal becomes noticeably uncomfortable, and if 15⁰C is reached the animals suffer physiological disorders. The short summer season allows reindeer to gain the bodyweight they need to overwinter; any disruption to this will jeopardize their ability to do so.
Though Rudolf’s red nose has become the popular image of reindeer around the world, the image is misleading. That’s not to say reindeer noses aren’t amazing. These appendages cool the warm air coming out of the body by 21⁰C before it leaves the body.
Reindeer don’t really fly!

Featured image credit: ‘Reindeer herd, in winter, Lapland, Northern Finland – space for text’ by Terence Mendoza via Shutterstock.


The post Animal of the Month: 12 facts about reindeer appeared first on OUPblog.


Published on December 14, 2017 03:30

Hubert Parry (1848-1918)

2018 marks the centenary of the death of Hubert Parry, one of the finest and most influential musicians of the late nineteenth and early twentieth centuries.


Over the last few months I have had the privilege of making the first critical edition of his late choral masterpiece, the Songs of Farewell, with reference to the autograph manuscripts, held in the Bodleian Library, and a set of early printed versions. I’ve known these marvellous pieces for years—partly through the classic recording made by Christopher Robinson and the Choir of St George’s Chapel, Windsor in 1987—and the process of editing them has only increased my admiration for Parry. He composed the set over a period of nearly a decade, beginning in 1907: the manuscripts give us a vivid picture of the compositional process, with some motets emerging almost complete in early sketches, and others involving an apparently tortuous process of drafting and redrafting. The first and best known, My soul, there is a country, was subject to several revisions, including the recasting of its mystical opening, and obvious indecision about one particular soprano note toward the end. The fourth motet, There is an old belief, was the first to be composed, for a ceremony at the Royal Mausoleum at Frogmore in January 1907. In 1913 it was revised, and at this point Parry began to assemble a set around it. In the process he vacillated between two versions of the section “serene in changeless prime”, apparently deciding on the eventual form only at the last minute.


I am the Organist of New College, Oxford, and there is a particular connection between the choir I direct there and the Songs of Farewell. Early performances of some of the motets were given in services at New College, with the choir directed by Parry’s friend (and eventual successor as Director of the Royal College of Music) Hugh Percy Allen. Parry was sometimes present: one such occasion is mentioned in the memoir of a former New College chorister. He later heard the final motet Lord, let me know mine end performed by the Oxford Bach Choir, once more under Allen, a note of thanks for which is among Allen’s papers in the New College archives. The fact that the manuscripts generally ascribe the top vocal line to ‘Treble’ rather than ‘Soprano’ perhaps suggests that Parry had a collegiate-style choir with (in his time boy) choristers in mind, though there is no doubt that the motets sound magnificent performed by mixed and/or larger forces. Whatever the nature of the sound Parry imagined as he composed his motets, he writes superbly for voices. The parts (between four and eight of them) are held in perfect balance, at various points in intricate yet fluent counterpoint, clear and direct homophony, and even sometimes in dramatic unison.



2 Richmond Chambers, birthplace of Hubert Parry, composer of Jerusalem, the coronation anthem “I was Glad”, and “Blest Pair of Sirens” by Chris Sampson. CC-BY-2.0 via Flickr.

Perhaps because he is now best known for the coronation anthem I was glad when they said unto me and unison setting of Blake’s Jerusalem, Parry is often regarded as a quintessentially ‘English’ composer. But if we take that to mean that he was nationalistic or narrow-minded, we do him and his music a grave disservice. He was, in fact, a cosmopolitan, open-minded liberal, and his music should be heard in the context of the European (specifically German) ‘mainstream’ of the nineteenth century. It was this influence that so decisively refreshed musical culture in Britain during Parry’s lifetime, through his music, and that of his contemporary Charles Villiers Stanford and, later, Edward Elgar (of whom Parry was a notable supporter, while the rebarbative Stanford was jealous).


Two poignant further contexts in which we may hear the Songs of Farewell are the First World War and Parry’s declining health (he was suffering from heart failure, sometimes sustaining several small ‘heart attacks’ in the course of a week by the mid-1910s).


The slaughter of the war bore double significance for Parry. First, it signalled a bitter end to the many decades of cultural cross-fertilisation between Britain and Germany. This had influenced not only Parry’s own education (through his studies in Stuttgart, and in London with the German-educated pianist Edward Dannreuther, which brought him into contact with Wagner among others), but also his professional life at the Royal College of Music, which had been founded in 1882 on the German Hochschule model. Secondly, it robbed the country of many talented young musicians known to Parry, including two alumni of the RCM: George Butterworth, who died on the Somme in 1916, and Ernest Farrar, whose death preceded Parry’s own by less than a month. It is difficult to hear the passage “and you whose eyes shall behold God” in At the round earth’s imagined corners without thinking of the sacrifice of these young men—even though its composition predates the deaths of Butterworth and Farrar. Likewise, the closing section of Lord, let me know mine end seems inextricably linked to the composer’s physical decline: “O spare me a little, that I may recover my strength, before I go hence, and be no more seen.” Parry died on 7 October 1918, just over a month before the Armistice, having heard all six motets performed separately, but never as a set.


Featured image credit: “Memorial to Hubert Parry in Gloucester Cathedral. Inscription by Robert Bridges” by Andrew Rabbott CC BY-SA 3.0 via Wikimedia Commons.


The post Hubert Parry (1848-1918) appeared first on OUPblog.


Published on December 14, 2017 01:30

December 13, 2017

The word “job” and its low-class kin

This post is in answer to a correspondent’s query. What I can say about the etymology of job, even if condensed, would be too long for my usual “gleanings.” More important, in my opinion, the common statement in dictionaries that the origin of job is unknown needs modification. What we “know” about job is sufficient for endorsing the artless conclusions drawn long ago. It would of course be nice to get additional evidence, but there is probably no need to search for it and no hope to dig it up. Finally, as I have mentioned time and again in the past, “of unknown origin” is a misleading formula. It can mean that we know nothing at all about the history of the word under discussion (a relatively rare case) or that our information is inconclusive, or that, though several competing hypotheses exist, it is impossible to choose among them (all look equally hopeless or equally reasonable). But our information on job is rather full, and I think we can offer the public a more satisfactory answer than “of undiscovered origin.”


Samuel Johnson, the author of a famous English dictionary (1755), called job “a low word now much in use of which I cannot tell the etymology” (a low word was his favorite phrase; it meant “vulgar” or perhaps what we today call slang).  He defined job as “a low, mean, lucrative busy affair.” To see how “low” it was, we may quote Thomas Sheridan, who made a name not only as a successful playwright but also as an active politician. Among other things, he declared: “…if from any private friendship, personal attachment or any view other than the interest of the public, any person is appointed to any office in the public service, when any other person is known to be fitter for the employment, that is a job.” It appears that Sheridan had to define or at least elucidate the word for his audience, but the OED has citations illustrating a similar sense dated to 1667. Apparently, that sense had not become  universally known, or perhaps at the peak of Sheridan’s career, job was supposed to be unpronounceable in good society; as late as 1755 Johnson said “a low word now much in use.”  It took the disreputable noun some time to lose its vulgar tinge and merge with the Standard. Also, toward the end of the seventeenth century, job “a piece of work,” seemingly devoid of negative connotations, surfaced in print. Perhaps even that sense was stylistically charged, something like the modern potboiler; it is hard to tell.


“Job-job-job.” Image credit: “Woodpecker Bird Picking Tree Feathered Forest” by werner22brigitte. CC0 via Pixabay.
“Kap-kap-kap, the Russian way.” Image credit: “Tap Dripping Drop” by StockSnap. CC0 via Pixabay.

Alongside the noun job “a piece of work,” the verb job “to strike, peck” existed. Lexicographers are not sure whether the two words are connected, but it is reasonable to assume that they are. The verb seems to be primary: you peck, peck, peck, and “a piece of work” is done. If this is how matters stand, then only the verb needs an etymology. Our seventeenth-century dictionaries do not feature job. The earliest suggestion about the origin of the verb seems to have been made in the 1820’s: job was identified with the Classical Greek noun kópos, which meant “a blow” and “work.” Of course, Engl. job cannot be a reincarnation of the Greek word, but the semantic parallel is perfect and should perhaps be used for supporting the idea that job1 and job2 are related.


Other than that, most of those who have dealt with job have suggested approximately the same idea and believed that the verb job is sound imitative or sound symbolic, which in this case comes down to the same thing. As analogs or as the sources of job the following words have been offered: shog “jog along,” a dialectal variant of jog; chop; Irish gob “mouth, beak,” and French gob “lump” together with gobbet “fragment; lump of food” (cf. Engl. gobble “to swallow hurriedly”). At first, Skeat derived job from Celtic (at that time he in general overdid the influence of Celtic on English), but in the last edition of his dictionary (1910), he gave up the specious cognates and referred to such imitative verbs as chop, dab, and bob. However, he was not quite ready to give up the comparison between Engl. job and Irish gob.


Charles Mackay, the whipping boy of my blog, wrote several good books on the history of English and a fanciful etymological dictionary (1877) in which he traced hundreds of words to Irish Gaelic. He is easy prey for ridicule, and I try to mention him only when his pronouncements have to be refuted. This is one such case. Mackay derived job from the Irish noun ob “refuse.” In 2014, William Sayers considered a new etymology of job that in some way resembles Mackay’s, except that he began with Shelta gruber “work, job” and Irish obair “work.” (Shelta is the in-group language of itinerant Irish handymen.) I’ll skip some of Sayers’s combinations, because he mentions them only to conclude that they should be rejected. But some of his ideas are disappointing. Among them is the reference to slang being derived from thieves’ language. Obviously, he did not know Mackay’s etymology. After a circuitous journey through several languages he ended up offering the early Parisian slang word jobbe “fool” as the etymon of Engl. job. The French word goes back to the name of the biblical Job (from this name English has the noun jobation “reprimand, talking-to”). He finished his excursus with the appeal: “…let us try to recapture some of the racy flavour of the word as used in Dr Johnson’s century” (the article was published in Notes and Queries; hence the British spelling of flavor). The flavor has been recaptured, but the etymology remained hidden. We are left wondering how a piece of French slang meaning “fool” became part of “low” English signifying “illicit affair.” In Sayers’s reconstruction, the ties between job, verb, and job, noun, have not been so much as mentioned.


“Friends give the afflicted Job a jobation.” Image credit: “The Patient Job” by Gerard Seghers. Public Domain via Wikimedia Commons.

At the moment, French gob is treated in dictionaries as the most probable source of job, though no one can explain the substitution of j- for g-. To my mind, those who connect the noun and the verb and derive the noun from the verb are right. Besides, I also share the opinion that job is a member of the family made up of chop, jog, shog, and their likes. The root of Greek kópos belongs here too. The Slavic cognate of kópos is Russian kopat’ “to dig” (with related verbs elsewhere; stress on the second syllable) and, very probably, kapat’ “to drip” (stress on the first syllable). Consider also Dutch kappen “to cut, chop,” which was borrowed in this form into German. Kop, cap, hop, gob, and so forth are typical onomatopoeias or “sound symbolic” formations. (See also the post of June 10, 2015 in which the verb dig is discussed.) In English monosyllables, initial j– outside unquestionable borrowings from French, often characterizes expressive words, which are, unsurprisingly (as journalists like to say), of obscure origin. Jag, jog, jam (verb), jaunt, jig, and jump belong here. Job (verb) could of course be an arbitrary, “symbolic” alteration of gob, but, as likely, it could be a “rootless” independent formation.


In sum, the etymology of job will not appear as a mystery if we agree to derive the noun from the verb “to strike, peck,” as is done, for example, in dealing with the history of stint “to diminish” (originally “to shorten”). The noun job first meant “a piece of work; a temporary occupation,” and it has retained this sense. One can have no permanent work but several jobs. The word owes its world-wide popularity to American English. Amazingly, someone who is unemployed is called jobless (compare the noun joblessness), even though workless also exists but does not seem to be used too often. My job is done.


“Who said that rootless things cannot flourish?” Image credit: “Boletus Mushroom Nature Terrain Landscape Autumn” by Winsker. CC0 via Pixabay.

Featured Image Credit: “Figures Professions Work Funny Fun Career Job” by Alexas_Fotos. CC0 via Pixabay.


The post The word “job” and its low-class kin appeared first on OUPblog.


Published on December 13, 2017 04:30

Will 2018 be a turning point for tuberculosis control?

Although tuberculosis (TB) has plagued mankind for over 20,000 years and was declared a global emergency by the World Health Organization (WHO) in the early 1990s, political attention and funding for TB has remained low. This looks set to change for the first time.


On 17 November 2017, 75 national ministers agreed to take urgent action to end TB by 2030. This landmark commitment, known as the Moscow Declaration, was made at the WHO Global Ministerial Conference on Ending Tuberculosis, which was attended by leaders such as Russian President Vladimir Putin, the UN Deputy Secretary General, and the WHO Director-General. Further commitments from heads of state will be called for in September 2018 at the first UN General Assembly High-Level Meeting on TB. The resulting increases in investment and political pressure to end TB are of course critical to progress. But to push ahead in the right direction requires critical analysis of strategies that have and have not worked over decades of TB control. And to push ahead at the right pace, which may not be the fastest pace possible, requires a deep understanding of the health systems that are expected to scale up TB control activities.


My encounter with a middle-aged paramedic in charge of “patient monitoring and data management” in Myanmar illustrates some of these complexities well. Sitting in a room surrounded by stacks of paper records with a hand-drawn keyboard on his desk, Tun Aung* explained that he was practising for when a computer arrives, which he had been hoping for years was imminent. This would allow him to computerise patient data and track whether patients in Yangon, Myanmar, are completing their drug resistant TB medication over the lengthy two-year course of treatment.



Tun Aung’s hand-drawn keyboard in Myanmar. Used with permission.

This situation not only demonstrates the lack of human resource capacity and infrastructure facing health facilities in countries like Myanmar but it also serves to highlight a conundrum of global health funding. With funding comes targets such as giving more people the medication they need. However, with TB in particular, where drug treatments are lengthy, it’s one thing to start patients on treatment but another to ensure they continue. Unless health systems and staff have the capacity to monitor this effectively, there is a real risk of non-completion and more drug resistance.


While ambitious goals to end TB help to galvanise support, this will not be the first time that TB control targets have been set, and lessons can be learnt from history. In the 1990s, global targets were set for the minimum proportion of cases that must be detected (70%) and successfully treated (85%) to result in an estimated 10% decline in new TB cases every year. With much investment from international and national funders, and efforts from stakeholders in low resource, high-TB burden countries, most countries have now met these targets.


But the decline in TB cases since 2000 has been minimal at only 1.5% per year, nowhere close to the anticipated impact of meeting the global targets. Western China, which has exceeded all TB related targets, failed to see a reduction in TB incidence. A recent investigation – the largest study of its kind analysing data from almost 80,000 TB patients diagnosed between 2006 and 2013 – shed light on a potential reason for this.


Simply put, the 70% target for detecting new TB cases did not represent how long it was taking TB patients to be diagnosed, only how many were being diagnosed. More than a third of TB patients were only diagnosed after more than 90 days of symptoms (most commonly a persistent cough), providing a lengthy period over which the infection could be transmitted. In light of these findings, it is perhaps less surprising that the number of new TB cases occurring in this region of China did not decrease as expected based on the high case detection rates.


“In the rush to meet targets for getting as many patients on treatment as quickly as possible, we forgot that drugs alone do not successfully cure patients – skilled and motivated doctors, nurses with tools to monitor patients’ progress do.”

In the study involving Tun Aung in neighbouring Myanmar, our findings have implications for increases in drug resistance, the global nemesis. In 2012 the world learnt about the explosive drug resistant TB problem in Myanmar, with an estimated 9,000 new cases being generated every year. Attracting most attention and outrage was the fact that, due to resource limitations, more than 90% of the drug resistant TB patients were not being treated. During a visit to Yangon’s overburdened health facilities in this period, the pressure on international funding bodies from growing patient waiting lists and stories of avoidable deaths while patients suffered untreated was palpable. The hand-drawn keyboard was just one example.


Soon after I returned, funding for drugs to treat Myanmar’s drug-resistant TB patients was made available, and the government TB control programme was given aggressive targets to treat 100% of patients by 2015. This would require a huge increase, amounting to several thousand, in the number of patients treated each year. Unfortunately, funding for laptops and training for staff struggling to run Myanmar’s public health services did not receive as much attention.


In the rush to meet targets for getting as many patients on treatment as quickly as possible, we forgot that drugs alone do not successfully cure patients – skilled and motivated doctors, nurses with tools to monitor patients’ progress do.


The unprecedented political attention to TB is no doubt a huge victory resulting from years of advocacy efforts, but could the resulting urgency to act end up creating a worse drug resistance problem? The UN meeting on TB in September 2018 will bring together stakeholders from around the world to commit resources in order to accelerate towards a shared goal. However, acceleration will only lead to progress if we are moving in the right direction. For that we need to base our investment decisions on evidence of what has and has not worked.


*Name has been changed for confidentiality.


Featured image credit: Pulmonary Tuberculosis Collection by Puwadol Jaturawutthichai via Shutterstock.


The post Will 2018 be a turning point for tuberculosis control? appeared first on OUPblog.


Published on December 13, 2017 03:30

December 12, 2017

Ten key facts about Puerto Rico after Hurricane Maria

Puerto Rico, this year’s Place of the Year, has been in the media spotlight in the last year for several reasons. First, the island is undergoing its most severe and prolonged economic recession since the Great Depression in the 1930s. Between 2006 and 2016, Puerto Rico’s economy shrank by nearly 16 percent, and its public debt reached more than 73 billion dollars in 2017. Second, the island is experiencing a substantial population loss, largely due to emigration. Between 2000 and 2016, the number of Puerto Rico’s inhabitants decreased by 10.6 percent, from 3.8 to 3.4 million people. Third, in June 2016, the US Congress approved the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA), which effectively placed the island’s government under direct federal control. Finally, in September 2017, two major hurricanes, Irma and Maria, struck the island, wreaking havoc in a country already ravaged by economic misfortunes. Below are ten things that have occurred in the aftermath of the hurricanes.



On 20 September 2017, Puerto Rico suffered its worst natural disaster since 1928. Category 4 Hurricane Maria caused catastrophic damages on the island, including more than 94 billion US dollars in estimated property losses, hundreds of deaths, the virtual destruction of the electrical power grid, the elimination of thousands of housing structures, and the collapse of most telephone lines and cell networks.
On 28 September, the Trump administration temporarily waived the 1920 Jones Act for Puerto Rico, which requires that all merchandise be shipped to the island on US-owned and operated vessels, thus doubling the price of imported goods on the island. The Jones Act was reinstated ten days later.
On 3 October, President Donald Trump visited Puerto Rico to survey the damages caused by hurricane Maria. The president warned that federal emergency aid to the island could not be extended forever; he congratulated federal agencies for their rapid response to the disaster; and quipped that Puerto Rico’s public debt would have to be “wiped out” in the aftermath of Maria.
Puerto Rican communities throughout the United States—from Florida to New York—quickly organized themselves to raise funds and collect food, water, batteries, and other supplies for Puerto Rico. Several Puerto Rican and Latino celebrities, including Ricky Martin, Jennifer López, Marc Anthony, Lin-Manuel Miranda, Daddy Yankee, Gloria Estefan, and Pitbull joined forces to help the relief effort.





Many US universities and colleges, including several in Florida, New York, Connecticut, Rhode Island, Louisiana, Texas, and California, have offered in-state tuition rates, scholarships, and other financial assistance to students displaced from Puerto Rico. On 1 December, the University of Puerto Rico reported it had lost 1,561 students after hurricane Maria, most of whom transferred to mainland institutions of higher learning.
Forty-five days after hurricane Maria made landfall, 59 percent of all households in Puerto Rico still did not have electricity and approximately 17 percent did not have access to drinking water. As of 4 December, sixteen of 78 municipalities continued to be without power.
Poverty increased from 44.3 percent of the island’s population before the hurricane to 52.7 percent afterwards. The unemployment rate rose from 10.1 percent in August to 11.9 percent in November.
In the aftermath of hurricane Maria, Puerto Rico is facing increasing public health risks. Among them are possible outbreaks of mosquito-related diseases, such as Zika, Chikungunya, and dengue, as well as leptospirosis, a disease caused by drinking water contaminated by urine from infected rats and other rodents.
One of the most tangible effects of the post-Maria situation has been the substantial increase in migration from Puerto Rico to one of the fifty states of the American union, particularly to Florida. Between 3 October and 4 December, more than 212,000 Puerto Ricans had traveled to Florida, and most of the newcomers are expected to remain living stateside.
The recent tax bill approved by the US Congress will deal another serious blow to the Puerto Rican economy, by imposing a 20 percent tax on goods made on the island and shipped to the United States.

Featured image credit: Puerto Rican Day Parade by Ricardo Dominguez. CC0 public domain via Unsplash.


The post Ten key facts about Puerto Rico after Hurricane Maria appeared first on OUPblog.


Published on December 12, 2017 05:30

Communities can prevent violence

Every day, the news is flooded with stories of different types of violence, and we’re bombarded with relentless reports of violence in this country. Our register of national tragedies keeps growing: hate crimes, mass shootings, and #MeToo headlines are only the most recent outbreaks of an epidemic of violence in our homes, public spaces, and communities.


It’s easy to feel overwhelmed in an environment where violence takes so many forms, adversely affecting public health in both clear and complex ways. But we’re not helpless. We don’t have to keep living—and dying—like this, leaving our children a legacy of trauma. We have the power to change.


Violence doesn’t occur in a vacuum. It’s often a result of a systematic wearing down of communities, families, and individuals by adverse circumstances. Over my decades-long career in public health, I’ve worked with communities across the country to understand and address the causes of violence. I’ve learned a lot from people working to prevent gun violence, domestic violence, and violence affecting youth.


It’s all connected.


To prevent violence, we need to understand that many forms of violence share underlying causes, including economic insecurity, trauma, racism, misogyny, social disconnection, and access to deadly weapons. Often, communities affected by one or more of these conditions experience multiple forms of violence. Recognizing shared risk factors empowers us to address multiple forms of violence simultaneously.


Promote a cycle of health, reduce the cycle of violence.


Health and safety are closely connected. For violence prevention strategies to be successful, promoting health is essential. Just as risk factors for violence may be shared across multiple forms of violence, we know that promoting resilience insulates people from violence. Education, strong social networks, and norms that support healthy relationships all protect against violence. People need access to jobs and education, as well as a sense of feeling safe and welcome in their environment to walk in their neighborhood, engage with their neighbors, or let their children ride bicycles in local parks. Those who are safe in their environments are more likely to have strong social networks, get enough physical activity, and consume healthier foods – whether by growing vegetables in an outdoor garden or walking or taking the bus to markets that sell healthy food.



Man in Brooklyn park by Brooke Cagle. CC0 public domain via Unsplash.

Communities most affected by violence know best.


Homicide in South Los Angeles, as in many urban areas across the United States, is one of the leading causes of premature death. “Parks After Dark” launched in 2010 as a prevention strategy of Los Angeles County’s Gang Violence Reduction Initiative. Three participating parks offered extended evening hours to welcome members of the community to interact in well-lit, supervised parks. Parks After Dark also promoted health by organizing sports teams and exercise programs; providing classes on healthy cooking, literacy, juvenile justice, and parenting and computer skills; and offering arts and crafts, free concerts and movie screenings, and access to health and social service resources. The program has since expanded with the support of community groups and the city’s Department of Public Health. The result? Violent crimes in the communities surrounding the original three parks declined 32% during summer months from 2009 to 2013. In nearby communities that did not participate in Parks After Dark, violent crime increased 18% during this same time period. According to surveys, 97% of people who joined Parks After Dark reported feeling safe.


These kinds of strategies have great potential to save lives and transform communities. Violence isn’t inevitable. By investing in what we know works, we can take meaningful action to prevent violence.


Featured Image Credit: Photo by Jerry Kiesewetter. CC0 Public Domain via Unsplash.


The post Communities can prevent violence appeared first on OUPblog.


Published on December 12, 2017 03:30

Ten facts about the evolution of Hollywood

Movie-going has been an American pastime since the early 20th century. Since 1945 we have seen Hollywood rise to its apex, dominating movie theaters across the globe with its massive productions. It was not always this way, though. Below are 10 facts about the evolution of the American film industry after the Second World War.



After WWII, Americans acquired more leisure time and a greater range of activities. Hollywood faced increased competition from other activities, from amusement parks to golf outings and weekend road trips. The biggest threat, however, came from television’s growing popularity during the 1950s.
The Supreme Court’s Paramount Decision of 1948 forced studios to divest from owning theaters and outlawed block-booking, the practice of forcing theaters to buy groups of films in order to book the most desirable films. Hollywood’s business strategy dramatically shifted from a focus on owning the premier downtown theaters to further controlling the channels of distribution.
While fortifying their distribution power, studios devised several strategies to drive attendance. Color film, occasionally used for major features in the 1930s and 1940s, was used in 50% of films by the mid-1950s, a visible alternative to the black-and-white television image.
Hollywood’s financial disaster during the 1960s helped sustain one of its most celebrated periods of artistic achievement, the Hollywood Renaissance. Beginning with innovative films like The Graduate (1967), Bonnie and Clyde (1967), and Easy Rider (1969), inexpensive pictures that became runaway hits, Hollywood studios acknowledged both the growing importance of, and their growing unfamiliarity with, the teenage and twenty-something audience that proved to be the most reliable moviegoers of this period.
The construction of multiplexes, usually adjacent to suburban shopping malls, boomed throughout the 1970s, creating venues convenient to the core suburban youth audience and allowing top films to occupy multiple screens per theater.
Image credit: “Millenium Falcon Falcon Star Wars Han Solo” by JAKO5D. CC0 via Pixabay.

Landmark hits like Jaws helped establish a new dominant strategy for Hollywood studios. They began targeting Christmas and summer for their top releases, advertised extensively on television, and booked top films on an increasing number of American screens.
More than anyone else, George Lucas and Steven Spielberg helped define the blockbuster aesthetic of the 1980s and early 1990s, collaborating on Raiders of the Lost Ark and its sequels, while Lucas’ Star Wars sequels were each top hits and Spielberg’s E.T. (1982) earned over $350 million, as did Jurassic Park (1993). All of these projects were “high concept,” relied heavily on special effects, featured male protagonists, and exploited ancillary markets from toys to soundtracks and video games.
Hollywood began producing more films with action-filled plots because they translated well for global audiences, whose importance greatly increased by the early 1990s, when foreign profits surged ahead of domestic profits. A growing reliance on the foreign box office further increased marketing and production costs and reinforced Hollywood’s high-concept approach.
Rising costs and potential profits drove a new wave of conglomeration, this time led by media firms rather than more diversified corporations. Unlike the conglomerates of the 1970s, these media companies found “synergy” between Hollywood films and print media, cable networks, home video, television series, recorded music, and even amusement parks. These firms could exploit hit films such as the Indiana Jones franchise across various media all under the same corporate umbrella.
The media conglomerates that owned the Hollywood studios grew further integrated by the early 2000s, resulting in the handful of conglomerates that now control the vast majority of the film and television industry known as the “Big Six”: 20th Century Fox, Warner Bros., Paramount Pictures, Columbia Pictures, Universal Pictures & Walt Disney Pictures.

Featured image: “Movie Reel Projector Film Cinema Entertainment” by Free-Photos. CC0 via Pixabay



The post Ten facts about the evolution of Hollywood appeared first on OUPblog.


Published on December 12, 2017 02:30
