Oxford University Press's Blog, page 470

September 2, 2016

The NHS and the Church of England

Politicians are more than anxious over negative public opinion on the National Health Service, falling over themselves to say that the NHS is “safe in our hands.” Meanwhile, the Church of England (C of E) is concerned about losing “market-share,” especially over conducting funerals. One way of linking these two extremely large British institutions is in terms of life-style choices. Or, better still, over lifestyle and death-style choices. What have the NHS and the C of E in common over lifestyle? Just how does a people’s lifestyle link to this phrase, “death-style”? Are they closely linked or kept at respectful distances? These “styles” embrace key issues in our life involving grief, cemetery or woodland burial, cremation, and the choice of who should conduct funerals. But one thing always leads to another and soon we have sporting, celebrity, political and royal deaths on our hands, as well as afterlife beliefs of many sorts. Meanwhile, the media and social media flash opinions everywhere. Underneath all this lies a kind of power flow into and out of the British “establishment,” military and police, popular opinion and celebrity. Despite dense flows of information, many are unaware of the forces driving unfamiliar parts of British life, with the Brexit vote floodlighting the ignorance some have of other people’s views on the world.


So, what matters? It is my conviction that matters of life, health, sickness, well-being and death are what matter. They are universal, but shape up as British because of the National Health Service and, I think, because of the heritage and present influence of the Church of England. And let that church also stand as a shorthand for other religious groups. Tolerance, fair play, and free speech are more familiar candidates for “traditional values,” but the health factor drives our life concerns. How or where are these values united, nurtured and practised, or ignored and trashed? Where do we “see” fair play in terms of health and well-being? While the obvious place to discuss this would be parliament and politics, an extensive loss of confidence and trust leads me back to the NHS and the C of E. These two institutions, amongst the largest in the country, resemble invisible Trident submarines or starkly visible aircraft carriers that “carry” the cultural value of force through deterrence. “Carrier institutions” are vital for a people’s well-being. Justice under the law is a treasured prize in today’s power-ridden and often power-corrupted era. If trust is lost in these, trouble awaits. And sometimes trust is shaken and of dubious status.



Queen Elizabeth Hospital Birmingham, the largest single site NHS hospital in the United Kingdom. Photo by Tony Hisgett. CC BY 2.0 via Wikimedia Commons.

Where then can the abstract values of a complex society be carried, carried out, and be seen to be so? I think the Church of England helped do this for centuries and still, to some extent, does so in thousands of UK contexts. But I also think that the NHS is taking up the slack, especially over the fundamental human concerns of health, well-being, and death. At a time when people’s ideas focus more on this life than on an afterlife, these issues come to the fore. If we speak of salvation at all, and it is the core message of Christianity, many see it as a good life here and now, and a good death here and now. And it is the NHS that is best placed to capture the desire for health and well-being, as well as providing a frame and context for dying and death.


But, what are values? They are, I think, ideas pervaded by emotion. Moreover, if and when some values go further and help shape our identity they become “beliefs.” These may be religious, spiritual, secular, scientific, philosophical, political, naturalistic or any shape or form. And beliefs matter because our identity and sense of who we are matters. Historically, the NHS started as an idea but has swelled into a value and even into a belief as our emotions of fear and hope are lodged in it.


In an age anxious over old-old age and the fear of disruptive sickness, the NHS has emerged as caregiver “from the cradle to the grave.” Because life really matters, Britons have developed a real commitment to the National Health Service as the framework and safety-net for personal and family life. The NHS helps make Britain a safe and less risk-ridden society than it was before the Second World War. The rise of the Welfare State helped make industrialized Britain a far safer place for workers and their families alike. Treatment at the point of need no longer depends upon wealth. The system that emerged includes pre-natal classes, maternity provision, child-healthcare, accident and emergency care, health monitoring of ageing groups, and end of life care. These capture “cradle to grave” expectations. But once expectations are born they take on a life of their own. “Accident and Emergency” is not an idea, it is more like a symbol packed with emotion, more like a value or even a belief than a mere political idea.


This helps us see the force of arguments over which drugs, if any, should be provided by the State to extend a person’s life by some months or years. Should medication be allotted to some groups at the expense of treatment for others? Should lifestyle drugs be free? And what about death-style drugs? This is where deep issues arise and prompt Medical Ethics. The questions of euthanasia and assisted dying are, notably, more important than ever before. It is no wonder that multiple collisions arise over “ethics”: but are they “religious” or “medical” ethics? It is fascinating that the closing decades of the twentieth century are often described as increasingly “secular” at the same time as many institutions set up their own “ethics committees.” Ethics committees seem to be one marker of the decline of public religious influence.


So, what of the Church of England? Despite some bad cases and sleepy eras it has, for centuries, cared in its own way and with its sense of Christian Ethics for people from the cradle to the grave—from Christening to Cremation. The rites of infant baptism, confirmation, marriage, and funeral ceremonies, as well as such periodic festivities as Christmas, Easter, and Harvest Festivals, all played high-profile parts in British life. Royal and celebrity marriages and funerals, along with death in tragedies, have all been played out through Abbeys and Cathedrals as well as local parish churches across the country. Decline in active church attendance and the decreasing role of clergy in funerals can be easily discussed in terms of secularization but I want to think more in terms of options of care and of the core cultural values that drive a society. Through extended debates on women’s ordination, the Church has even been a carrier for women’s rights.


Today, the C of E, with other churches and religious groups, is still there even if serving a decreasingly engaged population. It is understandable that bishops and clergy would like to extend their pastoral activity into baptisms, weddings and funerals; it is also very understandable why politicians shout through the media that the NHS is “safe in their hands” – for that phrase really means “you are safe in our hands.” Once upon a time the idea of “salvation” included the idea of health and ultimate heavenly well-being: just now it roosts upon our owner-occupier housetops, and is much desired by the homeless.


Featured image credit: Photo by AvidlyAbide. CC BY 2.0 via Flickr.


The post The NHS and the Church of England appeared first on OUPblog.


 •  0 comments  •  flag
Share on Twitter
Published on September 02, 2016 04:30

Hypnosis and the conscious awareness of intentions

A hypnotist tells a subject that their outstretched arm will begin to rise upward as though tied to an invisible balloon. To their astonishment, the subject’s arm rises just as suggested, and seemingly without their intention. While it may appear as though the subject is being controlled by the hypnotist, it is well established that nobody can be hypnotised against their will. Hypnosis therefore seems to present a paradox: to respond to a hypnotic suggestion that your arm will move is to voluntarily perform an apparently involuntary action. How can this hypnotic response be explained?


Unconscious intentions


Over 30 years ago Benjamin Libet and colleagues conducted a series of classic experiments in which participants watched a clock and reported the time that they experienced an urge to lift their finger. Because brain activity thought to drive the movement was found to occur earlier than the reported time of the urge, these experiments suggest that we become conscious of our intentions after they have been set in motion. The wider implications of Libet’s investigations into the timing of intentions and the many studies he inspired are still contentious, but his method of measuring the time between the subjective experience of intending and the moment of an action provides a relatively simple way of investigating our conscious experience of intending and its relationship to hypnosis.



Hypnotic séance by Richard Bergh, 1887. Photo taken by Szilas in the Nationalmuseum, Stockholm. Public domain via Wikimedia Commons.

Higher order thoughts


Our brains are constantly processing vast quantities of information, yet we are conscious of only a few aspects of this information at a time. Therefore, our unconscious mental activity far surpasses what we are conscious of at any given moment. According to higher order thought theories of consciousness, an unconscious mental state becomes conscious when there is another mental state that is about it. If we accept that, as implied by Libet’s results, intentions can be unconscious, these theories then suggest that whether or not we become conscious of an intention depends upon whether or not we have a mental state about that intention. From here we can see how it might be possible to act voluntarily whilst experiencing the act as involuntary. The cold control theory of hypnosis argues that to respond hypnotically is to perform an intentional action whilst maintaining an experience of involuntariness about your action. So, at the unconscious level, the action is intended, but is experienced as involuntary because the mental state that would usually be directed at it to form the conscious experience of intending is inaccurate. By analogy with optical illusions in which conscious experience is not veridical, hypnotic responding might therefore be considered an ‘agentic’ illusion – to respond hypnotically is to consciously (and voluntarily) experience a voluntary act as involuntary.


Hypnotic responding as an ability


Scientific research into hypnosis makes use of hypnosis scales to divide the population by the ability to respond to hypnotic suggestions, or “hypnotisability”. To generate a hypnotisability score, a standardised hypnotic induction is followed by a sequence of suggestions of varying difficulty, and the subject’s response is recorded for each suggestion. The resulting score can then be used to assign participants to low, medium, or high hypnotisability categories. If hypnotic responding requires maintaining a conscious experience of involuntariness whilst performing a voluntary act, we might expect these groups to differ in the relationship between their intentions and the conscious experiences that are about them. So, some highly hypnotisable people may be more easily able to avoid having accurate conscious experiences of their intentions because their intentions are less accessible to their conscious mental states. We used Libet’s timing of intention task to explore this hypothesis, asking people of varying hypnotisability to report the position of a fast-moving clock hand at the moment they became aware of their intention to lift their finger.


Mindfulness of intentions


We also measured the timing of an intention to move in a group of experienced Buddhist mindfulness meditators. Mindfulness meditation involves the cultivation of awareness of mental states, including intentions, and Buddhist scholars have argued that meditators should have greater access to their intentions and should therefore be aware of their intentions earlier than non-meditators. The figure below shows the time between the moment of the finger movement and the reported time of conscious awareness of the intention to move. As predicted, highly hypnotisable people reported their awareness of intention as occurring late – in fact, after they had actually moved, while less hypnotisable people and mindfulness meditators reported earlier awareness of intentions.


Mean judgement of intention time per hypnotisability group. Figure created by the authors and used with permission.

These results are consistent with the suggestion that individuals vary in their conscious access to intentions, and that this variation tracks differences in hypnotisability – the ability to maintain a conscious experience of involuntariness whilst performing a voluntary act. Furthermore, there is evidence that mindfulness meditators are less hypnotisable than non-meditators, raising the possibility that mindfulness meditation may decrease hypnotisability by increasing conscious access to unconscious intentions. These results may inform theories of illnesses in which the experience of voluntariness is disrupted. Notably, a later awareness of intention has also been reported in functional motor disorder (FMD) patients. In FMDs, involuntary actions characteristic of nervous system dysfunction (e.g., tremors) occur in the absence of detectable damage to the nervous system. These disorders have been associated with hypnosis since the 19th century, and this study raises the possibility that they may be attributable to a dysfunction in mechanisms supporting the awareness of unconscious intentions.


Featured image credit: Hypnosis by DavidZydd. CC0 public domain via Pixabay.


The post Hypnosis and the conscious awareness of intentions appeared first on OUPblog.


Published on September 02, 2016 03:30

OHR Virtual Issue: from roots to the digital turn

We spend a lot of time in this space pointing to particular people or projects that we think are doing interesting things with oral history. In June we talked to Josh Burford, who is using oral history to start important conversations in North Carolina. In April, we heard from Shanna Farrell, who discussed Berkeley’s Oral History Summer Institute. Last September we talked to Doug Boyd about how he uses oral history in the classroom, and the incredible potential that OHMS (Oral History Metadata Synchronizer) has for making oral histories more accessible. We love highlighting the exciting things others are doing, but sometimes we can’t help but brag about our own work. We’ve done something really cool, and we are so excited to share it with you.


This week, in recognition of the OHA’s 50th anniversary, we are releasing our very-first-ever-super-exciting virtual issue of the OHR. This special edition draws from more than forty years of work in the Review, from as far back as our first issue in 1973 and as recent as 2013. The articles included all investigate – from different angles – the nature and value of oral history. Together they demonstrate some of the ebbs and flows at work within our discipline over the last four decades.


The earliest article in the virtual issue, “Black History, Oral History, and Genealogy” by Alex Haley (yes – that Alex Haley!), retraces his steps as he uncovered his family history, from fragmented oral histories to international research that would eventually become the beloved book and multiple television series Roots. In the process, Haley makes a compelling case for the value of oral history, connecting it to his family’s oral memories. The article is especially timely with the reboot of Roots, and could be a useful teaching tool for oral historians hoping to demonstrate the power of oral history.


On the other end of the spectrum, the most recent article republished in the virtual issue, “Shifting Questions: New Paradigms for Oral History in a Digital World” by Steve Cohen, asks how the digital turn makes us reconsider fundamental questions about the form and presentation of oral history. Despite four decades separating these two articles, they both demonstrate the value of oral history – and the work it takes to do it well.


In addition to these authors, the issue includes work from Ron Grele, Michael Frisch, Charlie Morrissey, Gary Okihiro, Linda Shopes, Kathryn Anderson, Susan Armitage, Dana Jack, Judith Wittner, Alessandro Portelli, Valerie Yow, Daniel Kerr, Mark Feldstein, Jerrold Hirsch, Erin Jessee, and Siobhán McHugh.


Putting together this special issue required months of soliciting suggestions, digging through back issues, and continually narrowing down a long list of articles until we felt confident that the 15 selections we made were some of the best reflections of the nature and value of oral history. In the process we were continually reminded just how much incredible content we’ve published over the years. Over the next few months we’ll be unlocking some of these articles, in addition to those selected for the virtual issue, so make sure to keep an eye on our Twitter, Facebook, and Google+ pages every #ThrowbackThursday for greatest hits from over 40 years of the Oral History Review.


Keep up to date with all of the exciting things happening in the oral history world by following us on Twitter, Facebook, Tumblr, or Google+.


Note: the article “The Affective Power of Sound: Oral History on Radio” within the virtual issue contains audio files that work best in Internet Explorer. If the audio files appear as a link, you can access the files by right clicking on the audio links, saving them to your desktop, and playing them from there.


Featured image: Neon by nuzree, Public Domain via Pixabay.


The post OHR Virtual Issue: from roots to the digital turn appeared first on OUPblog.


Published on September 02, 2016 02:30

Conditioning in the classroom: 8 tips for teaching and learning

You are probably familiar with animal learning and conditioning. You probably know that certain behaviours in your pet can be encouraged by reward, for example. You may also know something of the science behind animal conditioning: you may have heard about Pavlov’s drooling dogs, Skinner’s peckish pigeons or Thorndike’s cunning cats. However, what you may not know is that the scientific study of animal conditioning has provided psychologists with an armoury of principles about how training can be most effective. From this research, general rules of learning have been determined, and further investigated in mammals that are rather less furry than your average pet cat or dog – humans. Humans, like other animals, are adept at being trained and learning new information, and one of the most important contexts in which this happens is in school. So, as you go back to school, here are some tips from the study of conditioning that can be applied to teaching and learning in the classroom.



Practice makes perfect. OK, maybe not perfect – but certainly better. One of the ways in which psychologists think learning takes place is through association: we learn by associating things together. It may be learning which elements, together, form hydrochloric acid, or it may be learning the sequence of the events in the water cycle. In both of these cases associations need to be formed between things, and the strengths of these associations grow the more frequently they are experienced; so practise.
Put things together. How should things be experienced in order for them to be associated? Close together in space and in time is best. So, when trying to connect concepts, or other information, put them close together in space and in time – the same goes for reward and praise: they are more likely to be effective if they closely follow a desired behaviour. Try it yourself – give yourself a treat every time you have successfully completed a study session.


Make it stand out. You can practise something all you want, but if the things that you are trying to learn about are not particularly significant to you then you will probably struggle. Humans and other animals learn better about things that are salient – things that stand out. So make the stuff that you are trying to teach, or learn about, stand out. Sometimes this can be hard, as not everything is intrinsically salient. You can overcome this by turning the thing that is dull into something that is not: salience can be acquired through association. For example, connect the stuff that you are trying to learn about with things that are important or entertaining to you; these will be much easier to associate.


Dogs pets sleepy by 27707. Public domain via Pixabay.


Learn from your mistakes. Don’t be afraid to get things wrong. Learning progresses particularly well when there is a difference between what you think is correct, and what is actually correct. You can exploit this to help your revision with a technique called retrieval practice: test yourself on material that you have learned in the past and then, afterwards, check your notes or textbook for accuracy. Getting things wrong will help you learn, so do not be put off when things get difficult.


Avoid interference. When you are trying to learn, the presence of other things around you can cause interference – extraneous sights and sounds can overshadow your learning. This is most obvious when something intrinsically distracting – such as a television – is present whilst you are trying to learn. But, as I noted earlier, even dull things can acquire the ability to stand out through learning, so be mindful of what accompanies the context of teaching and learning.


Spread it about. As tempting as it is to cram all your exam revision into a short space of time, learning (and therefore teaching) is more effective when it is distributed. Spread out the instances in which you are trying to learn or teach and it will be more effectively recalled later (such as during an exam!).


Mix it up. Learning to tell stuff apart is more effective when it is interleaved. For example, identifying the differences between two very similar mathematical equations is easier if they are compared side by side. Staring at one for hours, and then the other, is less productive. So, if you want to emphasise distinctions between things, interleave them rather than cluster them together.


Keep things similar. However, sometimes we want the information that we learn to transfer to new circumstances or contexts, and to things that we didn’t originally teach or practise. That is to say we don’t want our learning to be distinctive – we want it to be general – so that, for example, you can identify common principles rather than just a collection of specific facts. Learning generalises well between circumstances that are similar – so take advantage of this where you can. For example, make the conditions of your study time as similar as you can to the conditions of where you will be tested. A concrete way of doing this is to actually test yourself during study time.

Featured image credit: Colored Pencils stationary by PIRO4D. Public domain via Pixabay.


The post Conditioning in the classroom: 8 tips for teaching and learning appeared first on OUPblog.


Published on September 02, 2016 00:30

September 1, 2016

Exotic – Episode 38 – The Oxford Comment

The word “exotic” can take on various meanings and connotations, depending on how it is used. It can serve as an adjective or a noun, to describe a commodity, a person, or even a human activity. No matter its usage, however, the underlying perception is that it refers to something foreign or unknown, a function which can vary greatly in unison with other words, from enriching the luxury status of commodities, to fully sexualizing a literary work of psychology and anthropology, such as the Kamasutra.


In this episode of the Oxford Comment, we sat down with Eleanor Maier, Senior Editor at the Oxford English Dictionary, Giorgio Riello, co-author of Luxury: A Rich History, Wendy Doniger, author of Redeeming the Kamasutra, Jessica Berson, author of The Naked Result: How Exotic Dance Became Big Business, and Rachel Kuo, contributing writer at everydayfeminism.com, to learn more about the history and usage of the word.



The post Exotic – Episode 38 – The Oxford Comment appeared first on OUPblog.




Published on September 01, 2016 03:30

Executive remuneration

For many years executive remuneration has been one of the ‘hot topics’ in corporate governance. Each year there is a furore around executive remuneration with the remuneration of CEOs often being a particular area of contention. This year we have seen the spotlight focussed on the remuneration of CEOs at high profile companies such as BP and WPP resulting in much shareholder comment and media attention.


There has been a lot of shareholder dissent this year over the executive remuneration packages at FTSE 100 companies. David Oakley, Michael Pooler, and Scheherazade Daneshkhu state in their article: ‘Some of Britain’s largest companies are preparing for a summer of tense consultations on executive pay after one of the biggest waves of shareholder protests since votes on remuneration were introduced in 2002.’ They highlight the top ten protests of 2016 (measured by the percentage of votes cast against remuneration packages), with BP and Smith & Nephew receiving the highest levels of votes against, followed by Shire, Anglo American, Devro, Aberdeen Asset, Lakehouse, SDL, Bunzl and Thomas Cook. However, only at BP and Smith & Nephew were the votes against in the majority, though the level of dissent was significant and sufficiently high to give concern to boards and remuneration committees at all of the companies listed in the top ten.



Business man newspaper by Unsplash. Public domain via Pixabay.

The interesting case of Lloyd Blankfein, Chairman and Chief Executive Officer at Goldman Sachs, gave rise to corporate governance concerns on two fronts: firstly, Blankfein has held the roles of Chairman and CEO since 2006; secondly, there were concerns over the executive remuneration packages for the CEO and other executive directors. Alistair Gray, in his article, reports that shareholders and corporate governance advisors, such as Institutional Shareholder Services (ISS), were concerned that the costs of a multi-billion dollar legal settlement relating to mortgage-backed securities mis-sold before the financial crisis were not taken into account when determining executive remuneration. He highlights that ‘about a third of votes were cast against the remuneration of top managers……[and] a proposal to separate the roles of chairman and chief executive after Mr Blankfein steps down received a similar level of support.’ Nonetheless, around two thirds of shareholders supported the executive remuneration plan; the fact that Mr Blankfein and other top executives each took a $1 million pay cut in 2015 may have influenced this voting outcome.


Of course, there is also concern over executive pay in many other countries. For example, recently the French government has taken action to give shareholders more power over executive pay. Anne-Sylvaine Chassany’s article highlights how a dispute at Renault between shareholders and the board of directors over executive pay contributed to a ‘UK-inspired provision in the Socialist government’s anti-corruption bill [which] will allow shareholders to vote on the pay packages of chief executives when they are hired or when the structure of their compensation changes. But it goes further than the UK say-on-pay approach by also allowing them to turn down the variable part, which is tied to companies’ annual performance, every year.’


It is clear that concern over executive remuneration packages will continue to generate shareholder dissent. The increasing emphasis on shareholder engagement should contribute to institutional shareholders in particular taking action against executive remuneration packages which are seen as over-generous — especially in cases where companies are underperforming.


Featured image credit: Architecture Banking Building by PublicDomainPictures. Public domain via

Published on September 01, 2016 02:30

August 31, 2016

“Clown”: The KL-series pauses for a while

Those who have followed this series will remember that English kl-words form a loose fraternity of clinging, clinking, and clotted-cluttered things. Clover, cloth, clod, cloud, and clout have figured prominently in the story. Many more nouns and verbs belonging to this group deserve our attention, but, since the principle is clear, we should probably make a pause and turn to another subject. However, one kl-word is too interesting to fall by the wayside. Hence the grand finale: a post on clown.


Clown surfaced in English texts in the second half of the sixteenth century and was recorded in several forms: cl– and –n were stable, but the vowels varied for somewhat unclear reasons. The word’s initial meaning was “a countryman, rustic; peasant,” and Macaulay in The History of England, with his fondness for archaic terms, still found it possible to say in 1849: “The Somerset clowns, with their scythes…faced the royal horse like the old soldiers” (OED). I wonder how many of Dickens’s contemporaries understood this sentence correctly. The OED traces the sense “a fool or jester, as a stage character” to almost the same time (1600). The step from “jester; rustic buffoon” to a character in a circus performance is short, but the recorded examples of this usage are late (none antedates 1722).


Our modern idea of a clown.

The word’s meaning in the Elizabethan days suggests that we are dealing with slang and that clown was a term of abuse.  The coexistence of three early forms (cloyne, cloine, and clowne) may reflect the speakers’ uncertainty about the so-called true shape of the word (or, to put it differently, no standard form had yet been established). The word must have been fairly recent, perhaps borrowed from a dialect or from abroad. And indeed, all those who have thought about the origin of clown believe that we have here a loan from another language. However, this is where the consensus ends, for two hypotheses compete: clown, researchers say, may be of Germanic or of Romance origin. The Romance hypothesis is earlier, but it will be more convenient to begin with the Germanic one.


It so happens that in Low German, Frisian, and the Scandinavian languages rather many words resemble clown; they mean “awkward person” and “lump; piece of wood.” Next to them, we find words having the structure klint ~ klant ~ klunt (or with –nd at the end) and referring to all kinds of inanimate objects. The development from “a piece of wood” to “country bumpkin” is trivial. James Murray offered a detailed list of relevant forms and concluded that clown is probably a borrowing from a Low German source. In 1988, Frits van der Kuip investigated the origin of Frisian klúnje (the sense “piece of wood” takes center stage in his material), and the words he cited should supplant the traditional Frisian cognates in entries on clown.  (The article is in Frisian, but linguists have no language threshold. Right?) I am afraid that, while formulating his conclusion, Van der Kuip was not fully aware of the complexity of the kl-group; in any case, he too believed that Engl. clown is a word of Germanic origin.


Murray’s derivation is more reasonable than Skeat’s; Skeat probably depended too much on Modern Icelandic klunni “awkward boorish fellow” and declared clown to be of Scandinavian origin. But such late borrowings into English from Scandinavian, though possible, are rare, and, in dealing with clown, good reasons have to be given for preferring Danish or Norwegian as the source language to Low German. For comparison, clumsy, another sixteenth-century word belonging to the semantic field of clown, may be of Scandinavian origin because it is surrounded by numerous dialectal adjectives and verbs. Here perhaps a local word made it to London, as, for instance, pimp and slang once did, but clown is devoid of such a background. Yet even the history of clumsy is obscure, for all those Scandinavian words can be of German provenance. I would exclude klunni from any discussion of clown. It is said to be a reflex of klunþi (= klunthi). However, this noun was attested late and is, more likely, non-native.


The earliest etymology of clown connects it with Latin colōnus “farmer, settler.” The idea is usually traced to Ben Jonson’s 1633 comedy A Tale of a Tub (not to be confused with Swift’s parody of the same name). In Act 1, Scene 2, a long dialogue occurs in which this derivation is argued, but Jonson might be making fun of what was common knowledge at his time. Apparently, he disliked the new-fangled word, as he disliked clumsy, which he ridiculed in The Poetaster. The first etymological dictionary of English appeared in 1616, and its author (John Minsheu) already thought that clown goes back to colōnus, though he also cited a possible Dutch cognate (the same that much later appeared in Wedgwood’s dictionary!). With very few exceptions, all the English dictionaries, including the many editions of Webster, shared Minsheu’s view. The same holds for the main modern German dictionaries. In the past, some dissenters offered fanciful hypotheses that need not be summarized here. Murray rejected the Latin etymon, for, in his opinion, evidence confirming the ties between clown and colōnus is wanting. The OED online, which has preserved Murray’s text almost intact, no longer mentions colōnus.


THIS is the original, authentic clown.

Here I should recount a small but curious episode in the history of English etymology. On 2 January 1943, a letter, signed by Lexicographer, appeared in The Times Literary Supplement (p. 7). The anonymous author called his colleagues’ attention to the publication by Professor T. F. O’Rahilly of an article in the periodical Ériu that should be of interest to English scholars. He made a special note of O’Rahilly’s etymology of the word clown. The etymology, about which more will be said presently, is indeed interesting. I have no knowledge of the lexicographer’s identity. The person, I presume, must have been employed by the OED, and for some time I wondered whether it was Charles Onions who had written the letter. But this reconstruction cannot be correct, for in the fifties Onions edited The Oxford Dictionary of English Etymology, and, if he had thought so highly of the article in Ériu, he would have made at least some use of it. Perhaps someone will be able to enlighten us on this subject.


This is what O’Rahilly said. As early as 1577, a native of Dublin noted that in his days the husbandmen of Fingal (Fingall) were nicknamed collounes by their neighbors. Colloun must have been the Anglo-French reflex of Latin colōnus “farmer.” It is not unlikely that this word was imported to England from Ireland. The OED, as O’Rahilly adds, has colon(e) “husbandman” (1621): “…to see…a country colone toil and moil.” This colone seems to be the same person as the country clown. Here is one more of his ideas: “I can… suggest that the word was assimilated to the personal name Colin ([from] Nicolin), which in English of Fingall would retain its original stress on the final syllable…. In a satire on the clergy by John Skelton (†1529) the typical country-man is called Colyn Cloute….”


Fingal(l): A possible place from which the word clown came to England.

If O’Rahilly was right, clown does go back to colōnus, but via Irish. By the time this word turned up in English texts (1563), it might have been known for some time. But what about the Germanic words cited in connection with clown? Perhaps they need not be dismissed as irrelevant, but no evidence points to their currency in Elizabethan England, while the Irish route looks real. Let the question remain half-open.


Image credits: (1) “Clown chili peppers” by Rick Dikeman, CC BY-SA 3.0 via Wikimedia Commons. (2) Anglo-Saxon ploughmen, Public Domain via Wikimedia Commons (3) Island of Ireland location map Fingal, CC BY-SA 3.0 via Wikimedia Commons


Featured image: Circus Facade by reverent, Public Domain via Pixabay.


The post “Clown”: The KL-series pauses for a while appeared first on OUPblog.


Published on August 31, 2016 04:30

Poverty: a reading list

Poverty can be defined as ‘the condition of having little or no wealth or few material possessions; indigence, destitution’, and it is a growing area within development studies. In time for The Development Studies Association annual conference, taking place in Oxford this September, we have put together this reading list of key books on poverty, including a variety of online and journal resources on topics ranging from poverty reduction and inequality to economic development and policy.




Oxford Handbook of the Economics of Poverty edited by Philip N. Jefferson


Have countries made progress in mitigating poverty? How do we determine who is poor and who is not poor? What intuitions or theories guide the design of anti-poverty policy? Are there anti-poverty policies that work? For whom do they work? This Handbook examines poverty measurement, anti-poverty policy and programs, and poverty theory from the perspective of economics.


Growth and Poverty in Sub-Saharan Africa edited by Channing Arndt, Andy McKay, and Finn Tarp


This is an open access title and can be read on Oxford Scholarship Online. This book comprehensively evaluates trends in living conditions in 16 major sub-Saharan African countries, corresponding to nearly 75% of the total population.




Global Poverty by Andy Sumner


An in-depth analysis of the global poverty ‘problem’ and how it is framed and understood. The volume questions existing theories of the causes of global poverty and argues that global poverty is gradually becoming a question of national distribution.


The Politics of Poverty Reduction by Paul Mosley


Globally, there is a commitment to eliminate poverty; and yet the politics that have caused anti-poverty policies to succeed in some countries and to fail in others have been little studied. The Politics of Poverty Reduction focuses on these political processes and ‘pro-poor’ policy.


Written for the nonspecialist, The World Bank Research Observer informs readers about research being undertaken within the Bank and outside the Bank in areas of economics relevant for development policy. Poverty is a key focus of published research, as evident from the recent articles ‘Population, Poverty, and Climate Change‘ and ‘How Long Will It Take to Lift One Billion People Out of Poverty?‘


Diverse Development Paths and Structural Transformation in the Escape from Poverty edited by Martin Andersson and Tobias Axelsson


The volume demonstrates how analysis of current growth processes in developing countries can be enriched by paying closer attention to the multifaceted nature of both economic backwardness and successful pathways to escape it.


The Economics of Poverty by Martin Ravallion


The Economics of Poverty strives to support well-informed efforts to put in place effective policies to assure continuing success in reducing poverty in all its dimensions. The book reviews critically the past and present debates on the central policy issues of economic development everywhere. How much poverty is there? Why does poverty exist? What can be done to eliminate poverty?


Economic Growth and Poverty Reduction in Sub-Saharan Africa edited by Andrew McKay and Erik Thorbecke


This volume discusses long-standing, but central, economic issues in Sub-Saharan Africa, including the nature of growth-poverty-inequality relations, agriculture, the labour market and openness, and globalization.


Multidimensional Poverty Measurement and Analysis by Sabina Alkire, James Foster, Suman Seth, Maria Emma Santos, José Manuel Roche, and Paola Ballon


A systematic conceptual, theoretical, and methodological introduction to multi-dimensional poverty measurement and analysis. It provides a lucid overview of the problems that a range of multidimensional techniques can address and sets out a synthetic introduction of counting and axiomatic approaches to multidimensional poverty measurement.


The Shame of Poverty by Robert Walker


The Shame of Poverty discusses the origins of poverty and invites the reader to question their understanding of poverty by bringing into close relief the day-to-day experiences of low-income families living in societies as diverse as Norway and Uganda, Britain and India, China, South Korea, and Pakistan.


The Oxford Handbook of the Social Science of Poverty edited by David Brady and Linda M. Burton


The Oxford Handbook of the Social Science of Poverty builds a common scholarly ground in the study of poverty by bringing together an international, inter-disciplinary group of scholars to provide their perspectives on the issue.


The World Bank Economic Review seeks to publish innovative theoretical and empirical research concerning economic development and poverty for the purpose of designing, implementing and sustaining effective policy in low and middle income countries. ‘Estimating Quarterly Poverty Rates Using Labor Force Surveys: A Primer‘ and ‘Is Workfare Cost Effective against Poverty in a Poor Labor-Surplus Economy?‘ are recently published samples of the journal’s dedication to the study of poverty.


Featured image credit: Aaaarrrrgggghhhh! by Poverty. CC-BY-2.0 via Flickr.


The post Poverty: a reading list appeared first on OUPblog.


Published on August 31, 2016 02:30

The last -ism?

There has lately been something like an arms race in literary studies to name whatever comes after postmodernism. Post-postmodernism, cosmodernism, digimodernism, automodernism, altermodernism, and metamodernism rank among the more popular prospects. Each of these terms ostensibly describes a new mode of literary production after the alleged death of postmodernism. Yet the push to name a new period necessarily raises two important questions that should cause us to look back even as we squint toward the horizon:



What is an -ism?
Do we need periods at all?

If there is some kind of shift happening, if something new is taking place, if metamodernism, for instance, is supplanting postmodernism, then we implicitly assume that this new thing is new in contrast to the old thing that came before. In other words, to name a new -ism requires us to set the previous -ism in stone. But is, or I guess I should ask, was postmodernism ever an -ism at all?


In literary studies, we often use -isms to describe dual phenomena: historical periods that also seem to have their own unique philosophies of art. Now, of course things are never as neat and tidy as an -ism like romanticism seems to make them. Nearly all scholars would agree that romanticism doesn’t end on Tuesday, April 11, 1861, and they would agree it didn’t spring forth fully formed from a particular writer or literary work. Nevertheless, we use these historical-philosophical categories to make an otherwise unwieldy history a little more, well, wieldy. Literary historians have theorized and developed these -isms over time to better understand these writers, works, and time periods. We have also used these -isms to make these writers, works, and time periods more accessible to students. Such categories make it much easier to hold a lot of information in our heads.


  Did postmodernism represent a return to some earlier literary form or was it an entirely new practice?

But there has never been the kind of consensus about postmodernism that there has been about romanticism, realism, or modernism. That’s not to say that there’s no scholarly agreement, but it is to say that postmodernism has not been nearly as easy to presume in our attempts to name whatever comes next. After all, what was postmodernism? Was it an architectural aesthetic? Was it a philosophy? Was it a culmination of modernism or its antithesis? Did postmodernism represent a return to some earlier literary form or was it an entirely new practice?


The problem is the particular form of the -ism as a historical marker. An -ism, according to the OED, is “a form of doctrine, theory, or practice having, or claiming to have, a distinctive character or relation.” Platonism is a good example. Plato’s philosophy can be defined by a system of orthodoxies that, taken together, make up Platonism. When we use the -ism to periodize, however, we necessarily tend to treat what should be a story as a system. We treat history as doctrine. We treat what should be narrative as a list of principles. We conflate history and aesthetics, or maybe history and philosophy.


The question, “what is an -ism?” can thus prepare us for the question, “do we need periods at all?” In short, we need periods. Fredric Jameson says it this way: “we cannot not periodize.” The late Jacques Le Goff puts it like this: “So long as humanity is unable to predict the future with exactitude, the ability to organize a very long past will not lose its importance.” Periodization helps us account for changes in history; it helps us retain information; it makes the past more manageable. But there are many different ways to periodize. The -ism is only one approach. After all, we haven’t always used -isms this way. If literary historians must periodize, then perhaps the next question to consider is: should we continue to use the -ism to do so?


Featured image credit: The juxtaposition of old and new, especially with regards to taking styles from past periods and re-fitting them into modern art outside of their original context, is a common characteristic of postmodern art by Petri Krohn. CC ASA 3.0 via Wikimedia Commons


The post The last -ism? appeared first on OUPblog.


Published on August 31, 2016 01:30

Is undercover policing worth the risk?

The recently published ‘guidelines’ on police undercover operations prove to be just ‘business as usual’.


The guidelines consist of eighty pages in which a new ‘alphabet soup’ of abbreviations describes each of a set of roles to be fulfilled by officers of given ranks. There are procedures set out for the authorisation of undercover operations and how they are to be managed. There are rules about what is permissible or not, most notably a prohibition on undercover officers forming sexual relationships with those they are spying upon. Although apparently an innovation for undercover operations, this reliance on rules, procedures and structures as a response to scandal is far from new. It gives the appearance of being decisive, but does precious little to prevent such wayward conduct in future.


The guidelines acknowledge that undercover work is sensitive, arduous and potentially dangerous. Nowhere does the report go into any detail of why precisely it is sensitive, arduous and potentially dangerous. Its dangers are apparent: spies risk exposure and might be harmed by those upon whom they have been spying. FBI Special Agent Joe Pistone (better known by his pseudonym, ‘Donnie Brasco’) penetrated the American mafia in three cities over seven years, resulting in two hundred prosecutions, but all at the tremendous cost of now living in fear of mafia reprisal if his true identity is revealed. A similar fate might befall undercover officers who penetrate serious organised crime groups in the UK. It would certainly have been the fate meted out by terrorists in Northern Ireland who, we are told, were riddled with informers and undercover officers. Being unmasked is a serious risk faced by undercover officers, and the guidance is correct in giving it such prominence, but it is not the only danger. Neither is it the danger that prompted the publication of these guidelines. That was quite a different danger, a much more pervasive one, and yet it remains wholly unacknowledged in the guidelines.


Joe Pistone makes an interesting revelation in his autobiography of his undercover career. With the mass arrests that concluded Pistone’s undercover operation, his role as an FBI agent was soon revealed. The Mafia responded by swiftly murdering the local boss, Sonny Black, who had sponsored ‘Donnie Brasco’. Pistone tells the reader of his sense of regret, even remorse, about Sonny Black’s murder, for he had enjoyed a close amicable relationship with Black and now he had caused the man’s death. Joe Pistone’s feelings reveal an important truth about undercover operations.



Donnie_BrascoImage credit: Donnie Brasco.jpg by Federal Bureau of Investigation. Public Domain via Wikimedia Commons.

A ubiquitous feature of undercover work is that the officer must get up close and personal with those upon whom they are spying. Being so deeply involved requires that officers adhere to what sociologists call the ‘norm of reciprocity’: if someone is kindly to you, then you are expected to be kindly in return. This norm insinuates itself into the minutiae of everyday life for us all, including the life of a criminal gang or activist cell. But undercover officers will be perpetually conflicted throughout the small niceties of daily life, because they are violating an even more fundamental code of social interaction: they are deceiving others and being inauthentic. This creates what psychologists call ‘cognitive dissonance’, an unpleasant experience that demands a remedy. The easiest resolution is to cease being deceptive and genuinely reciprocate the friendship that others in the group extend to them. Over time this forms a bond, like that between ‘Donnie Brasco’ and Sonny Black. Indeed, the pressures of the ‘norm of reciprocity’ and ‘cognitive dissonance’ are so unrelenting that it is difficult to imagine how any undercover officer can withstand them.


Activist groups present a particularly seductive milieu for promoting identification. They are composed of ‘true believers’, eager to convince others of the truth that has been revealed to them. Often they feel beleaguered, a sense of threat that binds them closely together. Of course, they fear infiltration, and so any newcomer will need to establish the strength of their commitment, but an undercover agent will need to work especially hard at doing so. In order to accomplish this they must necessarily empathise with activists. They must laugh at the same jokes, regret the same setbacks, and celebrate the same ‘victories.’ Most of all, they must share the same beliefs. As they do so they might find that the activists ‘have a point’ and that stereotypes are misleading. It is easy to imagine how, under these circumstances, an undercover officer might form a bond with an attractive member of the group that matures into a sexual relationship. Certainly, rules, procedures and structures could not hope to prevent it.


All this might be a price worth paying if there is no alternative and the criminality is sufficiently serious, but this is hardly the case with small bands of political activists. I don’t minimise the threat that might emerge from such groups. They might aspire to damage coal-fired electricity generators and thereby inflict considerable harm, but more seriously they could jeopardise future investment in electricity generation by the firms upon which we all rely. But is undercover infiltration necessary to discover what the ambitions of such groups are? There is considerable open-source intelligence available about activist strategy and tactics that could inform policing at no risk at all. Activist groups are also extremely leaky: they publish promotional literature boasting of their achievements. Unlike agents of foreign states or violent criminal gangs, they reach out to attract new members. Because activists are true believers anxious to propagate the faith to the idly curious, even an officer in full uniform can discover almost everything the police need to know: just ask them!


If the pickings from undercover surveillance are meagre, the costs to the police are considerable. The police can only operate if the public grants them the legitimacy to do so. Undercover policing has resulted in self-inflicted wounds to that legitimacy since Sergeant Popay infiltrated a subversive political group in 1833, and scandal has periodically erupted ever since. Perhaps it is time for the police to learn of its dangers. Hopefully, mature reflection might encourage greater caution in using such tactics in future.


Featured image credit: “Police”, by Chris. CC by 2.0 via Flickr.


The post Is undercover policing worth the risk? appeared first on OUPblog.


Published on August 31, 2016 00:30
