Oxford University Press's Blog
January 30, 2018
Seven key skills for managing science [video]
“Management” is a word we often associate with commerce and the business community, but the act of managing is common to most human activity, including academia. While there is a myriad of tools available for learning how to manage business, there are few resources out there which discuss the skills needed to manage academic scientific research.
Scientists are trained with a very specific set of skills that allow them to conduct their research to the highest standards; however, some proficiency in management is needed as a scientist moves from being part of a research team to becoming a research leader to running a large research department.
Below, Ken Peach provides seven key skills needed to manage science effectively.
1. The managerial “we”
We are familiar with the “royal we” meaning “I” (if you are a royal). The managerial “we” is another useful construct, meaning essentially “you”, as in “What I think we should do here is…”; it offers advice, almost an instruction (especially if the “I think” is omitted) but in a way that both empowers and protects – it confers approval and promises support, while making it clear who is responsible for the delivery.
2. Graduate students
Graduate students are the lifeblood of research; they bring enthusiasm, fresh ideas, and seemingly boundless energy to a research project. They are often attracted by an inspiring lecturer who communicates the fascination of the subject and presents its intellectual challenges. Research students need to know not only that the work will be interesting (if challenging) but also that they will be part of a team, and that they will be able to make the contacts that will enable them to progress in their career once they have completed their thesis.
3. Cooperation and competition
Whatever the social Darwinians may claim, the reason Homo sapiens dominates the Earth is its ability to balance cooperation with competition. On our own, we can achieve some things, but as members of a team we can achieve much, much more. Cooperation is the means by which we increase the pool of knowledge and hence advance society. Competition is the spur between teams, but taken too far it leads to inefficiency. Learning how to balance competition with collaboration is a key research skill.
4. Team building
Most research is done in small teams – a lead academic, perhaps a couple of colleagues, a few postdocs, engineers, technicians, and graduate students. Even the large experiments (the Manhattan Project, the Human Genome Project, the Large Hadron Collider, etc.) which have thousands of members, are made of large numbers of small teams, working quasi-independently but towards a common goal. The team can only work effectively if all of its members contribute fully and if their contribution is appreciated and acknowledged.
5. The office door
The office door is a barrier between the manager and the managed. The door should be open as much as possible, so that anyone who needs it has access. We are all suspicious of what goes on “behind closed doors”, and this suspicion can be much reduced, if not dispelled, by keeping the door open. Of course, there are some discussions (for example, appraisal reviews) that require privacy, but these occasions should be kept to a minimum. The manager needs to be readily available, to be able to inspire and motivate the members of the unit, and to know them as colleagues.
6. Management by walking about
Much of management is about solving problems: some arise in the research programme, some are personal problems, and some are created by higher management. This gives the busy manager a somewhat skewed view – just one damned problem after another. However, if you walk around, have coffee or lunch with colleagues, you get to hear of the successes. Of course, you also hear some of the grumbles, but this is also useful information. You also need to walk around with your eyes open – does the place look orderly, are all of the safety procedures in place and being followed, is the “buzz” in the laboratory positive, do the research students look happy and engaged, and so on; this information is rarely available in the “annual reports” that most teams are obliged to produce.
7. Research integrity and research ethics
Most academic research these days is publicly funded through competitively-awarded grants or contracts. While there is great pressure to produce results, it remains essential that research is performed to the highest ethical and scientific standards. Scientific fraud not only undermines scientific integrity, but also destroys the careers of the perpetrators when others try to reproduce the results and fail (as they surely will). Minor cases of scientific misconduct may go undetected at times, but it is essential that a strict code of scientific ethics is adopted which includes zero tolerance of such practices. Science without honesty is not science.
Featured image credit: Science and technology by 4832970. CC0 public domain via Pixabay.
The post Seven key skills for managing science [video] appeared first on OUPblog.

Feminist themes in TV crime drama
The fictional world has always featured women who solve crimes, from Nancy Drew to Veronica Mars. Although male crime-solvers have outnumbered women on TV, women detectives have become increasingly commonplace. This trend includes the policewomen depicted on CSI and Law & Order: SVU as well as private detectives like Veronica Mars and Miss Phryne Fisher, who are the chief protagonists of their series.
There are a number of dimensions to the trend toward women investigators on TV crime programs, but we focus on four features that are especially notable: (1) depiction of women’s struggles to make it as detectives; (2) forensics and technology as important tools to women crime-solvers; (3) women crime-solvers with troubled pasts; and (4) women crime-solvers dealing with issues of race, ethnicity, immigration, social class, and sexual orientation. We also suggest that contemporary issues of social justice are ever-present on these programs.
Their Struggles
In many ways, the struggles of women detectives in TV crime dramas parallel the experiences of women entering and advancing in real-world policing. They confront sexist stereotypes and harassment. On TV’s The Closer, despite being a high-ranking and smart detective, Brenda Leigh Johnson (Kyra Sedgwick) faces unsupportive male subordinates who openly question her abilities. In Prime Suspect, Jane Tennison (Helen Mirren) experiences organizational warfare that ranges from insubordination to sabotage. Both protagonists exhibit intelligence and a laser-like focus that resolves cases and ultimately garners respect from their colleagues. Nowadays, these struggles are depicted less often, but on Tennison 1973, a prequel to the award-winning series, young police constable Jane Tennison (Stefanie Martini) is relegated to fetching tea for her male superiors. The prequel is a good reminder not only of “the way it was” but also, for many women, of the way it still is.
Forensics and Technology
Perseverance and intelligence are traits that help women crime-solvers survive and thrive, but another weapon in their arsenal is their use of science and technology. Beginning in the 1990s in Prime Suspect, Jane Tennison ushered in crime drama that demonstrated the importance of forensics and other technology, such as surveillance video. This opened the floodgates for later programs like CSI and its progeny. CSI women excel in their work: conducting lab experiments, accessing national crime databases, or skillfully gathering evidence at a crime scene. Even in The Pinkertons, a historical detective series set in the 1870s, Kate Warne (Martha MacIsaac) conducts scientific experiments—to the amazement of her male colleague—that yield important information, e.g., the poison that killed the victim. Interestingly, Kate Warne was a real Pinkerton operative and one of the first women detectives in the US.
The women in these programs exhibit rational thought, an approach that often openly challenges the “hunches” that characterized male TV detectives for years, and also transcends the stereotype of “women’s intuition.” These women are a virtual ad for the importance of STEM: science, technology, engineering, and mathematics. Indeed, universities and communities increasingly work to attract women to STEM fields.

Emotional Baggage
Protagonists’ backstories have become increasingly popular in TV crime shows, suggesting perhaps that this is now a crime genre element. Thus, many women protagonists are supplied with backstories loaded with problematic emotional baggage. For example, we learn in the first episode that crime-solver Veronica Mars (Kristen Bell) was drugged and raped at a party. This angers her and drives her to help other teens who are victims of crime or bullying. In Miss Fisher’s Murder Mysteries, Phryne Fisher (Essie Davis) is a “modern woman” (a 1920s flapper) who is in charge of her life. However, we learn that during WWI she drove an ambulance and experienced shell shock, a condition that reappears in high-stress situations, like confronting a killer. Overcoming this baggage is another accomplishment for these women crime-solvers.
Social Identities
Although there were many women in the crime genre, until the past three decades most detectives and police were white, heterosexual men, and most plots centered the perspectives of such protagonists. Today’s women-centered shows, however, feature stories and characters in which issues of race, ethnicity, immigration, and sexual orientation are part of the story, and white, heterosexual, masculine perspectives are challenged. Veronica Mars is a white, heterosexual teen, but one of her best friends is an African American student whose experiences and perspectives are centered in some episodes. Other peers and clients include Latinos/as and students who identify as LGBTQ+ and who are harassed because of their social class, ethnicity, and sexual orientation. Phryne Fisher, a wealthy white, heterosexual woman, solves cases in which we learn of the historical experiences of laborers, immigrants, and the poor. Her best friend, a doctor, is a lesbian, and we learn of the discrimination she faces. Jane Tennison, a white, heterosexual woman, is tireless in her pursuit of justice for victims who are immigrants, homosexuals, or transgender people. Because of her rank, Tennison forces subordinates to act in a similar manner notwithstanding their own views. However, we also see the flaws of women protagonists and the justice system: both are sometimes insensitive to the issues their clients face.
Moreover, the stars of these series still tend to be white and heterosexual even if female. The 1974-1975 TV series Get Christie Love offered a brief but rare exception to this pattern: an African American actress, Teresa Graves, starred as a fearless undercover police detective. Two recent hit TV series feature African American women as lead protagonists. The first is Scandal, starring Kerry Washington as a highly paid crisis management consultant in Washington, D.C. The second, How to Get Away with Murder, starring Viola Davis, presents a brilliant criminal law professor and students who become intertwined in a murder plot. These series feature consultants and lawyers rather than more traditional crime-sleuth protagonists such as police and private detectives.
Conclusion
TV crime stories that feature women crime-solvers are characterized by their struggles to survive and thrive as detectives, their use of forensics and technology, emotionally loaded backstories, and a greater awareness of the multi-faceted identities that comprise all phases of modern life, including crime and victimization. These presentations not only expand the scope of who is included in the sense of justice that crime-solvers produce; they also reflect contemporary criminological thinking about crime and its victims and expose contradictions in the justice system and those who work within and around it. In this, the crime genre both reflects and sometimes even leads our thinking on issues of justice.
Featured image: Prime Suspect, with permission of Rex Images.
The post Feminist themes in TV crime drama appeared first on OUPblog.

Interpreting a new work by John Rutter
I first met John Rutter in London during the Menuhin Competition 2016. It was one of my most memorable moments of that year, and it wasn’t just any old year, as it marked the centenary of the great Yehudi Menuhin’s birth. The Competition celebrated this centenary year by inviting past winners, myself included, to take part in special events and concerts. I was honoured to be invited to give the premiere of a new piece by John Rutter, Visions. Little did I know that this unique experience, and the relationship which developed from it, would have such a lasting impact on my musical life.
When I received the score, I was struck by the unadulterated lyricism and transparency of the violin part and the intertwining dialogue between my line and the choir’s. The music flowed very naturally under my fingers and I was moved to learn that John had moulded the piece to suit my specific playing style.
Working with John was a great pleasure from the very beginning. His warm and engaging character put me at ease immediately. I played his piece to him, and he said very little; I took this to be a good sign. I was delighted to discover that my initial approach was in line with his intentions, and that he was satisfied with my reflection of his vision.
The premiere was a beautiful occasion, greatly enhanced by the atmosphere and wonderful acoustic of the Temple Church, London.
I felt tremendously honoured when John then invited me to make the first recording of the piece with him. To bring a piece to life is a process which defines me, and I always strive to attune myself with the composer’s true intentions. It made me deeply happy to have been given the chance to record Visions, conducted by the composer.
I look forward to performing this piece many more times in the future!
Featured image credit: Collegium Records CD – John Rutter’s Visions, used by permission of Collegium Records.
The post Interpreting a new work by John Rutter appeared first on OUPblog.

January 29, 2018
The Death Cafe: A medium latte and a chat about dying
In early 2011, Jon Underwood decided to develop a series of projects about death – one of which was to focus on talking about death. Jon read about the work of Bernard Crettaz, the pioneer of Cafés Mortels, which were themselves inspired by the cafés and coffeehouses of the European Enlightenment. Motivated by Bernard’s work, Jon immediately decided to use a similar model for his own project, and Death Cafe was born.
The first Death Cafe in the United Kingdom was offered in Jon’s house in Hackney, East London in September 2011. It was facilitated by psychotherapist Susan Barsky Reid: Jon’s mother. Death Cafes have since spread across Europe, North America, and Australasia, and as of today, 5,583 Death Cafes have taken place in 52 countries. It is clear that there are people who are not only keen to talk about death, but are passionate enough to organise their own Death Cafe.
So what happens at a Death Cafe?
Well, to quote the organisation itself, ‘at a Death Cafe people, often strangers, gather to eat cake, drink tea, and discuss death’. It’s as simple as that. The Death Cafe model does not have a set itinerary; meetings are group-directed conversations about death with no agenda, objectives, or themes. A Death Cafe is a discussion group rather than a grief support or counselling session, allowing people from all walks of life to share their ideas about death and dying.
To begin, the Death Cafe facilitator asks the group to introduce themselves and say why they have decided to come. This gives people an opportunity to introduce topics surrounding death and dying for later in the conversation, although no one is pressured to speak if they do not wish to. If the facilitator is taking part in the group, they take a turn to speak too. This part of the Death Cafe can take some time – often up to an hour in groups of ten or more! Next, the facilitator asks if anything came up for the group whilst people were speaking – thoughts, questions, or reflections. This leads on to a more general discussion, which often tends to last for the rest of the evening. Directed questions are kept to a minimum in order for the sessions to be steered by participants as much as possible.
The motto of the Death Cafe is ‘to increase awareness of death with a view to helping people make the most of their (finite) lives’. Whilst the notion of having a chat about death may seem incredibly morbid to some, death is as much a part of life as a baby being born or a couple getting married. The existence of the Death Cafe, and the familiarity with death that it provides, allows those who experience it to fulfil the organisation’s motto entirely. The movement is a strong one (people write blogs and session write-ups to spread the word and keep people informed), a widespread one (to repeat: 52 countries!), and one for which there was clearly a need. The success of the Death Cafe demonstrates how scarce open discussions of death and dying had been, and Jon Underwood played a crucial role in changing that.
Sadly, Jon Underwood died suddenly on 27 June 2017 after collapsing on 25 June 2017 from acute promyelocytic leukaemia. His wife, Donna Molloy, wrote in the announcement of his death that
‘Through his life he helped tens of thousands of people all over the world to regularly come together, drink tea, eat delicious cake, and take time out to remember what really matters. I don’t think it’s an overstatement to say he has single-handedly started to change cultures around death and end of life awareness, not just in the UK, but across the globe’.
Death Cafes are a big step towards eradicating the taboos surrounding discussion of death and dying, and the efforts of individuals such as Jon Underwood are responsible for that. The Death Cafe movement remains as strong as ever and, as per Jon’s request, continues to be run by Jon’s sister, Jools Barsky, and his mother, Susan Barsky Reid.
See below for a selection of images kindly shared with us from Death Cafes all over the world.

Members (and cakes!) of Death Cafe Lagos, Nigeria.
Shared with permission from Hope Ogbologugo.

Members of Death Cafe New York, NY, USA.
Shared with permission from Nancy Gershwin.

Maria Johnson and Lizzy Miles from Death Cafe Columbus, OH, USA.
Shared with permission from Lizzy Miles.

A cake baked by members of the Death Cafe Atlanta, GA, USA.
Shared with permission from Lisa Oliver.

A conversation between attendees of the Death Cafe Portland, OR, USA.
Shared with permission from Holly Pruett.
Featured image credit: ‘Coffee Art Halloween Coffee Takeaway’ by Mimzy. CC0 Creative Commons via Pixabay.
The post The Death Cafe: A medium latte and a chat about dying appeared first on OUPblog.

January 27, 2018
The hippie trail and the question of nostalgia
The term ‘hippie’ was coined around 1965; the term ‘hippie trail’ began to circulate in the late 1960s: it referred principally to the long route from London (or sometimes Amsterdam) to Katmandu. This was not an actual path, although disparate travellers often, by coincidence, followed a route that led through the same cafés, campsites, border-crossings, and cultural sites. The travellers came from different Western European countries and the United States. Interviews with over 30 hippie trail-ers found that they had all travelled out east or south in the 1960s and early 1970s, to places like Morocco, Afghanistan, India, and Nepal. Some hitched, some relied on public trains and buses, some went in their own cars, and some travelled with one of the new coach companies that sprang up in the late 1960s. Their journeys took several weeks, even months. For most of them, these were journeys into the unknown: usually, they spoke no foreign languages and had no great experience of travelling.
Some points these interviewees made were surprising. For example, we found that all of them, without exception, were emphatically positive about their experiences ‘on the road’. They certainly acknowledged that their 20-something selves had made some mistakes, and frequently smiled ruefully about incidents and accidents. But their conclusions were always the same: the trail had been a formative experience for them, and even that it had been the formative experience of their lives. Why?

At first sight, this seems like another example of the easy, romanticized nostalgia for the 1960s that produces Beatles cover bands and commemorations of Woodstock. But after talking to several interviewees, we felt that something else was going on here. Let’s note straight away that there’s very little commemoration of the hippie trail. Public memorials of the experience tend to be dismissive, as can be seen in the only commercially successful portrayal of the trail: the film Hideous Kinky, starring Kate Winslet. Aside from this film, coverage and discussion of the trail is limited to obscure, minority-orientated websites and publications. There has been a steady trickle of self-published works on the trail: we’ve attempted to contact the authors, and in most cases each thought they were the first to write about the trail. In other words, there’s little sense of a shared nostalgia for the trail, little sense of an imagined community of ex-travellers, beyond that provided by a Facebook page or a specialist website.
Instead, what initially appears to be a romantic nostalgia could well be the expression of a deeper and more challenging feeling. The hippie trail-ers remember their weeks or months ‘on the road’ as moments of supreme freedom, when they were liberated from the constraints of jobs, mortgages and social convention. The tracks and trails formed a liberated zone, in which our interviewees felt that they could truly discover themselves. And with this experience, a sense of optimism was born. Here, it must be stressed that optimism wasn’t necessarily so easy in the 1960s. There was much to fear: the specter of the war in Vietnam was simply the most blatant image of a nightmare future for the human race. There were linked concerns about an on-coming environmental disaster, the slow, painful acknowledgement of the power of racial prejudice in Western society, and the first intimations of the problem of sexual inequality. Travelling on the trail was a way of stepping away from these specters, and imagining an optimistic future.
This is probably why the trail is remembered with such fondness. But this feeling is tinged with a deep sadness. Most hippie trail-ers took it for granted that the future would be better: it had to be, as the present looked so dark; it demanded change. Their apparent ‘nostalgia’ is therefore more akin to a sense of frustration, that their youthful optimism has been blocked, and their expectations thwarted. The interviewees readily recognized that the world has become a more dangerous place. They expressed sympathy for the problems that their children (and grandchildren) face.
Rather than being a simple expression of romanticism, the nostalgia for the trail is instead an expression of a deep desire for social change.
Featured image credit: Somewhere between Iran and Afghanistan, November 1975. Photograph © Dee Atlas, reproduced by permission.
The post The hippie trail and the question of nostalgia appeared first on OUPblog.

5 great unsolved philosophical questions
The discipline of philosophy covers the study of everything, from the study of knowledge, art, language, and the very nature of existence, to moral, ethical, and political dilemmas. Philosophy stems from the Greek word philosophia (literally translating as “love of wisdom”), and there isn’t much that philosophers haven’t disputed over the years. Despite this, there are many key debates and great philosophical mysteries that remain unsolved—and quite possibly always will. From Descartes’s discussions of knowledge and personhood, to Aristotle’s analysis of the nature of life and death, we’ve listed 5 of the greatest philosophical problems still contested today. What would make your list?
Do we have free will?
The problem of free will arises when humans reach a stage of self-consciousness about how profoundly the world may influence their behavior, in ways of which they are unaware. The advent of doctrines of “determinism” or “necessity” in the history of ideas is an indication that this higher stage of awareness has been reached. Determinist or necessitarian threats to free will have taken many historical forms—fatalist, theological, physical or scientific, psychological, social, and logical—but there is a core notion running through all forms of determinism that accounts for their importance and longevity. Any event is determined, according to this core notion, if there are conditions (decrees of fate, the foreordaining acts of God, laws of nature) whose occurrence guarantees that the event occurs: it must be the case that if these determining conditions jointly obtain, the determined event occurs. Although this has been greatly debated, there is no philosophical consensus that lays these concerns to rest.
Can we know anything at all?
Formulating and responding to the challenge of skepticism (the view that we can’t know anything) is often taken to be the central problem of epistemology (the study of knowledge). The most prominent starting points for discussions of skepticism are the works of René Descartes and David Hume, although a more general skeptical argument is often seen in Sextus Empiricus’ Outlines of Pyrrhonism (arguing that we should withhold judgment on all matters of fact, because no matter how we reason for a judgment, there is an opposing judgment that we can reason for in a parallel manner). “Know” is the sixth most common verb in English, and although it is often used in sentences such as “I know how to ride a bike” and “I know your friend Jane,” a large chunk of its use is taken up by claims of knowing something to be the case. One worry about skepticism is that, if true, it would require a dramatic revision in the way we think and talk.

What is the relation between ‘my’ mind and body?
Many philosophers have held a dualistic view of the relation between mind and body. There have been those (like Descartes) who ascribe mental attributes to spiritual substances which are supposed to be logically independent of anything physical, but inhabit particular bodies. Others, like Thomas Hobbes, have admitted only a duality of properties, ascribing both mental and physical attributes to human bodies. Others have presented an “ultimate category of persons,” differentiating them from physical objects just on the ground that they possess mental as well as physical attributes. If dualism is the best answer, most believe that the most defensible form would be that in which we admit only a duality of properties. Despite this, the problem of showing how these combine to characterize one and the same subject has not yet been adequately solved.
What is death?
It seems reasonable to say both that a creature dies when its life ceases, and that it dies when it ceases to exist. However, to understand death, we must first grasp how it is related to life and to the persistence of living beings. Here the philosophy of death intersects with the theory of personal identity, but philosophers haven’t yet reached agreement about what it is to be alive. According to Aristotle, something is alive if it has any of the typical capacities of living things: nutrition, appetite or desire, growth, reproduction, perception, motion, and thought. Nevertheless, non-living devices could hypothetically do many of these things. As for our identity over time, some philosophers have suggested that our persistence conditions are in part determined by our own attitudes, making it (at least theoretically) possible to survive death.
What would “global justice” look like?
In Shakespeare’s Merchant of Venice, Shylock demands a pound of his delinquent debtor’s flesh in the name of justice. Until the clever Portia finds a device for voiding the contract, the presumption is that it must be granted. Conceptually, demands of justice are the hardest to outweigh or suspend. But to this day, there is no universally accepted theory of justice. Increasing political and economic interconnectedness (especially with regard to current humanitarian crises) draws much philosophical attention to this notion, asking whether claims of justice arise only among those who share membership in a state. Alternatively, do they apply among all human beings simply because they are human? Inquiries into “global justice” differ from those into “international justice” precisely by not limiting inquiry to what states should do. They also question the very moral acceptability of states and explore alternative options.
Featured image credit: Le Penseur (The Thinker) by Auguste Rodin, taken in front of the Legion of Honor museum in San Francisco, California, 2012. Drflet, CC BY-SA-3.0 via Wikimedia Commons.
The post 5 great unsolved philosophical questions appeared first on OUPblog.

January 26, 2018
Should Politics be taught within secondary school?
Despite a higher youth turnout than originally anticipated, it has been estimated that around one third of millennials did not vote in the EU referendum. The outcome was disappointing for the majority of millennials: statistically, if everyone within the 18-24 age category had participated in the referendum (and voted remain), the 3.78% required to equalise the leave vote would have been met, and the UK would likely have remained a member of the European Union. These statistics reveal that a large number of young people are not realising their political power. So why did so many choose not to participate?
The truth is that many millennials have said that they did not understand the concept of the European Union and what it meant for the UK to leave it, as evidenced by the tumult of bizarre notions which flooded social media. These misconceptions demonstrate that whilst our democracy gives people the right to vote, our education system is not reflective of this right. That is not to say that millennials were the only generation lacking an understanding of the EU; Google searches about the EU peaked following the referendum. But could a better understanding of the European Union, and of political affairs in general, be achieved if Politics were taught more widely in schools? Would more young people be willing to engage with politics?

Most millennials do not opt to take Politics at A level, having no prior experience of the subject at GCSE level. Fewer than 13,000 students opted to take Politics at A level in 2013, a low number compared to the uptake witnessed in other subjects such as History (54,000) and Geography (36,000), which are also subject options at GCSE level. Fewer than a dozen students in my year opted to study Politics; for the rest of us (myself included), our political knowledge remained stunted. When I turned 18, I knew little of contemporary politics, and my knowledge of Benjamin Disraeli surpassed that of Gordon Brown. I was born in a constituency which had been represented by Labour since the seventies, and I was naturally compelled to support the party. But I couldn’t name or differentiate between parties and their policies, and I was not alone. Would teaching Politics in schools as a compulsory GCSE foster an interest in, and a better understanding of, political affairs which young people would carry into their twenties and later life? I believe so.
The counter-argument to teaching Politics in schools is that the Politics curriculum would be skewed in favour of the government of the day. Indoctrination would also be a risk, as teachers could use their classrooms to promote bias: for instance, schools in Labour or Conservative majority areas could encourage their students to support the locally dominant party. The problem with this counter-argument is that this is already occurring. In addition to thirteen years of a Labour government during millennials’ formative years at school, many schools are already being criticised and accused of left-wing bias. Society encourages us to be tolerant, but how can we achieve political tolerance if teachers are not presenting the full spectrum of political ideologies to their students? It could be argued that these schools are inadvertently raising a radically left-wing generation, who may be left-wing without choice.
This lack of choice is being exacerbated by the role of social media, which has come to play a huge part in recent elections and referenda. Millennials have developed, and continue to develop, their political understanding through social media such as Facebook and Twitter. The issue with this is that millennials are only seeing a limited snapshot of political affairs, such as ‘trending’ news. Trends do not necessarily convey truth, and in our post-truth world saturated with fake news, where anyone can publish online, how are we to teach young people to analyse and question multiple sources of political information before they form an opinion? In other subjects, such as history, literature, and science, this questioning begins in the classroom. Why not with Politics?

Besides this, political knowledge is subject to bias on social media, which is currently being manipulated for political purposes in 18 countries and systematically used to spread hatred against individuals, groups, and collective ideas. As millennials are more likely to discover political news on social media than on any other outlet, and are more likely to engage with similar-minded individuals within their demographic, they are highly susceptible to this bias, prime targets for political manipulation, and incorporated into a network which promotes shared thought whilst simultaneously attacking ‘the other’. Social media has created a digital echo chamber – a shared reality – where individuals feel secure in the knowledge that their opinions are validated by others. The issue here is that individuals are becoming increasingly narrow-minded: a recent study published in the Journal of Experimental Social Psychology revealed that 63% of participants chose to receive $7 for reading an article which agreed with them, rather than $10 for reading an article which challenged their views.
Adding Politics to the GCSE curriculum could inform students of the full political spectrum and enable them to form their political opinions before they consult social media. If funding and resources make such an addition difficult, could Politics instead be incorporated into General Studies at A level? General Studies prepared me for little in life; it did not teach me how to pay the bills or how to budget and save money effectively. A subject still ignored by most universities, its relevance might increase if it taught students about their right to vote, the parties they can vote for, and why they should be interested in the political future of their country.
The aftermath of the Brexit referendum has reverberated with millennials, who are actively engaging with politics more than ever before, as evidenced by the high youth vote in the 2017 UK general election. However, if the present and coming generations are to retain this engagement, they need more than the history of Victorian Prime Ministers to motivate them. Teaching Politics in schools could inspire the coming generations to vote in greater numbers and realise their political potential sooner. I educated myself about political affairs when I left school; however, I would have understood the relevance and importance of politics in my own life much sooner had I been introduced to its fundamental aspects at school. Teaching Politics in school may not eradicate the bias prevalent in political and social media, but it would help to eradicate the bias present in secondary schools, present the full political spectrum to students, and give them the opportunity to form their own political reasoning.
Featured image credit: Classroom by Wokandapix. Public domain via Pixabay .
The post Should Politics be taught within secondary school? appeared first on OUPblog.

January 25, 2018
Animal of the Month: 13 nutty squirrel species [slideshow]
To help you appreciate squirrels in celebration of Squirrel Appreciation Day earlier this month, take a gander at a diverse selection of members of the Sciuridae family in the slideshow below. Most of these critters belong to the Sciurus genus, which is from the ancient Greek “skia”, meaning shadow or shade, and “oura”, meaning tail. Despite the variation among these different members of the same family, the evolutionary record shows that squirrels have actually changed very little over millions of years. If it ain’t broke…

Sciurus anomalus
Commonly called the Caucasian squirrel. Found in forests of the Middle East and extreme southwestern Asia, its call resembles that of the green woodpecker – whether a coincidence, a way of outwitting predators, or a penchant for mimicry, we just couldn’t say.
Image: Sciurus anomalus – Jeita Grotto by Peripitus. CC BY-SA 3.0 via Wikimedia Commons.

Spermophilus tridecemlineatus
Unlucky for some, this is the thirteen-lined ground squirrel. While better insulated than other ground squirrels, drops in temperature act as an arousal stimulus during periods of hibernation – though if it gets too cold (approaching zero degrees Celsius) then this poor little chum might not wake up at all.
Image: 13 lined ground squirrel by Laetitia C. CC BY-SA 3.0 via Wikimedia Commons.

Sciurus niger
Male and female fox squirrels can’t be distinguished by size or colour, and fossils from the Miocene epoch (approx. 23-5 million years ago) are indistinguishable from their modern descendants – they’ve clearly found a winning formula.
Image: Fox squirrel by Arthur Mouratidis. CC BY 2.0 via Wikimedia Commons.

Sciurus variegatoides
The epithet ‘variegatoides’ probably refers to the variable coloration of this species. Send in photos of any blue ones you might come across!
Image: Variegated Squirrel at Montezuma, Nicoya Peninsula, Costa Rica by Hans Hillewaert. CC BY-SA 3.0 via Wikimedia Commons.

Sciurus aureogaster
Commonly called the red-bellied squirrel (but is quite variable in colour as this photo might prove), this squirrel enjoys the usual diet of acorns and pine but also occasionally treats itself to corn and cacao – it can’t resist that chocolatey goodness.
Image: Mexican gray squirrel (Sciurus aureogaster) on branch by Gerardo Noriega. CC BY-SA 3.0 via Wikimedia Commons.

Tamiasciurus hudsonicus
The pine squirrel is easily recognized by its small size, reddish back, white belly, and slightly tufted ears, and is native to the northern United States and Canada.
Image: American red squirrel eating a nut by Connormah. CC BY-SA 3.0 via Wikimedia Commons.

Sciurus carolinensis
You can tell the age of the eastern gray squirrel by its tail molt pattern and pigmentation, the pigmentation of its body, and the colouration of its genitals.
Image: An Eastern Grey Squirrel (Sciurus carolinensis) in St James’s Park, London, England by Diliff. CC BY-SA 3.0 via Wikimedia Commons.

Tamiasciurus douglasii
For most species of tree squirrels, body size tends to increase with latitude. However, this little fella is one of the smallest and lives at the highest latitudes. Their small size gives them the advantage of being more agile when scurrying around small conifer branches.
Image: Douglas Squirrel on a Pacific Silver Fir (Abies amabilis) branch by Walter Siegmund. CC BY-SA 3.0 via Wikimedia Commons.

Sciurus aberti
Commonly known as Abert’s squirrel or the tassel-eared squirrel, this species lives in the ponderosa pine forests of the southwestern United States, relying on the pines for much of its food. Its most distinctive feature is its long ear tufts, which are especially prominent in winter.

Marmota caligata
The marmot with many names, including groundhog, whistling pig, whistler of the rocks, rockchuck, mountain marmot, and, most suitable for fantasy fiction, the watcher of the crags. We’re not sure which ones they prefer to answer to.
Image: Marmota caligata (Hoary Marmot) by Steven Pavlov. CC BY-SA 3.0 via Wikimedia Commons.

Sciurus griseus
The western gray squirrel is the largest squirrel within its range. It has dichromatic vision, meaning its eyes have only two types of colour receptor, so it perceives a more limited range of colours than we do.
Image: A western gray squirrel (Sciurus griseus) on a branch, looking something above it by Aaron Jacobs. CC BY-SA 2.0 via Wikimedia Commons.

Cynomys parvidens
The Utah prairie dog has been on and off the endangered species list a few times since 1968, after massive culls significantly reduced its population. Its status was last assessed in 2008, and it is currently classified as endangered.
Image: Utah Prairie Dog (Cynomys parvidens) – Bryce Canyon National Park by Chin tin tin. CC BY 2.0 via Wikimedia Commons.

Tamias minimus
Also known as the least chipmunk, this smallest – and perhaps cutest – member of the Sciuridae family is commonly found in North America among sagebrush and coniferous forest habitats.
Image: Tamias minimus by Phil Armitage. Public Domain via Wikimedia Commons.
Featured image credit: “Ninja squirrel” by Saori Oya. CC0 public domain via Unsplash.
The post Animal of the Month: 13 nutty squirrel species [slideshow] appeared first on OUPblog.

January 24, 2018
Mad as a Hatter
About every well-known English idiom one can nowadays find so much interesting material on the Internet that almost nothing is left for an ambitious etymologist to add. Mad as a hatter has been discussed especially often, and my detailed database contains nearly nothing new. Yet I decided to join the ranks of the researchers of woeful countenance because of my slightly untraditional approach to the problem.
This is what has been said about the phrase. Since English speakers are apt to drop their aitches, hatter may stand for atter. Engl. adder “viper” is sometimes cited in this scenario, though the change from dd to tt remains unexplained. The merger of t with d between vowels is typical of American English, in which sweetish and Swedish, Plato and playdough, and the like become homonyms pairwise. There have been attempts to trace our idiom to America, but, as far as I can judge, unsuccessful. Although angry vipers are known to be extremely aggressive, we are interested in the consonants rather than the snake’s temper. The German cognate of adder is Natter, but mad as a hatter is English, not German. English has the noun attercop “spider.” Old English āt(t)or meant “poison.” If vipers are famous for their irascibility, spiders do not play such a visible role in the north as to inspire our simile.

Other linguistic games
The verb to hatter “bruise with blows; harass, etc.” exists. Perhaps this verb was substantivized (that is, turned into a noun), and an angry hatter came into being. The origin of the verb is unknown, but it means approximately “to batter” and looks like its next of kin (possibly a sound-imitative word). Also, dialectal gnattery “irritable,” related to gnatter “to nibble; grumble; talk foolishly,” looks mildly promising as a clue. What if people used to say mad as a gnatter and changed the rare gnatter to hatter? Yes, what if? A citation has been found for as mad drunk as a hatter, so that, not improbably, the current idiom is an abridgement of a more sensible one (see the end of this post!). Finally, as mad as… need not end in a hatter; among several other candidates, the best-known one is a March hare. Of note is the fact that mad, in addition to “crazy,” can mean “angry; wildly excited.” However, the problem of the ill-tempered hatter remains. Perhaps the phrase is a borrowing? I am leaving out of consideration Charles Mackay, who, not unexpectedly, derived the phrase (at hatter) from Irish Gaelic. His etymology is fanciful. The French say: “Il raisonne comme une huître” (“He reasons like an oyster”). Couldn’t the French oyster, while crossing the Channel, turn into a mad hatter? Even stranger things happen at sea.
If I am not mistaken, all those hypotheses look rather unconvincing. And here I’ll say why I announced at the beginning that I have my own point of view. The main problem with the idiom is not its inherent silliness but its late attestation. No written records of the phrase mad as a hatter predate the 1820s. Even if it was current some time earlier, it certainly did not exist in Old or Middle English, so that tracing hatter to some ancient word is an unrealistic procedure. Rather probably, mad as a hatter appeared in English approximately when it was first recorded, and was slang. If the idiom was indeed slang, it may be useful to see whether real mad hatters are known. Indeed, some candidates have turned up.
Real characters
(1) “William Collins, the poet, was the son of a hatter… at Chichester, Sussex. The poet was subject to fits of melancholy madness, and was for some time confined in a lunatic establishment at Chelsea. The other lunatics, hearing that his father was a hatter, got up the saying, ‘Mad as a hatter’.” Alas for the chronology! Collins (1721-1759) died before the idiom became known. (2) Around 1830, a Mr. Harris was elected at the head of the poll for Southwark. He was a hatter in the Borough, and proved to be out of his mind. According to another version, the “day on which he was ‘chaired’ in his own carriage was exceedingly hot, and his head during the whole time of the procession being uncovered by removing his hat, he was attacked by brain fever.” He died soon after that, but earlier one of Mr. Harris’s canvassers addressed the crowd so: “You’ve a shocking bad hat on. I’ll send you a new one.” During election campaigns, changing hats, with reference to changing one’s views, was a well-known procedure. “A considerable number of hats consequently changed owners, and the saying having been put into the mouths of so many persons, it was taken up by the gamins [street urchins], and was in vogue for some time.” This is entertaining but probably useless stuff for discovering the origin of the idiom. I wonder: How did it happen that as early as 1868 no one knew the true story, and people kept offering all kinds of conjectures?

Hatters as a profession
(1) Professional shepherds in Australia lead a lonely life and are considered “to be to a certain degree mad.” “…shepherds and hut-keepers… are very fond, wherever they can get the materials, of making cabbage-tree hats. The industry distracts their thoughts, and the hats are sold at a good price.” Conclusion: the idiom is an import from Australia. Unfortunately for this derivation, the idiom did not turn up in Australia before it was recorded in England. (2) “A lead miner in Derbyshire or a gold miner in Australia who works alone… is called a hatter.” He is said to work under his own hat and “is looked upon as eccentric; and it seems to be presumed that the solitary worker does not work in partnership with other miners because he is a little mad.” Once again we can see that the roots of the idiom are supposed to be hidden in some local custom. The migration of a phrase or a word from one part of the country to another, and its becoming slang in the capital, is not improbable, for the very foreignness of the item may contribute to its becoming part of the “street urchins’” language. But my question remains: Why did the origin of the idiom mad as a hatter become the object of guesswork so soon after its emergence? After all, we’re not dealing here with some exotic item like kybosh.

I’ll start a fresh paragraph for the last conjecture I know because it is the present favorite of our dictionaries. The hypothesis was offered in 1900, and its author (Thomas J. Jeakes) repeated it twice. I’ll reproduce his second note: “…the hatter’s madness was dipsomania [alcoholism], induced by working with hot irons in a heated atmosphere and in a standing position. The tailor works under similar conditions, but seated; his condition is therefore less aggravated, and he accordingly gets credited only with pusillanimity and lubricity [that is, lechery, wantonness?].” Poor mean-spirited, promiscuous tailors! See my post on whipping the cat for July 22, 2015, and for consolation the post on nine tailors for April 6, 2016.

The rationale behind the hypotheses in the last section of the present post is the same: mad hatters abounded; hence the idiom. I doubt that we are on the right track. There must have been a well-known incident (like the one recounted about Mr. Harris), but no promising story has come down to us. Nor did the “madness” of dipsomaniacal hatters become the talk of the town around 1829. Thus, I’d rather say: “Origin unknown.” As a final flourish, I would like to mention the British writer Joseph Archibald Cronin, who at one time was very popular. I have no idea whether he still is. One of his novels (not his best) is titled Hatter’s Castle. The cruel hatter in that story is not mad but certainly crazy. I don’t think Cronin chose the protagonist’s profession by chance, and I am sure other people have offered the same guess. As to Alice’s mad hatter, I decided to leave him in peace: everybody else discusses him and states that the idiom emerged in the language before the publication of the famous book. Identifying the model for that character is also an old chestnut. Consult the Internet.
Featured image: A possible prototype of the mad hatter. Image credit: “Rattlesnake Toxic Snake Dangerous Terrarium Viper” by Foto-Rabe. CC0 via Pixabay.
The post Mad as a Hatter appeared first on OUPblog.

The Origins of the Reformation Bible
One of the side effects of the Protestant Reformation was intense scrutiny of the biblical canon and its contents. Martin Luther did not broach the issue in his 95 Theses, but not long after he drove that fateful nail into the door of the Wittenberg chapel, it became clear that the exact contents of the biblical canon would need to be addressed. Luther increasingly claimed that Christian doctrine should rest on biblical authority, a proposition made somewhat difficult if there is disagreement on which books can confer “biblical authority.” (Consider, e.g., the role of 2 Maccabees at the Leipzig Debate.) There was disagreement—and there had been disagreement for a millennium or more beforehand. Almost always, the sixteenth-century disputants pointed back to Christian authors in the fourth century or thereabouts for authoritative statements on the content of the Bible.
But fourth-century Christians themselves disagreed on precisely which books constituted God’s authentic revelation. Especially with regard to the books most in dispute in the sixteenth century—the so-called deuterocanonical books of the Old Testament—the fourth century could provide no assured guide because even those ancient luminaries, St. Jerome and St. Augustine, disagreed particularly on the status of these books.
The deuterocanonical books—as they would come to be called by Sixtus of Siena in 1566—are essentially those portions of Scripture that form part of the Roman Catholic Bible, but not the Protestant Bible. (Sixtus had a slightly wider definition of the term “deuterocanonical.”) In this sense, there are seven deuterocanonical books: Tobit, Judith, Wisdom of Solomon, Wisdom of Sirach (Ecclesiasticus), Baruch, and 1–2 Maccabees. There are also two books with deuterocanonical portions: Daniel and Esther.
In the sixteenth century, it was not clear whether these books belonged in the Bible or not; different theologians and church authorities took different positions on the matter, and there had never been a council that settled the issue for the entire church. While these books appeared in biblical manuscripts and printed Bibles, it was not uncommon in the Latin Church to question their status. For instance, one of the great publishing ventures of the early part of the century was the Complutensian Polyglot, a Bible printed in multiple languages in the Spanish town of Alcalá (Latin name: Complutum), produced under the oversight of the Roman Catholic Cardinal Jiménez de Cisneros and granted permission for publication by Pope Leo X in 1520. While this Bible includes the deuterocanonical books, Cardinal Jiménez explains in a preface that they “are books outside the canon which the Church has received more for the edification of the people than for the authoritative confirmation of ecclesiastical dogmas.”

When Martin Luther debated Johann Eck at Leipzig in 1519 on various Catholic doctrines that Luther rejected, Luther probably did not feel that he was stirring controversy by disputing the canonicity of 2 Maccabees, since Catholic Cardinals in full communion with Rome were doing the same thing at the time. Such a position became unacceptable for a Roman Catholic only after the Council of Trent in 1546 declared all of the deuterocanonical books to be fully canonical, a position that made many Protestants more vehement in their rejection of these books. But the earlier Protestant position had valued these books for Christian edification. When Luther translated them as part of his German translation of the entire Bible, he sounded much like Cardinal Jiménez in describing them as “books that do not belong to Holy Scripture but are useful and good to read.”
Both Protestants and Catholics pointed to earlier times, especially the fourth century, as confirming their own views.
They were both right.
The origins of the Bible stretch back a long way before the Common Era, but the fourth century CE was an important time for the Bible. One could say that the Bible was invented in the fourth century, since for the first time all of Scripture could be—and was—contained in a single cover (see Codex Sinaiticus and Codex Vaticanus), rather than in small codices or scrolls that included only a few biblical books. It was also a time when some Christians were interested in clarifying which books counted as revelation from God, and which books didn’t. They composed lists of the books of the Bible: these lists were very similar to one another, but they also differed in important ways.
In the West, the biblical canon lists usually (not always) agreed completely on the books of the New Testament. In the East, the New Testament was, again, usually very similar across the lists, though the Book of Revelation was long disputed. But the status of the Old Testament deuterocanonical books in both the East and the West proved challenging. The fourth-century lists in the East—composed by such figures as Origen of Alexandria/Caesarea, Athanasius of Alexandria, Cyril of Jerusalem, and Gregory of Nazianzus—omitted almost all of them, but these books found a warmer welcome in the West.
Here we come to Jerome and Augustine, the greatest biblical scholar and the greatest theologian, respectively, in the early Latin church. These two churchmen composed lists of Old Testament books within a few years of each other, during the last decade of the fourth century. As for the deuterocanonical books, Augustine did not even mention the issue within his discussion of the canon; he quietly listed all these books in their respective sections of the Bible.
Jerome, the primary translator of the Latin Vulgate, took the opposite path. Not only did he exclude the deuterocanonical books from his biblical canon, but he was far from silent on the issue. In his most well-known statement on the matter—a preface to his translation of the books of Samuel and Kings—Jerome listed all the books of the Old Testament in (what he took to be) the order of the Jewish Bible, thus without the deuterocanonical books. Then he brought up the issue, asserting that the books we call deuterocanonical are actually “apocrypha” and should be excluded from the Bible.
Like the Protestants and their Catholic opponents, neither Jerome nor Augustine was coming up with a new teaching on the biblical canon—they were both passing along what they took to be Christian tradition as they had received it. Augustine was right that for many Christians, particularly in the West, the deuterocanonical books had functioned as Scripture and appeared in canon lists for decades before the late fourth century. Jerome was right that for many Christians, particularly in the East and those Latin-speakers influenced by the East, the deuterocanonical books had not been considered on par with the other books of the Bible and had consequently been omitted from many biblical canon lists.
Catholics and Protestants have different Bibles today because of the disputes of the sixteenth century, when the opposing sides each claimed that the early church supported their own views.
The bottom line is: they were both right.
Featured image credit: Saint Jerome in His Study by Vincenzo Catena (1470–1531). Public domain via Wikimedia Commons .
The post The Origins of the Reformation Bible appeared first on OUPblog.
