Oxford University Press's Blog

November 26, 2016

Are we responsible for our lifestyle diseases?

Within the last couple of decades, more and more research has shown a number of diseases, such as type 2 diabetes and cardiovascular diseases, to be associated with particular lifestyle characteristics such as smoking, lack of exercise, and over-eating. Confronted with such research, it is timely to raise questions about individual responsibility for getting those diseases (or the increased risk thereof), and to think more closely about issues such as blame, stigma, and economic burdens.


For instance, a 2011 Danish study asked Danes whether weight loss surgery should be financed by the obese themselves (or the state). 46.5% responded yes, and 20.3% responded that they did not know. Most interestingly, however, 74.5% of those who responded that weight loss surgery should be financed by the obese themselves also responded that if there is evidence that the patient is not responsible for the obesity, then they would change their mind.


The common sense causal conception of responsibility


Are individuals with particular lifestyles responsible for increased risks of corresponding lifestyle diseases? Well, most of us associate the question of personal responsibility with causality. We cause or control our own behaviour and are therefore responsible for our own behaviour, other things being equal. However, the “other things being equal” clause is important. A great deal of research seems to show that having obese parents, circumstances associated with lower socioeconomic classes, exposure to famine during childhood, and certain genetic dispositions are factors that predict obesity to such a degree that we also have strong reason to believe they are causes of obesity.


Confronted with evidence of such unchosen causes, most of us seem to lower the strength of our judgment about personal responsibility for obesity. Even though we might be the cause of our actions, something external (or indeed internal) is causing us to do them. Social, biological, and environmental factors that are shown to causally influence individual lifestyle simply seem to count as softening, or even undermining, responsibility.



Health by mojzagrebinfo. CC0 public domain via Pixabay.

Responsibility in a natural world


It is natural to think that all events have a set of necessary and sufficient causes. If so, then our actions are linked to causal chains that go way back in time, and obviously we cannot be responsible for those chains. If, on the other hand, some of our actions happen randomly, as some interpretations of quantum mechanics suggest, then they are obviously outside of our control. Therefore, we cannot be responsible for anything in either a deterministic or a probabilistic world. Within a naturalistic causal framework, responsibility has only one remaining possibility:


Agent-causality is the hypothesis that individuals (agents) can start new causal chains that are neither predetermined nor random. We ought to consider whether we, as a matter of genuine free will, possess the ability to perform actions agent-causally. The problem with this hypothesis, however, is that, by definition, an agent-causal performance cannot have any further causal explanation. If someone decides to eat a cake agent-causally, then we can give no causal explanation of why she decided so. She decided so, and that is the causal explanation. But such an explanation is incompatible with our scientific worldview. Whether we study humans at a psychological, sociological, or biological level, we look for the causes of why we act as we do, and we explain our actions by reference to these causes. If an act actually results from an agent-causal performance, then a scientific causal explanation of that act is necessarily mistaken. By inference to the best explanation, agent-causality is therefore mistaken, and, holding the common sense causal conception of responsibility, we are logically forced to conclude that we are never responsible for anything. As a matter of personal responsibility for obesity and lifestyle diseases, or indeed anything, any new causal findings are necessarily redundant.


Alternative conceptions of responsibility


This conclusion is in no way new among philosophers. Finding it hard to accept, however, many have attempted to come up with alternative understandings of responsibility that are immune to the implausibility of agent-causality. One approach, by Harry Frankfurt, treats responsibility as a matter of identity confirmation. According to Frankfurt, responsibility requires a correspondence between an individual’s first-order desires and second-order volitions. A smoker who, through addiction, has a first-order desire to smoke might also have a second-order volition not to be a smoker. If so, he is not responsible for smoking, on Frankfurt’s view, because smoking does not correspond to who he really wants to be. To be responsible, his second-order volition would have to confirm his first-order desire. Another general approach, most comprehensively laid out by Fischer and Ravizza, concerns whether we respond properly to reasons. If, for instance, an alcoholic would refrain from drinking the next bottle if we promised him a million euros, then he is reason-responsive, and, other things equal, responsible for drinking.


New evidence of the causes of our behaviour makes no difference


However, these alternative approaches to responsibility, so-called compatibilist approaches, do not operate in a naturalistic causal framework. They are immune to the implausibility of agent-causality, but they are thereby also immune to external causal explanations of our behaviour. The widespread common sense intuition that causal influences on our lifestyle count as responsibility-diminishing cannot be accommodated in these alternative approaches. They simply take no account of causality, and the question of whether we are responsible for our lifestyle is therefore insensitive to empirical findings about the social, biological, and environmental causes of why we do as we do.


To conclude, any new evidence of causal influences on our lifestyle and disease-relevant behaviour will have absolutely no rational impact on discussions of whether we are responsible for our lifestyle diseases. On the one hand, we can accept that responsibility must find its place in our natural world – in which case we can rule it out by way of mere conceptual thinking. On the other hand, we can deny that responsibility must find its place in our natural world – in which case new evidence of “external” causal influences on our lifestyle will not make any difference. It will not make a difference simply because responsibility, according to the alternative approaches, is insensitive to any evidence of the causes of our behaviour. Of these two options, we find it more rationally convincing to accept the first and declare that we are, indeed, never genuinely responsible for any of our actions.


Featured image credit: Yoga by gazarow. CC0 Public Domain via Pixabay.


The post Are we responsible for our lifestyle diseases? appeared first on OUPblog.



How did Shakespeare originally sound?

We all know the classic Shakespearean lines – “To be or not to be,” “O Romeo, Romeo! Wherefore art thou Romeo?” or “Shall I compare thee to a summer’s day?” — but how would these famous lines have sounded to Elizabethan audiences? Are we currently misinterpreting the Bard? This question has been on the minds of Shakespeare scholars, directors, actors, and audiences for a long time, and has proved a tricky problem. The central issue is how to re-create an oral phenomenon, with predictably few pronunciation guides to work from.


Many Shakespearean sonnets no longer seem to rhyme, with the classic final couplet leaving modern readers nonplussed. Take Sonnet 154 for example:


Came there for cure and this by that I prove,


Love’s fire heats water, water cools not love.


Should this be ‘Pruv’ and ‘Luv’, or ‘Proove’ and ‘Loove’? Or Sonnet 61:


For thee watch I, whilst thou dost wake elsewhere,


From me far off, with others all too near.


Again, should we be reading this ‘Where’ and ‘Nhere’ or ‘Weer’ and ‘Neer’?


Through analysis of spelling variants, rhymes, and current usage, these questions are finally being answered. To modern ears, Shakespearean speech sounds like a strangely familiar cross between contemporary Irish and Scottish, with a hint of Yorkshire and Bristolian thrown in. Test your knowledge of common Shakespearean words with our quiz, and listen to recordings of all the phrases – as the Elizabethans would have heard them. How well do you really know Shakespeare?




Featured and Quiz Image Credit: ‘Edwin Landseer, William Shakespeare’ by Chaos07. CC0 Public Domain via Pixabay.


The post How did Shakespeare originally sound? appeared first on OUPblog.



Helping students excel with integrity

On 19 October 2016 the International Center for Academic Integrity called for education institutions to join an International Day of Action Against Contract Cheating. Using the hashtags #defeatthecheat and #excelwithintegrity, students and staff were invited to share their declarations of why ‘contract cheating’ (that is, paying someone to do your academic work) is wrong. The idea was to raise awareness – not just within institutions but in the public and legislative domains too.


A quick look at those hashtags on Twitter reveals a range of reasons why students are opposed to contract cheating – from missed opportunities to learn and develop key skills, to worries about the impact on future careers, to moral and ethical objections.


But the reality is that cheating does occur.


In September 2016, Associate Professor Tracey Bretag, Director of the Office for Academic Integrity at the University of South Australia Business School, visited the University of Canberra to give a talk on ‘The rise of contract cheating’. She cited recent news stories around contract cheating in Australia, including the 2015 MyMaster scandal which saw students suspended, expelled or stripped of degrees at several universities across the country. The investigation found that as many as 1000 students from 16 different universities had accessed MyMaster’s ghostwriting and test-sitting services.



Assignment Helps, Buy Term Papers Online, My Excellent Writer, Ninja Essays, Brain Trust Academic… just a handful of the ‘cheat sites’ based in Australia from which students can purchase written assignments and other services.


In the teaching and learning space, there has been much discussion around “designing out” cheating – i.e. designing assessment tasks in such a way that cheating is more difficult. As an Educational Designer, I work with teaching staff to help them rethink their assessment design – supporting a shift towards assessment tasks that are authentic, scenario-based, reflective, collaborative… But as Bretag highlighted in her talk, it’s not as simple as that – on today’s market, even ‘authentic’ and ‘personalised’ assignments like reflective accounts and work-based portfolios can be bought.


Clearly, we need a multi-faceted approach to this complex issue. As educators, we need to look at student workload across a whole course and try to avoid having multiple assignments due at the same point in time, so that students don’t feel overwhelmed. We need to provide regular opportunities for formative assessment, making sure that students receive constructive feedback on their progress. We need to create a space in which students feel comfortable admitting that they don’t understand something – perhaps using tools like confidence-based marking to send the message that it’s okay to be unsure, it’s all part of the learning process.


Universities also need to make sure that students have access to the support they need. A flexible self-study course like Avoiding Plagiarism helps students to understand what plagiarism is and gives clear guidance on good practice in referencing and citations. Students can take the course at a time that suits them and can revisit the Epigeum materials when they need to – for example, when writing an assignment.


We need to offer students support in developing their language and academic writing skills so that they feel confident using their own words. Interactive workshops, drop-in sessions, peer mentoring from other students, online tutoring services – all of these can help to ensure that we reach students on and off campus.


University admissions processes need to be rigorous enough that students aren’t set up for failure and are only admitted to courses when they have the necessary pre-requisites that will allow them to be successful. And, once admitted to the university, if a student is found to have plagiarised or cheated, then we need to have an appropriate response – viewing the incident as a learning opportunity and offering additional support whenever possible. Persistent cheating needs to be dealt with appropriately within a framework of institutional guidelines and national legislation.


There’s still work to be done in this area and there’s a need for greater awareness, more discussion, and collaboration between institutions and governments. Ultimately, all of us who work in higher education have a responsibility to help our students excel with integrity.


Featured image credit: Student working. By StartUp Stock Photos. CC0 Public Domain via Pexels


This article originally appeared on the Epigeum Insights blog


The post Helping students excel with integrity appeared first on OUPblog.



November 25, 2016

The origin of Black Friday and other Black Days

Across the US, those who are not too replete with their Thanksgiving feast will be braving the crowds in order to secure themselves one of the bargains associated with Black Friday, the day following Thanksgiving which is often regarded as the first day of Christmas shopping in the US. Even on the Thanksgiving-less shores of Britain, we are starting to see this tradition sneak in. Hunting down bargains is all well and good, but here at Oxford Dictionaries, we are much more interested in hunting down the histories of words. Which other Black days have been marked through history, and does Black used in this way always denote negativity?



Black Friday is seen as a day of huge profit in the world of retail, enough for some to have theorized that its origin lies in the day’s ability to take a company that is in debt, or in the red, and pull it back into the black. This origin story may make this the first Black day where the Black is seen to be bringing positive associations, although an earlier theory holds that the name is a reference to the congestion caused in city centres, particularly in Philadelphia. This is nonetheless a step away from the disaster and ruin that has typically been carried by Black in this context. We explore the merits of these opposing theories in the video above.


When was the first Black Friday?

Though those working in customer services may wish that this year will be the last time we mark Black Friday, when was the first Black Friday? The earliest evidence for the term found by researchers at the Oxford English Dictionary (OED) is from 1610. It will surprise no one to hear that this Black Friday had very little to do with sales, or Thanksgiving. The first Black Friday did not refer to a specific Friday, but rather was used in schools to refer to any Friday on which an exam fell. It is something of a comfort to know that, even in the 17th century, exams were regarded with that same familiar dread.


We have found no evidence from before 1951 of Black Friday referring to the day following Thanksgiving, and in this instance its sense was markedly different to how we use the term today. In this context, instead, the day was associated with staff absences from factories following the Thanksgiving holiday. The first citation found for Black Friday in the sense of the start of the Christmas shopping season comes ten years later, in 1961.


Which other Fridays have been Black?

The moniker has been attached to a number of different Fridays in the years between 1610 and 1951. The next one noted in the OED is Friday 6 December 1745, which was the date that the Young Pretender’s landing was announced in London. The Young Pretender was hardly a welcome visitor, and the extent to which his proximity caused panic across the capital is a matter of debate, but this panic—real or a tool of political spin—nonetheless earned the day its dark title.


The next date to be designated a Black Friday noted in the OED was again one of widespread panic: Friday 11 May 1866 saw the failure of the London banking house Overend, Gurney, & Co. On the very next day, it was reported in the Times, with some clairvoyance, that “The day will probably be long remembered in the city of London as the ‘Black Friday.’” This is the first sense of Black Friday with strong financial associations, and it seems these only grow stronger into the 20th century.


The third (and last) Black Friday listed in the OED happened just three years later, on Friday 24 September 1869, when the introduction of a large quantity of government gold into the financial market precipitated a day of financial panic on Wall Street. The mid to late 1860s saw the beginning of a dramatic climb in use of the term Black Friday in both British and US varieties of English, showing the impact of these events on the language.


This is the last Black Friday to be found in the OED, but not the last day to have gained the title in popular use. The majority of those following Black Friday of 1869 echo the sense of financial ruin, or the associations Black days also carry with loss of life.


Spreading around the world

In recent years, we can be fairly sure which of these many Black Fridays is the subject of discussion in our New Monitor corpus, as the term sees almost no use through the year, and then skyrockets in November, petering out rapidly in December, and so coinciding with only one Black Friday on the calendar. Interestingly, this holds true even for British English, and Englishes in other parts of the world, where Thanksgiving is not celebrated. Though the term is much more common in US English than in British English, its use in the US appears to be declining: November 2015 saw only two thirds as many instances of Black Friday in our corpus as November 2012. In contrast, use in British English is seeing a year on year increase, more than doubling between November 2012 and November 2013, and then seeing more than a 50% increase again between November 2013 and November 2014. It looks like the Brits might be catching up…


Black Monday, Black Tuesday, Black Wednesday…

Of course, Friday is not the only day to have found itself blackened. In fact, there is not a day of the week that has not earned its dark stripes through some disaster or other. The first day evidenced to have Black prefixed to it was a Monday, more specifically Easter Monday; a quotation referring to Easter Monday as Black Monday has been found as early as 1389. There are a few competing theories for what caused the day to be so named. One historical theory holds that the name refers to a severe storm on Easter Monday in 1360, which led to the deaths of many soldiers of Edward III’s army during the Hundred Years’ War. A different historical theory purports that Black Monday is a reference to the massacre of English settlers in Dublin by the Irish on Easter Monday 1209. The name may be unrelated to either event, and may instead be linked to a general belief in the unlucky character of Mondays, possibly influenced in this case by the view that misfortune will naturally follow a celebration like that of Easter Sunday.


The next Black Monday, first quoted in the OED as far back as 1735, echoes our first Black Friday; this was school slang referring to the first day of term following a vacation. The mindset of the pupils bleakly returning to the classroom is readily recognizable and easy to imagine.


A third Black Monday—and the final one to have been noted in the OED—is affixed to a specific date: Monday 19 October 1987, which is the day of a world stock market crash. This reflects the wider trend of days of great financial disaster being marked as Black. Black Wednesday is used to refer to 16 September 1992, when there was a great surge in sales of the pound. And the Wall Street Crash of 1929 was so disastrous as to leave two days painted black in its wake: Black Thursday, 24 October 1929, which marked the first day of panic selling on the New York Stock Exchange; and Black Tuesday, the following week, which is widely regarded as the day the stock market crashed.


The most recent day to be referred to as Black Saturday in the OED was Saturday 4 August 1621, when the articles of Perth were ratified while a brutal storm cast its shadow over the day. Almost a century earlier, the first Black Saturday—and also the first Black day in the OED attached to a specific date—took place on Saturday 10 September 1547, denoting the day of the Battle of Pinkie Cleugh, which saw Scotland catastrophically defeated.


The advent of Cyber Monday

Though not a Black day in itself, Cyber Monday follows Black Friday both on the calendar, and in word formation. Cyber Monday takes the traditional bargains of Black Friday to an online environment, but does it leave behind the last remnants of negativity that Black Friday is carrying? Perhaps not: of the first ten noun collocates of cyber that our Oxford English Corpus finds, only two are either positive or neutral (security; café). The other eight (including criminal, attack, and bullying) are all negative. This suggests that cyber might not be carrying the happiest connotations along with it, though it is doubtless an improvement on the memories of failed battles and financial collapse that cling to Black Friday.


Given the cyber nature of Cyber Monday, it might be expected that it is more international than Black Friday. So far, this does not seem the case: use of Cyber Monday in our New Monitor Corpus is still overwhelmingly US in origin, although its use in US English seems to be in decline, while its use in other varieties is climbing. This mirrors the trends we saw earlier with Black Friday, suggesting that taking place online is not a major factor in making Cyber Monday a globally recognized event. If it continues to follow on in the footsteps of Black Friday, we may find ourselves fighting it out digitally as well as in the shops in order to grab the best bargains for Christmas.


A version of this article originally appeared on the OxfordWords blog.


Featured Image Credit: “Century 21” by Chor Ip. CC BY-SA 2.0 via Flickr


The post The origin of Black Friday and other Black Days appeared first on OUPblog.



Can more be said about statins?

Statins are drugs that are very effective in reducing the level of cholesterol in the blood. They have been shown in many trials to reduce the incidence of heart attacks and strokes. They are taken by very many people, but some argue that even more would benefit from doing so, although not everyone agrees.


I am waiting to be reported to the General Medical Council. I don’t think I’ve done anything wrong, but I’m pretty sure that a patient I saw a couple of months ago has grave suspicions about me. He is a very clever man, now in his mid-80s. His mind is sharp, but perhaps less flexible and less able to come at issues from different angles than it could do 40 years ago. He doesn’t remember, but he was a professor who taught me physiology when I was a medical student then.


He had been sent to my clinic because his GP wasn’t sure what to do and wanted some help. Was he just getting old, or did he have something that could be treated? It was an unusual referral letter, clearly crafted with more care than most, and reading between the lines it was clear that the GP didn’t think he was missing anything, but the patient wouldn’t accept reassurance that there didn’t seem to be anything terribly wrong.


Why my clinic? Blood tests had been taken, and these were unremarkable, excepting that the report described him as having Chronic Kidney Disease (CKD) Stage 3. He didn’t like the sound of this. Stage 3 must be bad, and it might explain his symptoms. His GP had little choice but to make a referral to the renal clinic.



The patient had a variety of symptoms, none of which were dramatic, but the ones that were causing him most distress were fatigue and muscular aches and pains. As a result of his studies, he thought that these might be due to hyperparathyroidism, which is overactivity of the parathyroid glands that occurs in chronic kidney disease. I didn’t agree, key points in my argument being (1) he didn’t have significant chronic kidney disease – 30% of people over the age of 75 years have CKD Stage 3 and (2) blood tests showed that his parathyroid glands were not overactive.


So what was causing his symptoms? Medications commonly cause side effects, and further enquiry revealed that his muscular pains had come on after he had started taking a statin, which he had begun a year previously. Could they, I asked him, be responsible? Could he omit them for a month or so and see what happened? It’s fair to say that he wasn’t keen, and although measured in his replies his expression suggested that he thought that the public required protection from any doctor who could suggest such a thing.



A week or so following this, the Lancet and British Medical Journal featured—yet again—debates about statins. No-one sensible argues that these drugs haven’t yielded massive health benefits to many, but there’s lots of argument about the threshold for recommending their consumption, and about their propensity to cause side effects. The trialists argue that there should be no debate, and some have made pronouncements that give the impression that they believe—like my patient appeared to do—that any doctor who dares question the matter should be reported to the GMC. The BMJ editorialists repeated their plea for sharing of individual patient level data. My experience in the clinic is that statins seem to induce fatigue and aches and pains in more people than the trials report, and an n=1 trial of stopping them for a few weeks, and then re-challenging, is an approach accepted by most patients.


Will we ever get to the bottom of the matter? I’m not sure. When a commentator like Richard Lehman writes in his BMJ blog that ‘the main adverse effect of statins is to induce arrogance in their proponents’, it seems likely to me that finding out the truth is going to be a casualty of protagonists wanting to win the battle at all costs.


Featured image credit: Drug by PublicDomainPictures. CC0 public domain via Pixabay.


The post Can more be said about statins? appeared first on OUPblog.



A note of thanks, a dose of sanity

2016 has had far more than its share of horribleness. Many of us are ready to leave this year far behind, even as we’re terrified of what the coming years may bring. At a time when many people are being told that their voices and lives don’t matter, we think oral historians have a vital role to play in amplifying silenced voices and helping us all imagine a better future. Before we say goodbye to 2016, Troy Reeves reflects back on some of the moments in which the support of friends, colleagues, and even strangers throughout the oral history world has helped to make the present survivable and the future imaginable.


Well, it has been a difficult twelve months. Andrew Shaffer and I both saw our mothers suffer through cancer diagnoses and recovery. We are both immensely thankful to those who were tasked with our mothers’ diagnoses and operations, and who assist with their continued convalescence. And I’m thankful to Andrew for carrying on with the social media during a difficult summer and fall.


I lost my father in July, and our Editor-in-Chief Kathy Nasstrom lost both her parents this summer. As we helped each other through our loss, we grew closer, which I did not think possible. So, I’m extremely thankful for Kathy, not only as the best damn developmental editor in the business but also as one of my best damn colleagues.


On top of that stress, the Oral History Association lost several key members in the last year. The organization’s executive director, Cliff Kuhn, passed away last November. And we lost long-time and well-known scholars Horacio Roque Ramírez and Leslie Brown since the 2015 OHA Annual Meeting in Tampa.


So, when #OHA2016—our 50th Annual Meeting titled OHA@50—commenced last month in Long Beach, the feeling of loss weighed heavily on me. But when I got there, my colleagues reminded me that family does not mean just blood. And they offered a shoulder for me to cry on.


So along with my extended family—my in-laws and “laws”—I’m truly madly deeply thankful for my OHA family. Some of them I have known since my first OHA (Anchorage, 1999); some I just met, or really got to know, in the last couple of years. All of them offer more to me in terms of advice, support, and friendship than I can give in return. To list them all here would serve little purpose; they know who they are.


I’m also quite thankful for the aforementioned venue, the OHA Annual Meeting, for furnishing all of us a place to meet and discuss all the myriad aspects of our profession. Thanks here can focus on a few, specifically Gayle Knight and Kristine Navarro-McElhaney, as well as the program and local arrangement committee members; I will rank this year’s conference as one of my favorites as well as most memorable. A shout out must go, too, to the Mentoring Committee; they have forced me to meet someone new the last two years, which all long-time conference attendees need.


In advance of OHA@50, Andrew and I, with the help of OHA leadership, asked my aforementioned oral history family to state why they love the OHA and/or its Annual Meeting. We listed some of them last month. I won’t bore you with mine, at least not in its entirety. But this year in Long Beach, #OHA2016 was indeed my yearly dose of sanity. While it sounds cliché to say it, in my case it rings true: I’m not sure what I’d have done without it.


Featured image credit: Thank you by Free for Commercial Use. CC-BY-SA-2.0 via Flickr.


The post A note of thanks, a dose of sanity appeared first on OUPblog.



Reflections on ‘chatbot’

A chatbot, or ‘chatterbot’, is a computer program designed to engage in conversation through written or spoken text. It was one of the words on the Oxford Dictionaries Word of the Year 2016 shortlist. The idea of a chatbot originates with Alan Turing’s mid-twentieth-century aspiration to build a thinking computer. Turing proposed a test to determine what might count as success in this venture. If, over a five-minute text-only conversation, a human judge could not distinguish the machine from a human, then we might call it a thinker.


In the years since Turing, chatbots have travelled the path from philosophical thought experiment to Internet commonplace. But there’s a gap between the goal of a chatbot that we might call a thinker and what Amazon, Apple, Facebook, and others have done with Turing’s idea. If and when Amazon’s Alexa, Apple’s Siri, or the chatbots on Facebook Messenger pass the Turing Test, they will do so almost as an afterthought. It may not be so much that those driving the development of chatbots lack the capacity to build machines that pass the Turing Test, but that they have other priorities that they judge to be more important.


Turing gave voice to a sense of wonder about minds whose basic architecture is radically different from our own. What would it be like to share ideas with such a thing? We find this sense of wonder in those who seek to communicate with chimps, dolphins, and parrots. It explains the appeal of the various machine minds that populate science fiction. HAL 9000, the computer in Stanley Kubrick’s movie 2001: A Space Odyssey, presents as an invariant red light centred on a yellow dot. HAL’s statements have more emotional resonance than anything said by the movie’s fairly deadpan humans. We sense its anguish as astronaut Dave shuts it down – “I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.” HAL seems to be undergoing something more momentous than the progressive loss of capacities we should expect from the incremental shutdown of a machine.



Tomy Chatbot, by Michelle M.F. CC-BY-SA-2.0 via Flickr.

The current proliferation of chatbots is a spread of ersatz humanity. Successful confidence tricksters enlist our willingness to play along with the scenarios they create for us. So it is with some of the deficiencies of today’s chatbots. ELIZA, a 1960s psychotherapist program, did a pretty good job of simulating understanding by mechanically turning elements of its interlocutor’s statements into requests for further information. Sample dialogue – ELIZA: “How are you today … What would you like to discuss?” Human: “I’m feeling a bit sad.” ELIZA: “Do you believe it is normal to be feeling a bit sad?” Human: “No, can you help me?” ELIZA: “Do you want me to be able to help you?” We welcome the opportunity to interact with something that is responsive in some way as we unload our problems. ELIZA has zero propensity to judge even the most shaming confessions. If ELIZA is helping you to feel better, then why spoil things by exposing it as a woeful Turing Test failure?
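
To make ELIZA’s trick concrete, here is a minimal sketch in Python of that style of reflection – a hypothetical toy with a tiny hand-written rule set, not the original 1960s script: match a pattern in the user’s statement, swap first- and second-person words, and hand the fragment back as a request for further information.

```python
import re

# A minimal, hypothetical sketch of ELIZA-style reflection (not the original
# program): match a pattern in the user's statement, swap pronouns, and hand
# the fragment back as a request for further information.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i'?m feeling (.+)", re.I),
     "Do you believe it is normal to be feeling {0}?"),
    (re.compile(r"can you (.+)", re.I),
     "Do you want me to be able to {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the fragment reads back naturally."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(reflect(match.group(1).rstrip("?. ")))
    return "What would you like to discuss?"

print(respond("I'm feeling a bit sad"))   # Do you believe it is normal to be feeling a bit sad?
print(respond("No, can you help me?"))    # Do you want me to be able to help you?
```

Even a rule set this small reproduces the sample dialogue above, which is part of why ELIZA’s simulation of understanding felt convincing despite its woeful Turing Test performance.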


This willingness to play along with sufficiently human interlocutors may be a consequence of our evolutionary past. Evolutionary psychologists hypothesize that human brains are built with a Hyperactive Agency Detector. In the environments for which humans evolved, the costs of failing to detect an agent when one is present could be ruinously high. Mistakenly detecting an agent when there is none might be inconvenient. Not noticing a spear-carrying adversary could mean death. We are evolutionarily primed to detect agency in rustling trees and unusual cloud formations. This evolutionary legacy guides our interactions with Siri and the rest. We treat Siri as a sometimes helpful, sometimes frustrating agent in spite of her Turing Test fails.


The recent hack and data-dump from the Ashley Madison “Life is short. Have an affair” dating site exposed some well-known people. It also revealed a large number of chatbots impersonating sexually available women. Suitors commenced their discussions with potential dates already half seduced by a fetching profile photo. Those who programmed the bots understood that the inquiries of romantically interested men tend to fall into patterns. Dating sites acknowledge the use of chatbots. They offer the disclaimer that their sites offer no guarantee of meeting that special someone – or, given Ashley Madison’s advertised rationale, of cheating on that special someone. They emphasize that their purpose is “entertainment”. This makes Ashley Madison seem less like a dating site and more like an update of the 1980s computer game Leisure Suit Larry in which players seek to make selections from a range of pre-programmed statements that will prompt a female character to partially disrobe. Sexually-oriented dating sites may be more interested in chatbots capable of a narrow range of sexually suggestive banter than in bots capable of passing the Turing Test that might prefer to discuss the philosophically vexed issue of rights for artificial beings.


It’s instructive to take the long view of all of this. It’s fun to chat with chatbots. But are they precursors of a future in which we increasingly accept ersatz humanity as a more convenient, cheaper, more entertaining substitute for the real thing? Will our future be one in which we interact with artificial beings that do not pass the Turing Test but which reliably interact with us in ways that Apple and Ashley Madison manage to market at us?


Featured image credit: AI-tech, by geralt. CC0 Public Domain via Pixabay.


You can learn more about the Oxford Dictionaries Word of the Year 2016.


The post Reflections on ‘chatbot’ appeared first on OUPblog.



The year of hating immigrants

2016 has been a year of bitter political debates fueled in large part by drastic divides regarding how immigrants affect national well-being. The US presidential race, the British Brexit vote, as well as other challenges within the European Union, and growing competition against the otherwise durable German Chancellor Angela Merkel all display deeply rooted fears of inadequately controlled immigration. New immigrants are blamed for the steady erosion of economic conditions for the working-classes; disintegrating cultural and social norms; and rising crime rates and threats of terrorist attacks. Politicians who call for the securing of national borders and eviction of targeted groups of newer immigrants have attracted vocal and vehement supporters.


The appeal of such slogans is obvious, if one considers that regulating immigration is one of the most direct ways by which countries can socially engineer their populations to gain advantages of all sorts. These days, immigration laws generally welcome immigrants who are believed most likely to contribute economically because they bear useful skills, educational credentials, entrepreneurial energies, and investment capital. Ideally, immigrants would also share political values and be readily absorbed socially and culturally, although laws that select for such attributes may invoke unacceptable forms of discrimination, as in Republican president-elect Donald Trump’s call to ban immigration of Muslims.


Almost no one believes that borders should be completely open, although there is wide disagreement about what rationales should take priority in limiting who can immigrate legally and who can gain citizenship. Should we really prioritize economic migrants? What of people fleeing instability and danger in their homelands—do they have rights to safe passage and new homes as refugees? How do immigrants affect the common good in terms of costs to social services weighed against their economic contributions? How is assimilability measured, and how much and what kinds of diversity can be accommodated into democratic societies?


Disagreements about what priorities immigration restrictions should advance are compounded by the reality that immigration laws have proven almost impossible to enforce completely, fanning fears that unauthorized immigrants are ignoring and undermining national interests.


In this fraught climate, the human realities of those who migrate often disappear from view. Whether they cross borders for economic or political reasons, migrants display high levels of aspiration in the risks and sacrifices they undertake in search of better lives. They leave behind loved ones and familiar homes for uncertain futures, often paying large sums of money to brokers who manage their travel, drawn by hopes that their living conditions will improve, whether through better job or business opportunities, greater safety, freedom from persecution, a better environment, or reunification with family and friends. They are usually driven more than most to succeed precisely because they have given up so much in migrating, and are prepared to work hard and undertake employment shunned by others in order to do so.



As a scholar of migration, I write histories of migration that try to strike a balance between the lived realities of migrants, both legal and unauthorized, and the legal and bureaucratic conditions imposed by the nations in which they travel. I have found that immigration laws become more effective when they work with, rather than against, the ambitions of migrants seeking better lives. Channeling their considerable energies and dreams can produce mutual benefit both for their countries of new settlement and for themselves, rather than pitting the two forces in costly and dehumanizing opposition.


Asian Americans became model minorities even though they had been the earliest targets of enforced immigration restriction in the United States, and banned from citizenship for most of US history (1790-1952) because they were viewed as racially inassimilable. Nonetheless, Asians continued to immigrate and made lives for themselves in the process of helping to develop the United States through railroads, farms, fisheries, a plethora of businesses large and small, trade, education, and many forms of civil service. During World War II, laws and attitudes began to shift so that such racial discrimination became unacceptable. With the normalizing of Asian immigration and access to citizenship, numbers grew, as did the high visibility of “model minority” Asian Americans. Most have immigrated through employment preferences in the 1965 Immigration Act, now just past its 50th anniversary, which selects for Asian immigrants trained and educated for professional, entrepreneurial, and white-collar livelihoods and success. That Asian Americans in the aggregate have such high levels of employment and educational attainment demonstrates the power of laws and bureaucracies to screen for “highly skilled” immigrants who succeed economically, but also to brand as illegal and invasive those who share many of the same aspirational traits and capacities to gain employment, but don’t fit into legislatively imposed categories of welcome immigrants.


Featured image credit: Statue of Liberty Landmark by Unsplash. Public domain via Pixabay.


The post The year of hating immigrants appeared first on OUPblog.



November 24, 2016

Black Friday: the dark side of scarcity promotions

Each year, reports of violent incidents between consumers during shopping-crazed holidays (e.g. Black Friday) emerge. These incidents involve individuals physically and verbally assaulting, robbing, and even shooting fellow consumers. While others have shown that individuals will resort to aggression and violence when survival resources are in short supply, violence also emerges when consumers attempt to acquire luxury products in resource-rich consumer environments. Very little is currently known about the drivers of such acts.


Does simply encountering a scarcity promotion, such as a newspaper or television advertisement or online pop-up ad, cultivate seeds of aggressive behavior in consumers and predispose them to act in a violent manner? Is marketplace aggression not merely the outcome of crowds during shopping holidays, but activated beforehand at ad exposure?


A scarcity promotion is a marketing tactic that emphasizes limited availability (either in quantity or time) of a specific product or event. Firms utilize scarcity promotional tactics throughout the year, but their most salient usage is high-profile shopping-oriented events in which large discounts are offered on highly desirable items, but available quantity is often limited, as is the time to access the promotion (only that day or week).


To test this, a series of 7 laboratory experiments with a total of 1,173 participants was conducted. Participants were exposed to a promotional ad for a highly desirable product.


The promotion varied, however: the ad either described the available quantity as very low (scarcity), described it as very high (control), or made no mention of product quantity. Shortly after ad exposure, participants moved on to a purportedly different study: one in which they had the opportunity to behave aggressively (e.g., playing violent video games, encountering a jammed vending machine, or selecting between violent/non-violent experiences).


The results showed that exposure to scarcity promotions limiting product quantity led to increased aggressive behavior, and this was because consumers perceived other shoppers as a potential competitive threat to obtaining the desired product. Further, scarcity promotion exposure has a physiological effect on consumers. Specifically, exposure to the limited-quantity ad increases testosterone levels, which may facilitate aggression when an opportunity becomes available. Although aggression and competition are related, scarcity promotions heightened the likelihood that consumers would engage in aggressive, competitive actions like shooting, hitting, and kicking, but did not increase non-aggressive, competitive actions like working or thinking harder.


In some conditions, however, aggressive behavior does not result from exposure to scarcity promotions. Cues that directly minimize consumers viewing other shoppers as competitive threats attenuate the aggressive responses to scarcity. For example, exposure to a scarcity promotion led to increased aggression when the promotion limited quantity (e.g., Only 5 Available), but not when it limited time (e.g., One Day Only). This is because promotions that limit product quantity inherently pit consumers against each other and heighten the competitive threat others pose to securing the desired good. Put another way, when product quantity is limited, consumers will miss out if they do not get to the product before other consumers.


Conversely, in promotions that limit time, all consumers who want to secure the promotional product will do so as long as they arrive within the allotted time, making the perceived competitive threat other consumers pose in inhibiting product acquisition minimal (i.e., consumers only compete against the clock and not each other).


In sum, when the doors open on Black Friday and the consumers rush in, racing towards the few discounted items, the aggression that ensues may have originated long before they entered the store, perhaps as soon as they saw the first Black Friday ad.


Featured image credit: Black Friday by Powhusku. CC BY-SA 2.0 via Flickr.  


The post Black Friday: the dark side of scarcity promotions appeared first on OUPblog.



How does the brain work?

The media are full of stories about how this or that area of the brain has been shown to be active when people are scanned while doing some task. The images are alluring and it is tempting to use them to support this or that just-so story. However, they are limited in that the majority of the studies simply tell us where in the brain things are happening.


But the aim of neuroscience is to discover how the brain works. Yet the spatial resolution of these images is only of the order of millimetres. This means that within any one area that is shown to be active there are many millions of nerve cells or neurons. It is an awesome challenge to find out how these are connected and how their combined activity carries out particular operations. And just think: there are an estimated 86 billion neurons in the human brain as a whole!


One way of trying to find answers is to appeal to the operations that are carried out by computers. In other words, we can appeal to ‘computational’ theories. One of the pioneers of what we now call ‘Computational Neuroscience’ was David Marr. In his seminal book on Vision (1982), he suggested that we address the problem at three levels: 



The computational level – what is being computed?
The algorithmic level – what transformations are being carried out?
The level of implementation – how are these transformations implemented?

In a computer, the operations are carried out by silicon chips, but in the brain, by circuits of neurons. It is not so difficult to understand how silicon chips work because we have made them. The problem is that this is not the case for neuronal circuits.



So it is simply astonishing that when he was a graduate student in the late 1960s David Marr produced theories as to how three of the structures in the brain actually work. These were the cerebellum, involved in the automation of motor skills; the hippocampus, involved in retrieving memories from our life; and the neocortex, involved in categorising and classifying the sensory information that we receive from the world. At the time, the majority of scientists were addressing questions as to what each brain area did. Yet here was someone having the daring to ask what transformations were performed and how they were implemented in neuronal circuitry.


These theories turned out to have a very widespread influence in the scientific literature. And they were so far ahead of their time that it was not for ten years that the most basic prediction made in the cerebellar theory was tested and found to be correct. Since then, neuroscience has advanced rapidly, with the advent of many new methods. We can trace the anatomical connections of individual neurons or groups of neurons; we can record the activity of individual neurons or groups of neurons while animals or people are performing tasks; we can record the magnetic signals that are produced when neurons become active, and do so in people using detectors from outside the head; and we can even turn off the activity of particular neurons using light. So we are now in a position to ask how Marr’s early theories hold up in the light of the findings of today’s neuroscience.



It is a tragedy that David Marr did not live to find out for himself. He died in his mid-thirties in 1980, though his wife, the neuroscientist Lucia Vaina, is still working. Had he lived he would have been 70 last year. Without question, there would have been a scientific gathering to congratulate him on his achievements.


As it is, there is no better way of celebrating one of the most influential scientists of the modern era than to use current knowledge to produce theories of how the cerebellum, hippocampus, and neocortex actually work. And there are scientists around the world who are now doing so. Neuroscience is coming of age.


Featured image credit: Computer by Lorenzo Cafaro. CC0 public domain via Unsplash.


The post How does the brain work? appeared first on OUPblog.


