Oxford University Press's Blog, page 563
January 8, 2016
Words of 2015 round-up
Word of the Year season closed with the selections of the American Dialect Society this past weekend, so it’s time to reflect on the different words of 2015. The refugee crisis and gender politics featured prominently in selections around the globe, as did the influence of technology.
In the English-speaking world:
Collins Dictionary named “binge watch” as their Word of 2015.
Oxford Dictionaries selected the “Face with Tears of Joy” emoji.
Dennis Baron selected the gender-neutral singular “they” as his Word of the Year.
Quartz’s (unofficial) nomination for Word of the Year is also the singular “they”.
The Australian National Dictionary Centre’s Word of the Year is “sharing economy”.
Dictionary.com selected “identity”.
Merriam-Webster selected the suffix “-ism”.
Cambridge Dictionaries selected “austerity”.
The Association of National Advertisers (ANA) selected “content marketing”.
Global Language Monitor
In New Zealand, Public Address selected “quaxing”.
Ben Zimmer thought “they” as a gender-neutral singular pronoun was an early front-runner.
Lynne Murphy’s US-to-UK Word of the Year is “mac & cheese” and UK-to-US is “backbencher”.
Nancy Friedman selected “refugee”.
Gary Nunn reflected on the whole Word of the Year phenomenon.
Samantha Bennett also reflected on Word of the Year nominees.
David Barnhart offered a brief list of nominees.
Lake Superior State University released their annual list of words to be banned.
Michael Skapinker was interested in a few new verbs this year, such as “to database”.
Wordspy has selected “feardom”.
“Atticus” is the
The overall name of the year is Caitlyn Jenner. #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Gender-neutral they wins Most Useful by a landslide! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
And the winner of Most Creative is ammosexual! No run-off required. #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Manbun wins Most Unnecessary! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Fuckboy wins Most Outrageous! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Netflix and chill wins Most Euphemistic! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Most Likely to Succeed goes to ghost—to abruptly end a relationship by cutting off communication #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Sitbit wins for Least Likely to Succeed! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
wins for Most Notable Hashtag! #WOTY15 #LSA2016
— Am. Dialect Society (@americandialect) January 8, 2016
Q&A with audio transcriptionist Teresa Bergen
As you may have heard, Wisconsinites love the people who can quickly turn our spoken words into written text. Transcriptionists are the unsung heroes of the oral history world, helping to make sure the incredible audio information stored in archives across the globe is accessible to the largest audience possible. To learn more about the work they do, this week we bring you an interview with freelance transcriptionist Teresa Bergen.
How did you get started with transcription?
While getting an MFA in fiction writing at Louisiana State University, I worked part time in the T. Harry Williams Center for Oral History and did contract Louisiana history research projects to put myself through graduate school. After graduating, I worked at the Williams Center full-time for a while, mostly indexing and editing oral histories. In 2000, I moved across the country to Portland, Oregon, planning to find a full-time writing job. I soon realized it would be more lucrative and interesting to freelance as both a transcriptionist and a writer. I’ve been doing both for many years now.
How is technology changing the work you do?
I remember wondering if “this digital stuff was going to catch on.” It sure has! A couple of my clients still prefer cassette tapes, but now it’s overwhelmingly digital. While I prefer digital files, it’s a bit more confusing because there are so many formats. I use Express Scribe software, which is compatible with many but not all digital files. When I find myself battling with a mysterious file extension, I miss cassettes just a tad. They might have been distorted and prone to snap or melt, but a cassette was a cassette.
What was your most interesting transcription job?
I’m lucky because many different topics interest me, so I’m seldom bored. That is the hardest question to answer! A few projects that stand out are Civilian Conservation Corps workers building Acadia National Park in Maine, rural healthcare in Kentucky in the early 1900s, and places that disappeared, such as an old company lumber town in Louisiana and the Nez Perce salmon fishing grounds on the Columbia River that were dammed years ago. Whenever people find an old box of forgotten cassettes, that’s pretty exciting. Usually they’ll be problematic – degraded tapes with unidentified or poorly identified narrators and interviewers – but I feel like I’m doing my little part to rescue a lost piece of history. I’ve listened to narrators who were born in the late 1800s, who remembered World War One and the 1918 influenza epidemic, whose stories were told in the ‘60s and then stayed locked in a box for 50 years. When I transcribe something like that I feel hugely privileged that I get to have this immediate experience of a long-gone person speaking right into my ears. I love transcription and while some people view it as lowly – I’m using skills from my eighth grade typing class (thank you, Mr. Medina of Dana Junior High in San Diego) more than from graduate school – I enjoy listening, typing, and making people’s stories accessible to interested researchers and descendants.
I feel hugely privileged that I get to have this immediate experience of a long-gone person speaking right into my ears.
What do you want oral historians to know about your work?
In order for me to do my best work, oral historians need to control the interviews. While most people I work with do an excellent job of this, some don’t. They interview groups of people and allow them to talk simultaneously and interrupt each other. Family members and spouses who constantly dispute or finish each other’s stories are the worst. I don’t have supernatural hearing. Nor am I talented at telling apart the voices of a group of people. So I’ll try my best, but after a few listens I’ll mark a passage [unclear] and move on.
Every once in a while this type of interviewer says, “Please speak one at a time. This will be too hard for the transcriptionist.” While I appreciate this consideration, the people who really lose out are the interviewer and the narrators. The interviewer is losing their precious research, plus the money they’re paying me for a product I can’t really deliver. And the interviewer is wasting the narrators’ time by not setting up an environment where the stories can be well preserved. This always pains me. Please, oral historians, don’t let your narrators run amok!
Also, if you’re interviewing a group of people, identify, identify, identify. Not just at the beginning. Ideally, every time a narrator speaks, he or she begins with, “This is Shirley” or whoever. Or the interviewer can say it for them. “That’s Shirley talking now.” This may seem like overkill, but it makes your transcript much more reliable. And the narrators will see that you’re serious about honoring them as individuals with something to say of enduring historic relevance.
I love transcription. For me, it can be a very intimate experience with a stranger. When I listen closely enough to type word for word, I feel like a channel for somebody’s story, or even their life, especially if it’s a very old person or somebody who died between recording and transcription. I laugh, I cry, and they never know who I am.
We think all of our transcriptionist friends are beautiful, and we are eternally grateful for the work they produce. Add your voice to the discussion in the comments below or on Twitter, Facebook, Tumblr, or Google+. If you’d like to discuss an innovative project you’re working on, consider submitting it for publication on this blog.
Image Credit: “Typing” by Sebastien Wiertz. CC BY 2.0 via Flickr.
The post Q&A with audio transcriptionist Teresa Bergen appeared first on OUPblog.

The traumatising language of risk in mental health nursing
Despite progress in the care and treatment of mental health problems, violence directed at self or others remains high in many parts of the world. Consequently, there is increasing attention to risk assessment in mental health. But is this doing more harm than good?
The continuing focus on risk, well-intentioned as it is in reducing harm and increasing people’s safety, has a stigmatising, and, in some cases, traumatic effect on people using mental health services. It reinforces the myth that people who are mentally unwell are an inevitable risk to society, and that through risk assessment we can minimise or even eliminate this threat. It is the often unquestioned acceptance of the effectiveness of risk assessment, and the unconscious bias that emerges from this narrative that poses the biggest risk.
Why do we need risk assessment?
Risk assessment seeks to identify the likelihood of harm to self or others with a view to preventing or minimising such harm. National crime statistics and suicide data show that three times as many people with mental health problems will take their own lives as will take the lives of others.
Risk algorithms (based on asking specific questions, and generating a subsequent ‘score’), allow for a calculation of clinical utility similar to ‘numbers needed to treat’ (NNT). For example, before using a particular intervention to reduce harm to others – it would be useful to know how many people would need to be treated using the intervention in order to reduce harm by at least one incident. An NNT of ten would mean that ten people would need to be treated to reduce harm by at least one incident. Thus, the intervention might be seen to have value. An NNT of 100 would mean that 100 people would need to be treated to reduce harm by at least one incident. In this case, the intervention might be seen to have less value.
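The NNT arithmetic above can be sketched in a few lines. This is a minimal illustration assuming the standard definition of NNT as the reciprocal of the absolute risk reduction between an untreated and a treated group; the incident rates below are invented for illustration, not taken from any study:

```python
def numbers_needed_to_treat(untreated_rate: float, treated_rate: float) -> float:
    """NNT: how many people must receive the intervention to prevent
    one additional harmful incident. Computed as 1 / absolute risk reduction."""
    arr = untreated_rate - treated_rate  # absolute risk reduction
    if arr <= 0:
        raise ValueError("intervention shows no risk reduction")
    return 1.0 / arr

# 20% incident rate untreated vs 10% treated: ARR = 0.10, so NNT = 10,
# and the intervention might be seen to have value.
print(numbers_needed_to_treat(0.20, 0.10))

# 11% vs 10%: ARR = 0.01, so NNT = 100, a far less attractive intervention.
print(numbers_needed_to_treat(0.11, 0.10))
```

The same reciprocal logic carries over to the ‘numbers needed to detain’ calculation discussed below.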

In a similar vein is the calculation of ‘numbers needed to detain’ (NND). Requirements for legal detention in most jurisdictions are that a person, who is judged to have a mental disorder of a nature or severity that threatens the safety of the person or others, may be detained involuntarily. The issue of risk is incredibly pertinent here. As long as risk is linked to decisions around legal detention, mental health practitioners with the powers to detain people involuntarily would benefit from access to the best available risk assessment methods.
The unseen risks in ‘risk’
In his 2010 Gresham College lecture, George Szmukler, Professor of Psychiatry at the Institute of Psychiatry, Psychology and Neuroscience, illustrated how the issue of risk discriminates against people thought to be mentally ill. A person deemed to be mentally ill who is thought to be at risk of harm to self or others can be detained even if they have never committed a violent act, have insight into their risk, and have plans to manage it, i.e. they have mental capacity. Conversely, a person with a life-threatening physical illness who refuses treatment that can save their life and who is deemed to have mental capacity can refuse treatment without fear of involuntary detention.
The number of people in the general population who may be deemed risky to others, and who are unlikely to have a mental disorder, greatly exceeds those with a mental disorder deemed risky. Yet these people are not eligible for involuntary detention under mental health law. Therefore, detention on the grounds of risk discriminates against people who have a mental disorder, and this situation should sit uneasily in democratic societies with strong civil libertarian principles enshrined in law.
If researchers, clinicians, managers, and policy makers are serious about reducing or preventing harm, they will need to identify the likely causes, develop interventions to address these causes, and evaluate these interventions in well-designed studies. We could take steps to reduce harm with more confidence if we knew that A (e.g. ‘unsafe’ staffing levels) causes B (harm) and taking steps to reduce A leads to a reduction in B. Therefore, only a thorough understanding of what causes harm will allow us to discover causal risk predictors that we can then address to increase safety.
Harm in the mental health context is linked to environmental factors such as the use of physical restraint and seclusion, over-occupancy, staff–patient ratios, ward rules, and staff characteristics. Yet risk measures seldom include such factors; they are, almost without exception, individual-centric. This person-centric approach helps explain why many service users do not engage with risk assessment or consider the process stigmatising, disempowering, and even traumatic. A consequence of this approach is that risk assessment methods lack credibility in the eyes of many service users, for whom risk is something that is done to them. It also violates the principle of ‘no decision about me without me’.
From risk to safety: an idea whose time has come

Risk assessments are seldom linked to improved therapeutic outcomes, and often further marginalise disenfranchised groups by labelling people rather than understanding and helping to resolve their difficulties. Moving forwards, risk assessment needs to be focussed on safety issues – secured by a desire to improve, reintegrate, retrain, and foster recovery. Marginalised groups such as those living with mental health issues are further burdened as they often live in communities associated with recurrent harm and crime. The label of ‘risk’ promotes stigma by classifying individuals as unsafe, thereby giving society’s prejudices and fear the stamp of scientific approval. Clinicians using risk assessment measures usually have benevolent intentions, but history has shown that such intentions do not always lead to benevolent outcomes.
A simple way to address the pitfalls in risk assessment is to focus on the issue of safety. The language of risk punishes and stigmatises; in some cases it may traumatise. The language of safety nourishes and protects. A collaborative approach to safety assessment, in partnership with service users, is recommended. Given that safety is a sensitive topic, this discussion should be fully integrated throughout an assessment interview, not added on as a parting shot ending a clinical encounter. By placing ‘safety’ at the heart of our work around risk, and acting with both compassion and clinical knowledge, we can ensure better outcomes for all involved.
Featured Image Credit: ‘Rime Risk Ryze Pysa Saber Askew Augor Zes MSK LTS WCA TMD AWR SeventhLetter LosAngeles Graffiti Art’ by A Syn, CC BY-SA 2.0 via Flickr.
The post The traumatising language of risk in mental health nursing appeared first on OUPblog.

Separating investment facts from flukes
There are hundreds of investment products in the market that claim to outperform. The idea is to identify certain information that allows us to pick stocks that will do better than average and those that will do worse. When you buy the stocks you expect to do better and short sell the ones you expect to do worse, you have potentially identified a strategy that will ‘beat the market.’
Most of the empirical academic papers that try to identify these factors are wrong.
Here is the intuition. Suppose we identify some variable ‘X’ that we think will predict the market. We measure the correlation. In the usual approach, we want to have 95% confidence that we have identified something real. In other words, we are willing to tolerate a 5% chance that we have a false positive – or a fluke. The usual test here makes sure that the correlation is two standard deviations away from zero. If the variable passes this hurdle, we declare it ‘significant’.
Now let’s change the problem. Instead of examining a single ‘X’, we look at 20 different ‘X’s, that is X1, X2, …, X20. We measure the correlation for each one and we find one of these variables, say ‘X7’ that passes the two standard deviation test. We declare it ‘significant.’ However, this is a serious mistake.
Given that we are trying so many variables, one of them might show up with a large correlation purely by chance – the fluke. In this particular case, finding that ‘X7’ is two standard deviations from zero does not mean there is a 5% chance of a fluke finding – it means there is a 64% chance of a fluke. In short, when you try lots of variables, the two standard deviation rule fails. We must increase the hurdle or we will be disappointed with the performance of our investments.
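The 64% figure follows from basic probability: if each independent test has a 5% false-positive rate, the chance that at least one of 20 clears the hurdle is 1 - 0.95^20. A quick sketch; the Bonferroni adjustment at the end is one classical correction, shown as an illustration rather than as the article's own recommended hurdle:

```python
def family_wise_error(num_tests: int, alpha: float = 0.05) -> float:
    """Chance of at least one false positive across independent tests,
    each run at significance level alpha."""
    return 1.0 - (1.0 - alpha) ** num_tests

print(family_wise_error(1))    # the single-test case: 5%
print(family_wise_error(20))   # the 20-variable case: about 64%

# The Bonferroni correction is one classical fix: test each variable at
# alpha / num_tests, which raises the standard-deviation hurdle per test.
print(family_wise_error(20, alpha=0.05 / 20))  # back near 5% overall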
It is important to consider the history of factor discovery, which began in 1964. Indeed, 316 factors have been published in top finance and economics journals, and it is likely that most of them are flukes.
In the exhibit below, the green curve and the right-hand side axis shows the history of factor discovery as well as an extrapolation out through 2030. The dashed line (and left-hand side axis) shows the usual rule (two standard deviations from zero) to declare a finding ‘significant’. The blue line shows a new recommended hurdle for declaring something ‘significant’. The ‘x’ symbols represent some of the more prominent discoveries.

This exhibit is supposed to ‘rewrite history.’ Take, for example, 1992, the year that the famous Fama and French paper was published in The Journal of Finance, which argued for a three-factor model. The extra two factors they discovered are labeled HML (a value factor where you buy high book-value-to-market-capitalization stocks and sell low book-value-to-market-capitalization stocks) and SMB (buy small stocks and sell large stocks). Both HML and SMB clear the traditional hurdle of two standard deviations. However, by 1992 many factors had already been tried. The blue line shows that if multiple testing had been allowed for in 1992, only HML would have been declared significant. The three-factor model is really a two-factor model. Interestingly, post-1992, SMB has failed to deliver a positive average return. This is consistent with its discovery in 1992 being a false positive.
There are two additional considerations.
First, it is important to control for correlation of tests. The existing tools in statistics do not allow for specific correlations. Many strategies in finance are correlated. For example, the correlation of 20 different momentum factors is very high whereas the correlation of 20 different global macro factors could be quite low. It is possible to adjust the blue line in the exhibit for correlation.
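The effect of correlation on the hurdle can be seen in a small Monte Carlo sketch. This is an illustration, not the authors' method: 20 test statistics share a common factor with correlation rho, and we count how often at least one of them clears the two-standard-deviation (1.96) hurdle by chance alone.

```python
import random

def simulated_fwe(rho: float, num_tests: int = 20,
                  trials: int = 20_000, seed: int = 1) -> float:
    """Monte Carlo estimate of the family-wise error rate when the
    test statistics share a common factor with correlation rho."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        common = rng.gauss(0.0, 1.0)
        if any(abs(rho ** 0.5 * common + (1 - rho) ** 0.5 * rng.gauss(0.0, 1.0)) > 1.96
               for _ in range(num_tests)):
            hits += 1
    return hits / trials

print(simulated_fwe(0.0))  # independent tests: roughly 0.64
print(simulated_fwe(0.9))  # highly correlated tests: much lower
```

With rho near 1 the 20 tests behave almost like a single test and the chance of a fluke falls toward 5%, so the significance hurdle can be relaxed relative to the independent case; with rho near 0 the full multiple-testing penalty applies.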
Second, while 316 factors have been published in top journals there are many more factors published in lower tier journals and potentially thousands of factors that were tried by academics but never made it to publication. In addition, there are hundreds of practitioner researchers involved in data mining and they don’t even try to publish their factors. Hence, this reinforces the case that the hurdle for significance needs to increase.
Featured image credit: Business calculator calculation by edar. Public domain via Pixabay.
The post Separating investment facts from flukes appeared first on OUPblog.

Is an engineering mind-set linked to violent terrorism?
In a British Council report, Martin Rose argues that the way STEM subjects are taught reinforces the development of a mind-set receptive to violent extremism. Well-taught social sciences, on the other hand, are a potentially powerful intellectual defence against it. Whilst his primary focus was MENA (the Middle East and North Africa), he draws implications for education in the West.
The process by which young people are radicalised is very complex and poorly understood. As Scott Atran has said, the “first step to combating Isis is to understand it. We have yet to do so … What inspires the most uncompromisingly lethal actors in the world today is not so much the Qur’an or religious teachings. It’s a thrilling cause that promises glory and esteem … Youth needs values and dreams.”
Rose notes that engineering, medicine and other technical subjects are regarded as superior education in many MENA countries. These subjects may attract people with mind-sets that like simple solutions and little ambiguity, nuance or debate. Rose calls this an ‘engineering mind-set’. He says that in these courses there is a tendency to concentrate on rote learning and exam-passing with little or no questioning. Those mind-sets may then be reinforced by the way the subjects are taught. Rose emphasises that young people need to be taught how to think, to immunise their minds against ideologies that seek to teach them what to think. In other words, they need to be encouraged to think critically, as in the social sciences.
There are two main points I want to highlight here. First, as Rose states, the sparse data that we have indicates that there are a disproportionate number of STEM students and graduates recruited into Jihadist terrorism. That needs to be explained.
At least part, but only part, of the complex answer may rest on the second point. How do Rose and others characterise an engineering mind-set, and how does it relate to the way engineers actually think? For sure, the way Rose describes it needs unpacking. Rose quotes Diego Gambetta in 2007 (who in turn quotes the work of Seymour Lipset and Earl Raab, who wrote on right-wing and Islamic extremism in 1971). He says this mind-set has three components: a) ‘monism’ – the idea that there exists one best solution to all problems; b) ‘simplism’ – the idea that if only people were rational, remedies would be simple, with single causes and no ambiguity; c) ‘preservatism’ – an underlying craving for a lost order of privileges and authority as a backlash against deprivation in a period of sharp social change – in jihadist ideology, the theme of returning to the order of the prophet’s early community.
Rose’s characterisation, like many attempts to capture something complex and protean, contains some truth – but it is far from adequate.

A good start at an analysis would be the report by the Royal Academy of Engineering, ‘Thinking like an engineer’. One could easily counter Rose’s three components, and be nearer the mark, by using the trio pluralism, complexity, and sustainability. I believe that we should perhaps look for an explanation by examining the huge gap that has existed (and still exists, though reduced) between engineering science and practice. For example, theoretical engineering mechanics rests on the certainty of deterministic physics from Newton to Einstein, with its consequent time-invariant dynamics. Determinism means that all events have sufficient causes – literally that the past decides the future. Einstein is reputed to have said that time is an illusion. No practitioner takes these interpretations of certainty seriously – but she uses them as a model to make decisions, because they are the best we have and they work. But there is one big and important proviso – they work in a context that must be understood. The Nobel Prize winner Ilya Prigogine has shown that evolutionary thermodynamics rests on complex processes far from equilibrium. Contrary to dynamics theories, the laws of thermodynamics show that time is an arrow going only in one direction.
Quantum physics has blown away all pretence at certainty. Practitioners intuitively know that their theories are human constructs – imperfect models built to provide us with meaning and to guide our ways of behaving. They use them to help make safe and functional decisions, to provide systems of artefacts that are fit for purpose as set out in a specification. They use them, but they know there are risks. There is no certainty in engineering practice (as some seem to believe there is) – witness the few tragic engineering failures (like Chernobyl). Risk and uncertainty are managed by safe, dependable practice – always testing, always checking – taking a professional duty of care. Were it not for the creativity, dedication and ingenuity of engineers, such disasters would be more frequent. Just think of the amazing complexity of building and maintaining the International Space Station – truly inspirational, and built by engineers. So yes, there is a paradox. Deterministic theory points to single solutions, to black-and-white answers – but in practice we use it only as a model – a human construct that has enabled us to achieve some incredible things.
Engineers use a plurality of methodologies and solutions. Anyone who has designed and made anything knows that there are multiple solutions. Any attempt at optimising a solution will only work in a context and may be dangerously vulnerable outside of that context. Even a simple hinged pendulum behaves in a complex, chaotic way: bifurcations in its trajectory make its actual performance very sensitive to initial conditions. All successful practitioners know that people make decisions for a variety of reasons – some rational, some not so rational. Only in theory does simplism apply. The financial crash of 2008 put paid to simplism in economic theory. Lastly, yes, engineers do like order. Like life itself, they create negentropy. They impose order on nature, but do it to improve the human condition. The modern challenge is to do it more sustainably and to create resilience in the face of climate change.
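That sensitivity to initial conditions can be demonstrated in a few lines. Here is a sketch using the logistic map x → 4x(1 − x), a textbook chaotic system standing in for the pendulum; the starting values are illustrative:

```python
def logistic_trajectory(x0: float, steps: int) -> list:
    """Iterate the chaotic logistic map x -> 4x(1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-9, 60)  # perturbed by one part in a billion

# The billionth-part perturbation grows until the two trajectories
# bear no resemblance to one another.
max_gap = max(abs(x - y) for x, y in zip(a, b))
print(max_gap)
```

No amount of extra measurement precision buys long-range predictability here, which is exactly why practitioners treat deterministic models as context-bound tools rather than oracles.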
So these are the ideas that engineering educators work to. Most engineering courses include design projects. Students learn about understanding a need, turning it into a specification and delivering a reality. To do so they must think creatively, consider the needs of multiple stakeholders, and think critically, exercising judgement to determine the criteria by which choices are made. Many undergraduate engineering courses (but admittedly not all) now include ethics. In practice, the products of the work of engineers are continually tested by use. If engineers didn’t think creatively and critically about such use, they would soon be out of a job.
In summary, to characterise the engineering mind-set as one that thinks problems have single solutions devoid of ambiguity and uncertainty is derogatory and disparaging of our ingenious engineers. It is quite wrong to characterise engineering education as something that does not teach students how to think, though of course engineering educators are constantly striving to do better. Any claim that an engineering mind-set is linked to violent terrorism needs to be examined with great care.
Featured image credit: Engineering, by wolter_tom. Public domain via Pixabay.
The post Is an engineering mind-set linked to violent terrorism? appeared first on OUPblog.

January 7, 2016
The paradox of jobless innovation
The United States faces a paradox: being on the cutting edge of technology seems, in recent years, to have had only a marginal effect on job creation. The history books and our traditional economic theories seem to have failed us – whereas technological revolutions once led to tremendous growth in both GNP and employment, now, on the eve of some of the most impressive innovations we’ve ever seen, the economy and employment have been recovering from the 2008 “Great Recession” at the slowest rate since the Depression.
Puzzling, indeed. The changing relation between technological advance and economic growth is not a new story, nor is it entirely mysterious – esteemed economists have given their own readings of it – but there’s almost an irony in how elusive a full explanation has been.
Think of the problem like this: each step forward in technology, theoretical and physical, implies investment and social benefit. These have usually led to the growth of the economy, because of the new opportunities for everyone, and to job growth for workers involved in new technologies and the value chains they create. The US still spends more money on R&D than any other nation, even if the ratio of this spending to GDP is declining. But where’s the money going?
Some recent explanations blame growing income inequality, either explicitly or implicitly. Essentially, some economists argue growth has slowed because greater wealth is less useful if it’s concentrated, and technological advance facilitates concentration. Productivity gains mean automation, which can mean less employment in the short term; similarly, these gains mean an increase in the rates of return to capital, which accelerates wealth concentration when rates of return outpace economic growth. Finally, an ever greater skill premium in the United States widens the gap between the upper and lower middle class.
Something still seems unsatisfying about these answers: dazzling inventions, our gut tells us, should still lead to dazzling growth. The United States faces difficult social problems, and the 2008 financial crisis is still recent memory, but the limping growth seems to demand greater explanation. Not to mention that all these together would seem to imply not a job disappearance, but a job shift, from which greater productivity should mean an increase in real wealth and in which new industries, for all the fear of automation, actually create new job categories with higher skill premiums.
This is what three centuries of industrial history have told us. It’s tempting to argue that this revolution has been different, but this assumption seems unlikely on its face. Certainly, IT advance has rendered many jobs obsolete, but, considering the burgeoning IT sector, this revolution seems almost gentle in its job destruction compared to previous revolutions in agriculture and factory automation.
There’s an intuitive answer that makes our paradox seem banal. It supplements the other theories, while providing an important new perspective; it’s built on one of the traditional theories of growth slowdown, but adds an important second step that more fully explains why the phenomenon does not seem to have stopped: the short version is that it’s a subtle effect of technological change and globalization – more subtle than the simple offshoring of low-skilled, low-productivity jobs.
First, the US production economy is being hollowed out. Economists have typically seen this as the US embracing its comparative advantage. The United States’ cultural fixation, if not its actual policy focus, on cutting-edge tech is built around the idea that the country can shed less technical jobs in favor of more skilled ones. But this should again suggest a trend toward higher skill, greater productivity, and growth in high-tech employment.
But the second step of globalization suggests something more insidious. The United States is losing a tug of war on the value-added chain. The US is not simply losing jobs, it is losing increasingly sophisticated jobs, not only in manufacturing but also in process development and design, which are important and often neglected areas of innovation. Our lost expertise and equipment are percolating upward. Separating innovation and production is a flawed strategy because the two are symbiotic. By creating a fault line between the two, and distributing our production, we’ve begun the process of eroding our advantage in high tech.
There’s a scary possibility ahead of us. We assumed that the problem was that we were innovating here and producing there, but we may very well be innovating there and producing there.
Featured Image Credit: “Atmospheric Technology” by Savannah River Site CC BY 2.0 via Flickr
The post The paradox of jobless innovation appeared first on OUPblog.

Learning from music education – Episode 30 – The Oxford Comment
More than ever before, educators around the world are employing innovative methods to nurture growth, creativity, and intelligence in the classroom. Even so, finding groundbreaking ways to get through to students can be an uphill battle, particularly for students with special needs. It’s no wonder that success stories—every educator’s dream—might seem few and far between.
In this month’s episode, host Sara Levine chats with Alice Hammel, co-editor of Winding It Back: Teaching to Individual Differences in Music Classroom and Ensemble Settings, Taylor Walkup and Adam Goldberg, two music educators in New York City, and Karmen Ross and Hyacinth Heron Haughton, the parents of special needs students. Together, they engage in fascinating conversation about the role of music in special needs education, illustrating its challenges and—most importantly—its rewards.
Image Credit: “2nd Grade Music Program” by Bill and Vicki T. CC BY 2.0 via Flickr.
The post Learning from music education – Episode 30 – The Oxford Comment appeared first on OUPblog.


Those four new elements
The recent announcement of the official ratification of four super-heavy elements, with atomic numbers 113, 115, 117, and 118, has taken the world of science news by storm. It seems like there is an insatiable appetite for new information about the elements and the periodic table within the scientific world and among the general public. Maybe it’s all about nostalgia and the fact that the periodic table is like an old friend to anybody who has ever taken any chemistry class whatsoever? Or maybe it has to do with the meme-like status that the elements, their symbols, and their table have achieved in recent years. The elements are everywhere.
One of the most repeated statements about the latest four elements, none of which have names yet, is that their discovery has brought about the completion of the seventh period of the periodic table. But this is not necessarily the case. In recent years a perfectly respectable alternative periodic table, first proposed in the 1930s, has been making a comeback. This is the left-step periodic table proposed by the French polymath Charles Janet, which is actively supported by many periodic table experts. This table is obtained by making two small changes to the conventional table. First, the element helium is moved to the top of the alkaline-earth elements, on the basis of its two electrons, just as the alkaline earth elements have two outer-shell electrons. Second, the entire two-column s-block is chopped off the left side of the table and transported to the right edge. This very elegantly shaped left-step table shows greater regularity than the conventional table by featuring the repetition of every single period length, including the first very short period of two elements.
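As a rough sketch of where those doubled period lengths come from (the function names here are mine, not from the article), each left-step row collects the orbitals sharing a value of n + l, with the s-block closing the row on the right:

```python
# Sketch: period lengths of Charles Janet's left-step periodic table.
# Each left-step row gathers the orbitals that share a value of n + l
# (the Madelung ordering); an orbital with quantum number l holds
# 2 * (2l + 1) electrons.

def left_step_period_length(k):
    """Number of elements in the left-step period where n + l = k."""
    # allowed orbitals: l = 0 .. k-1, subject to l <= n - 1, i.e. l < k - l
    return sum(2 * (2 * l + 1) for l in range(k) if l < k - l)

lengths = [left_step_period_length(k) for k in range(1, 9)]
print(lengths)   # every period length appears twice, starting with 2, 2

# Running totals show the atomic number at which each row closes:
totals, z = [], 0
for n in lengths:
    z += n
    totals.append(z)
print(totals)    # the eighth left-step row closes at element 120
```

The lengths come out as 2, 2, 8, 8, 18, 18, 32, 32, and the running totals end at 120, which is why, on the left-step layout, the row containing elements 113 to 118 is not yet full.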

But to return to the new elements, the left-step table shows that the seventh period is not in fact complete even after the inclusion of elements 113, 115, 117, and 118. Only when elements 119 and 120 are created will this period really be complete. Of course, most of the news articles have acknowledged that the ‘completion’ of the seventh period does not preclude the discovery of elements even heavier than element 118. The question is: where does it all end? The predictions depend on just how one conducts the calculation. According to a simple argument that relies on Einstein’s theory of relativity, the highest atomic number that can possibly be produced is 137. Alternatively, if one takes into account the finite size of the nucleus of the atom, the answer is either 172 or 173.
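The simple relativistic argument can be sketched in a few lines (the function name is mine): in the Bohr picture the innermost 1s electron moves at a fraction Z&times;&alpha; of the speed of light, where &alpha; is the fine-structure constant, so the model breaks down once Z exceeds 1/&alpha; &asymp; 137.

```python
# Sketch of the simple relativistic limit on atomic number.
# In the Bohr model a 1s electron moves at v = Z * alpha * c, so v would
# exceed the speed of light once Z passes 1/alpha (about 137).

ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

def one_s_speed_fraction(z):
    """Bohr-model speed of a 1s electron as a fraction of c."""
    return z * ALPHA

for z in (118, 137, 138):
    print(z, round(one_s_speed_fraction(z), 4))
# Element 137 is the last for which v/c stays below 1; treating the
# nucleus as finite-sized pushes the limit up to 172 or 173 instead.
```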
But there are many other interesting questions connected with the periodic table that were not even mentioned in the recent flurry of news articles. For example, in addition to asking about elements beyond number 118, one might ask about the possibility of elements lying within the current range of elements. This question is not as far-fetched as it might seem. There are many serious academic studies that have considered the possibility of quarkonium matter. Since a proton consists of three quarks, it is not inconceivable that elements with atomic numbers increasing in units of one third, rather than by integral values, might actually exist.
But leaving aside such exotic species there is a great deal of debate concerning some very stable light elements such as hydrogen and helium. It has long been recognized that the element hydrogen could be placed at the head of the halogen group containing fluorine, chlorine, bromine, iodine and astatine. There is no clear-cut criterion that demands that it should be kept in group 1 of the periodic table. The case of helium has already been alluded to. In the left-step table this element is no longer regarded as a noble gas but should be placed into group 2.
Finally there is an ongoing debate concerning the membership of group 3 of the periodic table. Some periodic tables show the group as containing scandium, yttrium, lanthanum, and actinium while many others feature scandium, yttrium, lutetium, and lawrencium. Which version is more correct or indeed does it make sense to seek one objective correct version of the periodic table? At the same time that the International Union of Pure and Applied Chemistry (IUPAC) announced the ratification of the four new elements, they also approved the formation of a task force, led by yours truly, to discuss and make recommendations on the constitution of group 3 of the periodic table. Clearly the elements and the periodic table will continue to be in the news for some time to come.
Featured image: Elements Background. (c) Eyematrix via iStock.
The post Those four new elements appeared first on OUPblog.

An interview with oboist Heather Calow
This month we’re spotlighting the unique and beautiful oboe. We asked Heather Calow, lifelong oboe player and now an oboe teacher based in Leicester, UK, what first drew her to the instrument.
When did you first start learning to play the oboe?
It was just after my eighth birthday. I wanted to start earlier than that but I had to wait for my arms to be long enough to hold up the oboe!
What first made you choose the oboe?
My whole family is very musical so I was exposed to a large variety of orchestral instruments from a young age. My brother and I were encouraged to pick the more unusual instruments to play, and one day I watched a video of someone performing an oboe solo in Shostakovich’s seventh symphony and I fell in love with the sound. The woman playing the solo ended up being my teacher so I had the best start.
Do you play any other instruments? If not, which would you love to learn?
The two instruments I play the most apart from the oboe are the cor anglais and the piano. The cor anglais is the big sister of the oboe and most oboists learn to play it alongside the oboe as there are many beautiful orchestral parts written for it. It has a richer and more mellow sound and is somewhat easier to play. I would also love to play the harp, though; I always think harpists are the most elegant-looking members of the orchestra.
Do you play in any ensembles, bands, or orchestras?
I play in many ensembles based in and around Leicestershire. I am a member of the Bardi Wind Orchestra, the chamber woodwind group Musicamici, and occasionally the Bardi Symphony Orchestra, as well as many others.
What has been your most memorable concert and why?
That’s such a difficult question to answer! I think my first orchestral concert, where we played Holst’s The Planets, was a real highlight – I’d never played a piece that complex or exciting before. Another favourite was a performance of Beethoven’s Wind Octet in E flat with the Bath Spa University Chamber Wind Ensemble. It was entirely different to the previous concert, just a small audience in a little church, but it was still really exciting. Playing chamber music where there are just a few performers is a real challenge because everyone is so exposed, but it is very rewarding and I love the rich sound of woodwind instruments.

How do you prepare for a performance or concert?
The main thing I do is to make sure that all my reeds are working and select the right ones for the piece I’m playing. Oboe reeds can be very temperamental: the slightest knock or even just a little change in the weather can make the difference between sounding nice and sounding a bit like a duck! Other than that I try not to be alone too much, otherwise I tend to overthink and make myself nervous.
What is your favourite piece to perform?
If I’m playing in an ensemble or orchestra, I love playing loud and thrilling pieces, such as Shostakovich’s fifth symphony. If it’s a solo oboe piece, I prefer slower, more mellow pieces. I love the Mozart oboe concerto in C major, which is the epitome of classical oboe music. If I ever got the chance it would be my dream to perform Ralph Vaughan Williams’s Oboe concerto, because it shows off the beautiful haunting tone of the oboe to perfection.
Do you teach your instrument?
I taught a few beginners when I was 16, then had a break whilst I was at university in Bath. I now teach a few pupils, but unfortunately, given the instrument’s rarity, there aren’t many people out there wanting to learn.
What do you enjoy the most about your instrument?
Definitely the amazing opportunities it provides. There are a relatively small number of oboe players, so I have had the chance to do lots of great performances. I also love that the oboe has some of the most beautiful music written for it, both in ensembles and in the solo repertoire. The sound and range of the oboe are the most similar to those of the human voice, so I often get to play the most romantic music.
Do you get nervous before a performance?
Yes and no. If I’m performing on my own, I can get very nervous so I have a range of breathing exercises to calm me down. If I’m performing in an orchestra or a group however, I’m usually just really excited.
What is the most challenging thing about playing the oboe?
I think it’s the stamina you need to play it. It takes quite a lot of effort just to produce a sound, and blowing all your air through a small reed at the top can create a lot of pressure in your head. You also use a lot of muscles you never knew you had! I find producing a good sound quite tricky too; it takes years to perfect and I’m definitely not there yet.
What advice would you give to someone starting to learn the oboe?
Don’t give up! There are lots of struggles when you’re first learning but it’s so worth it. You’ll get so many great opportunities and you’ll make lifelong friends along the way.
A modern oboe with a reed (Lorée, Paris) by Hustvedt, CC BY-SA 3.0 via Wikimedia Commons
The post An interview with oboist Heather Calow appeared first on OUPblog.
