Oxford University Press's Blog
November 15, 2019
To-Day and To-Morrow; the rediscovered series that shows how to imagine the future
Almost a century ago a young geneticist, J. B. S. Haldane, made a series of startling predictions in a little book called Daedalus; or, Science and the Future. Genetic modification. Wind power. The gestation of children in artificial wombs, which he called “ectogenesis.” Haldane’s ingenious book did so well that the publishers, Kegan Paul, based a whole series on the idea. They called it To-Day and To-Morrow, and between 1923 and 1931 published over 100 volumes, by rising stars like Haldane, and leading thinkers like Bertrand Russell, who answered Daedalus with a much gloomier warning about the future of science, called Icarus.
The books were highly diverse. Some covered technological and social subjects – aviation, wireless, automation, politics, the state, the family and sexuality. Others focused on culture and everyday life – theatre, cinema, the press, language, clothes, food, drink, leisure, and sleep.
They got people talking. Aimed at educated readers rather than a mass market, they still had a profound impact. Winston Churchill read Daedalus and immediately wrote an essay titled ‘Shall we all Commit Suicide?’ Haldane’s friend Aldous Huxley also read it, and in Brave New World imagined a society in which ectogenesis was combined with mass production.
A publishing sensation until the Depression hit, the series attracted leading writers – Vernon Lee, Robert Graves, Vera Brittain, the scientist J. D. Bernal, Hugh MacDiarmid, critic Bonamy Dobrée, philosopher C. E. M. Joad, novelist and biographer André Maurois – and many more. Other major modernist authors knew them. Joyce read twelve of the books. T. S. Eliot reviewed some, saying: “we are able to peer into the future by means of that brilliant series of little books called To-day and To-morrow.” Virginia Woolf’s husband Leonard reviewed at least eight. Evelyn Waugh tried to write one, called Noah, or the Future of Intoxication, but it was rejected.
There were some duds. Eliot was right to be unimpressed by Pomona, Basil de Sélincourt’s waffly book on the future of the English language. He would have thought less of R. C. Trevelyan’s Thamyris; or, Is there a Future for Poetry? (He admired John Rodker’s understanding of modernism in The Future of Futurism.) Some are unsurprising. Some are mad. Tank strategist J. F. C. Fuller, in Pegasus, which sounds like it should be about flight, suggested the world would be a better place if half-track vehicles were careering about off-road, tearing up the landscape. There is some chilling eugenics, though the series also contains some of the period’s most cogent critiques of eugenics’ scientific claims. Other writers betray assumptions about class, gender, and race that make us wince now.
But, overall, this is brilliant, exuberant writing, rich with bracing ideas which can make subjects we thought we knew well look fresh and different. So why are these books not well-known?
Perhaps it’s because they elude the stereotypes of literary history. They don’t conform to the image of a generation traumatised by the war they had just been through. Here, even the writers best known for their iconic works about such trauma, Graves and Brittain, are enjoying looking ahead; being playful and witty while simultaneously writing Goodbye to All That and Testament of Youth.
They seem equally hard to square with traditional ideas of modernism. They are (of course) future-oriented, whereas modernists are usually perceived as setting a degraded present against a classical past. To-Day and To-Morrow’s Latin or Greek titles were perhaps a bit of camouflage, or at least a provocation.
In terms of genre they are hard to place too. Hybrid, shapeshifting from essay to prediction to science fiction to future history to satire and parody, they don’t conform to more familiar modernist genres of story, novel, or poem.
Most of the writers were socially progressive: advocating women’s rights, sexual liberation, experimental redesign of relationships and families, socialism, internationalism, and anti-imperialism. As such, they were anathema to Tory modernists.
The series illuminates the inter-war period. But it can do much more. It can tell us about our time too, about how we think about the future – or don’t. One of the most striking things about reading it now is how different its visions are from today’s tomorrow. Contemporary futurology is mostly unremittingly bleak, dystopian. Certainly we should worry about climate catastrophe, antibiotic resistance, mass extinctions. But such future anxiety inhibits constructive thought about how best to avoid disaster and remake society.
To-Day and To-Morrow’s writers wanted prediction to be more scientific. The period they helped usher in transformed its methodologies. Nowadays prediction comes from multi-disciplinary teams and think tanks. Methods of scenario planning, horizon scanning, and forecasting, and above all the use of machine learning to analyse data patterns, have vastly increased the accuracy of short-range predictions, from weather to epidemics.
Yet the individual visionaries represented in the series have not been superseded. The long-form essay allowed them greater scope than speeches or journalism. They could elaborate their visions, take them further than more systematic methods can reach. Big data, after all, is past data. It can tell us what’s trending, and we can extrapolate those trends. Group-based futurology irons out idiosyncrasies. But it’s the individual imagination that, sometimes, can make the quantum jumps that bring the genuinely new into being. That’s what we need more of now – not to delude ourselves that we can know the future with any certainty, but to imagine futures that are worth working towards.
Featured image credit: “Skyscrapers in China” by Maksim Samsonov. CC0 via Unsplash.

November 14, 2019
Thomas Kuhn and the paradigm shift – Philosopher of the Month
Thomas S. Kuhn (1922–1996) was an American historian and philosopher of science best-known for his book, The Structure of Scientific Revolutions (1962), which influenced social sciences and theories of knowledge. He is widely considered one of the most influential philosophers of the twentieth century.
Kuhn was born in Cincinnati, Ohio, the son of Samuel Lewis Kuhn, an industrial engineer, and Minette Stroock Kuhn. He obtained his Bachelor of Science, Master of Science, and PhD in physics from Harvard University. While completing his PhD, he worked as a teaching assistant for Harvard President James B. Conant, who designed and taught the general education history of science courses. This experience allowed Kuhn to switch from physics to the study of the history and philosophy of science. From 1948 until 1956, Kuhn taught a course in the history of science at Harvard. Subsequently he taught at the University of California at Berkeley, then at Princeton University, and finally at MIT (Massachusetts Institute of Technology), where from 1982 until the end of his academic career in 1991 he was the Laurance S. Rockefeller Professor of Philosophy and History of Science.
In The Structure of Scientific Revolutions Kuhn challenged the prevailing philosophical views of the logical empiricists about the development of scientific knowledge and introduced the notion of the scientific paradigm. He argued that science does not progress in a linear and consistent fashion via an accumulation of knowledge, but proceeds within a scientific paradigm – a set of fundamental theoretical assumptions that guides the direction of inquiry, determines the standard of truth and defines a scientific discipline at any particular period of time. He used the term “normal science” to describe scientific research that operates in accordance with the dominant paradigm.
Kuhn believed that normal science can be interrupted by periods of revolutionary science, when old scientific theory and method fail to address problems or explain new phenomena, or when anomalies occur that undermine the existing theory. If the failure is perceived as serious and persistent, a crisis can arise, culminating in revolutionary changes of theory. A paradigm shift occurs when the scientific community adopts the new paradigm, which leads to the beginning of a new period of normal science. Kuhn also maintained that the new and old paradigms were ‘incommensurable’ and thus could not be compared. Well-known examples of paradigm shifts are the change from classical mechanics to relativistic mechanics, and the shift from classical statistics to big data analytics.
The Structure of Scientific Revolutions became an influential and widely read book of the 1960s and sold more than a million copies. It had a profound impact on the history and philosophy of science (and also brought the term “paradigm shift” into common use). It was also controversial since Kuhn challenged the accepted theories of science of the time.
Kuhn’s other important works include his first book, The Copernican Revolution (1957), The Essential Tension: Selected Studies in Scientific Tradition and Change (1977), and Black-Body Theory and the Quantum Discontinuity: 1894–1912 (1978).
Featured Image credit: “Pink and purple plasma ball” by Hal Gatewood via Unsplash.

November 13, 2019
“To lie doggo,” an idiom few people seem to know
Last week (November 6, 2019), in passing, I mentioned my idea of the origin of the word dog and did not mean to return to this subject, but John Cowan suggested that I consider an alternative etymology (dog as a color word). I have been aware of it for a long time, but why is my idea worse? It may even carry more conviction, because I offered a hypothesis that takes care not only of dog but also of bug and a few other similar-sounding animal names: (ear)wig, frog, and stag. Finally, why should hog be Scandinavian any more than Celtic or Common North European? The etymology of hog “castrated animal” from Icelandic höggva, related to German hauen and Engl. hew, reminds me of a fanciful derivation of the phrase to go the whole hog, allegedly from “the whole blow.” The Scandinavian words for “hog” bear no resemblance to the English one. Since I have nothing to add to my series of posts on dog (Spring 2016), for the time being, I will let the many beasts mentioned there sleep in peace. Anyway, all our etymologies are arrows shot into the air and may not hit even an earwig.


I forget where I came across the phrase to lie doggo, but, strangely, I have known it most of my life. Perhaps British speakers still understand it. In American English, it has no currency. Although idioms tend to be local, one should beware of broad generalizations. Agatha Christie’s Hercule Poirot often finds himself in a brown study, that is, in a state of deep (and usually gloomy) meditation. This phrase falls on deaf ears whenever I use it, but it does occur in some late American novels, and, strangely, Huck Finn knew it. Likewise, my advice to students not to lean on a broken reed arouses nothing but wonder and suppressed merriment. Perhaps this biblical idiom is hopelessly obsolete, or perhaps it never had any currency in American English.
The same holds for the idiom to lie doggo. Michael Quinion (World Wide Words) found several old examples of lie doggoh (sic) antedating the earliest (1882) citation in the OED. The etymology of this odd phrase remains unknown, and here I can perhaps be of some help. In 2019, I published an article dealing with lie doggo. Regrettably, when I was writing it, I did not know James Murray’s 1896 one-paragraph suggestion on doggo in Notes and Queries (I wonder how it escaped my fine-toothed comb!) or Michael Quinion’s essay on the Internet. Dictionaries of course cite the phrase in question and try to explain it by referring to the behavior of dogs. The origin of -o in doggo is usually taken to be the familiar suffix, as in kiddo, typo, weirdo, and the rest (no one has discussed -oh, for no one has been aware of the form doggoh in this idiom).

I owe my idea to chance. While reading some story or book in Icelandic, I came across the phrase sitja upp við dogg. It means “to sit or half-lie, supporting oneself with elbows.” Sitja upp corresponds to Engl. sit up. Við, a cognate of with (ð = th in Engl. this), means “against” (such is the oldest meaning of this preposition: compare not only German wider but also Engl. with in withhold and withdraw). However, dogg (the accusative of doggur), a word known in texts since the eighteenth century, has nothing to do with dogs. It means “a vertical cylindrical object.”
Engl. doggo has no independent existence outside a recent meme (which cannot interest us here) and the phrase under consideration. By contrast, in Icelandic, the verbs meaning “rise,” “lie,” and “hold oneself” alternate with “sit up” before við dogg. The origin of Icel. doggur is unknown, except that the root dogg- occurs in words meaning “to do something mechanically, without giving thought to the matter; persistent; weak, feeble, depressed.” The most probable Norwegian (dialectal) cognate of doggur means “boathook” (not a cylinder but also an implement). A broader look at the relevant words yields round objects, round sticks, a windlass, and possibly dolls. The vague unifying feature of them all seems to be roundness. Such is, in my opinion, also the most ancient semantic feature of Engl. dog, presumably a baby word for “toy,” “doll,” and “pet animal” (“pup”).

From the fifteenth century, Engl. dog began to be used for mechanical devices having or consisting of a tooth or claw, used for gripping or holding (so explained in the OED). Some such devices are also called cat. I would like to suggest that the Icelandic phrases with dogg (the accusative of doggur) refer to people’s various positions in front of some implements: they lie, sit up, rise, etc. before windlasses, poles, sticks, and cylinders. In any case, dogg- could not be an animal name, because in all the Scandinavian languages the word for “dog” is hund- (a cognate of Engl. hound; in English, this ancient noun was ousted by the newcomer dog).
The rest is murky. I suspect that phrases like the one we know from Icelandic also had some currency on the continent, perhaps including the Dutch-speaking area and northern Germany. One of them seems to have reached English, and to lie doggo, presumably borrowed from the language of itinerant artisans, acquired the meaning “to stay put.” The missing link has not been found (that is why it is called missing!). Nor do we know whether the Icelandic idiom is native (if my reconstruction has any value, it reached Iceland from the continent). I referred to the lingua franca of professional handymen in the post on ajar (August 22, 2012) and in the entry adz(e) in my 2008 book An Analytic Dictionary of English Etymology. With regard to dog as the name of an implement, see the post “It rains cats and dogs” (March 21, 2007).
Finally, what is -o in doggo? James Murray wondered whether this vowel might not be added in imitation of the Latin ablative, to express the idea “to lie the way dogs do.” This is a clever guess, but here we may probably do without Latin. Several possibilities exist. Perhaps the phrase came to England from some Dutch, German, or Scandinavian dialect in approximately the form known to us, with dogga or doggu at the end, and was transformed into doggo. The spelling doggoh makes the suggestion of the humorous ablative unlikely. Not quite improbable is the suggestion that dogga or doggu lost its ending in English and turned into lie dog, after which the slangy suffix -o was added to the noun, to make it sound like other words with -o. The recorded English forms are so late that their history may be called lost. When to lie doggo had established itself, folk etymology associated doggo with the animal name, and people began to invent explanations of what dogs have to do with the idiom. Apparently, they barked up the wrong tree.
And now a last blow to the hapless porker. Charles E. Funk, the author of several books on English idioms, researched the origin of the American phrase (as) independent as a hog on ice and came to a conclusion close to the one given in The Century Dictionary. Hog, it appears, refers to an implement used in a game played on ice. This explanation looks reasonable, while all references to the animal make little or no sense. In an indirect way, the story of a hog on ice throws additional light on the origin of the puzzling English idiom to lie doggo. Language historians often boost their conclusions by recourse to analogy. Another broken reed? Not really.
Feature image credit: Free-Photos from Pixabay.

Seven events that shaped country music
Developed from European and African-American roots, country music has shaped American culture while it has been shaped itself by key events that have transformed it, leading to new musical styles performed by innovative artists.
1. 1927, Bristol, Tennessee: country music’s “Big Bang”
In late July of 1927, New York producer Ralph Peer arrived in Bristol, Tennessee, to find new country artists to record. Among the many acts he discovered were a young ex-railroad brakeman and guitarist named Jimmie Rodgers, and a tree salesman named A.P. Carter, who travelled with his wife Sara and sister-in-law Maybelle from rural Virginia for the opportunity to record. The presence in one place of these two seminal acts—representing two important country styles—has been called country music’s “big bang.” Rodgers’ blues-influenced singing embodied one strand of the country sound, drawing on traditional African-American music, while the Carters represented the other, the older Anglo-American traditions.
2. 1934, Tulsa, Oklahoma: Bob Wills launches the Texas Playboys
In the 1930s, country musicians incorporated pop instruments like the accordion, the electric steel guitar, and even bass and drums into their performances. The new style wed elements of pop, jazz, and old-time fiddle music and became known as Western Swing. Vocalist/fiddler Bob Wills was the best known of the Western Swing bandleaders. Wills’s band had two distinctive elements: the newly introduced electric steel guitar and Tommy Duncan’s smooth singing. By the end of the decade, the group had grown to include a large brass section, rivaling the popular big bands of the day in size and sound.
3. 1952, Nashville, Tennessee: Kitty Wells records “It Wasn’t God Who Made Honky Tonk Angels”
After World War II, honky tonks were gathering places where men could come after work to enjoy a few beers and listen to music. Performers adopted electrified instruments to be heard over the considerable din. Songs about drifting husbands, enticed into sin by the “loose women” who gathered in bars, and the subsequent lyin’, cheatin’, and heartbreak created by their “foolin’ around,” became standard honky-tonk fare. While many honky tonk performers were men, there were also female singers who rose to the challenge. Kitty Wells’s “It Wasn’t God Who Made Honky Tonk Angels” asserted that men had to share the blame for the fallen women who frequented these rough-and-tumble backwoods bars. The song shot up the country charts, establishing Wells’s popularity and paving the way for other women to be country performers.
4. 1968, Nashville, Tennessee: Tammy Wynette records “Stand by Your Man”
In the 1960s, as American society was undergoing great changes, country music became increasingly conservative. The women’s liberation movement was particularly disturbing to the country audience, which was dominated by white, working-class men. Recognizing that women’s role in family life was changing, producer Billy Sherrill encouraged Tammy Wynette to record “Stand by Your Man”—which, like Merle Haggard’s “Okie from Muskogee,” reflected Nashville’s discomfort with a changing world. Wynette’s song would resonate years later when Hillary Clinton used it derisively to emphasize her independence from her husband.
5. 1971, Baltimore, Maryland: Gram Parsons meets Emmylou Harris
Country-rock pioneer Gram Parsons was looking for someone to sing backup for him on his first solo album when he heard Emmylou Harris for the first time. However, when he first saw her in a folk club, he wasn’t sure if she could cut it as a country singer. He tested her by asking her to sing the hardest duet he could think of, George Jones and Gene Pitney’s “That’s All It Took.” “She sang it like a bird,” Parsons recalled, “and I said, ‘Well, that’s it.’” After Parsons’s death, Harris became a champion of country-rock. Through the 1970s and 1980s, Harris employed many musicians who would later become well-known on their own, including Rodney Crowell, Ricky Skaggs, and Vince Gill. Despite some occasional returns to a more rock-oriented style, Harris has continued to be an icon in country circles, inspiring countless other female performers.
6. 1973, Dripping Springs, Texas: Willie Nelson’s first picnic
After struggling for a decade to establish himself in Nashville as a performer, successful songwriter Willie Nelson returned to his native Texas, where he knew he could make a living performing at the state’s honky tonks and fairs. To thank his fans, he threw the first of what became legendary picnics on the 4th of July weekend on a ranch in rural Dripping Springs, Texas. Besides Nelson, the lineup featured many other stars of the so-called outlaw country movement, songwriter/performers who brought a new sensibility to country music. The picnics themselves attracted a huge audience that combined Texas rednecks with young hippies, showing how country music could cross cultural lines. Although ultimately these huge gatherings became too unwieldy to continue, Nelson went on to become a huge star both on the country charts and in major Hollywood films.
7. 1997, Nashville, Tennessee: Shania Twain launches a new era for women in country music
A new era for country-pop crossover—and for women—was launched by Shania Twain in the later 1990s. Born in the small town of Timmins, Ontario, Twain began singing at the age of three, performing on Canadian television from her early teens. By her late teens, she was performing in a Vegas-style revue. However, after her parents were killed in a car accident, Twain decided to move to Nashville in search of a career. She was quickly signed to record, hitting it big with songs marked by a spunky forthrightness that appealed strongly to women, while their non-threatening messages made them attractive to men. She had her greatest success with her next album, Come On Over. Although a country album in name, it was really mainstream pop in the style of singers like Gloria Estefan or Celine Dion. The album sold over 18 million copies, becoming by Billboard’s estimation the best-selling recording by a female artist of all time, in any genre.
Truly the voice of the people, country music expresses both deep patriotism and a healthy skepticism toward the powers that dominate American society, and has long been a marker of American identity.
Featured Image Credit: Grand Ole Opry by Todd Van Hoosear. Public Domain via Flickr

November 12, 2019
How meningitis has (almost) been conquered
Scientific discovery is often a messy affair. It’s sometimes intentional, sometimes accidental, sometimes cluttered with error, and always complicated. The ultimate value of scientific observations may not be recognized for many years until the discovery emerges to shed new insight on old problems and become etched in the scientific canon. Such is the story of the conquest of meningitis, a devastating infection of the brain that is usually fatal if not treated.
The physician Richard Pfeiffer discovered one type of bacterium that causes meningitis in babies during the Russian flu pandemic in 1892. For two decades Pfeiffer convinced the world that he had found the cause of influenza. He was wrong, but ever since, even after viruses were recognized as the true cause of flu, Pfeiffer’s bacterium, Haemophilus influenzae, still carries in its name a residue of the early misperception.
Eventually antibiotics were discovered to treat bacterial infections. The first such drug used in America, Prontosil, was given in 1935 to treat, unfortunately without success, Katherine Woglom, a ten-year-old with meningitis. But this sulfa compound wasn’t the first antibiotic discovered. Before Prontosil was penicillin, identified accidentally in 1928 by the physician Alexander Fleming when he returned from vacation to find the bacteria on his agar plates contaminated with, and killed by, the mold Penicillium notatum. Fleming’s mold juice, penicillin, wasn’t the first antibiotic discovered, either. That honor fell to a substance emitted by Penicilium [sic] glaucum which, in 1874, contaminated the sterile vials in Dr. William Roberts’ experiments on bacterial spontaneous generation and killed the microbes. Roberts noted the phenomenon, with irritation, in a footnote to his scientific paper, and it remained hidden for decades.
In 1944 Oswald Avery and his colleagues discovered DNA as the carrier of inheritance. However, DNA’s activity was first described in 1927 by Frederick Griffith, a shy, hard-working British medical officer who studied the epidemiology of pneumonia in a dilapidated government lab, which also housed a post office. In the course of his work on Streptococcus pneumoniae, which also causes meningitis, he infected mice with live strains that had no capsule, along with dead strains that possessed a capsule. The bacteria he recovered from the mouse blood were live strains with a capsule. Something from the dead strains had transformed (his word) the live strains so that they then possessed a capsule. Griffith published this monumental observation in an obscure scientific paper and then moved on to other things. Sadly, before he saw how his work influenced genetics and the entire scientific enterprise, he and his colleague, McDonald Scott, were killed by a Nazi bomb in the 1941 London Blitz.
The mysteries of bacterial genes and their complicated ways fascinated Hamilton Smith, a brilliant physician who preferred lab work over patient care. When a new graduate student, Kent Wilcox, joined his lab, Smith directed Wilcox to transform H. influenzae strain Rd with phage P22 DNA. While control H. influenzae DNA moved into Rd with ease, the phage DNA just wouldn’t go. In trying to figure it out, they discovered restriction enzymes that digest foreign DNA and, thus, protect H. influenzae from being infected with phages. For this discovery, Smith received the 1978 Nobel Prize in Physiology or Medicine. Smith went on to explore how DNA got into H. influenzae cells and discovered that only DNA containing strings of nine specific nucleotides, called the signal sequence, could enter the bacteria. Smith then realized that the actual reason Wilcox couldn’t make phage P22 go into strain Rd was that the phage DNA lacked the signal sequence. Thus, its failure to enter strain Rd had nothing to do with restriction enzyme digestion. This new understanding did not devalue the importance of his Nobel Prize-winning work with the enzymes at all; it merely emphasized the complexity of scientific discovery.
Meningitis is a horrid disease and physicians had long been frustrated with its poor outcomes (deafness, blindness, intellectual disability, seizures, and sometimes death) and with their inability to prevent it. Two pediatricians, David Smith at Harvard University and John Robbins at the National Institutes of Health and their colleagues, the chemist Porter Anderson and the pediatrician/immunochemist Rachel Schneerson, respectively, set about to rejigger the H. influenzae capsule into a vaccine that would protect babies from meningitis. The two research groups independently found a way to chemically join poorly immunogenic capsules to highly immunogenic proteins, thus tricking the babies’ immune systems into making antibodies as if the capsule were a protein.
Important scientific discoveries build on one another, like blocks in a pyramid, to achieve even more important discoveries. As a result, we now have vaccines that have nearly eliminated bacterial meningitis in children living in resource-rich countries.
Featured image credit: “Laboratory at the Central Cancer Research Labs’” by the National Institutes of Health, part of the United States Department of Health and Human Services. Public domain via Wikimedia Commons.

November 9, 2019
Video surveillance footage shows how rare violence really is
Judging by the news, violence seems to be on the rise all around us. Most Americans think crime is going up, have a pessimistic outlook on the future, and feel increasingly unsafe. As a result, people accept more and more surveillance to protect themselves from violent and criminal behavior. Surveillance cameras are installed with the assumption that we need to fear each other and be vigilant of the potential predator next door. Mobile phones are increasingly used to capture violent events – the crazy brawl unfolding right in front of our eyes that will later go viral on YouTube or Facebook and make people think “oh my, it’s getting worse every day.” And in a vicious cycle, the more such footage we have, the more we see violence on the news: a large brawl in London, a brutal assault in Glasgow, a shooting in New Jersey, a mass murder in Dayton, violent unrest in Hong Kong or Paris. Again, such news confirms our sense that humanity is becoming crueler, more violent, more dangerous. Yet a systematic look at the same video data suggests quite the opposite. On many levels, we can, in fact, be optimistic. How so?
Today, videos from closed-circuit television, body cameras, police dash cameras, or mobile phones are increasingly used in the social sciences. For lack of other data, researchers previously relied on people’s often vague, partial, and biased recollections to understand how violence happened. Now, video footage shows researchers second-to-second how an event unfolded, who did what, was standing where, communicating with whom, and displaying which emotions, before violence broke out or a criminal event occurred. And while we would assume such footage highlights the cruel, brutal, savage nature of humanity, looking at violence up-close actually shows the opposite. Examining footage of violent situations – from the very cameras set up because we believe that violence lurks around every corner – suggests violence is rare and commonly occurs due to confusion and helplessness, rather than anger and hate.
Armed robberies are a case in point. We would assume robbers resort to violence if clerks fail to hand over what is in the register; after all, that is the fundamental proposition of the situation. Instead, video surveillance shows that robbers become afraid of the unexpected situation they are in and run away. It shows that criminals, like most people, rely on situational routines that offer familiarity and reassurance. In the surveillance footage of robberies I studied, clerks laughed at robbers’ assault rifles, and the robbers, rather than shooting or hitting their victims, were startled and gave up. When a robber showed slight gloominess, a clerk cheered him up, and the robber became even sadder, discussed his financial problems with the clerk, and left. If clerks treat robbers like children, surveillance footage shows how robbers may react according to this role, becoming hesitant and pleading to be taken seriously. This means that even in an armed robbery, where perpetrators are prepared and committed to the crime and clerks usually fear for their lives, robbers and clerks tend to make sense of the situation together, avoid violence, and fall into shared rhythms and routines.
We can see similar patterns when looking at video recordings of protest violence and violent uprisings. In some protest marches, certain groups attend with the clear goal of using violence; they mask up and come prepared with stones to throw at police. In other protests, police decide on a zero-tolerance strategy and plan to use force at the slightest misstep by activists. Despite such preparations for and willingness to use violent means, violence rarely actually breaks out, and people usually engage in peaceful interactions. If violence does erupt, we see that it does so not because people are violent or cruel, but because routine interactions break down, which leads to confusion, distress, uncertainty, anxiety, and fear, and ultimately to violent altercations.
Similarly, research on street fights and mass shootings shows that most people who have the will to fight and kill are actually bad at “doing” violence – as are the great majority of humans. Only very few people in very specific situations manage to use violence effectively, and it is those outliers that make it to the news. Contrary to common belief, rates of violence and crime in most Western countries have never been as low as they are today.
Such findings have implications: fear of people’s cruel nature and of violence lurking around every corner shapes everyday actions, drives voting behavior, and impacts policymaking through worst-case-scenario thinking. Fearing fellow humans as inherently violent and cruel not only lacks empirical grounding; research also shows it leads people to make bad decisions. Surveillance videos and recent research on violence challenge the notion that we need to fear each other. They counter the idea that we need elaborate protection from each other and constant state surveillance, which not only tends to cost public funds but also often curtails civil and human rights (e.g., privacy, free speech, free movement, the right of asylum). The optimistic outlook offered by scientific analyses of videos might mean we can spend our time more wisely; instead of fearing each other and investing time and resources to protect ourselves from exaggerated dangers, we could enjoy society and our remaining civil rights and freedoms a little more.
Featured image credit: camera wall by Lianhao Qu on Unsplash

Introducing the nominees for Place of the Year 2019
2019 has been a year of significant events – from political unrest to climate disasters worldwide. Some of the most scrutinized events of the past year are tied inextricably to the places where they occurred – political uprisings driven by the residents of a city with an uneasy history, or multiple deaths caused by the very location in which they happened. Listed below are eight places that caught the eyes of the world this year. But only one can be our 2019 Place of the Year. Explore each, vote for your pick, and keep an eye out for our shortlist to vote for the winner!
Mt. Everest
Our planet’s tallest peak made headlines this summer for having an especially deadly climbing season. Eleven people have died this year, many due to overcrowding by inexperienced climbers on the dangerous path. Nepal’s government drew criticism for issuing a record 381 permits, a symptom of what some call too-lax requirements for climbers. In response, Nepali officials proposed new safety rules, including requiring climbers to prove that they have more than three years’ experience with high-altitude expeditions and have scaled another major peak.
Hong Kong
In early 2019, the Hong Kong government proposed the Fugitive Offenders and Mutual Legal Assistance in Criminal Matters Legislation (Amendment) Bill, which sparked rallies beginning in March, eventually turning into mass protests in June 2019 that are still ongoing. Protestors laid out five key demands: complete withdrawal of the extradition bill from the legislative process, retraction of the “riot” characterization of protestors, release and exoneration of arrested protesters, establishment of an independent commission of inquiry into police conduct and use of force during the protests, and finally, the resignation of Chief Executive Carrie Lam and the implementation of universal suffrage for Legislative Council and Chief Executive elections. While the extradition bill has been withdrawn from the legislative process, large-scale demonstrations continue as protestors push for the rest of their demands.
New Zealand
On 15 March, a white supremacist terrorist attacked two mosques during Friday prayers, killing 51 people and injuring 49. This attack was the first mass shooting in New Zealand since 1997. One week after the attacks, 20,000 people gathered to pay their respects at one of the targeted mosques in a nationwide moment of silence and prayer. On 21 March, Prime Minister Jacinda Ardern announced a ban on military-style semi-automatic weapons, and the legislation was voted into place by the House of Representatives on 10 April.
Venezuela
Since 10 January 2019, there has been an international crisis regarding the presidency of Venezuela. Nicolas Maduro took the oath as president of Venezuela in January 2019; however, as of June 2019, Juan Guaidó’s presidency has been recognized by 54 separate countries. Additionally, Maduro’s relationship with the United States deteriorated when Maduro accused the United States of backing a coup and of supporting Guaidó’s presidency in order to make Venezuela a puppet state. Furthermore, the governments of the United States, the European Union, Canada, Mexico, Panama, and Switzerland all applied individual sanctions against people associated with Maduro’s administration. Tensions remain high; on 3 November El Salvador ordered all Venezuelan diplomats to leave the country.
Greenland
In an unprecedented loss, Greenland had two large ice-melts, which culminated in a record-breaking loss of 58 billion tons of ice in one year—40 billion tons more than the average. In the political realm, U.S. President Donald Trump publicly implied multiple times that he would like to purchase Greenland from Denmark. Trump’s – and China’s – interest in Greenland revolves in part around new shipping routes which are becoming possible due to melting Arctic ice.
Palace of Westminster
Brexit has been dragging on since 2016, but since July the politics involved have become uncharacteristically chaotic. After her third Brexit proposal was voted down, Prime Minister Theresa May resigned on 7 June. The notoriously unconventional Boris Johnson was elected to replace her and promptly achieved a new record by facing seven consecutive defeats in his first seven votes in Parliament. In a bold, bipartisan act, some Conservatives joined the opposition to pass a law ensuring that Britain could not leave the European Union without a deal – an act which prompted Prime Minister Johnson to expel 21 members from the Conservative Party (the largest number to leave a party at once since 1981). After a brief (and unlawful) suspension of Parliament, MPs agreed to a Withdrawal Agreement Bill for the first time… but denied a 31 October exit. The new Brexit deadline is 31 January 2020 – as long as the General Election called by Johnson, set to occur on 12 December this year, doesn’t drastically change Parliament – or Britain’s future – yet again.
Paris
Paris hasn’t left the public’s attention since March 2019, when the Yellow Vest Movement made international news after police arrested protestors and fired tear gas at them. Less than a month later, a major fire engulfed the historic Notre Dame Cathedral, causing the roof and main spire to collapse. Just after hosting the Women’s World Cup, Paris recorded its hottest day on record. In October, at the Paris police headquarters, a policeman stabbed four of his colleagues to death and injured two others before being killed at the scene by police.
The Atmosphere
The summer of 2019 tied for hottest summer on record in the northern hemisphere, continuing the trend of extreme weather set by deadly cold winter temperatures, heavy snowfalls, and catastrophic mudslides and typhoons worldwide. Climate change claimed its first Icelandic glacier as a victim, and researchers marked the event with a memorial plaque. All of these climate events are driven by the carbon dioxide being poured into the oceans and Earth’s atmosphere by human activities. 2019 is projected to be the year with the highest carbon emissions of all time, and while the fact that the ozone hole is the smallest it’s been since its discovery might sound like good news, it’s actually being kept on the smaller side by the record heat in our atmosphere.
Voting for the longlist closes Friday 15 November – be sure to check back in on Wednesday 20 November to see which places made it to the top four and vote for the Place of the Year winner!
Featured image credit: earth-lights-environment-globe via Pixabay

November 8, 2019
Q&A with author Craig L. Symonds
There are a number of mysteries surrounding the Battle of Midway, and a wealth of new information has recently been uncovered about the four-day struggle. We sat down with naval historian Craig L. Symonds, author of The Battle of Midway, to answer some questions about the iconic World War II battle.
There has been a lot written on the Battle of Midway over the years. What prompted your interest in this battle?
My editor at Oxford University Press, Tim Bent, urged me to take it on. Oxford has a series on “Pivotal Moments in American History” and Tim thought that Midway belonged on that list. I certainly did not disagree with him, but I told him that there were already several fine books on the battle—notably Walter Lord’s and Gordon Prange’s—plus an excellent recent book (by Anthony Tully and Jon Parshall) on the Japanese side of the battle.
Tim, however, wanted a book on Midway in the Oxford series, and he wanted one that would appeal to a broad general audience—an audience that, 70 years after the fact, did not know a lot about Midway and its importance. I am afraid that I also succumbed to his blatant flattery when he told me that however many books there were on Midway, none of them were written by me! There is no limit to an author’s willingness to be flattered.
Research is a key to writing any good history. More and more information has surfaced over the years. What information had come to light that provided you with a unique take on the battle? Or to put it another way, did any research lead to significant new information that could add to other views of the battle?
I think of all the sources I consulted, the oral histories left behind by the participants made the greatest impression on me and significantly influenced my narrative. When I read the transcripts of those oral histories I felt like I was in communication with the men who were there. Individually, each of them offers only a small glimpse into the overall story, but collectively they merge to form a dramatic narrative. In addition to the oral histories in the Naval Institute’s Collection, I found a rich trove of interviews at the National Museum of the Pacific War in Fredericksburg, Texas, that I do not believe anyone else had researched. While I am at it, let me put in a plug for this museum which, because of its out-of-the-way location, too often gets overlooked. It is worth a visit.
You have referenced other works and other comments made on the battle. Most prevalent has been the idea that this battle was won because of some “incredible” luck or the result of a “miracle.” You argue that the battle was less a result of good fortune and more a result of the men and leaders present at the time. Could you elaborate a little?
Sure. I do not mean to discredit the idea that luck and fortune—even Providence—played a role in the battle. But I did want to emphasize that it was not all luck and chance. Asserting that the American victory at Midway was all, or even predominantly, the result of luck demeans the bold decisions and brave actions of the participants. To some extent it was the Japanese, and especially Mitsuo Fuchida in his widely-read and influential book, who argued that the outcome of the battle was due to luck. He emphasizes how amazing it was that the one search plane assigned to the sector where the American carriers lurked was the very one that had engine trouble; he emphasizes the curious timing of the early arrival of the American torpedo planes that brought the Japanese CAP down to low levels; he writes about Nagumo’s fateful decision to delay a launch until he rearmed his strike planes; he notes the timing of the arrival of the Enterprise and Yorktown bombers. Those events allowed Fuchida to claim that the Americans did not beat the Japanese; they were just lucky! Parshall and Tully have shown how Fuchida was simply wrong on many of these issues, and I tried to show how some of those events (the engine trouble on the search plane, for example) were actually strokes of luck for the Japanese. Luck plays a role in all battles, but in the end, it is the men who win and lose them.
When researching particular parts of the battle, and trying to answer some of the mysteries, did you find answers? Did you find that your research led you to other questions you didn’t expect, and if so, did you find answers to those, or do some of them remain unresolved?
Well, I found answers, to be sure; whether they are the final answers is another question. I suppose that to some extent there will always be issues that are left unresolved. There is, and always will be, a veil of uncertainty about what happened to the Air Group from the USS Hornet that morning. Researching was a real adventure for me because I found that one issue often led to another, and I felt like I was following a trail of clues. This is the way it is supposed to work, of course, but it was especially true in this project.
One aspect of the battle has been endlessly debated, and is referred to as “The Flight to Nowhere.” You make a convincing argument for the final and best analysis of the mystery. You also had many actual participants who were on that flight supporting your conclusions. Can we ever be sure that this will conclude the mystery, or are there still unresolved aspects that we will never know – for instance, where all the After Action reports went?
Well, first, I am gratified that you find my explanation convincing. I actually resisted the argument that I eventually presented. Indeed, I fought pretty hard against it. I simply could not imagine why Mitscher would conceal the actual events, and how so many men would conspire to keep them secret for so long. And of course, there were always some who insisted until the day they died (Clay Fisher, for one) that the Hornet Air Group did not go on a “Flight to Nowhere.” I set out at first to argue that Fisher was right, but in the end, I was compelled by the evidence to conclude otherwise. Mitscher, I think, did what he believed was best for the country and for the service: first by seeking to find and destroy the supposed “second” group of Japanese carriers, and then by concealing events that would have cast the Navy in a poor light. It is hard to argue, even now, that he made the wrong decision.
As for the missing After Action Reports, one possibility is that Mitscher simply told the Squadron Commanders not to submit one; that he debriefed each of them orally, and then wrote—or ordered his staff to write—the only report we have from the Hornet. The other possibility is that the reports were submitted and that Mitscher had them destroyed, but that seems much less likely to me.
What was the hardest part of writing the book? Many of the key participants were not around any more. Did other people help in your research? Were other authors who did interview key witnesses helpful?
The hardest part was solving the puzzles you refer to above. This was much harder than doing the research or writing the book itself.
As to receiving help from others: I am very beholden to both those who went before me and to those still working in the field. Both Walter Lord and Gordon Prange left copies of all their interviews in their papers (Lord’s are at the Naval History and Heritage Command Library in the Washington Navy Yard; and Prange’s are in the Hornbake Library at the University of Maryland). In addition to interviews of American participants, they each commissioned Japanese-speaking researchers to interview the Japanese participants and then to transcribe those interviews into English. Those interviews, too, are in their papers.
In addition, contemporary scholars were very generous with their time and expertise. Two in particular, both of whom are acknowledged in the book, went well beyond normal collegiality and read the entire manuscript, offered insights and suggestions, and led me to sources I would otherwise have missed. These two worthies are John B. Lundstrom and Jonathan Parshall. I owe them much.
As I said before, many of the veterans of the battle are not around anymore to interview, but there are some still with us. Did any of them help shed light on missing facts?
I must acknowledge that among the very first people I talked to after my editor at Oxford were William Hauser and John “Jack” Crawford, both veterans of Midway and avid champions of its memory. They actually came to see me at the Naval Academy where I was teaching, and urged me to do the book. Bill was on the cruiser Nashville up near the Aleutians, and Jack was on the Yorktown when it went down. (I tell the story of Jack’s role on the Yorktown in the book.) Other veterans, including Dusty Kleiss, provided information by e-mail. I had a very pleasant and lengthy lunch with Donald “Mac” Showers, who worked in Hypo during the battle and who helped me understand some of the details (and the tedium) of code breaking. Most of my “interviews,” however, were second hand in that I depended on the oral histories and typed interviews now in various archives.

Something about the Battle of Midway that is particularly interesting is the story of what happened to key participants after the battle. Why do you think that’s so important?
Since I argue that “people make history,” it seemed only right to follow those people—or at least some of them—into their lives after the battle to see what became of them. I was astonished to learn that Miles Browning, who was about as humorless a person as anyone, turned out to be the grandfather of a famous comedian. (If you don’t know who it is, buy the book! For that matter, buy it anyway.)
You are familiar with more than just the battle itself; you go into detail about events beforehand, including naval doctrine, and earlier carrier battles. Why did you do this?
No historical event exists in a vacuum, and it is always necessary to provide context and background, but I agree that this book provides more than usual. Still, I felt it was essential to provide this background to put the battle in its full doctrinal, strategic, and technological context. A complete account of the Battle of the Coral Sea, for example, explains the impact of that experience on Fletcher, Browning and Oscar (Pete) Pederson, among others. It also allowed me to introduce many of the characters early, to explain the role of swiftly changing technologies, and to explore the culture of each side that influenced the decision-making.
Intelligence played a key if not decisive role in the battle. How much was it an intelligence victory as well as a military victory?
For a while after the battle, the role of code-breaking remained a secret. Then when it was revealed, there was a tendency to exaggerate the role it played. From playing no role, it went (in some accounts) to explaining everything. The answer lies somewhere in the middle. While Joe Rochefort and his colleagues were absolutely essential to American victory, they did not provide Nimitz or anyone else with a complete blueprint of Japanese plans. That matters as we assess the battle, because if Nimitz had only small bits of intelligence available to him, his decision to act boldly takes on new importance. Rochefort could have been wrong, and another admiral besides Nimitz might have played it safe and waited to see. In the end, therefore, it was both an intelligence victory and a military victory.
Technology played a significant part as well. How much of the victory might be attributed to the technological advantage of things like radar?
The Japanese had a few technological advantages of their own: the longer range of their planes (mainly due to less armor), and especially their torpedoes—which actually worked! But the American possession of radar trumped them both. Being able to see the Japanese planes en route allowed the Americans time to prepare. If the Japanese had been able to do that at 10:20 on the morning of June 4, the outcome might have been altogether different.
You write from a distinctly American perspective. Was there any reason you approached the subject from the American side rather than both sides?
I have two responses to this. The first is that I did try to include a lot of information about the Japanese: their culture, the political infighting among the various groups in the government and in the Navy, the personalities of the leading decision makers, and the emergence of their technology. Still, I focused more on the American side of the story because I felt the Japanese story had been told so well by Anthony Tully and Jonathan Parshall and did not think there was much that I could add to it. What I did include about the Japanese I included because I thought it was necessary to the narrative.
Why were some promoted and others not after the battle? Mitscher had already been selected for promotion to Rear Admiral. Did his performance at Midway affect his later assignments, and why was Joe Rochefort shelved afterward?
Spruance became Nimitz’s chief of staff after the battle, and almost certainly discussed the battle, and Mitscher’s role in it, with Nimitz privately. There is no evidence of this, but it is hardly likely, in a close six-month relationship between two men who literally lived together under the same roof, that the topic never came up. And Spruance knew that there was something fishy about Mitscher’s After Action report. He as much as said so in his own report. It is not impossible that Spruance or Nimitz, or both of them, actually confronted Mitscher about it afterward. In any case, they apparently agreed that there was nothing to be gained by airing the Navy’s dirty laundry in public. But Nimitz did move Mitscher to a shore command, one that was not a step up from a carrier group commander. It was a kind of exile and it lasted for six months. After that “time out” Mitscher was restored to a sea command with the creation of the Fast Carrier Task Force under Spruance (TF 58).
Rochefort is a different story. He had never been popular in Washington, where the Redman brothers resented his independence and unwillingness to be a team player. Their views influenced Ernest King as well, and after Midway Rochefort was transferred to other duties. There has been a lot of discussion about Rochefort’s not getting the Distinguished Service Medal that Nimitz recommended for him. King disapproved the recommendation on the somewhat specious grounds that it was inappropriate to give one man a decoration to honor a whole command. Only years later, under President Reagan, did Rochefort’s descendants receive the medal.
Featured image credit: “Grayscale photography of flying” by Chandler Cruttenden. CC0 via Unsplash.

How firms with employee representation on their boards actually fare
Board-level employee representation has re-entered the political agenda. Even countries that have traditionally been skeptical about giving employees more say in corporate decision-making now discuss board-level employee representation. Former UK Prime Minister Theresa May suggested changes in this direction in her country in 2017. More recently, Senator Elizabeth Warren, one of the leading presidential candidates in the United States, has introduced a legislative proposal to require that at least 40% of board members in US corporations be elected by employees. Both proposals follow a recent trend toward giving employees more say either at the plant level or at the board level. The map below shows that all OECD countries, except the United States and Singapore, have policies requiring some level of employee representation in at least some firms.

However, while the political debate remains heated and controversial, evidence on the actual impact of employee representation on corporate governance is still quite limited. For most employees, human capital is arguably their largest source of risk: during industry downturns, employers often downsize their workforces, and losing one’s job as a consequence of economic distress is a widespread fear.
Starting in the 1970s, economic theorists proposed a simple solution: workers enter agreements with their employers (called “implicit contracts” in the jargon of the economics profession) in which employers guarantee job security and employees accept lower wages. The benefits of such an arrangement are easy to see: employees receive employment protection, and firms receive an insurance premium in the form of reduced wage payments, which lowers their costs. To make such agreements work, however, the contract has to be enforceable: firms must be able to commit to their side of the bargain even during economic downturns, precisely when they have an incentive to renege.
Germany is an ideal country in which to test how this idea works in practice. It has been at the forefront of board-level labor representation since the 1920s, yet it is about average among OECD countries in terms of employment protection rights. In 1976 Germany introduced the parity-codetermination act, which requires that, in firms with a domestic workforce of 2,000 employees or more, 50 percent of the seats on the supervisory board be elected by employees.
In industry downturns, a sizeable fraction of employees at firms without board-level employee representation lose their jobs, whereas those at firms with 50% worker representation (hereafter, parity firms) do not. This employment insurance does not cover everybody, however. Only white-collar and skilled blue-collar workers benefit from board-level representation; unskilled blue-collar workers receive no insurance and lose their jobs at about the same rate as their counterparts at regular firms. The reason is that unskilled blue-collar workers are not represented on supervisory boards, a result that underscores how important participation in governance is for enforcing such agreements.
Employees certainly pay for this employment insurance: they accept, on average, 3.3% lower wages when they work for parity firms, and shareholders benefit from the arrangement through lower wage costs. However, parity firms lose the flexibility to adjust their workforce during industry downturns. Do shareholders still profit from shouldering that risk? And does the employment insurance put parity firms at a financial disadvantage in the long run? Skeptics of such agreements usually argue that labor representation on the board harms firm performance, because when workers have too much influence on governance, firms may misallocate resources, leading to lower earnings and share prices.
It is true that economic shocks hit parity firms particularly hard: their profits and share prices decline more during industry downturns. However, the savings from employees’ wage concessions appear to be just about sufficient to compensate for the higher risk. Over the whole business cycle there are no discernible long-term differences in performance or firm value between the two types of firms. This means that all the net benefits go to the workers, who profit from the employment insurance, whereas shareholders receive just enough in wage concessions to offset the costs of the arrangement but gain nothing more. It is therefore unsurprising that firms do not voluntarily adopt labor representation, and that codetermination spreads only through regulatory intervention.
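For readers who like to see the trade-off in symbols, here is a minimal back-of-the-envelope sketch; the only figure taken from the text above is the 3.3% wage concession, and the downturn probability below is a purely illustrative assumption rather than a number from the research. Write $w$ for the competitive wage, $d = 0.033$ for the wage discount accepted at parity firms, $p$ for the annual probability of an industry downturn, and $c$ for the extra wage cost per worker (as a fraction of $w$) that a parity firm bears because it cannot shed workers in a downturn. Shareholders break even when the expected cost of providing the insurance equals the premium they collect:

\[
p \cdot c \cdot w \;=\; d \cdot w \quad\Longrightarrow\quad c = \frac{d}{p}.
\]

With, say, $p = 0.1$ (one downturn a decade), the 3.3% concession covers an extra retained-labor cost of roughly a third of a worker’s annual wage in each downturn year, which illustrates how neither side need come out ahead over the full business cycle.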
Featured image credit: ‘Greyscale photography of corporate room’ by Drew Beamer on Unsplash

November 7, 2019
Two years into the opioid emergency
Two years ago the Trump administration declared the opioid crisis in the United States a public health emergency, positioning federal agencies to respond to what has been called the public health crisis of our time. Congress followed, appropriating billions of dollars to federal agencies and state and local governments to support a variety of programs to address opioid addiction treatment and overdose prevention.
What have we accomplished in the two years since the president’s declaration? A lot depends on what we define as an accomplishment, but I have seen many. While the opioid response remains a work in progress, states and territories have demonstrated positive momentum in ways that have benefited individuals, families, and communities nationwide. This includes expanded access to addiction treatment and recovery services; reductions in opioid prescribing; faster, better, more accurate prescription drug monitoring programs; more research on pain management and alternative treatments for pain; state health agency standing orders expanding access to overdose-reversal medication; and a U.S. Surgeon General’s advisory that all Americans carry it (I was able to get mine hassle-free for a $35 co-pay at my local pharmacy).
Several states are also reporting decreases in overdose deaths over the last two years, encouraging news that will hopefully become a national trend. But while the prescription-opioid overdose death rate is falling, many people continue to die from overdoses involving illicit opioids, including heroin laced with fentanyl.
As prescription opioids become harder to obtain, states have seen increases in the use of illicit drugs and, with them, increases in infectious diseases spread by injection drug use, including HIV and hepatitis C. This is a concerning trend that we need to monitor very closely. Similar to the HIV outbreak in Scott County, Indiana, from 2011 to 2014, West Virginia and several other states are currently experiencing a rise in HIV among people who inject drugs. States are also facing crises involving substances other than opioids, such as methamphetamine, cocaine, and benzodiazepines.
So, are we any closer to ending the opioid crisis two years after the emergency declaration?
As someone who works with public leaders across the country, I see many bright spots: places where state and territorial health leaders are doing great work that is making a real difference by making treatment and recovery more available to the people who need them. We need more of those places, but we are indeed making progress.
That said, there is an urgent need for more leadership and financial resources to expand public health efforts to prevent addiction in the first place: what public health professionals call primary prevention. Primary prevention includes programs and policies that mitigate the impact of adverse childhood experiences and adverse community experiences by building individual, family, and community wellness and resilience.
Emerging research demonstrates that preventing adverse childhood experiences can lower the risk of someone developing a substance use disorder (and several other chronic diseases). When families and communities have the resources they need, substance misuse, suicide, violence, and a host of other problems become less prevalent. To fully address this crisis, we must mitigate the impact of stress on young children, work with schools and school-age children, build resilient communities, and increase investment in programs that tackle the other influences on health, such as meaningful employment, safe communities, and access to stable and secure housing.
Eliminating the stigma associated with substance use disorder is another area where more work is needed. Stigma is a long-standing public health problem because it keeps people from seeking the treatment and recovery supports they need, for fear of job loss, family separation, criminal prosecution, or simply being labeled an addict.
A national effort to change the narrative of addiction, from a moral “failing” to a chronic brain disease, will help support people who may be afraid to seek treatment today. We can all work to change how we talk and think about people living with substance use disorder. Former White House Office of National Drug Control Policy Director Michael Botticelli led the charge to change how we talk about addiction, arguing that language reflects our belief system and that we cannot end the opioid crisis if we refer to those most affected as junkies or addicts.
As we reflect on two years of progress, we must continue to respond with the resources necessary to ensure that proven prevention, treatment, and recovery services are consistently available, regardless of where one lives. To do that, we need to work with other government agencies, healthcare providers, and law enforcement, as well as local, state, and national organizations, to counteract stigma and to view addiction as a chronic health condition that affects the brain. If we apply appropriate, evidence-based strategies, addiction is both preventable and treatable. Preventing people from misusing opioids and other substances in the first place is the best way to end our nation’s opioid emergency and improve the health of all Americans.
Featured image credit: “Aerial View” via Unsplash
