Oxford University Press's Blog
April 17, 2015
Narrating nostalgia
The most recent issue of the Oral History Review will be zipping across the world soon. To hold you over until it arrives, we interviewed one of the authors featured in this edition, Jennifer Helgren, about her article, “A ‘Very Innocent Time’: Oral History Narratives, Nostalgia and Girls’ Safety in the 1950s and 1960s.”
Can you talk a bit about how this project began? Did you set out to write about safety and nostalgia, or did it arise on its own?
The article arose from an entirely different project. I was actually at the Roganunda Council of Camp Fire Girls in Yakima, Washington, researching Native American girls in the Camp Fire Girls, a popular national girls’ organization, for an article I was writing for the American Quarterly. I came across a scrapbook that included newspaper clippings about the disappearance, rape, and murder of a nine-year-old girl named Candy Rogers, who had been selling Camp Fire candy. I could not get this story out of my mind and began researching how Camp Fire dealt with the incident. I found no direct public comment on Candy Rogers’s case in Camp Fire’s national publications, but the organization distributed more robust recommendations designed to keep girls safe, especially during door-to-door sales. Although some critics called for girls’ organizations to end public sales, implying that girls had no right to public spaces, the Camp Fire Girls never considered renouncing candy sales. (This stemmed partly from the necessity of fundraising, but also from a commitment to girls’ opportunity to learn about fundraising and publicity through interaction with the public.)
Still, I wondered about girls’ perceptions of their safety. In an interview from the 1980s, a Spokane Girl Scout leader spoke of the effect that Candy’s murder had had on her as a child. Hearing footsteps behind her one day, she was filled with fear that it was “the man who had killed Candy…. You expected strangers to be lurking behind trees waiting to jump” (Spokane Chronicle, 1985). Such comments, however, are difficult to find in the archives, so I conducted fifteen oral history interviews to find out how women remembered safety and vulnerability in their girlhoods.
In the article, you demonstrate the inconsistencies in the interviews and show how your narrators use them to negotiate gender and class identities. Were you able to discuss these inconsistencies with any of your narrators?
I only recently sent the published article to the narrators. A contemporary with whom I shared the article offered a related explanation for the sense of security women remember from their girlhoods, one that concurs with my argument that the narrative of safety is a middle-class strategy of representation. She commented that her own sense of safety came from the silence around sex crimes, which is consistent with middle-class investments in respectability and purity. Without a 24-hour news cycle—and in a culture that regarded victimization as a sign of weakness—parents protected children by not openly discussing crimes. Rape victims, for example, rarely spoke out but often internalized shame instead. Therefore, children might be warned not to take candy from strangers, but they understood the warning not as a fear of sex crimes but as a fear of poison.
The article suggests that the nostalgic past created by narrators serves as a critique of the present, by showcasing the loss of innocence and safety. Would it make sense to think of nostalgic oral history as a history of the present?
I really like that phrase: “oral history as a history of the present.” Indeed, the narrators told me very little about rising crime rates in the 1950s, nor did they seem aware that crime rates had been falling in the U.S. since the 1990s. Although I reject the idea that nostalgia represents false consciousness—a notion that is disingenuous to narrators whose memories are real—nostalgia’s purposes are important for the narrator in her current circumstances. It does reveal more about the present than the past. It may, as Michelle Boyd argues in Jim Crow Nostalgia, provide “a haven from uncertainty, disappointment, and the inadequacy of the contemporary period.” Or it may simply reveal the longings of the narrator. Nostalgia and the sense of the past are inextricably intertwined with our identities.
Moreover, narrators construct narratives about their past based on their current roles and conditions. In one interview, which I did not have space to include in the article, a business woman and mother who had recently sent her only daughter to college reflected on her own childhood and adolescence through comparisons to her daughter’s experiences. She marveled at her memory of her own freedom to roam in 1970s Santa Cruz, in contrast to the tight leash with which she had supervised her child at the turn of the 21st century. For this narrator, the interview became a space for her to explore her parenting rationale, and to question and ultimately justify its outcomes. She commented on her closeness with her daughter while contrasting it to the estrangement she had felt as a girl from her own mother. The timing of the interview as the narrator came to terms with the next stage of her life—as the mother of an adult daughter—marked the narrator’s reflections throughout. At different points in the lifecycle, autobiographical narration certainly performs different functions.
Oral histories are often used to fill in the gaps of ‘official’ archives, but your piece seems to do the opposite. What does it mean to use state archives, like crime statistics, to fill the silences of your interviews?
As I noted above, I viewed the oral history interviews as an opportunity to add girls’ voices to the historical record. Girls’ studies scholarship—with the exception of the work of Vicki Ruiz and of Rebecca C. Haines, Shayla Thiel-Stern, and Sharon R. Mazzarella—has centered girls in other ways but has not engaged oral history to the degree that I would wish. Thus, my project was born out of traditional oral history efforts to fill gaps. When I listened back to the interviews, however, I saw another dimension. Not only were the women’s memories of their girlhoods multifaceted—remembering safety and the freedom to roam alongside memories of caution and self-monitoring—but their memories of safety also ran counter to the discourse of their era. Newspapers and the FBI presented girls as vulnerable, yet the women I interviewed do not describe any particular vulnerability. They felt safe despite the state statistics and media reports that emphasized their vulnerability.
My analysis, then, focused on why they felt secure. In part, as middle-class girls, they experienced protective structures in family, community, and girls’ organizations. In addition, the narrative of safety serves to construct a respectable middle-class girlhood and to highlight the problems the narrators see in the world today. We live in a climate of fear even if crime statistics show a drop in crime rates.
Was there anything you left out of the article that you’d like to include here?
In addition to questions about gender and memory that continue to interest me, a theme that appears in the article but did not receive the full attention that it deserves is girls’ senses of safe spaces. My broader research on American girlhood uses girls’ organization records as key sources. Girls’ organizations were, after all, central institutions for the dissemination of ideas about gender and childhood. My interviews indicate that girls’ organizations and other all-girl spaces are remembered as safe spaces and that women perceive the equal partnership of coeducational spaces with some ambivalence. Even as they recognize the feminist achievements of desegregating educational institutions, they yearn for spaces like Camp Fire where they got to be girls. I am working now on a project on the Camp Fire Girls and have found that the longing for the pre-1970s all-girl organization is especially strong among older alumni. By contrast, younger alumni and the women who worked on the transition to coeducation champion the progressive character of the organization.
Image Credit: “NYC MTA Bus Peering Child 1977 70s – 50 Cent Fare” by Anthony Catalano. CC BY-NC-SA via Flickr.

Publishing the Oxford Medical Handbooks: an interview with Elizabeth Reeve
Many medical students and trainee doctors are familiar with the “cheese and onion,” but not with the person responsible for the series. We caught up with Oxford Medical Handbooks’ Senior Commissioning Editor, Elizabeth Reeve, to find out about her role in publishing Oxford’s market-leading series.
How are you involved with Oxford Medical Handbooks?
I am the Senior Commissioning Editor for the Oxford Medical Handbooks series, so I’m ultimately responsible for the strategy, future direction, and success of the programme, in both print and digital formats. I determine which new titles / editions are educationally or medically important for our markets, and commission well-respected new authors, working with my team to ensure manuscripts are delivered on time. Editorial is really the core of publishing, so we are involved with all other functions of OUP, including Sales and Marketing, Production, Design, Rights, Stock Control, and so on. This is one of the reasons it’s such an interesting job.
Do you have a favourite part of your job?
I enjoy having the opportunity to work with so many different people, authors and colleagues, across the Press. I like commissioning Handbooks that receive positive reviews and sell well — it’s rewarding to play a part in furthering medical education, and to make a difference to patient care. More generally, my job always provides a challenge of one type or another, and there’s always something new to learn.
What about a least-favourite part?
Negotiating difficult contracts and too many emails!

Have you ever heard any interesting stories related to the Handbooks series?
We often get comments and photos of readers with their OMH in far-flung places. For the OH Clinical Medicine 25th birthday party, we asked readers to tell us why OHCM was important to them, and the response was overwhelming. We’re very aware of how much OHCM is loved from all the feedback over the years, but we were totally surprised and delighted when we received songs, music, paintings, even poetry about it!
People often comment on how something in OHCM, such as a drawing or a quotation, has stuck in their mind, and how at a later date they were able to diagnose a patient from their recollection. A recent example was the image of a ‘Lemon on sticks’ which helped a doctor to diagnose a patient with an endocrine disorder.
After spending so much time with the Handbooks, do you think you could perform a diagnosis?
Possibly…with a little help from the OH Clinical Diagnosis, that is.
Have the Handbooks changed in your time as editor?
Over the years a lot of work has gone into making the handbooks more distinctive and consistent across the series in terms of style, structure, formatting, layout, and design. We want readers to be confident that there is the same level of authoritative content and practicality across the Handbook series, in all specialty areas. Users know, for instance, that chapters can be quickly accessed from the back cover tabs, that emergencies are grouped together and highlighted for quick reference, and that all content included is core. We listen to what medical professionals want and need from the OMHs, and then act on it.
What was the state of medical publishing when you began your career vs. now?
When I started in publishing, the process was far more straightforward. Authors delivered a manuscript, which went into production after a few standard checks. There has always been, and will always be, the problem of late manuscript delivery—and for good reason, given how busy medical authors’ lives are. But now we ask much more of everyone concerned, including authors, publishing staff, freelancers, and typesetters. There’s just so much more involved at every stage with the move to digital.
What are some of the other challenges in moving to an online environment?
Publishing is traditionally a print environment, so the move to digital has been a steep learning curve for publishers, and also for authors, who instead of writing a “book”, are writing “content” to be published in multiple formats. All stages of the publishing process take this shift into account, from commissioning a new product or edition, writing, securing permissions, to development and production. In this way, we ensure all content is suitable for multiple print and digital outputs.
Where do you think medical publishing is headed in the future?
Doctors need the most up-to-date clinical information, but they also want to be sure that that information is accurate. With so many sources and formats available, it can be difficult to know where to look and what to trust. I think publishers need to continue to adapt with developments and deliver what the market requires, whether that be print, eBook, app, or online. Different people have different ways of learning, and also different preferences for accessing information. Whilst many might prefer a handheld format for clinical information when on the ward, an eBook may be more suitable for background reading, or a print copy for revising. I don’t think there is any one model which fits all, so publishers need to continue to produce a variety of models to suit different users and uses.
And finally, have you picked up any good advice for Medical Students in your time as commissioning editor?
That studying medicine is never an easy option, but, during the tough times, think back to the reasons you chose to do it in the first place, re-read the new Hippocratic Oath in OHCM, and keep at it! It will be amazing in the end.

Living with multiple sclerosis
Multiple sclerosis (MS) is widely thought to be a disease of immune dysfunction, whereby the immune system becomes activated to attack components of the nerves in the brain, spinal cord, and optic nerve. New information about environmental factors and lifestyle is giving persons with MS and their health care providers new tools with which to manage the disease and live healthier and more productive lives.
It has been known for decades that there is a higher incidence of MS the farther one lives from the equator. One explanation may be that at higher latitudes there is less sunlight, and therefore less Vitamin D production by the body, since sunlight stimulates the production of Vitamin D. Low Vitamin D levels early in life have been shown to correlate with an increased risk of MS later on. Additionally, there is some evidence that in someone who already has MS, lower Vitamin D levels may increase the chance of a relapse. People who have MS should consult with their health care providers about Vitamin D levels and appropriate supplementation and monitoring; too much Vitamin D can also be dangerous.
Another environmental factor that has recently been identified as possibly influencing multiple sclerosis is salt. In the animal model of MS, a high-salt diet makes the “mouse MS” worse. Another study, in humans, suggests that a high-salt diet is associated with more severe disease. These are preliminary findings, and more studies in this area are needed. Diets high in saturated fats may also be a problem for persons with MS, as they can promote obesity, and fat tissue has been demonstrated in some models of disease to be a contributor to inflammation.

While no specific dietary regimen has been shown in controlled trials to impact the course or progression of MS, new data are emerging that the bacterial population of the intestinal tract, the “gut microbiome,” may have an important role in the mechanisms of MS, and the composition of this bacterial population is heavily influenced by diet. Diets that are high in plant-based foods, and low in saturated fats and refined sugars are thought to promote a more beneficial bacterial population.
Smoking tobacco has clearly been shown to be harmful in multiple sclerosis. Not only do smokers have an increased risk of developing MS, but persons with MS who smoke have higher rates of disability and shorter life spans.
Exercise not only has the same benefits for persons with MS as it does for the general population — i.e. in terms of promoting cardiovascular and bone health and general fitness — but some studies have demonstrated that exercise may help a person with MS to manage certain symptoms of the disease. These include fatigue, weakness, spasticity, and depression. There is no one established exercise regimen for persons with MS, but some experts suggest a combination of aerobic exercise and resistance training, for 20-30 minutes, at least 3-4 days/week. Other forms of exercise such as yoga or Pilates®-type training may help some people feel and function better by addressing problems of spasticity and balance.
In summary, in addition to disease management with medication and standard rehabilitation therapies, there are several lifestyle choices that a person with multiple sclerosis can make to feel and function better, and possibly impact their disease course. Don’t smoke, eat a “heart healthy” diet, and get some regular physical activity.
Featured Image: Nerve cells in the brain. CC0 Public Domain via Pixabay.

From Carter to Clinton: Selecting presidential nominees in the modern era
Franklin D. Roosevelt broke the two-term precedent set by George Washington by running for and winning a third and fourth term. Pressure to limit presidential terms followed FDR’s remarkable record. In 1951 the Twenty-Second Amendment to the Constitution was ratified, stating: “No person shall be elected to the office of the President more than twice…”
Accordingly, reelected Presidents must govern knowing they cannot run again. The amendment transformed what had been speculative into a definitive rule: no more than two. The result? Washington and the media soon turn to considering who will be next—speculation of a different order, including among those asking “Why not me?” Sitting Presidents begin to fade in their last two years.
In three post-1951 two-term limit cases (Eisenhower, Reagan, and Clinton) the Vice President as heir apparent was the nominee. Two lost (Nixon to Kennedy and Gore to Bush), one Vice President won (Bush defeating Dukakis). Nixon later was elected twice but resigned due to the Watergate scandal. Ford, his Vice President who took over, lost to Carter in 1976. And Bush watched several candidates in both parties battle in an open contest for the nomination (not including Vice President Cheney). Obama was nominated over Hillary Clinton and won the general election.
Now we are observing an extraordinary nomination contest. Winning a Senate majority in 2014, and increasing their numbers in the House of Representatives and among governors, encouraged several Republicans to seek the nomination in 2016. Three first-term senators have already announced their candidacies at this writing (Ted Cruz, Texas; Rand Paul, Kentucky; and Marco Rubio, Florida). Former and present governors are preparing to join the race. Why so early—ten months before the first state contests? Fundraising, hiring staff, creating an organization, media attention, name recognition, testing themes, endorsements, and preparing for debates, to name a few of the more important reasons.
Polls, along with a few organizational straw votes serving as early barometers, show a wide-open Republican contest. Media coverage has already turned to the question of who will be the GOP presidential nominee, months before the first caucus and primary.

Meanwhile on the Democratic side, the nominee appears to be a new version of an heir apparent trying a second time. Ordinarily the successor has been the Vice President. At this point, however, Joe Biden has made no obvious moves to candidacy. Rather, Hillary Rodham Clinton has emerged as the leading Democratic prospective successor to President Obama. Indeed, she is acknowledged to have created an aura of inevitability. The contrast with the challenges facing the Republican field is stark. She has ample funds, an experienced staff and tested organization, a nationally and internationally familiar image, continuous media coverage and public exposure, and a former President as her husband and advisor. Personally she has served as First Lady of Arkansas and the United States, White House manager of a national health care proposal, Senator from New York, a near-miss presidential candidate in 2008, and Secretary of State in the Obama first term.
One may well ask: why should Hillary Clinton, the inevitable candidate, announce alongside three freshman Republican senators who have so much still to accomplish? Perhaps to exclude others and seal the deal, perhaps to do it in her own style, or possibly both. The video announcement itself was soft, low-key, even gentle, as she stated her intention of meeting with real people in Iowa, listening to them door-to-door so as to serve better as their “champion.” However, as several commentators wrote, her plans to listen avoid clarifying her broader, and possibly more contentious, goals in serving as President.
Other questions related to the Democratic nomination are: Why the inevitability? Are there no other major contenders? No freshman senator “hot shots,” apart from Elizabeth Warren, who isn’t running? No accomplished governors seeking national recognition? The answer seems to lie in the weakness of the Democratic “farm team” in the Senate and the state houses. The only senators mentioned as thinking about running for President are Bernie Sanders, a declared socialist and sitting senator from Vermont, and James Webb, a former one-term senator from Virginia. The sole governor is Martin O’Malley, retired after serving two terms in Maryland (which now has a Republican governor). Joe Biden is mentioned should Hillary Clinton falter. But no one else merits media attention, even speculatively. By comparison, the Republican “farm team” now counts 31 governors, 54 senators, and 247 representatives.
In summary, the 2016 nomination race at this point is wildly unsettled in the Republican Party. The announced and prospective candidates range from those with little or no executive experience to those with impressive records as sitting and retired governors to those lacking both elected executive and legislative backgrounds. No one today can claim a commanding position. Meanwhile, Democrats offer one option, Hillary Clinton, who, as one commentator noted, may have made her announcement serve also as her national convention acceptance speech. The next 20 months should delight card-carrying political junkies here and abroad, perhaps even those objecting to the Twenty-Second Amendment.
Feature image credit: American Flag. Public domain via Pixabay.

April 16, 2015
Five lessons from ancient Athens
There’s a lot we can learn from ancient Athens. The Greek city-state, best recognized as the first democracy in the world, is thought to have laid the foundation for modern political and philosophical theory, providing a model of government that has endured, albeit in revised form. Needless to say, the uniqueness of its political institutions shaped many of its economic principles and practices, many of which are still recognizable in current systems of government. Today, in a global environment of economic turmoil, ancient Athens may be more relevant than ever before, imparting a handful of lessons for today’s citizens of the world.
1. Freelancing is the best way of life.
No Athenian citizen had a full-time job. It was beneath their dignity to work for someone else. Nearly all citizens had a farm so they would spend some time working there, but few farms produced a saleable surplus and most people had to buy much of their food. They might be called up for a military or naval campaign, which would bring in a little money. They got paid if they attended the assembly, though it only met a few days per month. They could also get paid if they were selected by lot to sit on a 500-person jury, but this was necessarily sporadic. To make ends meet, most people turned to manufacturing, making simple objects for their own use, for exchange with neighbours, or for sale in the marketplace. Besides the freedom to spend time philosophising, attending festivals, and going to the theatre, this flexible lifestyle allowed citizens to attend to their formal democratic responsibilities in the assembly and in the courts. It is what enabled Athens’ democracy and her justice system to function in a way we can only envy.
2. Casual manufacturing is a good way to supplement your income.

Before the industrial revolution, most manufacturing was done in small craft shops by a handful of people. There were only a few ways to build a large business: essentially through branding a product that required more than one person to make, or through pre-empting a valuable location, like a mine access point for ore processing or a permitted site for tanning. The industrial revolution brought cost and information benefits to those who could invest in large-scale operations, and in developed economies the small craftsman almost disappeared. New technology is changing that dynamic again and eliminating the advantages of scale on every front, from online apprenticeships and 3D printing to desktop CNC machinery and sales and promotion. The cost of making things for oneself or for sale is once again competitive with shop prices, and casual manufacturing can form a satisfying and financially important part of a freelance portfolio.
3. You don’t need business regulation to enforce community standards.
The Athenians had a very strong belief in fairness and justice—so strong it enabled them to dispense with business regulations. If you made a contract, you were expected to abide by it. Anyone who felt aggrieved by the actions of an associate could bring a legal case and it would be judged by 500 of his fellow citizens, selected by lot. The judges were not concerned with legal interpretations and precedents, just with who seemed believable and what seemed fair.
4. You can finance public goods without an income tax system.
The Athenians had no income tax system yet managed to pay for a lot of public goods, far more than could be afforded out of the state’s main sources of revenues, which were harbour taxes, mine leases, and protection money from its allies. They also believed it was important that public goods should be paid for by those who could best afford it. To achieve this, they developed a system of donations or “liturgies” ensuring that costs fell on rich Athenians, who might be asked to fund a boat, a festival, or an unexpected military contingency.
If you were selected for a liturgy, there were two ways of getting out of it. You could show you had made one recently—which could be easily checked—or you could point to someone richer than you and say they should do it. There was one catch: you had to offer to swap all your assets for theirs, so you would want to be sure you were right.

5. The public has a right to know exactly how its money is spent.
The Athenians had a very clear system of accountability for public projects. They were approved by a democratically elected council and the general assembly, and the task of supervising delivery and costs was assigned to a representative body of citizens to whom the “architect” (project manager) would report. There were no attempts to claim that contracts were commercially confidential and full details were published. Inscriptions on stones near temples carried the names of everyone who worked on the project, their social status, and details of what they did, how many days they worked, and what they were paid. Another inscription by the city walls giving detailed financial records says it was prepared for “anyone who wants to know how the finances were managed.”
Image Credit: “Odeon of Herodes Atticus, Athens, Greece. 2003” by black.stllettos. CC BY-NC-ND 2.0 via Flickr.

Nostalgia and the 2015 Academy of Country Music Awards
The country music tradition in the United States might be characterized as a nostalgic one. To varying degrees since the emergence of recorded country music in the early 1920s, country songs and songwriters have expressed longing for the seemingly simpler times of their childhoods—or even their parents’ and grandparents’ childhoods.
In many ways, one might read country music’s occasional obsession with all things past and gone as an extension of the nineteenth-century plantation song, popularized by Pittsburgh native Stephen Collins Foster, whose “Old Folks at Home” (1851) and “My Old Kentucky Home” (1853) depicted freed slaves longing for the simpler times of their plantation youths. Many early country recording artists tapped into the nostalgia of the nineteenth-century song repertory in their recordings from the 1920s, while still others composed original works that explored similar themes, as in Atlanta recording artist Fiddlin’ John Carson’s 1927 recording of “Old Folks at Home,” titled here “Swanee River.”
Similarly, the bluegrass tradition—which emerged as a subgenre of country music in the mid-1940s—openly embraced nostalgia in a rich repertoire of home songs that are intimately connected with the mass migrations of residents of Appalachia and the US South to northern and midwestern cities during the middle of the twentieth century. In songs such as “The Old Home Place” (written by Missouri transplant Mitch Jayne and popularized by J.D. Crowe and the New South), we hear a musicalized version of author Thomas Wolfe’s reminder that “you can’t go home again.”
Listening to this year’s nominees for the Academy of Country Music’s “Single Record of the Year,” we can hear another resurgence of nostalgia in contemporary country music. Following the brief rise and near demise of “bro country,” which celebrated generic rural spaces filled with trucks, beer, and beautiful “girls” (represented in this year’s nominees by Dierks Bentley’s “Drunk on a Plane,” which depicts a broken-hearted “bro” who buys drinks for everyone aboard a flight to a honeymoon for one in Cancun), the majority of this year’s “Single Record of the Year” nominees train their eyes on nostalgia for both the distant past of the 1960s and the more recent past of the 1980s and 1990s.
Miranda Lambert—whose award-winning 2010 recording of Tom Douglas and Allen Shamblin’s “The House That Built Me” explores nostalgia for her childhood—meditates on the apparent simplicity of a world in which people used public telephones and couldn’t download the latest hits from the internet in “Automatic.”
Whereas Lambert’s “Automatic” might be heard as a call for a return to a simpler way of living, Kenny Chesney’s “American Kids” seems to be an exercise in Time-Life nostalgia for the mediated version of youth culture that has dominated much of the public discourse around the era since the 1970 release of the documentary film Woodstock. Celebrating casual consumption of alcohol and “smoke” and championing a sort of sanitized youth rebellion, “American Kids” also draws upon two iconic 1980s popular songs that engaged in a more nuanced critique of the 1960s and the broken promises of the American Dream: Bruce Springsteen’s “Born in the U.S.A.” (1984) and John Mellencamp’s “Pink Houses” (1983). As the accompanying music video—which features a group of conventionally attractive, white twentysomethings riding around in a psychedelic school bus—indicates, perhaps Chesney and his collaborators did not listen carefully to the songs they drew upon.
“Bro country” pioneers Florida Georgia Line also contribute to the nostalgic tendencies of this year’s Academy of Country Music “Single Record of the Year” nominees with “Dirt,” a celebration of the soil upon which rural people build their lives. Although the song’s lyrics embrace the conventional “bro country” imagery of rural spaces filled with beautiful women, the video superimposes a narrative of love, loss, and a simple life well-lived, as we learn of a romance between “Rosie” (played by Lindsay Heyser) and a man played by songwriter and Nashville actor J.D. Souther—a romance that began in 1968 and concluded with her passing and burial in the same dirt on which they farmed, built a house, and raised a family.
Although nostalgia is certainly not a new addition to the country music tradition, the prevalence of such themes in the upcoming Academy of Country Music Awards’ “Single Record of the Year” category does raise some interesting questions about why nostalgia—particularly nostalgia for the late 1960s and 1980s—seems to pervade contemporary country music. In a 2013 essay titled “Melancholic Subjectivity,” published in Subjectivity in the Twenty-First Century: Psychological, Sociological, and Political Perspectives, psychologist Stephen Frosh has suggested that nostalgia “for more certain times, in which… ‘identity’ was stable and one’s social role [was] clear and secure” may be connected to the rapid pace of life in the early twenty-first century (87-88). Is this, then, a response to the uneven economic recovery from the 2008 recession? Or perhaps these songwriters, artists, and audiences are reacting to constant headlines of government gridlock and partisan bickering? Or could this simply be yet another example of country music’s persistent longing for simpler times?
Headline Image: Harmonica. CC0 via Pixabay.

Can marijuana prevent memory decline?
Can smoking marijuana prevent the memory loss associated with normal aging or Alzheimer’s disease? This is a question that I have been investigating for the past ten years. The concept of medical marijuana is not a new one. A Chinese pharmacy book, written about 2737 BCE, was probably the first to mention its use as a medicine for the treatment of gout, rheumatism, malaria, constipation, and (ironically) absent-mindedness.
So what does marijuana do in the brain? It produces some excitatory behavioral changes, including euphoria, but it is not generally regarded as a stimulant. It can also produce some sedative effects but not to the extent of a barbiturate or alcohol. It produces mild analgesic effects (pain relief) as well, but this action is not related pharmacologically to the pain-relieving effects of opiates or aspirin.
Finally, marijuana produces hallucinations at very high doses, but its structure does not resemble LSD or any other hallucinogen. Thus, marijuana’s effects on our body and brain are complex. Just how does it achieve these effects and are they beneficial? The chemicals contained within the marijuana plant cross the blood-brain barrier and bind to a receptor for the brain’s very own endogenous marijuana neurotransmitter system. If this were not true, then the marijuana plant would be popular only for its use in making rope, paper, and cloth.

The first endogenous marijuana compound found in the brain was called anandamide, from the Sanskrit word “ananda,” roughly meaning “internal bliss.” Anandamide interacts with specific receptor proteins to affect brain function. It is now thought that our brain contains an entire family of molecules that mimic the action of marijuana. Although we do not completely understand the role these molecules play in the brain, their great abundance and the wide distribution of their receptors give an indication of the importance of this endogenous system in the regulation of the brain’s normal functioning. Recent investigations have also shown that stimulating the brain’s marijuana receptors may offer protection from the consequences of stroke, chronic pain, migraines, psychic pain, and brain inflammation.
Surprisingly, it may also protect against some aspects of age-associated memory loss. Ordinarily, we do not view marijuana as being good for our brain, and certainly not for making memories. How could a drug that clearly impairs memory while people are under its sway protect their brains from the consequences of aging? The answer likely has everything to do with the way that young and old brains function and a series of age-related changes in brain chemistry. When we are young, stimulating the brain’s marijuana receptors interferes with making memories. Later in life, however, the brain gradually displays increasing evidence of inflammation and a dramatic decline in the production of new neurons (a process called neurogenesis) that are important for making new memories.
Research in my laboratory has demonstrated that stimulating the brain’s marijuana receptors may offer protection by reducing brain inflammation and by restoring neurogenesis. Thus, later in life, marijuana might actually help your brain, rather than harm it. It takes very little marijuana to produce benefits in the older brain. My lab coined the motto “a puff is enough” because it appears as though only a single puff each day is necessary to produce significant benefit. The challenge for pharmacologists in the future will be to isolate these beneficial effects of the marijuana plant from its psychoactive effects.

How well do you know Anthony Trollope?
Next week, 24 April 2015, marks the bicentenary of one of Britain’s great novelists, Anthony Trollope. He was an extremely prolific writer, producing 47 novels, as well as a great deal of non-fiction, in his lifetime. He also worked for the Post Office, and introduced the pillar box to Britain. So, do you think you know Anthony Trollope? Test your knowledge with our Trollope bicentenary quiz.
Image credits: (1) Quiz Background: Anthony Trollope by Frederick Waddy. Public domain via Wikimedia Commons (2) Featured image: Classical writing © Creativeye99, via iStock

April 15, 2015
The Oxford Etymologist gets down to brass tacks and tries to hit the nail on the head
I have always been interested in linguistic heavy metal. In the literature on English phrases, two “metal idioms” have attracted special attention: dead as a doornail and to get (come) down to brass tacks. The latter phrase has fared especially well; in recent years, several unexpected early examples of it have been unearthed. I say “unexpected,” because the examples turned up in obscure local newspapers, a repository of many language nuggets buried with little hope of resurrection (I’ll return to the burial metaphor at the end of the post) and because those working on the first volume of the OED (1884) did not have a single citation of the phrase. In the First Supplement, the earliest one goes back to 1903, but now we have examples dated to 1863. The feeling has always been that to get down to brass tacks is an American coinage, and indeed it first surfaced in print in Texas. As we will see, some other evidence also points in the direction of the United States.
We hardly ever speak about brass tacks, so why did they achieve such prominence? Who gets down to them, and in what industries? Some hypotheses are rather ingenious, but below I’ll mention only two of them. The others are “a click away,” as they now say. Unless an idiom happens to be a so-called familiar quotation, its origin is usually unknown. (The origin of familiar quotations is another problem.) Somebody whips the cat, takes care of the whole nine yards, is dressed up to the nines, or, conversely, kicks the bucket. Trying to guess how those phrases came about is a worthy occupation, but, unfortunately, it seldom results in significant discoveries. Suffice it to say that every idiom, like every word, was once coined by an individual. The cleverest and the most memorable words and phrases stayed, and now they are common property, while the inventors’ names are forgotten. My contribution today will be modest. In 1931, there was a lively discussion of the brass tacks idiom in Notes and Queries, and, since not everybody has looked through the entire set of that delightful old periodical (at present it has quite a different format), I’ll recount the most salient moments of the exchange with minimal commentary.
In recent history, the phrase spread to England from overseas. However, it took quite some time even for Americans to learn it. In his novel Jennie Gerhardt (1911), Theodore Dreiser still used it in quotes (“It was like his brother to come down to ‘brass tacks’. If Lester were only as cautious as he was straightforward and direct, what a man he would be!” Chapter 43). Also, though everything points to the United States as the idiom’s home, and now, after the most recent discoveries, we find ourselves in Texas, it lacks a specifically Texan flavor (or so it seems). What then is “American” about it?

All the contributors to the discussion were speakers of British English. One insisted that the correct form of the idiom is let us get down to tin-tacks and believed that it was “like many other war-time phrases.” He obviously thought that he was dealing with military slang traceable to World War I (the Great War). The next discussant (a captain) agreed: “The army humour lies in calling common tin tacks ‘brass’, intimating they were of a special kind.” The exchange began to gain its own momentum: a wrong premise acquired pseudo-solid confirmation. This should teach all of us to be careful in dealing with language history. However, the man was rebuffed. His opponent quoted a 1904 example, which can now be found in the OED (from Horace Lorimer’s book Old Gorgon Graham). So the military cookie crumbled almost at once. Yet the brave captain did not surrender, and what he said may be of some interest. The phrase get down to brass tin tacks, he explained, “was undeniably in everyday use in the British army between 1914 and 1920.” And he insisted that only with tin and brass in modifying tacks does the idiom make sense. “Brass, or bronze, tacks, used in boat-building, do not rust, and are far superior to so-called ‘tin’ tacks for durability. Common ‘tin’ tacks appear to have not even a nodding acquaintance with tin, and are apparently made of galvanized iron.”
This statement was followed by a useful addition. After the suggestion that “[a] brass tin-tack would be possible, however paradoxical the name, and an enduring kind; though doubtless army users liked the phrase for its apparent absurdity” (beware of those who use the adverb doubtless), the writer noted that in the United States he had only heard the simple get down to brass tacks and also “a kindred saying” hungry enough to eat brass tacks. I wonder: Did those who opened Brass Tacks Sandwiches in Oregon and invented the name of the establishment allude only to the voracious appetite of their customers, or did they know the idiom about being hungry enough to eat brass tacks? Most likely, it is a coincidence. I have not been able to find examples of the phrase about hunger and brass tacks and have no information about its possible currency in today’s American English (my expertise is limited to being able to eat a horse). Members of the American Dialect Society will doubtless have better luck. In any case, if at some time brass tacks existed as a model of something truly solid, our popular idiom loses part of its mystery, and there is no need to refer to the occupations in which people dealt with those implements. This is my only and most important conclusion.

From the subsequent discussion in Notes and Queries we learn that, according to Hardware Trade Journal (has it been excerpted for the OED?), the most common expression is tin tacks, which is a “corruption” of tinned tacks. “There are also ‘Blued Tacks’ and copper tacks, but no brass tacks. A search through invoices back to 1878 and through very old catalogues fails to disclose a single brass tack.” Does this mean that in the United States brass tacks rather than tin tacks were especially common? What is known about the use of American brass tacks in the eighteen-sixties and before? Perhaps those who searched for the origin of the idiom did not pay enough attention to the object that has brought it into prominence. I never stop repeating that a student of words should pay equal attention to things.
In 1927 Frank H. Vizetelly, Managing Editor of Funk and Wagnall’s New Standard Dictionary (his name has already turned up in this blog: see “Jes’ copasetic, boss,” for 5 July 2006) wrote to a correspondent of Notes and Queries that the origin of the idiom had not been discovered and cited two explanations. One is familiar from the current discussion on the Internet, namely that in the upholstery trade brass tacks hold a protective leather band in place on a chair; driving them is one of the last finishing touches. According to the other one, “brass tacks, being the last decoration that is put round a coffin, when a man gets down to brass tacks, he faces conditions as they are.” Brass nails do have an association with coffins, and the variant of the idiom to get down to brass nails exists, but, as has been observed, getting down to brass tacks is synonymous with getting down to bedrock or coming to the point, rather than putting the finishing touches. The last participant in the discussion wrote: “Before guessing at the origin, it would seem advisable to make certain what the phrase really does mean: and whether it has, or has had, more than one meaning.” This is what I call getting down to brass tacks.
Image credits: (1) Chair. © sag29 via iStock. (2) George Horace Lorimer, half-length portrait, seated, 1922. Photo by Ellis. Library of Congress. Public domain via Wikimedia Commons.

Before Bram: a timeline of vampire literature
There were many books on vampires before Bram Stoker’s Dracula. Early anthropologists wrote accounts of the folkloric vampire — a stumbling, bloated peasant, never venturing far from home, and easily neutralized with a sexton’s spade and a box of matches. The literary vampire became a highly mobile, svelte, aristocratic rake with the appearance of the short tale The Vampyre in 1819. However, the body of literature surrounding the vampire myth was as broad and varied as European culture at the time. Below is a timeline of the vampire stories that preceded – and inevitably influenced – Bram Stoker’s classic tale.
Image: Cemetery, CC0 via Pixabay.

