Oxford University Press's Blog, page 498

June 19, 2016

Making the case for quality: research quality assurance in academic training programs

Successful scientific research requires an enormous investment of resources, education and effective mentoring. Scientists must be innovative, organized, flexible and patient as they conduct their research. Those entrusted to contribute to the research body of knowledge also rely on a support structure that recognizes and accepts the role of setbacks in the discovery process. In scientific research, three steps forward may rapidly result in two steps back. However, that single step forward may be the one that changes everything. The ‘negative’ (but accurate and reproducible) results generated by one group today may be the critical data needed to clarify uncertainty for another group tomorrow. Nearly all research data are important, and should be presented for review in order to move the collective effort forward. This comprehensive cohort of research data shared throughout the research enterprise is needed to fuel scientific progress. However, if the quality of the data is poor and unrecognized, progress will be impeded, sometimes for years to come. Therefore, scientists conducting (and reviewing) research must be fully equipped with the appropriate tools, training and expertise to ensure and evaluate data quality in order to confidently advance or strategically retreat in response to research outcomes.


The complexity of scientific research is rapidly expanding. Hopefully, with increasing access to compelling new evidence and rapidly advancing technical tools, we will discover novel solutions to challenging and critical problems. Alternatively, depending on the quality of the data, we may also encounter disappointing setbacks resulting from false starts, dead ends or irreproducible results.  While false starts, unexpected failures and data detours may eventually contribute to positive research outcomes, irreproducible research rarely delivers any measurable return on investment. The frequent (and oft-reported) inability to reproduce scientific research data is a major disappointment to scientists, institutions, funding and publishing agencies, as well as the general public. Irreproducible research is recognized as a critical impediment to our scientific progress and thoughtful people are working on thoughtful solutions.


Strategies to improve the way that scientists design, propose, describe and report research studies have been initiated by entities that fund and publish research efforts. The National Institutes of Health (NIH) hopes to influence research rigor (and enhance reproducibility) by providing new training opportunities and establishing new grant proposal submission requirements designed to improve the design, description and methodology of the research they fund. Similarly, scientific publishers have much at stake as they facilitate the flow of research data throughout the research network. In support of research reliability, they are collaborating to improve scientific submissions, encourage data sharing and enhance the potential for successful research replication.



Academic institutions are also being called upon to address concerns associated with irreproducible research outcomes. In the United States, most graduate training programs include mandatory instruction on the ethical conduct of research and the humane treatment of animals. In addition, mentoring programs are designed to offer a robust and rigorous introduction to research conduct. However, additional scientist support may be required as the impact of irreproducible research grows. The science of today is very different from the science of the past, and all participants are struggling to keep pace with the opportunities and challenges presented. The complexity and scope of data (big data, metadata), and the expectations associated with these data (shared data, cloud data, private data, public data), require that scientists attain an entirely new level of data literacy. Multiple strategies are required to ensure that scientists have the skills and infrastructure they need to conduct their important research and manage their complex data.


One proposed strategy is to integrate and implement frequently recommended (but rarely adopted) principles of research quality assurance (QA) into academic research environments. Quality Assurance systems are typically used to support the quality and reconstruction of data in manufacturing or regulated research environments. They are established to address the processes by which scientific data are generated, collected and used. Quality Assurance systems are implemented in order to provide assurance that data are fit for their intended purpose, and that the processes under which they have been generated are accountable and transparent. This is achieved by ensuring that appropriate records (for example: training, equipment, standard operating procedures, specimens, reagents, supplies, facility and data management) are maintained so that the work can be recreated if necessary to answer questions about data accuracy or reliability. Quality Management Systems (QMS) are quite rigorous in regulated research environments; however, core principles of these systems could be strategically (and more simply) adopted in order to design a program with an appropriate and sustainable scope for application in the basic research environment.


Quality Assurance training programs in academic environments are rare, even though the adoption of simple, sustainable and risk-based research QA best practices (sometimes called ‘Good Research Practices’, or GRP) has been recommended for many years. As a result of this unfortunate gap, most scientists remain unaware of how QA could improve research documentation and increase the likelihood of accurate data reconstruction (which should also improve research reproducibility). In addition, graduates leave their training institutions unequipped and under-prepared to transition quickly into careers where research is routinely conducted within environments where robust QMS are in place.


Integrating research QA best practices within basic research environments will take time, expertise, resources and support. However, the time is right for the development of effective training and implementation models that address data quality and data literacy. These models need to be voluntary, sustainable, science-centered, and risk-based to ensure that they add value (and not bureaucracy) to the user. The documentation and records that are generated through the implementation of research QA best practices will provide credible evidence that data are accurate, reliable and can be reconstructed. Training in research QA will complement the new initiatives related to research premise, design, reagent characterization and bias being developed by the NIH, as well as those related to data sharing and reporting as initiated by publishing agencies.


Increasingly, scientists must be able to demonstrate that their data are accurate and repeatable in order to ensure efficient use of scarce resources, promote the quality of their research, and attract and warrant continued funding. Academic institutions have the opportunity to promote scientific excellence and improve research training by introducing innovative QA and data literacy programming as promising institutional best practices. If institutions fail to do so, or respond too slowly, scientists should adopt research QA best practices on their own (using currently available resources) to showcase the quality of the important work they do.


Featured image credit: Reading room of the Sainte-Barbe Library, Paris, by Marie-Lan Nguyen. CC-BY-2.0 via Wikimedia Commons.


The post Making the case for quality: research quality assurance in academic training programs appeared first on OUPblog.


Published on June 19, 2016 00:30

June 18, 2016

What Jane heard

Music is everywhere and nowhere in Jane Austen’s fiction. Everywhere, in that pivotal scenes in every novel unfurl to the sound of music; nowhere, in that she almost never specifies exactly what music is being performed.


For film adaptations this absence of detail can be a source of welcome freedoms, since the imaginative gap can be variously filled by choosing more or less appropriate historical repertoire or by commissioning a new score, depending on the desired tone and effect of the scene. For cultural historians of music and literature, however, it can be a stumbling block in understanding both how music fed Austen’s creative imagination and how her novels illuminate the musical culture of her day.


It can be tempting to make connections to famous composers roughly contemporary with Austen. This usually means the ‘greats’ of late 18th- and early 19th-century Vienna: composers such as Mozart and Beethoven, whose music is most familiar to classical music audiences today. Studies that draw comparisons between Austen’s fiction and Viennese instrumental music rest on the (usually unstated) assumption that music we now consider great encapsulates its own era in some way that can be related to Austen’s achievement. There is no attempt to establish a historical context in which Austen herself may have known this music or become familiar with its strategies. Thus while such comparisons may generate new readings of her work for those willing to accept the somewhat problematic aesthetics of the point of departure, they provide little insight into Austen’s own musical experience or its relation to her writing. And they are no help at all in understanding the English music culture of her time, the environment her own readers would have understood as the frame of reference for musical scenes in her fiction.


Austen’s family music books furnish a snapshot of at least some of the music Austen knew and performed. The books, held by Jane Austen’s House Museum and private owners descended from the Austen family, are now freely available online as digital facsimiles. Just under half of the surviving volumes belonged to Austen herself, and she certainly knew the books owned by other members of her family. In total, the 18 books include just over six hundred pieces of music, mainly for voice, keyboard and harp, all of it suitable for domestic performance by skilled amateur musicians.



Image: Jane Austen’s copy of ‘William’, adapted by Thomas Billington from Franz Josef Haydn’s Piano sonata in C major, Hob. XVI/35. Copyright 2015 Jane Austen’s House Museum. Image by the University of Southampton Digitisation Unit. Used with permission.

It is immediately striking how little the repertory overlaps with textbook music-historical accounts of the late 18th and early 19th centuries. Comic and sentimental songs by English theatrical musicians such as Shield, Hook, and Dibdin and piano works by London School composers such as Clementi, Dussek, and Cramer appear alongside French overtures and romances arranged for harp and duets from Italian operas. Viennese music is certainly there—particularly represented by Haydn, though there is a little Mozart too—but often in very unfamiliar forms. For example, Jane Austen copied the immensely popular song ‘William’ into her manuscript songbook, which she probably began in the mid-1790s. This turns out to be an arrangement of the first movement of Haydn’s piano sonata in C major, Hob. XVI/35, transposed to F major and with English words added. She also made a copy of the sonata itself in another manuscript: I wonder if learning the song first—it is the earlier manuscript—was her introduction to the piece.


In a few cases, later family memoirs mention specific pieces that Austen performed. For example, Austen’s niece remembered her singing a setting of Burns’s poem ‘Their groves o’ sweet myrtle’ during the years Austen was drafting Sanditon, with its extended comparison of Burns and Walter Scott. The family music books not only identify which of the many settings of this poem Austen sang, but also show her intervening in the song text in fascinating ways. Austen herself names only one composer in her novels—Cramer—and specifies only a single piece—the traditional Irish song ‘Robin Adair’. Both appear in a single scene in Emma, published in 1816 (and enjoying its bicentenary this year). The text of ‘Robin Adair’ was reputed to be by Lady Caroline Keppel, written while separated from her beloved after her parents forbade their marriage. The song’s theme of parted lovers has been seen as an important musical clue to the real state of affairs between Jane Fairfax and her secret fiancé Frank Churchill in Emma.


Here the music books suggest that what Jane Fairfax plays on the new Broadwood piano may be a set of variations on ‘Robin Adair’ by George Kiallmark (1781-1835), a colleague of Austen’s piano teacher and a frequent visitor to Hampshire during Austen’s lifetime. Kiallmark’s variations appear in the Austen music books in a binder’s volume of items that originally belonged to several different family members, and the copy may well have been Austen’s own. Like most variation sets, this one rings increasingly complex changes on its song model, while never completely losing track of the melody at its heart.



http://blog.oup.com/wp-content/uploads/2016/06/06-Robin-Adair-variations.mp3

Sound file credit: George Kiallmark, ‘Robin Adair’, David Owen Norris, piano. From Entertaining Miss Austen: Newly Discovered Music from Jane Austen’s Family Collection (Dutton Epoch CDLX 7271, 2011). Used with permission.


My favourite scene in the 2008 ITV television series Lost in Austen is when the hapless 21st-century visitor, Amanda Price, is asked by characters from Pride and Prejudice to provide musical entertainment. Her desperate choice of a song from her own childhood—Petula Clark’s ‘Downtown’—and her audience’s subsequent confusion use the incompatibility between 19th- and 21st-century musical knowledge and expectations to produce comic gold. It’s an amusing moment but also a reminder that however well we think we know Jane Austen, there are parts of her world that can still surprise.


Featured image credit: Mansfield Park, ch. 11: everyone is gaily gathered around Mary at the piano to sing a glee, except Fanny and Edward. Illustration by Hugh Thomson. Public Domain via Wikimedia Commons.


The post What Jane heard appeared first on OUPblog.




Published on June 18, 2016 05:30

Preimplantation genetic screening: after 25 years and a complete make-over, the truth is still out there

More than 25 years ago, it was found that more than half of human embryos of about three days old cultured in the lab showed chromosomal abnormalities. Many of these abnormalities did not come from the sperm or the egg, but occurred after the embryo had cleaved twice, creating four cells, or three times, reaching the eight-cell stage. The – not unreasonable – hypothesis arose that these chromosomal abnormalities were responsible for the low efficiency of human in vitro fertilisation (IVF), and a new addition to the assisted reproductive technologies (ART) was born: preimplantation genetic screening, or PGS. The idea was to not transfer embryos with chromosomal abnormalities.


PGS is achieved by removing one or two cells from the total of eight cells the embryo normally has on the third day of development, and by analysing these biopsied cells with a method called fluorescent in situ hybridisation, or FISH. As only embryos with a normal number of chromosomes would be transferred to the uterus, this would increase the number of pregnancies after IVF, right? Wrong: after the results of a number of clinical trials were published about ten years ago, it appeared that the idea was clever but too simple. It turned out that FISH could not fully represent all the chromosomal abnormalities present in the embryo, as a maximum of nine of the 23 pairs of chromosomes could be counted; moreover, the eight-cell stage was particularly prone to chromosomal errors, and so the timing of the biopsy was wrong. Of considerable concern was the finding that many embryos were so-called “mosaics”: some of their cells could be completely normal, while other cells of the same embryo would show chromosomal abnormalities usually found in cancer cells. One or even two cells taken from an embryo would, in many cases, misrepresent its genetic content, and therefore add nothing to the selection done by embryologists on the morphology of the embryo alone.


In comes what has been dubbed PGS 2.0: genetic analysis technologies took a giant leap in the last decade, and it became possible to screen the whole chromosomal content of a single cell. The biopsy at day three was replaced by biopsy at day five, when the embryo has more than 100 cells and has already developed into an inner cell mass, which will become the foetus, and an outer layer of cells destined to become the placenta. It is generally believed that embryos at this stage, called blastocysts, have fewer cells with chromosomal abnormalities, and up to ten cells can be taken from the embryo, presumably causing less harm than a one-cell biopsy at day three.


But still not all is well: this so-called PGS 2.0 is advocated even more strongly than the first generation, although the scientific community is still waiting for evidence to support its usage. In the absence of such evidence, the topic is heavily debated. Advocates of PGS 2.0 maintain that sufficient proof is available that PGS 2.0 increases the chances of their patients becoming pregnant. Others are adamant that the only way to introduce a new treatment – and that includes IVF treatments – is after its usefulness has been shown in solid randomized controlled trials.


In June 2015 we started to collect the opinions of all the leaders in the field. During the process of opinion collection it became clear that consensus is lacking regarding all major aspects of PGS 2.0. This starts with the question which patient groups, if any at all, can benefit from PGS 2.0. Recently, a new and highly efficient method for freezing of human embryos, called vitrification, was introduced. Recent reports state that over 80% of embryos frozen with vitrification will survive the thawing procedure, and retain the same likelihood of establishing a viable pregnancy as an embryo that is transferred a few days after fertilisation. The high efficiency of vitrification of blastocysts has added a layer of complexity to the discussion, and it is not clear whether the best strategy to follow would be PGS in combination with vitrification, or PGS alone, or vitrification alone, followed by thawing and transferring embryos one by one. The opinions range from favouring the introduction of PGS 2.0 for all IVF patients rather than using PGS as a tool to rank embryos according to their implantation potential, to scepticism towards PGS pending a positive outcome of robust, reliable, and large-scale randomized controlled trials in distinct patient groups.


We were confronted with difficulties and inconsistencies regarding the costs of PGS for the patients, which range from €350 to approximately €9,000. Our colleagues rightly commented that it needed to be clear what the costs include. Only biopsy and analysis? Or the complete cycle? Frozen embryo transfer also carries its own costs, such as ultrasound examinations, medication, embryo thaw and culture, and embryo transfer, as well as indirect costs such as lost wages or the cost of childcare. Some of our colleagues argued that this aspect should not be included in the article and the discussion, as it was not part of the “scientific” discussion. We felt that it was urgently needed, especially for a procedure whose usefulness is still being debated, but it is hard to get these data. However, the future of PGS depends not only on clinical or scientific evidence but also on a cost-benefit analysis at two levels. First, the costs of the whole selection procedure in terms of dollars or euros should be considered. Second, the costs of PGS 2.0 should be balanced against transferring untested embryos one by one in terms of time to pregnancy. As long as both these costs are unknown, the future of PGS use will remain based only on opinion.


Featured image credit: Blastocyst. Author’s own photo used with permission. 


The post Preimplantation genetic screening: after 25 years and a complete make-over, the truth is still out there appeared first on OUPblog.


Published on June 18, 2016 04:30

Queering oral history

In their substantial essay from OHR 43.1 on the peculiarities of queer oral history, authors Kevin Murphy, Jennifer Pierce, and Jason Ruiz suggest some of the ways that queer methodologies are useful and important for oral history projects. Moving between Alessandro Portelli and recent innovations in queer theory, the piece offers both practical and theoretical suggestions about what oral historians can learn from queering oral history. This week on the blog we bring you an interview with contributor Jason Ruiz, who explains some of the motivations behind the project, the erotics of oral history, and how others can build on the successes of the Twin Cities GLBT Oral History Project.


In the article, you draw a really productive distinction between identity politics and the politics of sexuality, explaining that doing so can help get beyond some of the problems with identity based research. Can you talk about the issues or conversations that helped to make the importance of that distinction clear?


We started the Twin Cities GLBT Oral History Project about a decade ago, at a time when a lot of us in graduate school were learning to critique identity categories—something that seems so obvious to students today. We had countless conversations about how to collect and explore sexual histories and the personal histories of people who identify as queer or sexually marginal in any way without reifying categories like “gay” or “lesbian.” For example, just calling ourselves the Twin Cities GLBT Oral History Project was a huge compromise, since so many of us personally rejected a gay or lesbian identity and would have called ourselves “queer.” But we also wanted a name for the project that would be legible to older queer people and, to be honest, funders too, and so “GLBT” made sense for us on a practical level.


Another important aspect of the queer methodology you lay out is the erotics of oral history, or the role desire can play in the process of creating oral history. How have you productively navigated desire within interviews?


This is something that I think about a lot and have written about in a chapter in Bodies of Evidence, edited by Nan Alamilla Boyd and Horacio Roque-Ramirez. And of course, some of the early and seminal works in the field, such as Esther Newton’s work and Boots of Leather, Slippers of Gold, address the oral historical encounter as a potentially erotic one, so I’m not exactly the first to make note of this. Part of what I argue in the Bodies of Evidence piece is that it was much easier for me to get gay-identified men to discuss sex with me than it was to elicit those kinds of narratives from women. This has to be due in part to the fact that the erotic spark between the gay-identified men I interviewed and me and/or my intern was an undercurrent in many of our interviews. When those men shared details of their sexual pasts with us, they mostly relished all of the fun that they had had. And I can’t deny that they shared some very hot stories with us. Oral history isn’t only about people bearing witness to big historical shifts or patterns or tragedies; it’s also about—and much is revealed by—the fun romps and sexy secrets our narrators can tell us about.


Given those limitations and possibilities, can you give some pointers to people interested in doing similar oral history projects?


First and foremost, form a team. A large-scale oral history project has a lot of moving parts and many hands make light work. One of the most important aspects of our work with the Twin Cities project was that our team was pretty diverse in terms of academic field and institutional standing. This was, I think, a great strength for us, so I’d also advise those launching a new project to go beyond their own fields and assemble a team that is as intellectually diverse as possible.


Second, take a look at all of the great models out there. I recently chaired a panel at the Organization of American Historians that featured the excellent and quite varied work of the Queer Newark Oral History Project, the University of Minnesota’s Transgender Oral History Project, and StoryCorps’ OutLoud Project. These three endeavors are all really different from one another, but, taken together, provide a wealth of ideas about how to collect and interpret oral histories. When we were starting the Twin Cities project, we looked to oral history projects that had very little to do with sexual identities and practices since there were so few out there at the time, but today there are many more wonderfully inspiring and provocative models from which anyone interested in starting an oral history project can draw.


Finally, I’d suggest that the designers of a new project just have a ton of fun. I know that not all projects explore light or amusing topics, but I found the interview process to be a very fun process, even when the subject matter was very serious. I just taught a queer studies class at Notre Dame that required the students to collect and interpret an oral history for their final project, and I couldn’t emphasize to them enough that it’s a privilege and so much fun to get to meet with a stranger and ask them a wide variety of questions about their lives. Sure, it’s also exhausting, but the only way to counteract that fact is to have fun with your project.


Is there anything you couldn’t address in the article that you’d like to share here?


We wished that we could have name-checked so many more of the innovative and exciting oral history projects currently underway, but there was, of course, not enough room to do so. On a personal note, I wish that I could have gone more deeply into some of the artistic endeavors that we mention in the piece. Right around the time the essay was published, I was lucky enough to see Anna Deavere Smith talk and perform live at St. Mary’s College in Indiana near where I live. When I saw how Smith interpreted her interviewees’ words on race relations in America and helped deepen my understanding of historical events like the L.A. Riots, I cringed at how much more I could have written on her remarkable work and others like it, such as E. Patrick Johnson’s work (which we do discuss in the essay but which I have not been fortunate enough to see in person).


Many of our readers likely have your book, but for those who don’t, could you discuss some of the interventions you make there?


Queer Twin Cities speaks to a broad and diverse history of sexual difference in Minneapolis, St. Paul, and environs, but does so from a distinctly queer point of view. Whereas we had to make the compromises that I describe above in launching the project and collecting oral histories, it is in the interpretation of those histories that our intellectual perspectives really became clear. This was, for many of us, the fun part: we were able to take the incredible stories that our oral history narrators shared with us and interpret them from the intellectual perspectives in which we were all immersed. I’m proud that we, the editors and the authors, were able to interpret oral histories in ways that felt intellectually vital and provocative to us. This is also part of the work that Kevin Murphy, Jennifer Pierce, and I try to do in the piece for the Oral History Review. In that essay, which is a natural extension of our collaboration on Queer Twin Cities, we lay out an argument for how queer studies has influenced, and should influence, the oral historical endeavor, and explore a variety of methodological, historical, and interpretive frameworks that make queer oral history different.


Featured image: Minneapolis I-35W Bridge • Rainbow Colors • Twin Cities Pride by Tony Webster, CC BY 2.0 via Flickr.


The post Queering oral history appeared first on OUPblog.


Published on June 18, 2016 04:30

What is college for?

1 May was National College Decision Day in the U.S. – the deposit deadline for admission into many U.S. colleges and universities. Early indications suggest that we’re poised for a fifth straight year of declining enrollments. In the Atlantic earlier this year, Alia Wong pointed out that this trend continues the widening gap between high school graduation and college enrollment in this country: in 2013-14, 82 percent of high school seniors made it to graduation (an all-time high), yet only 66 percent immediately enrolled in college (down from 69 percent in 2008). As social scientists and educators continue sifting the data for causes, it is worth asking some big questions. What is education supposed to do? Why go to college anyway?


In this pluralistic age – allergic to overarching, one-size-fits-all accounts of anything – it is very difficult to name a singular purpose for the lengthy exposure to the fields of expert knowledge we call a college education. Yet for the cultural architects of liberal education, this singular purpose was – and could only be – a definite human good.  For Aristotle, it was the happiness that comes with contemplation of the truth. For Augustine it was properly ordered affection for God and neighbor. In effect, these goods create reasons to endure the crucible, to invest financially, and most importantly, reasons to put energy into the project of becoming one’s best self.


So the question must become: What visions of a good human being will guide higher education into the new millennium? I say visions because the history of liberal education supplies no uniform definition of this good. And there is deep disagreement within our educational institutions today. This is a good thing. I don’t mean that as a banal appeal to pluralism or relativism. For to recognize plurality and to disagree with one another is an achievement that will clarify the visions that animate our institutions – often encrusted in mission statements or left to individual teachers. Disagreement helps refine understanding and make our visions worthy of college students’ lives.


Still, what unites the longstanding tradition of liberal education, stretching back at least as far as ancient Greece, is a disposition to answer this mega-question in terms of the moral and intellectual virtues that educators cultivate in their students. Such virtues are the excellences of soul that one gains through sustained, intentional practice. Liberal education – whether in its classical pagan, medieval Christian, or modern humanist phases – has furnished us with an extraordinary list of virtues. Like stars in the night sky, each virtue tells the story of a particular constellation within our historical firmament of goods. Taken as an illuminated map to the human pursuit, they help travelers discover the way to education, to realized humanity.



Image credit: University Life 169 by Francisco Osorio, CC BY 2.0 via Flickr

The most compelling voices reassessing the reason for higher education today are those authors who bring virtue – rather than professionalism – into view. Martha Nussbaum retrieves Socratic self-examination and Stoic duty to humanity, updated to the tasks of liberal education and democracy in the developing world, in her book Cultivating Humanity: A Classical Defense of Reform in Liberal Education. James K.A. Smith reimagines a broadly Christian anthropology – rooted in the writings of Augustine of Hippo – that connects love of learning with the highest human vocation of love for God and neighbor in his book Desiring the Kingdom: Worship, Worldview, and Cultural Formation. William Deresiewicz renews the Romantic virtues of authenticity, imagination, and empathy in the face of hollowed careerism and priestly academic specialization in his book Excellent Sheep: The Miseducation of the American Elite and the Way to a Meaningful Life. These are the voices that deserve our ears. Forward looking and attuned to the global and economic demands of contemporary higher education, they are appreciative nevertheless of the riches of our inheritance.


In my own work as director of the great books curriculum at a small university outside of Portland, Oregon, I am continually surprised by the interest in our program from students in the STEM disciplines and professional majors. These students and their families choose to devote significant time and money to pursue a liberal arts curriculum only when they see the connection between our curriculum and the character formation it drives.


To be sure, material benefits are essential to a complete vision of human flourishing. Yet our default vision of the good life today is too often restricted to consideration of material accomplishment – resumés, salaries, and zip codes.  If we do not have a rich understanding of what education is for, we’ll lose more than mere enrollment counts. We don’t need merely smarter, wealthier, or more politically civilized people. We need good people. We need higher education to reclaim its distinctive role in moral formation – in the transition from youth to adulthood – situated at the crossroads of family, religious community, workplace, and state. The task now is renewing the conversation about the sorts of excellence that characterize the good people we want to form. Education without moral vision is no education at all.


Header image credit: College by Jacob Roeland, CC BY-SA 2.0 via Flickr


The post What is college for? appeared first on OUPblog.


 •  0 comments  •  flag
Share on Twitter
Published on June 18, 2016 03:30

June 17, 2016

From “O Fortuna” to “Anaconda”: A playlist of musical profanity

Almost everyone swears, or worries about not swearing, from the two-year-old who has just discovered the power of potty mouth to the grandma who wonders why every other word she hears is obscene. Whether they express anger or exhilaration, are meant to insult or to commend, swear words perform a crucial role in language. But swearing is also a uniquely well-suited lens through which to look at history, offering a fascinating record of what people care about on the deepest levels of a culture–what’s divine, what’s terrifying, and what’s taboo.


One of the best ways to observe how our cultural standards on profanity have evolved is to look at the music we have enjoyed and the lyrics that have provoked censors, parents, and the general populace over the past several centuries. In the playlist below, profanity expert Melissa Mohr highlights some of the most shocking songs from modern history.


“O Fortuna” from Carmina Burana



Though the music was written in the 1930s, the lyrics are from early thirteenth-century Goliardic poetry, written for the most part in Latin by traveling scholars who celebrated drinking, sex, and gambling in stark contrast to the religious literature more prominent in this era. Think: Spring Break, 1230 edition.


“Farai un vers de dreyt nien” by Guilhem de Peitieus, the Duke of Aquitaine



This is a song from around 1100 by one of the first troubadours: Guilhem de Peitieus, the Duke of Aquitaine. Troubadours usually sang verses like this one, about courtly love—a knight’s abject devotion to a powerful and standoffish lady. Guilhem, though, also wrote one about how two women dragged him to a cave, tortured him with a cat, and kept him as their love slave. He reports that he “fucked them” [“fotei” in medieval Occitan] 188 times over eight days—that’s once every 61 minutes. That song is available on YouTube.


“Now is the month of Maying” by Thomas Morley



This song from 1595 contains no explicit swearing, but pretty much every word is a double entendre, from “playing” to “maying” to “barley-break.”


“Sir Walter enjoying his damsel, Z. 273” by Henry Purcell



Considered to be one of the greatest composers of seventeenth-century England, Henry Purcell wrote operas, church music, and a collection of tavern songs like this one, which features a musical depiction of a woman’s orgasm.


“Shave ‘Em Dry” by Lucille Bogan



This 1935 track from Lucille Bogan is years ahead of its time—it is one of the most obscene songs ever recorded, as “shave ‘em dry” was African-American slang for sex. When Bogan brags that “my cock is made of brass” she is referring to her vagina, not a surprise penis (cock was southern African-American slang for the female genitalia even through the 1980s).


“Louie Louie” by the Kingsmen



When “Louie Louie” came out, people had such difficulty understanding the words that concerned parents decided the lyrics must be horribly obscene and urged the FBI to investigate. After almost three years, the FBI found that it had no idea what The Kingsmen were saying either, and so dropped the case.





Concerned parent: At night at 10 I lay her again
FBI: ?
Actual lyrics: Three nights and days I sail the sea

Concerned parent: Fuck you girl, Oh, all the way
Actual lyrics: Think of girl, constantly

Concerned parent: Oh, my bed and I lay her there
Actual lyrics: On that ship, I dream she’s there

Concerned parent: I meet a rose in her hair.
Actual lyrics: I smell the rose in her hair.



“The Bad Touch” by The Bloodhound Gang



The twentieth-century equivalent of “Now is the month of Maying.” It dares to ask the question: How much of 1990s pop culture can be put to use in sexual innuendo? Answer: Lots.


“Anaconda” by Nicki Minaj



Nicki Minaj may be one of the most lyrically obscene female artists out there. She uses profuse swearing to present herself as strong and active in the frequently misogynistic world of rap, where women are often “fucked,” but less often do the “fucking.” Plus, this song has lines like “He toss my salad like his name Romaine.”


“Love Yourself” by Justin Bieber



Though Justin sings “love yourself,” he means “fuck yourself”—a classic technique to evade radio censorship and avoid outraging the parents of younger listeners. Enrique Iglesias’s “Tonight (I’m Lovin’ You)” employs a similarly evasive technique.


“The Hills” by The Weeknd



The Weeknd played this song at Jingle Ball concerts last year. I waited to see whether he would change the lyrics in deference to the audience, which was mostly tweens and their mothers, and he did censor the first f-word.  By the end, though, thousands of 12-year-old girls, including my daughter, were singing along, “I just fucked two bitches ‘fore I saw you”—an interesting parenting moment.


Find the complete playlist here:



Featured image: “Nicki Minaj” by Eva Rinaldi. CC BY-SA 2.0 via Flickr.


The post From “O Fortuna” to “Anaconda”: A playlist of musical profanity appeared first on OUPblog.


Published on June 17, 2016 05:30

Historical lessons for modern medicine

When looking at the use of drugs in modern medicine, specifically anaesthesia and intensive care – it is important to realise that this is nothing new at all. The first attempts at general anaesthesia were most likely herbal remedies and opiates, evidence of which has been found as early as the third millennium BCE. Antiseptics, from the Greek words anti (against) and sepsis (decay) were also used in ancient times – with the Egyptians using resins, oils, and spices to preserve bodies, and the Greeks and Romans quickly realising the antiseptic properties of honey, vinegar, and wine.


Today, when looking at medicinal and anaesthetic drugs – it is important to consider those classed as ‘historical’. Take ether and halothane for example; ether (as a surgical anaesthetic) was first demonstrated by William T. G. Morton in Massachusetts in 1846, whilst halothane (an inhaled general anaesthetic) was first used clinically by M. Johnstone in Manchester, in 1956. Despite such agents no longer being in routine use in countries such as the United Kingdom, in other places they are still widely available. With this in mind, knowledge of discontinued drugs may prove extremely useful to a range of healthcare professionals.


Indeed, it could be argued that many more agents should still be used, as they have in the distant past, to treat commonly encountered conditions. For instance, take honey, white wine, flax, and flour. A useful medical tool kit? Many wouldn’t think so, but they were all used to successfully treat a severe facial injury, suffered by Henry V. These ‘antiseptic’ agents, together with the skill of a surgeon (no anaesthetists, intensivists, or microbiologists existed in 1403) led to the prince’s survival.



‘Henry V of England, c. 1520’ from The Royal Collection. Public Domain via Wikimedia Commons.

On 21st July 1403, during the Battle of Shrewsbury, Prince Henry was struck by an arrow that penetrated his cheek, possibly entering his maxilla (the upper jawbone). Despite the efforts of numerous royal physicians, the metal arrowhead remained lodged in Prince Henry’s face. A surgeon named John Bradmore was summoned to attend to the young prince.


Firstly, he used probes soaked in rose honey to clean the wound. This was a common practice (and indeed is still used today), as the medicinal importance of honey had long been documented – it maintains a moist wound condition, offers antibacterial activity, and serves as a barrier preventing further infection. Secondly, Bradmore made a device similar to a pair of tongs to remove the barbed arrowhead, before finally washing the wound with white wine. The use of wine in the dressing of wounds dates back to the Greek physician Hippocrates, and can be seen as a precursor to our modern pure-alcohol! The wound was cleaned daily with a paste-like mixture of honey, flour, and flax.


The concept employed by Bradmore, of ‘source control in sepsis’, is one still at the heart of modern sepsis management. It is also an example of how humans are perhaps able to deal with localized sepsis more effectively than with systemic sepsis (i.e. wider infection). Indeed, Henry himself is rumoured to have died of dysentery (an infection of the intestines) during the siege of Meaux in 1422.


Moving into more recent times, sodium pentothal, which is still used for the induction of anaesthesia, has for many years been associated with the statement:


More US servicemen were killed at Pearl Harbor by pentothal than by the Japanese


Whilst I was taught this as a young trainee anaesthetist, the reality, as always, is somewhat different. Rather than deaths being caused by the drug itself, it was the doses of pentothal being administered to patients that caused cardiovascular collapse. Whilst doctors were acting with their patients’ best interests at heart, it was a lack of understanding of drugs and their impact that led to the demise of so many.


So what can we expect in future anaesthetic drug development? A number of agents are currently in development, and ‘duration of action’ is a key area that is being targeted. Agents based on ‘benzodiazepine receptor agonists’ (a group of prescription drugs that slow down the body’s central nervous system) are being studied – but with more rapid onset and a shorter duration of action. ‘Etomidate derivatives’ used for induction of anaesthesia are also being developed that do not have the problematic adrenocortical suppression (adrenal glands producing less hormones) that is associated with the former’s use.


Whatever the future may hold, drugs used in anaesthesia and intensive care will continue to develop and progress. Whether it is the newest technology or the most ancient of methods, it is as important as ever that health professionals have readily accessible and sound pharmacological information. This evidence-based approach not only improves patient safety, but also the efficacy and efficiency of treatment – lessons from history we can all benefit from!


Featured image credit: ‘Honey’ by maxknoxvill, CC0 Public Domain via Pixabay.


The post Historical lessons for modern medicine appeared first on OUPblog.


Published on June 17, 2016 04:30

Lord Byron’s Passion

Two hundred years ago today Lord Byron wrote a brief, untitled Gothic fragment that is now known as ‘Augustus Darvell’, the name of its central character. The most famous author in the world at the time, Byron produced the tale when he was living at the Villa Diodati, on the shores of Lake Geneva, and in the daily company of Percy Bysshe Shelley, Mary Godwin (the future Mary Shelley), and John Polidori, Byron’s personal physician. Wet, uncongenial weather had for several days kept the party indoors, where together they read a collection of ghost stories, and Byron issued a challenge that produced some of the most spectacular results in English literary history: ‘We will each write a ghost story,’ he declared. Shelley produced a verse fragment beginning ‘A Shovel of his ashes took,’ which may constitute a vestige of his contribution to the competition. The other three got further. Byron wrote ‘Augustus Darvell’. Polidori composed The Vampyre. Mary Godwin created Frankenstein.


Polidori openly acknowledged that ‘Augustus Darvell’ laid the groundwork for The Vampyre. In both tales, two male friends travel from England to the Levant, where one of them dies, though not before securing ‘from his friend an oath of secrecy with regard to his decease.’ At this point, Byron’s story breaks off, but in Polidori’s tale, the dead friend of course comes back to life as a vampire, returns to England, and gluts his thirst at the throat of his friend’s sister. Byron does not explicitly mention vampires in ‘Augustus Darvell’, but as he made clear three years earlier in his immensely successful ‘Oriental Tale’, The Giaour (1813), they enthralled him. ‘But first, on earth as Vampire sent, / Thy corse shall from its tomb be rent,’ he writes;


Then ghastly haunt thy native place,


And suck the blood of all thy race,


There from thy daughter, sister, wife,


At midnight drain the stream of life.


Perhaps most remarkably, as Polidori used ‘Augustus Darvell’ as the model for The Vampyre, so he used Lord Byron himself as the model for Lord Ruthven, the blood-sucking lady-killer of his tale. Like Lord Byron, Lord Ruthven is a man of wealth, good looks, mobility, callousness, and keen sexual appetites. In transforming the bestial ghoul of earlier vampiric lore into, in effect, Lord Byron with fangs and eternal life, Polidori created the modern vampire, a glamorous figure whose potency both attracts and appals us, and whose grip on the popular imagination has for two centuries now remained fascinatingly strong. Given Byron’s contribution to the tale, it is perhaps not surprising that, when it was first published in April 1819, it appeared under his name, though Polidori was soon able to establish his authorship, in part because Byron published ‘Augustus Darvell’ in response, and in order to demonstrate how far it resembled The Vampyre.



 


John William Polidori, by F.G. Gainsford. Public Domain via Wikimedia Commons

The importance of Byron’s tale, though, goes beyond its role in the formation of the modern vampire. ‘Augustus Darvell’ also gives voice to the complicated nature of Byron’s sexuality. Britain in his day was more homophobic than at any other time in its history. ‘Even to have such a thing said is utter destruction & ruin to a man from which he can never recover,’ Byron declared. When men were convicted of ‘sodomitical’ acts, they were hanged, and scores of men in Byron’s day met this fate. On the Continent, however, homosexuality was not even illegal, let alone punishable by death. In Italy, as Byron quipped in 1820, ‘they laugh instead of burning.’ During his first tour of the Levant in 1809-11, Byron revelled in more than two hundred homosexual encounters, though when he returned to England he went straight back into the closet, and for the next four years all his known intimacies were heterosexual (though he enjoyed it a lot when Lady Caroline Lamb dressed up for him as a page boy). Scandal drove Byron back to the Continent in April 1816, less than two months before the writing of ‘Augustus Darvell’, and before long he was enjoying physical relationships with both sexes, though it seems clear that his deepest passions were reserved for young men.


Nearly eighty years before Lord Alfred Douglas wrote of the ‘love that dares not speak its name,’ Lord Byron suggestively invokes his own homosexuality in ‘Augustus Darvell’, whose private history contains ‘peculiar circumstances,’ and who is ‘prey to some cureless disquiet.’ Darvell can control, but he cannot altogether disguise, the intensity of his feelings, for like Byron and many other homosexual men in the nineteenth century and far beyond, he has had to develop ‘a power of giving to one passion the appearance of another.’ It is the strain of pretending to be what he is not, of having to hide his love away, that seems to be producing in him ‘an inquietude at times nearly approaching to alienation of mind.’ It may also be the reason why this tale of an intimacy between two male friends and travelling companions hinges on one of them taking an oath of secrecy regarding the other.


In The Vampyre, Polidori reimagines ‘Augustus Darvell’ as a tale in which Lord Ruthven’s relationship with his male companion is subsumed in his sinisterly opportunistic consumption of women, which Polidori elevates to the level of myth. In ‘Augustus Darvell’ itself, Byron concentrates on the relationship between the two men. During the previous four years he had been the most famous man in England, before falling spectacularly from grace, and in response to his own challenge to write a ghost story, he produced a tale in which he conjures both the terrors of sodomitical persecution in Britain, and – from the safety of the Continent – his own hidden life of alienation, secrecy, and homosexual desire.


Featured image: Mary Shelley, Frankenstein and the Villa Diodati via The British Library, Free from known copyright restrictions.


The post Lord Byron’s Passion appeared first on OUPblog.


Published on June 17, 2016 03:30

Inequality of what?

Has inequality increased over the last several generations? The answer depends upon the “currency” for inequality assessment. An item has been distributed among the population of interest, and we are using a number to summarize that distribution. But which item is it?


Consider the United States. Income inequality among the US population has gone up since the 1970s. If we apply a standard inequality metric (such as the Gini coefficient or a variance-based metric) to the population-wide distribution of annual income, the time trend since the 1970s shows a clear increase in inequality. On the other hand, it appears that inequality of longevity in the US has decreased over the same period. This can be calculated by taking a mortality table for a given year, showing the number of deaths during that year of individuals in their first year of life, second year of life, etc.; applying an inequality metric to this distribution of age-at-death; and then looking at the time trend in the inequality scores.
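The Gini coefficient mentioned here applies equally well to any of these distributions, whether annual incomes or ages at death from a mortality table. As a rough sketch (the sample values below are invented for illustration), it can be computed from the sorted values alone:

```python
def gini(values):
    """Gini coefficient of a distribution, using the identity
    G = 2 * sum_i (i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    where x_(i) are the values sorted ascending and i runs 1..n."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Perfect equality scores 0; full concentration approaches 1.
print(gini([10, 10, 10, 10]))  # 0.0
print(gini([0, 0, 0, 100]))    # 0.75
```

Tracking this score year by year, for income or for age-at-death, yields exactly the kind of time trend the paragraph describes.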


Inequality of happiness in the US also appears to have decreased. A large body of work now seeks to quantify happiness by asking individuals survey questions such as: “How happy are you on a scale of 1 to 5?” Betsey Stevenson and Justin Wolfers looked at the variance in the distribution of these happiness scores in the US, for each year starting in the 1970s, and found that the variance of happiness has gone down.


The lesson of these examples is that the empirical project of measuring inequality depends upon a prior normative determination, regarding the appropriate “currency” for inequality assessment. The same is true for the project of assessing a society’s overall condition. GDP is the most widely used indicator of how countries are faring. The GDP calculation is based on the money value of marketed goods and services produced in the country during a given year. GDP per capita is thus a measure of the average flow of market value to the country’s population. But we could, instead, quantify overall social condition by calculating the average happiness, health, life expectancy, or educational level of individuals in the population, or the average level of environmental quality (pollutant load) to which they are subjected.


Analogous points hold true, once more, with respect to the measurement of poverty. Measuring poverty means identifying some threshold (the poverty level), determining the fraction of the population with holdings below this level, and characterising the below-threshold distribution. But first we must ask: poverty of what? The traditional approach (as with inequality measurement and GDP) is to focus on material well-being—specifically, the percentage of the population whose incomes are below an income-poverty level, and the distribution of income among this income-deprived group. However, the burgeoning literature on so-called “multidimensional” poverty identifies a plurality of goods; the degree of poverty in a population is measured as a function of dimension-specific cutoffs, and the estimated distribution among poor individuals (those below the cutoff on at least one dimension) of multidimensional bundles of the referenced goods.
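The counting logic behind dimension-specific cutoffs can be sketched in a few lines. This is only an illustration, not any particular published index: the dimensions, cutoff values, and individuals below are all hypothetical, and a person counts as poor if deprived on at least one dimension, as the text describes:

```python
# Hypothetical dimension-specific cutoffs: a person is "deprived"
# on a dimension when their holding falls below the cutoff.
cutoffs = {"income": 12000, "health": 0.5, "schooling": 9}

# Hypothetical multidimensional bundles for three individuals.
people = [
    {"income": 9000,  "health": 0.8, "schooling": 12},  # income-deprived
    {"income": 30000, "health": 0.3, "schooling": 16},  # health-deprived
    {"income": 25000, "health": 0.9, "schooling": 11},  # not deprived
]

def deprivations(person):
    """List the dimensions on which this person falls below the cutoff."""
    return [dim for dim, z in cutoffs.items() if person[dim] < z]

# Poor = deprived on at least one dimension.
poor = [p for p in people if deprivations(p)]
headcount_ratio = len(poor) / len(people)
print(headcount_ratio)  # 2 of the 3 people count as poor
```

Note that the income-poor and the health-poor here are different people, which is precisely why a single-currency measure and a multidimensional one can disagree about who is worst off.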


In short, the assessment of inequality, poverty, and overall social condition requires a prior determination regarding the “currency” for such assessment. But how exactly to move beyond the traditional approaches?



Measuring Tape by Jamie. CC BY 2.0 via Flickr.

One, straightforward, possibility is the so-called “dashboard.” This means specifying a plurality of goods (income, health, longevity, education, environmental quality, etc.), and then applying an inequality, poverty, or overall-condition metric to the population distribution of each good, taken separately. Yet the dashboard ignores the correlation among goods. For example, a given distribution of income is less fair if those with high incomes are also highly educated, healthy, long-lived, etc.


A more attractive possibility is to measure each individual’s well-being, as a function of their multidimensional bundle (their holdings of each good); and then assess the population distribution of individuals’ well-being numbers. Various specific well-being measures are possible, corresponding to different normative positions regarding the nature of well-being: (1) the individual’s level of happiness, as produced by her holdings of the goods (the hedonic view of well-being); (2) a “utility” number, that takes account both of the individual’s holdings and of her preferences (the preference-based view of well-being); and (3) an “objective” number, determined just by the individual’s holdings, as opposed to her happiness or preference-satisfaction (the objective-good view of well-being).


Yet a third possibility is a correlation-sensitive multidimensional metric: one that takes account of the correlation among goods, but does not attempt to measure well-being at the individual level. This, indeed, is the structure of the main methodologies used in the literature on multidimensional poverty mentioned above.


Researchers Koen Decancq and Dirk Neumann examine a large data set, the German Socio-Economic Panel (SOEP), a large representative sample of German citizens that records information about the individuals’ income, health condition, employment status, stated “life satisfaction” (a marker of happiness), and various demographic attributes. For each individual, they calculate five different indicators: (a) income; (b) a composite objective measure of individual well-being, combining information about the three goods income, health, and employment status; (c) two preference-based measures of well-being, combining information about those goods and individual preferences for combinations thereof (these preferences themselves estimated in a subtle way from the data); and (d) life satisfaction. Decancq and Neumann then compare the worst off individuals (the lowest decile according to each of the five indicators).


They find large divergences. For example, the worst off individuals according to the income measure are, by definition, those in the lowest decile of the income distribution; by contrast, the average income of the worst off according to the life-satisfaction measure is barely lower than the average income of the entire population; and the worst-off according to the preference measures tend to be much less healthy than the income- or satisfaction-poor (reflecting a strong and unfulfilled preference for health among this group). Decancq and Neumann also quantify the overlap of the five measures—finding that very few individuals are worst off according to all five measures, and surprisingly few even according to two or three.


Their conclusion: “measurement matters.” We need not only to debate the causes and remedies for inequality and poverty but also (and indeed first) to ask: inequality and poverty of what?


Featured Image Credit: homelessness charity poor difference by ptrabattonui. Public domain via Pixabay.


The post Inequality of what? appeared first on OUPblog.


Published on June 17, 2016 02:30

Bleak skies at night: the year without a summer and Mary Shelley’s Frankenstein

Two hundred years ago this month, Mary Shelley had the terrifying ‘waking dream’ that she subsequently molded into the greatest Gothic novel of all time: Frankenstein. As all who have read the book or seen one of the many film adaptations will know, the ‘monster’ cobbled together out of human odds and ends by the rogue scientist Victor Frankenstein is galvanised into existence by the power of electricity. In reality, however, the monster and all the book’s characters were brought to life by the power of the Earth.


Shelley had her waking dream while holidaying on the shores of Lake Geneva with Percy Bysshe Shelley, Lord Byron, and others in 1816. The weather was appalling during the late spring and early summer: incessant rain and unseasonably cold weather imprisoned the party indoors, where they amused one another with ghost stories. The awful meteorological conditions were not, however, the result of some haphazard fluctuation in the European weather, but part of a pattern that saw much of the northern hemisphere experience exceptionally wet and unseasonably cold conditions. Neither was this so-called ‘Year Without a Summer’ a random climatic occurrence, but the wide-ranging upshot of a gigantic volcanic eruption in Indonesia during April the previous year.


Topping out at an impressive 4,300 m, making it the highest peak in the East Indies, the huge Tambora volcano towered over the Indonesian island of Sumbawa. The volcano rumbled into life in 1812, with swarms of earthquakes and gas emissions hinting that magma was finally on the move after more than two thousand years of dormancy. It took three years, however, before things really started to get interesting. On 5th April 1815, a massive detonation shook the volcano, dumping ash across much of the island. But that was not the end of it. Five days later, on the evening of 10th April, the titanic climactic blast began: a 24-hour eruption that obliterated the landscape of Sumbawa and reached out across the planet to wreak havoc with the world’s weather.


The eruption was the largest in recorded history and, quite possibly, the greatest since the end of the last Ice Age. It also had the most fundamental societal impact of any volcanic blast in modern times; hardly surprising, given the scale of the event. Tambora ejected around 100 cubic kilometres of magma — enough to bury the whole of Greater London tens of metres deep — mainly in the form of ash and the deadly torrents of gas, pumice, and rock known as pyroclastic flows. At the height of the eruption, which could be heard as far as 2,000 km away, it was spurting out more than one and a half cubic kilometres of ash and debris every hour. The column of ash reached upwards to the edge of space and spread out across the whole of South East Asia and beyond.



Aerial view of the caldera of Mt Tambora at the island of Sumbawa, Indonesia by Jialiang Gao. CC-BY-SA-3.0 via Wikimedia Commons.

When all eventually went quiet on the evening of 11th April, the top half kilometre of the volcano had been sliced off, replaced by a great steaming caldera six kilometres wide. An estimated 12,000 people lost their lives during the eruption, but at least 60,000 died in the aftermath, succumbing to starvation or disease following the destruction of the harvest and contamination of water supplies. Meanwhile, the sixty million tonnes of sulphur gases lofted into the stratosphere slowly and methodically worked their wickedness, combining with atmospheric water vapour to form a veil of sulphuric acid mist that blocked out the sun right across the planet and caused temperatures to plunge.


The awful summer of 1816, which lit the touchpaper for the creation of Victor Frankenstein and his monster, was the second coldest in the northern hemisphere in more than 600 years. The conspiracy of wet and cold conditions wiped out the harvest across the continent and in the UK and Ireland, so that bread prices spiralled out of the reach of even those on average wages. Famine and food riots stalked the continent, while rampant typhus took tens of thousands of lives in Ireland alone. On the other side of the Atlantic, heavy snow fell across New England in June, and hard frosts throughout the summer slashed the growing season in half. Governments and economies struggled to survive as the western world faced what the economic historian John Post has called its ‘last, great, subsistence crisis.’


The cooling effect of the Tambora blast, which saw global average temperatures fall by around 1°C, lasted for at least three years, after which the world’s weather returned to normal. Nothing like it has been seen since, although large volcanic explosions, such as the 1991 Pinatubo eruption in the Philippines, have caused measurable — although smaller — reductions in global temperatures. Tambora started to rumble again in 2011, and a moderate eruption in the coming years cannot be ruled out, although this would be very unlikely to match the scale of the 1815 blast, which followed more than two millennia of dormancy. This does not mean, however, that we can relax. There are plenty of currently restless volcanoes out there capable of spawning gigantic climate-perturbing eruptions that could make life very difficult for us all.


Most worrying are two South American volcanoes. At Uturuncu in Bolivia, a bulge some 70 km across has been growing since the early 1990s. Meanwhile, at Chile’s Laguna del Maule volcano, the ground surface has been swelling at an astonishing 25 cm a year above a huge, expanding body of magma just 6 km down. A major eruption at either volcano in the near to medium term would not be a surprise. With mounting evidence that accelerating climate change can also promote volcanic activity in certain circumstances, the stage may well be set for a repeat of the Tambora experience, with all that this might entail for future food supply and security in our increasingly interconnected and globalised world.


Featured Image Credit: Hawaii volcano fire by tpsdave. Public domain via Pixabay.


The post Bleak skies at night: the year without a summer and Mary Shelley’s Frankenstein appeared first on OUPblog.


Published on June 17, 2016 00:30
