Oxford University Press's Blog
September 6, 2014
Royal teeth and smiles
Much of the comment on the official photographic portrait of the Queen released in April this year to celebrate her 88th birthday focussed on her celebrity photographer, David Bailey, who seemed to have ‘infiltrated’ (his word) the bosom of the establishment. Less remarked on, but equally of note, is that the very informal pose that the Queen adopted showed her smiling, and not only smiling but also showing her teeth.
It is only very recently that monarchs have cracked a smile for a portrait, let alone a smile that revealed teeth. Before the modern age, monarchs embodied power – and power rarely smiles. Indeed it has often been thought worrying when it does. Prime Minister Tony Blair’s endlessly flashing teeth triggered as much suspicion as approval. The negative reaction was testimony to an unwritten law of portraiture, present until very recently in western art. According to this, an open mouth signifies plebeian status, extreme emotion, or else folly and licence, bordering on insanity. As late as the eighteenth century, an individual who liked to be depicted smiling as manifestly as Tony Blair would have risked being locked up as a lunatic.
The individual who broke this unwritten law of western portraiture was Louise Élisabeth Vigée Le Brun, whose charming smile – at once twinklingly seductive and reassuringly maternal – was displayed at the Paris Salon in 1787. It appears on the front cover of my book, The Smile Revolution in Eighteenth-Century Paris. The French capital had witnessed the emergence of modern dentistry over the course of the century – a subject that has been largely neglected. In addition, the city’s elites adopted the polite smile of sensibility that they had learned from the novels of Samuel Richardson and Jean-Jacques Rousseau. Madame Vigée Le Brun’s smile shocked the artistic establishment and the stuffy court elite out at Versailles, who still observed tradition, but it marked the advent of white teeth as a positive attribute in western art.

Yet if Vigée Le Brun’s example was followed by many of the most eminent artists of her day (David, Ingres, Gérard, etc), the white tooth smile took much longer to establish itself as a canonical and approved portrait gesture. The eighteenth century’s ‘Smile Revolution’ aborted after 1789. Politics under the French Revolution and the Terror were far too serious to accommodate smiles. The increasingly gendered world of separate spheres consigned the smile to the domestic environment. And for most of the nineteenth century, monarchs and men of power in the public sphere, following traditional modes of the expression of gravitas, invariably presented a smile-less face to the world.
Probably the first reigning monarch to have a portrait painted that revealed white teeth was Queen Victoria. This may seem surprising given her famous penchant for staying resolutely ‘unamused’. Yet in 1843, she commissioned the German portrait-painter Franz Xaver Winterhalter to paint a delightfully informal study that showed the twenty-four-year-old monarch reclining on a sofa, revealing her teeth in a dreamy and indeed mildly aroused smile. Yet the conditions of the portrait’s commission showed that the seemly old rules were still in place. For Victoria had commissioned the portrait as a precious personal gift for her ‘angelic’ husband, Prince Albert. What she called her ‘secret picture’ was hung in the queen’s bedroom and was not seen in public throughout her reign. Indeed, its display in an exhibition in 2009, over a century after her death, marked only its second public showing since its creation. This was three years after Rolf Harris’s 2006 portrayal of the present queen with a white-tooth smile, a significant precursor to David Bailey’s photograph.
If English monarchs have thus been latecomers to the twentieth-century smile-fest, their subjects have been baring their teeth in a smile for many decades. As early as the 1930s and 1940s, the practice of saying ‘cheese’ when confronted with a camera became the norm. Hollywood-style studio photography, advertising models, and more relaxed forms of sociability and subjectivity have combined to produce the twentieth century’s very own Smile Revolution. So it is worth reflecting on whether the reigning monarch’s early twenty-first-century acceptance of the smile’s progress will mark a complete and durable revolution in royal portraiture. Seemingly only time – and the Prince of Wales – will tell.
The post Royal teeth and smiles appeared first on OUPblog.

September 5, 2014
Migratory patterns: H-OralHist finds a new home on H-Net Commons
It is hard to believe that it has been nearly one year now since I was approached with a unique opportunity. I was working as a newly appointed staff member of the Baylor University Institute for Oral History (BUIOH) when then-Senior Editor Elinor Maze asked if I would be interested in joining the ranks of H-OralHist and guiding the listserv’s transition to a new web-based format, the H-Net Commons.
My journey began with a nomination to the H-OralHist editorial team, a journey I took with BUIOH Editor Michelle Holland. For the uninitiated, H-OralHist originally served as an e-mail subscription listserv for those interested in current topics in the field of oral history. Participants could submit a question, a news announcement, or details on an upcoming conference or event, and H-OralHist would circulate that information to its membership. For every topic, members had the ability to respond and provide further information or answers as they saw fit. The H-OralHist editors would moderate this discussion, making sure the flow of information stayed relevant.
After our induction in October 2013, Michelle and I became the first editors trained in the new web-based system. Previously, editors merely interacted with listserv members via e-mail exchanges using the H-Net mail server. The H-Net Commons has a much more robust interface to navigate, including both the public face through which the entire membership interacts and the back-end review system where editors select and work with submissions. The new features and training are quite substantial. The H-Net Commons now provides multiple avenues of interaction, ranging from the familiar discussion posts to the ability to upload photos, write blog posts, and more.
While Michelle took the editorial reins of H-OralHist in early 2014, still operating under the old listserv system, I worked with the H-Net administrators to prepare our list for migration to the new Commons platform. In late March, it was our turn in the migration schedule, and we went live on the new platform in April 2014. Michelle and I worked out the initial bugs, and pretty soon the conversations were flowing again. Users of the new H-OralHist may now choose how they stay on top of new discussions. They can continue to have individual topics pop up in their e-mail inbox, receive daily digest summaries, or work exclusively with the new online platform. The Commons functions much like a typical online forum now, allowing one to reply to discussions from the topic page. For those interested, the archive of prior discussions still exists and is available from the splash page sidebar under “Discussion Logs.”
At the moment, the remainder of the H-OralHist editorial team is working through the new training. We have had one successful editorial transition already this summer, with two more planned for the rest of the year. My hope is that as we enter 2015, the entire staff will have the necessary experience under their belts and editorial shifts will proceed like clockwork. As for me, I am currently revisiting the old resource materials and adding/cleaning links to the various oral history collections and centers across the world. Additionally, with the help of Oral History Association President Cliff Kuhn, we have planned an H-OralHist open forum event for this year’s annual meeting in Madison, WI. It is scheduled for noon on Thursday, October 9th. It will be an opportunity for anyone — especially our 3690 subscribers — to stop by and ask questions about the new web interface or offer suggestions on what other tools we should employ on the Commons. I hope I will get an opportunity to meet many of you there as we continue the discussion on the future of this invaluable resource we call H-OralHist!
Headline image credit: Migrating birds. Public domain via Pixabay.
The post Migratory patterns: H-OralHist finds a new home on H-Net Commons appeared first on OUPblog.

Dallas Cowboys: seven strategies that will guarantee a successful 2014 season
As a football team, the Dallas Cowboys are mired in mediocrity. In the 19 years since they last won the Super Bowl, their regular season record is a middling 146-142. The team made the playoffs seven times during that span, with only two wins to show for its efforts. The prognosis for the 2014 season is more of the same.
As a business, however, the Dallas Cowboys are extraordinary. Forbes values the team at $3.2 billion, ranking number one among all NFL teams, and there are few signs of slowing growth. Win or lose this season, the Cowboys will still be a profitable and healthy business by year’s end.
The franchise has achieved the gold standard in sports business: making money and remaining relevant regardless of their performance on the field. Here are seven strategies that underlie the Cowboys’ success as a company:
Distinct Identity: Widely known as “America’s Team,” the Cowboys have become part of the American cultural fabric, much like Coca Cola, Apple, and Disney. When a team has a strong and differentiated identity, especially one that evokes patriotism and nostalgia, it ensures loyal connections through good times and bad.
Star Powered Narratives: The Cowboys are a reliable source of dramatic storylines for the insatiable sports media, keeping the team top-of-mind throughout the year. Starring in leading roles are owner Jerry Jones as P. T. Barnum, Tony Romo as the star-crossed quarterback who can’t seem to realize his full potential, and wide receiver Dez Bryant as the prima donna on the sidelines. Put them together, add in other supporting roles (e.g. coach Jason Garrett as the man on the hottest seat in sports), and the result is a real-life soap opera broadcast live from Dallas.
Maximized Media Rights: Due to the NFL’s rights agreements with the major networks, the Cowboys will receive more than $250 million in revenue per year from 2014-2017, an increase of over $62 million from the 2013 season. With a salary cap for Cowboys player compensation of $133.9 million in 2013, this all but ensures profitability for the team whether they win or lose.
Engaging Sportscape: In the battle versus the fan cave, Jerry’s World, also known as AT&T Stadium, offers compelling reasons to get up off the couch and buy a ticket. Opened in 2009, this sports playground is headlined by the largest Jumbotron in the universe, roomy luxury suites, an art gallery that has become a destination unto itself, and new in-stadium mobile experiences powered by AT&T. Jerry’s World is not just a stadium, but a piece of pop culture.
Public-Private Partnerships: Jerry Jones and his staff are adept at working with local governments to offset the costs of their state-of-the-art facilities. The city of Arlington funded $325 million of the $1.2 billion cost of AT&T Stadium, and the city of Frisco is paying $115 million to build a new practice facility/team headquarters in their city, with the Cowboys organization handling any overages.
Robust Sponsorship Roster: Three consecutive seasons of 8-8 and no playoff appearances didn’t deter Hublot and Carnival Cruise Lines from becoming new sponsors of the Cowboys this offseason. The team continues to offer each sponsor the ability to target lucrative customers and generate a significant return on investment for its marketing spend irrespective of the team’s record.
Commitment to Inclusivity: Sports, like almost every industry, is experiencing demographic shifts in the marketplace, specifically with regard to the growing Hispanic population. The Cowboys have been first movers in embracing Latino fans in the United States and Mexico through a variety of outreach programs.
No sports business is immune to consistent losing. If the Cowboys fell victim to multiple seasons of futility (think 3-13 for three consecutive years), the fan base and revenue strength could weaken. However, these are extreme and unlikely circumstances. For the foreseeable future, the Cowboys’ strategic approach will ensure continued success during periods of mediocrity and position the team to reap even greater rewards when the pendulum of winning swings back in their favor.
The post Dallas Cowboys: seven strategies that will guarantee a successful 2014 season appeared first on OUPblog.

Education and service in residency training
America’s system of residency training — the multi-year period of intensive clinical study physicians undergo after medical school and before independent practice — has dual roots. It arose in part from the revolution in scientific medicine in the late nineteenth century and the infatuation of American educators of the period with the ideal of the German university. However, it also had roots in medical practice, particularly in the apprenticeship system. Accordingly, it developed many characteristics of an institutionalized apprenticeship. These dual roots of the residency system account for its defining dilemma: the tension between the responsibility of residency training to provide high-level professional education and the desire of sponsoring hospitals to extract as much inexpensive labor from their residents as possible.
The “education versus service” tension has shaped the residency system in America at every moment of its development. Graduate medical education has always imposed on house officers a vast number of chores (called “scut work” by interns and residents) that easily could be done by individuals without the MD degree. These include drawing blood specimens, starting intravenous lines, carrying blood to and from the blood bank, remaining physically present throughout a blood transfusion, filling out routine forms and requisitions, labeling specimens and carrying them to the laboratory, transporting patients to and from the X-ray department or procedure rooms, holding retractors during surgery on patients they do not know, and performing a variety of chemical, bacteriological, and hematological tests. Since the 1990s, traditional chores have become fewer, as many hospitals have hired more phlebotomists, IV teams, transport personnel, and laboratory technicians. However, house officers have increasingly found themselves with a much larger burden of administrative chores, such as scheduling tests, arranging for procedures, calling for consultations, and handling all discharge arrangements. “Scut work” has not disappeared, though it has changed in form.
The burden of “scut work” has never fallen on all house officers equally. In general, interns and junior residents have been burdened with a far larger amount of chores than more senior residents. The amount of “scut work” has always been much more in some fields, such as general surgery, than in others, such as psychiatry. The burden of chores has always been the greatest at small, private, community hospitals not affiliated with a medical school and at city and county hospitals, where funds available for support personnel are often scarce. However, even at the most prestigious teaching hospitals, house officers have never been strangers to “scut work.”

The problem with these activities is not that they are unimportant to patient care. Rather, it is that they can be done equally well by ancillary staff without medical training. These duties can be effectively performed by phlebotomists, transport personnel, laboratory technicians, nurses, and clerks. In doing these tasks, house officers are working far below their levels of competence. These chores typically come on top of their medical duties and hence easily interfere with their learning and care of patients. The work usually gets done, but the cost is frequent exhaustion and frustration.
The tension between education and institutional service requires all involved to remember certain points. House officers and faculty alike know that “education” does not simply mean spending time at conferences and lectures. Rather, they understand that most learning comes from the direct care of patients, with discussions, reading, and reflection supplementing the process. In addition, house officers and faculty know that the sine qua non of a good residency is the opportunity for house officers to assume responsibility in patient care. This means, among other things, that the responsible house officer will do anything necessary for their patients’ care, even if it is not really in their job description. Thus, the responsible house officer will draw a sick patient’s blood at 2 AM if no one else is around to do it. What distinguishes work that is legitimately a part of clinical responsibility from “scut work”? Common sense. It is one thing for an intern to draw blood at 2 AM from their own patient. It is quite another to come in an hour or two early every morning to draw blood from every patient on the floor because the hospital does not wish to spend the money to have a phlebotomy team.
Over the past century, medical educators have been keenly aware that house officers have been saddled with many tasks that carry little educational value. Yet the economic exploitation of house officers has continued unabated. The reason for this is not hard to find: hospitals and faculties alike benefit too much from the work of house officers. Hospitals benefit financially. With house officers, hospitals can hire fewer clerks, dispatchers, orderlies, laboratory technicians, and phlebotomists. Private practitioners, both at teaching and community hospitals, know that a good house staff allows them more efficient days in the office and calmer, more restful nights and weekends at home. Full-time faculty members similarly benefit. Relieved by house officers from many details of patient care, they are free to devote more time to research, scholarship, and their own private practices.
Thus, exhortations to reduce the service burden of residency have accomplished little because no one has addressed the underlying financial issue — that is, how to pay for the services rendered by house officers. Since Medicare became the primary funder of graduate medical education in 1965, house officers have seen their salaries rise, but their working conditions have remained brutal as they continue to care for patients with too few nurses, orderlies, ward clerks, phlebotomists, and other important aides. Accordingly, the residency system has plodded along, its house officers desperately overworked. Many commentators over the years have wondered why the residency system has seemingly been so resistant to change, particularly in terms of lessening the workloads of house officers. The reason is that medical staffs and hospitals have been unable or unwilling to provide the necessary funds to do so.
Featured image: RCSI Bahrain White coat ceremony by Mohamed CJ. CC-BY-SA-3.0 via Wikimedia Commons.
The post Education and service in residency training appeared first on OUPblog.

The ubiquity of structure
Everything in the natural world has structure – from the very small, like the carbon 60 molecule, to the very large such as mountains and indeed the whole Universe. Structure is the connecting of parts to make a whole – and it occurs at many different levels. Atoms have structure. Structures of atoms make molecules, structures of molecules make tissue and materials, structures of materials make organs and equipment and so on up a hierarchy of different levels as shown in the figure. Within this hierarchy of structure, man-made objects vary from the very small, like a silicon chip to the very large like a jumbo jet. Whereas natural structures have evolved over aeons, man-made structures have to be imagined, designed and built though our own efforts.
Many people, including much of the media, attribute this activity solely to architects. This is unfortunate because architects rely on engineers. Of course the responsibilities are close – it is a team effort. Architecture is the devising, designing, planning, and supervising of the making of something. Engineering is the turning of an idea into a reality – it is about conceiving, designing, constructing, operating, and eventually decommissioning something to fulfil a human need. The fact is that engineers play a critical creative role in making structural forms that function as required. They should be given at least equal credit.
Your personal structure is your bones and muscles – they give you form and shape and they function for you as well – for example bone marrow produces blood cells as well as lymphocytes to support your immune system. Your musculoskeletal system also includes all of your connecting tissue such as joints, ligaments and tendons which help you move around. On it are hung all of your other bits and pieces, such as your heart, brain, liver etc. Without structure you would just be a blob of jelly – structure supports who you are and how you function.
In a similar way the structure of a typical man-made structure, like a building, will have beams and columns together with all of the connecting material such as joints, slabs, welds and bolts which keep it together. On it are hung all of the other parts of the building such as the equipment for heating, lighting, communication and all of the furniture, fixtures and fittings. Without structure a building would just be a random pile of components – the function of structure is to support all the other functions of the building.

We can think of the form of a structure from two different points of view – I’ll call them architectural and functional. If you were a building, then the architect would decide your gender, what you look like, your body shape and appearance. However the architect would not decide what is necessary to make the various parts of your body function as they should – that is the job of various kinds of engineer. In other words the architectural form concerns the sense and use of space, functional occupancy by people, symbolism and relationship to setting. It can be decorative and sculptural. The role of an architect is to understand and fulfil the needs of a client for the ways in which a building is to be used and how it will look – its overall form, appearance and aesthetic effect. But the architects who design buildings are not engineers and rarely have the level of scientific knowledge required of professionally qualified engineers. So for example structural engineers must design a structural form that has the function of making a building stand up safely. Indeed engineering safety dominates the design of large structures such as sky-scrapers, bridges, sports stadia, dams, off-shore platforms, fairground rides, ships and aeroplanes.
So what happens when the best architectural form and the best structural form are different – which takes precedence?
Safety and functionality are important necessary requirements – but of course they aren’t sufficient. We need more than that and herein lies the issue. Functionality is often taken for granted, assumed and dismissed as not needing an artistic, creative input – requiring ‘mere’ technique and ‘known’ science. But that is a misreading of being innovative and creative – engineers often do breathtaking complex things that have never been done before. Scientific knowledge is necessary but not sufficient for inspirational engineering – many assumptions and assessments have to be made and there is no such thing as zero risk. Engineering requires practical wisdom.
Some argue that form should follow function – another way of saying that the ends determine the means. However the original meaning, by the American architect Louis Sullivan in 1896, was an expression of a natural law. He wrote ‘Whether it be the sweeping eagle in his flight or the open apple-blossom, the toiling work horse … form ever follows function, and this is the law …’

The philosopher Ervin Laszlo pointed out that the difference between form and function does not exist in natural structures. So nature shows us the way. Form and function should be in harmony. We should recognize that good architecture and good engineering are both an art requiring science – but aimed at different purposes. Their historical separation is unfortunate. If an architect specifies a structural form which (whether for artistic/aesthetic reasons or through incompetence) is unbuildable or unnecessarily expensive to build, then the final outcome will be poor. The best and most successful projects are those where the architects and engineers work together right from the start and are given equal credit. At the most mundane level, good structural design can leverage orders of magnitude of savings in costs of construction.
Michel Virlogeux, the French structural engineer responsible for a number of big bridges including the Millau Viaduct in France, says that we design beautiful bridges when the flow of forces is logical. A good architect welcomes the engineering technical discipline to create form through structural art and intelligence and a good engineer welcomes architectural conceptual discipline to create form through aesthetic art and intelligence.
Image credit: Diagram provided by David Blockley.
The post The ubiquity of structure appeared first on OUPblog.

September 4, 2014
Understanding Ebola
Ebola is a widely known, but poorly understood, virus. Even in West Africa, in the middle of the 2014 West African Ebola Epidemic, the vast majority of patients with a differential diagnosis of Ebola Virus Disease (EVD) will in fact be suffering with something else serious and potentially fatal. The possibility of EVD should not over-shadow other investigations and management.
Peter Piot’s team discovered Ebola in 1976 – he is now the head of the London School of Hygiene & Tropical Medicine and was reassuringly quoted in 2014 as saying ‘I would sit next to an infected person on a train’. Ebola is one of two Filoviruses (the other being Marburg) and, according to the CDC, it has caused thirty-four outbreaks, twenty-four of which have been in Sub-Saharan Africa, with total fatalities numbering only in the thousands. Transmission relies on direct contact with bodily fluids containing the virus, either through broken skin or through mucous membranes. Airborne, droplet or aerosol transmission does not seem to be a significant mechanism of spread, though it is possible that this does occur. Symptoms are visible as soon as people are contagious, and Ebola Virus is not, therefore, what Piot termed the ‘right kind of virus’ to start an epidemic in a major western city. It is conceivable that an outbreak could occur, but the disease lends itself to active case finding, contact tracing, and containment far more easily than, for example, the flu viruses.

Its relative fame, therefore, is probably related to three aspects of EVD: the extremely high case fatality rates, both untreated and treated (as high as 90% in some outbreaks); the extremely rapid onset and dramatic nature of its symptoms (it is a haemorrhagic fever, and death is usually preceded by haemorrhage and widespread organ necrosis); and finally the enigmatic nature of the outbreaks – the animal reservoir has not yet been clarified (though fruit bats are currently the most likely candidate). This last aspect allows popular descriptions of the virus to describe it as lurking in the sinister darkness of the African jungle, waiting to emerge on an unsuspecting population.
If you work in the global north in a modern, well-equipped hospital, the management of a low-risk of an extremely dangerous event must be governed by national and international protocols rather than the arbitrary decisions of individual clinicians. Members of medical teams should ensure that these protocols are available, and followed – they govern isolation techniques, blood sample procurement and delivery, and contact tracing.
If you work in a region where Ebola epidemics are a possibility, then you will be faced with a vast number of challenges in identifying and managing these cases: poor data collection and management systems, weak public health infrastructure, limited availability of personal protective equipment and, perhaps most importantly, a population who are vulnerable because of (in many cases) limited education, weakened immunity, and cultural practices that encourage transmission. In these contexts the epidemic potential of the virus is greatly magnified. The 2014 epidemic has demonstrated that the international community is quite content to allow widespread transmission across several countries until expatriates are affected. The overstretched expatriate and national staff of the responding agencies have two jobs: to manage epidemic control as best they can, and to advocate for, and demand, the vast resources – human and financial – that are needed to control the spread of a disease that reflects poverty and a lack of long-term investment in regions of the world that are vulnerable to so many other threats to life and health.
A version of this article originally appeared on Oxford Medicine Online.
The post Understanding Ebola appeared first on OUPblog.

The story of pain in pictures
Pain is a universal experience. Throughout time, everyone has known what it feels like to be in pain — whether it’s a scraped knee, toothache, migraine, or heart attack. Although the feeling of pain may remain the same, the ways in which it was described, treated, and interpreted in the 18th and 19th centuries vary greatly from the ways we regard pain today. The slideshow below of images from The Story of Pain: From Prayer to Painkillers by Joanna Bourke will take you on a journey of pain through time.

The Cholic
She feels like her waist is being constrained by a rope that is being tightened to an unbearable extent by demons. Other devils prod her with spears and pitchforks. The painting on the wall behind her shows a woman over-indulging in alcohol. Coloured etching by George Cruikshank, after Captain Frederick Marryat, 1819, in the Wellcome Collection, V0010874. Figure 3.2, Page 64.

Administration of nitrous oxide
The administration of nitrous oxide and ether by means of the wide-bore modification of Clover’s ether inhaler and nitrous oxide stopcock, from Frederic W. Hewitt, Anaesthetics and their Administration (London: Macmillan & Co., 1912), 583, in the Wellcome Collection, M0009691. Figure 9.3 Page 283.

Chemical Lecture
Thomas Rowlandson, “A chemical lecture by Humphry Davy at the Surrey Institute”, colour etching, 1809, in the Wellcome Collection, L0006722. Figure 9.1, Page 274.

Surgeon attending to a wound
Oil painting by Johan Joseph Horemans of an interior with surgeon attending to a wound in a man’s side, 18th cent., in the Wellcome Collection, L0010649. Figure 8.2 Page 234.

William Osler at bedside of patients
1925, from Harvey Cushing, The Life of Sir William Osler (Oxford: Clarendon Press, 1925), 552, in the Wellcome Collection, L0004900. Figure 8.4, Page 259.

The process of bleeding
A surgeon bleeding the arm of a young woman, as she is comforted by another woman. Coloured etching by Thomas Rowlandson, c. 1784, in the Wellcome Collection, L0005745. Figure 8.1, Page 233.

The Physiognomy of Pain
Fear (1896), trans. E. Lough and F. Kiesow (New York: Longmans, Green, and Co., 1896), 202, in the Wellcome Collection, L0072188. Figure 6.3 Page 171.

The Morning Prayer
Advertisement card for Dr Jayne’s Tonic, Vermifuge, Carminative Balsam, and Sanative Pills, by R. Epp, c. 1890s, in the Wellcome Collection, L0041194. Figure 4.3, Page 116.

Origin of Gout
Gout (caused by excessive alcohol consumption) is portrayed as a burning pain, inflicted by a demon with red-hot pincers. The blackbird is a harbinger of worse to come. Coloured etching after Henry William Bunbury, c. 1780s–1800, in the Wellcome Collection, V0010848. Figure 3.3, Page 66.
Featured image credit: The unconscious man is nothing more than a passive object on which little demons equipped with surgical instruments can operate. “The Effect of Chloroform on the Human Body”, watercolour by Richard Tennant Cooper, c. 1912, in the Wellcome Collection, V0017053. Used with permission.
The post The story of pain in pictures appeared first on OUPblog.

Policing by the book
Entry to the UK police force is changing. With Policing degrees now available at over 20 universities and colleges across the UK – and the introduction of the direct entry scheme in a number of forces – fewer police officers are taking the traditional route into the force.
We spoke to officers, students, and course leaders to get their opinions on the relationship between theory and practice. Does a Policing degree make you a better officer?
On a personal level, a degree can help some students put their own career and practical training into context. Richard Honess had a “positive experience” in completing his Bachelor’s degree in Policing. “I now have a greater understanding of why we do what we do and the context of where our powers and policies originate; and why senior officers make the decisions they do. I have been able to merge my love of the job with my interest in science and scepticism with the development of ‘Evidence Based Policing’.”
“I have been bitten by the academic bug and am about to commence a Masters by Research in Policing, the ultimate in career development, with a view to becoming a research ‘pracademic’!”
Experienced officers can also learn a thing or two. Darren Townsend had served as a Constable for 22 years before deciding to take his degree. “The course opened my eyes completely around how policing worldwide operates, decision making processes especially in the wake of political interference, miscarriages of justice, [and] theory behind certain techniques of crime control.”
“In addition to all the operational aspects, it has provided me with some fascinating academic reading which has generated an even greater interest in my chosen career, which I believe will lead me to a greater professional performance and to be far more open to opposing ideas, embrace positive change, and understand the difference academia and research can make to my already wide expanse of operational policing knowledge.”
However, some question whether academic study is really the best way to achieve the necessary skills. One contributor, who asked to remain anonymous, challenged the application of degrees in the field. “I personally do not possess a degree of any sort. My qualifications both within the police and previously in electrical engineering are more vocational. I have yet to see the benefit of policing degrees within policing and will be interested to see if, over time, they do improve policing. At lower levels of policing (up to inspector) I cannot foresee their worth: it is about communication and common sense at the front line.”

Paul Connor is series editor of the Blackstone’s Police Manuals and is a Police Training Consultant offering support for those sitting promotion exams. “Possession of a degree in any subject illustrates an ability to apply oneself and to learn, but this does not equate to an automatic right to pass every examination that follows in your life. This certainly applies to the OSPRE® Part I examination.”
“College of Policing research indicates that there is a correlation between the possession of a degree and success in OSPRE® Part I, but a significant number of candidates without a degree pass the examination, just as a significant number with a degree fail.”
The relationship between university research and its application in the field has also been put under scrutiny. Emma Williams is the Programme Director of the BSC Policing (In Service) degree at Canterbury Christ Church University. “Conversations about collaboration between universities and policing have never been so rife. Austerity and the need for resources to be used effectively have resulted in the College of Policing supporting the evidence based policing agenda and the commissioning of research by universities. Having spent eleven years in the Metropolitan Police as a senior researcher I am fully aware of some of the barriers that prevent research findings being fully implemented.”
“Officers can sense a loss of professional judgement when research further drives operational delivery and it can be seen as prescriptive and top down. Our degree programme fully encourages officers to use research and academic knowledge to assist them in their own decisions but to use it alongside their own experiential knowledge. Having knowledge of both the political and social context in which policing has developed and an understanding of theory and how it can assist them in their roles is in my opinion critical for this relationship to develop.”
The variance between theory and practice also raises questions about the structure of the degrees themselves. Susie Atherton previously worked on a police and PCSO training programme at De Montfort University. “It was very clear which were the ‘academic’ modules vs the ‘police training’. I do think there could have been better integration. We had to adapt and respond to their needs to make sure the academic modules did fit with their role, but this weakened their credibility as academic social science modules.”
“The new BA programmes promise employability through combining a three year policing studies degree with the Certificate in Knowledge of Policing. My worry is students who want to be police officers could leave after gaining the CKP, as undertaking this alongside 4 academic modules will be onerous and challenging. Students will perhaps question why they need to gain a full degree to get a job as a police officer, incurring 2 more years of fees, unless they wish to take advantage of direct entry. I am also aware of how valuable life experience, working in schools, military service and other roles are to the police service – transferable skills and knowledge about the world which cannot be gained doing a degree.”
“Fundamentally, if such programmes are to work, like any programme, they need proper investment, leadership, and to respond to student feedback. Any weakness in these areas would jeopardise the continuation of programmes, but I do think policing programmes are vulnerable, simply because there are other options available.”
The post Policing by the book appeared first on OUPblog.

September 3, 2014
A wrapping rhapsody
The Oxford Dictionary of English Etymology (ODEE) says about the verb wrap (with the abbreviations expanded): “…of unknown origin, similar in form and sense are North Frisian wrappe stop up, Danish dialectal vrappe stuff; and cf. Middle Engl. bewrappe, beside wlappe (XIV), LAP3.” XIV means “the 14th century,” and LAP3 is a synonym of wrap (as in overlap), related to lap “front part of a skirt,” which has a solid etymology. The quotation from The ODEE repeats what can be found in the OED. For “cf. wlappe” some dictionaries offer the dogmatic, unsupported statement that wrap is a doublet (so Skeat) or “corruption” of lap.
So once again we encounter the off-putting formula “origin unknown.” But, as we have seen more than once, in etymology, “unknown” is a loose concept. Minsheu, the author of the first etymological dictionary of English (1617), cited two words, which, in his opinion, could be akin to wrap. One of them was German raffen “to pile, to heap.” In 1854 the same idea occurred to a certain D.B., a contributor to Southern Literary Messenger (Richmond, Va.) and the author of a four-page article on word origins. Like Minsheu, he also cited two probable cognates. To both authors’ second candidates we will briefly return below. D.B., I assume, was not a famous researcher, and Southern Literary Messenger is not everybody’s regular source of information on linguistic issues (again a mere guess). But in 1904 Heinrich Schröder, contrary to the semi-anonymous D.B. a distinguished, even brilliant German scholar, published a very long article in a leading philological journal and, among many other things, proposed Minsheu and D.B.’s etymology as his own. Like Cato, who never stopped rubbing in his appeal Carthago delenda est (“Carthage should be destroyed”), I will keep repeating that we need summaries of everything ever said about word origins in any given language and only then shed words of wisdom to the public eager for reliable information. In the absence of summaries and surveys, etymologists, naturally, hit on the same, seemingly attractive hypotheses again and again, without realizing that the wheel has already been invented and even reinvented more than once.
Thus, three people suggested the affinity of wrap to German raffen. But not a single German dictionary I have consulted contends that raffen is akin to wrap, though, obviously, if A is related to B, B must also be related to A. This is another curious comment on the state of the art. While working on the entry raffen, the authors of German etymological dictionaries never thought of looking up wrap. And why should they have done so? In their sources, the comparison does not occur, and no one alerted them to the fact that in the English-speaking world some etymologists had tackled their word.
Closer to home than raffen is Engl. warp, whose original meaning was “to throw,” as evidenced by Dutch werpen and German werfen. Time and again, beginning with Minsheu, it has been said that wrap is a metathesized variant of warp. However, when a word falls victim to metathesis, which is a mechanical phonetic change not caused by semantic factors, the new form, with the sounds transposed (and this is what metathesis is all about), continues to mean the same as its “parent,” while “to throw” and “to enfold” do not look like even remote synonyms. Another putative etymon of wrap that appears in some old sources is rap (so, for instance, in D.B.’s article). It is unclear which of several verbs spelled rap is meant. Perhaps it is rap “to give a quick blow, etc.,” still dimly recognized in the archaic phrase rap and rend? But that rap seems to have once had h before r, while in wrap, w- is genuine. The initial group wr- was simplified in southern English and subsequently in the Standard only in the seventeenth century, so that the fourteenth-century spelling of wrap inspires confidence. The other rap “to strike” may be a sound-imitative verb of Scandinavian origin, and neither h- nor w- has been recorded in it in any Scandinavian language.

Other conjectures are even less appealing. For instance, Old Engl. wrion “bend, contort” (its reflexes are hidden in Modern Engl. wry and wriggle) is phonetically too remote from wrap. Among several look-alikes, attention has been called to the short-lived and rare late Middle English words wrabble “to wriggle,” wrabbed “perverse,” and wraw “to mew.” The last of them is obviously onomatopoeic, like mew, moo, and the rest. The original sense of the Modern English verb warble was “to whirl”; hence “to sing with trills and quavers.” This verb had hw- at one time. The enigmatic wlappe “wrap” is said in the OED to be apparently a blend of the verb lap and wrap. But so little is known about most of those words that their mutual ties cannot be reconstructed. It only seems that at least some verbs beginning with wr- and hr- were sound-imitative and possibly sound symbolic. One of them was Greek raptein (historically, with initial h-) “to stitch together,” whence rhapsody (“the stitching of songs”), in English from Greek via Latin.
The initial sense of many such verbs was “to bend, twist, stitch; wriggle; *fold, *connect, *cover.” They crossed borders with ease. Engl. wrap may be a borrowing from Frisian. If it is so, we are left with the question about its origin there. The root of the Romance verbs that have become Engl. develop and envelop seems also to be lost in obscurity, but it is characteristic that envelop sounds somewhat like lap and means approximately the same as overlap. If all of them are “fanciful” sound symbolic formations, the similarity causes little surprise. Lap in lap up is obviously sound imitative.
What then is the summary? The verb wrap appeared in English in the middle period. It has a cognate, almost a twin, in Northern Frisian. Perhaps it was coined long before it surfaced in texts, at a time when Frisian and English were closer than in the fourteenth century, but borrowing in either direction cannot be excluded. Wrap is not a doublet of warp. Nor does it have direct ties with rap in any of its senses. German raffen is hardly related to it either. Wrap has no ancient (“Indo-European”) heritage. It looks like one of a sizable number of words beginning with wr- and wl- meaning “bend, twist.” Rare Middle Engl. wrabble and wrabbed are, most probably, related to it. Without certainty, warble can be added to the group. We have no way of knowing whether Middle Engl. wlappe had an independent existence or was a blend of lap and wrap. Some words structured like rap and lap (without initial h- and w-) were close to wrap, so that confusion between and among them was possible.
Despite all the caution required by such a hard case, it can probably be stated that wrap is a sound symbolic verb. The evidence at our disposal is meager and inconclusive, but I will repeat what I have said so many times: too often the verdict “origin unknown” fails to do justice to the words we discuss. Sometimes there is indeed nothing else one can say (notably so while dealing with slang), but a verdict that presupposes a death sentence should not be returned without serious deliberation.
The post A wrapping rhapsody appeared first on OUPblog.

The crossroads of sports concussions and aging
The consequences of traumatic brain injury (TBI) are sizable in both human and economic terms. In the USA alone, about 1.7 million new injuries occur annually, making TBI the leading cause of death and disability in people younger than 35 years of age. Survivors usually exhibit lifelong disabilities involving both motor and cognitive domains, leading to an estimated annual cost of $76.5 billion in direct medical services and lost productivity in the USA. This issue has received even more intense scrutiny in the popular media with respect to sports-related concussions, where a link has been proposed between having suffered multiple injuries, regardless of severity, and later neurodegeneration. At present, there is a dearth of evidence to either support or undermine the role of sports concussions in the later development of neurodegenerative processes, much less the influence of those brain injuries on the normal aging process.
Although most people agree that no two concussions are alike, all concussions share at least one feature: they involve the near-instant transfer of kinetic energy to the brain. The brain absorbs kinetic energy as a result of acceleration forces, while deceleration forces cause it to release kinetic energy when colliding with the skull. Coup contrecoup injury is one of the oldest and best-supported biomechanical models of traumatic brain injury. Acceleration/deceleration forces can be transferred to the brain either in a straight line passing through the head’s centre of gravity or in a tangential line and arc around its centre of gravity. Shearing and stretching of axons are common manifestations of inertial forces applied to the brain, and this type of damage is commonly referred to as traumatic axonal injury. Although such damage has been robustly demonstrated in both animal and post-mortem models of TBI, the limitations of neuroimaging techniques have long prevented us from accurately tracking projecting axonal assemblies, also called white matter fibers, in living humans. The recent emergence of a magnetic resonance imaging (MRI)-based tool called Diffusion Tensor Imaging (DTI) has made it possible to reveal abnormalities in white matter fibers with increasing sensitivity. DTI has quickly gained in popularity among TBI researchers, who have long sought to characterize the neurofunctional repercussions of traumatic axonal injury in living humans. One particularly appealing clinical application of DTI is with athletes who have sustained a sports concussion, in whom conventional MRI assessments typically turn out negative despite the persistence of long-lasting, cumulative neurofunctional symptoms. First applied to young concussed athletes, a follow-up DTI study conducted in our laboratory revealed subtle white matter tract anomalies detected in the first few days after the injury and again six months later.
Interestingly, these young concussed athletes were all asymptomatic at follow-up and performance on concussion-sensitive neuropsychological tests had returned to normal.

In parallel, our group became increasingly interested in characterizing the remote neurofunctional repercussions, in former elite athletes in late adulthood, of concussions sustained decades earlier. Quantifiable cognitive (i.e. memory and attention) and motor function alterations were found on age-sensitive clinical tests, a finding that contrasts significantly with the full recovery typically found within a few days post-concussion in young, active athletes on equivalent neurofunctional measures. This finding was the first of many demonstrations that a remote history of sports concussion synergistically interacts with advancing age to precipitate brain function decline. These neuropsychological test performance alterations specific to former concussed athletes were soon after found to correlate significantly with markers of structural damage restricted to ventricular enlargement and age-dependent cortical thinning. However, beyond the significant interaction of age and a prior history of concussion on cortical thinning, former concussed athletes could not be differentiated from age-matched unconcussed teammates using highly sophisticated measures of grey matter morphometry. White matter integrity disruptions therefore appeared a likely candidate to explain the significant ventricular enlargement found in former concussed athletes. We thus turned to state-of-the-art DTI metrics to conduct the first study of white matter integrity in older but clinically normal retired athletes with a history of sports-related concussions. A particular emphasis was put on recruiting former elite athletes who were free from confounding factors such as clinical comorbidities, drug/alcohol abuse, and genetic predisposition, which too often confound the long-term effects of concussions on brain health.
Our results show that aging with a history of prior sports-related concussions induces a diffuse pattern of white matter anomalies affecting many major inter-hemispheric, intra-hemispheric, and projection fiber tracts. Of crucial clinical significance in relation to our previous findings on former concussed athletes, we found ventricular enlargement to correlate significantly with widespread alterations of key markers of white matter integrity, including not only peri-ventricular white matter tracts but also an extensive network of fronto-parietal connections. Most of all, these white matter integrity losses were found to be associated with altered neurocognitive functions, including memory and learning.
Taken together with previous functional and structural characterizations of the remote effects of concussion in otherwise healthy older former athletes, the pattern of white matter alterations, which is more pronounced over fronto-parietal brain areas, closely resembles what has been observed in normal aging. From this interpretation, we suggest that concussion induces a latent microstructural injury that synergistically interacts with the aging process to produce late-life brain decline in both structure and function.
The post The crossroads of sports concussions and aging appeared first on OUPblog.
