Oxford University Press's Blog
December 22, 2017
The war on Christmas: a two thousand year history [timeline]
Is there a war on Christmas? Yes. And it’s been fought for almost two thousand years. Since their earliest incarnation, Christmas festivities have been criticized and even outlawed. In the timeline below, historian and Christmas expert Gerry Bowler takes a look at this long history—from nativity protests in 240 through the billboard wars of 2014.
Featured image credit: “close-up-of-christmas-decorations-hanging-on-tree” by Miesha Moriniere. CC0 via Pexels.

“I’m not very good at making conversation”
During the festive period from Christmas to New Year, we can often find ourselves in situations we are uncomfortable with, making conversation with people we don’t know and sometimes struggling with social anxiety. In the following extract from Managing Social Anxiety, Workbook, the authors explore cognitive restructuring and how it can be used to prepare ourselves for uncomfortable social situations.
Many people with social anxiety disorder believe that they do not have the skills to make conversation. When they first go to see a therapist or counselor, these individuals state that they do not know how to act or what to say in social situations. They may say that they do not know how to make small talk. In the language of psychologists, many socially anxious people believe they have poor social skills. Socially anxious people talk about their poor social skills so much that many psychologists believe them. Some of the earliest treatments for social anxiety disorder were social skills training programs. These programs included instruction and practice in what to say in different situations, as well as feedback on speaking at the right volume, making the right amount of eye contact, and so forth. These programs were fairly helpful for many people. However, in the 1980s, psychologists started to notice that many, if not most, people with social anxiety disorder did not have poor social skills. In fact, observations of individuals with social anxiety in various situations suggested that although they frequently felt anxious and uncomfortable, they actually performed just fine. Sometimes their anxiety would distract them or interfere with what they were doing, but it seemed clear that most individuals with social anxiety disorder had adequate (and often excellent) social skills. Thoughts such as “I’m not very good at making conversation” were actually thinking errors for many people with social anxiety. Let’s take a look at an example.
Alejandro is very anxious about the upcoming office Christmas party. He dreads going every year but believes that it is important to go because the boss seems to make a mental note of who does and does not attend. Although Alejandro usually becomes very anxious talking with people, he gets along OK at work because the conversations seem to have a purpose and are generally about work-related topics. People listen to him because they are exchanging information that they both need. When anticipating the party, Alejandro imagines himself standing in a group of people totally silent. He has some ideas about what he could say but he does not know how to break in. He may laugh along with the jokes but he does not really participate in the conversation. The situation feels hopelessly awkward and Alejandro says to himself, “I’m not very good at making conversation. Other people seem to know how to take their turns and the conversation flows back and forth.”
Let’s look at Alejandro’s AT (anxious thought) of “I’m not very good at making conversation.” First we will examine the thinking errors in Alejandro’s AT.
Labeling: Alejandro is labeling himself a “poor conversationalist” (or more honestly, perhaps, “too stupid to know how to talk to people”). He seems to see himself as fitting into a category of people who have a serious flaw—not knowing how to make conversation.
Disqualifying the Positive: Alejandro is disqualifying the conversations about work that he has every day. He seems to have the skills to carry on those conversations. Probably some of the skills are relevant to conversations at a party.
Let’s look at some cognitive restructuring that Alejandro might do for himself.
Anxious Alejandro: I’m not very good at making conversation.
Coping Alejandro: What evidence do you have that you are not very good at making conversation?
Anxious Alejandro: I am always miserable at these Christmas parties and usually end up standing off by myself or sticking very close to my wife all evening.
Coping Alejandro: Is there any other reason—besides a lack of social skills—that you could feel miserable at the Christmas party?
Anxious Alejandro: I’m embarrassed about what people will think if I am standing off by myself but my wife worries about me if I hang around her all night.
Coping Alejandro: So it sounds like at least part of feeling miserable might be due to thoughts about what people will think if you stand by yourself. Another part might be due to thoughts about your wife’s reaction if you stay with her too much.
Anxious Alejandro: Right.
Coping Alejandro: It sounds like your thought about your social skills is only one of the ATs that might be making you feel miserable at the party. I’m wondering if you are feeling so miserable that you do not actually attempt to talk with very many people.
Anxious Alejandro: I usually try a few times. I start a conversation, then after a few exchanges, the person does not seem to have much to say so I excuse myself and move on, feeling like I failed.
Coping Alejandro: Are there any other possible reasons, besides your lack of social skills, that the other person might not continue to talk?
Anxious Alejandro: I guess some of them might be shy and not have much to say. Maybe they see someone they want to talk with. Some people try to use these parties to make points with their supervisor or the big boss by chatting with them.
Coping Alejandro: So it sounds like some people might cut the conversation short for reasons that have nothing to do with you.
Anxious Alejandro: Yes, I guess that is true. It takes two people to have a conversation. If they don’t stick around, there isn’t much I can do.
Coping Alejandro: You’re right. It does take two people to have a conversation. What percentage of the conversation are you responsible for?
Anxious Alejandro: I guess I’m only responsible for half and the other person is responsible for half.
Like many people with social anxiety, Alejandro did not have much evidence of the terrible social skills he thought he possessed. There were several other reasons why he might be feeling uncomfortable at the party, including worrying about what his wife and other people thought of him. Also, he had been putting a lot of pressure on himself to carry the conversation. It is important to remember that if the other person does not want to talk, there is not much you can do about it. A good rational response for him might be, “I only have to do my half” or “I’m only responsible for 50% of the conversation.” If he is less worried about keeping the conversation going, Alejandro should find it easier to be spontaneous and to think of things to say.
Featured image by Priscilla Du Preez. CC0 public domain via Unsplash.

Emigration and political change
International mobility has been reshaping the economies and societies of countries over the course of human history. In recent years, European media and policy-makers have focused on immigrants from North Africa and the Middle East who cross the Mediterranean Sea to look for opportunities in Europe. This inflow may have deep effects on the economic, demographic, and cultural future of Europe. However, another important but much less noticed mobility phenomenon has been on the rise: the increased mobility of young, mostly highly educated individuals from Southern to Northern Europe, a consequence of free mobility within the European Union, driven by the differences in economic performance of European countries.
In the aftermath of the deep 2008-2010 recession that affected the economies of Southern Europe badly (especially Greece, Spain, and Italy), a large wave of young, highly educated individuals from those countries moved to the UK, Germany, and other Northern European locations in search of better job opportunities. In 2014 around 110,000 Italians migrated abroad, while, by comparison, the number of asylum-seekers in Italy was only 60,000. Had Italian migrants travelled by boat, we would have seen one boat with roughly 2,100 Italians (110,000 divided by 52 weeks) leaving Italian shores every week.
While the mobility of people, in Europe and the rest of the world, brings large advantages to migrants and allocates economic resources where they are most productive, it may drain resources from the sending areas. Several studies have analyzed the negative effect of the drain of highly educated individuals on the economic development of the sending regions. However, the drain of young, highly skilled, and dynamic people might also slow down political renewal and worsen the quality of political leadership in local governments.
Political scientists first noticed a trade-off between “exit” (i.e. leaving the country) and “voice” (i.e. political activism within the country) for people who wanted political change under totalitarian regimes, such as those of Eastern Europe before the fall of Communism (see the famous essays by Hirschman, 1993). If the more recent emigration from Southern to Northern Europe confronted young people with a similar “exit”/“voice” trade-off, we should expect emigration to drain the political and social capital that is crucial for political renewal and change. This is what happened to Italian local governments in the years between 2008 and 2014.
During this period, a younger, more gender-balanced, and generally more highly educated political leadership emerged in many local governments, especially in the North of Italy, following a widespread demand for change and reforms. However, municipalities with larger emigration rates exhibited a substantially slower pace of political change (or no change at all), as measured by the characteristics of the political leadership (e.g. education, gender, and age) and the political efficiency of the local government. More specifically, larger emigration rates during the recession were associated with a decrease in average education, an increase in average age, and a decrease in women’s share in the local political leadership (mayors and city council members) between 2008 and 2014. High-emigration municipalities also saw a decrease in political participation (electoral turnout), weaker support for protest/anti-establishment parties, and a larger probability of having the local government dismissed for grave dysfunction or corruption.
This evidence is consistent with a worsening in the quality and performance of local politicians in municipalities with large emigration rates. One explanation for this relationship is that young, educated, and dynamic voters, and potential political leaders, found it easier to fulfil their need for improvement and change through emigration (the exit option). This reduced the potential for change in the local political environment (the voice option).
While free mobility across European countries brings large gains overall by allowing productive young individuals to migrate towards better opportunities, it might have important distributional effects: economically strong countries tend to benefit from the inflow of these productive workers, while the economies suffering a stronger recession are penalized a second time by the drain of political capital and the consequent political and institutional stagnation, a vicious circle that might dramatically amplify the divergence across countries. On the eve of Brexit, this Italian emigration episode also serves as a cautionary tale about the effects that a drain of young, productive European and British individuals could have on the British economy and political system.
Remaining connected with the diaspora, allowing emigrants to play a role in the political renewal of their country of origin by remitting ideas and values, and granting them voting rights and participation mechanisms in the political process are possible strategies for recovering part of the “lost political capital” of emigrants.
Featured image credit: Crowd Of People by Free-Photos. Public domain via Pixabay.

December 21, 2017
Harnessing the power of technology in medical education
Virtual Reality. Augmented Reality. Gamified Learning. Blended Learning. Mobile Learning. The list of technologies that promise to revolutionise medical education (or education in general) could go on, creating an exciting yet daunting task for the course leaders and educators who have to evaluate them. The visceral appeal of technology is understandable: technology is cool, and it’s increasingly a part of students’ lives and therefore something that they expect to be integrated into their curriculum.
However, it’s worth bearing in mind that technology is only a means to an end, and without a clearly defined implementation plan and goal, projects are often doomed to fail. Medical schools therefore have the difficult job of balancing student demand for technology against resource constraints and opportunity costs. This isn’t easy, especially considering that technology is evolving at such a rate that it is very difficult to produce traditional evidence-based appraisals of an implementation before the technology being tested is obsolete.
Considering this, it’s easy to see how unchecked enthusiasm for technology could be disastrous if medical schools were to take too many risks and replace traditional-but-proven learning models with exciting-but-unproven tech-based alternatives. An example of this can be seen in ‘Problem-Based Learning’ (PBL), the intuitively appealing idea that medical schools could modernise their approach to teaching by creating a curriculum that traded traditional lectures for small, group-based discussions in which students were ‘put in charge of their own learning’ and encouraged to solve problems.
A number of medical schools swapped their curricula over to PBL entirely, though even the most favourable studies now show that, at best, PBL is no better than traditional learning. Some of these schools now have to move back from PBL to a more traditional curriculum, which is obviously not an inexpensive process.
The PBL example shows that it is possible for people and institutions to get over-excited about a new idea and to sacrifice tried-and-tested techniques in its favour. It is worth bearing this example in mind as a warning of what can go wrong when evaluating new technologies. It is perfectly reasonable to assume that new technologies can lead to significant improvements in medical education, but they need to be considered carefully.
In order to successfully evaluate new technologies, educators should first speak to students about what they are already using. In the vast majority of cases, medical students will already be using technologies to help their learning, and where possible the medical school should look to complement and support, rather than replace, these. After all, the best system is the one that people will actually use.
Secondly, educators should consider what metrics to measure success by. This can be tricky and there’s rarely a single metric that can measure the impact of a complex intervention, but it’s important to have at least some indication of whether the technology is better or worse than the thing it replaced.
In summary, we’re in a very exciting time for medical education. New technologies mean that medical students can get more exposure to conditions, procedures, and emergency scenarios during their time at medical school, or even just from home with the right equipment. Similarly, even more established technologies and platforms such as Wikipedia, blogs, and YouTube are enabling a revolution in ‘peer to peer learning’, with content being created, reviewed, and consumed by students themselves, with little to no input from the medical school.
The role for medical educators is to successfully guide their institution, and their students, through these changes and to implement technologies in a way that produces safer and more competent doctors. This means saying ‘no’ to new technologies just as much as it means saying ‘yes’.
Featured image credit: photo by Rodion Kutsaev. Public domain via Unsplash.

Renewed activism, not budget cuts, needed to end the AIDS epidemic
Policy makers, organizations, and governments have worked side-by-side with people living with AIDS as part of a global social movement for three decades. The success of the movement for HIV treatment access not only garnered billions of dollars of new money for HIV treatment but also shifted the public health paradigm from prevention-only to the provision of long-term treatment. This paradigm shift ushered in a new era in global health, one that has strengthened health systems and extended treatment to a variety of conditions, from non-communicable diseases and mental illness to women’s health and cancer. Stronger and more resilient health systems are the result. Adult, child, and maternal mortality have dropped in many of the world’s poorest countries. UNAIDS recently announced that as of 2017, 21 million people have received antiretroviral therapy—the life-saving medications that have transformed AIDS from a fatal disease to a manageable and treatable one.
The establishment of health systems lays the needed groundwork for health care delivery. It is imperative in this new era of global health that we use science and data to continue to improve the quality of care, fight epidemics, and progressively avert more suffering and disease. The data is clear: it is possible to end the global AIDS pandemic. To do so, the international community must commit more resources to achieve the 90-90-90 target (that ninety percent of people living with HIV know their status, ninety percent of those diagnosed are on antiretroviral therapy (ART), and ninety percent of those on ART have an undetectable viral load). While the reach of global AIDS programs to treat 21 million people is remarkable, there remain 17 million people living with the disease who are not yet on antiretroviral therapy.
In addition, second- and third-line AIDS drugs are needed. Millions of people on ART are living longer and have already developed, or will eventually develop, resistance to first-line drugs. Programs must scale up regular testing for circulating virus and for drug resistance so that new drugs can be added as needed. Without this critical step, people will die, and a second wave of new infections with resistant virus may occur. Lastly, preventive therapy must be expanded with PrEP (pre-exposure prophylaxis), a single daily pill for those whose sexual partners are HIV-positive.
As the US Congress undergoes a December of budget-wrangling and spending debates, we must voice our support for expanding global health funding, increasing US support for the Global Fund, and increasing the PEPFAR budget. Investing in these steps now is critical to ending a global pandemic that has been with us for almost four decades. The gains against HIV, and the improvements in health writ large, will be lost if we lose focus on health as a cornerstone of global development. Sustained international funding is needed to combat the major epidemics of our time and to achieve Universal Health Coverage as part of the United Nations’ Sustainable Development Goals.
World AIDS Day falls each year on 1 December. It is a reminder: a time to remember those we have lost in the almost four-decade struggle against AIDS. It is also a time to reflect on the greatest global victory of the twenty-first century—the collaborative global response to provide AIDS treatment for the most vulnerable, which now counts 21 million people on treatment worldwide. In an era where bad news and fragmentation reign, the global response to the AIDS pandemic is without peer in demonstrating a positive side of globalization.
The WHO reports that new HIV infections fell by nearly 40 percent between 2000 and 2016, and that HIV-related deaths fell by a third in that time, with 13.1 million lives saved. Rather than make us complacent, these victories should serve as a reminder that we can do better, and that we can accomplish great things through collaboration and solidarity. Forty years into the war against HIV, we must commit to ending the epidemic and fight for health for all.
Featured image credit: Cafe by Christian Battaglia. CC0 public domain via Unsplash.

December 20, 2017
The New Year is approaching. What else is new? Or a chip off the old block
Many things are new. The vocabulary of the Germanic languages shows its great potential when new objects have to be described. Even to characterize people wearing shiningly new clothes, English has a picturesque phrase, namely, he/she has come (or stepped) out of the bandbox. A bandbox, as dictionaries explain, was a light box, made of pasteboard or thin flexible pieces of wood and paper, for holding caps, bonnets, or other light articles of attire: so called because originally made to contain the starched bands (that is, ruffs) commonly worn in the sixteen-hundreds (so The Century Dictionary). A person exquisitely neat and dressed in clothes seemingly straight from a tailor’s shop does look as though he or she has just left a bandbox of old. Originally, mainly or only clergymen kept their linen in such boxes, and, when something “came out” of them, it looked “very smart, spick and span.” Spick and span, a most curious idiom! I’ll leave it for dessert.

Perhaps the most common English expression for something quite new is brand-new, occasionally appearing in the form bran-new (but this is only a phonetic variant). Today, brand is perhaps remembered mostly not as “a piece of burning wood” (though compare firebrand), but as “trademark” (brand name springs to mind at once) and the verb (for example, to brand cattle; those interested in cattle branding may look up the origin of the word maverick on the Internet). In Old English, brand also meant “sword” (Italian brando “sword” is a borrowing from Germanic), perhaps with reference to the blade’s gleaming and flashing when wielded. Such is at least the prevailing opinion. Other Germanic languages had the same word. In Old Germanic poetry, swords were called flames and gleams of battle and were described as flashing, shining, blazing, and the like. If we look at the German for burn, we will find brennen. Then the connection between bran-d and bren-nen (by ablaut) becomes immediately obvious. In English, r follows the vowel in the word burn, but this is a peculiar English change. When vowels and consonants play leapfrog, the process is called metathesis.

The idea of brand-new seems to be “fresh out of the furnace.” The cautious Oxford Dictionary of English Etymology says perhaps about this connection, but it also says perhaps about why brand also meant “sword.” Most likely, the situation requires no such guarded remarks. In 1876, Frank Chance, an active and successful etymologist of the second half of the nineteenth century, participated in the debate about the origin of the adjective brand-new and took to task a certain W. M., who attacked Archbishop Richard C. Trench, famous for his study of words and proverbs and for the idea that inspired what we know as the OED. I will quote the opening salvo for the sake of its style. The same pugnacious rhetoric marked the publications of Walter W. Skeat, James A. H. Murray, and many of their contemporaries. “W. M. is indubitably wrong, and Archbishop Trench right.” Among many other things, Chance collected words like brand-new: Dutch brand-nieuw, German funkel-neu and funken-neu (Funke “spark”), that is, so new as to glitter or give out sparks.
Adjectives with a reinforcing first element are numerous, but the message of the reinforcement is not always clear: consider Engl. stock-still (exactly which stock is meant?) and German fuchs-teufels-wild “furious”: Fuchs is “fox,” Teufel is “devil,” and wild is “wild,” but are foxes known for outbursts of bad temper, or does this fuchs have nothing to do with foxes? Many German adjectives synonymous with funkelneu have -nagel “nail” inserted in the middle. Is the reference to a straight and shining nail or to an object as though just nailed together? Dutch is instructive because in addition to nagel-nieuw it has a series of adjectives with elements denoting “splinter.” Splinters or perhaps spikes in this role also occur in the Scandinavian languages. They bring us closer to spick and span.
The origin of this somewhat enigmatic phrase and its close analogs has been discussed for centuries. Some of the early conjectures are curious, even “wild.” Horne Tooke (1736-1812), the author of many fanciful derivations, wrote that spick and span new means “shining new from the warehouse.” He combined a Dutch word with a German one, but it remains unclear what the warehouse has to do with the whole. The Swede Johan Ihre (1707-1780), a learned and reliable scholar, explained the analog of spick and span as a chip just cut. He had a much better idea than the one brought forward by the German etymologist Georg Wachter, even though he looked upon Wachter’s work as his source of inspiration. Wachter traced span to a verb meaning “to milk,” so that the result was “new as the first milk after calving.” (Those who know the German word Spanferkel “sucking pig” will recognize the misleading root.) Ihre, it appears, hit the nail on the head. English philologists also knew the true origin of spick and span long ago. The dialectal synonym splinter-new, Low German spiker-neu, and Icelandic spann “chip; spoon” told the researchers that the English adjective has something to do with spikes (or chips) and spoons. (Old spoons were, naturally, wooden. We too have plastic silverware.)

The binomial was spick and span; new was added later for emphasis. Span-new appeared in English dialects as early as the thirteenth century. Later the phrase was extended, and spick and span new appeared: it must have been tempting to add an alliterating near-synonym to the first adjective, even though three-element phrases of this type are all but non-existent. Usually we encounter phrases like stone-deaf, stone-broke, and the already cited stock-still. The source of spick-new is Scandinavian. In the pre-Skeat editions of Webster’s dictionary, the following explanation was given: “Quite new; that is, as new as a spike or nail just made, and a chip just split.” Engl. spick “spike” existed, but Swedish spik means “nail,” which returns us to German funkelnagelneu. Dutch has spik-spelder-nieuw and spik-splinter-nieuw. At one time, Engl. spelder “splinter or chip” also turned up. Frank Chance says: “Chips and shavings are commonly new, as they are usually burned up as fast as made.” But this explanation is hardly needed, for shavings are new by definition, whether burned or not, and, while cutting wood, we see chips flying in every direction; those are also new.

It will be seen that phrases like brand-new, “nail-new,” and “shavings-new” are current all over Germanic. Some must have been borrowed. Is it possible that at one time they were part of a lingua franca, that is, the professional language of itinerant carpenters, blacksmiths, and other artisans? I suggested a similar idea in connection with the etymology of the English word ajar, which too has a puzzling number of seemingly superfluous near-homonyms, all meaning the same (see the post for August 22, 2012). Those who have access to my etymological dictionary will find more musings on this subject in the entry adz(e).
Who said that there is nothing new under the moon?
Featured image credit: “Full Moon Evening Sky Moonlight Moon Mood” by kasabubu. CC0 via Pixabay.

The search for doctors in primary care
There is a physician workforce crisis in primary care in both the United States and the United Kingdom. In the UK, half or more of general practice training positions have been difficult to fill in certain parts of the country. In the US, the Association of American Medical Colleges estimates that by 2025 there will be a national shortfall of between 15,000 and 35,000 primary care physicians. If you’ve recently tried to make a quick appointment to see your primary care doctor in the US, more often than not you will have found it difficult to do. You end up seeing a nurse practitioner or physician assistant, or get told to go to the local urgent care centre. New-patient waits for primary care doctors are up in many parts of the US, and in the UK delivery system, which has suffered chronically from long waiting times, there are calls to turn more patient visits into phone consultations, given the rising workload of general practitioners.
The situation in both countries is neither temporary nor easily fixable, and at its core the reasons for the shortages are similar in both places. Overall, the field of primary care medicine looks increasingly unattractive to young medical students, for a variety of reasons.
First, it is a complicated field of medicine in which the perception remains that one cannot know enough to be highly knowledgeable in all the areas of clinical care required. This leads many young doctors to conclude that it involves becoming “a jack of all trades, master of none.” Second, both the unbalanced lifestyle and the intense workload of a full-time primary care doctor or general practitioner who works as a partner or employee of a single organization put many off from the outset. Increasingly, young doctors prefer a healthier work-life balance than their predecessors had, while also wanting higher compensation in acknowledgement of the fact that the job is a difficult one. Yet salaries in primary care remain low compared to other specialties, and the work-life balance that millennial medical students desire is seen as less and less possible in a field where the everyday responsibilities are so diverse and highly task-oriented.
But there is a deeper, more profoundly troubling reason both for the growing exodus from primary care by older doctors and for its unattractiveness as a career to younger ones. That reason is the growing realization, among both primary care doctors and their patients, that relational care, the bedrock of effective primary care delivery, is less and less possible in the US and UK health systems. Relational care is care defined by strong interpersonal attributes such as trust, mutual respect, and empathy. It involves doctors developing ongoing relationships with their patients, which foster emotional bonds that improve diagnostic accuracy and treatment efficacy. It involves primary care doctors finding greater meaning in their own work by experiencing the rewards of interpersonal care delivery—seeing their patients as individuals with unique life stories, and playing the roles of friend, advisor, and confidant for those patients.
What can be done about this situation? It may already be too late to turn back the decline of physician-centric primary care. The shortage of primary care doctors cannot be made up easily, and demand for primary care continues to rise. In addition, system forces in both countries hamstring the development of stable interpersonal relationships between doctor and patient, burn out physicians, and leave patients with lowered expectations for their care: in particular, the emphasis on squeezing greater efficiency out of care delivery; the intense focus on measurement and on standardizing care through guidelines; and the growing use of non-physician personnel to provide services.
But that doesn’t mean we should give up. In its fullest realization, primary care medicine still offers the best chance for someone calling themselves a doctor to have meaningful connections with their patients. If we can better convince medical students of that reality, more will continue to choose the field. Even though their expectations have lowered, patients would still respond to competently delivered relational care, meaning that there remains strong demand for the primary care doctors who are in the best position to give that type of care. We need innovations in primary care delivery that commit the system to a renewed emphasis on doctor-patient relationships, rather than merely giving patients greater convenience in obtaining a minimum level of service quality. Young medical students have to be trained and socialized to understand how to be a relational doctor in a system that won’t always reward or support that type of care. This way, when they choose the field, they will both know what they are getting into and be better prepared to deal with it.
Most of all, though, we need to have a larger societal conversation about the important role of doctors in our primary care delivery system, and then advocate strongly, within the delivery system and with insurers, for making the rewards greater for organizations that pursue a more physician-centric approach than for those that do not.
A version of this post was originally published on the Northeastern University D’Amore-McKim School of Business Blog.
Featured image credit: photo by ESB Professional via Shutterstock.

To understand modern politics, focus on groups, not individuals
Modern politics seems very ego-centric. It’s common and rational to focus attention on particular individuals, or individual leaders, and puzzle over their actions. For several decades, the social scientific approach to politics also focused on individuals as the unit of interest to explain outcomes and behaviors. However, our ability to offer explanations about puzzling phenomena in politics is limited when our eye remains trained on the individual. Expanding our understanding of contemporary puzzles requires a change in our perspective.
On the other hand, if we’re not talking about politics with respect to a particular personality, we tend to talk about politics in terms of relationships and networks. It’s a part of everyday parlance to say that something happened in politics because of who you know, who one’s connections are, or who is connected to whom. Whether it’s late night talk show hosts, news anchors, pundits, or commentators, there is a natural inclination to talk about relationships in politics. Sometimes the relationships are nefarious or insinuating; sometimes they are about rallying support. Whatever the context, the natural inclination to talk about politics in terms of how people and institutions are related infects both common and academic discourse.
Since the middle of the 20th century, political scientists and policy analysts have focused on individuals to explain what happens in politics. The dominant intellectual paradigm in political science was primarily borrowed from economics. In this framework, individuals make choices in the face of constraints imposed by institutions. This approach is very logical in the study of politics, since political actors–including voters, elected officials, media, advocates, and others–seek to achieve objectives while facing obstacles. The study of how individuals pursue these objectives and make their choices to produce political outcomes has been a fruitful lens through which to study politics for some time.
Working in the shadow of this paradigm were other scholars who viewed politics as a set of interactions. This minority of scholars focused on interactions, rather than individuals, as the analytical units of interest. Their perspective borrowed more from sociology and the behavioral sciences than from economics, and treated groups of people as the object of study. It was somewhat overshadowed by the individualist perspective for many decades. Academically, and socially, things are changing.
One can see the value of a social, relational, or network perspective by looking at particular examples where studies have shifted from a focus on the individual to groups. Take voting, for example. From the 1960s until just a few years ago, political scientists understood an individual’s decision about whether or not to vote in an election to be a choice determined by the incentives the individual faces. However, using a social, relational, and network perspective, we now understand that the decision about whether or not to vote is a social one, strongly affected by the context in which voters exist. It is impossible to understand an individual’s decision to cast a vote absent information about that individual’s social context.
A second example is the study of the relationship between democracy and the tendency of countries to go to war. Until a few years ago, the so-called “democratic peace” literature theorized that democracies are less likely to engage in conflicts than other forms of government. This explanation faced empirical challenges that could not be overcome when studying conflicts at the level of individual countries or pairs of countries. A social network perspective on the question shows that understanding the relationship between a country’s form of government and its tendency to engage in conflict requires an understanding of that country’s position in the world, relative to all other countries.
Insights like this are abundant in the modern political networks literature. They emphasize the need for, and the value of, a relational perspective in studying how and why things happen in the political world. If we remain focused on individuals, we run the risk of missing an entire set of contexts that may illuminate the questions over which we’re most puzzled.
It is no accident that this renaissance of understanding politics in social context is happening at the same time as an explosion in data availability and computational power. In contrast to traditional data-driven strategies in the social sciences, network-oriented data tend to be boundless. In traditional statistics, the power and leverage of causal inference depend on quality sampling, but such sampling strategies are often not available in studies of networks. Mathematically, sampling from a network is problematic, so the traditional strategies for causal inference are compromised in network studies. However, personal laptop computers now have the memory and computational capacity to handle many gigabytes, or more, of data. In some ways, our data and computational capacity have grown faster than our theoretical knowledge about how networks work. We now have access to more data than ever before, and now more than ever we require the theories and methods necessary for understanding the interdependencies in our social and political world.
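To make the sampling problem concrete, here is a minimal sketch in Python (a hypothetical illustration assuming the networkx library; the graph model, sample size, and seed are invented for the example, not drawn from any study discussed here). It estimates the average number of ties per person from a uniform random sample of a synthetic network and compares the estimate with the truth.

```python
# Minimal sketch: why a uniform node sample misleads about a network.
# Hypothetical example; requires the networkx library.
import random
import networkx as nx

random.seed(42)
G = nx.barabasi_albert_graph(10_000, 3, seed=42)  # synthetic scale-free network

def mean_degree(graph):
    """Average number of ties per node: 2 * edges / nodes."""
    n = graph.number_of_nodes()
    return 2 * graph.number_of_edges() / n if n else 0.0

sample = random.sample(list(G.nodes()), 2_000)  # uniform 20% sample of "people"
H = G.subgraph(sample)                          # keep only ties inside the sample

print(f"true mean degree:    {mean_degree(G):.2f}")  # roughly 6
print(f"sampled mean degree: {mean_degree(H):.2f}")  # roughly 1.2

# A tie survives only if BOTH of its endpoints were sampled (about
# 20% x 20% = 4% of edges), so the sampled network looks several times
# sparser than the real one, and naive inference from it is badly biased.
```

The point of the sketch is only that, unlike a random sample of independent individuals, a random sample of a network is not a miniature of the whole; the interdependencies are precisely what get lost.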
Individuals are consequential, but politics is about relationships. If we eschew a social perspective on politics because it is inconvenient or difficult, we are likely to misunderstand or misconstrue what we observe in the world. The idea that networks are a consequential component of our social lives is apparent in everyday life, and is increasingly a part of how we understand decision making by citizens, policy makers, and leaders. It’s an exciting time to be alive and watch the world transform its understanding of human behavior and interaction in real time through social media, and to simultaneously incorporate this knowledge into scholarship that explains and reflects our society. Meaningful contributions have already been made, but we are only at the beginning of drawing on this paradigm to advance our understanding of ourselves.
Featured image credit: Networking World by geral. Public domain via Pixabay.

Of microbes and Madagascar
Microbes are everywhere.
On door knobs, in your mouth, covering the New York City Subway, and festering on the kitchen sponge. The world is teeming with microbes—bustling communities of invisible organisms, including bacteria and fungi. Scientists are hard at work cataloging the microbial communities of people, buildings, and entire ecosystems. Many discoveries have shed light on how culture and behavior shape these communities. For example, we now know some interesting things about human skin microbes: hand microbes can be transplanted to other people and objects, wearing deodorant changes the microbes that live in our armpits, and people are more similar (in terms of microbe composition) to their own dogs than to other dogs.
Indeed, microbes are everywhere.
However, the majority of what we know about human-microbe interactions comes from studies in industrialized settings like the US and Europe, where people spend most of their time indoors and disconnected from the natural environment. This lifestyle is drastically different from that of early humans. Whereas we used to run across the savanna to hunt game, we now run on the treadmill to burn off that extra slice of pizza. Because our culture evolves faster than our bodies, we can become mismatched to our new environment, which often has direct consequences for our health. It is easy to imagine that our contact with the outside world, and its microbes, has changed since the era of our ancestors. What is less clear, though, is whether these changes create microbial mismatches that influence health.
Many human populations still live in close contact with the natural environment, and skin microbe communities in these settings often reflect regular interactions with the outdoors. Studies of these communities are important for considering mismatch, as this setting more closely resembles the environment in which humans evolved. With this in mind, I traveled to Mandena, a rural village in Madagascar, to investigate how contact with the natural environment affects skin microbes. Here, rice and vanilla farming is common, and many farmers use zebu (domesticated cattle) to work in the fields. We wondered if humans in close contact with zebu display a “microbial fingerprint” of interacting with livestock, similar to what would be expected in Westerners who interact closely with pets.
To research this question, we obtained skin swab samples from twenty men living in Mandena, with support from Duke University’s Bass Connections and the Duke Global Health Institute. We sampled four sites on each person (back of the hand, outside of the ankle, inside of the forearm, and armpit) and the back of each zebu. We predicted that the skin microbes of the ten men who work with zebu would differ from those of the ten men who do not. We also expected to find differences in skin microbe communities across body sites. For example, dry, bare feet that are exposed to the outside environment should harbor different microbes than warm, unexposed armpits.

We were surprised to discover that despite close contact with zebu, the skin microbe communities of zebu owners were not markedly different from those of men who did not own zebu. It may be that other factors, such as host genetics and skin pH, are more important in determining whether or not a given body site is a good home for microbes. However, there were clear differences in microbe communities across body sites. Ankle samples were the most similar to zebu samples, which is likely due to the shared environment of zebu and human feet (often without shoes) in the fields. Sure enough, zebu owners’ ankles harbored soil bacteria, including taxa that contain pathogens of humans, plants, and animals.
We also tested the hypothesis that there would be microbial similarities between a given zebu and its owner (remember the dog study?). Interestingly, we found that a zebu was no more similar to its owner than to other owners. We think these results indicate a type of environmental mismatch. Aspects of the built environment, like the use of cleaning products and air conditioning units, affect which microbes are present alongside humans and their indoor pets. The variability across home microbial communities likely amplifies microbial similarities within each cohabiting dog-owner pair. In other words, you and your dog are exposed to the same indoor microbes, which are likely different from the ones that your neighbor contacts in his home. In contrast, all homes in Mandena are constructed from plant material and more closely resemble the outside “home” environment of zebu. Thus, it is likely that all humans and all zebu have ample opportunities to contact similar types of environmentally-derived microbes.
Our results indicate that contact with the environment, not solely with zebu, is one driver of skin microbial communities in Mandena. Thinking about lifestyle differences that influence contact with environmental microbes can help to tackle issues of health disparities. If certain microbes are linked to disease, are people living or working in environments rich with those microbes more susceptible to getting sick? If so, how can we use our understanding of microbes and mismatches to tackle these problems? Incorporating the microbiomes of non-industrialized populations can help us understand how associated health outcomes differ across the world, especially in populations that are typically targeted for other global public health initiatives. Answering these types of questions is critical if we wish to use microbiome research to improve health, and will require interdisciplinary efforts across microbial ecology, evolution, and global public health.

December 19, 2017
Is yeast the new hops?
In recent years we have seen a revolution in brewing and beer drinking. An industry once dominated by a small number of mega brands has shifted so that bars and retailers across the world are offering a seemingly endless variety of beers produced by craft or speciality breweries. In the midst of all this new variety, one ingredient has been key: hops. Hops contribute the bitterness and much of the aroma to beer, and brewers appear to be in a constant battle to produce the most bitter and exotically hoppy beer possible. This is perhaps an understandable reaction to the era of bland lagers that were differentiated more by their marketing campaigns than by any clear difference between the products. But what now? Have we reached peak hoppiness in the beer market, and is further diversity even possible given the number of products already on the market?
The signs are that further development is possible and indeed underway, though perhaps from an unlikely source. The humble yeast, the workhorse of the brewing industry, is beginning to receive the recognition it has perhaps been missing. The brewing industry has traditionally relied on consistency to maintain customer loyalty, and breweries were proud to declare that they had been using the same yeast strain for generations. Even some craft brewers still view yeast as simply a means to an end: as long as the fermentation is completed, there is no problem. However, the role of yeast is much more interesting and varied than that of a simple alcohol factory. During fermentation, yeast produce a range of aromatic compounds that give beer its characteristic flavour and vivacity. The next time you drink a wheat beer, remember that the prominent spice and banana flavours come not from barley or hops but directly from the yeast. Yeast also removes the raw, grain-like taste of wort, the syrupy malt extract from which beer is derived.

The burgeoning interest in yeast has been inspired by a realization that yeast is not just one thing. There are specific strains for specific products: wheat beers, bitters, lagers, and so on. And it’s not just strains of one species. The complex and funky flavours in Belgian lambic beers, for example, are due to a consortium of different yeast species, each lending specific flavour notes during the long lambic fermentation process. This relationship between yeast complexity and flavour complexity has been appreciated for some time in the wine industry, where the presence of different yeast species is encouraged and often facilitated by winemakers. This attitude is, however, still new to the brewing industry.
We know of over 2,000 different yeast species, though only a handful have ever been used for brewing. However, even this small number has produced interesting flavours. Yeasts with exotic names like Torulaspora and Kluyveromyces, for example, are known to produce distinct flavours of bubblegum and rose. Another yeast, Lachancea, adds a crisp acidity to beer, along with pleasant fruit aromas. With the aid of such yeasts a brewer can generate innovative flavour profiles, completely naturally and without resorting to artificial flavourings. Such bioflavouring is perhaps most important in the production of low-alcohol beers, which are often derided as lifeless and flat, an unfortunate consequence of fermentation processes that discourage yeast activity to limit alcohol production. Many non-traditional yeasts are perfect for the production of low-alcohol beers due to their inability to utilize the complex sugars in brewer’s wort, while still producing typical beer flavours.

By far the most commercially important brewing yeast is the one that drives the lager brewing industry. This yeast, Saccharomyces pastorianus, is unique in that it combines two features rarely seen in the same species: the ability to convert the complex sugars in wort to alcohol, and the ability to do this at low temperature. The latter feature is crucial, as the crisp freshness of lager beers requires a cold fermentation. The uniqueness of this yeast stems from the fact that it is not a true species at all, but rather a two-species hybrid – in effect, more of a mule than a workhorse. The two species involved, S. cerevisiae and S. eubayanus, provide, respectively, good fermentation ability and the ability to withstand very low temperatures. Without the happy marriage of these two species, lager beer as we know it would not be possible.
How the two parental species originally united is still a matter of debate. The S. cerevisiae parent was almost certainly an ale strain, and the S. eubayanus parent probably a contaminant in the original fermentation. Curiously, though, S. eubayanus has never been found in Europe (the birthplace of lager beer) and appears, rather, to be a native of South America and Asia, raising the question of how it found its way to Central Europe to kickstart the lager brewing industry sometime around the 15th century. However it happened, it was an unexpected and rare event that may have occurred only once or twice in history. Essentially, lager brewers have been using the same one or two strains for centuries – perhaps one of the reasons for the aforementioned sameness of lager beers. This situation has now changed with the recent discovery of S. eubayanus, originally in Patagonia, but later in other regions. Once the absentee parent had reappeared, it became possible to artificially recreate the historic hybridization event that originally led to the emergence of the lager yeast. This has now been done by labs in Belgium, Finland, the Netherlands, and elsewhere, greatly increasing the number of new strains that can be applied in the brewing industry. Blandness is no longer an issue; lager yeasts with specific traits can be obtained simply by choosing parents with the required properties. Some of the new strains even appear to be better at fermentation than the natural hybrid yeast that has been utilized for generations.
The hunt for S. eubayanus in Europe continues, and its location will surely shed light on the complex family tree of brewing yeast. In the meantime it has been realised that many Saccharomyces species are, like S. eubayanus, cold-tolerant. Also like S. eubayanus, they have little ability to ferment wort. However, when hybrids are created by mating ale yeast with some of these alternative species, the results have been surprisingly good, with some of these alternative hybrids even performing better than the S. cerevisiae × S. eubayanus hybrids. There is now the opportunity to make new lager yeast hybrids with the aid of local wild species. The majority of these species have never been used in any biotechnological process, and similar attempts could be made to take advantage of the phenotypic properties of other ‘wild’ yeasts in all the processes that involve yeast fermentation: baking, soy sauce, wine, kombucha, and cider fermentation, even the production of bioethanol or commercially important chemical precursors. A spirit of experimentation in the brewing industry has inspired a fresh look at what can be achieved by studying the basic biology and biogeography of wild yeast, and may yet lead to many more innovations that could be applied in a range of industries.
Featured image credit: Beer before meal by Jakub Kapusnak. CC0 Public Domain via Foodiesfeed.
