Oxford University Press's Blog

June 14, 2018

Five ways entrepreneurship is essential to a classical music career

The other day, I posted something on my professional Facebook page about entrepreneurship and my compositional activities, and someone who I don’t know commented: “Forget entrepreneurship. Just compose.” (Well, they actually put it in somewhat more graphic terms, but in the interests of decorum…)


This sentiment is nothing new: resistance to “the e-word” continues; if anything it’s intensified in recent years as entrepreneurship has become an over-used buzzword. After all, when something becomes a buzzword it tends to be misappropriated (if not completely misunderstood), which in turn will inspire others to push back (sometimes, as with my commenter, quite bluntly).


Given this dynamic, clearing up common misunderstandings about the nature of entrepreneurship and how it can operate within our careers (as well as calling out those times when the term is too casually thrown around) becomes all the more urgent for us arts entrepreneurship educators. In that spirit, I offer these five reasons why entrepreneurship is essential to a classical music career. I hope they might help you re-think “the e-word” and contemplate how it can empower your own career goals.


1. Entrepreneurship helps you identify opportunities


Entrepreneurship is all about problem-solving, about meeting needs in your community or within the marketplace you hope to reach. So a big part of adopting an entrepreneurial mindset is developing what I call “strategic observation.” That is, observing your field, area of interest, market, or community with an eye for musical opportunities. Oftentimes, these opportunities exist completely outside the arts and culture sector. For instance, is there a community event or milestone for which music could be used as part of the celebration? Is there a social need that could be addressed through music (engaging the homeless or the incarcerated)? Opportunities outside the traditional tracks of concert performance or studio teaching can have some considerable advantages, including the fact that nobody else is doing it (not to mention untapped funding sources to utilize). Once you begin to develop the habit of “strategic observation,” you’ll begin to realize that you’re surrounded by opportunities you hadn’t seen before!


2. Entrepreneurship requires creativity


Every year in my entrepreneurship class I pose this question to my students: Do you think that classical musicians get much chance to be creative in their careers? And the answer is always a resounding no. Why that’s the case is another essay altogether, but it raises the question: if traditional career paths in classical music do not afford a lot of opportunity for creativity, novelty, or innovation, then how else can we introduce creativity into our lives? One way is through entrepreneurship, because creativity goes hand-in-glove with the strategic observation we just talked about. Let’s say you’ve identified a big civic event in your community, but it’s not immediately clear how your string quartet might be a part of it. That’s where creative problem-solving comes into play: coming up with an idea for your group to perform, attract an audience, generate some press or social media buzz… and get paid! Once again, this might not be something you’re used to, but once you get in the habit of creative problem-solving, you’ll discover how fun it can be.



…if traditional career paths in classical music do not afford a lot of opportunity for creativity, novelty, or innovation, then how else can we introduce creativity into our lives?



3. Entrepreneurship allows flexibility


Another key entrepreneurial principle is that of flexibility and adaptability. Surely one of the constants in today’s society is that things are always changing. Moreover, that change is often driven by a complex web of factors, meaning our first stab at solving a problem or meeting a need in our community might miss the mark. When this happens, we can respond in one of two ways: we can just shrug our shoulders and try something else, or we can evaluate what went wrong and how we might fix it. By focusing on the needs and sensibilities of our audience (our market), entrepreneurship provides a rubric under which we can engage in evaluating and fixing that which is broken. Oftentimes, the core of our idea is strong, but there’s some aspect of the customer experience we’ve overlooked that prevents it from succeeding. By pointing us in the right direction, entrepreneurship can help us tweak our venture rather than just throw it out.


4. Entrepreneurship is about connecting with people


At the heart of entrepreneurial thinking is the notion that value for something is unlocked when the needs of the customer are met. For us in music, that means that value for our work is unlocked when we can make a meaningful connection with our market—whether that’s our audience, our students, or our community at large. As musicians, connecting with people ought to be at the center of why we’re in the field to begin with! In other words, entrepreneurship is a mechanism for us to achieve our core purpose: using music as a vehicle for making a positive difference in the world.


5. Entrepreneurship gives us options


There are any number of studies documenting the fact that the vast majority of folks end up pursuing more than one career path over the course of their working lives. I know that’s certainly true for me, and I’ve seen it in the musical careers of countless colleagues. For some, this prospect is a scary one: we went to school to study music, and we may feel we have few skills beyond those that have served us as performers, composers, or educators. This is where entrepreneurship can demonstrate its enduring value, because it is universal. Rather than having a specific outcome in mind (that is, always resulting in the same thing), entrepreneurship allows us to continually reinvent ourselves. That’s because entrepreneurial principles—things like opportunity recognition, focusing on customer needs, and being able to adapt to changing circumstances—can play out in an infinite number of ways. It can result in a brick-and-mortar business or an online marketplace; a not-for-profit social enterprise or an innovative piece of technology. And it can apply to how we conduct our individual careers as musicians. It’s truly one of the only skill sets I can think of that has no limits in terms of how it can be applied.


We know that the musical marketplace is fraught with challenges, is constantly changing, and contains a lot of competition. Embracing an entrepreneurial approach to these challenges not only increases our chances of success, it can also empower and even inspire our artistic lives. And that means it has a vital role to play in any musician’s career.


Featured image credit: “Untitled” by Jason Tong. CC-BY-2.0 via Flickr.



June 13, 2018

The amorous and other adventures of “poor pilgarlic”

The word pilgarlic (or pilgarlik and pilgarlick) may not be worthy of a post, but a hundred and fifty years ago and some time later, people discussed it with great interest and dug up so many curious examples of its use that only the OED has more. (Just how many citations the archive of the OED contains we have no way of knowing, for the printed text includes only a small portion of the examples James A. H. Murray and his successors received.) There is not much to add to what is known about the origin of this odd word, but I have my own etymology of the curious word and am eager to publicize it. Besides, one of my sources is an article in the Boston Evening Transcript for 1883, and I suspect that the back numbers of this illustrious newspaper (it ceased publication in 1941) are not everybody’s most common reading. So why keep something so useful under a bushel?


Dictionaries agree that pilgarlic (very often with the epithet poor appended to it) means “wretch.” Todd’s Johnson says: “A poor forsaken wretch,” thus emphasizing the person’s isolation. Also, pilgarlic is (or was) a facetious, half-contemptuous designation of a bald-headed man, with reference to peeled garlic. The confusion of the so-called short and long [i] in English dialects is an old phenomenon. For instance, Jacob “set the rods which he had pilled [= peeled] before the flocks….” Shakespeare punned on ship ~ sheep. George Eliot devoted a passage to the ship-sheep confusion in her Middlemarch. The verb peel is an early borrowing from Latin, and, since pill is a variant of it, pilgarlic looks like a legitimate variant of peelgarlic. But what stands behind this exotic metaphor?


Not Dickens’s lone, lorn creetur, but certainly a poor pilgarlic. Image credit: Old Woman Human Person Invalid Stroke Disease by lagrafika. CC0 via Pixabay.

The word must have been “low” at all times. The peak of its popularity falls on the seventeenth century. The OED refers to the word’s use in dialects, and correspondents to the early Notes and Queries, that is, to the time long before the appearance of Joseph Wright’s English Dialect Dictionary, confirmed this fact. Here are two quotations dating to 1883 (not from the Boston Evening Transcript!): “In Staffordshire [the word] was used some sixty years ago by old people to describe some one, frequently themselves, on whom some unfortunate responsibility had fallen, in which they were likely to be the scapegoat, or ‘poor Pill Garlick,’ as they put it.” (Staffordshire is a county in the West Midlands.) And: “The word is by no means out of use to describe people of the Mrs. Gummidge type.” Mrs. Gummidge, it will be remembered, was a widow, living in Mr. Peggotty’s house (David Copperfield), who constantly thought of her “old ‘un” and began every statement with: “I am a lone, lorn creetur’.”


In our earliest sources, pilgarlic was associated with syphilis, though one finds only an obscure reference to “a venereal disease.” However, later, when even such hints could not be mentioned in print, those who wrote about pilgarlic stressed the connection between isolation (or being “forsaken”) and leprosy, as it was treated in the Middle Ages and beyond. However, baldness is not one of the symptoms of this disease. By contrast, syphilis is often accompanied by hair loss, though among the many visible marks of syphilis baldness is certainly not the most prominent one. The path from either disease to baldness remains unclear.


Leprosy, which today is fully curable, was the most feared disease in the Middle Ages. Image credit: Omne Bonum by James le Palmer. Public Domain via Wikimedia Commons.

The fact that garlic produces an offensive smell from the mouth and thus makes the person an “outcast” needs no proof. “The term pil-garlick (sic), as we now hear it occasionally used in conversation [1859!], has this peculiarity, that it not only signifies, in a general sense, one who has suffered ill-treatment, but, specifically, one has been abandoned by others, and left in the lurch…. Garlick of necessity isolates. The Greeks forbad those who had eaten garlick to enter their temples. But, connected with our mediæval therapeutics, there was a peculiar case, in which those who had to do with garlick were placed in a state of isolation.” This case is said to be leprosy. In conclusion: A bald head resembles a head of peeled garlic, garlic isolates, and so do leprosy and, presumably, syphilis; ergo: a bald-headed man, forsaken by society, is called a pilgarlic. I am not sure that this reasoning deserves the name of a syllogism.


One can find another approach to the word. Hensleigh Wedgwood, the main etymologist of the pre-Skeat era, wrote in his etymological dictionary that a pilgarlic is someone who peels garlic for others to eat, who is made to endure hardships of ill-usage while others are enjoying themselves. I know nothing about the profession of garlic-peelers (I had enough trouble with mad hatters: see the posts for 24 January and 31 January 2018). In any case, pilgarlic emerged from Wedgwood’s definition as some sort of metaphor. Thus (such is my inference) could we call someone who pulls chestnuts out of the fire a chestnut. Perhaps so, but what about the earliest reference to syphilis, and why baldness? According to a similar approach: “Pillgarlic was undoubtedly a name applied to the scullion in ancient kitchens, to whom was assigned the lowest service—of peeling or skinning the onions or garlic….” (1883). No one who proposes an etymology containing the word undoubtedly should be taken seriously.  If the writer had any evidence, he would have presented it. And again: what about baldness and syphilis?


Chaucer’s Pardoner still in full glory. Image credit: The Pardoner in the Ellesmere manuscript of Geoffrey Chaucer’s Canterbury Tales by Anonymous. Public Domain via Wikimedia Commons.

Some light on our word or phrase may come from the old allusion to pulling (not peeling) garlic. I am not sure my fantasy is worth anything, but here it is. In the old discussion about the origin of pilgarlic, the Pardoner’s plight was of course noticed. When Chaucer’s pilgrims arrived in Canterbury, the Pardoner made an appointment with the Tapster. He gave her money to buy a good supper, but on his return he found that his place was occupied by another man, who ate his goose, drank his wine, and beat him with his own staff, while he spent the night under the stairs in fear of the dog. “And ye shall hear the tapster made the Pardoner pull/ Garlick all the long night till it was near end day.” The author of a note in Notes and Queries (2/VIII, 1859: 229) wrote: “The derivation of this term seems one of those that is impossible to guess at. The way in which Chaucer speaks of pulling garlick evidently points to some popular anecdote which gave meaning to the phrase.”


I have made a most perfunctory search of annotated editions of Chaucer’s poem and found nothing like what I am going to propose. Some Chaucer specialist who may read this post will be able to refute or support my idea. I believe the idiom is incredibly obscene: to pull garlic must have meant “to masturbate” (in this case, the reference is of course to a cheated, “forsaken” man).  It sounds like our spank the monkey, flog the dolphin, and choke the chicken. I’ll leave it to our readers to guess which part of the Pardoner’s exposed anatomy looked like a clove of garlic and a bald head (modern wits call it a bulb or mushroom tip, among dozens of others). And, to be sure, pulling and peeling garlic was never thought to be a pleasant occupation. The rest of my reconstruction is much less secure. There might have been a near-synonym for pull garlic, namely pill ~ peel garlic. Perhaps a hapless lover ousted by a rival or someone who contracted a venereal disease was called pullgarlic and pil(l)garlic(k), but only the second word has survived. Then baldness came to the front, though being forsaken did not go away. References to leprosy have no support in the evidence, while only one short step separates “a man who contracted the pox” from “an unsuccessful lover” and “a solitary person in misery.”


Featured Image: Garlic pulled and pilled. Featured Image Credit: “Garlic Flavoring Food Seasoning Condiment Pungent” by stevepb. CC0 via Pixabay



Crises and population health

This piece was originally published by Milbank Quarterly.


On the day after the horrific shooting that claimed the lives of 17 students and staff members at Stoneman Douglas High School in Parkland, Florida, the local state representative predicted what would happen next.


“Nothing.”


“We’ve seen this show before,” said state Representative Jared Moskowitz. “Now it’s in my hometown. While my four-year-old son was learning to write his name in preschool, his teacher’s daughter was killed in the shooting. We live in the most powerful country in the world and we have failed our children.”


And yet—within a few short days, something changed. Not the facts about the tragedy, but the perception of the problem. A wave of advocacy by high school students has created intense pressure on politicians who have avoided taking on the gun lobby, and has opened a national debate on what’s needed to prevent mass shootings in the United States. More than a dozen businesses have broken off ties with the National Rifle Association, the pro-gun Florida governor backed a package of gun control measures, and even President Trump began to consider ideas previously ruled off the table.


It’s true that many of these proposals fall short of what’s needed to reduce the deadly toll of gun violence in the United States, and as of this writing, it is not clear how far the ripples from Florida will spread. But in just a week, the conventional wisdom that gun control measures are politically impossible has been upended.


As in economics and political science, much of the thinking in public health takes place within a set of assumptions that what has happened before will happen again. The usual approach to population health starts with needs assessment, coalition building, and strategic planning. And yet the broad sweep of history indicates that moments of high stress can create unexpected openings for change.



History indicates that moments of high stress can create unexpected openings for change.



It was, after all, deaths from tainted elixir sulfanilamide that directly led to the Food, Drug, and Cosmetic Act of 1938, the world’s first law requiring medications to be shown to be safe prior to sale. Then, in 1961, the thalidomide disaster spurred legislation requiring “adequate and well-controlled studies” prior to approval of medications. Major legislation on device regulation, food safety, emergency preparedness, HIV/AIDS, and vaccines for children each followed crises that galvanized national attention.


There are three key lessons from this history.


First, the social and political response to a crisis does not directly correlate with the scale of illness or death. Thalidomide caused just a few serious birth defects in the United States, compared to an estimated 10,000 in Germany; yet while Germany was mired in litigation for years, it was the United States that first passed the strong laws on drug development. The Las Vegas shooting in October caused 58 deaths and more than 800 injuries, and the Pulse shooting in Orlando claimed the lives of 49 people with more than 50 injuries, without much of a movement for policy change in their wake.


In his 1969 book Crises in Foreign Policy, Charles Hermann defined a crisis as having three attributes: a threat, a short decision time, and surprise. Arjen Boin and his colleagues described crisis as “a serious threat to the basic structures or the fundamental values and norms of a system, which under time pressure and highly uncertain circumstances necessitates making vital decisions.” And Dominic Elliott and Denis Smith have noted that “[a] defining characteristic of crisis lies in its symbolism.” Together, these definitions make clear that the core part of crisis is a perception that the legitimacy of those in power depends on what happens next.



Image credit: “Measles. This child shows a classic day-4 rash with measles.” by CDC/NIP/Barbara Rice. Public Domain via Wikimedia Commons.

Second, preparation matters. If, as Louis Pasteur famously said, chance favors the prepared mind, then crisis favors the prepared organization. In 1937, the FDA responded rapidly to the sulfanilamide poisoning, emptying its offices to send inspectors across the country to recover unused medicine. The nation then listened to the agency’s case for what was needed to prevent future tragedies. In the early 1990s, the Centers for Disease Control and Prevention and legislators, led by Congressman Henry A. Waxman, responded to a national measles outbreak and developed the Vaccines for Children program—an initiative that has since protected millions of children from preventable disease. More recently, the Disneyland outbreak of measles opened a door for state Senator Richard Pan, a physician, and colleagues to pass legislation closing loopholes in vaccination requirements in California.


Today, those working for population health improvement should develop “in case of emergency, break glass” plans for jumping into crisis situations, helping to resolve the immediate issues, and then pivoting quickly to make the case for policies that can address underlying problems. These plans should be based on solid data and a credible understanding of the issues at stake.


Third, awakening a sense of crisis requires a lot more than throwing around the word crisis. There’s a danger for health officials in being seen as “Chicken Littles,” always claiming the sky is falling and imploring others to act. Declaring crises constantly can distract from the critical work of implementing effective programs and, in a worst-case scenario, encourage politicians to announce minimal steps to win credit in the media and with the public that have little effect on fundamental gaps.


And yet—a crisis can lead to real change and improved health, ending many years if not decades of frustration. Those health leaders who manage crises effectively can earn a chance to push the policy process in unforeseen directions. But those who avoid crisis leadership are leaving one of their most important tools back in the toolbox.


Just a few days after his pessimistic assessment of the possibility for change in Florida, Representative Moskowitz told NPR he was hopeful that Florida would take some meaningful action because of the Parkland shooting. “The students have been the ones that have been able to articulate this message so clearly of what the failures were, what they want to see. And they’re not just talk. They’re action. They’re coming up to Tallahassee. They understand there’s a limited window here to do something.”


In showing that the ground can shift in the politics of health at a moment’s notice, the Parkland students have become the teachers.


Featured Image Credit: “Secretary Johnson pays Respect at Pulse Nightclub” by US Department of Homeland Security. Public Domain via Wikimedia Commons.



The IMF’s role in the evolution of economic orthodoxy since the Crisis

The IMF & World Bank’s Spring meetings with finance ministers and central bankers, which took place in Washington DC recently, are one key forum where the IMF performs its mandated role as conduit of international economic co-ordination. The IMF uses its knowledge bank, expertise and mandate for economic surveillance and coordination to act as global arbiter of legitimate or ‘sound’ policy. This can shape the international economic policy debate, and how salient economic issues are understood. As such, the IMF’s interpretive framework for evaluating economic policy is a key site of power in world politics.


Christine Lagarde set out the IMF’s policy priorities in a recent speech, warning of the dangers of rising protectionism. Eschewing the Fund’s customary understatement, Lagarde ominously stated that the multilateral trade order, that ‘system of rules and shared responsibility’ is today ‘in danger of being torn apart’.


For the liberal-oriented IMF to urge all to ‘redouble our efforts to reduce trade barriers and resolve disagreements without using exceptional measures’ is perhaps not surprising. However, other elements of the Fund’s prescriptive discourse clearly indicate significant evolution at the IMF since the maelstrom of the global financial crisis a decade ago. This, as Lagarde put it, is ‘not your grandmother’s IMF’. Key departures include concern that macroeconomic policy should reduce inequality, a more sceptical view of financial markets and their causal links to instability and systemic risk, and heightened appreciation of ‘non-linear’ threats such as deflation and stagnation. Gone, too, are the days of ‘one-size fits all’ policy recommendations. The post-crash Fund offers more differentiated policy advice, and betrays much less fiscal and intellectual conservatism than it used to.


The Fund played a key role in characterising the nature of the economic policy problems generated by the 2008 crisis. Its crisis-defining economic ideas, and its crisis-legacy-defining ideas, were important in constructing particular interpretations of the global financial and Eurozone crises. These prioritised particular policy responses – notably calling for more activist counter-cyclical policy. This, the IMF continues to underline, can be hugely significant in reducing the losses in output arising from recessions and economic crisis. They also advocate that economic policy should do more to tackle inequality. These evolutions unearth an important but under-explored linkage between Fund ideas and ideational influence, on the one hand, and the policy space enjoyed by governments, on the other.



IMF Headquarters, Washington, DC, by International Monetary Fund. Public domain via Wikimedia Commons.

Fund understandings of the nature and potential scale of market instabilities and systemic risks posed by fragile financial systems have evolved markedly. Indeed, it is much more vigilant about the possibility that financial market developments might lead to another financial crisis. This explains calls for more muscular, more counter-cyclical regulation of markets, and bigger, more powerful ‘firewalls’ to limit financial contagion. Gone are the assumptions about the inherent stability and efficiency of financial markets. The IMF’s take-home point is that financial stability needs to be actively nurtured and sustained by governments, by central banks, and by bodies like the IMF. It cannot be taken for granted. As Lagarde puts it, ‘we must keep our financial systems safe by avoiding a rollback of the regulatory framework put in place since the global financial crisis to boost capital and liquidity buffers.’


The IMF’s somewhat surprising focus on inequality is justified as central to the Fund’s core mandate because IMF Research has unearthed a link between higher inequality and lower growth. The Fund urges governments to ensure that the burdens of adjustment and benefits of economic recovery are distributed equitably, to target spending, transfers and automatic stabilisers on lower earners. Repeated IMF advocacy of redistributive fiscal policy, and greater progressivity of income tax, contrasts starkly with the IMF’s traditional reputation for imposing harsh austerity measures.


Analysing how the IMF contributes to prevailing understandings of sound or appropriate economic policy reveals the malleable, contingent, and changeable nature of economic orthodoxy. How prevailing views of ‘sound’ policy change is a deeply political process in which the Fund, for all the emphasis the institution places on the technocratic, scientific nature of its work, is intimately involved.


To understand these shifts, it is important to grasp how the production of economic policy knowledge is a social process, and that there are a set of social norms within the IMF which need to be navigated by IMF staff and leadership seeking to advance or promote particular economic policy understandings or positions. The new ideas need to be reconciled to Fund standard operating procedures, and its sedimented economic policy knowledge. They must also be amenable to corroboration using the varied methodological techniques and other approaches favoured within the Fund. To achieve maximum longevity and increase their chance of shaping ‘how the Fund gets things done’, they need to be, as one insider put it, ‘baked into guidance’. That is, they need to be incorporated into technical notes circulated to Fund desks as a guide to their operational work.


Yet once ideas gain acceptance internally, the real challenge comes when Fund leadership and surveillance missions seek to gain ‘traction’ for new IMF thinking with governments, and shift the wider economic policy debate. Whilst the IMF enjoys a privileged position in the construction of economic rectitude, it has little direct leverage over non-borrowing countries.


For example, the Fund calls urgently for more muscular counter-cyclical regulation and larger financial firewalls: it seeks the creation of a ‘strong global financial safety net’ where ‘the IMF plays a central role in helping countries to better cope with capital flow volatility in times of distress.’ Yet the realisation of this ambition is unlikely: it requires greater resources than the Fund enjoys, and it requires buy-in from powerful member states, amongst others. Similarly, repeated IMF entreaties to make income tax more progressive and use more social transfers to reduce inequality may, in some cases, fall on deaf ears. Thus, in this respect, the IMF’s reforming reach exceeds its grasp.


Featured image credit: 20120916 88 Mt. Washington Hotel, Bretton Woods, NH by David Wilson. CC-BY-2.0 via Flickr.



Ten things to know about women’s ordination in the United States

Pope Francis recently appointed three women for the first time to the Congregation for the Doctrine of the Faith, an important advisory body to the Pope on matters of Catholic orthodoxy. He has also recently established a commission for studying the role of women deacons in the early Christian church. While encouraging for supporters of women’s ordination in the Catholic Church, Pope Francis has also made it clear that he is keeping the door firmly shut in terms of the possibility of women priests.


Elsewhere, the Church of Jesus Christ of Latter-day Saints excommunicated feminist activist Kate Kelly in 2014 for advocating for women’s ordination. At the same time, LDS leaders have also expanded roles for women in the faith’s semiannual conferences and global governing committees.


In 2015, Katharine Jefferts Schori ended a historic term as the first female Presiding Bishop of the Episcopal Church, nearly forty years after the denomination opened the priesthood to women. Even in American denominations that have ordained women for decades, however, questions about pay equality and sexism toward women pastors and priests continue.


This ambiguity toward the role of women in American religious organizations is emblematic of wider conversations about gender equality and women’s roles in American society. Thus, understanding the dynamics of women’s ordination in religious congregations can reveal important insights into wider trends and the intersection of gender and leadership in America today.


Dozens of one-on-one interviews, as well as a nationally representative public opinion survey, have provided us with a contemporary snapshot of women’s ordination in American congregations, investigating two primary questions: 1) who supports female clergy in their congregations and why?, and 2) what effects do female clergy have on those in their congregations, especially young women and girls?


Here are ten things you should know about women’s ordination in the US:



1. A little over half (55%) of Americans who attend religious services at least occasionally say that their congregations allow women to serve as their principal leader, although only 9% currently attend a congregation where a woman is serving in that capacity. Thus, women’s ordination in America is more common in principle than in practice.
2. Religious traditions and denominations in the United States that generally permit female clergy in their congregations include American Baptists, United Methodists, the Evangelical Lutheran Church in America, Presbyterian (USA), the Episcopal Church, Buddhism, Reform/Conservative Judaism, and Unitarian Universalists. Those that generally prohibit female clergy include the Roman Catholic Church, Southern Baptists, Jehovah’s Witnesses, Orthodox Judaism, Mormons, and Muslims.
3. Two of every five American worshipers say that they “strongly prefer” that their congregation allow women to serve as their principal religious leader. When added to the 32% who say they “somewhat prefer,” it makes for nearly three-quarters (72%) of American worshipers who say that they support women’s ordination. This includes 68% of Evangelicals, 85% of Mainline Protestants, and 70% of Catholics.
4. 70% of female worshipers say that they support women’s ordination in their congregations. This is, however, nearly identical to the 69% of male worshipers who say the same. In other words, women are no more or less likely than men to support or oppose female clergy in their congregations. Contrary to what might be expected, gender does not structure attitudes toward women’s ordination in American society today.
5. Instead, those most supportive of female clergy in their congregations are theological modernists who believe that their traditions should adapt to modern sensibilities, those who identify politically as liberals and Democrats, those who currently attend congregations that allow for female clergy, and those who attend religious services less frequently.
6. When asked about their support or opposition to female clergy in their congregations, the most common reasons included scriptural authority, personal experiences, and gender stereotypes. These three issues were cited by both those in favor and those against female ordination, each side selectively applying arguments and experience to support its position.
7. While support for female clergy is high, only 9% of worshipers report that they would personally prefer that their own congregation’s leader were female. This might help partially explain the persistent gender gap in the leadership of American congregations, even among those that have gender-inclusive leadership policies in place.
8. While many people are quick to say that it “doesn’t matter” whether their congregation’s principal leader is male or female, they are quick to point to a variety of ways in which they have personally seen that it does matter in their own lives. Specifically, they tend to focus on ways that gender affects what type of counseling clergy are able to provide (in talking about issues such as rape or abortion, for instance), as well as the ways that female clergy can often successfully attract young people and families to their congregations.
9. In our survey, women who had influential female clergy growing up have higher levels of self-esteem as adults, as well as higher levels of education and full-time employment, compared to those who had only male leaders. They are also more likely to think about God in graceful/loving terms instead of a more authoritarian/judgmental way. This is important because self-esteem, education, and one’s view of God have all been linked to psychological and emotional health and well-being. Thus, female clergy can indirectly improve future levels of health, well-being, and economic empowerment of young women and girls in their congregations.
10. Politics structures attitudes toward and responses to women’s ordination more than gender. Political liberals, both men and women, are most supportive of female clergy and are also the most likely to disengage from their religious communities if their congregations maintain male-only leadership policies. This is yet another example of how politics is driving religious identity and affiliation much more often than the reverse in contemporary American society.

Featured Image: Brown Wooden Church Pew by MichaelGaida. CC0 via Pixabay.



June 12, 2018

The gravity of gravitational waves

Rarely has a research field in physics gotten such sustained worldwide press coverage as gravity has received recently. A breathtaking sequence of events has kept gravity in the spotlight for months: the first detection(s) of gravitational waves from black holes; the amazing success of LISA Pathfinder, ESA’s precursor mission to the LISA gravitational wave detector in space; the observation — first by gravitational waves with LIGO and Virgo, and then by all possible telescopes on Earth and in space — of the merger of two neutron stars, an astrophysical event that likely constitutes the cosmic factory of many of the chemical elements we find around us.


There is a reason for this attention. Detection of gravitational waves has been one of the Holy Grails in physics. In 1916, Albert Einstein had already calculated that gravity cannot propagate instantaneously and must instead travel in the form of waves at the same speed as light. Any accelerated motion of celestial bodies, he calculated, generates a flux of such waves of gravity that travel towards us.


It took about 40 years until other scientists figured out that such travelling waves of gravity could be detected, at least in principle, by looking at the minute mutual accelerations they would impress onto a set of free-falling test-particles (gravity always shows up by accelerating test-particles—think of Galileo’s stone accelerating from the top of the Leaning Tower of Pisa). Even after this, it took another 60 years to detect those minute accelerations!


It was during these 60 preparatory years that scientists slowly realized that, once detectable, gravitational waves would turn into a formidable new tool to investigate the universe. There is a powerful analogy between gravitational waves and sound. Both record the motion of the source — the vibration of a string or the collapse of a binary black hole — and carry this information to a faraway detector where it is recorded by the motion of a sensing body: your eardrum, or the LIGO and Virgo test particles. Thus the 2016-17 revolution has, in a sense, added sound to our methods of investigating the universe and the bodies that populate it. Sound may be very important when exploring a place like our universe, since more than 99% of it is pitch dark.



Detection of gravitational waves has been one of the Holy Grails in physics.



It is amazing that at the same time as this discovery was made with ground-based detectors, scientists were also able to clear the way to the realization of even more powerful detectors in space that will be able to hear cosmic sounds emitted throughout the universe.


Comparing the impact of such gravitational revolution to Galileo’s telescope might be an exaggeration, but the analogy comes naturally to mind.


Gravity has continually been changing our view of nature in many other respects, even if these come with less media glamour. It is by studying gravity that we first guessed the existence of, and then found, black holes: mind boggling places near which time stops flowing, from which (almost) nothing comes back, and the surface of which behaves like a cosmic solid state memory, storing any information that has been thrown at it.


By studying gravity within galaxies and in the universe at large, we have found that something is missing, that, judging from the intensity of gravity, there is a hundred times more matter and energy in the universe than what we can see with telescopes. The need to understand the nature of this dark universe pushes scientists to question whether the model of the constituents of matter that we have built up with our laboratory experiments is telling the whole story, or if there are instead some more elusive particles we have not yet been able to detect.


There is one question haunting physicists researching the invisible force. We know a fair amount about how gravity works in the universe and in ordinary life, but we don’t really understand how it works in the subatomic world. While all other interactions between elementary particles are ruled by quantum mechanics, our best theory of gravity, Einstein’s Theory of General Relativity, does not abide by the same rules. While gravity among elementary particles is negligible in the laboratory, during the Planck era (the first instants of the universe) gravity dominated everything, which means quantum gravitational effects must have been important.


Thus, it is not surprising that understanding how gravity works in the quantum regime has been and still is the Holy Grail of physics, even more so than the detection of gravitational waves. The good news is that the observation of primordial gravitational waves—emitted at the Planck era or thereabouts, and possibly detectable by LISA or by other indirect observations—may shine the necessary light to, at least, find our way to the Holy Grail.


Featured image credit: “Lost in the Milk” by Luke Dahlgren. Public Domain via Unsplash.



Divine victory: the role of Christianity in Roman military conquests

The Roman Empire derived its strength from its military conquests: overseeing territories across Europe, Africa, and Asia. Before Christianity, emperors were praised and honored for their successes on the battlefield. As Christianity took root throughout Rome, it was used as a means to elevate emperors to an even greater status: raising them from successful imperialists to divinely appointed leaders.


The below excerpt from Rome Resurgent: War and Empire in the Age of Justinian breaks down how the Romans incorporated Christianity into their imperial culture.


Throughout imperial history, the prime virtue required of all Roman emperors was victory: the capacity to win on the field of battle.


This changed not a jot with the advent of Christianity, for an extremely straightforward reason. Military victory had an ideological and political impact which neither the religious nor civilian dimensions of the imperial job description could possibly match. Both of the latter could be trawled (and regularly were) for possible signs of divine favour (excellent doctrinal settlements; laws which secured the operation of civilitas, etc.), but an emperor’s actions on these fronts were always open to question. Doctrinal settlements always involved losers, who continued to deny, often for decades, the legitimacy of what had been decided.


Military victory, by contrast, had a legitimizing power which no other form of imperial activity could even begin to match. There could be no clearer sign of favour from an omnipotent divinity than a thumping military victory over barbarian opponents who, by definition, stood in a lower place in the divine order of creation. Throughout imperial history, therefore, while huge effort was put into presenting every imperial act as fully in tune with the divine plan for humankind, winning military victories possessed an overwhelming ideological cachet. Even if an emperor, like the young Honorius, did not campaign in person, his divinely chosen legitimacy could still be made incontestably manifest through the military victories of his generals. The ideological circle was thus closed. A legitimate emperor brought divine support, which meant nothing if not victory on the battlefield. Correspondingly, military success generated a level of political legitimacy which no other imperial action could ever hope to reach.


As a result, claiming military victory—because it was the ultimate sign of divine support—loomed large in the propaganda of all Roman rulers, early or late. From the time of Constantine’s immediate predecessors—the tetrarchic emperors Diocletian and his colleagues—emperors claimed and carefully recorded victory titles: adding adjectival forms of the names of defeated enemies to the lists of their own titles. Thus Parthicus, Alamannicus, Gothicus, and many others were added to Caesar, Augustus, and Pontifex Maximus. The tetrarchs even added numbers after each title to indicate just how many times they or one of their colleagues had defeated a particular opponent. Titles like VII Carpicus—‘victor seven times over the Carpi’—became commonplace. The fixation with numbers declined after Constantine but not that with victory. Whenever an emperor’s name was mentioned, whether they liked it or not, his subjects were confronted with a litany of victory which underscored the legitimate, divinely supported nature of the regime.


A legitimate emperor brought divine support, which meant nothing if not victory on the battlefield.

These occasions were many and varied. Imperial titulature appeared in all official imperial pronouncements: everything from brief letters to formal laws. It also featured on many inscriptions, most of which were dated by the names of consuls, an office which emperors regularly held in the late Roman period. Most of the public life of the empire, whether central and imperial or more local, involved formally shouted acclamations where an emperor’s titles would also figure. Every meeting of the many hundreds of town councils of the late Roman world began with such acclamations (although the minutes of only one have come down to us), as did every formal imperial ceremony, not least those carefully orchestrated moments of arrival—adventus—which greeted an emperor’s entry into any of his cities.


On all these occasions, an emperor’s military record was not only held up to general public view in titulature outline but usually discussed in more detail. Most imperial ceremonies also involved a formal speech of praise—panegyric—in the emperor’s honour, given by some lucky orator who could use it to advance his own career. These speeches could take many forms, but a commonly utilized model devoted a specific section to an emperor’s deeds in war. Even where another form was adopted, some reference to the current emperor’s success in warfare was still de rigueur.


Beyond such specific references, the point was reinforced by a great deal of more general allusion to the fact that divine support had shown itself in the current emperor’s capacity to generate victory. The figure of a submitting barbarian played a huge role in late Roman iconography. A staple of coinage types—often accompanied by an appropriate inscription such as debellator gentes (‘conqueror of peoples’)—was the supine barbarian lying at the bottom of the reverse just to remind everyone that the emperor defeated such enemies as a matter of course. Similarly defeated barbarians of various kinds also appeared regularly in the engraved reliefs, not least on the triumphal victory arches, with which emperors liked to decorate the larger cities of their empire. Submitting barbarians were the natural accompaniment of divinely supported, victorious Roman emperors, spreading the general message that the current regime was ticking all correct ideological boxes.


The ideologies of Roman imperium, only slightly modified by Christianity, thus defined an overarching imperial job description, not directly in terms of dictating a series of day-to-day functions but, at least as important, by setting a series of targets which had somehow to be hit. A legitimate Roman emperor was not a secular ruler in modern understandings of the term but one chosen directly by the supreme creator divinity of the entire cosmos to maintain the key pillars of rational Roman civilization—education, city life, the rule of written law, the welfare of the Christian Church—by which humankind was destined to be brought into line with the divine plan and in return for which the divinity would guarantee that ruler’s success against all comers.


Featured image credit: banner header easter cross sunset by geralt. Public domain via Pixabay.



Looking back at 100 years of flu [timeline]

This year is the centenary of the Spanish influenza pandemic of 1918. It is estimated that more people died as a result of the Spanish flu than in the First and Second World Wars combined. It affected all parts of the population, with no bias towards race, gender, or economic background, and caused horrific losses globally. The majority of deaths were most likely caused by secondary bacterial pneumonia, a pattern also suggested by evidence from the later Asian flu and Hong Kong flu pandemics.


The flu was attributed to Spain by France in 1918, where it was dubbed Spanish influenza. Interestingly, Spain attributed the flu to France at the same time, and other countries suggested still other origins. Whatever its true origin, the name Spanish flu stuck.


However, it was only in 2010, following the swine flu pandemic of 2009, that the industry started universal flu vaccine trials.


Vaccination developments can be effective when combating the two forms of influenza: seasonal and pandemic. However, for seasonal flu vaccines, the effectiveness can range anywhere from a mere 10% up to around 60%, depending on the vaccine strains used. In other words, if the vaccine strain is not a close match to the type of influenza it is vaccinating against, our level of protection is much more compromised. The development of a vaccine that serves against both seasonal and pandemic influenza is of high priority. It is apparent that whilst medicine and technology have developed rapidly since 1918, the risk of another pandemic of influenza still remains a global concern.


Explore the last hundred years of flu, as we mark the Spanish flu centenary, from the four major pandemics to the medical advances along the way, with this interactive timeline.



Featured image credit: Influenza by Demet. CC-BY-2.0 via Flickr.



June 11, 2018

The scientist as historian

Why should a trained scientist be seriously interested in science past? After all, science looks to the future. Moreover, as Nobel laureate immunologist Sir Peter Medawar once put it: “A great many highly creative scientists…take it for granted, though they are usually too polite or too ashamed to say so, that an interest in the history of science is a sign of failing or unawakened powers.”


There is, of course, a whole discipline and a profession called “history of science.” People get doctoral degrees in this discipline, they teach it as members of history faculties or, if they are fortunate, as members of history of science departments. Many of them may have begun as apprentice scientists, but early in their post-graduate careers decided to make the switch. Others may have entered this discipline from the social sciences or the humanities. But here I am speaking to the idea of practicing scientists, who have experienced the pleasures and the perils of actual scientific research, who have got their hands dirty in the nuts and bolts of doing science, who earn a living doing science, turning to the history of their science.


Now, I happen to be an admirer and reader of Medawar’s superb essays on science, but here I think he missed—or chose to ignore—some crucial points.


First, a scientist may be lured into the realm of science past when certain kinds of questions are presented to him or her that can only be answered, or at least explored, by doffing the historian’s hat. Put simply, the scientist qua scientist and the scientist qua historian ask different kinds of questions of science itself. The latter pose and address problems and questions about their science rather than within it.


Second, the scientist can bring to the historical table a corpus of knowledge about his or her science and a sensibility that derives from scientific training which the non-scientist historian of science may not be in a position to summon in addressing certain questions or problems.


As an example of these factors at work, consider the Harvard physicist Gerald Holton’s book Thematic Origins of Scientific Thought: Kepler to Einstein (1973). Here, we find a physicist striving to find patterns of thinking in his discipline by examining the evolution of physical ideas. Toward this end, he undertook historical case studies of such physicists as Kepler, Einstein, Millikan, Michelson, and Fermi, but case studies that were deeply informed by Holton’s background and authority as a professional physicist.



Image credit: ‘Peter Medawar c1969’ by Codebreakers, Makers of Modern Genetics. CC BY 4.0 via Wikimedia Commons.

As another example from the realm of engineering sciences, we may consider David Billington, professor of civil engineering at Princeton, who published a remarkable book, titled Robert Maillart’s Bridges: The New Art of Engineering (1979), on the work of the Swiss bridge engineer Robert Maillart. Billington studied the complete corpus of Maillart’s work on bridges and other structures—his designs, constructions and writings—to illustrate the nature of Maillart’s cognitive style in design: a style Billington summarized by the formula “force follows form”—that is, for Maillart, the form of a bridge, determined by the physical environment in which it would be situated, came first in the engineer’s thinking, and then his analysis of the mechanical forces within the structure followed thereafter. This study was authored by an engineering scientist who brought his deep structural engineering knowledge and scientific sensibility to his task.


There is another compelling reason why and when a working scientist might want to delve into science past. If we take the three most fundamental questions of interest to historians: “How did it begin?”; “What happened in the past?”; and “How did we get to the present state?”, then there are scientists who feel compelled to ask and investigate these questions in regard to their respective sciences. I am talking of scientists who possess a synthesizing disposition, who wish to compose a coherent narrative in response to their desire to answer such broader questions. They would probably agree with the Danish philosopher Søren Kierkegaard’s dictum, “Life must be understood backwards. But… it must be lived forwards.” To understand a science, such scientists believe, demands understanding its origins and its evolution, and framing this understanding as a story. They want to be storytellers—as much for fellow scientists as for historians of science.


Numerous examples can be given. One is the Englishman James Riddick Partington, for decades professor of chemistry at Queen Mary College, University of London. His four-volume History of Chemistry (1960-1970) is extraordinary for the sheer range of its scholarship, but my particular exemplar is his A History of Greek Fire and Gunpowder (1960), an account, spanning some 600 years, of the evolution of incendiaries. This is an account written by a professional chemist who summoned all his chemical authority to his task. Thus, when he talks about the nature of Greek fire (the name given by the Crusaders to an incendiary first used by the Byzantines) he tells us that, of the several different explanations offered about its nature, he believed only one particular theory of its composition agreed with the description of its nature and use. Here we are listening to a chemical-historical discourse which only a chemist can speak with authority. Partington’s remarkable book, intended for the scholarly reader interested in this topic, is uncompromising in its attention to the chemistry of explosives.


There are, however, important caveats to the scientist’s successful engagement in historical studies. He or she must master the tools of historical research and the principles of historiography (that is, the writing of history). The scientist-turned-historian must learn to understand, assess, and discriminate between different kinds of archival sources: what counts as historical “data”; be aware of such pitfalls as what is called “presentism”—the tendency to analyze and interpret past events in the light of present day values and situations; and master the nuances of historical interpretation. In other words, in methods of inquiry, the scientist-turned-historian must be indistinguishable from the formally trained historian of science.


Featured image credit: Galileo Donato by Henry-Julien Detouche. Public Domain via Wikimedia Commons.


The post The scientist as historian appeared first on OUPblog.


Published on June 11, 2018 04:30

Economic inequality, politics, and capital

Economic inequality and campaign finance are two of the hottest topics in America today. The two are typically discussed separately, but they are in fact intertwined.


The rise of US economic inequality that the economist Thomas Piketty chronicles in his renowned book Capital in the Twenty-First Century – starting in the late 1970s and continuing through today – coincides remarkably with the US Supreme Court's decision in Buckley v. Valeo. That decision extended constitutional protection to the spending of vast sums of money to elect candidates for political office. Decided in 1976, Buckley seismically altered the election process; it opened the "money gates," flooding elections with torrents of dollars.


Interestingly, in defending the congressional legislation that regulated both presidential and congressional candidates, Congress bluntly asserted its interest in preventing corruption. Having endured the Watergate scandal and the Nixon impeachment crisis, Congress had witnessed the devastating power of corruption. Unfortunately, the Court partially accepted and partially rejected this interest; it split the baby and, in so doing, produced a grossly deformed law bearing no resemblance to the systemic approach of the original legislation.


Treating campaign financing as free speech, Buckley upheld congressional limitations on contributions to political campaigns but struck down limitations on a candidate's personal and total campaign expenditures. In the Court's view, contributions were only a symbolic expression of support for a candidate, and limiting them advanced Congress's primary purpose of eliminating political corruption or its appearance. Importantly, the Court also invalidated limits on independent expenditures made by others, including Political Action Committees (PACs), reasoning that, because such expenditures are made independently of the candidate and his or her campaign, limiting them did not serve the goal of deterring corruption.


The opinion equated spending money to elect a candidate with speaking, and this laid the foundations for allowing a very small group of people to dominate the entire political system. Considerable data shows that wealthy campaign donors share certain common interests, as one would expect, driven by their common economic situations. Contrary to the warnings of the great political philosopher Michael Walzer in Spheres of Justice, the political sphere is not separated from the economic one. Instead, Buckley has united them, or in fact empowered the economic sphere to subjugate the political sphere.


As elementary democratic theory would suggest, control of the political process translates into control over law. Predictably, a few short years after Buckley was decided, legislative policy began to turn sharply toward property interests, as evidenced by the astonishing and temporally correlated rise in the income share of the top 0.1 percent; indeed, the current extreme concentration of income in the top 0.1 percent has not been seen for a century.


Piketty has shown that, from the First World War through the mid-1970s, income inequality decreased enormously in the US. However, it then suddenly boomeranged and now rivals the levels witnessed in the early twentieth century. The graph below, chronicling the income share of the top 0.1 percent, shows a dramatic ascent in their fortunes shortly after Buckley was decided.



Another decision, several decades later, unleashed another torrent of potential funding. Citizens United v. Federal Election Commission, decided in 2010, extended protection to independent electioneering expenditures made by corporations. The graph also shows another dramatic rise in the income share of the top 0.1 percent beginning in 2010.


The correlations are uncanny. The disparity in income began a slow rise in the late 1970s and rocketed upward beginning in 1980. During the 1980s, income and estate taxes plunged, disproportionately in favor of property interests. Contemporaneously, assaults on the New Deal welfare state began, targeting government regulations, unionization, and the social safety net for the vulnerable. Such "coincident" developments thus look less and less like coincidences. Skeptics will argue that the rise in economic inequality just happened to begin at the same time Buckley was decided. Theories of risk analysis, however, establish that Buckley and the subsequent campaign finance cases have played a tremendous, probably pivotal, role in driving American economic inequality. Regrettably, economic and political inequality are synergistic: allowing money into the political sphere introduces the inequalities of the economic sphere into the political one. The resultant malignant growth in political inequality causes further economic inequality, and so on. And so the deadly spiral begins. This constricting coil chokes both democracy and capitalism, leading to oligarchy and oligopoly.


Piketty describes American economic inequality as the worst among advanced nations. In The Price of Inequality, Nobel laureate Joseph Stiglitz says that the US is approaching the inequality levels of countries such as "Iran, Jamaica, Uganda, and the Philippines."


It is a widespread worry that this economic inequality accounts for the rise of American populism. It may also account for the anemic growth the US has been experiencing. Economic inequality depresses demand: people who don't have money can't buy things, and that is bad both for them and for our consumer-driven economy. Moreover, Stiglitz argues that economic inequality has produced more frequent and severe economic downturns, like the Great Recession of 2008.


The correlation between the campaign finance cases and soaring US debt is sadly predictable. If the campaign finance cases enabled the capture of the electoral system, that capture would manifest itself in changed legislation advancing the interests of the top 0.1 percent: tax cuts, skewed heavily to favor the wealthiest strata, served those interests. However, those tax cuts have also produced skyrocketing government debt. The political and economic systems are interrelated, and theories of political economy must take this into account.


We know that elections matter, but as with any other game, the rules heavily influence the outcomes.


Featured image: Capital Hill by cytis. CC0 Creative Commons via Pixabay.


The post Economic inequality, politics, and capital appeared first on OUPblog.


Published on June 11, 2018 00:30
