Oxford University Press's Blog, page 809

May 24, 2014

Verdun: the longest battle of the Great War

The battle of Verdun began on 21 February 1916 and did not end until December of that year. It was a place of no advance and no retreat, into which national resources continued to pour, extending the slaughter indefinitely. Paul Jankowski, a leading historian of France and author of Verdun: The Longest Battle of the Great War, examines Verdun in a new way, giving French and German sources equal weight. Jankowski asks why Verdun holds such a high status in the memory of the First World War when it sparked no political changes, had an indecisive outcome, and was not the bloodiest battle of the war. He recounts not only the total history of the battle, including leaders, plans, technology, and combat, but also analyzes and stresses the soldiers’ experiences and the impact of the war on national memory.


Why did the battle of Verdun begin?


[Embedded video]



“Verdun: a hell that was all its own.” – Paul Jankowski


[Embedded video]





“Nobody could win…but nobody could afford to lose…” – Paul Jankowski


[Embedded video]





Results of Verdun


[Embedded video]





Paul Jankowski is Raymond Ginger Professor of History at Brandeis University. His many books include Verdun: The Longest Battle of the Great War, Stavisky: A Confidence Man in the Republic of Virtue, and Shades of Indignation: Political Scandals in France, Past and Present.



Subscribe to the OUPblog via email or RSS.


Subscribe to only history articles on the OUPblog via email or RSS.


The post Verdun: the longest battle of the Great War appeared first on OUPblog.




Published on May 24, 2014 00:30

May 23, 2014

‘Storytelling’ in oral history: an exchange, part 2

On 25 April, we shared an excerpt from the conversation between OHR 41.1 contributor Alexander Freund and OHR board member Erin Jessee regarding Freund’s article, “Confessing Animals: Towards a Longue Durée History of the Oral History Interview.” Below, Freund and Jessee continue their exchange, tackling storytelling in non-Western arenas.


Alexander Freund: I fully agree that conducting interviews with open-ended questions that create lots of space for people to tell their stories is an excellent methodology. That way, we develop rapport and get rich and “true” (rather than simply publicly sanctioned) stories. The underlying assumption is grounded in hermeneutics: we will receive a rich text that is as “pure” as possible and can then be interpreted.


I think we can also go beneath these methodological and ethical questions toward fundamental epistemological questions about how the knowledge that we create in an interview is shaped by longue durée processes, and how each interview is another step in learning how to be “right” in the world.


Thus, thinking about the long history of the interview and its connection to confessional practices, the questions about interviewing I have are these: how did we get to the point where, as scientists, we believe it is epistemologically, methodologically, and ethically sound to approach a person (often a stranger) and ask her to “tell me about yourself”? And how have the people we approach come to be more comfortable with one or another kind of responding? Indeed, how, in the first place, have they come to be comfortable with being approached and then giving an account of themselves? And how have we come to a place where, whenever we ask someone, “tell me about your life,” there are basic structural similarities (e.g. narrativity, an account about the self, basic chronology, or frustration about a lack of or expectation of chronology, and personal experiences) in their accounts (at least within specific cultures)?


* * * * *


Erin Jessee: Here again I can appreciate the links you’ve drawn between the practice of oral history and the confessional culture that has developed in many Western nations, especially with regards to the perceived cathartic value of the interview. While I’d like to think that the interviews I’ve conducted in different settings haven’t harmed the people I’ve interviewed, I find our tendency to approach the oral history interview as having similar benefits to narrative therapy troubling. The emotional benefits of the interview, if any, would be incredibly difficult to document, and to my knowledge (and please correct me if I’m wrong), oral historians haven’t taken the time to analyse this in any meaningful way.


And indeed, it fits into more troubling observations about the growing prevalence of storytelling methodologies in the post-conflict nations like Rwanda, Bosnia, and Uganda. While I find storytelling methods are often received as more culturally appropriate in places like Rwanda and Uganda, over the years, I’ve noticed a growing interest in disseminating the outcomes of storytelling-based fieldwork online as a means of educating the public. The recent controversy surrounding Invisible Children’s Kony 2012 mini-documentary — admittedly a poor film that smacks of the white savior industrial complex at its worst — demonstrates that once these materials are made public, there is no way to control how they will be received, replicated, and disseminated by that public going forward. So again, this leaves me wondering whether oral historians can deliver a positive cathartic experience surrounding the interview and its dissemination via digital storytelling platforms. It seems to me that oral historians, and particularly those who work on sensitive subjects, should proceed with caution. And yet simultaneously, it seems everything about current academic and funding climates is pushing us to explore the relevance of digital storytelling and online dissemination for our work.


* * * * *


Alexander Freund: I am interested to hear that there is a “growing prevalence of storytelling methodologies in the post-conflict nations like Rwanda, Bosnia, and Uganda.” Where does this come from? Is that homegrown or a Western import? You say that “storytelling methods are often received as more culturally appropriate in places like Rwanda and Uganda,” but I am always wondering about such claims. Are these backed up by evidence that shows a connection between traditional and current storytelling practices? Is storytelling always just storytelling? Or was there a differentiation of how different kinds of stories got traditionally told? Whenever I hear that something is “culturally appropriate,” I am wondering to what degree this is a colonial fantasy?


Rwandan Children at Volcans National Park by Philip Kromer. CC BY-SA 2.0 via Wikimedia Commons.


* * * * *


Erin Jessee: I think what we’re seeing at present is an attempted blending, by foreign researchers, professionals, and civil society organizations, of homegrown and Western methodologies; by labeling them “storytelling”, the expectation is that they will simultaneously appeal to international audiences, local participants, and, to be blunt, funding agencies. But you’re right to question whether they are culturally appropriate.


What might have been described as “storytelling” in the past in Rwanda, for example, is actually a complex array of practices that included everything from official histories and stories that were carefully preserved and disseminated by ritual specialists to select members of the royal court, to unofficial histories and stories that could be performed for and by the public. From what I’ve observed, these acts of storytelling are vastly different from the storytelling methodologies (including life history and thematic interviews, focus groups, etc.) commonly used by academics and related practitioners working in Rwanda today. But it’s also important to note that one of the many outcomes of colonialism and later, the 1994 Genocide, is that Rwandans have become quite well-versed in narrative therapy, interviews, and so on, even if they aren’t always comfortable participating in them. Just because current storytelling methodologies aren’t “traditional” for Rwanda, strictly speaking, doesn’t mean they can’t be adapted to make them more culturally appropriate by developing a methodological framework in collaboration with Rwandan experts and one’s participants.


But it’s still important to consider expectations — both the researcher’s and the participants’ — when engaging interview and storytelling-based methodologies. Adding to the challenge, many conflicted and post-conflict nations are steeped in transitional justice discourses that, like the interview, are embedded in Western political philosophy and human rights. Bronwyn Leebaw has written an interesting article, “The Irreconcilable Goals of Transitional Justice,” in which she suggests that many of the stated benefits of applying transitional justice mechanisms (such as memorials, trials, and truth and reconciliation commissions) in post-conflict settings are “articles of faith” that claim to facilitate social repair, reconciliation, and so forth, but have never been proven — and indeed often turn out to be false due to the irreconcilable nature of transitional justice’s stated goals. I suspect we’re dealing with a similar phenomenon with regards to the oral history interview, and indeed storytelling more generally. That people should experience catharsis and healing as a result of sharing their experiences during an interview seems to be taken for granted in many parts of the world, and as I’ve mentioned previously, I’m not sure that oral historians have enough solid evidence to support the claim that in the context of an oral history interview, this is indeed the case.


But to return to your paper, it seems the time is ripe for an oral history project that interrogates the foundations of oral history as a (sub-)discipline and its development over time, perhaps by turning the oral history interview on founding scholars and practitioners, as well as analyzing relevant archival materials. And certainly it would be relevant to expand the project to consider the use of interviews in other fields and in cross-cultural settings.


So before we conclude our email exchange, is there anything else you’d like to add?


* * * * *


Alexander Freund: No, I will just say thanks again for your thoughtful response. I agree with all of your points and I am looking forward to a continued online discussion that will hopefully include others interested in this topic.


Alexander Freund is a professor of history and holds the Chair in German-Canadian Studies at the University of Winnipeg, where he is also co-director of the Oral History Centre. He is co-president of the Canadian Oral History Association and co-editor of Oral History Forum d’histoire orale. With Alistair Thomson, he edited Oral History and Photography (New York: Palgrave Macmillan, 2011). He is the author of “‘Confessing Animals’: Toward a Longue Durée History of the Oral History Interview” (available to read for free for a limited time) in the latest issue of the Oral History Review.


Erin Jessee, in addition to serving on the OHR Editorial Board, is an assistant professor affiliated with the Scottish Oral History Centre (Department of History) at the University of Strathclyde. Her research interests include mass atrocities, nationalized commemoration, spiritual violence, transitional justice, mass grave exhumations, and the ethical and methodological challenges surrounding qualitative fieldwork amid highly politicized research settings. Erin is in the final stages of writing a book manuscript (under consideration with Palgrave Macmillan’s Studies in Oral History series) tentatively titled Negotiating Genocide: The Politics of History in Post-Genocide Rwanda.


The Oral History Review, published by the Oral History Association, is the U.S. journal of record for the theory and practice of oral history. Its primary mission is to explore the nature and significance of oral history and advance understanding of the field among scholars, educators, practitioners, and the general public. Follow them on Twitter at @oralhistreview, like them on Facebook, add them to your circles on Google Plus, follow them on Tumblr, listen to them on Soundcloud, or follow their latest OUPblog posts via email or RSS to preview, learn, connect, discover, and study oral history.


Subscribe to the OUPblog via email or RSS.


Subscribe to only history articles on the OUPblog via email or RSS.


The post ‘Storytelling’ in oral history: an exchange, part 2 appeared first on OUPblog.




Published on May 23, 2014 05:30

Morality, science, and Belgium’s child euthanasia law

By Tony Hope




Science and morality are often seen as poles apart. Doesn’t science deal with facts, and morality with, well, opinions? Isn’t science about empirical evidence, and morality about philosophy? In my view this is wrong. Science and morality are neighbours. Both are rational enterprises. Both require a combination of conceptual analysis and empirical evidence. Many, perhaps most, moral disagreements hinge on disagreements over evidence and facts, rather than disagreements over moral principle.


Consider the recent child euthanasia law in Belgium that allows a child to be killed – as a mercy killing – if: (a) the child has a serious and incurable condition with death expected to occur within a brief period; (b) the child is experiencing constant and unbearable suffering; (c) the child requests the euthanasia and has the capacity of discernment – the capacity to understand what he or she is requesting; and, (d) the parents agree to the child’s request for euthanasia. The law excludes children with psychiatric disorders. No one other than the child can make the request.


Is this law immoral? Thought experiments can be useful in testing moral principles. These are like the carefully controlled experiments that have been so useful in science. Consider one. A lorry driver is trapped in his cab. The lorry is on fire. The driver is on the verge of being burned to death. His life cannot be saved. You are standing by. You have a gun, are an excellent shot, and know where to shoot to kill instantaneously. The bullet will be able to penetrate the cab window. The driver begs you to shoot him to spare him a horribly painful death.


Would it be right to carry out the mercy killing? Setting aside legal considerations, I believe that it would be. It seems wrong to allow the driver to suffer horribly for the sake of preserving a moral ideal against killing.


Thought experiments are often criticised for being unrealistic. But this can be a strength. The point of the experiment is to test a principle, and the ways in which it is unrealistic can help identify the factual aspects that are morally relevant. If you and I agree that it would be right to kill the lorry driver then any disagreement over the Belgian law cannot be because of a fundamental disagreement over mercy killing. It is likely to be a disagreement over empirical facts or about how facts integrate with moral principles.




There is a lot of discussion of the Belgian law on the internet, most of it against. What are the arguments?


Some allow rhetoric to ride roughshod over reason. Take this, for example: “I’m sure the Belgian parliament would agree that minors should not have access to alcohol, should not have access to pornography, should not have access to tobacco, but yet minors for some reason they feel should have access to three grams of phenobarbitone in their veins – it just doesn’t make sense.”


But alcohol, pornography and tobacco are all considered to be against the best interests of children. There is, however, a very significant reason for the ‘three grams of phenobarbitone’: it prevents unnecessary suffering for a dying child. There may be good arguments against euthanasia but using unexamined and poor analogies is just sloppy thinking.


I have more sympathy for personal experience. A mother of two terminally ill daughters wrote in the Catholic Herald: “Through all of their suffering and pain the girls continued to love life and to make the most of it…. I would have done anything out of love for them, but I would never have considered euthanasia.”


But this moving anecdote is no argument against the Belgian law. Indeed, under that law the mother’s refusal of euthanasia would be decisive. It is one thing for a parent to say, “I do not believe that euthanasia is in my child’s best interests”; it is quite another to say that any parent who thinks euthanasia is in their child’s best interests must be wrong.


To understand a moral position it is useful to state the moral principles and the empirical assumptions on which it is based. So I will state mine.


Moral Principles



A mercy killing can be in a person’s best interests.
A person’s competent wishes should have very great weight in what is done to her.
Parents’ views as to what is right for their children should normally be given significant moral weight.
Mercy killing, in the situation where a person is suffering and faces a short life anyway, and where the person is requesting it, can be the right thing to do.


Empirical assumptions



There are some situations in which children with a terminal illness suffer so much that it is in their interests to be dead.
There are some situations in which the child’s suffering cannot be sufficiently alleviated short of keeping the child permanently unconscious.
A law can be formulated with sufficient safeguards to prevent euthanasia from being carried out in situations when it is not justified.




This last empirical claim is the most difficult to assess. Opponents of child euthanasia may believe such safeguards are not possible: that it is better not to risk sliding down the slippery slope. But the ‘slippery slope argument’ is morally problematic: it is an argument against doing the right thing on some occasions (carrying out a mercy killing when that is right) because of the danger of doing the wrong thing on other occasions (carrying out a killing when that is wrong). I prefer to focus on safeguards against slipping. But empirical evidence could lead me to change my views on child euthanasia. My guess is that for many people who are against the new Belgian law, it is the fear of the slippery slope that is ultimately crucial. Much moral disagreement, when carefully considered, comes down to disagreement over facts. Scientific evidence is a key component of moral argument.


Tony Hope is Emeritus Professor of Medical Ethics at the University of Oxford and the author of Medical Ethics: A Very Short Introduction.


The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday, subscribe to Very Short Introductions articles on the OUPblog via email or RSS, and like Very Short Introductions on Facebook.


Subscribe to the OUPblog via email or RSS.


Subscribe to only science and medicine articles on the OUPblog via email or RSS.


Image credit: Legality of Euthanasia throughout the world, by Jrockley. Public domain via Wikimedia Commons.


The post Morality, science, and Belgium’s child euthanasia law appeared first on OUPblog.




Published on May 23, 2014 00:30

May 22, 2014

Restoring our innovation “vision”

What are the optimal conditions for commercializing technology breakthroughs? How can we develop a common framework among universities, government, and businesses for generating fundamentally fresh insights? How can the government maximize the public’s return on research and development investments? Innovation is an important topic in both the public and private sectors, yet no one can agree on the best path forward. We present a brief excerpt from Organized Innovation: A Blueprint for Renewing America’s Prosperity by Steven C. Currall, Ed Frauenheim, Sara Jansen Perry, and Emily M. Hunter.


Professor Mark Humayun and his colleagues have created a small device with a big story to tell. It is an artificial retina, whose electronics sit in a canister smaller than a dime, and that literally allows the blind to see. The device also reflects a new approach to innovation that can help America find its way to a more hopeful, prosperous future.


During the late 1980s, Humayun was in medical school preparing to be a neurosurgeon. But his grandmother’s loss of vision put him on a quest to create technology that would help people see again. He switched his focus to ophthalmology, earned his MD, and imagined an implant to send digital images to the optic nerve. But when he asked biomedical engineers to help him develop such a device, he found they spoke a different language.


“I remember trying to tell them I wanted to pass a current to stimulate the retina. I wanted to excite neurons in a blind person’s eyes. They looked at me and said, ‘What?’” he recalls. “I couldn’t communicate what I wanted.” So Humayun did something that remains rare among American researchers: he crossed over into a different discipline. He earned a doctorate in biomedical engineering at the University of North Carolina.


By 1992 Humayun and his team of fellow researchers, then at Johns Hopkins University, had a rudimentary prototype of an artificial retina. But they still had a long way to go. In 2001, Humayun and his key collaborators moved to the University of Southern California to continue their work on the retinal prosthesis. Humayun also helped form a start-up company, Second Sight, which aimed to commercialize the implant. And in 2003 Humayun and his colleagues won a National Science Foundation (NSF) grant to launch a research center to pursue retinal prostheses and other potential medical implants.


The Argus II artificial retina can restore a form of sight to patients with retinitis pigmentosa. Image courtesy of Second Sight Medical Products.


That center—the Biomimetic MicroElectronic Systems program—is part of a broader National Science Foundation initiative called the Engineering Research Center (ERC) program. The ERC program combines government research funding with principles of planning, teamwork, and smart management. And it has quietly achieved remarkable success, returning to the US economy more than tenfold the $1 billion invested in it between 1985 and 2009.


The USC-based ERC prompted researchers to put their basic research projects on a path toward commercial prototypes. It also cultivated connections between academics and private-sector executives, as well as between researchers of different disciplines. And it provided funding for ten years—much longer than the typical academic grant.


During Humayun’s leadership of the ERC, his team hit several milestones. Most visibly, the artificial retina won approval from regulators in Europe and the Food and Drug Administration in the United States, and began changing people’s lives. The BBC broadcast a segment of a once-blind grandmother playing basketball—and making shots—with her grandson. The video went viral.


As Humayun and his team expand into other applications of artificial implants, the possibilities resemble science fiction—for example, treating short-term memory loss, headaches, and depression. In short, Humayun and his ERC team remind us that America can achieve fundamental technology breakthroughs—the sort that improve lives, launch new industries, and create good jobs.


But we must improve our innovation efforts. Global competition has intensified in recent years, as other nations have ramped up their technology commercialization capabilities. At the same time, the U.S. innovation ecosystem has devolved into an unorganized, suboptimal approach. An “innovation gap” has emerged in recent decades, where U.S. universities focus on basic research and industry concentrates on incremental product development. This book aims to give U.S. leaders a blueprint for closing that gap and improving our ability to compete.


Based on the successes of the Biomimetic MicroElectronic Systems center and other ERCs, we have developed a framework we call Organized Innovation. Organized Innovation is a systematic method for leading the translation of scientific discoveries into societal benefits through commercialization. At its core is the idea that we can, to a much greater extent than generally thought possible, organize the conditions for technology breakthroughs that lead to new products, companies, and world-leading industries.


Organized Innovation consists of three pillars, or “three Cs”:



Channeled Curiosity refers to the marriage of curiosity-driven research and strategic planning.
Boundary-Breaking Collaboration refers to a radical dismantling of traditional research and academic silos to spur collective creativity and problem solving.
Orchestrated Commercialization refers to coaxing the different players, including researchers, entrepreneurs, financial investors, and corporations, to work together so that they make innovations real for global use.




If we can recognize the importance of Organized Innovation, we are confident the United States can restore its vision as a technology leader, revitalize its economy and employment levels, and help to resolve pressing global problems. We are confident, in other words, that America can produce many more big breakthroughs like the small device created by Mark Humayun and his colleagues.


Steven C. Currall is Dean and Professor at the Graduate School of Management at University of California, Davis; Ed Frauenheim is Content & Curation Specialist for the Great Place to Work Institute; Sara Jansen Perry is Assistant Professor of Management at the University of Houston-Downtown; and Emily Hunter is Assistant Professor at the Hankamer School of Business at Baylor University. They are the co-authors of Organized Innovation: A Blueprint for Renewing America’s Prosperity, published by Oxford University Press.


Subscribe to the OUPblog via email or RSS.


Subscribe to only business and economics articles on the OUPblog via email or RSS.


The post Restoring our innovation “vision” appeared first on OUPblog.




Published on May 22, 2014 05:30

Make your own percussion instruments

By Scott Huntington




You’d probably be lying if you said that you didn’t spend at least a moderate amount of time during your childhood banging on various and sundry items that happened to be within reach. If we’re being honest, this particular sort of self-expression doesn’t seem to lessen with age; thankfully, our methods tend to get more sophisticated over time.


However, sometimes it’s fitting to go back to the primal days of beating anything that will make noise. Making your own percussion instruments can be a great way to fully understand sound, timbre, and tone. If you have music students or teach a drumline, having your students build their own drums can be a fantastic learning experience for everyone involved.


Before we get into the specifics about how to perfect your own DIY percussion instruments, let’s get some inspiration from some of the big names in homemade instruments.


Learn from the professionals

Most of us have some experience appropriating household items in our music-making endeavors, but the people behind the show STOMP have turned this pastime into an art form. This unique live show’s 20th anniversary is quickly approaching, with tickets for the celebration show in New York City selling fast.


Using everything from trash-can lids to their own bodies, this is as good as it gets when it comes to DIY instruments.


If you’re looking for another great source of inspiration, look no further than Recycled Percussion – a “quintessentially Vegas” experience that boasts of having performed more than 4,000 shows worldwide. Many of the band’s instruments will look quite familiar; they’re no strangers to homemade instruments made from pots, pans, scrap metal and even automobile parts.


Use What’s Around You

Tuning a steel drum with a Peterson strobe electronic tuner. Photo by Andrew Hitchcock. CC BY 2.0 via Wikimedia Commons.


The steel drum, or pan, has its roots in Western Africa, and its sound remains intrinsically linked with the spirit of the Caribbean. It really is a singular sound, and even the most basic steel pans – those made of recycled 55-gallon drums – are capable of producing utterly captivating sounds.


Building and tuning your own steelpan is time-intensive, but certainly not impossible, as this video from SmartyPansMusic demonstrates. Even if you don’t have the time or any spare oil drums lying around, there’s a good chance that you can find some suitable materials not far from where you live. Here are some ideas.


PVC Pipes: Whoever it was that first looked at a PVC pipe and said “I can make music with that” was clearly a visionary. PVC pipes are fairly inexpensive, as far as building materials go, and can produce an almost shocking range of sounds.


To get a sense of what’s possible with PVC pipes, check out this wonderful video from a guy who played some recognizable tunes including “In the Hall of the Mountain King” and “Viva la Vida.” The interesting thing about this type of instrument is that the sound it produces is less about two objects colliding and more about the manipulation of the air within the pipes. The major variables you’ll be playing with are the lengths and widths of the pipes.
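To get a feel for how length sets pitch, here is a rough back-of-the-envelope sketch in Python (our illustration, not something from the video or from Oxford Music Online). It treats each tube as an ideal open pipe whose fundamental frequency is roughly f = v / (2L), ignores end corrections and pipe width, and assumes a speed of sound of about 343 m/s at room temperature; the note names and frequencies are just example targets, so treat the printed lengths as starting points to be trimmed and tuned by ear.

```python
# A minimal sketch for estimating PVC pipe lengths from target pitches.
# Assumption: each tube behaves like an ideal open-open pipe, so the
# fundamental is f = v / (2L); real pipes need end corrections and tuning.

SPEED_OF_SOUND = 343.0  # metres per second, roughly, at 20 degrees C

# Example target notes (equal temperament, A4 = 440 Hz)
TARGET_NOTES = {
    "C3": 130.81,
    "E3": 164.81,
    "G3": 196.00,
    "C4": 261.63,
}


def open_pipe_length(frequency_hz: float) -> float:
    """Approximate length in metres of an open pipe with the given fundamental."""
    return SPEED_OF_SOUND / (2.0 * frequency_hz)


if __name__ == "__main__":
    for note, freq in TARGET_NOTES.items():
        length_cm = open_pipe_length(freq) * 100
        print(f"{note} ({freq:.2f} Hz): cut the pipe to roughly {length_cm:.1f} cm")
```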


Scrap Metal: If you want to create your own STOMP experience at home, it may be time to “rescue” some scrap metal to create your own percussion instruments. Companies like McElroy Metal have sites in many states throughout the U.S., and offer a variety of materials to choose from, in different sizes and shapes.



Scrap metal / offcuts at Toruń Centre for Astronomy, Toruń, Poland. Photograph by Mike Peel. CC BY-SA 4.0 via Wikimedia Commons.


Slum Drummers is a group of Kenyan-born musicians who have brilliantly combined scrap-metal instruments with public outreach; their mission is to spread not only a love of music, but also an awareness of cultural issues such as drug use. From humble beginnings in scrap yards, these musicians and their castaway pieces of metal have gone on to inspire audiences across the world.


Buckets: It really is amazing what can be accomplished with some ordinary household items. If you’re working with a somewhat tighter budget, or a trip to a scrap yard simply isn’t in the cards for you, buckets might just be the way to go.


You can experiment with different materials, such as plastic and metal, as well as with different thicknesses. Buckets are some of the simplest and most utilitarian household items at our disposal, but they can produce a wide array of sounds. You’ll also want to try different methods of striking the buckets; traditional drumsticks are great, but you could try differently sized pieces of wood or even metal to really get the perfect tone.



Street Drummer, by Nicholas Erwin. CC BY-NC-ND 2.0 via nickerwin Flickr.


Don’t Be Afraid to Experiment

No matter what materials you end up choosing, some experimentation will be in order before you get the sound you’re looking for. Modern drum kits work the way they do because of resonant heads and strategically placed air holes. Some trial and error is necessary to see what works best for the materials you’ve chosen.


Experimenting with different types of materials can be a really instructive experience for music students. It’s one thing to have a measure of skill as a musician, but quite another to understand precisely how it is that our favorite instruments create their sound. To that end, homemade drums are a great place to start.


Scott Huntington is a percussionist specializing in marimba. He’s also a writer, reporter and blogger. He lives in Pennsylvania with his wife and son and does Internet marketing for WebpageFX in Harrisburg. Scott strives to play music whenever and wherever possible. Follow him on Twitter at @SMHuntington.


Oxford Music Online is the gateway offering users the ability to access and cross-search multiple music reference resources in one location. With Grove Music Online as its cornerstone, Oxford Music Online also contains The Oxford Companion to Music, The Oxford Dictionary of Music, and The Encyclopedia of Popular Music.


Subscribe to the OUPblog via email or RSS.


Subscribe to only music articles on the OUPblog via email or RSS.


The post Make your own percussion instruments appeared first on OUPblog.




Published on May 22, 2014 03:30

Consequences of the Truman Doctrine

By Christopher McKnight Nichols




On 22 May 1947, President Harry Truman signed the formal “Agreements on Aid to Greece and Turkey,” the central pillars of what became known as the “Truman Doctrine.” Though the principles of the policy were first articulated in a speech to a joint session of Congress on 12 March 1947, it took two months for Truman to line up the funding for Greece and Turkey and get the legislation passed through Congress.


Official portrait of Harry Truman by Greta Kempton


In his March address, Truman reminded his audience of the recent British announcement — a warning, really — that they could no longer provide the primary economic and military support to the Greek government in its fight against the Greek Communist Party, and could not prevent a spillover of the conflict into Turkey. Truman asserted that these developments represented a seismic shift in post-war international relations. The United States, he declared, had to step forward into a leadership role in Europe and around the world. Nations across the globe, as he put it, were confronted with an existential threat. They thus faced a fundamental choice about whether or not states “based upon the will of the majority” with government structures designed to provide “guarantees of individual liberty” would continue. If unsupported in the face of anti-democratic forces, a way of life “based upon the will of a minority [might be] forcibly imposed upon the majority”, a government orientation which he contended depended on “terror and oppression.”


Ultimately, the “foreign policy and the national security of this country,” Truman reasoned, were at stake in the global conflict over democratic governance and thus in the particularly tenuous situations confronting Greece and Turkey.


The fates of the two states were intertwined. Both nations had received British aid, he said. If Turkey and Greece faltered, or “fell” to communists, then the stability of the Middle East would be at risk; thus US assistance also was “necessary for the maintenance of [Turkey’s] national integrity.”


The President therefore made the ambitious proposal that was elemental to his “doctrine”: thereafter “it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.” Truman requested $400 million in assistance for the two nations, in a move that many at the time — and most subsequent scholarship — depicted as marking a sort of de facto onset of the Cold War.


While transformative, the precise significance of Truman’s speech is a subject of debate. As historian John Lewis Gaddis has argued, “despite their differences, critics and defenders of the Truman Doctrine tend to agree on two points: that the President’s statement marked a turning point of fundamental importance in the history of American foreign policy; and that US involvement in the Vietnam War grew logically, even inevitably, out of a policy Truman thus initiated.”


However, Truman’s speech, and the authorization of funding on which its principles depended, marked neither a subtle nor a decisive shift toward the strategy of containment, contrary to what many later politicians and scholars have surmised. As Martin Folly observes in a superb piece on Harry Truman in the Oxford Encyclopedia of American Military and Diplomatic History: “It is easy to see the Marshall Plan for European economic recovery as following directly from the Truman Doctrine.” Folly goes on to note that this association is wrong. There is little evidence to support a claim that Truman or his powerful then-Undersecretary of State Dean Acheson conceived of the Doctrine as a first step toward, for instance, the measured but firm anti-Soviet resolution shown in the US response to the Berlin Crisis (in the form of the Berlin airlift), nor was the Doctrine directly linked to the Marshall Plan as it developed in the year to come. However, as Folly suggests, the Doctrine “reflect[s] Truman’s own approach to foreign affairs as it had evolved, which was that the United States needed to act positively and decisively to defend its interests, and that those interests extended well beyond the Western Hemisphere.”


The major ideological shift represented by the Truman Doctrine and the aid to Greece and Turkey was its simultaneous rejection of the long-standing injunction to “steer clear of foreign entanglements” and its embrace of a greatly expanded sphere-of-influence logic. For the first time in US history, the nation’s peacetime vital interests were extended far outside of the Western Hemisphere to include Europe and, indeed, much of the world. In Truman’s words, “it must be the policy of the United States to support free peoples who are resisting attempted subjugation by armed minorities or by outside pressures.”


This new logic of pro-active aid and intervention to support “vital interests” (always hotly contested, continually open to interpretation) worldwide undergirds the ways in which the United States continues to debate the nation’s internationalist as well as unilateralist options abroad in Ukraine, Libya, Syria, Afghanistan, Nigeria, and elsewhere.


Wherever one stands on debates over the “proper” US role in the world and contemporary geopolitical challenges, the antecedents are clear. After 1947 American national security—and foreign relations more broadly — were no longer premised on a limited view of protecting the political and physical security of US territory and citizens. Instead, the aid agreement signed on 22 May 1947 clinched a formalized US commitment to (selectively) assist, preserve, intervene, and/or reshape the political integrity, structures, and stability of non-communist nations around the world. The consequences of this aid agreement were profound for the early Cold War and for the shape of international relations in the world today.


Christopher McKnight Nichols is a professor at Oregon State University and a Senior Editor for the Oxford Encyclopedia of American Military and Diplomatic History.


Subscribe to the OUPblog via email or RSS.


Subscribe to only history articles on the OUPblog via email or RSS.


Image: Official Presidential Portrait painted by Greta Kempton. Public Domain via Wikimedia Commons.


The post Consequences of the Truman Doctrine appeared first on OUPblog.




Published on May 22, 2014 02:30

What role does symmetry play in the perception of 3D objects?

By Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman




The most general definition of symmetry is self-similarity: that one part of an object, pattern, signal, or process is similar, or more or less identical, to another. According to this definition, the complete absence of symmetry is equivalent to perfect randomness, so symmetry is another name for redundancy. This makes the connection between symmetry and Shannon’s information theory explicit. The presence of symmetry also means that engineering and biological signals can be “compressed”; the redundancy inherent in them can be reduced or even removed.
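To make the redundancy point concrete, here is a small Python sketch (our illustration, not the authors’ model or code): a mirror-symmetric contour can be stored as just one half plus the known axis of symmetry, and the full shape can then be reconstructed exactly, which is the sense in which symmetry makes a signal compressible. The toy contour and the choice of a vertical axis at x = 0 are assumptions made purely for the example.

```python
# A minimal sketch of "symmetry as redundancy": store only half of a
# mirror-symmetric 2D contour and rebuild the rest by reflection.

from typing import List, Tuple

Point = Tuple[float, float]


def compress_mirror_symmetric(points: List[Point]) -> List[Point]:
    """Keep only the points on or to the right of the mirror axis x = 0."""
    return [(x, y) for (x, y) in points if x >= 0]


def reconstruct(half: List[Point]) -> List[Point]:
    """Rebuild the full contour by reflecting the stored half across x = 0."""
    mirrored = [(-x, y) for (x, y) in half if x > 0]  # points on the axis map to themselves
    return half + mirrored


if __name__ == "__main__":
    # A toy mirror-symmetric outline (five points of a simple "vase" profile).
    contour = [(-2.0, 0.0), (-1.0, 1.0), (0.0, 2.0), (1.0, 1.0), (2.0, 0.0)]
    half = compress_mirror_symmetric(contour)
    full = reconstruct(half)
    print(f"stored {len(half)} of {len(contour)} points")
    print(sorted(full) == sorted(contour))  # True: nothing was lost
```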


[Animation 1]


Symmetry is ubiquitous, as well as important, in our natural environments. There are several types of symmetry. The human body is mirror-symmetrical: one half is a reflection of the other. The two halves are never perfectly identical, but they usually are nearly so. The same is true of the bodies of almost all animals, simply because mirror symmetry facilitates effective locomotion. A person could not walk and run along a straight line if his body were not mirror-symmetrical. A bird could not fly along a straight trajectory, and a fish or reptile could not swim along a straight trajectory, if it were not mirror-symmetrical. Flowers are characterized by rotational symmetry, and many plants are characterized by translational symmetry as well as by rotational symmetry. Man-made objects are usually made symmetrical because of the function they serve. A typical chair is mirror-symmetrical and a screwdriver is rotationally symmetrical. A completely asymmetrical object would most likely be dysfunctional. Considering that most things in our environment are symmetrical, one would think that our visual system should, at the very least, “know” about symmetry, and hopefully make good use of it. Symmetry is important not only because “it is there”, but also because the presence of symmetry implies that objects have shape and that scenes have structure.


Recently, we have been able to collect empirical evidence showing that the human visual system (one can also say, the human brain) uses symmetry to see 3D objects and scenes veridically (as they are). Symmetry is a natural, powerful predilection of our mind. It forms a large part of our a priori knowledge about the animate and inanimate things in the world around us. We are born with the concept of symmetry already in our minds. Why not? Symmetry is a mathematical concept, something that exists without any experience with the physical world. If our DNA contains information about the symmetry of our brain, why shouldn’t the brain know about symmetry, whether it is its own symmetry, or the symmetry of the real 3D objects and 3D scenes with which the brain’s owner will interact?


[Animation 2]


Our computational models show that symmetry is indispensable for veridical vision. It is also indispensable for avoiding the horrendous curse called computational intractability. Recovering a 3D shape from a single 2D retinal image would, without symmetry, require examining what are often called an “astronomically” large number of possibilities. How large? How about 10^10,000,000, a number starting with 1 followed by 10 million zeros. Considering the fact that the number of atoms in the entire Universe is estimated to be 10^80, a 1 followed by only eighty zeros, astronomers should probably start calling exceptionally large numbers “visually” rather than “astronomically” large. The visual system, by using symmetry, does not need to explore even a minuscule fraction of this huge number of possible 3D interpretations. Symmetry, and only symmetry, allows a human, or a robot, to select the right 3D interpretation on its first attempt.
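To spell out the comparison (our arithmetic, using only the two figures quoted above), the unconstrained space of 3D interpretations outnumbers the atoms in the Universe by an almost equally unimaginable factor:

\[
\frac{10^{10{,}000{,}000}}{10^{80}} = 10^{10{,}000{,}000 - 80} = 10^{9{,}999{,}920}.
\]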


Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman are the authors of Making a Machine That Sees Like Us. Zygmunt Pizlo is a professor of Psychological Sciences and of Electrical and Computer Engineering at Purdue University. Yunfeng Li is a postdoctoral fellow at Purdue University. Tadamasa Sawada is a postdoctoral researcher in the Graduate Center for Vision Research at SUNY College of Optometry. Robert M. Steinman devoted most of his scientific career, which began in 1964, to sensory and perceptual process, heading this specialty area in the Department of Psychology at the University of Maryland in College Park until his retirement in 2008.


Subscribe to the OUPblog via email or RSS.


Subscribe to only brain sciences articles on the OUPblog via email or RSS.


Animations created and provided by Yunfeng Li and Tadamasa Sawada.


The post What role does symmetry play in the perception of 3D objects? appeared first on OUPblog.




Published on May 22, 2014 01:30

All (European) politics is national

By Jean Pisani-Ferry




At the end of May, 400 million EU citizens will be called to participate in the second-largest direct election in the world (the first being held in India). Since they last went to the polls to elect their parliament, in 2009, Europe has gone through an acute crisis that plunged several countries into a deeper recession than any peacetime shock they had suffered in a century. In several of the continent’s regions, more than a fourth of the labour force is unemployed. Over the last five years, the crisis has exposed many weaknesses in the design of the euro area, and there has been no shortage of heated policy debates about the nature of the systemic reforms required. In the same vein, both the European Central Bank’s response and the pace of fiscal consolidation have been matters of ongoing controversy.


Against this background, one could expect political parties to offer clearly defined alternative choices for the future of Europe and citizens to participate in the elections en masse – even more so because the next parliament will have a say in the selection of the coming European Commission, the EU’s executive body. Expectations, however, are uniformly grim. Last time the election was held, turnout was only 43%. It is anticipated that it will be low again and that fringe national parties will be significant winners in the election. Throughout Europe, mainstream politicians are preparing for a setback. Some foresee a disaster.


There are three reasons for this paradox. First, citizens do not grasp what the European parliament is about. It is, in fact, an active and thorough legislator. Over the last five years, it has, for example, been an energetic player in the elaboration of a regulatory response to the global financial crisis and a staunch protector of European consumers. Recently it has played a major role in the creation of a banking union in the EU. But it is rarely the place where the debates that define the political agenda and capture the citizens’ attention are held.




Second, dividing lines within parliament are often national rather than political. On industrial policy, trade, and regulation, as well as in relationships with neighbours, which country you belong to matters as much as which camp you are from. Consequently, issues are often settled with a compromise that blurs the separation between left and right. In addition, since virtually all the media are national and generally pitch the debate as opposing the national capital and ‘Brussels’ or another capital, voters have no perception of the sometimes very real differences between left and right.


Third, the fundamental European debate is of a constitutional nature and for this reason it cannot be settled by the parliament. This is true of the key issues that arose during the euro crisis: whether to rescue countries in trouble, whether to mutualise public debt, whether to change the decision rule for sanctions against excessive budget deficits, whether to go for a banking union. Each time the big question was, what do Germany, France and other Eurozone countries think? It was not what does the European parliament think, because almost by definition the parliament has always been in favour of more Europe.


These three obstacles to a pan-European political debate explain why fringe anti-EU parties, like the UK Independence Party (UKIP) or the French National Front, generally do well in the European parliament elections. Their simple message is that European integration is the wrong way to go and that national governments should repatriate powers from Brussels. As the scope for disagreement between the two main centre-right and centre-left parties is much narrower than the range of views amongst voters, voters who have sympathy for the anti-EU cause know why and for whom they should vote, while those who are in favour of European integration do not have many reasons to vote, because the mainstream parties’ platforms are largely interchangeable.


To overcome these obstacles, a recent reform has stipulated that when appointing the European Commission’s president, the heads of state and government should take into account the result of the elections to the European parliament. In principle, therefore, the next European Commission president will belong to the party holding the (relative) majority in the European parliament. Furthermore, the main parties have already nominated their candidates for the European Commission presidency. This politicisation is meant to signal to citizens that their vote matters and will help determine the roadmap for the next five years. Unfortunately, however, it is not clear whether mainstream parties will be able to formulate policy platforms that are defined enough to attract voters.


Does it matter? After all, Europe’s situation is not unique. In the US, participation rates in the mid-term elections (when the presidency is not on the ballot) are generally well below 50%. They are also rather low in other federations like India or Switzerland. As Tip O’Neill, the former speaker of the US House, used to say, “all politics is local”, and this affects voters’ behaviour. Europe, in a way, is awkward, but normal: the EU does the legislation, but politics is national.


This is, however, too complacent a reading of reality. At a time when countries participating in the euro are confronted with major choices, the risk for Europe is to emerge from the elections with weak legitimacy (because of the turnout) and a politically distorted parliament (because of the strong showing of the fringe parties). This would make governments wary of bold choices and could result in an unhealthy stalemate. It is not yet time for the EU to become boringly normal.


Jean Pisani-Ferry currently serves as the Commissioner-General for Policy Planning to the Prime Minister of France. He is also Professor of Economics and Public Management at the Hertie School of Governance in Berlin. Until May 2013 he was the director of Bruegel, the Brussels-based economic think tank he contributed to founding in 2005. He is the author of The Euro Crisis and Its Aftermath.


Subscribe to the OUPblog via email or RSS.


Subscribe to only political sciences articles on the OUPblog via email or RSS.


Image credit: The European Parliament, Brussels. Photo by Alina Zienowicz. CC-BY-SA-3.0 via Wikimedia Commons


The post All (European) politics is national appeared first on OUPblog.




Published on May 22, 2014 00:30

May 21, 2014

Small triumphs of etymology: “oof”

By Anatoly Liberman




There is an almost incomprehensible number of English words for money and various coins. Some of them, like shilling, are very old. We know (or we think that we know) where they came from. Other words (the majority) surfaced as slang, and our record of them seldom goes beyond the early modern period. They belonged to thieves and counterfeiters’ vocabulary; outsiders were not supposed to make sense of all those boodles, crocards, firks, prindles ~ pringles, and wengs. Words are like people, and it is no wonder that some upstarts make their way into high society and become respectable. Among them are, for instance, buck “dollar,” quid “sovereign; guinea” (such a strange Latinism!), and stiver “a small coin.” Coins have always circulated far outside their countries of origin (Dutch stiver is one of them). Cant words, along with money in general, discovered the joys of globalization long before our time. The international community of criminals accepted them, and that is why so much “monetary slang” is “foreign-born” and why its etymology puzzles historical linguists.


A word lover can enjoy names without knowing their origin: they are like pets (mongrels are often much friendlier than purebreds). Who can resist the charm of scittick ~ scuttick ~ scuddick ~ scurrick and their cousins (or perhaps look-alikes) scat and squiddish? Boar, grunter, hog, and the afore-mentioned buck—aren’t they impressive-looking beasts? Money, as Mowgli said, is a thing that changes hands and doesn’t become warmer in the process. Very true, but we are word hunters, not merchants, and today’s story is about the word oof, British slang for “money.” Its origin has been guessed, and there is every reason to be proud of the result.


I once touched on the word oof but, unfortunately, coupled it with another word whose provenance, although undiscovered, is quite different. Also, that post appeared on September 9, 2009, and hardly anyone remembers it. However, I do, because for my erroneous hypothesis I was hauled over the coals in a not very courteous manner, as Skeat would have put it (see the previous post), and the burns still smart. Below I will repeat part of what I said five years ago, for the context of the present essay is quite different from the one written in 2009.


The guessing game was played by amateurs. They were inspired by the famous Osborne trial (1892; of course, its fame faded long ago), at which the word oof was used more than once; this circumstance explains the date of the first letters on this subject sent to Notes and Queries (1893). By that time oof had been around for several decades but needed a push from outside to become public property. Some people fell into a trap. They knew the phrase oof bird “an imaginary provider of wealth.” Most likely, the phrase emerged as a joke and was coined under the influence of French œuf “egg,” with reference to the bird that lays golden eggs. Quite naturally (journalists like to say unsurprisingly in such cases), they concluded that oof is the English pronunciation of œuf. It did not bother them that no English speaker, however atrocious his or her accent might be, would turn œuf (even if it is a solid golden œuf) into oof.


Lots of oof.


On the other hand, there was a man called William Hoof, a “wealthy railway contractor, who died in 1855, leaving upward of half a million sterling.” Hoof with its h dropped would have easily yielded oof. In the middle of the nineteenth century and much later, such wild conjectures filled the pages of many popular journals. But there were others, whose ideas were not only sensible but also correct. A correspondent to Notes and Queries, who identified himself by his four initials (S. J. A. F.; I am sure many readers knew who hid under those letters), remarked that in Low German there was the slang word ofti[s]ch “money.” “It has descended to its present low estate from certain semi-Bohemian circles.” He also cited the word oofless “penniless.” Soon after him Willoughby Maycock pointed out that the word in question was of Jewish origin and had its roots in London. Its etymon, he repeated, was the phrase ooftisch “on the table”: the stakes had to be put on the table before the game began. The word “was introduced… by the facetious columns of the Sporting Times, but not invented by that organ.” Money on the table would be an approximate analog of Engl. cash on the nail and especially of Russian den’gi na bochku “money on the barrel” (money on the barrel has some currency in English, especially, as it seems, in American English).


The great Walter Skeat found the noun spinuffen “money” (plural) in a Westphalian dictionary and derived oof from uffen (1899). Strange as it may seem, he disregarded (more probably, missed) the explanation offered six years earlier. His note prompted James Platt, Jun., a most remarkable student of word origins, to write in his rejoinder that explaining oof without reference to its full form ooftisch was as certainly courting failure as attempting the derivation of bus and cab without taking into account omnibus and cabriolet. Skeat rarely conceded defeat gracefully and wrote to Notes and Queries again. No, he was not at all sure that spinuffen and ooftisch were unrelated, “for the latter, whether it represents ooft-isch or ooft-ich, may be suspected to be formed upon the base ooft.” He was wrong and never tried to defend his etymology again. The first edition of the OED recognized the Jewish ooftisch derivation, though, as is the case with pedigree (see again the previous post), without absolute certainty. All the later dictionaries followed the OED (in lexicographical work, followed means “copied”). Be that as it may, oof does seem to go back to ooftisch.


A small triumph! One insignificant “slangism” has emerged from its obscurity, but this is how the science of etymology progresses nowadays: by infinitesimal steps. Unlike “regular” words, slang comes from popular culture and the underworld; it is a product of the ludic spirit. In that area, researchers can seldom base their conclusions on precedent. Phonetic correspondences play little or no role in the development of slang. Words of allegedly Jewish origin are particularly dangerous, for time and again Hebrew and Yiddish are conjured up to account for coinages (particularly when it comes to crime and swindling) that have nothing to do with the life and language of the Jews. English slang depends on Yiddish to a much smaller extent than does German. But ooftisch lost its second element in England, so oof can be called English, especially because it rhymes with hoof (the oo in its source sounded like Engl. awe). Dictionaries mark oof as British slang. However, the word was not unknown in the United States, and The Century Dictionary has a good American citation.


Anatoly Liberman is the author of Word Origins And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears on the OUPblog each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.” Subscribe to Anatoly Liberman’s weekly etymology articles via email or RSS.


Subscribe to the OUPblog via email or RSS.


Subscribe to only language articles on the OUPblog via email or RSS.


Image credit: UK coins by Karen Bryan. CC BY-ND 2.0 via Flickr.


The post Small triumphs of etymology: “oof” appeared first on OUPblog.




Related Stories: Little triumphs of etymology: “pedigree” • Henry Bradley on spelling reform • Unsung heroes of English etymology: Henry Bradley (1845-1923)
Published on May 21, 2014 05:30

The politics of political science

By Christopher Hood, Desmond King, and Gillian Peele




Why are there more salaried academic political scientists than salaried politicians in Britain today? There are well over 2,000 academic members listed in the current directory of the UK’s main political-science association (the PSA) – more than twice the number of elected members of the Westminster, devolved and European parliaments put together. But it has not always been so; a century ago, all of Britain’s ‘politics’ academics could have comfortably fitted into a small room.


Since then, and particularly in the later twentieth century, academic political science has grown spectacularly. Indeed, the United States’ main political science association (APSA) is running out of cities with sufficient hotel space to host its annual conference, now attended by over 8,000 political scientists. As the subject has grown, its content has changed too: the empirical study of government organization has all but disappeared from mainstream political science research and teaching, international relations and electoral studies have developed into the two largest and most powerful subfields, and a notable drive toward greater scientific ‘professionalization’ has brought ever more abstraction, quantification, and a puzzle-solving rather than knowing-about style of scholarship.


So what accounts for this remarkable historical development? Political scientists themselves can be forgiven for putting it down to the inherent intellectual fascination and self-evident importance of their subject and to the value of its discoveries, such as the median voter theorem and collective action theory. But that is hardly an uncontested view; after all, the US Congress last year (after a long campaign by Republican Senator Tom Coburn of Oklahoma) narrowly voted to ban federal funding of any political science research projects not deemed essential for promoting the national security or economic interests of the United States – a restriction that was removed by a spending bill passed in January of this year. But even if Coburn and his political colleagues who voted for last year’s ban are wrong to think their country can do very well without most political-science research, it still prompts the question of why the academic study of politics should be so much more fascinating and important today than it was a century ago as to drive this notable expansion.


Oxford


It seems obvious that part of the answer lies in the development of mass higher education over the past century, and the professorial population explosion that has gone along with it. But that does not of itself explain why political science grew as a field of research and teaching not just absolutely but relative to other subjects, such as languages, history or classics. A century ago in Oxford, Modern History was one of the major launch-pads for those seeking careers in government, politics and public service, but it began to take a purer academic view of its mission, leaving a space into which PPE could move. And that was not an isolated development, as political science became more common as an educational background for politicians and bureaucrats. Indeed, at least one of Senator Coburn’s Republican allies in his battle against federal funding of political science research (Senator Jeff Flake of Arizona) has a graduate degree in that very subject.


What of the future? Will the rise and rise of political science over the last hundred years continue with a similar rate of growth in the present century, such that by 2100 there might be 250,000 or so academic political scientists facing the impossible task of finding contiguous hotel space for APSA’s annual conference? Will the growth slow down and change into some sort of stability? Or must what goes up come down, perhaps as a result of the kind of political reaction illustrated by the efforts of Senator Coburn and his colleagues?


Time will tell. Even after a century of such spectacular growth, there is no more agreement about what – or who – political science teaching and research is for than there was a hundred years ago. Is it curiosity-driven puzzle-solving science aimed at an international peer-reviewing professoriate? Is it to serve the practical needs of governments and bureaucracies, for example in promoting national security or national economic interests, as the UK’s ESRC now tends to expect? Is it to serve the citizenry and ‘civil society’ at large, as Oxford tried to do in the 1880s, with extension classes in ‘political science’, aimed at (then disenfranchised) women, trade unionists and working-class students interested in politics and political activism? Can political science continue to live with these contradictory views of its mission for another hundred years? Will one of those visions win out against the others? Or will the subject fragment further, for example by partly turning inwards to a purer academic orientation, as Modern History did in Oxford a hundred years ago, and partly developing into more applied leadership training, such as that offered by Oxford’s new Blavatnik School of Government and other institutions like it? All of these are possibilities. But if the past is anything to go by, a stable equilibrium seems an unlikely future for this subject.


Christopher Hood, Desmond King, and Gillian Peele are the editors of Forging a Discipline: A Critical Assessment of Oxford’s Development of the Study of Politics and International Relations in Comparative Perspective. Christopher Hood, FBA is Gladstone Professor of Government at the University of Oxford and Fellow of All Souls College, Oxford. Desmond King, FBA is Andrew W. Mellon Professor of American Government at the University of Oxford and Fellow of Nuffield College, Oxford. Gillian Peele, FRHistS is University Lecturer in Politics at the University of Oxford and Tutorial Fellow of Lady Margaret Hall, Oxford.


Subscribe to the OUPblog via email or RSS.


Subscribe to only politics articles on the OUPblog via email or RSS.


Image credit: Oxford from above at sunset. © Andrea Zanchi via iStockphoto.


The post The politics of political science appeared first on OUPblog.




Related Stories: A conversation on economic democracy with Tom Malleson • Tinderbox drenched in vodka: alcohol and revolution in Ukraine • We’re all data now
Published on May 21, 2014 03:30
