Oxford University Press's Blog, page 906

September 5, 2013

The lark ascends for the Last Night

By Robyn Elton




On Saturday 7 September 2013, lovers of classical music will gather together once again for the final performance in this year’s momentous Proms season. Alongside the traditional pomp and celebration of the Last Night, with Rule, Britannia!, Jerusalem, and the like, we are promised a number of more substantial works, including Bernstein’s Chichester Psalms and the overture to Wagner’s The Mastersingers of Nuremberg. I suspect the crowning glory for many listeners, however, will be Ralph Vaughan Williams’s The Lark Ascending, performed by Nigel Kennedy—one-time enfant terrible of the violin world.


Perhaps not surprisingly, Kennedy’s earlier performance in this year’s Proms season could hardly have been less conventional. His late-night Prom with the Palestine Strings and members of the Orchestra of Life revisited Vivaldi’s The Four Seasons—the work he recorded to great acclaim nearly 25 years ago—but with a twist: this time the musicians added improvised links between the sections, fusing the Italian Baroque with jazz and microtonal Arabic riffs. Given this precedent, along with Kennedy’s reputation, I can’t help wondering what he has planned for his Last Night performance.


There’s certainly a lot of scope for personal interpretation within The Lark Ascending. Although Vaughan Williams is specific about his requirements on the page, the solo writing is calling out for a violinist to breathe life into it—to make the lark ascend, as it were. It must sound natural, almost as if it were improvised (like the lark’s song), leaving the door open for all kinds of interpretive inventiveness. In fact, I’d say that this is one of the main challenges for the performer, because to play this music ‘straight’ would be to strip it of its character. The composer makes his intentions in this area clear from the outset, with the opening cadenza notated entirely freely, without barlines and with senza misura marked not once but twice.




When I was 16, and again a few years later, I was lucky enough to have the opportunity to perform The Lark Ascending with orchestra—a rare chance for a young performer, and an experience I haven’t since repeated. The freedom of the work’s opening was exhilarating, yet in my case somewhat terrifying. You really are left hanging, when the already sparse orchestral accompaniment (just a held chord in the strings) drops out, leaving the soloist stranded at the extreme end of the violin’s upper range. With no orchestral support, there really is nowhere to hide, but on the other hand, you know you can take your time and everyone will just have to wait. For me, there was no way to practise exactly how that part would turn out on the night—no point in counting imaginary beats or planning the precise amount of bow to save. It’s all in the moment, and you can decide what you want to do at that very point in time, depending on how the mood takes you, the atmosphere in the hall, or even what your fingers feel like doing: it’s as if time is suspended. I can imagine that’s something that appeals to Nigel Kennedy, and I’m sure he’s on the exhilarated rather than terrified end of the spectrum.


After that initial cadenza, I almost felt like my work was done: I could relax and enjoy the sumptuous melodies to come (Vaughan Williams was especially kind in his first main melody—nothing too tricky there). Even the double stopping at the Largamente, the alternating parallel fifths, and the seemingly never-ending runs and twiddles seem relatively harmless once you’ve conquered the opening. Of course, the cadenza returns at the end of the work (as well as briefly in the middle), and the soloist is once again left to wrap things up on their own. I just hope the excitable Last Night audience will be able to hold that moment of silence for long enough before bursting into rapturous applause.


Robyn Elton is Senior Editor in the printed music department at Oxford University Press and an active local violinist.


In the more than fifty years since his death, Vaughan Williams has come to be regarded as one of the finest British composers of the 20th century. His catalogue is particularly wide-ranging, encompassing choral works, symphonies, concerti, and opera. His searching and visionary imagination, combined with a flexibility in writing for all levels of music-making, has meant that his music is as popular today as it ever has been.


Subscribe to the OUPblog via email or RSS.


Subscribe to only music articles on the OUPblog via email or RSS.


Oxford Sheet Music is distributed in the USA by Peters Edition.


Image credit: Violin via Shutterstock.


The post The lark ascends for the Last Night appeared first on OUPblog.





Adapting Henry V


By Gus Gallagher




In the Autumn of 2011 I found myself at something of a loose end in the beautiful city of Tbilisi, Georgia, working with the Marjanishvili Theatre there on a production of Captain Corelli’s Mandolin. Unsure of what my next project might be, my attention turned to an old love, Shakespeare’s Henry V. Having long been intrigued by both the story and the title character, I set about reading the text afresh. For perhaps the first time, I realised I no longer sought to play the lead role myself, but found myself still driven to have the story told in a fresh, vibrant, immediate fashion.


Prior to setting out for Georgia, I’d been involved with a five-man production of Doctor Faustus during which I had been struck by how well the classical verse seemed to lend itself to the more intimate company structure. In previous years I had also been a member of a small-cast version of Macbeth, which had likewise seemed to benefit from the experiment. These earlier experiences must have been in my mind when I started thinking about how I might stage Henry V.



Morgan Philpott in Creation Theatre’s production of Henry V


At first, I was curious to see if it might be possible to tell the story using only five actors, and was interested to see that it was. However, as I took another swing at it, I began to distil the idea further. It became apparent to me that in most key scenes there were three distinct ‘voices’. These, I thought later, might more often than not be termed the petitioner, the advocate, and the judge. The petitioner often seemed to pose ‘The Question’ at the top of the scene (such as The Archbishop of Canterbury in I.2), whilst the advocate rallies either for or against his or her cause (such as Exeter in the same scene). Finally, each key scene seemed to have a singular figure who would judge the outcome and lead the way onwards (Henry).


Obviously, it was not possible to achieve a wholesale three-man cut of the text without considerable and audacious changes to the original — mostly in the form of character amalgamations, slight re-ordering or outright edits — but I believe the integrity of the piece as a whole, and crucially the story, remain intact.


Having gladly agreed to an application for performance rights from Creation Theatre in Oxford, I then stood back completely from the process of production. What I was intrigued to find was how well the three-man format seemed to bring out the comedy of the piece. The pace, also, seemed more in tune with what I believe was Shakespeare’s intent. Of course, both these factors are entirely to the credit of the director, cast and creative team, but I was pleased to see them both used so effectively in a production in which I played a modest role.


Gus Gallagher trained at The Guildhall School of Music and Drama. After ten years as an actor, playing such roles as Romeo, Coriolanus, Mercutio, Macduff, and Dr. Faustus, he turned his attention to writing. The Creation Theatre adaptation of William Shakespeare’s Henry V is Gus’s first produced work. He is currently working on a piece about the life and times of King William IV, as well as a play about The Jarrow March of 1936. Oxford World’s Classics are sponsoring the production, which is on at Oxford Castle Unlocked until September 14.


For over 100 years Oxford World’s Classics has made available the broadest spectrum of literature from around the globe. Each affordable volume reflects Oxford’s commitment to scholarship, providing the most accurate text plus a wealth of other valuable features, including expert introductions by leading authorities, voluminous notes to clarify the text, up-to-date bibliographies for further study, and much more. You can follow Oxford World’s Classics on Twitter, Facebook, or here on the OUPblog.


Subscribe to the OUPblog via email or RSS.


Subscribe to only literature articles on the OUPblog via email or RSS.


Subscribe to only Oxford World’s Classics articles on the OUPblog via email or RSS.


Image credit: Morgan Philpott in Henry V. Image copyright Creation Theatre Company. Photography by Richard Budd. Do not reproduce without permission.



The post Adapting Henry V appeared first on OUPblog.





September 4, 2013

Monthly etymology gleanings for August 2013, part 2

By Anatoly Liberman




My apologies for the mistakes, and thanks to those who found them. With regard to the word painter “rope,” I was misled by some dictionary, and while writing about gobble-de-gook, I was thinking of galumph. Whatever harm has been done, it has now been undone and even erased. All things considered, I am not broken-hearted, for over the years I have written almost 400 posts and made considerably fewer mistakes. And now to business.


The letters of the alphabet.


One of the questions related to this topic was answered in the comments. Although alphabetical writing attempts to render pronunciation and is therefore from a historical point of view secondary, we hardly know more about its origin than about the origin of language. Every ancient alphabet appears to have been borrowed, but the source of the initial idea remains hidden. According to a credible surmise, A is a natural beginning because it renders or represents the most elementary sound (an open mouth and a yell), but what are we supposed to do with the rest of the sequence?


When people decide that they need more letters, they traditionally add them to the end of the alphabet. This is what the Greeks and the modern Scandinavians did (but it is amusing that Icelandic, Norwegian, Danish, and Swedish letters follow in a different order—so much for the Pan-Scandinavian unity). Some letters drop out, as evidenced by the history of English and Russian, among others. However, examples of an order different from the one familiar to us are not far to seek. One is Sanskrit, another is the Old Scandinavian runic alphabet (futhark). Its strange order (why begin with an f?) has been the object of endless speculation, but a convincing answer has not been found. Each hypothesis explains some oddity rather than the overall system.


Letters often have names. For instance, aleph means “ox,” the runic f was associated with the word for “property” (of which Engl. fee is a distant echo), and so forth. Such names are usually added in retrospect, to facilitate the process of memorization; they can be called mnemonic rules for learners. Our “names” (B = bee, F = ef, etc.) are instances of vocalization. Its history is also partly obscure. I dealt with the name aitch for H in a recent blog. Professor Weinstock cited alphabets in which H and K follow immediately upon each other. See the picture in his article “The Rise of the Letter-Name ‘Aitch’” (English Studies 76, 1995, p. 356).



Rigveda MS in Sanskrit on paper, India, early 19th c., 4 vols., 795 ff. (complete), 10×20 cm, single column, (7×17 cm), 10 lines in Devanagari script with deletions in yellow, Vedic accents, corrections etc in red. Public domain via Wikimedia Commons.


Definition of literacy.


In my file I discovered a question asked long ago. I doubt that I ever answered it and don’t know whether our correspondent still needs an answer. In any case, I am sorry that I mislaid the letter. Can the American Sign Language (ASL) be viewed as having literacy? “ASL has never been considered as purely oral. It does not have a written system but some of us consider ASL as having literacy.” Judging by the usage familiar to most of us, literacy deals with writing. The person who cannot read and write is “illiterate.” The communities of the past that had no writing systems are sometimes referred to as preliterate (an unfortunate term, for it implies that literacy is a natural state in the development of culture). To the extent that ASL addresses itself to the eye it probably cannot be called a form of literacy. Our correspondent added the following statement: “As you may be aware, literacy is much more than reading/writing. I am trying to argue that ASL with no written system of its own can truly be considered as literate or having literacy. This is not a major concern because approximately 67% of the spoken languages in the world have no written forms.” It seems that, unless we broaden the definitions and make too much of such phrases as computer literacy, in which literacy means “expertise,” and literate as synonymous with “educated, learned; well-read” (he is very literate), ASL cannot be called a literate language. Like several other sign languages, it is a means of communication that bypasses writing.


“Week” and its cognates.


Why does Engl. week have a so-called long vowel, while German has Woche, Swedish has vecka, and so forth, all of them with a short vowel in the root? The oldest form of the word must have been wika. Initial w tended to change the vowel that followed it; hence the labial vowels u and o (as in German Woche). In the Scandinavian languages, w was lost before u and o, which explains Norwegian uke and Danish uge. In the languages in which the vowel did not become u or o, it often became e (o in Woche goes back to e). In Old English, the form was wice ~ wicu, but in Middle English, as in the other Germanic languages, a vowel standing before a single consonant tended to be lengthened. That is why German Name and its English cognate name have long vowels. However, in English, short i and short u tended to resist lengthening, and, if they succumbed to the change, they became long e and long o (long in its etymological sense, that is protracted, with an increase in duration, not as they are understood in Modern English!). Engl. wice became weke (with long e), and this long e changed to ee by the Great Vowel Shift. Similar processes occurred in many Scandinavian dialects. Elsewhere we have only a more open vowel (short e), without lengthening; hence Swedish vecka. The Norwegian and Danish forms have long vowels, even though the vowel there is u rather than i or e. Sorry for an overabundance of technicalities, but here the answer depended entirely on details of this nature.


Meleda.


I should have quoted the letter of our correspondent rather than retelling it. This would have made some comments unnecessary.


“The particular name I am interested in is meleda from Pieter van Delft & Jack Botermans’ 1978 Creative Puzzles of the World: ‘Meleda first appeared in Europe in the mid-16th century and was described by the Italian mathematician Geronimo Cardano.’ Some folk appear to have taken this to mean that Cardano himself used the word but I do not see it in the relevant De Subtilitate paragraphs which describe the ‘instrument’ in Latin. In fact, at present I have no reference to meleda prior to 1978. Google’s Ngram Viewer suggests major usage only after 1900 but this appears to be an island name and (perhaps) a disease related to it.”


So the puzzle (I mean the earliest recorded use of the word) remains.


If you will and related matters.


Here is an elegant example of will after if. “‘If he [Snowden] wants to go somewhere and somebody will host him—no problem,’ Putin said.” Unfortunately, this is a translation. The Associated Press did not quote the Russian original, and I wonder what Putin meant. I suspect something like …“and if somebody is willing to host him.” Compare another sentence: “If a girl younger than sixteen gives birth and won’t name the father, a new Mississippi law… says…” (also from the Associated Press). Is the sequence justified? And finally, an extract from a letter to the editor: “…if someone—anyone—reading this will think of their family before getting behind the wheel, it would bring me some sense of peace.” Does if someone will think mean “please think”? And does would after will sound like today’s standard American usage? It is not my intention to police anyone’s speech habits (let her rip): as a linguist I am just wondering what has happened to auxiliary verbs in conditional clauses.


Vodka.


Yes, of course I am aware of the alternate etymology of vodka, and Chernykh’s two-volume dictionary stands on my shelf next to Vasmer’s and a few others. However, the origin of the word remains unsolved (clearly not “little water”!).


I still have several unanswered questions. Next month!


Anatoly Liberman is the author of Word Origins…And How We Know Them as well as An Analytic Dictionary of English Etymology: An Introduction. His column on word origins, The Oxford Etymologist, appears on the OUPblog each Wednesday. Send your etymology question to him care of blog@oup.com; he’ll do his best to avoid responding with “origin unknown.”


Subscribe to Anatoly Liberman’s weekly etymology posts via email or RSS.


Subscribe to the OUPblog via email or RSS.


The post Monthly etymology gleanings for August 2013, part 2 appeared first on OUPblog.





Reasoning in medicine and science

By Huw Llewelyn




In medicine, we use two different thought processes: (1) non-transparent thought, e.g. slick, subjective decisions and (2) transparent reasoning, e.g. verbal explanations to patients, discussions during meetings, ward rounds, and letter-writing. In practice, we use one approach as a check for the other. Animals communicate solely through non-transparent thought, but the human gift of language allows us also to convey our thoughts to others transparently. However, in order to communicate properly we must have an appropriate vocabulary linked to shared concepts.


‘Reasoning by probable elimination’ plays an important role in transparent medical reasoning. The diagnostic process uses ‘probable elimination’ of rival possibilities and points to a conclusion through that process of elimination. Suppose one item of information (e.g. a symptom) is chosen as a ‘lead’ that is associated with a short list of diagnoses that covers most people with that lead (ideally 100%). The next step is to choose a diagnosis from that list and to look for a finding that occurs commonly in those with that chosen diagnosis and rarely (ideally never) in at least one other diagnosis in the list. If such a finding is found for each of the other diagnoses in the list, then the probability of the chosen diagnosis is high. If findings are found that never occur in each of the other possibilities in the list, then the diagnosis is certain. However, if none of this happens, then another diagnosis is chosen from the list and the process is repeated.
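
As a rough illustration of the procedure just described, here is a minimal Python sketch. The diagnoses, findings, frequency figures, and thresholds are all invented for illustration; they are not clinical data and the sketch is not drawn from the Oxford Handbook of Clinical Diagnosis.

```python
# A toy model of 'reasoning by probable elimination'. All numbers are invented.
# FINDING_FREQ[d][f] = assumed frequency of finding f among patients with diagnosis d.
FINDING_FREQ = {
    "appendicitis":    {"rlq_tenderness": 0.9, "guarding": 0.8,  "diarrhoea": 0.1},
    "renal_colic":     {"rlq_tenderness": 0.2, "guarding": 0.05, "diarrhoea": 0.05},
    "gastroenteritis": {"rlq_tenderness": 0.1, "guarding": 0.05, "diarrhoea": 0.7},
}

def evaluate(candidate, rivals, patient_findings, common=0.5, rare=0.1):
    """For each rival, look for a finding in this patient that is common in the
    candidate diagnosis and rare in that rival. Return 'high' if every rival can
    be made improbable, 'certain' if every rival is excluded outright, else None."""
    certain = True
    for rival in rivals:
        eliminating = [f for f in patient_findings
                       if FINDING_FREQ[candidate].get(f, 0) >= common
                       and FINDING_FREQ[rival].get(f, 0) <= rare]
        if not eliminating:
            return None                      # this rival has not been made improbable
        if all(FINDING_FREQ[rival].get(f, 0) > 0 for f in eliminating):
            certain = False                  # improbable, but not impossible
    return "certain" if certain else "high"

def diagnose(short_list, patient_findings):
    """Try each diagnosis on the lead's short list in turn as the candidate."""
    for candidate in short_list:
        rivals = [d for d in short_list if d != candidate]
        verdict = evaluate(candidate, rivals, patient_findings)
        if verdict:
            return candidate, verdict
    return None, "no candidate survives the elimination step"

# A patient whose lead (right lower quadrant pain) yields the short list above
print(diagnose(list(FINDING_FREQ), ["rlq_tenderness", "guarding"]))
# -> ('appendicitis', 'high')
```

The fixed thresholds merely stand in for ‘commonly’ and ‘rarely’; the essential control flow, eliminating each rival in turn before accepting the candidate, follows the paragraph above.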


Probabilistic reasoning by elimination explains how diagnostic tests can be assessed in a logical way using these concepts to avoid misdiagnosis and mistreatment. If clear, written explanations became routine, it would go a long way to eliminating failures of care that have dominated the media of late.


Doctor and patient


Reasoning by probable elimination is important in estimating the probability of similar outcomes by repeating a published study (i.e. the probability of replication). In order for the probability of replication to be high, the probability of non-replication due to all other reasons has to be low. For example, the estimated probability of non-replication due to poor reporting of results or methods (due to error, ignorance or dishonesty) has to be low. Also, the probability of non-replication due to poor or idiosyncratic methodology, or different circumstances or subjects in the reader’s setting, etc. should be low. Finally, the probability of non-replication by chance due to the number of readings made must be low. If, after all this, the estimated probabilities are low for all possible reasons of non-replication, then the probability of replication should be high. This assumes of course that all the reasons for non-replication have been considered and shown to be improbable!
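
To see why each of those estimated probabilities has to be small, a back-of-the-envelope sketch may help. The figures below are invented, and treating the causes of non-replication as independent (or, alternatively, using a crude union bound) is my simplifying assumption rather than part of the argument above.

```python
# Invented estimates of the probability of non-replication from each cause.
p_poor_reporting   = 0.05   # error, ignorance or dishonesty in the published report
p_poor_methodology = 0.10   # idiosyncratic methods, or different circumstances/subjects
p_chance           = 0.05   # non-replication by chance, given the number of readings

causes = [p_poor_reporting, p_poor_methodology, p_chance]

# If the listed causes were exhaustive and independent of one another:
p_replication = 1.0
for p in causes:
    p_replication *= (1.0 - p)
print(round(p_replication, 3))                # ~0.812

# A cruder lower bound that needs no independence assumption (union bound):
print(round(max(0.0, 1.0 - sum(causes)), 3))  # 0.8
```

Raise any single cause to, say, 0.4 and the probability of replication falls below 0.6 on either reckoning, which is the point of the paragraph above: every reason for non-replication has to be estimated as improbable, and the list of reasons has to be complete.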


If the probability of replicating a study result is high, the reader will consider the possible explanations or hypotheses for that study finding. Ideally the list of possibilities should be complete. However, in a novel scientific situation there may well be some explanations that no one has considered yet. This contrasts with a diagnostic situation where past experience tells us that 99% of patients presenting with some symptom have one of a short list of diagnoses. Therefore, the probability of the favoured scientific hypothesis cannot be assumed to be high or ‘confirmed’ because it cannot be guaranteed that all other important explanations have been eliminated or shown to be improbable. This partly explains why Karl Popper asserted that hypotheses can never be confirmed – that it is only possible to ‘falsify’ alternative hypotheses. The theorem of probable elimination identifies the assumptions, limitations and pitfalls of reasoning by probable elimination.


Reasoning by probable elimination is central to medicine, science, statistics and other disciplines. This important method should have a central place in education.


Huw Llewelyn is a general physician with a special interest in endocrinology and acute medicine, who has had a career-long interest in the mathematical representation of the thought processes used by doctors in their day to day work during clinical practice, teaching and research. He has also been an honorary fellow in mathematics in Aberystwyth University for many years and has had wide experience in different medical settings: general practice, teaching hospital departments with international reputations of excellence and district general hospitals in urban and rural areas. His insight is reflected in the content of the Oxford Handbook of Clinical Diagnosis and the mathematical models in the form of new theorems on which that content is based.


Subscribe to the OUPblog via email or RSS.


Subscribe to only health and medicine articles on the OUPblog via email or RSS.


Image credit: Image via iStockphoto.


The post Reasoning in medicine and science appeared first on OUPblog.





The trouble with Libor

By Richard S. Grossman




The public has been so fatigued by the flood of appalling economic news during the past five years that it can be excused for ignoring a scandal involving an interest rate that most people have never heard of. In fact, the Libor scandal is potentially a bigger threat to capitalism than the stories that have dominated the financial headlines, such as the subprime meltdown, the euro-zone crisis, the Madoff scandal, and the MF Global bankruptcy.


It’s not surprising that Libor has generated less interest than these other stories. It has left neither widespread financial turmoil nor bankrupt celebrities in its wake. It took place largely outside the United States, leaving the American media and public all the more uninterested. It involves technical issues that induce sleep in even the most hard-bitten financial correspondents.


Yet, despite its lower profile, the Libor scandal is potentially more serious than any other financial catastrophe in recent memory.


The subprime crisis can be blamed on poor government management: irresponsible fiscal policy combined with loose monetary policy and poor regulatory enforcement. The euro crisis resulted from one poorly conceived idea: creating one currency when retaining 17 distinct currencies would have been better. The Madoff and MF Global debacles can be chalked up to a few isolated unscrupulous and reckless individuals.


By contrast, the Libor scandal was nothing less than a conspiracy in which a group of shadowy bankers colluded against the majority of participants in the financial system—that is, you and me. And therein lies the danger.


Libor is the acronym for the London InterBank Offered Rate. Previously produced for the British Bankers’ Association, it was calculated by polling between six and eighteen large banks daily on how much it cost them to borrow money. The highest and lowest estimates were thrown out and the remainder—about half—were averaged to yield Libor.
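
For concreteness, here is a minimal sketch of that trimmed-mean calculation in Python. The submissions are made up, and the precise trimming rules varied with the size of the panel; this version simply discards the top and bottom quarters and averages what is left.

```python
# Toy illustration of a Libor-style trimmed mean. The submissions are invented.
def trimmed_mean_fix(submissions, trim_fraction=0.25):
    """Throw out the highest and lowest submissions and average the remainder."""
    ranked = sorted(submissions)
    k = int(len(ranked) * trim_fraction)    # how many to drop at each end
    kept = ranked[k:len(ranked) - k]
    return sum(kept) / len(kept)

# Hypothetical borrowing-cost submissions (in percent) from a 16-bank panel
panel = [0.48, 0.50, 0.51, 0.51, 0.52, 0.53, 0.53, 0.55,
         0.56, 0.57, 0.58, 0.58, 0.59, 0.60, 0.62, 0.71]

print(round(trimmed_mean_fix(panel), 4))    # 0.5525, the published fix for the day
```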


Libor plays a vital role in the world financial system because it serves as a benchmark for some $800 trillion in financial contracts–everything ranging from complex derivative securities to more mundane transactions like credit card interest rates and adjustable rate home mortgages.


Since so much money rides on Libor, banks have an incentive to alter submissions to improve their profitability: raising submissions when they are net lenders; lowering them when they are net borrowers. Even small movements in Libor can lead to millions in extra profits–or losses.
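
For a rough sense of scale, take a purely illustrative exposure figure (the $50 billion below is an assumption, not taken from any bank’s accounts):

```python
# Purely illustrative: a bank with $50 billion of net Libor-linked borrowing.
net_exposure = 50e9           # dollars of net borrowing priced off Libor (assumed)
one_basis_point = 0.0001      # a 0.01 percentage-point move in the fix
print(f"${net_exposure * one_basis_point:,.0f} a year per basis point")
# -> $5,000,000 a year for every basis point shaved off (or added to) the fix
```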




Financial conspiracy theories are about as commonplace–and believable–as those on the Kennedy assassination and the Lindbergh kidnapping. This time, however, emails have surfaced proving that banks colluded on their Libor submissions. In one email, a grateful trader at Barclays bank thanked a colleague who altered his Libor submission at the trader’s behest: “Dude. I owe you big time! Come over one day after work and I’m opening a bottle of Bollinger.”


Unfortunately, efforts to reform Libor have been insufficient.


In July British authorities granted a contract to produce the Libor index to NYSE Euronext, the company that owns the New York Stock Exchange, the London International Financial Futures and Options Exchange, and a number of other stock, bond, and derivatives exchanges. In other words, the company that will be responsible for making sure that Libor is set responsibly and fairly will be in a position to reap substantial profits from even the slightest movements in Libor. Like putting foxes in charge of the chicken coop, this is a recipe for disaster.


The financial system’s role is to channel the accumulated savings of society to projects where they can do the most economic good—a process known as intermediation. My retirement savings may help finance the construction of a new factory; yours might help someone pay for a new house. Although Goldman Sachs CEO Lloyd Blankfein exaggerated when he called this function “Doing God’s work,” intermediation is nonetheless a vital function.


Intermediation will come screeching to a halt if individuals, corporations, and governments no longer trust the financial system with their savings. Those who believe that the interest rates they pay and receive are the result of a game that is rigged will just opt out. They may not go so far as to stash their savings under their mattresses, but they will certainly keep them away from the likes of the bankers they believe have been cheating them. Instead they will hold their money in cash or in government bonds, which will reduce the amount available for productive purposes. The consequences for the economy will be severe.


Rather than handing Libor over to a firm with a conflict of interest, the British government should announce that a year from now, Libor will cease to exist. How would markets react to the disappearance of Libor? The way markets always do. They would adapt.


Financial firms will have a year to devise alternative benchmarks for their floating rate products. Given the low repute in which Libor—and the people responsible for it—are held, it would be logical for one or more publicly observable, market-determined (and hence, not subject to manipulation) interest rates to take the place of Libor as currently constructed.


Only by ensuring that this important benchmark rate is determined in a transparent manner can faith in it be restored.


Richard S. Grossman is Professor of Economics at Wesleyan University and a Visiting Scholar at the Institute for Quantitative Social Science at Harvard University. He is the author of WRONG: Nine Economic Policy Disasters and What We Can Learn from Them and Unsettled Account: The Evolution of Banking in the Industrialized World since 1800.


Subscribe to the OUPblog via email or RSS.


Subscribe to only business and economics articles on the OUPblog via email or RSS.


Image credit: Stacks of coins with the letters LIBOR isolated on white background. © joxxxxjo via iStockphoto.


The post The trouble with Libor appeared first on OUPblog.





Why Parliament matters: waging war and restraining power

By Matthew Flinders




29 August 2013 will go down as a key date in British political history, not only because of the conflict in Syria but also because of the manner in which it reflects a shift in power and challenges certain social perceptions of Parliament.


“It is very clear to me that Parliament, reflecting the views of the British people, does not want to see British military action,” the Prime Minister acknowledged. “I get that and the Government will act accordingly.” With this simple statement David Cameron mopped the blood from his nose and retreated to consider the political costs (both domestically and internationally) of losing the vote on intervention in the Syrian conflict by just 13 votes. While commentators discuss the future of ‘the special relationship’ with the United States, and whether President Obama will risk going into Syria alone, there is great value in stepping back a little from the heat of battle and reflecting upon exactly why the vote in the House of Commons matters. In this regard, three inter-related issues deserve brief comment.


The broader political canvas on which the vote on military intervention in Syria must be painted can be summed up by what is known as the Parliamentary Decline Thesis (PDT). In its simplest manifestation the PDT suggests that the government became gradually more ascendant over Parliament during the twentieth century. Texts that lamented the ‘decline’ or ‘death’ of Parliament — such as Christopher Hollis’ Can Parliament Survive? (1949), George Keeton’s The Passing of Parliament (1952), Anthony Sampson’s Anatomy of Britain (1962), Bruce Lenman’s The Eclipse of Parliament (1992), to mention just a few examples — have dominated both the academic study of politics and how Parliament is commonly perceived.


What the vote on Syria reveals is the manner in which the balance of power between the executive and the legislature is far more complex than the PDT arguably allows for. There is no doubt that the executive generally controls the business of the House but independent-minded MPs are far more numerous, and the strength of the main parties far more constrained, than is generally understood. (Richard Crossman’s introduction to the 1964 re-print of Walter Bagehot’s The English Constitution provides a wonderful account of this fact.)




Drilling down still further, this critique of the PDT can be strengthened by examining the changing constitutional arrangements for the use of armed force. The formal legal-constitutional position over the use of armed force is relatively straightforward: Her Majesty’s armed forces are deployed under Royal Prerogative, exercised in practice by the Prime Minister and Cabinet. However, the last decade has seen increased debate and discussion about Parliament’s role in approving the use of armed force overseas. From Tam Dalyell’s proposed ten-minute rule bill in 1999 that would have required ‘the prior approval — by a simple majority of the House of Commons — of military action by the UK forces against Iraq’ through to the vote on war in Iraq on 18 March 2003, the balance of power between the executive and legislature in relation to waging war has clearly shifted towards Parliament. Prior assent in the form of a vote on a substantive motion is now required before armed force can be deployed. The problem for David Cameron is that he is the first Prime Minister to have been defeated in a vote of this nature.


Defeat for the coalition government brings us to our third and final issue: public engagement and confidence in politics (and therefore politicians). The data and survey evidence on public attitudes to political institutions, political processes and politicians is generally overwhelmingly negative with a strong sense that MPs in particular have become disconnected from the broader society they are supposed to represent and protect. The public’s perception is no doubt related to the dominance of the PDT but on this occasion it appears that a majority of MPs placed their responsibility to the public above party political loyalties.


With less than 22% of the public currently supporting military intervention in Syria, Parliament really has ‘reflected the views of the British people’. The bottom line seems to be that the public understands that ‘punitive strikes’ are unlikely to have much impact on a Syrian President who has been inflicting atrocities on his people for more than thirty months. (Only in Britain could war crimes in Syria be relegated for several months beneath a media feeding frenzy about Jeremy Paxman’s beard!) War is ugly, brutal, and messy; promises of ‘clinical’ or ‘surgical’ strikes cannot hide this fact.


At a broader level — if there is one — what the ‘war vote’ on the 29 August 2013 really reveals is that politics matters and sometimes works. Parliament is not toothless and it has the ability to play a leading role in restraining the executive in certain situations. Could it be that maybe politics isn’t quite as broken as so many ‘disaffected democrats’ seem to think?


Professor Matthew Flinders is Director of the Sir Bernard Crick Centre for the Public Understanding of Politics at the University of Sheffield. He wrote this blog while sitting in the Casualty Department of the Northern General Hospital with a broken ankle and is glad to report that he received a wonderful standard of care. He is the author of Defending Politics (2012); you can find Matthew Flinders on Twitter @PoliticalSpike and read more of his blog posts here.


Subscribe to the OUPblog via email or RSS.


Subscribe to only politics articles on the OUPblog via email or RSS.


Image credit: London Houses of Parliament and Westminster Bridge. By Francesco Gasparetti [CC-BY-2.0], via Wikimedia Commons.


The post Why Parliament matters: waging war and restraining power appeared first on OUPblog.





September 3, 2013

Social video – not the same, but not that different

By Karen Nelson-Field




Why is it that when a new media platform comes along, everything we know about how advertising works and how consumers behave seems to go out the window? Because the race to discovery means that rigorous research with duplicated results is elusive. Instead, we see many one-offs, case studies, and even observations claiming ‘law’ status. But not all laws are laws; they are typically qualitative examinations of a single instance, providing little understanding or insight for managers. And this is particularly the case in the social video space.


This is not to say that marketers are gullible, but rather they are in the ‘worship’ stage of the new media adoption cycle. At this stage, marketers are still so taken by the possibilities of the new platform that they only see the positives of the channel. Of course this is not all that surprising when typically (a) only the positives of the channel are ever reported and (b) only a small sample of ‘winning’ campaigns are analysed. It’s not sexy to be sceptical of the hot new thing and no one in marketing wants to be labelled dowdy. But a result that can’t be replicated and is based on a single (often small) set of skewed data is poor research.




It is essential to consider multiple variables (both separately and collectively) associated with content diffusion, including creative characteristics, branding characteristics, emotions, and distribution. Data sets must vary greatly both in orientation (commercial and non-commercial) and in the degree of viewing and sharing (high and low). Additionally, actual sharing behaviour — not claimed behaviour or sharing intent — must be considered.


How does overt branding affect sharing? What emotions are felt when overt branding is present? Which creative devices (babies and dogs) get shared the most and induce the strongest emotions? What is the role of emotions and valence in sharing? What sort of reach do brand communities offer to social video marketers, and what are the implications for brand growth? What is the reality of viral success and the key to highly memorable content? What is the role of distribution in sharing, and which combination of elements best predicts success?


Question everything that is known and challenge anything that isn’t evidenced with real data.


Karen Nelson-Field is the author of Viral Marketing: The Science of Sharing from Oxford University Press Australia & New Zealand. She is a Senior Research Associate with the Ehrenberg-Bass Institute at the University of South Australia. Her current research focuses on whether existing empirical generalisations in advertising and buyer behaviour hold in the new media context. Her research into social media marketing, content marketing, and video sharing has been internationally recognised in both industry and academic forums, while her (sometimes controversial) findings regularly spark global discussion amongst practitioners.


Subscribe to the OUPblog via email or RSS.


Subscribe to only business and economics articles on the OUPblog via email or RSS.


Image credit: Two young girls with a smartphone. © alvarez via iStockphoto.


The post Social video – not the same, but not that different appeared first on OUPblog.





Polio provocation: a lingering public health debate

By Stephen E. Mawdsley




In 1980, public health researchers working in the United Republic of Cameroon detected a startling trend among children diagnosed with paralytic polio. Some of the children had become paralyzed in the limb that had only weeks before received an inoculation against a common pediatric illness. Further studies emerging from India seemed to corroborate the association. Health professionals discussed the significance of the findings and debated whether they were due to coincidence or due to the provocation of polio from immunizations. The theory of ‘polio provocation’ was of historical significance and had been hotly contested by doctors and public health officials many decades earlier.


A child receiving an injection at Kroo Bay Health Clinic. Photo by Louise Dyring Nielson. (CC BY 2.0)


Debate over whether polio provocation really existed or was simply a clinical chimera waxed and waned. The theory first came to light in the early 1900s, just as epidemic polio began to plague industrialized countries. However, most of the early studies were based on clinical observation and did not utilize placebo controls for comparative purposes. In the United States, the theory of polio provocation was fiercely contested in the 1940s and 1950s. As laboratory technology at the time could not unlock the mechanism behind polio provocation, health professionals considered how to balance the health risks. Were all injections guilty of triggering polio infection? Should immunization programs be banned during polio epidemics? Was the risk of declining herd immunity from halting pediatric immunizations greater than the risk of inciting polio from inoculations?


The theory of polio provocation divided medical communities and inspired temporary shifts in public health policy. Some health departments even shut down child immunization clinics and discouraged throat operations out of concern that the risk of causing polio was too high. After the vaccine was licensed in 1955 and the incidence of polio began to plummet, the risk of provocation waned in relation to the rise in herd immunity. Children who were vaccinated against polio did not face the risk of polio provocation from other inoculations. Traditional public health practices were soon restored and the theory seemed no longer applicable. Concerns regarding provocation disappeared in nations where immunization against polio was commonplace.


Lucknow, Uttar Pradesh, India, 8 November 2009: British Rotarians immunize children in the streets of Lucknow during the polio immunization campaign in Northern India. Photo by Jean-Marc Giboux. (CC BY 2.0)


Polio provocation resurfaced in the medical literature during the 1980s when large aid agencies, such as Rotary International and the World Health Organization, undertook immunisation programmes in regions where polio raged unabated. A few observers turned to timeworn medical journals to better understand polio provocation, only to uncover the debates of old. Spurred by public health activism and the evidence of health workers, medical researchers deployed modern laboratory equipment to unlock the secrets of the theoretical adverse health link. By the 1990s, researchers announced their discovery of the mechanism behind polio provocation: tissue injury caused by certain injections permits the virus easy access to nerve channels, thereby heightening its ability to cause paralysis. Over the course of a century, polio provocation had migrated from a theory to a clinical model.


The identification of the mechanism behind polio provocation did not mean that immunisation policies changed immediately. In regions where polio remains endemic, such as Afghanistan, Pakistan, and Nigeria, the consequences of this issue continue to concern public health officials.


Stephen E. Mawdsley is the Isaac Newton – Ann Johnston Research Fellow in History at Clare Hall, University of Cambridge. He is interested in the history of twentieth century American medical research and public health. His forthcoming monograph examines one of the first large clinical trials undertaken to control polio in the United States. He is the author of “Balancing Risks: Childhood Inoculations and America’s Response to the Provocation of Paralytic Polio” (in advance access and available to read for free for a limited time) in the Social History of Medicine.


Social History of Medicine is concerned with all aspects of health, illness, and medical treatment in the past. It is committed to publishing work on the social history of medicine from a variety of disciplines. The journal offers its readers substantive and lively articles on a variety of themes, critical assessments of archives and sources, conference reports, up-to-date information on research in progress, a discussion point on topics of current controversy and concern, review articles, and wide-ranging book reviews.


Subscribe to the OUPblog via email or RSS.


Subscribe to only health and medicine articles on the OUPblog via email or RSS.


The post Polio provocation: a lingering public health debate appeared first on OUPblog.





The end of the Revolutionary War

On 3 September 1783, the Peace of Paris was signed and the American War for Independence officially ended. The following excerpt from John Ferling’s Almost a Miracle: The American Victory in the War of Independence recounts the war’s final moments, when Washington bid farewell to his troops.


The war was truly over. It had lasted well over eight years, 104 blood-drenched months to be exact. As is often the habit of wars, it had gone on far longer than its architects of either side had foreseen in 1775. More than 100,000 American men had borne arms in the Continental army. Countless thousands more had seen active service in militia units, some for only a few days, some for a few weeks, some repeatedly, if their outfit was called to duty time and again.


The war exacted a ghastly toll. The estimate accepted by most scholars is that 25,000 American soldiers perished, although nearly all historians regard that figure as too low. Not only were the casualty figures reported by American leaders, like those set forth by British generals, almost always inaccurately low, but one is left to guess the fate of the 9,871 men—once again, likely a figure that is wanting—who were listed as wounded or missing in action. No one can know with precision the number of militiamen who were lost in the war, as record keeping in militia units was neither as good as that in the Continental army nor as likely to survive. While something of a handle may be had on the number of soldiers that died in battle, or of camp disease, or while in captivity, the totals for those who died from other causes can only be a matter of conjecture. In all wars, things happen. In this war, men were struck by lightning or hit by falling trees in storms.  Men were crushed beneath heavy wagons and field pieces that overturned. Men accidentally shot themselves and their comrades. Men were killed in falls from horses and drowned while crossing rivers. Sailors fell from the rigging and slipped overboard. As in every war, some soldiers and sailors committed suicide. If it is assumed that 30,000 Americans died while bearing arms—and that is a very conservative estimate—then about one man in sixteen of military age died during the Revolutionary War. In contrast, one man in ten of military age died in the Civil War and one American male in seventy-five in World War II. Of those who served in the Continental army, one in four died during the war. In the Civil War, one regular in five died and in World War II one in forty American servicemen perished.


Unlike subsequent wars when numerous soldiers came home with disabilities, relatively few impaired veterans lived in post-Revolutionary America. Those who were seriously wounded in the War of Independence seldom came home. They died, usually of shock, blood loss, or infection. Some survived, of course, and for the remainder of their lives coped with a partial, or total, loss of vision, a gimpy leg, a handless or footless extremity, or emotional scars that never healed.


Washington Resigning Commission at Annapolis


It was not only soldiers that died or were wounded. Civilians perished from diseases that were spread unwittingly by soldiers and not a few on the homefront died violent deaths in the course of coastal raids, Indian attacks, partisan warfare, and siege operations. There is no way to know how many civilians died as a direct result of this war, but it was well into the thousands.


The British also paid a steep price in blood in this war, one that was proportionately equal to the losses among the American forces. The British sent about 42,000 men to North America, of which some 25 percent, or roughly 10,000 men, are believed to have died. About 7,500 Germans, from a total of some 29,000 sent to Canada and the United States, also died in this war in the North American theater. From a paucity of surviving records, casualties among the Loyalists who served with the British army have never been established. However, 21,000 men are believed to have served in those provincial units. The most complete surviving records are those for the New Jersey Volunteers, which suffered a 20 percent death toll. If its death toll, which was below that of regulars and Germans, is typical, some four thousand provincials who fought for Great Britain would have died of all causes. Thus, it seems likely that about 85,000 men served the British in North America in the course of this war, of which approximately 21,000 perished. As was true of American soldiers, the great majority—roughly 65 percent—died of diseases. A bit over 2 percent of men in the British army succumbed to disease annually, while somewhat over 3 percent of German soldiers died each year of disease. Up to eight thousand additional redcoats are believed to have died in the West Indies, and another two thousand may have died in transit to the Caribbean. Through 1780, the Royal Navy reported losses of 1,243 men killed in action and 18,541 to disease. Serious fighting raged on the high seas for another two years, making it likely that well over 50,000 men who bore arms for Great Britain perished in this war.


The French army lost several hundred men during its nearly two years in the United  States, mostly to disease, but the French  navy suffered losses of nearly 20,000 men in battle, captivity, and from illnesses. Spanish losses pushed the total death toll among those who fought in this war to in excess of 100,000 men.


Washington was anxious to get home, it now having been more than two years since he had last seen Mount Vernon.  It must at times have seemed that New York would not let him go. He remained for ten days after the British sailed away, looking after the final business of his command, but mostly attending a seemingly endless cycle of dinners and ceremonies.  At last, on December 4, he was ready to depart.  Only one thing remained.  At noon that day Washington hosted a dinner at Fraunces Tavern for the officers. Not many were still with the army. Of seventy-three generals yet on the Continental army rolls, only four were present, and three of those were from New York or planned to live there. Not much should be made of the paltry turnout. Men had been going home since June. Like the enlisted men, the officers were anxious to see their families and put their lives together for the long years that lay ahead. All who attended the dinner knew that the function was less for dining than for saying farewell, and it soon became an emotional meeting. At some level, each man knew that the great epoch of his life was ending.  Each knew that he would never again savor the warm pleasures of camaraderie, the pulsating thrill of danger, the rare exhilaration of military victory that had come from serving the infant nation in its quest for independence. Each knew that he was leaving all this for an uncertain future. No man was more moved than Washington, who, if he had planned to give a speech, discarded the idea. He merely asked each man to come forward to say goodbye. With tears streaming down his face, he embraced every man, and they in turn clasped him. Henry Knox grabbed his commander in chief and kissed him.


When the last man had bidden him farewell, Washington, too moved to talk, hurried to the door and to his horse that awaited him on the street. He swung into the saddle and sped away for Virginia, and home.


John Ferling is Professor Emeritus of History at the University of West Georgia. He is a leading authority on late 18th and early 19th century American history. His new book, Jefferson and Hamilton: The Rivalry that Forged a Nation, will be published in October. He is the author of many books, including Independence, The Ascent of George Washington, Almost a Miracle, Setting the World Ablaze, and A Leap in the Dark. He lives in Atlanta, Georgia.


Subscribe to the OUPblog via email or RSS.


Subscribe to only American history articles on the OUPblog via email or RSS.


Image credit: Washington resigning his commission at Annapolis, Dec. 23, 1783. Thomas Addis Emmet. Courtesy of the New York Public Library Digital Collections.


The post The end of the Revolutionary War appeared first on OUPblog.





Interpreting Chopin on piano

By Deborah Rambo Sinn




One of the fascinating things about being a musician is that I can perform the same Chopin piece that has been played by thousands of pianists for almost two centuries and breathe life into it in a way that no one has ever done before. Tomorrow, I will play the same piece and know it will be different again. What allows such individuality is interpretation: an alchemist’s mixture of historically accurate performance practice, the artist’s personality, convictions, and good guesses.


The score — the source for classical music — is always a puzzle because it is simply not possible for composers to faithfully transcribe every aspect of the music from their imaginations into notes, boxes, and beams. The resulting black and white pages can only hint at the original magic behind them, leaving musicians to interpret the meaning of the myriad symbols they find. Add to this the ever-changing parameters of acceptable practices that hover around composers and eras and one begins to see the complexity of interpreting even the simplest piece. Performers must know how to negotiate these rubrics while crafting performances that uniquely reflect their personalities.


When I look at a score, it separates into countless parts. I will see the melody, connect it with its bass-note counterweights, and quickly identify “throw-away notes” — the tier that ought to recede into the textural background. Because of my formal training as a pianist, I will understand the boundaries of volume (dynamics), rhythmic wiggle room (rubato), and pedaling surrounding a given piece, and I will dance flirtatiously towards and away and around these invisible lines.


Steinway & Sons concert grand piano. Photo: © Copyright Steinway & Sons. Creative Commons License.


Each element of interpretation lies on a sliding scale. This note or that one could be just a fraction of a decibel louder than the other. Rhythm could be strict — or messy to a startling degree — and still fall within the range of acceptability to the listener’s ear. This melody or that one could be brought forward. Harmonies lean in, clash, and resolve. In the interpretation of classical music, pushes and pulls are expected within the margins of “good taste” and great artists know how to tastefully manipulate any note at any moment. They have developed a vast store of ideas that can be grabbed from the shelf and applied at an instant during performance.


Works by Chopin permit freedoms that stretch rhythm to the edges of total collapse. Melodies are dramatic and gorgeous, imitating the human voice, while wide ranges and giant leaps make them impossible for any singer to perform. The interpretive license a pianist is allowed to take makes these some of the most intensely personal pieces one can play.


In my performance of this nocturne, listen for the range of dynamics, the way I highlight surprises in the harmonies, and the rubbery stretches in the rhythm. Although I make interpretive decisions based on both scholarship and gut feeling, it is my ultimate objective to force you to the edge of your seat, to breathe with me at the end of every phrase, and to invite you to an intimately personal experience. Let me know how I did.


Deborah Rambo Sinn is the author of Playing Beyond the Notes: A Pianist’s Guide to Musical Interpretation. She taught for two decades at colleges and universities and has performed classical concerts on four continents. She lived in Germany for five years where she also played keyboards for professional musical productions and coached opera singers and instrumentalists in music interpretation. Sinn currently resides in Washington State, where she is active as a private teacher, coach, and performer.


Subscribe to the OUPblog via email or RSS.


Subscribe to only music articles on the OUPblog via email or RSS.


The post Interpreting Chopin on piano appeared first on OUPblog.




