Oxford University Press's Blog

October 12, 2012

Four questions about the relationship between music and language

By Aniruddh D. Patel



Music and language are our two most powerful and complex communication systems. What is their relationship as mental systems? This question has fascinated thinkers for centuries, but only in the past decade has it become a focus of empirical research.


Why study the relationship between music and language?


As far as we know, spoken language surpasses all other animal communication systems (e.g., bird song, whale song) in terms of its degree of structure, meaning, and complexity. This makes it hard to get insight into how it works by studying the brains of other species. Yet within our own brain is a second communication system rich in complexity and meaning, namely music. Comparative music-language research can help us understand our remarkable ability to make sense out of sound.


What are some recent trends in music-language research?


Currently there’s a lot of interest in the impact of musical training on language function. There is suggestive evidence that learning a musical instrument improves the brain’s processing of speech, but how and why does this occur? Similarly, there are growing indications of a link between musical rhythmic abilities and linguistic reading skills in young children, but what is the underlying link in the brain?


What are some of the key differences between music and language?


One important difference is that music often involves simultaneous, coordinated sound production by several people (e.g., in group singing), while ordinary language involves alternation between solo ‘performers’. Music thus has a natural way of building social bonds between people, and this social aspect of music is key to its power to unite groups and build a sense of identity in a community.


Which came first, music or language?


Darwin considered this question in The Descent of Man. He argued that music came first in the form of wordless songs used for courtship in human ancestors, akin to how birds use song today. In contrast, his contemporary Herbert Spencer argued that language came first. This debate continues today.  While we may never know the answer to this question, modern research on the evolution of music has turned from speculation to experimental research on the musical capacities of other species, including parrots, as a way to gain insight into the evolutionary history of our own musical abilities.


Aniruddh D. Patel is an Associate Professor in the Department of Psychology at Tufts University and the author of Music, Language, and the Brain.



The future of an illusion

This article originally appeared in The Times Literary Supplement (reproduced with permission)




By Andrew Scull



Fights over how to define and diagnose mental illness are scarcely a novel feature of the psychiatric landscape, but their most recent manifestation has some unusual features. For more than a decade now, the American Psychiatric Association has been preparing a new edition of its Diagnostic and Statistical Manual (DSM), the fifth (or by some counts the seventh) edition of that extraordinary tome, each incarnation weightier than the last. Over the past two years, however, major attacks have been launched on the enterprise, replete with allegations that the new edition shows signs of being built on hasty and unscientific foundations; that it pathologises what are everyday features of normal human existence; and that it threatens to create new epidemics of spurious psychiatric diseases. These verbal assaults have come in substantial part from an unexpected quarter: not from the ranks of the anti-psychiatric chorus, Szaszian, sociological or otherwise, but, amongst others, from the editors-in-chief of DSM III and DSM IV (Frances 2009; Spitzer 2009).


These are psychiatrists whose previous work, to be sure, will be modified and superseded in the new edition, but they are also men whose careers were built upon their unswerving commitment to the underlying logic of creating a nosological Bible. Their critiques have spawned claims from the ruling psychiatric oligarchy that they are motivated by pique at seeing their creations cast aside, or perhaps, as some have suggested, even by the loss of royalties the editor-in-chief of DSM IV will suffer when his version of the classificatory system is rendered obsolete (Schatzberg, Scully, Kupfer, and Regier 2009). But their criticisms have already forced a delay in the publication of DSM-5 (the pretentious resort to Roman numerals to designate successive editions of the manual having finally been abandoned). And they have helped to intensify a renewed crisis of psychiatric legitimacy.


Some historical context is in order here. Complicated nosologies were a feature of nineteenth century psychiatry. They proliferated endlessly, and seemed to be of little clinical use, not least because they were so hard to operationalize. The first generally accepted sub-dividing of the psychoses emerged in Germany in the late nineteenth century, at the hands of Emil Kraepelin, who claimed to have developed inductively a distinction between two basic sub-types of serious mental disorder: dementia praecox (soon relabelled schizophrenia), and manic depressive psychosis – for Kraepelin a sort of residual category for psychotics who didn’t manifest the symptoms or have the hopeless prognosis of dementia praecox, and something generally regarded at the time as a more hopeful diagnosis. It is a testament to the wide and continuing influence of Kraepelin’s endeavours that the revolution in psychiatric nomenclature launched by DSM III in 1980 is commonly referred to as the neo-Kraepelinian revolution in psychiatry. Both enterprises sought to transform a disorderly chaos of symptoms into an orderly list of illnesses.


As its title indicates, DSM III had some predecessors. American psychiatrists had constructed two previous official diagnostic systems of their own, small pamphlets that appeared successively in 1952 and 1968. Both set up a broad distinction between psychoses and neuroses (roughly speaking, between mental disorders that involved a break with reality, and those that, less seriously, involved a distorted view of reality), and they divided up the hundred or so varieties of mental illness they recognized in accordance with their alleged psychodynamic etiologies. In that respect, they reflected the dominance of psychoanalytic perspectives in post-World War II American psychiatry. But diagnostic distinctions of the broad, general sort these first two editions set forth were of little significance for most analysts, focused as they were on the individual dynamics of the particular patient they were treating. The first two DSMs were therefore seldom consulted and were seen as little more than paperweights – and rather insubstantial paperweights at that. DSM II was a small, spiral-bound pamphlet running to no more than a hundred and thirty-four pages, and encompassing barely a hundred different diagnoses that were listed alongside the most cursory of descriptions. It sold for a mere three dollars and fifty cents, which was more than most professional psychiatrists thought it was worth.


It was precisely that lack of concern with diagnostic categories and the sense that questions of nomenclature were supremely unimportant that led psychoanalysts to view the formation of an American Psychiatric Association task force on creating a new edition of the DSM with a complacency that verged on contempt. It would prove a stunning political miscalculation. The task force quickly came to be dominated by its chairman, Robert Spitzer, and by a group of biologically oriented psychiatrists who liked to refer to themselves as DOPS (data-oriented people), which was an interesting conceit, since data and scientific evidence had remarkably little to do with what emerged from the committee’s deliberations. Instead, their work product had much to do with the preferences and prejudices the self-anointed DOPS shared. These were psychiatrists, many of them hand-picked by Spitzer, who preferred pills to talk, and for whom creating a wholly distinctive new approach to the diagnostic process became a decisive weapon in their battle to re-orient the profession. Psychoanalysts had placed but a single member of their fraternity on the major committee, and he was so swiftly marginalized that he ceased attending the sessions at which the proposed changes were discussed and finalized.


Too late, realization dawned among the psychoanalytic elite that the new nosology would have profound effects on the future of psychiatry, and on the very terms in which the broader culture conceptualized and thought about mental illness. The speculations about the psychodynamic etiology of the psychoses and neuroses that had been central to the first two editions were stripped out of the new nosology, along with all traces of obeisance to psychoanalytic doctrines. The distinction between psychosis and neurosis was abandoned. In their place, the task force adopted a seemingly simplistic and radically revamped approach to distinguishing among sub-types of mental illness. A succession of studies during the late 1960s and 1970s had demonstrated the extraordinary unreliability of psychiatric diagnoses. Many of these studies had been conducted by the profession itself (including a landmark study by Cooper et al. (1972) of differential diagnosis in a cross-national context), though the study that drew most public attention (and inflicted most damage on psychiatry’s public image) was an experiment using pseudo-patients conducted by the Stanford social psychologist David Rosenhan, whose results appeared in Science (Rosenhan 1973). Whatever its methodological flaws (and they were considerable), Rosenhan’s study was widely seen as confirmation of psychiatry’s diagnostic incompetence.


The documented failure of psychiatrists to agree on what was wrong in any given case before them proved a great embarrassment to the profession. Lawyers used the lack of consensus to cast doubt on the profession’s claims to expertise (Ennis and Litwack 1974), and drug companies seeking homogeneous populations on which to conduct clinical trials for new psycho-pharmaceuticals expressed their frustrations at psychiatry’s shortcomings in this regard. As drug development proceeded, the need to standardize the patient population on which new drugs were tested had become more pressing. And as new drugs seemed to have an effect on some, but not all, psychiatric patients, it became commercially attractive to try to distinguish different sub-populations among the mentally ill.


Unable to demonstrate convincing chains of causation for any major form of mental disorder, the Spitzer task force abandoned any pretence at doing so. Instead, they concentrated on maximizing inter-rater reliability to ensure that psychiatrists examining a particular patient would agree on what was wrong.  This entailed developing lists of symptoms that allegedly characterized different forms of mental disturbance, and matching those to a “tick the boxes” approach to diagnosis. Faced with a new patient, psychiatrists would record the presence or absence of a given set of symptoms, and once a threshold number of these had been reached, the person they were examining was given a particular diagnostic label, with “co-morbidity” invoked to explain away situations where more than one “illness” could be diagnosed. Disputes about what belonged in the manual were resolved by committee votes, as was the arbitrary decision about where to situate cut-off points: i.e., how many of the laundry list of symptoms a patient had to exhibit before he or she was declared to be suffering from a particular form of illness. Questions of validity – whether the new classificatory system was really cutting nature at the joints, so that the listed “diseases” corresponded in some sense with distinctions that made etiological sense – were simply set to one side.  If diagnoses could be rendered mechanical and predictable, consistent and replicable, that would suffice.
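
The mechanics the Spitzer task force settled on are simple enough to capture in a few lines of code. The sketch below (in Python, with invented symptom names and an invented cutoff; nothing here is drawn from any actual edition of the DSM) illustrates the "tick the boxes" logic: any two raters who record the same boxes must reach the same verdict, so reliability is guaranteed by construction, while validity never enters the computation.

# A hypothetical symptom checklist; the items and the cutoff are
# illustrative assumptions, not taken from any edition of the DSM.
CHECKLIST = [
    "depressed mood",
    "loss of interest",
    "sleep disturbance",
    "fatigue",
    "poor concentration",
]
CUTOFF = 3  # the committee-voted threshold

def diagnose(observed_symptoms):
    """Tick the boxes: count listed symptoms present and compare to the cutoff."""
    ticked = sum(1 for s in CHECKLIST if s in observed_symptoms)
    return ticked >= CUTOFF

# Any rater recording these same three boxes reaches the same verdict.
print(diagnose({"fatigue", "sleep disturbance", "depressed mood"}))  # True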


DSM III’s triumph marked the advent of a classificatory system that increasingly linked diagnostic categories to specific drug treatments, and an embrace on the part of both profession and public of a conceptualization of mental illnesses as specific, identifiably different diseases, each amenable to treatment with different drugs. Most importantly, since the insurance industry began to require a DSM diagnosis before agreeing to pay for a patient’s treatment (and the preferred course and length of treatment came to be linked to individual diagnostic categories), DSM III became a document that it was impossible to ignore, and impossible not to validate. If a mental health professional wanted to be paid (and could not afford to operate outside the realms of insurance reimbursement, as most self-evidently could not), then there was no alternative to adopting the manual. In subsequent years, particularly once antidepressant drugs took off in the 1990s, biological language saturated professional and public discussions of mental illness. Steven Sharfstein, then president of the American Psychiatric Association, referred to the upshot of this process as the transition from “the biopsychosocial model [of mental illness] to… the bio-bio-bio model” (Sharfstein 2005: 3).


That the specificity of the treatments was largely spurious, and that the various editions of the Diagnostic and Statistical Manual from the third edition onwards emphasized reliability and essentially ignored the more central issue of the validity of psychiatric diagnoses, proved largely irrelevant to their success in reorienting perceptions, lay and professional alike. Linked to expanded insurance coverage for the treatment of mental disorders, and providing a new grounding for psychiatric authority and a less time-consuming and more lucrative foundation for psychiatric practice, psychopharmacology encouraged a distancing of psychiatrists from the provision of psychotherapy. Each successive edition of the manual – the revised third edition (III R, 1987), the fourth edition (IV, 1994), and its “text revision” (IV TR, 2000) – has adhered to the same fundamental approach, though new “illnesses” have been added on each occasion, and the page count has mounted, like the ‘Yellow Pages’ on steroids, from the 104 pages of DSM I to the 992 pages of DSM IV TR.


Thus we return to the current controversy. The classificatory mania embodied in the various editions of DSM, from III onwards, arose from an attempt to lend an aura of “facticity” to psychiatric diagnoses, and to stave off the ridicule that threatened the profession’s legitimacy when its practitioners were shown to be unable to agree about the nature of the illness that confronted them (or even whether the patient was sick at all). But as “illnesses” proliferated in each revision, and the criteria for assigning a particular diagnosis were loosened (as they will be again in DSM-5), the very problem that had led to the invention of the new DSMs recurred, and major new threats to psychiatric legitimacy surfaced.


As diagnostic criteria were loosened, an extraordinary expansion of the numbers of mentally sick individuals ensued. This has been particularly evident amongst, but by no means confined to, the ranks of the young. “Juvenile bipolar disorder,” for example, increased forty-fold in just a decade, between 1994 and 2004. An autism epidemic broke out, as a formerly rare condition, seen in less than one in five hundred children at the outset of the same decade, was found among one in every ninety children only ten years later. The story for hyperactivity, subsequently relabelled ADHD, is similar, with ten per cent of male American children now taking pills daily for their “disease.” Among adults, one in every seventy-six Americans qualified for welfare payments based upon mental disability by 2007.


If psychiatrists’ inability to agree among themselves on a diagnosis threatened to make them a laughing-stock in the 1970s, the relabelling of a host of ordinary life events as psychiatric pathology now seems to promise more of the same. Social anxiety disorder, oppositional defiant disorder, school phobia, narcissistic and borderline personality disorders are apparently now to be joined by such things as pathological gambling, binge eating disorder, hypersexuality disorder, temper dysregulation disorder, mixed anxiety depressive disorder, minor neurocognitive disorder, and attenuated psychotic symptoms syndrome. Yet we are almost as far removed as ever from understanding the etiological roots of major psychiatric disorders, let alone these more controversial diagnoses (which many people would argue do not belong in the medical arena in the first place). That these diagnoses provide lucrative new markets for psychopharmacology’s products raises questions in many minds about whether commercial concerns are illegitimately driving the expansion of the psychiatric universe – a concern that is scarcely allayed when one recalls that the great majority of the members of the DSM taskforce are recipients of drug company largesse. That psychoactive drugs are associated with rising concerns about major side effects (ranging from iatrogenic and permanent neurological damage, through increased risks of child and adolescent suicides, massive weight gain, metabolic disorders, diabetes, and premature death) only compounds the problem.


Relying solely on symptoms and behaviour to construct its illnesses, and on organizational fiat to impose its negotiated categories on both the profession and the public, psychiatry is now facing a revolt from within its own ranks.  The Heath Robinson apparatus that is descriptive psychiatry seems to survive only because it lacks a plausible rival.  It is, however, an increasingly tenuous basis on which to rest claims to professional legitimacy.  Having chosen to erect a vast and ramshackle superstructure on such frail foundations, psychiatry must pray it doesn’t collapse in a heap of rubble.




Andrew Scull has held faculty positions at the University of Pennsylvania, Princeton, and the University of California, where he is Distinguished Professor of Sociology and Science Studies. He is a past president of the Society for the Social History of Medicine, and has held fellowships from the Guggenheim Foundation and the American Council of Learned Societies. He is the author or editor of more than twenty books, many of them on the history of psychiatry in Britain and the United States. He has lectured on five continents, as well as making many media appearances on programmes dealing with mental health issues. His book Madness: A Very Short Introduction was published in 2011.


The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about.



October 11, 2012

Paul Ryan’s worldview

By Tom Allen



Paul Ryan is the most puzzling member of Congress, at least to me. I served with him on the House Budget Committee for four of my twelve years in the House. Paul is warm, personable, intelligent, articulate — a true gentleman. Yet what he says about the federal budget and taxes makes little sense. His belief in the miraculous power of tax cuts and the crippling effect of federal “spending” was not supported by the economists who testified at our hearings.


Paul Ryan is a disciple of Ayn Rand. He credits the author of The Fountainhead and Atlas Shrugged as the person most responsible for inspiring him to get into politics. Rand’s celebration of heroic entrepreneurs and demonization of government and bureaucrats find expression today in the Republican elevation of “job creators” as our indispensable driving economic force, people who must be fed a steady diet of tax cuts to be productive.


Ryan’s rise in influence within the House, and as Romney’s running mate, is a reflection of the energy of the anti-government, libertarian center of the Republican Party, and also of its inability to generate coherent public policies on economic growth, health care, and climate change. Why was the clarion call to “repeal and replace Obamacare” never followed by a comprehensive proposal to replace it? Why must viable Republican candidates deny the scientific consensus on climate change? Why was the Republican Convention so barren of concrete conservative policies and oversaturated with airy rhetoric about “freedom, faith and family”?


After more than a decade listening to my Republican colleagues in Congress, I believe that the GOP has lost any conception of what a conservative government should do in the 21st century. Fixated on a single central vision of “smaller government, lower taxes,” the party’s leaders cannot develop a constructive governing agenda to address our major public challenges. A more pragmatic agenda would include limiting or countervailing conservative principles, such as improving governmental effectiveness or (seriously) reducing the deficit.


It wasn’t entirely sheer obstructionism that led Senator Mitch McConnell to proclaim his highest priority was to deny President Obama a second term. His party’s agenda for the federal government was and remains to tax less, spend less, and do less. The Republican Party has become in all but name a libertarian organization with a mission of ever smaller government, ever lower taxes. On that slippery slope Republican members of Congress can always be outflanked on the right.


The Republican worldview that Ryan so ably articulates, with its heroic job creators battling governments that stifle innovation and create “dependency,” is the primary source of the Party’s inability to support action by government to address health care, environmental, and economic issues on the basis of best evidence about competing policies.


Before the invasion of Iraq, Secretary Rumsfeld explained that a rapid US withdrawal after decapitating the Iraqi government was necessary to avoid creating a “dependency” among Iraqis. Ryan used the same language to justify his proposals to dramatically reduce support for poor, retired and disabled people on Medicare and Medicaid. Underlying these positions is a conviction — grounded in faith, not evidence — that helping others through government action will weaken both the intended beneficiaries and the country.


That position is inconsistent with the teachings of the world’s great religions, which call on their adherents to help those in need. That’s why the Catholic Bishops criticized the 2012 Ryan budget as being inconsistent with the gospel of Jesus Christ. But it is consistent with the teachings of Ayn Rand, who celebrated selfishness and wrote in her journal that “Christianity is the best kindergarten for communism.”


Paul Ryan has said that “we are living in an Ayn Rand novel, metaphorically speaking.” But Galt’s Gulch is a fable and our world is infinitely complicated. Contemporary hostility to government on the right has bred dangerous convictions, among them that tax cuts pay for themselves, that we’ll be welcomed as liberators, and that climate science isn’t proven — dangerous because they aren’t supported by evidence and aren’t susceptible to compromise.


I have been out of Congress for four years, and people still ask in frustration and anger why the two parties cannot reason together. The usual explanations have to do with redistricting, big money, the media, the permanent campaign, and political power struggles. Yet I believe the primary source of political polarization and congressional gridlock is the transformation of a quintessential American virtue, self-reliance, into a political doctrine that rejects the idea that government is one way we work together for the common good.


Traditional interest-group politics is now overwhelmed by “worldview politics,” a widening, hardening conflict between those who believe that the mission of government is to advance the common good and those who believe government inevitably diminishes individual liberty. As a result, all domestic issues merge into one — an unproductive, irreconcilable, ideological conflict about government itself. Yet ultimately this conflict is about our collective inability to treat our passion for individualism and community as the central yin and yang of American culture and politics.


The path to a more pragmatic politics inspired by a shared conception of the common good may now seem beyond our reach. But no trend continues forever. Along that path we must learn again that self-reliance and working together are equally necessary, not irreconcilable alternatives.


Tom Allen is President and CEO of the Association of American Publishers. He is a former U.S. Congressman who represented Maine’s 1st District from 1997 to 2009. He is the author of Dangerous Convictions: What’s Really Wrong with the U.S. Congress.




Image credit: Official portrait of U.S. Congressman Paul Ryan (R-WI). 2012. United States Congress. Public domain via Wikimedia Commons.





Air Force Two or a constitutional inconvenience?

Given the Vice Presidential debate tonight, we thought this excerpt from The Candidate would provide appropriate background on the role of these men in the White House.


By Sam Popkin



When vice presidents travel the world on White House assignments, be it to a foreign leader’s funeral, an international meeting not quite important enough for the president, or a “fact-finding trip” to give them exposure or soothe a rankled constituency, they are treated as the second most important person in America. They can build up countless IOUs by making detours from their official business and lending glamour and stature at a favored politician’s fundraiser, their presence heralded by the arrival of Air Force Two, a traffic-stopping motorcade, and a full retinue of Secret Service agents, aides, and press.


But from the moment they are picked as a running mate, they are already a source of contention and power struggles within the presidential candidate’s inner circle, and are regarded by some of the political elite as a compromise (at best) or the lowest common denominator (at worst).


The eventual choice is never the choice of the entire party. Unlike the winner of the primary, they did not earn their place via electoral combat. In addition to facing resentment from the power players within the campaign, there is the scorn and enmity from those passed over for the position.


When the nominee’s campaign screens potential running mates, the list of potential nominees includes those just on the list for their name to be leaked as a quid pro quo for an endorsement; people included to acknowledge needed constituencies; and contenders whose inclusion might actually help the candidate seal an electoral victory. (Of course, candidates will always say that their only criterion for choosing someone is that he or she is qualified to be president should the need arise.)


There are always staff conflicts within the nominee’s camp about whom to choose because the evidence about who can help with which demographic or constituency is never clear-cut. The murky data estimating which running mate-in-waiting helps the ticket get so convoluted that Bob Teeter “used to look for 28 electoral votes or some demographic bloc. Now, the crucial question is how the press and public react in the first 48 hours.”


Indeed, the choice almost always obscures the campaign’s message while the press digs deeper into the running mate’s past and finds covered-up slush funds, state house corruption, electroshock treatment, secret payments to party officials, a spouse’s tax problems, use of family influence to avoid active military duty during Vietnam, inconvenient votes against legislation designed to court needed voters, or a pregnant, unmarried daughter.


When Governor George W. Bush asked Richard Cheney to screen his candidates for vice president, Cheney prepared detailed, exhaustive questionnaires that only proved, as former VP Dan Quayle put it, “Everybody has negatives.” Cheney, who became Bush’s eventual choice, never filled one out himself, and his own negatives came to light before the campaign had a chance to hear about them. No one in the campaign saw his corporate, tax, or medical records in advance. They weren’t ready to talk about Cheney’s votes against programs Bush strongly supported as “compassionate conservatism,” or the fact that while Cheney was CEO of Halliburton Oil, the company defied US rules against trade with Iraq. Halliburton refused to disclose Cheney’s role in controversial decisions, and the result, the campaign’s press secretary told Bush, was that “We’re getting our asses kicked in the media because we’re not prepared.”


The candidate’s staff has a vested interest in the choice of a running mate, too: the Washington hands and the strategists all know their roles and positions will be influenced by the potential VP’s staff and consultants. The professionals nearly always favor candidates with whom they have worked over the candidates they don’t know, and the policy specialists are interested in candidates whose expertise and issue areas are likely to make their role more central.


“It’s a lot easier to kill legislation than pass legislation,” Quayle noted when looking back at his own experience, “So it’s a lot easier to knock off VP candidates than to actually get one through the mill.”


Once inside the administration, vice presidents suffer fresh rounds of humiliation at the hands of the president and his staff, to whom the VP has become a constitutional inconvenience. When they fly around the world representing their country, it provides great footage if they run for president, but only rarely do vice presidents actually handle sensitive negotiations unless someone besides their own staff — a cabinet member, say, or a senior aide to the president — is present to give the final word.


Not until Franklin Roosevelt died did his staff bother to tell Harry Truman any details of their negotiations with Churchill or Stalin, or that the government was developing nuclear weapons, leaving the newly inaugurated president “totally uninformed” about crucial events. Lyndon Johnson was one of the country’s most powerful, accomplished senators, but he spent three years being mocked by Robert Kennedy and all the cool, sophisticated friends of the family. Kennedy told everyone that he thought Johnson was a mistake, that his brother’s offer had been a courtesy offer Johnson was supposed to turn down. Johnson didn’t even have an office in the West Wing of the White House; he conducted his business from his Senate office. Now, however, his old colleagues did not even let him attend the Democratic caucus; he was part of the White House — albeit a lonely one — and no longer one of them. When Vice President Spiro Agnew tried to buttonhole senators on behalf of Nixon, he was publicly rebuked by the Senate majority leader Mike Mansfield for meddling; Agnew was not entitled to do business on the Senate floor: “He’s a half-creature of the Senate and a half-creature of the executive.”


Vice President Bush was an outsider, even a heretic, in the Reagan White House. Few of his friends, other than James A. Baker, the chief of staff, got jobs in the administration, and when he entered a room for a meeting, all the conversation stopped and the subject changed. No vice president can argue — or even politely differ — with a president in front of staffers without a leak revealing rifts in the White House, so Bush kept silent in front of staffers or in cabinet meetings. Then he was further belittled by stories that he “had nothing to contribute.” At least Bush, thanks to Walter Mondale’s office in the West Wing, wasn’t banished to the Old Executive Office Building where previous VPs had been relegated.


Every president does their best to have smooth relations with the vice president, but staff tension and backbiting is part of the job. When presidents don’t want to accept a proposal or do someone a favor, they instruct their staff to decline it in such a way that they, not the president, take the heat. Whenever President Ford’s staff analyzed a proposal by Vice President Nelson Rockefeller and told the president that it wouldn’t fly in a belt-tightening season, the chief of staff Donald Rumsfeld was the designated “bad cop.” Rockefeller was certain that all the animosity stemmed from Rumsfeld’s ulterior motive — to persuade Ford to dump Rockefeller for a better candidate (him).


Who Are You?


The conflicts within the administration are one thing, but once a vice president decides to run for president, the way voters perceive him is quite another. Until the public sees evidence to the contrary, vice presidents are not leaders but followers with questionable strength. They are no longer powerful senators or successful governors; they are cheerleaders for someone else’s agenda. If a president is worth seventeen votes in the Senate (the difference between a simple majority and a veto-proof majority), then the vice president is worth only one — the tie-breaker.


Vice President Bush was the youngest war hero of World War II, an all-American first baseman, and Phi Beta Kappa at Yale. He was also ambassador to China and head of the CIA. Still, as vice president he was lampooned in Garry Trudeau’s Doonesbury comic strip as having placed his “manhood in a blind trust.” George Will gibed that “the unpleasant sound Bush is emitting… is a thin, tinny ‘arf’ — the sound of a lap dog.”


Samuel L. Popkin is the author of The Candidate: What It Takes to Win – and Hold – the White House and Professor of Political Science at the University of California, San Diego. He has also been a consulting analyst in presidential campaigns, serving as consultant to the Clinton campaign on polling and strategy, to the CBS News election units from 1983 to 1990 on survey design and analysis, and more recently to the Gore campaign. He has also served as consultant to political parties in Canada and Europe and to the Departments of State and Defense. His most recent book is The Reasoning Voter: Communication and Persuasion in Presidential Campaigns; earlier he co-authored Issues and Strategies: The Computer Simulation of Presidential Campaigns; and he co-edited Chief of Staff: Twenty-Five Years of Managing the Presidency. Read his previous blog post “Five pivotal moments from incumbent campaigns” and view his previous videos “How will Mitt Romney fare in the general election?” and “Who should Mitt Romney choose as his Vice Presidential running mate?”.




Image credit: Seal of the Vice President of the United States. Source: Wikimedia Commons.





Glissandos and glissandon’ts

By Jessica Barbour


“GLISSANDO. A term unfortunately used by composers anywhere but in Italy to indicate a rapid glide over the notes of a scale on keyboard instruments and the harp, as well as a slur with no definite intervals on strings and on the trombone. Italians do not use it for the simple reason that it is not an Italian word; in fact it is not a word in any language, but a hybrid form of the French glisser (to glide or slide) with an Italian present-participle ending. The proper Italian term is strisciando.”


So begins the article “Glissando” in the fifth edition of Grove’s Dictionary of Music and Musicians, edited by Eric Blom and published in 1954. In the Language section of his preface to the book in which he details each “false coinage” that the tome has “nailed to the counter,” Blom writes that “Neither is there such a word as glissando, a sort of mock-turtle with a French head and an Italian tail; but it is so widely used (not by Italians) that its meaning must still be explained. It is therefore given an entry, where, however, it has now been firmly put in its place.”


As a musician, I found this absolutely shocking — here I thought I’d been hearing the glissando (the effect created when, for example, a pianist runs his finger up or down the keyboard) all my life, and suddenly it turned out that the very legitimacy of the word had been dismissed by Blom, a prominent linguist and writer on music, more than 30 years before I was even born. I immediately turned to the entry “Glissando” in the book’s third volume to take a look at how Blom, who dutifully composed the entry himself, put the term “in its place.”


I continued to read as the word glissando, a peculiar sort of bilingual portmanteau (not to be confused with a portamento, which is not to be confused with a glissando — more on that later), was mercilessly condemned by the editor. Later in the entry the musical effect itself earned a dismissive tone. Blom described it as “ugly and ineffective” when played on the organ, “almost too cheaply effective” coming from the harp, and “comic at best and vulgar at worst” when performed on the trombone. A similar effect from a string instrument, meanwhile, was “far less offensive.”


The term doesn’t appear in the first edition of George Grove’s A Dictionary of Music and Musicians (1878–1889). The OED cites the earliest recorded usage of the word in the 1870s, just a few years before the first volume of Grove’s book was published. However, “Glissando” is defined in the second and third editions of Grove (edited by J.A. Fuller Maitland and H.C. Colles, respectively), where the language of origin is listed as Italian, not French: (Ital. ‘sliding’).


The second edition (1900) discusses it mainly as a piano technique used “of course exclusively on the white keys,” and also mentions its use on the harp. The third edition (1927) has four definitions for the effect: one for its use on the piano, taken from the second edition; one for its use on the harp in orchestral music, which it deems “the most important”; one for its use on the violin, equating it with a long portamento; and one for its use on the trombone, saying it is “much used in music of the ‘Jazz’ type.”


David D. Boyden’s article “Glissando” from Stanley Sadie’s The New Grove Dictionary of Music and Musicians (1980), the first edition of Grove from the post-Blom era, offers a definition for the word but includes the following initial caveat: “It has proved difficult to confine the term to a single, unambiguous definition.” It doesn’t acknowledge the word’s “mock-turtle” origins except in listing its language variants (italianized, from Fr. glisser: ‘to slide’; It. strisciando).


The 1980 “Glissando” article also distinguishes portamento as a separate term. While still creating a sliding effect between two notes, a portamento does not distinguish separate pitches on the way to the destination note, and is mainly written for the voice, or for strings.


My favorite part of this edition’s article comes after the opening definition. “In practice, the terms glissando and portamento are often confused and used interchangeably… However, if, in the interest of clarity (which often entails some degree of arbitrariness), the distinctions made above are kept, it follows that the piano and the harp, which have fixed semitones, can play glissando but not portamento; and the voice, violin and trombone can produce either type of sliding, although glissando is far more difficult for them.”


That parenthetical statement about “some degree of arbitrariness” and the acknowledgment of the term’s ambiguity in practical use are vital points in this definition. The fluidity of Grove’s meaning for the word is reflected in (and perhaps influenced by) musical performance: I surveyed my friends via the celebrated research tool Facebook, asking what they do when they see glissando in a musical score, and their answers included variations on “glide,” “slide,” “it depends on the instrument,” and “panic.”


What makes Blom’s writing on the glissando so fun to read is the openness with which he tells the reader that he is in the business of defending the English language against “Musicologese.” And it seems only appropriate that he would be so protective of the lexicon—if you’re not passionate about words, why would you want to be the editor of a dictionary?


Yet you may be befuddled if you look up a word and find a definition that seems to merrily resent its own existence. The most recent version of the article is, I think, the easiest to use for practical musical purposes, even if it isn’t quite as zealously written as Blom’s. And I can’t help but feel relieved to know that the general consensus of musicians and musicologists seems to be that glissando, despite its questionable linguistic origins, remains a perfectly cromulent word.


Jessica Barbour is the Associate Editor for Grove Music/Oxford Music Online. You can read her previous blog posts, “Wedding Music” and “Clair de Supermoon”, or learn more about “Glissando” and “Portamento” on Grove Music Online. Thanks to Allison Wright for her assistance with this post.


Oxford Music Online is the gateway offering users the ability to access and cross-search multiple music reference resources in one location. With Grove Music Online as its cornerstone, Oxford Music Online also contains The Oxford Companion to Music, The Oxford Dictionary of Music, and The Encyclopedia of Popular Music.







Coming out for marriage equality

Polls and election results show Americans are sharply divided on same-sex marriage, and the controversy is unlikely to subside, especially with a presidential election almost upon us. As a result, Debating Same-Sex Marriage co-author John Corvino chose to speak to some of the questions revolving around the same-sex marriage dilemma and why the rights and responsibilities of marriage are still important.


In a series of videos, Corvino explores issues such as: Why marriage? (Why not civil unions?) Is gay marriage a threat to religious freedom? Is homosexuality unnatural? Are people who oppose gay marriage bigots?




Watch the full playlist of videos where John Corvino answers common questions about marriage equality and homosexuality.


John Corvino, Ph.D. is Associate Professor and Chair of Philosophy at Wayne State University in Detroit, Michigan. As “The Gay Moralist,” he was a regular columnist for the now-defunct 365gay.com, as well as a frequent contributor to pridesource.com, The Independent Gay Forum, and other online venues. He has contributed to dozens of books, and is currently completing a book entitled What’s Wrong with Homosexuality? for Oxford University Press. An award-winning teacher, he has lectured at over 200 campuses on issues of sexuality, ethics, and marriage. Some of his writing and video clips of his lectures are available at www.johncorvino.com. John Corvino and Maggie Gallagher are the authors of Debating Same-Sex Marriage.







The consequences of alcohol and pregnancy recommendations

By Sarah CM Roberts and Lyndsay Ammon Avalos



What should be the public health messaging on drinking during pregnancy? The answer isn’t clear-cut. We know that there is strong evidence that high levels of alcohol consumption during pregnancy harm the developing fetus. However, we don’t know conclusively what the impact is of lower level alcohol consumption. That is, we don’t know if there is a truly safe level of alcohol use, nor do we know if the line between safe and unsafe alcohol consumption is the same for all pregnant women.


This inconclusive evidence regarding harms at low levels of drinking presents a challenge for public health officials who are responsible for creating official recommendations regarding alcohol use during pregnancy. What should these messages be, and what level and type of evidence warrants a complete abstinence recommendation? Professionals and official government bodies differ in their recommendations. For example, a recent debate in the British Medical Journal focused on whether it is “all right for women to drink small amounts of alcohol in pregnancy.” The “yes” side argued that, in the absence of conclusive evidence of harms from low levels of alcohol use, women’s autonomy to make decisions for themselves should be respected. The “no” side argued that abstinence was the safest message. Further, even with access to the same evidence, some countries, including the United States, recommend complete abstinence during pregnancy; others, including the United Kingdom, recommend that women not exceed a low level.



Another ongoing debate in this field centers on the unintended consequences of abstinence recommendations. In particular, some worry that policies recommending abstinence from alcohol during pregnancy will lead women drinking low levels of alcohol to terminate otherwise wanted pregnancies. On this question, we have some compelling data. Our recent paper, “Alcohol, tobacco, and drug use as reasons for abortion,” examined whether evidence supports this argument. Briefly, in our study of 956 women seeking abortion in the United States, we found 2.5% reported alcohol as a reason for seeking abortion. Almost all (21 out of the 25) of the women reporting alcohol as a reason for abortion were drinking more than a “low” amount. Specifically, their typical alcohol use in the month before they discovered their pregnancies included binge drinking or blacking out from drinking on average once a week. We also found that all the women reporting alcohol as a reason for abortion reported that their pregnancies were unplanned. Thus, in the context of a long-standing policy recommending abstinence from alcohol during pregnancy, we didn’t find evidence that these recommendations lead women using low levels of alcohol to terminate otherwise wanted pregnancies.


Our findings have already been used to strengthen arguments for abstinence recommendations, and suggest that abstinence recommendations probably do not have the unintended consequence of leading women to unnecessarily terminate otherwise wanted pregnancies. However, to be clear, our findings don’t provide any additional evidence as to whether abstinence or low-level only recommendations actually have intended consequences. Intended consequences may include a decrease in the proportion of pregnant women who drink at all and the proportion of pregnant women who binge drink, drink moderately, or drink heavily. They also may include a reduction in the negative health effects of alcohol use during pregnancy including fetal alcohol syndrome as well as growth restrictions and cognitive impairments among children born to women drinking during pregnancy.


Unfortunately, there hasn’t been much research about whether official recommendations have intended consequences. Two recent Australian studies tried to answer this question. The alcohol and pregnancy recommendations in Australia have changed from abstinence to low-level to abstinence over the past 20 years. Both studies found that more pregnant women drank at all under low-level than abstinence guidelines, although this increase was statistically significant in only one case. Importantly, one of the studies also found that a low-level guideline was associated with a decrease in moderate or heavy drinking during pregnancy.


We need more research to confirm these Australian findings; if they are supported by additional research, we will need to have some difficult conversations. We will need to explicitly discuss whether it is more important to decrease the proportion of pregnant women who drink at all or whether we should concentrate on decreasing the proportion of pregnant women drinking at moderate or heavy levels.


Knowing that abstinence messages don’t seem to lead women to terminate otherwise wanted pregnancies doesn’t tell us what effects the abstinence messages have. We still don’t know whether or how much official abstinence recommendations influence pregnant women’s decisions to drink alcohol at all, or pregnant women’s decisions about how much to drink.  We also don’t know whether abstinence recommendations improve pregnancy outcomes or child well-being. To determine the most appropriate recommendations, we need to move beyond debating the conclusiveness of epidemiologic evidence regarding effects of low-level drinking, searching for a universal safe level, and focusing on unintended consequences of the abstinence recommendation. We need more research about how women interpret and act on official alcohol and pregnancy recommendations, and whether there are demographic or drinking-level-related subgroup differences in interpretation and action. Only when we have these data can we really establish the most appropriate public health recommendations.


Sarah CM Roberts is a public health social scientist at Advancing New Standards in Reproductive Health (ANSIRH) at the University of California, San Francisco, United States. Lyndsay Ammon Avalos is a Research Scientist at the Kaiser Permanente Division of Research in Oakland, California, United States. Their recent paper, “Alcohol, tobacco and drug use as reasons for abortion,” has been made freely available for a limited time by the journal Alcohol and Alcoholism.


Alcohol and Alcoholism publishes papers on the biomedical, psychological, and sociological aspects of alcoholism and alcohol research, provided that they make a new and significant contribution to knowledge in the field. It is the official journal of the Medical Council on Alcohol.




Image credit: Pregnant woman with a glass of wine. Photo by DomenicoGelermo, iStockphoto.





October 10, 2012

On Ayn Rand and the 2012 presidential election

Stanford Professor Jennifer Burns recently spoke with the 92nd Street Y about her new book Goddess of the Market: Ayn Rand and the American Right. Pointing out the connection between Rand and Vice Presidential candidate Paul Ryan, Burns explains how Rand’s philosophy of the “virtue of selfishness” and “favor of the individual” has become a tenet of American politics today. Interestingly, Burns says Rand might not have been a Romney/Ryan supporter, saying that “if these two religious believers are elected they will ultimately grow the state and endanger freedom because they don’t fully understand capitalism.”




A version of this article originally appeared on 92nd Street Y Campaign for the American Conversation.


Jennifer Burns is Assistant Professor of History at Stanford University and the author of Goddess of the Market: Ayn Rand and the American Right. A nationally recognized authority on Rand and conservative thought, she has discussed her work on The Daily Show, The Colbert Report, and Book TV, and has been interviewed on numerous radio programs. Read her previous blog posts: “Top Three Questions About My Interview On The Daily Show” and “Top 3 differences between The Colbert Report and The Daily Show.”







In memoriam: Christopher Peterson

Oxford University Press is saddened to hear of the passing of Christopher Peterson, who died yesterday in his home.


One of the founders of the field of positive psychology, Chris had focused over the last 15 years on the study of happiness, achievement, and physical well-being. His new book, Pursuing the Good Life, is scheduled to be published by OUP this December. He was also author of A Primer in Positive Psychology (OUP 2006) and Character Strengths and Virtues (OUP 2004, with Martin Seligman). His final blog post (on Psychology Today), published on 5 October 2012, was titled “Awesome: E Pluribus Unum / We are all the same, and each of us is unique, in death and in life.”


Christopher Peterson

18 February 1950 – 9 October 2012



Abby Gross is a Psychology Editor at Oxford University Press.





Addressing mental disorders in medicine and society

By Norman Sartorius



Stigma attached to mental disorders often makes the life of people who suffer from such illnesses harder than the illness itself. Once marked as having a mental illness, people (as well as their families) encounter difficulties in finding jobs, marital partners, housing, or protection from violence. If they happen to have a physical illness as well, they receive lower-quality treatment for it. The outcome of physical illness in people who develop a mental illness is also poorer because those affected often delay seeking help, afraid that they might be recognized as having a mental illness and therefore experience discrimination and rejection.


A good example of this is the co-morbidity (the simultaneous occurrence) of diabetes and depression. These two diseases tend to occur together and there is some evidence suggesting that they may even cause each other. When depression is present, the complications of diabetes are considerably more frequent and its treatment is hugely more expensive. Depression is often not recognized by the physicians who treat diabetes and patients — afraid of being labeled as having a mental illness — are less likely to volunteer information about their depression.


The problems arising when depression and diabetes occur at the same time, the organization of health services so that they can provide comprehensive care to people with co-morbid mental and physical illness, and the scientific endeavors to learn more about the pathogenesis of co-morbidity are currently being reviewed during an International Conference on Depression and Diabetes organized by the National Institute of Diabetes and Digestive and Kidney Diseases, in collaboration with the Dialogue on Diabetes and Depression.


The excellence of the participants in the Washington conference and its comprehensive, well-composed programme will make it possible to advance our understanding of mental disorders regardless of whether they appear alone or in combination with physical illness. In turn, this understanding will make it possible to improve the way in which mental disorders are managed and thus contribute to the betterment of quality of life for people with those disorders and to the reduction of the stigma of mental illness, which is one of the chief problems that they and their families have to face.


Professor Norman Sartorius, MD, PhD, FRCPsych is a co-author of Paradigms Lost: Fighting Stigma and the Lessons Learned. He is President of the Association for the Improvement of Mental Health Programmes (MH), a previous Director of the World Health Organization’s Mental Health Program, and past President of the World Psychiatric Association and the European Psychiatric Association. Read his previous blog post “The stigma of mental illness.”






