Oxford University Press's Blog
August 6, 2015
Genomically speaking
Today, the amount of global genetic data is doubling roughly every seven months, and this time span has shortened significantly in recent years as the field of genomics continues to mature. A recent study showed that genomics is starting to compete with the data outputs of digital giants like Twitter and YouTube. This is a game-changer for both science and technology. At this pace, genomics will rival astronomy as a generator of data. In the future, as the authors of the study suggest, we might say ‘that’s a genomically big number’ instead of an astronomically big one.
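To put that doubling time in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not from the article): only the seven-month doubling period is taken from the text, and the compounding it implies works out to roughly a 3.3-fold increase per year.

```python
# Rough illustration of growth under a seven-month doubling time.
# Only the doubling period comes from the article; everything else is arithmetic.

DOUBLING_MONTHS = 7  # stated doubling time for global genetic data

def growth_factor(months: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """Multiplicative growth in data volume over the given number of months."""
    return 2 ** (months / doubling_months)

print(f"Growth over one year:   {growth_factor(12):.1f}x")    # ~3.3x
print(f"Growth over one decade: {growth_factor(120):,.0f}x")  # ~145,000x
```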
It is clear that, despite the headline-grabbing news surrounding the Human Genome Project, we are still in the early stages of realizing how genomic technologies will re-shape society. It is fascinating to watch these changes occur on a scientific and medical level, but also on a cultural one.
One question that emerges is where to find all this data and how to make it accessible, both to drive further research and to increase public ‘genomic literacy’. Today, online information about DNA research is growing exponentially but is still spread across isolated pockets of the Internet on a bewilderingly diverse array of websites held by companies and research institutes.
Luckily, at least all public raw DNA data is centralized – a key factor behind the tremendous growth of this field so far. The value of DNA data depends on its context – i.e. the amount of relevant data available for comparison. One human genome is difficult to interpret in isolation. A million human genomes with extensive ‘annotations’, or information describing each individual’s health status and traits, are far easier to draw interpretations from. The latter is the vision for the future.

In the early days of research, the US, Europe, and Japan invested in building a global DNA library. In fact, in the US, the National Center for Biotechnology Information (NCBI), which hosts GenBank, is part of the National Library of Medicine (NLM). GenBank, the EBI’s ENA, and Japan’s DDBJ collectively host all public DNA data – or aim to, when it can be drawn out of private stores held in companies and research institutions. China’s BGI has recently entered the game with a multi-million dollar investment in its China National GeneBank.
Some people researching their own genomes might visit one of these global portals, but they are designed primarily to serve researchers with expert knowledge. Members of the general public are more likely to visit second- and third-generation sites like OpenSNP, built specifically to host human data openly and in a more accessible way.
Companies like AncestryDNA.com, 23andMe, and uBiome are building online interfaces to supply personalized data to the customers of their genetic-testing services. Illumina, the dominant DNA-sequencing company, offers the MyGenome app for smartphones. Google, Amazon, and Apple all want your DNA now, raising new speculation about the ease with which we might access genomic data in the future.
There is a potential building boom coming. If you were a visionary in the earliest days of the internet, you might have registered the generic names ‘carinsurance.com’ or ‘insurance.com’. These two URLs alone have a combined value of $84 million and top the list of domain sales to date, with sale prices of $47m (2010) and $37m (2010), respectively.
A peek under the hood of the DNA URL land-grab reveals that many are already speculating on this future. The URL ‘genomical.com’ boasts a price tag of $15k. The number and variety of other URLs already claimed and up for investment re-sale are shocking.
At the low end, there are URLs like DNA101.com ($500), dnadoublehelix ($988), dnabusiness ($1500), deoxyribonucleicacid.com ($1950), dnatrader.com ($2295), dnaprotection ($2895), dnapool.com ($3588), and dnaworld ($13k). More dear is humangenome.com, which is up for auction at a minimum bid of $25k. dnaculture.com bears double that asking price ($50k). The owner of genomics.com is entertaining offers upwards of $200k. Genomematch.com, DNAsearch.com, and DNAmatch.com are available for $180, $500k, and $750k, respectively. Whether such prices are in line with demand or purely aspirational remains to be seen. Imagine, though, the worth of generic URLs such as DNA.com, genome.com, or DNAtesting.com.
Such levels of URL saturation are a small but essential first hint of what might come. More important, obviously, are content and function. What will matter most in the long run is the creation and support of new ways to use genomics to bring about positive changes in society that help people.
It is interesting to speculate what this DNA URL portfolio might look like – and be worth – in 50 years. What advances will help researchers and the public, alike? Before it was co-opted to mean exceptionally large, genomically meant ‘of or pertaining to genomes’. Genomically speaking, it will likely be genomically large. Will you one day use it to access your genomic profile as easily as you shop online at Amazon today?
Featured image credit: Keyboard, by geralt. Public domain via Pixabay.

August 5, 2015
Playing God, Chapter 1
While dealing with the etymology of the adjective bad, I realized that an essay on good would be vapid. The picture in Germanic and Slavic with respect to good is trivial, while the word’s ties outside those two groups are bound to remain unclear. Especially troublesome is Greek agathós “good,” from which we have the given name Agatha. Statistics are lacking on whether all the women bearing this name are good (let us hope they are), but I have nothing to add to the long discussion on the possible genetic ties between the Greek and the Germanic adjectives. Perhaps good and agathós are allied in some obscure, still undiscovered way; perhaps they are not. Guesswork will lead us nowhere. So I decided to turn to one of the hardest words in Germanic etymology, namely god. Here too I am unable to promise an original solution, but a few things I am going to say may be of interest to some of our readers.
My choice of god instead of good was not fortuitous. For centuries English speakers believed that God is called this because He is good. Among our earliest lexicographers, Skinner (1671), Bailey (1721), and Johnson (1755) wrote so in their dictionaries. The great William Camden held the same view. Even today I sometimes receive letters that ask me to confirm the fact that the two words are not cognate. It seems strange to many that a minor thing like the discrepancy between the vowels can drive a wedge between such seemingly indissoluble concepts as “god” and “good.” Later we will see that gods were not meant to be good. The old root for good had a long vowel, while god had short o (for once, modern spelling—oo versus o—can be trusted). Short and long vowels can alternate, but according to certain rules. One of them states that short and long o never meet in the same Germanic root. The system of vowel alternations is called ablaut (the term more common in British usage is gradation): compare ride / rode / ridden. Ablaut is like a cage, and the rows of vowels squeezed into it are like non-intersecting railway tracks. Occasionally this neat system seems to break down and we witness derailment, but first, all cases of derailment are suspect, for they need special pleading, and second, short o and long o hardly ever “derail.” The verdict is final: god is not allied to good.
By definition, the idea of a single omnipotent being called “god” is present only in monotheistic religions. At one time, all human societies passed through the stage anthropologists call animism. Some tribes still believe that all objects (stones, trees, and so forth) are animate. Greek myths tell us about rivers and groves whose immortal guardians should not be offended. European folk tales celebrate the same plot. Some such guardians rise to the level of supreme masters and mistresses of the elements. The terrible Russian Baba Yaga (stress on –ga) questions everyone who tries to invade her precincts. The vengeful Artemis punishes poachers and even those who happen to see too much by chance (thus, Actaeon stumbled on the naked virgin huntress, whereupon he was transformed into a stag and torn apart by his own hounds), while the jealous Apollo did not allow anyone to outdo him at a musical contest, as the fate of the luckless satyr Marsyas shows. Since Baba Yaga is a creature of so-called lower mythology, we don’t call her a goddess, unlike Artemis, but their functions are similar: they control the forest.

The European pagan religions known to us have some traits of monotheism because in the family of gods and goddesses one of them is allowed to rise above everybody else (such is Zeus in the Greek pantheon and Woden, that is, Othinn, in Scandinavia), but in the south, as well as in the north, pagans were far from losing faith in the other omnipotent beings. Yaga, Zeus, and the rest were so-called speaking names, like Engl. April or Melody. I know a man called Frog. Sometimes we understand what such names designated, but more often we don’t. Thus, Thor (Þórr) is simply “thunder.” The obscure Scandinavian god Ullr can probably be interpreted as “Glory.” Quite often ancient people migrated to a new region, adopted the language of the indigenous population, but stuck to their old spirits, demons, and protectors. This is what happened to the Greeks. Some names of their divinities are almost certainly “foreign” and have no Greek etymology, but Zeus “shining” is native. Both he and Thor started their career as sky gods. Although the myths that have come down to us are relatively late, they may contain hints of prehistory. Thus, Frey, originally a Scandinavian fertility god, means “Lord.” However, in the extant tales he is no longer a supreme deity, and only one episode shows him occupying a seat from which he can see the entire world.

The Greeks, Romans, and the Scandinavians made their gods anthropomorphic, that is, endowed them with human bodies and characteristics: they fall in love, commit adultery, enjoy one another’s humiliation, and in Greece actively interact with human beings (a situation uncharacteristic of Scandinavia). As a prelude to the Trojan War, three goddesses allowed Paris, a hero, to decide which of them was the most beautiful. Each tried to bribe Paris in a truly scandalous way. What was a mortal man’s opinion to them? Vying for the laurels (nay, the apple) of Miss Olympus! But of course, sacrificial plants, animals, and quite often humans brought to the altars of the gods were also bribes, and the recipients of the gifts watched jealously and decided whether they had received enough. The same might happen in a monotheistic religion; God looked on Abel’s offering with favor but was displeased with Cain’s.
As far as we can judge, at a very early stage of human existence, people believed that they were surrounded by numerous spirits that inflicted diseases, both physical and mental. They feared the deleterious agents and tried to propitiate them. Today we know very little about that stage of religion, but in Scandinavian myths and folklore, alongside Thor, Frey, and others, we encounter “multitudes”: elves, dwarfs, giants, and trolls. Some have individual features, while others merge with the crowd. Elves must have been quite evil and very different from those populating the British landscape, for the word elf is related to German Alp “nightmare” (more about them will be said in Chapter 2). The etymology of dwarf is uncertain, but I believe that its root is allied to Engl. dizzy (the details are too technical for the present context; those interested in them may consult my dictionary). The original dwarfs (dwarves, if you prefer) seem to have prevented people from making rational decisions. Troll is also a partly obscure word. If it is akin to Engl. droll, the ancestors of trolls made people stupid.
The important thing is that, most probably, dwarf was at one time a word of the neuter grammatical gender (it, rather than he or she); troll was certainly neuter. This grammatical detail sheds some light on the fact that the Germanic word for “god” was also neuter and that originally it occurred only in the plural. Is it then possible that in the beginning the supernatural beings called gods were not different from dwarfs, trolls, elves, and the rest? In a serial, every episode must stop when the viewers are clamoring for more. Wait until next week.
Image credits: (1) Baba Yaga by Viktor M. Vasnetsov (1917). Public domain via Wikimedia. (2) Artemis, drawing after interior tondo of an Attic red-figure kylix. MCAD Library. CC BY 2.0 via MCAD Library Flickr. (3) The Offerings of Cain and Abel by Jan van Eyck (1425). Public domain via WikiArt.

The public life of Charles Dickens
Regarded as one of the most influential authors of his time, Charles Dickens has won the hearts of many with his writing, creating surprising plots and characters we love to love (and love to hate). Our Oxford World’s Classics reading group, in its third season, has chosen Dickens’s Great Expectations for discussion. In addition to analyzing a work for its literary depth, it is just as important to consider an author’s life and the context in which the work was written.
Dickens’s life was not an easy one, nor was it ever boring. In the timeline presented below, we have included only a handful of dates, milestones, and events, which nonetheless shows that Dickens was, undoubtedly, one of the most interesting authors of his time.
Headline image: Dickens Characters by William Holbrook Beard. Public domain via Wikimedia Commons.

Neighbourhood leadership in the wake of the Baltimore riots
Having visited several American cities in recent weeks and talked to public servants, business leaders, community activists, and academics about current urban stresses and strains, I find it difficult not to conclude that these cities face deeply troubling challenges.
The riots in West Baltimore in April and May 2015 are only the most recent in a long line of outbreaks of urban violence suggesting that all is not well. On this occasion, the protests and mayhem erupted after Freddie Gray, an African-American man, died in police custody as a result of a spinal cord injury. These disturbances, mainly in the Sandtown-Winchester area of the city, led to violent confrontations with the police. Mayor Stephanie Rawlings-Blake declared a curfew and called in the Maryland National Guard. Governor Larry Hogan sent in 500 state troopers and 34 arrests were made. On 1 May 2015, the Baltimore City State’s Attorney Office filed charges against six police officers after a medical examiner ruled that Gray’s death was a homicide.
In some ways, recent events in Baltimore resemble the trajectory of events that took place in Ferguson, a small suburb of St. Louis, Missouri, in August last year. Michael Brown, a young African-American man, was shot dead by police on 9 August 2014. The killing sparked several days of protests that morphed into violence and looting. On 14 August, President Obama called for transparency in the investigations into the death of the unarmed black teenager.
When I visited St. Louis in April, I was honoured to share a platform with the Rev. Starsky Wilson. Last year, Missouri Governor Jay Nixon appointed him to co-chair the Ferguson Commission in order to study the underlying challenges facing the people of Ferguson, and to make policy recommendations to help the region address the causes of the urban riots.
On 22 April 2015, Rev. Wilson stressed three points in his presentation for ‘Leading the Inclusive City: Ideas for St Louis’ at the St. Louis Metropolitan Exchange: First, it is important to focus on strengthening neighbourhood leadership. Second, children matter and it is their experiences we should concentrate on. Third, equity in the city is critical, as wider forces are creating unequal cities and city regions, a trend that needs to be reversed. While the Commission is currently gathering evidence and has more work to do before it reaches any conclusions, Rev. Wilson’s initial comments are well considered.

Urban riots are not, of course, restricted to the United States. There have, for example, been two major outbreaks of civil unrest near where I live in Bristol. The first major disturbances, on 3 April 1980, heralded an English ‘summer of discontent.’ In 1981, urban unrest broke out in a number of inner city areas, including Brixton, Southall, Toxteth, and Moss Side. The second major urban riot in Bristol took place on 21 April 2011. In August that year, UK television screens were, once again, filled with disturbing pictures of disorder, looting and violence—this time not just in a string of big cities, but also in smaller towns like Croydon.
The precise causes of these episodes of social breakdown are complex and we should guard against generalising too freely. However, we can say with some confidence that the following factors appeared to play a part in sparking the outbreaks of civil unrest in Baltimore, Ferguson, and Bristol: insensitive, if not negligent, approaches to policing; racial discrimination and social deprivation; a lack of opportunities for young people; and a widespread sense of hopelessness.
But to understand the social tensions that underpin urban flare-ups, we have to dig deeper. As in many other US cities, the problems now confronting St. Louis stem, in large part, from government policies, not racial prejudice by individuals. Over a period of more than 70 years, urban zoning policies coupled with fragmented governance arrangements—there are over ninety municipalities in the St. Louis metropolis—have led to the creation of a highly segregated metropolis. The Federal Housing Administration (FHA), established during the New Deal era back in the 1930s, boosted the process of discrimination. It rendered black neighbourhoods ineligible for mortgage insurance and implemented, in effect, a suburban ‘whites-only’ policy right up until the mid-1960s.
During ‘Leading the Inclusive City: Ideas for St Louis’ at the St. Louis Metropolitan Exchange, I too outlined a number of themes alongside Rev. Wilson. In closing, I highlighted three points that are, perhaps, relevant to other cities seeking to avoid urban strife.
First, place-based leadership can make a major difference to the quality of life in an area. Encouraging and developing strong local government and socially aware, local leadership is critical. Second, effective civic leadership is multi-level. Those ‘at the top’ can set the tone but energetic local leadership is essential. In relation to Ferguson it seems clear that strategic action is needed at the state and metropolitan levels to re-balance opportunities in the city region. Improved public leadership is also needed at the municipal and neighbourhood levels. Third, international city-to-city learning can be very useful in challenging established assumptions and stimulating fresh thinking, including the creation of new policy options.
Communities can learn and recover from these tragedies when people are willing to step forward and seek to improve the places in which they live.
A version of this article originally appeared on the Policy Press blog.
Image Credit: “Operation Baltimore Rally” taken by the Maryland National Guard. CC BY-ND 2.0 via Flickr.

A fist-full of dollar bills
The next time you are slipping the valet a couple of folded dollar bills, take a good look at those George Washingtons. You might never see them again.
Every few years, there is a renewed push for the United States to replace the dollar bill with its shiny cousin, the one dollar coin. The move is typically supported by companies that mine the ore, manufacture the metal, and supply vending machines, which digest coins more easily than bills. It is opposed by the company that has supplied the Treasury with paper for the dollar bill for more than a century. Judging from Americans’ lack of appetite for dollar coins, most of the public couldn’t care less.
Agitation in favor of the dollar coins typically leads to government reports analyzing the costs and benefits of the switch, such as those issued by the Government Accountability Office in 2011 and 2012 and, more recently, by researchers at the Federal Reserve Board. These are followed by media reports on the costs and benefits of “killing the bill.”
And then the issue dies.
Until it resurfaces several years later.

The United States is clearly an outlier when it comes to the use of paper money. The one dollar bill, the smallest denomination US currency note, is worth far less than the lowest value bank note in any other advanced industrialized nation. The minimum denomination notes in circulation in Britain, Canada, the Euro-zone, Japan, and Switzerland are the equivalent of five to ten dollars, and countries that once upon a time had equivalents of the one dollar bill abandoned them years ago.
Coins are certainly more durable than paper, although the difference in longevity of bills and coins has shrunk considerably during the last two decades: in 1990, the dollar bill lasted, on average, less than two years; by 2012, its useful life was nearly seven years. That pales by comparison to the useful life of a dollar coin, which is measured in decades.
A cost-savings argument made in favor of the coin is the resulting “seigniorage” that accrues to the government. Because dollars are, in a slightly roundabout way, a debt of the US government—a debt on which it pays no interest—the more of them there are in circulation, the larger the government’s interest-free loan. Most analyses suggest that dollar coins are more likely than bills to end up sitting in the coin jar in your closet or among the loose change in your car until you need it to feed a parking meter or pay for a cup of coffee, meaning that they stay in circulation longer than one-dollar notes.
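To make the interest-free-loan point concrete, here is a minimal sketch of the arithmetic. The outstanding stock of currency and the interest rate below are hypothetical placeholders of my own, not figures from the article; the saving is simply the stock outstanding multiplied by the interest the government would otherwise pay to borrow that amount.

```python
# Hypothetical illustration of seigniorage as an interest-free loan.
# Neither figure below comes from the article; both are assumptions.

outstanding_dollars = 10_000_000_000  # assumed stock of $1 currency held by the public
avoided_interest_rate = 0.03          # assumed rate the Treasury would otherwise pay

annual_saving = outstanding_dollars * avoided_interest_rate
print(f"Implicit annual interest saving: ${annual_saving:,.0f}")  # $300,000,000
```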
A drawback of the dollar coin is that it lacks the many security features built into the dollar bill. Following the UK’s introduction of the pound coin in 1983, it was estimated that as many as 2.8 percent of pound coins in circulation were forged, undermining confidence in the currency. If dollar coins were counterfeited at the same rate as British one pound coins, more than half a billion fake dollar coins would make it into circulation every year. By contrast, the rate of bogus dollar bills is estimated to be less than one thousandth of one percent of the total in circulation.
Another, more potent drawback to the coin is that the public does not seem inclined to use it. The Susan B. Anthony dollar was first minted in 1979, but because its size, color, and shape are similar to those of the quarter, it never gained traction with the public. The Sacagawea dollar, first minted in 2000, similarly never became popular. Just ask the Federal Reserve—it holds a stockpile worth approximately $1.4 billion.
The prospects for the adoption of the dollar coin are pretty dim. Given public apathy about the coin, the substantial costs of transitioning from bills to coins, and current low interest rates, which give Uncle Sam access to ample cheap borrowing, there is no pressing reason to give the dollar coin serious consideration… until the next time the issue surfaces.
Featured image credit: United States one dollar bill, obverse. Public domain via Wikimedia Commons.

August 4, 2015
Celebrating 50 years of the Voting Rights Act
On 6 August 2015, the Voting Rights Act (VRA) will be turning 50 years old. In 1965, President Lyndon B. Johnson approved this groundbreaking legislation to eliminate discriminatory barriers to voting. The Civil Rights Movement played a notable role in pushing the VRA to become law. In honor of the law’s birthday, Oxford University Press has put together a quiz to test how much you know about its background, including a major factor in its success, Section 5. Delve deeper into its complex history in “Enforcing Section 5 of the Voting Rights Act” by Michael K. Fauntroy, available on Oxford Handbooks Online.
Image Credit: “President Lyndon B. Johnson, Martin Luther King, Jr., and Rosa Parks at the signing of the Voting Rights Act on August 6, 1965” by Yoichi Okamoto. Public Domain via Wikimedia Commons.

“Deflategate” and the “Father of Football”
The Wells Report besmirched the reputation of New England Patriots quarterback Tom Brady with its conclusion that the NFL ‘golden boy’ was likely aware that he was playing with under-inflated footballs in the 2015 AFC Championship Game against the Indianapolis Colts. If the report is to be believed, even Brady has stooped to less-than-savory methods to win a game of football.
There are a range of opinions about Brady’s innocence, offered by nearly every sports commentator and former football player. But more broadly, what does “Deflategate” reveal about football as a uniquely American game?
The scandal confirms just how technical a sport football is—technical not only because official rules specify exactly how much air should go into a football, but because even physicists have weighed in on how much more aerodynamic a football is when it deviates from the standard.
Other sports have adopted nit-picky guidelines too, but these rules aren’t as fussy as in football. A historical perspective of the game shows that this stringency is necessary, as players and coaches have pushed their limits to escalate physicality and create an edge. Even in the sport’s earliest days, before the twentieth century, when football was exclusively played on elite college grounds, rule-makers had to write a number of provisions, including a prohibition of metal on cleats, to create a sense of fairness and order. Whenever a rule became more stringent, football players became even “stricter constitutionalists,” contesting the boundaries of the language and essence of the rule book until even more precision had to be written into it. Fine, players agreed, no metal on cleats. Instead, they sharpened the tips of leather cleats so finely that the shoes had the effect of metal, at least until a more specific rule banned those as well. It’s no wonder that over one hundred years later, the NFL monitors the amount of air allowed in balls to the half-pound per square inch.

English bystanders of the early American game decided that this rule-stretching was tantamount to cheating, which went against the ways of true gentlemen. But Americans came to see it differently, thanks largely to a man named Walter Camp, once known throughout the country as the “Father of Football.” He was most responsible for turning English rugby into an American phenomenon, tweaking it significantly by convincing college football rule committees to create both a line of scrimmage (versus the English “scrum”) and a quarterback, and to locate legal tackles lower on the body. Envisioning a simulated experience of battle for college men, he saw his adjustments as the shots of virility rugby needed to make it optimal for American use.
Camp entered Yale University in 1876, just as classmates started to play variations of English rugby. His teammates were privileged, but also increasingly emasculated. Collectively, they had neither experienced the hardening lessons of war, nor the assurances of being successful breadwinners. Compared to their fathers, they would become more anonymous and less autonomous in a more volatile market economy—all as women and non-white, immigrant, and working-class men were making political headway in their previously undisputed dominion. Camp shaped the football gridiron to be one of the last bastions of elite college men in American life, not anticipating the extent to which non-white and working men would adopt his game and make it meaningful for themselves.
Like the NFL today, Camp’s football committees discovered that their original rules would have to be revised often—so often that Camp published refurbished rule books annually. Harvard University president Charles Eliot thought the changing rules proved that football was inherently flawed, otherwise the game would have been perfect the first time, but Camp insisted the opposite; football was necessarily malleable, he boasted, to cater to the nation’s changing cultural needs. Whereas English rugby players fell slave to undying tradition, Americans had no real sporting tradition to speak of. And thus while a leisured Englishman focused most on honor—on how he played his game—it was winning that mattered more for the American, eventually at any cost. Camp didn’t think this made American men ungentlemanly, only effective. America was a nation of winners, he exclaimed, virile ones whose destiny was to innovate in their pursuit of perfection in sports and everything else.
To Camp’s thinking, sharpening one’s cleats or deflating a ball was perfectly legitimate; until an explicit rule forbade it, all was fair in metaphorical war. When a rule was incontrovertibly broken, however, as might have been the case in Deflategate, Camp would have conceded that the offender ought to pay a price, but he also believed that pushing boundaries was a noble and natural tendency of enterprising men.
Today one could argue, as Eliot had over one hundred years ago, that the very size and convoluted nature of the NFL rule book is proof of football’s imperfection, but Camp saw this development as progress. True enough, many of the changing rules account for technological advancements—better helmets, for instance, or better training regimens that make defenders bigger and better tacklers than when Camp was alive. On the other hand, many rule changes have also been adopted to make the competition between sides more compelling. This narrative tension was Camp’s obsession. If he were alive today, he would likely see Tom Brady’s guilt as beside the point. A red-blooded American male in pursuit of perfection: this was the virility he hoped his game would cultivate better than anything else.
Image Credit: “Best Buddies Football Challenge, 5.29.15” by Charlie Baker. CC BY-NC-SA 2.0 via Flickr.

The Beatles, the Watts Riots, and America in transition, August 1965
Fifty years ago, during their North American tour, The Beatles played to the largest audience of their career against the backdrop of a nation shattering along economic, ethnic, and political lines. Although on the surface the events of August 1965 would seem unconnected, they nevertheless illustrate how the world was changing and how music reflected that chaotic cultural evolution. In the space of a few weeks, American society pivoted in ways that still echo today, and The Beatles provided part of the soundtrack.
Washington, D.C., Friday 6 August 1965. Under the gaze of Lincoln’s statue in the rotunda of the Capitol Building, President Lyndon Johnson signed the Voting Rights Act into law after years of protests led by Dr. Martin Luther King and others in response to state and local limitations on the constitutional rights of African Americans. When the US Congress—politically in shock from the Kennedy assassination and from the media’s coverage of events in the American south—passed the legislation, some thought America had solved its race problem. Of course, Congress had done nothing of the kind and had once again applied a bandage to a concussion, reintroducing America’s race tragedy to a new generation on the anniversary of the Civil War.
In 1965, the oldest baby boomers were just coming of age, beginning to develop a social consciousness, and questioning how adults saw the world. For many (but not all), the views of their parents felt incongruous with the world unfolding before them. Some questioned the privileges granted to them by the contexts of their birth, while others questioned why they had no privileges for the very same reasons.
Los Angeles, Wednesday 11 August 1965. Less than a week later, but after years of real estate practices that discriminated against non-whites, all it took was one spark on a hot afternoon to start a fire. The previous summer, Martha and the Vandellas had encouraged everyone to dance in the streets. Now the streets of Watts filled with smoke and debris as residents reacted to word that police had arrested a family after a traffic stop. For six days, America watched the evening news to see businesses and homes burning, leaving even Martin Luther King flummoxed by the violence. Four days later, soldiers patrolled the streets of south central Los Angeles.
Underlying these images were the long-established divides of class, ethnicity, and politics that had been buried during the war years, but that now surfaced under the gaze of television cameras. The economic disparities between white and non-white America provided the social seams that would burst in the summer heat. As it emerged, the Watts riots were the harbinger of the urban unrest that would engulf neighborhoods in Detroit, Chicago, and other American centers in the coming years. They had been preceded the previous year by riots in Harlem. Would the violence of Watts spread back to New York City?

New York City, Sunday 15 August 1965. A little after 9:00 p.m., The Beatles took the stage at Shea Stadium in Queens and performed a short set for about 55,600 mostly white middle-class female fans. The size of the audience set a world record and earned the band around $160,000, but the screaming rendered the concert nearly inaudible to many in attendance. The technology that would allow bands just a few years later to play such venues was still in development and The Beatles as a band would never benefit from it. Instead, they drilled through a set that they themselves could barely hear. Indeed, they were so unhappy with their performance that they would later visit a London soundstage to overdub parts for a television special.
George Harrison later commented that the world had used The Beatles as an excuse to go mad. The collective ecstasy surrounding Beatles (and other) performances has been the subject of much speculation, with explanations including women’s empowerment, adolescent psychological release, etc. Ultimately, we don’t really know, but perhaps metaphorically the roar was the sound of America fracturing. The economic and social disparities that underlay everyday reality had found articulation in John Lennon’s plea for “Help!”
Los Angeles, Tuesday 17 August 1965. With the streets of Watts now quiet, the state lifted the curfew and the clean up began. The contrasts and parallels between the crowd frenzy in Queens and the frustrated rioters in Watts could not be more dramatic and telling: the white middle class reveled as John Lennon cried that his independence from the world had “vanished in the haze,” while the neighbors of South Central seethed behind their doors as military trucks rolled past. A little under two weeks later, The Beatles would be in Los Angeles, playing at the Hollywood Bowl, sixteen miles and significant money away from Watts. The wealthiest patrons occupied the boxes at the front, while general admission filled the remainder of the hillside. The concert might as well have been in a different country, let alone the same county.
The Beatles had burst onto the American scene in early 1964, winning affection with apparent innocence, candor, self-deprecating humor, youth, and energy. In the American context of racial tensions, what could be more reassuringly white than four Englishmen with little-boy haircuts playing rockabilly interpretations of African-American music? When they weren’t covering or imitating Motown tunes (e.g., “You Really Got a Hold on Me”), they were singing harmonies they had learned from recordings by the Shirelles and others (e.g., “Baby It’s You”).
As successful as that image might have been, The Beatles too were changing along with the first wave of baby boomers starting college and/or families. The Shea concert probably marked the highpoint of Beatlemania. The band had already abandoned the innocent mop-top image crafted by their manager; the world was changing and they knew it.
Vietnam, 18 August 1965. A decade or more of white migration from cities to suburbs had revealed that the inequalities addressed by the Voting Rights Act were not limited to Mississippi and Alabama. While Herman’s Hermits and soon The Monkees would provide diversions for white teens, this comfort zone was about to be breached. Within days of the Shea Stadium concert and the end of the Watts riots, Operation Starlite would mark the first ground offensive by the US military in Vietnam and the beginnings of a war that would draft the leading edge of mostly the poorest boomers, both black and white.
Paul McCartney’s “Yesterday” would ring truer than he could have imagined.

The role of cross-examination in international arbitration
Knowing when and how to cross-examine is an essential part of properly representing clients in international arbitrations. Many cases have been won by good cross-examinations and lost by bad cross-examinations, and that is just as true in international arbitrations as it is in any other dispute resolution procedure in which counsel are permitted to cross-examine witnesses.
But cross-examination by counsel in an international arbitration differs in highly significant ways from cross-examination by counsel in a domestic arbitration or court trial. That is obviously true when counsel is a lawyer from a legal tradition in which counsel rarely, if ever, cross-examine at all–such as a lawyer from some of the civil law countries or from China or Russia. Because of the many important differences between an international arbitration and an Anglo-American domestic arbitration or court trial, however, it is also true even when counsel is a lawyer from the Anglo-American legal tradition in which counsel routinely cross-examine witnesses. No matter what legal tradition counsel may come from, many of the assumptions on which counsel would proceed in a domestic arbitration or court trial are simply inappropriate in an international arbitration.
To begin with, an international arbitration is not tried before a decision maker from the same country as counsel. The decision makers in an international arbitration are the arbitrators, who will usually be three highly experienced lawyers, often from three different legal and cultural traditions. Counsel may come from one or more other legal and cultural traditions. The result is that five, or sometimes even more, legal and cultural traditions may be present in a single arbitration proceeding. Those who are present may also have no mother tongue in common, so that the arbitration will be conducted in a language–often English–that none of them learned at home and in which they may not be equally competent. The arbitration proceeding will also differ, and almost certainly in different ways for each of them, from what they are accustomed to at home.

The arbitrators will usually have a detailed understanding before the main hearing starts of the issues they are to decide, quite possibly a far more detailed understanding than a domestic judge would have at the start of a domestic court trial. This is because the parties will have exchanged with each other, and submitted to the arbitrators, very extensive evidentiary material (including written witness statements) during the previous phases of the arbitration, and the arbitrators will usually have studied this material with considerable care. The arbitrators will evaluate the cross-examinations at the main hearing against the background of their detailed understanding of the issues they are to decide, and that will impact counsel’s freedom of action in cross-examining.
It is not only the decision maker that will be very different from what counsel would encounter in a domestic arbitration or court trial. Almost every aspect of the arbitration, including the main hearing, will be governed by the arbitration agreement between the parties, although the law of the country where the arbitration takes place will also play a role. The rules of procedure and of evidence are thus likely to be very different from what counsel would expect in a domestic arbitration or court trial, and because the grounds for challenging an arbitration award are very limited counsel will almost certainly have less control over how rules of law are applied than would be the case in a domestic court.
No national rules of evidence will apply in an international arbitration unless the parties have agreed otherwise, and the arbitrators will determine what evidence to admit based on whatever rules the parties have agreed to make applicable. Arbitrators are likely to be considerably more liberal in admitting evidence than a national court would be and thus have both the right and the duty to evaluate evidence very freely. The absence of national rules of evidence also means that some conduct which is customary, or even obligatory, in some national systems–the practice of “putting” a contention to a witness, for example–has no place in international arbitrations.
There will be considerable pressure on counsel, far more than in an Anglo-American court trial, to make their cross-examinations succinct and efficient. There are at least two reasons for this. One is that the arbitrators are likely to have a detailed understanding of the case. The other is that the witnesses will usually have given their direct testimony by means of written witness statements that counsel will have seen in advance of the hearing. Lengthy cross-examinations of the type often encountered in Anglo-American court trials are very rare in international arbitration, and an attempt to conduct that kind of cross-examination will often be seriously counter-productive. In addition, the chance that counsel will be able to surprise witnesses by confronting them with unexpected documents or prior inconsistent statements is far smaller than it would be in an Anglo-American court trial because the extensive exchange of evidentiary material in advance of the main hearing will usually enable adversary counsel to resolve any such issues in the course of preparing witnesses to testify.
Despite these and many other differences between international arbitrations and domestic dispute resolution procedures, the goal of the cross-examiner in an international arbitration is the same as in any other dispute resolution procedure. That goal is to maintain control of the witness so as to accomplish the purposes of the cross-examination without allowing the witness to say something the cross-examiner does not want the decision maker to hear. Our book is intended to be useful as an aid to achieving that goal, no matter what legal and cultural tradition counsel may come from and no matter how experienced or inexperienced counsel may be.

August 3, 2015
Medicare and end-of-life medical care
Medicare recently announced that it will pay for end-of-life counseling as a legitimate medical service. This announcement provoked little controversy. Several groups, including the National Right to Life Committee, expressed concern that such counseling could coerce elderly individuals to terminate medical treatment they want. However, Medicare’s statement was largely treated as uncontroversial—indeed, almost routine in nature.
In contrast, it was only six years ago that claims about “death panels” were central to the debate about Obamacare.
What has happened between then and now? I suggest that, over the last half decade, Americans have become better informed about the realities of end-of-life medical care. Much of this care is expensive and futile. Families often wish in retrospect that they had not put their loved one through uncomfortable and unsuccessful end-of-life treatment. A significant chunk of the public debt we are leaving our children and grandchildren is attributable to unsuccessful end-of-life medical treatment. Such treatment, often involving expensive technology, is a major reason that health care costs are higher in the United States than in other developed nations.
In important respects, the issue is not new. A century ago, in one of his celebrated decisions on New York’s Court of Appeals, Schloendorff v. The Society of New York Hospital, Benjamin N. Cardozo affirmed the fundamental right of individuals to decline medical treatment they do not want:
Every human being of adult years and sound mind has a right to determine what shall be done with his own body; and a surgeon who performs an operation without his patient’s consent commits an assault, for which he is liable in damages.
In other respects, however, our current situation is, for two reasons, unprecedented: The size of the aging baby boom cohort and the expensive, high-tech way we die today. The net result of these two forces is that Medicare and other US retiree medical programs today spend enormous sums on unsuccessful end-of-life medical care and confront the prospects of even more such expense in the future.
Since Cardozo’s time, the law has developed a variety of instruments designed to implement individuals’ desires about end-of-life medical care. Such instruments are variously denoted as health care instructions and living wills, and have become ubiquitous.
However, for two reasons, declining end-of-life medical treatment remains a problematic task for elderly individuals and their families. First, physicians are often reluctant to say that there is no hope for a dying patient. Cynics suggest that this reluctance stems from the fee-for-service method of paying doctors. Under that method, doctors aren’t paid when they don’t perform services. I suggest that there is a deeper phenomenon: Physicians, like all of us, believe in what they do and have a natural reluctance to admit that their healing skills have been unsuccessful.
A second problem is that, even when the argument for terminating treatment is intellectually compelling, it is emotionally wrenching to withhold such treatment. As a result, much expensive end-of-life treatment proceeds because that is emotionally easier than withholding treatment. Only in retrospect does the family recognize that they futilely put their beloved through an unnecessarily unpleasant death.
But beyond these considerations is a harsher truth. Even if families and physicians sincerely want to continue treatment, it is often not in society’s interest to proceed with expensive treatment with little chance of success. Rarely are the patient and his family spending their own resources on such treatment. More typically, the taxpayer or other health insurance premium payers are financing this care.
Medicare is right to encourage discussion among patients and their doctors about end-of-life care. Opponents of Medicare’s decision are right to caution that counseling should not become coercion. However, at the end of the day, the United States needs to control the costs of end-of-life medical care. We cannot afford unlimited end-of-life medical care. Not controlling such care will saddle our children and grandchildren with the high costs of much futile end-of-life care for the aging baby boom cohort. Our children and grandchildren deserve better.
Image Credit: “Senior Citizen with Doctor” by Andy De. CC BY NC 2.0 via Flickr.

