Oxford University Press's Blog
May 19, 2016
How to be good
‘How to be good?’ is the pre-eminent question for ethics, although one that philosophers and ethicists seldom address head on. It was the question Plato posed in a slightly different form in The Republic when he said, “We are discussing no trivial subject, but how a man should live.” Marcus Aurelius thought he knew the answer when he stated unequivocally in his Meditations, “A King’s lot: to do good and be damned.” He was himself a king and ruled almost all of the world that was known to him. He could with impunity both do good and be damned. Edward Gibbon famously remarked that “If a man were called to fix the period in the history of the world during which the human race was most happy and prosperous he would, without hesitation, name that which elapsed from the death of Domitian to the accession of Commodus.” Marcus Aurelius, the father of Commodus, ruled for the last 19 years of this period.
Recently, philosophers and scientists have tried to identify how to make the world better by making people more likely to do good rather than evil. Many have proposed ways of changing humankind by chemical or molecular means so that people literally cannot do bad things, or are much less likely to do so; in other words, by limiting or eradicating their freedom to do bad things.
This same problem has also faced those interested in artificial intelligence (AI). If we create beings as smart as or smarter than us, how can we limit their power to deliberately eliminate us, or simply to act in ways that will have this result? How can we ensure that they act for the best? Many people have thought that this problem can be solved by programming them to obey some version of Isaac Asimov’s so-called “laws” of robotics, particularly the first law: “a robot may not injure a human being, or, through inaction, allow a human being to come to harm.” The problem, of course, is how the robot would know whether its actions or omissions would cause danger to humans, or, for that matter, to other self-conscious AIs. Consider that ethical dilemmas often involve choosing between greater or lesser harms or evils, rather than avoiding harm altogether, allowing or causing some to come to grief for the sake of saving others.

How would a human being who, for example, had been rendered unable to act violently towards other people, or in ways that caused pain, defend herself or others against murderous attack? How would an AI programmed according to Asimov’s laws do likewise?
John Milton knew the answer. In Paradise Lost, Milton reports God as reminding humankind that if we want to be good, to be “just and right,” then we need autonomy: “I made him (mankind) just and right, sufficient to have stood, though free to fall.”
This dilemma, felt no less keenly by God than by the rest of us, of how to combine the capacity for good with the freedom to choose, now faces those trying to develop moral bio-enhancers and those working on the new generation of smart machines. This is what Stephen Hawking meant when he told the BBC in 2014 that “the primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” How could full AI, which would enable the machine which (who?) possessed it to determine its own destiny as we do, be persuaded to choose modes of flourishing compatible with those of humans? Of course, we currently have these problems with respect to one another, but at least we have not yet shackled our capacity to cope with them by foreclosing, through moral bio-enhancement, some of our options for self-defence.
In the future there will be no more “men” in Plato’s sense, no more human beings therefore, and no more planet Earth. No more human beings because we will either have wiped one another out by our own foolishness or by our ecological recklessness; and no more planet Earth because we know that ultimately our planet will die, and any surviving people or AIs along with it.
Initial scientific predictions on the survival of our planet suggested we might have 7.6 billion years to go before Earth gives up on us. Recently, Stephen Hawking said, “I don’t think we will survive another thousand years without escaping beyond our fragile planet.”
To be sure, we need to make ourselves smarter and more resilient. We may need to call on AI to help us achieve this, if we are to be able to find another planet on which to live when this one is tired of us, or even perhaps to develop the technology to construct another planet. To do so, we will have to change, but not in ways that risk our capacities to choose both how to live and the sorts of lives we wish to lead.
As Giuseppe di Lampedusa had Tancredi say in The Leopard, “If we want things to stay as they are, things will have to change”… and that goes for people also!
Featured image credit: Synapse, by Manel Torralba. CC-BY-2.0 via Flickr.

May 18, 2016
The Hamilton musical and historical unknowns
With a record-breaking sixteen Tony Award nominations for his hit musical Hamilton, Lin-Manuel Miranda will soon have to clear some space on his trophy shelf next to his Grammy and Pulitzer. But there is something remarkable about the play that all the critical acclaim has missed entirely. Reviewers have rightfully celebrated Miranda for telling the life story of one of America’s greatest Founders using energetic numbers, a multiethnic cast, and a strong emphasis on hip-hop. Yet Miranda has not received due credit for an important and distinguishing characteristic of his musical: his unique approach to what is unknowable about the past. His script makes it clear that there are significant gaps in the historical record concerning Hamilton and his times. Of course, the play takes liberties, as any historical play must–and has a great deal of fun in the process. But in numerous places, Miranda shows us the limits of what we can know about the past.
In so doing, Miranda captures something central to the experience of every historian. The actual record of the past is riddled with conspicuous silences. Digging through an archival collection, we may find a reply to a letter, but the original letter to which it is responding has been lost. We are left to wonder what was in it, and about the sensibility of the shadowy correspondent on the other side. Multiply such conundrums by a factor of several thousand and you have the working reality of the historian.
The playwright has a different task. While the historian has an obligation to be candid with the reader about what the record fails to disclose, artists are under no such restrictions. In fact, they run a risk in observing them. No audience longs for two hours of scrupulous qualifications. Instead, audience members want to watch and hear a compelling story that transports their imaginations, something a talented playwright will provide, even if it means making whole the residual fragments of a bygone age.
Yet Miranda often chooses to face the unknowable head-on. One vivid example comes to us in a number called “The Room Where it Happens,” performed by the magnetic Leslie Odom, Jr. in the role of Aaron Burr. This scene pertains to a mysterious dinner in June of 1790 in which Alexander Hamilton, a New Yorker, brokered a historic deal with his Southern rivals, Thomas Jefferson and James Madison. Hamilton agreed to support moving the nation’s capital from New York to Philadelphia and ultimately to the Potomac. In turn, Madison and Jefferson would relent in their opposition to Hamilton’s plan for the federal government to assume states’ debts. We still live with the consequences of that dinner today, yet how exactly these three statesmen crafted the bargain remains unclear. Miranda opts not to dream up the scene behind closed doors. Instead, he acknowledges the unknowability of the event. Its very obscurity is a source of intrigue for Miranda. He cleverly uses our lack of knowledge to showcase Aaron Burr’s desire to be a major player in American politics. Burr sings with jealousy, “No one really knows how the game is played/The art of the trade/How the sausage gets made/We just assume that it happens/But no one else is in/The room where it happens.” The number ends with Burr resolving, “I’ve got to be/In that room/In that big ol’ room.”
Miranda’s depiction of the Reynolds affair further illustrates his engagement with historical silence. In 1797, Hamilton was cornered into publicly confessing to an extramarital tryst. The historical record tells us nothing of the reaction of Hamilton’s fiercely loyal and pious wife, Eliza. Played with verve by Phillipa Soo, Eliza burns her letters while singing, “I’m erasing myself from the narrative/ Let future historians wonder how Eliza/ Reacted when you broke her heart.” She adds defiantly, “They don’t get to know what I said/I’m burning the memories/Burning the letters that might have redeemed you.” Historians know that sometimes silences in the archives are telling. A letter may not be merely missing but destroyed. And so we try to derive meaning, as Miranda does here, not only from existing evidence but also from its absence.
Miranda’s humble approach to the unknowable reflects the seriousness with which he takes history and surely grows out of his collaboration with the scholar Ron Chernow, a Pulitzer Prize winner in his own right whose thick biography on Hamilton inspired the musical. As Miranda’s hit expands its reach through technology and tours, it reminds us that many people learn about history from art. If a playwright’s role in historical drama is partly to entertain, then it is also partly to educate. Miranda has done much to teach his audience an important lesson about the limits of historical knowledge. Much like the subject of his play, he offers a model for how others might make use of the spotlight.
Featured image credit: Center from left: Anthony Ramos, Lin-Manuel Miranda, Daveed Diggs and Okieriete Onaodowan in the musical “Hamilton” at the Richard Rodgers Theater. Sara Krulwich/The New York Times.

Not a dog’s chance, or one more impenetrable etymology
By this time, the thrust of the posts united by the title “Not a dog’s chance” must be clear. While dealing with some animal names, we plod through a swamp (or a bog, or a quagmire) and run into numerous monosyllabic words of varying structure (both vowels and consonants alternate in them), lacking a clear etymology, and designating several creatures, sometimes having nothing to do with one another (for instance, “doe” and “grasshopper,” though this is an extreme case). In a search for their derivation, we encounter calls to animals, their cries, color, specific qualities, taboo words, and so forth. As a final approach to the etymology of dog, the story of the noun cub may also sound instructive.
Once again it is the context (this time the surroundings of cub) that counts. Next to cub, we find cob and cop, displaying a variety of senses. Some refer to animals, some to round and lumpy objects, and others to the head. About two dozen words of the k-b/p structure in Germanic mean “cap,” “cup,” and “cop” (that is, “head”). They have look-alikes in Romance and in non-Indo-European languages, and final b tends to replace p in them for no obvious reason. We watched the same while dealing with tyke and bitch, but there the pool contained relatively few items, while here one faces an endless list. The arguments that (Old) French bisse ~ biche “doe, hind” is not related to bitch are probably convincing, but the word ended up in just the environment it needed: numerous similar nouns have the same shape, and it is no wonder that old etymologists tried to connect biche and bitch. As regards the cob ~ cop ~ cub ~ cup group, we probably have migratory words that influenced one another’s sound shape and meaning.

Cub surfaced in texts in the sixteenth century. Its only close cognate is Low (= northern) German kübbelken “the weakest nestling.” For more than two centuries, dictionaries, including Webster’s, repeated Minsheu’s idea (1617) that cub is related to Latin cubo “to lie, repose” (as in incubare). This is of course nonsense, even if we substitute “borrowed from” for “related to,” but Minsheu’s explanation deserves being quoted for curiosity’s sake: the cub “lies in his hole, and goeth not for prey as the Reynard, or old Fox doeth.” (With regard to old Fox, see the recent series.) Another etymology of cub, though also unacceptable, makes more sense. Its author was Noah Webster, who suggested (in 1828) that the English word is perhaps allied to Irish caobh “branch, shoot.”

Webster operated with a great number of languages, but, like everybody else in the English-speaking world, he looked away from Germany and missed the birth of what we today rather pompously call scientific etymology. In a way, it was scientific, except that it solved fewer problems than it hoped to solve. But such is the fate of all pioneering projects. Be that as it may, Webster’s derivations rarely present interest today. Yet the Celtic origin of cub is given as a possibility not only in the first edition of Skeat (the reference is to Modern Irish cuib “cub, whelp, dog”), but in some of the best modern dictionaries, though, wisely, not in the OED! The Old Irish word was extremely rare and cannot be considered as the source of cub. We’ll see that, paradoxically, nothing can be considered as its source.
Rather suggestive is the comparison of cub with Old Icelandic kobbi “young seal” and kubbi “block of wood.” As could be expected, some people did derive Engl. cob and cub from kobbi ~ kubbi, but cob has so many almost incompatible senses that no single etymon will probably do for all of them. Kobbi coexists with a doublet kópr, and we end up none the wiser. The farther we search, the more animal kob- ~ kop- words we find, all of them with affective geminates (that is, long “hypocoristic” consonants), unpredictable vowels (whose random use has been wittily called false ablaut), and—the most important result of all—of unknown origin. Compare two almost random examples: Swabian kōb “old nag” and Russian kobyla “mare” (stress on the second syllable; –yla looks like a suffix). Similar monosyllabic words (for instance, mok-, lob-, lop-), alongside tik– and bik-, turn up everywhere. They have typically been recorded late and seem to be of dialectal provenance, which is natural, for people in close contact with animals have always been peasants. Such words are not tied to any one sense and tend to refer to something small or soft. Some vague original sense like “lump” is possible.
From the historical perspective, cob “animal name” is indistinguishable from cub. The same monosyllabic words often designate young and useless (old!) animals. For example, we find German dialectal kippe ~ kibbe “ewe,” Danish kippe “small calf,” Swedish dialectal kebb, kubbe, etc. “calf,” Dutch dialectal kabbe ~kibbe “a small pig,” Scots keb ~ kebber “refuse sheep taken out of the flock,” and a host of others. One can wander forever among the words for “block of wood,” “fat person,” chips, chaps, and chops and find more of the same. That is why attempts to discover one certain source of Engl. cub are doomed to failure. In this forest of primitive creation, everything looks like something else. Even more futile is the hope to reconstruct a solid Proto-Germanic or Indo-European root for a word like cub. Nothing will change if we agree that Latin gibbus “lump” or Old Icelandic keipr “rowlock” is (“distantly, obscurely”) related to cob or cub. Relatedness in this sphere is a fiction.
Even the age of cub is indeterminate. Our records of cub do not go beyond the sixteenth century, but nothing follows from this fact. Perhaps cub was indeed coined or borrowed around 1600, but it could have existed for a long time before making its way into a text. If it was borrowed, we should try to discover its origin in German, Dutch, or Scandinavian, for it is not only English that concerns us, and the story begins anew. However, I don’t think that “origin unknown” is what dictionaries should say about cub. It would be more useful to explain why the exact origin of this word cannot be determined and what we do know about it, since it appears that we know not so little about its history or at least about its environment.

As a postscript, I would like to mention a word few people know, namely Old Icelandic kofan ~ kofarn “lapdog.” The word is remembered because a Danish chronicle tells a story about how Hagan, King of Sweden, sent a dog to rule the Danes. In the past, the word was sometimes dismissed as being of unknown origin. But we notice Modern Icelandic kofa “a young bird of the loon family” and the already cited Old Icelandic kobbi “seal.” Here then is the name of a dog containing a familiar root. If the original form was kofan, the word was wonderfully apt: it had the root signifying a puppy and the honorific suffix used in the words designating kings and aristocrats. A cub-king could not have had a more apt name.
With a long discussion of tyke, bitch, and cub behind us, we are ready to attack dog in the true dog-eat-dog spirit. The end of the series may strike some as an anticlimax, but, if we agree on at least something concerning the etymology of the truly impenetrable word dog, we will score an important victory. If, however, the attack is repelled, we’ll retreat and lick our wounds. Etymological wounds heal easily.
Image credits: (1) Piglet by Alexas_Fotos, Public Domain via Pixabay (2) Calf by Pezibear, Public Domain via Pixabay (3) Baby Harbor Seal by Ed Bierman, CC BY 2.0 via Flickr (4) Grey Fox Kits by skeeze, Public Domain via Pixabay (5) Dog Crown by Anja Kiefer, Public Domain via Pixabay.
Featured Image: “Lion cubs” by Matt Biddulph, CC BY-SA 2.0 via Flickr.

Can nineteenth-century literature explain the rise of Donald Trump?
Historians and political scientists have quite the task ahead in making sense of the bizarre 2016 presidential race. Fissures in both major parties betray pervasive hostilities. The rise of Donald Trump from investment mogul to television personality to presidential candidate—a process that once horrified GOP insiders—has produced one kind of theater: the spectacle of anger and resentment. During primary debates, Trump dismissed his opponents as “losers.” He has exploited frustrations over immigration, multiculturalism, religious pluralism, and terrorism. He wants a Judeo-Christian America to “win,” that is, to become dominant by imposing its military will and its sweeping view of “national interests” on the rest of the world.
Hillary Clinton, on the other hand, has had to fend off radical populist Bernie Sanders, who wants to redistribute the nation’s wealth and cut mega-banks down to size. Clinton has called this economic “extremism,” aligning herself with Wall Street even as she decries poverty. Her service as senator and Secretary of State brands her as a candidate of the establishment, forced to defend the mixed legacy of a president much disliked in historically red states. She intends to become the first female president but lacks broad support from women. Decisions made during her service as Secretary of State remain controversial; her vexed marriage to a tarnished former president still casts shadows.
In Trump and Clinton, we have two outsized candidates despised by large segments of the body politic. Will the two-party system endure? Which candidate is less odious? What does this weirdness signify?
So much outrage pervades this shining republic: we see that anger in recurrent episodes of random mass violence—troubled individuals with automatic weapons “getting even” with unknown victims. This social horror seems unique to the United States, but we seldom ask why. We see widespread anger also in the proliferation of Confederate Flags across the North, Midwest, and West, signifying not Southern pride but white supremacy and scorn for the federal government. We see indignation in the “Black Lives Matter” movement and the volatile, bloody confrontations between law enforcement officers and young African Americans who feel stigmatized and scapegoated. We see it in the fury provoked by the attack on the World Trade Towers—the seething anger toward jihadi Muslims that has long threatened to erupt into general hatred for all followers of Mohammed. A mindless bumper sticker—“I learned everything I need to know about Islam on 9/11”—says it all.
It turns out that the stories and myths essential to a heroic national self-concept emerged in the nineteenth century, when much of our identity as a nation was being formed, in the midst of horrific injustices and cultural conflicts that subverted a sanitized idea of “America.” So the gleaming identity we devised for ourselves then masked a nexus of egregious problems, like slavery, racism, Indian removal, nativism, and—beginning around 1839—the ethnocentric delusion that Anglo-Saxon Protestant Americans were destined to conquer the continent by seizing land owned by Hispanic and Catholic Mexico.

But as Freud observed, the repressed always return, whether in the individual or the political unconscious. These crimes—and every nation commits them—disturbed the smooth dissemination of national narratives. Tales and novels by Washington Irving, James Fenimore Cooper, Nathaniel Hawthorne, Catharine Sedgwick, and others help us to see both the construction of national identity and the inconvenient, sometimes inadvertent, eruption of jarring social or historical realities. Counter-narratives by fugitive slaves, abolitionists, reformers, indignant Indians, and outspoken women simultaneously pushed against official versions of America’s story. And during this messy invention of a national idea, Edgar Allan Poe mocked literary nationalism and ridiculed the expansionist jingoism it had fomented by the mid-1840s. His eye for the grotesque enabled him to see the cultural strangeness that American nationalism concealed.
All nations are strange in their own ways, creating glowing public images of imagined, idealized communities quite at odds with documentable evidence. This all seems easier to observe in the hyper-nationalism of American rivals; think of the heartwarming national symbolism created for the Russian Winter Olympics in 2014, just before the invasion of the Crimea. As Ernest Gellner remarked, “Nationalism is not what it seems, and above all it is not what it seems to itself.”
That insight has propelled my enquiry into American nationalism, not as an exercise in liberal guilt but as an effort to differentiate nationalism (nowadays indistinguishable from jingoism) from patriotism. Nationalism incites aggression through its appeal to birth, blood, and consanguinity; patriotism excites defensive pride, the urge to protect the land as well as its founding principles. American nationhood has from the outset been complicated by these conflicting notions of belonging—one tied to race, ethnicity, and righteous Puritan notions of founding a city on a hill, the other rooted in civic allegiance, love of liberty, and defense of inalienable rights. The clash between these inimical concepts erupts—as it has in election-year warnings about immigration—whenever we ask who really “belongs” to the nation.
In 1776, the Declaration’s founding contradiction between liberty and slavery also left, as Robert Kagan has remarked, a “split personality” rooted in racial difference that bedevils our national life to this day. Think Ferguson. The terrible “holocaust” that nearly exterminated the Native tribes symbolically associated with America remains an unrequited atrocity too shameful to be acknowledged. Meanwhile the threat to build a wall across the Southwest recalls the still-unfolding consequences of the U.S.-Mexico War. These are some of the more obvious sources of the latter-day American anger that we attempt to exorcise by identifying enemies and lashing out. Perhaps not until we confront our strangeness as a nation, our unresolved contradictions, will these self-destructive patterns cease to hold us in thrall.
Featured image: Donald Trump declares his loyalty to the Republican Party in a speech on 3 September 2015 at Trump Tower, CC BY-SA 4.0 via Wikimedia Commons.

Globalization in India
As an academic trained in geography, I have for a long time been looking to develop a coherent understanding of the process that is popularly known as ‘globalization’. The term has acquired steady popularity from the 1980s onwards and has become as often used (in academic literature, journalistic literature, and lay conversation) as terms like ‘modernity’ and ‘nationalism’. For many, it means that the goods we consume today in India are made in some other part of the globe; that trends in music, fashion, and food travel quickly around the world, leading to a kind of standardization and a loss of uniqueness to the onslaught of “McDonaldization” or “Coca-Colonization.”
For many others, globalization has dangerous repercussions in terms of the entry of foreign direct investment and foreign corporations into national markets, eroding and eradicating indigenous business—think, for example, of the street protests among small traders in Delhi against the entry of the retail giant Walmart into India. My frustration with globalization is that the narratives I discovered were too fragmented. Those that spoke about cultural globalization, the erosion of local uniqueness, and the spread of ‘foreign tastes’ often ignored the economic dimensions behind, say, ‘the Burger colonization’ of the world. Those, on the other hand, that talk about foreign investment, mergers and takeovers of national businesses, and World Bank- and International Monetary Fund-generated structural adjustment have little to say about cultural change. Both kinds of narrative tend to provide a bird’s-eye view without the satisfaction of a nuanced, grounded, place-based perspective on the culture-economy dialectic of globalization.
As I reflected on these omissions, I had the chance to visit the Akshardham temples in Delhi and Gandhinagar in Gujarat. The spectacle left me dizzy—here was a locally grounded version of globalization brought to fruition in spotless alabaster, occupying hundreds of acres of beautifully manicured gardens and reflecting pools. Inside the temple, one moves from the Himalayan-themed cyber-optic landscape of the Mansarovar lake and the meditating Lord Swaminarayan to the plastic jungles of Cherrapunji, replete with fake moss, plastic snakes, and electronic owls that rotate their heads a full 360 degrees. Then the Vedic boat ride takes you through the simulated villages of Vedic India, which are actually spectacular dioramas with little historical accuracy. One realizes that the audience has been inducted into a “Vegasesque” spectacle where religion and fantasy mingle. The temple landscapes are themed meticulously on the model of Disney World, only here Mickey Mouse has been replaced with the gods of the Hindu pantheon.

The audience files from one themed environment to another like gleeful children who can’t wait for the next ride. The production of this spectacle is a million-dollar project of bringing globalization home through the meticulous study of Disney parks all over the globe. It is simultaneously an economic undertaking, pooling donations, NRI contributions, and Western engineering, and a cultural concoction, commodifying religion and spiritualism. Globalization, I argue, does not consist of neatly sliced segments of economic change in the marketplace, insulated from cultural fluxes elsewhere. Globalizing realities are best understood through urban spaces like the Akshardham temple complexes in India, where economy and culture, religion and fantasy, spiritualism and commodification bleed into one another in the production of contemporary spectacles.
This visit piqued my interest, and I took trips to the Akshardham temple complexes in Houston, Irving-Dallas, and Atlanta to understand how globalization is being grounded in the global North. Here the temple complexes are almost exact replicas of their Indian counterparts, though smaller. But the cultural-economic narrative they wish to weave is completely different. In the US, where commodification, consumption, and alienation are rampant, and where the cities and their skyscrapers are the very epitome of the spectacle of capitalism, the Akshardham temples steer clear of any form of Disneyization. The temple complexes are not themed; there are no Vedic boat rides or sound-and-light shows here. Instead, they boast museums that weave narratives of migrant patriotism, displayed carefully through a selective rendering of history. Only Hindu freedom fighters, poets, and literary geniuses are carefully chosen and displayed, to evoke the greatness and hoariness of a primarily Hindu India.
Because the temple complexes lack the patina of age, the museums seem almost to make up for their newness by evoking the ancientness of Indian history, though a very carefully selected Hindu version of it. The Akshardham temple complexes also house schools and language-teaching centers for migrant children, double up as cultural and social networking sites for women, rent out amenities like community halls, and boast food courts that cook Gujarati-Indian samosas, dhoklas, and sweets according to the specifications of the Swaminarayan tradition. Although the temples are spectacular in their Italian alabaster facades, they attempt to ground the migrant Indian’s angst with globalization in more mundane ways than the spectacular theme-park narratives of their Indian counterparts. Here nationalism, multiculturalism, patriotism, and narratives of race and gender get negotiated through the everyday spaces of classrooms where boys and girls are taught separately. Here the racial and cultural ‘others’, like Hispanic, Black, and White Americans, are welcomed into the complex, but often viewed as spiritually vacuous ‘others’ who need the healing touch of Indian spiritualism. The temples in the US also become moral and cultural points for passing down the essentials of tradition that are swiftly being washed away by globalization. Second-generation migrant youth are expected to get their bearings corrected through the sacred spaces of the temple’s everyday life, and not be deflected into drinking or pre-marital sex, for instance.
This collage of temple narratives from India and the US enabled me to transcend my frustrations with the globalization literature. Here I found tangible sites where culture and economy were deeply inflected in chalking out the complex tropes of globalization. These tropes were a picturesque entry point into the troubled textures of commodification, alienation, and narratives of race, gender, and nationalism.
Featured image credit: McSpicy Paneer Passion, by Divya Thakur. CC-BY-SA-2.0 via Flickr.

May 17, 2016
Your brain on the scientific method
Coffee is good for you. Coffee is bad for you. Broccoli prevents cancer. Broccoli causes cancer. We are all familiar with the sense that we are constantly being pulled in a million different directions by scientific studies that seem to contradict each other every single day. When trying to make decisions about whether or not to drink coffee, we might be bombarded with equal amounts of data on both sides, half of the articles proclaiming that coffee is a miracle cure and half of the articles proclaiming that it is a death sentence.
Recently, John Oliver, celebrated British comic and host of the popular show Last Week Tonight, confronted the issue of media representation of scientific studies in a welcome and hilarious segment on his show. In a humorous yet searing look into the media’s dealings with science, Oliver points out that science is imperfect and that this fact is important but delicate and needs to be handled in the right way. Instead, he argues, the media turns science into “morning show gossip.” Indeed, Oliver poses the all-important question: “After a certain point, all that ridiculous information can make you wonder, is science bullshit?”
Oliver lists a number of important and irritating ways in which some media sources contribute to mass confusion around real science, which sometimes leads to tragic dismissal of scientific fact that can harm people’s health and wellbeing. But the problem is much deeper than media “sensationalism.” The problem, we believe, actually lies at the heart of the ways in which the scientific process inherently contradicts the ingrained ways of thinking that exist in every human brain. Perhaps it is not only the media that’s causing confusion and false scientific beliefs among perfectly well-educated and reasonable Americans. So what is going on here?
The fundamental problem with any individual’s ability to understand and accept science is two-fold: (1) that science proceeds by disproving rather than proving and (2) that science generally refuses to assert causality with any degree of absolute certainty. These two features of the scientific method are fundamentally antithetical to the way in which the human brain instinctively processes information and formulates conclusions.

Extensive research in both psychology and neuroscience has demonstrated just how stubborn our brains truly are. Once we believe something, our brains do everything they can to reinforce that belief and ensure that it is not shaken. Confirmation bias, a well-known psychological phenomenon, is a perfect example of this fact. Confirmation bias refers to the phenomenon in which new information is interpreted in the service of previously existing beliefs. In general, new information will be filtered and processed by our brains in order to confirm beliefs we already have. New information that conflicts with our beliefs is generally eschewed, as accepting it would create cognitive dissonance, an uncomfortable psychic scenario in which some information we have accepted as true runs counter to core beliefs we already have. Such beliefs may often form a key part of our identities. Neuroscience studies using imaging techniques such as fMRI largely confirm these psychological notions, often by revealing activation of our fear centers when presented with information that directly contradicts what we have previously held to be true.
The problem here is that science is all about disproving previously held beliefs. Science proceeds through a process of falsification, in which experiments are repeated ad nauseam in attempts to disprove an initial effect so that a new theory can emerge. In some cases, replication of experiments reproduces the exact same results over and over again. Only once an experiment has been replicated many times with the same results can scientific theory become scientific fact. As you can probably see, we have a serious problem here. Science by its very nature asks us to be willing to constantly dismiss old ideas in favor of new ones. But the human brain by its very nature asks us to hold on tightly and stubbornly to any idea we have already formed. Our instincts are simply not set up in a way that allows us to go merrily along with scientific falsification without a fight.
The second feature of science that clashes with fundamental human instinct is its reluctance to establish cause. If you carefully examine statements by scientists in the press, you will notice that they are frustratingly hesitant to ever say that “A causes B” or, for that matter, that “A definitively does not cause B.” This is because it is actually extremely difficult to establish causality in science – more difficult than most people realize. The only way to establish true causality in science is to observe what’s called the “counterfactual.” The counterfactual is what would happen in an alternate universe if one aspect of the environment were changed but everything else stayed the same. If you want to know whether drinking orange juice caused your rash, the best way to establish this for sure would be for you to go back in time, not drink the orange juice, and then see whether you still got a rash. Because every circumstance was exactly the same, except whether or not you drank the orange juice, if you still developed the rash, you could be 100% certain the orange juice did not cause it.
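To make the idea of the counterfactual concrete, here is a minimal illustrative sketch (a toy in Python, with invented names, not anything drawn from the studies discussed here) of the two “potential outcomes” behind the orange-juice example: each person has one outcome if they drink the juice and another if they do not, and only the outcome matching what they actually did can ever be observed.

```python
from dataclasses import dataclass

@dataclass
class Person:
    drank_juice: bool        # what actually happened (the "treatment")
    rash_if_juice: bool      # potential outcome if the juice is drunk
    rash_if_no_juice: bool   # potential outcome if it is not: the counterfactual

    def observed_rash(self) -> bool:
        # Only the outcome matching what the person actually did is observable.
        return self.rash_if_juice if self.drank_juice else self.rash_if_no_juice

    def juice_caused_rash(self) -> bool:
        # Judging causation requires BOTH potential outcomes, which is why
        # scientists are so reluctant to assert causality with certainty.
        return self.rash_if_juice and not self.rash_if_no_juice

# Someone who drank the juice and got a rash, but would have gotten it anyway.
p = Person(drank_juice=True, rash_if_juice=True, rash_if_no_juice=True)
print(p.observed_rash())      # True: the rash follows the juice...
print(p.juice_caused_rash())  # False: ...yet the juice did not cause it
```

Because the unobserved outcome can never be checked directly, real studies can only approximate this comparison, which is exactly the difficulty scientists face.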

Scientists have devised very clever and advanced methods of trying to approximate the counterfactual in experiments without the use of a time machine. However, they are by no means perfect and so scientists are very careful never to express 100% certainty about causality. The problem, again, is that human beings are not wired to accept this. A number of studies, again in both psychology and neuroscience, have demonstrated that human beings are extremely uncomfortable with any situation in which a cause is unknown. So we often fill in the blanks and invent causes when they don’t exist. This could explain why many parents are so quick to accept the notion that vaccines cause autism – no clear cause for autism has been established in science and so people are uncomfortable. At the same time, scientists are not very reassuring when they say things like “There is no evidence that vaccines cause autism.” Wouldn’t it be better if they would just say “We are 100% certain that vaccines do not cause autism”? The problem with this statement is that it technically does not represent the scientific method very well. But your brain doesn’t care about the scientific method when you’re desperately trying to make sense of a complex, often senseless world, in which many things seem to happen for no reason.
So when looking at the reasons behind public misperceptions of science, the media is certainly an important source. But we should be careful not to blame everything on journalists. One of the greatest enemies of the scientific method sits just inside the heads of scientists and non-scientists alike. We can overcome resistance to scientific fact only by being more aware of the devious ways in which our own minds can work against us.
Featured image credit: Brain by Jesse Orrico. CC0 public domain via Unsplash.

Radiohead’s A Moon Shaped Pool (XL, 2016): reflecting, looking forward
With the exception of Kid A/Amnesiac, Radiohead has reinvented itself sonically on every album since OK Computer. Saying that a new release represents a departure from the band’s previous style is therefore paradoxical—the only possible departure would be non-departure.
Radiohead’s ninth studio album, A Moon Shaped Pool (XL, 2016), is no exception. Jonny Greenwood’s string arrangements, an integral part of Radiohead’s sound since In Rainbows, now leap to the front of the soundstage. This isn’t so much a Radiohead record as a Thom Yorke/Jonny Greenwood collaboration.
Sure, Colin Greenwood’s signature bass lines make the occasional cameo (e.g., the Police-esque groove at 1:56 in “Identikit”), but Jonny’s expanded string orchestra leaves no room for Ed O’Brien’s Hail to the Thief-era ambient guitar work. Radiohead ditches the percussion-four-arms experiment that was The King of Limbs for a return to simpler grooves. Like his compatriot Charlie Watts, Phil Selway has always put his genius into highlighting the song rather than himself. With the exception of the groove that finally emerges toward the end of “Ful Stop” (cf. “Weird Fishes/Arpeggi”), he’s rendered nearly transparent.
Thom Yorke’s unique keyboard-based harmonies, always fodder for speculation among music analysts, regress more than ever into a bluesy/classic rock idiom. His “blue notes” (minor thirds in major keys) are particularly effective at highlighting emotionally poignant lyrics on AMSP. Nowhere is this more heart-wrenching than in the ending of “Desert Island Disk.”
Nearly all melodies conclude, structurally, with a mi-re-do (“three blind mice”) in some key. This G major song is headed for just such an ending (B–A–G), but at the moment Thom sings “different kinds of love,” the B is soured to B-flat. We’re then told different kinds of love are possible on A, with a sustained dominant pedal on D heightening our anticipation. Will his self-described “amicable” split with a partner of 23 years really be possible? When the dominant pedal resolves and A finally falls down to G, it’s harmonized with the happy B-natural, not the sour B-flat, and so Thom would have us believe that it will.

Using blue notes for text-painting purposes is old hat, but Yorke has a new harmonic trick up his sleeve on this record: sets of pandiatonic major chords to accompany otherwise diatonic melodies. True, he’s flirted with this twice before (“Everything in its Right Place” and “Pyramid Song”), but never to this degree.
The F-sharp major and E major chords that begin “Burn The Witch” suggest a V–IV loop in B major. But Yorke’s melody, clearly in A, grates constantly against the A-sharp in the accompaniment. The prechorus further clarifies A major melodically, but with C-sharp major and B major chords. Was that V–IV in F-sharp major? Maybe, but sets of major chords all related by perfect fifths and moving in parallel motion cannot establish a key (especially when Yorke’s singing in a different one).

The four pairs of major thirds that form the basis of “Tinker Tailor…” yield a similarly chromatic saturation. The resulting octatonic collection, a symmetrical palindrome, reminds me of Messiaen, a composer for whom Jonny Greenwood has a great affinity.
These timbral and harmonic departures notwithstanding, Radiohead fans will find some of the band’s old formal and rhythmic tricks still present on the record. Radiohead has always separated itself from the din of conventional rock music through its song forms, which continually transcend predictable verse/chorus formal designs. A Moon Shaped Pool includes two notable terminal climaxes—memorable sections of new material that appear only at the song’s ending. Like 2003’s terminally climactic “Sit Down, Stand Up,” “Ful Stop” builds gradually over three minutes to arrive on the repeated mantra “truth will mess you up.” Yorke tempers the uncharacteristically poppy B major climax of “Present Tense” with a dose of irony. Though he’s been singing in B major most of the song, he waits until the first appearance of a B major chord (at literally the last minute) to repeat “in you I’m lost.”
Formally speaking, the album’s second single, “Daydreaming,” is built around the repetition of a single riff. Our sustained interest in this riff is guaranteed largely through its omnipresent 3 vs. 2 rhythmic dissonance—a trick that comes straight out of “How to Disappear Completely.”


The record’s biggest throwback is surely the fan favorite “True Love Waits.” Previously a trite C major strummed acoustic guitar sketch, it’s been completely re-imagined as the dizzying piano riff shown below. Like “The National Anthem,” it’s composed of seven notes spread maximally even over a grid of 16 possible points. Just when we’ve figured out the pattern, Yorke switches the last two beats to .
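For readers curious what “maximally even” means here, the following minimal sketch (in Python; the positions are one illustrative maximally even arrangement produced by the standard floor formula, not a transcription of Yorke’s riff) shows how seven onsets can be spread as evenly as possible over a sixteen-point grid.

```python
from math import floor

def maximally_even(onsets: int, grid: int) -> list[int]:
    """One maximally even placement of `onsets` attacks on a cyclic `grid`,
    computed with the floor formula (equivalent to a Euclidean rhythm)."""
    return [floor(i * grid / onsets) for i in range(onsets)]

positions = maximally_even(7, 16)
print(positions)  # [0, 2, 4, 6, 9, 11, 13]
print("".join("x" if i in positions else "." for i in range(16)))  # x.x.x.x..x.x.x..
```

Any rotation of such a pattern is equally maximally even; the defining property is that only two adjacent-onset spacings, differing by one (here 2 and 3 sixteenths), ever occur.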

In light of recent events, listeners will want to hear this final track on A Moon Shaped Pool as autobiographical (i.e., “little hands,” “don’t go”). However, Thom’s been woodshedding this one for nearly two decades. What if this track isn’t a farewell to Rachel, but to us, the fans? Wouldn’t it be bittersweet if this song isn’t just another epic album-ending Radiohead track, but actually the final track? True Love waits through decades of fits and starts. It suffers rough drafts. It forgives half-decades between albums. Perhaps True Love knows when enough is enough, when to leave well enough alone.
Image credit: “radiohead” by Radamantis Torres. CC by 2.0 via Flickr.

Backward tracing
Some of the controversies in contemporary equity are of both theoretical and practical significance. This is particularly true of the controversy concerning so-called “backward tracing”.
If a defendant misappropriates trust money in order to buy a car, then the beneficiary can trace the value of his equitable proprietary interest in the money into the car. This is straightforward, since each stage of the analysis goes “forwards” in time. But what if the defendant had already purchased the car by taking out a loan, and only later misappropriated the trust money in order to pay off that loan? Can the beneficiary still trace into the car? (It will not usually be possible to bring a proprietary claim against the lender because of the defence of bona fide purchaser.) The traditional approach of English law has been to say that the beneficiary cannot trace into the car because this would involve “backward tracing”: a beneficiary is not able to trace into property that was already in the defendant’s possession before the beneficiary’s money was received, because the defendant’s property cannot then be regarded as representing the beneficiary’s money. On this view, tracing does not go backwards in time.
However, this traditional understanding of tracing appears to have been departed from in recent cases. This was implicit in the decision of the Court of Appeal in Relfo Ltd v Varsani and explicit in the advice of the Privy Council in The Federal Republic of Brazil v Durant International Corporation. In the latter case, the municipality of São Paulo brought claims against companies controlled by its former mayor and his son. The defendants had accepted bribes and then laundered the money received. The bribe money had been received by the controller of the companies and paid into a bank account in New York, from which payments were made to the defendants’ bank account in Jersey. The difficulty in this case was that some of the money paid into the New York account was credited only after money had been transferred to the Jersey account. It was necessary for the Privy Council to decide whether it was possible to trace from the New York account into the Jersey account even though the Jersey account had been credited before the New York bank account had been debited. It was acknowledged that this was typical of modern banking practice, and common in instances of money-laundering.
Lord Toulson, giving the advice of the Board, recognised that backward tracing would be possible where there was “a close causal and transactional link between the incurring of a debt and the use of trust funds to discharge it”. The focus should be on the substance of the transaction and not on the strict order in which associated events occur: “the claimant has to establish a coordination between the depletion of the trust fund and the acquisition of the asset which is the subject of the tracing claim”. On the facts of the case it was possible to establish the necessary connection between the payment of the bribes from the New York account and the credit to the Jersey account, regardless of the precise order in which the debit and the credit occurred.
The recognition of backward tracing in Brazil v Durant is helpful in combating fraud and money laundering. As a decision of the Privy Council on appeal from Jersey, it is, strictly, merely persuasive and not binding upon English courts (although the status of Privy Council judgments is currently being considered by the Supreme Court in Willers v Joyce). Nevertheless, it is highly likely that Brazil v Durant will be followed by English judges. Yet the Privy Council clearly held that there are limits to backward tracing. The Privy Council did not wish to expand equitable proprietary remedies in ways which may have adverse effects on innocent parties, such as the unsecured creditors of the defendant.
Where a defendant incurred a debt in order to acquire an asset, and always planned to discharge the debt with a beneficiary’s property, this seems likely to satisfy the “coordination” and “close causal and transactional link” requirements demanded by the Privy Council. In these circumstances, allowing the beneficiary to assert a proprietary interest in the acquired asset might not unfairly prejudice the defendant or other creditors, since the asset would never have been acquired by the defendant had he or she not known that he or she would be able to exploit the beneficiary’s property. The situation might be considered to be different where the trustee unwittingly misappropriated a beneficiary’s money in breach of duty, and discharged a debt that he or she had incurred long before. For example, if the trustee paid off the entirety of a mortgage taken out many years ago to purchase a house, it might seem unsatisfactory to allow the claimant to trace backwards through the debt, into the house, and then assert beneficial ownership of the house.
This is because, if the house has risen in value, the claimant would, fortuitously, receive the benefit of this increase. Such an outcome might be considered to be harsh on the trustee who made the sound investment decision to purchase the house long before, and who might have been able to pay off the mortgage with other funds anyway. In such a situation, it is unlikely that there is any “coordination” or “close causal and transactional link” that would enable the claimant to trace backwards. The boundaries of backward tracing still need to be made clear, but it is suggested that the judgment in Brazil v Durant International Corp is to be welcomed since it allows courts to provide remedies for fraud and money laundering more effectively and in a principled way.
Featured image credit: Credit card, by jarmoluk. CC0 public domain via Pixabay.

May 16, 2016
Why there can be no increase in all brain cancers tied with cell phone use
Several widely circulated opinion pieces assert that because there is no detectable increase in all types of brain cancers in Australia in the past three decades, cell phones do not have any impact on the disease. There are three basic reasons why this conclusion is wrong.
First of all, the type of brain cancer increased by cell phone use is glioblastoma. Glioblastomas are in fact increasing, as exemplified in those aged 35-39 in the United States, in precisely those parts of the brain that absorb most of the microwave radiation emitted or received by phones. But this increasing trend in glioblastomas of the frontal and temporal lobes and cerebellum is not evident when all brain cancers are considered.
Secondly, proportionally few Australians or others were heavy cell phone users 30 years ago. In 1990, just one out of every 100 Australians owned a cell phone, and calls were short and relatively costly. The first Motorola brick phone weighed close to two pounds, stood about a foot tall, provided about half an hour of talk time, and cost almost $4000 – about $9600 in 2016. Only in the last few years have cell phones become ubiquitous, with the heaviest use occurring in relatively young users.
Finally, the lag between when an exposure takes place and evidence of a disease occurs depends on two factors: how many people were in fact exposed and how extensive their exposure has been. While cell phones have been around since the 1990s, they have only lately become an affordable major component of modern life.
Consider what we know happened with tobacco smoking, according to the US Centers for Disease Control. The rate of smoking reached close to 70% in US males in the late 1950s, while the rate of lung cancer did not peak until the late-1990s. Thus, a lag of nearly four decades took place between an exposure that was shared by most of the population and a major increase in a related disease, as documented by the American Cancer Society, using data from the CDC and US Department of Agriculture.
The link between tobacco and cancer was not established by studying population trends, but by special studies of high-risk groups: case-control studies that compared the histories of selected cases with those of persons who were otherwise similar but did not smoke, and cohort studies of groups with identified smoking histories followed for up to 40 years, as in the American Cancer Society and British Doctors studies. The fact that population-based trends in Australia do not yet show an increase in brain cancer does not mean it will not be detectable in the future—perhaps soon.
In point of fact, several studies from Australia and the United States do find increased rates of gliomas in those who have been the heaviest users of cell phones for a decade or longer. A paper from the noted neurosurgeon Vini Khurana and colleagues examined reports from centers in New South Wales (NSW) and the Australian Capital Territory (ACT), with a combined population of over seven million, and reported that from 2000 to 2008 there was an increase in gliomas of 2.5% each year, with an even greater increase occurring in the last three years of the study.

Another study, by Zada and colleagues in the US, found significant increases in gliomas in those regions of the brain that are known to absorb the most microwave radiation—the cerebellum and the frontal and temporal lobes. Paralleling this result, the California Cancer Registry, which covers 36 million people, also reported significantly increased risks of gliomas in those same regions. Recent studies from China, as well as those of Nora Volkow, Director of the US National Institute on Drug Abuse, reporting in the Journal of the American Medical Association, have noted significantly increased metabolic activity in these same parts of the brain after 50 minutes of exposure to cell phone radiation.
Only a generation ago, the hazards of ionising radiation were unrecognized. It was common to find X-ray machines freely available in shoe stores so that you could see how new shoes fit relative to the bones of your feet. Teens were treated for acne with radiation to their faces, and those treated with X-rays for ringworm later incurred increased rates of thyroid and other cancers. Pelvic X-rays of pregnant mothers were routine until the 1970s, when leukemia risks were established in children who had been exposed prenatally decades earlier. Today, those who worked as radiographers and radiologists years ago have increased rates of a number of types of cancer. In every one of these instances, the hazards were recognized not through population-based data, but through special studies that compared detailed information on the exposures of those with the diseases against those without them.
Thus the lack of an increase in all brain cancers in the general population of Australia or any other modern country is to be expected in light of what is known about this complex of more than 100 different diseases. These unexplained increases in glioma remain gravely worrisome as this is the tumor type that we expect to see grow if indeed cell phones and wireless radiation are playing an important role.
As public health experts who have documented the dangers of smoking, both active and passive, and tracked the growing experimental and epidemiological literature on the dangers of cell phone radiation to reproductive and brain health, we appreciate that precaution must be exercised judiciously. There is no question that the digital world has transformed commerce, the nature of scientific discourse and research, our response to emergencies, and all forms of communication. The epidemic of lung cancer that emerged four decades after smoking became widespread provides sobering lessons about why we should invest in reducing exposures to wireless radiation. Like diagnostic radiation equipment today, wireless transmitting devices can be designed so that exposures are as low as reasonably achievable (ALARA). In our considered judgment, based on more than 100 years of combined professional experience in this field, it is of critical public health importance that every effort be made now to reduce and control exposures to these wireless transmitting devices, especially for infants, toddlers, and young children.
Featured image credit: Cell phone by Matthew Kane. CC0 Public Domain via Unsplash.
The post Why there can be no increase in all brain cancers tied with cell phone use appeared first on OUPblog.

Father and son, inspired: Joshua and Paul Laurence Dunbar
Over the past several years I’ve been writing a biography of Paul Laurence Dunbar (1872-1906), the first professional African American writer born after slavery to become an international phenomenon. I’ve touched on his birth and rearing in Dayton, Ohio; his quest to be a strong reader and a skillful creative writer; his friendship with the famous Wright brothers; the women he loved in his early years of manhood; his difficulties in breaking into the heartless literary marketplace; his precarious physical and mental health; his premature death.
Equally fascinating is the life of his father, Joshua Dunbar.
If we cobble together the bits of information spread across the handful of biographies of Paul and his ex-wife, Alice Ruth Moore, we know this much: Joshua was a former chattel slave and Union army veteran who passed away in relative obscurity. Born in early 1820s Kentucky, he fled a slave plantation, traveled northward, and sojourned in Canada. By June 1863, he returned to America to enlist in the Fifty-fifth Regiment of Massachusetts Colored Infantry. After a disability discharge cut his service short, he proceeded to enlist a second time, in January 1864, in the Fifth Regiment of Massachusetts Cavalry Volunteers. In October 1865, around the Civil War’s conclusion, Joshua was mustered out in Boston. He then moved to Dayton, Ohio, where he met a widow named Matilda Murphy (along with Robert and William, her two toddling sons from a previous marriage). Joshua and Matilda got married on Christmas Eve in 1871. After a few years of contending with Joshua’s domestic violence, alcoholism, recalcitrance, and decision to be a deadbeat father, Matilda filed for divorce in 1876, which a court granted a year later. In 1885, Joshua died at an Ohio Veterans Home. Joshua’s Civil War files held at the National Archives and Records Administration confirm the details of his military background and activities, and the Paul Laurence Dunbar Papers held at the Ohio Historical Society verify those regarding his personal life.
To fill in the blanks among these details of Joshua’s life, biographies tend to regurgitate information from one of Paul’s most famous short stories, The Ingrate. Published first in the August 1899 issue of the New England Magazine, the story reappeared the following year, in Paul’s second collection of stories, The Strength of Gideon and Other Stories. Set in antebellum Kentucky, the story recounts the life of Josh Leckler, a slave hired out as an underpaid plasterer. James Leckler, his master, teaches Josh how to read, write, and cipher so that he might learn how to discern whether his contractors are ripping him off. With this education, Josh repays his master not with grateful deference but with sly disobedience: he forges a traveling pass in James’s hand, flees Kentucky, finds his way through Ohio, reaches Canada as a freeman, and returns to America to enlist as a “colored soldier” in the Union army.
Taken together, this collage of documents—the Civil War files, the Paul Laurence Dunbar Papers, and The Ingrate—allows us to make tenable assumptions about Joshua’s whereabouts as a slave, fugitive, and freeman. The reliance of Dunbar biographers on this mix of fact and fiction makes sense. Around four million African Americans were enslaved by the start of the Civil War, yet only a fraction of them were literate enough to write their own narratives or fortunate enough to secure interviews with amanuenses who could write down their stories. Researchers tend to rely heavily on any such documentation they can find, since slavery, by design and by happenstance, tended to mutilate the individual and family records of its victims. From this vantage point, the more evidence we can find about Joshua, the better.
Despite the biographical clues that historical fact and fiction may afford in excavating Joshua’s life, the investigation itself rests on a set of assumptions that implicate literary studies of slavery and, in particular, the social and intellectual historiography by which we delineate the agency of slaves themselves. The attractive notion that we can access the life of Joshua by way of the literature of Paul belies the complexity of that actual investigation. Paul was not born a slave, after all. He was literary, extraordinary, and the liberal subject of intellectual history. How, within this matrix of experience, does one locate Joshua, who was quite the opposite—illiterate, more likely quite ordinary, and an enslaved object of social history? Characterized in this way, the genealogical or generational arc from Joshua Dunbar to Paul Laurence Dunbar defies simple illustration.
What I’ve found in writing the biography is that investigating the life of Joshua with his son Paul in mind has prompted me to think about the relationship between social history and intellectual history—the inherited fields that underwrite our recovery of Joshua and Paul, respectively. Scholars like Elizabeth McHenry and Christopher Hager have recently worked to bring literary studies and African American social history into closer conversation. Through this paradigm we are prompted not only to examine the texts of high literacy, or to approach textual artifacts merely as empirical capsules of the past, but also to appreciate demonstrations of literacy in all their textual qualities.
Until this point I’ve held back one fact: Joshua was an artisan—a plasterer, both during and after his enslavement. Artisanship was an alternative mode of literacy during slavery. It was one of many kinds of skilled labor that masters exploited among their slaves to operate their plantations or, by hiring those slaves out, to widen their investment and earning potential in neighboring regions of bondage. It arguably remains one of the few viable concepts by which we can bridge the social and intellectual histories of slavery.
Did Joshua’s life as a plasterer mean that Paul was predestined to be a literary artist—to be the ‘Poet Laureate of the Negro Race’, as he was called during his time? Not necessarily. But Paul, in his short story The Ingrate, was at the very least imagining the opportunities that being a plasterer could bring a slave. History indeed tells us that a slave’s condition as a skilled laborer or applied artist granted him a social empowerment that wasn’t always rooted in alphabetical literacy, an intellectual ability that has been a hallmark of the slave narratives we’ve canonized today, and one that is being scrutinized anew by the scholar John Ernest. Artisanship enabled both the fictional Josh Leckler of The Ingrate and the actual Joshua Dunbar to attain social privileges and political freedom despite their enslavement. Even if we can’t say that Joshua Dunbar’s creative skills were passed down to Paul in the same way that a father passes a gene down to his son, we can say that they inspired the son enough to write about someone like his father.
Featured image credit: Paul Laurence Dunbar, 1903, by The Booklovers Magazine. Public domain via Wikimedia Commons.
The post Father and son, inspired: Joshua and Paul Laurence Dunbar appeared first on OUPblog.
