Oxford University Press's Blog, page 501

June 12, 2016

Shakespeare: living in a world of witches

Since he was born a year after the Witchcraft and Conjuration Act of 1563 ushered in the era of the witch trials in England, it is hardly a surprise that witches and witchcraft came to feature in Shakespeare’s work. His writing career began as the witch trials reached their peak in the 1580s and 1590s. By his death in 1616 the number of trials had plummeted and seemed to be in terminal decline, though that would not have been apparent to him or his contemporaries; the upheaval of the Civil War would later briefly reignite the flames of persecution.


We do not know what personal experience or knowledge of the trials Shakespeare might have had. Very few records remain of the assize courts for Warwickshire, which dealt with capital crimes such as witchcraft in Shakespeare’s day, and the heartland of the prosecutions was far away in Essex. Still, pamphlets and tracts relating the terrible tales of witchery told at the trials were disseminated far and wide, and across the social spectrum. It is quite likely he was aware, for instance, of the Witches of Warboys, who were tried at Huntingdon in 1593; The Most Strange and Admirable Discoverie of the Three Witches of Warboys related how one of the accused had a host of familiar spirits named Smackes, Pluck, Blew, Catch, White, Callico, and Hardname. While living in London in 1602 he might have heard or read about the trial of the London charwoman Elizabeth Jackson for cursing and bewitching Mary Glover of All-Hallows-the-Less.


Trial pamphlets were not the only source of inspiration or insight available. A raft of intellectual demonological texts and discourses on witchcraft were printed at the time. Most were deeply influenced by continental books that related the torture-fuelled confessions of orgiastic witches’ Sabbats and satanic conspiracies. Others reflected more sober concerns with the supposed plague of day-to-day acts of malicious witchcraft. One book that explored both the diabolic conspiracy theology and the solitary village witch, The Discoverie of Witchcraft, has been of particular interest to Shakespeare scholars. Its author, Reginald Scot, was sceptical about much, though not all, of the evidence put forward by the witch-mongering demonologists. Thomas Middleton’s The Witch, most likely written in the final years of Shakespeare’s life, clearly borrows from the accounts of continental evidence provided by Scot, and many scholars now see traces of Middleton in Macbeth. It has also been argued that Shakespeare drew upon The Discoverie of Witchcraft for his depiction of the weird sisters in Macbeth, and for the expressions of emotion and imagination through witchcraft in A Midsummer Night’s Dream.



The Three Witches from Shakespeare’s Macbeth by Daniel Gardner, 1775. Public domain via Wikimedia Commons.

The witch was also a prominent figure in the ancient Greco-Roman drama and myth that so informed Shakespeare’s writing and imagination. Ovid’s Metamorphoses, translated into English by Arthur Golding, was a clear influence on Shakespeare, with its story of Medea, and how that other goddess-witch of antiquity, Circe, ‘‘fell a mumbling spelles, and praying like a witch.’’ The Greek witch-goddess Hecate appears before the three weird sisters in Macbeth.


Mentions of witches and witchcraft can be found in nearly all Shakespeare’s plays. The historian Diane Purkiss sums them up well: they are used as ‘‘a topic, a metaphor, a joke, a story, a half-formulated reference point, a piece of the plot.’’ So in Antony and Cleopatra, an embittered Antony calls Cleopatra a ‘witch.’ Othello is no witch, but he is accused of using ‘witchcraft’ to seduce Desdemona. In The Tempest Prospero comments that Caliban’s ‘‘mother was a witch, and one so strong that could control the moon.’’ But it is Macbeth that most people today associate with Shakespeare’s witches.


‘‘Double, double, toile and trouble; Fire burne, and Cauldron bubble.’’ So repeated the three witches in Macbeth as they stirred their cauldron to conjure up ‘‘a Charme of powrefull trouble.’’ In fact, in the first printed version of 1623, from which I have just quoted, the three weird sisters are not constantly referred to as ‘witches’, as they are in subsequent editions that contain speech prefixes and stage directions. Shakespeare also refers to them as ‘instruments of darkness’, ‘midnight hags’, and ‘night’s black agents.’ They are characters that transcend the mundane world of humans, more than just neighbourhood witches.


Shakespeare’s witches are largely fantasies drawn from classical literature and influenced to a degree by demonological texts. There are no witches in Shakespeare that reflect the everyday fears and beliefs about witchcraft held by his Stratford and London neighbours. They would have recognised the stereotype of the elderly hag-witch he depicted, and the credibility he sometimes gave to witchcraft as an evil; but there is little in his plays that represents the local witch who, out of jealousy and envy, was thought to bewitch pigs, horses, humans, butter, and beer. But as scholars have argued, Shakespeare’s plays are still valuable for understanding how the image of the witch, or to be more precise the female witch, was constructed in early-modern art and literature. And while there are other contemporary artistic depictions of witches stirring a cauldron, it is Macbeth’s instruments of darkness that have most inspired twentieth-century media representations of a trio of witches concocting their spells.


Featured image credit: Macbeth meets the three witches by Wellcome collection. CC-BY-4.0 via Wikimedia Commons.


The post Shakespeare: living in a world of witches appeared first on OUPblog.


Published on June 12, 2016 00:30

June 11, 2016

Understanding the Middle East chaos at its core: A struggle for belonging and immortality

Taken by itself, the election of the next American president, Democrat or Republican, will have little or no discernible impact on Middle Eastern chaos. To make any meaningful difference to this still-expanding problem, American decision-makers would first need to look behind the news. Only after such a penetrating look could our country’s next president ever hope to progress beyond uttering useless second-order narratives of regional names, places, and ideologies.


Should this core obligation to look beneath the surface be declined yet again, our national government (and certain allied governments) would remain unable to implement any meaningful remedies. At that point, only continuing regional disintegration, along with vast new legions of Middle Eastern refugees, could be expected. Ironically, the required forms of improved understanding regarding Syria, Iraq, Iran, Libya, Lebanon, Afghanistan, and the wider region are unhidden. After all, for literally millennia, nothing here has really changed. Rather, from the beginning, war and genocide have stemmed from seemingly fixed and universal human needs: to belong, and never to die.


In essence, in area geopolitics, the personal and the political have always been more-or-less interdependent. Today, moreover, when we can better understand that geopolitics is not geometry, and that sometimes, at least, the geopolitical whole can be greater than the sum of its parts, we should start to recognize not just interdependence, but also “synergy.” Indeed, no one can persuasively purport to understand Middle East chaos without first being willing to consider regional geopolitical relationships in their most fully reciprocal and complex expressions.


There is more. Inevitably, such willingness will bring the strategist or analyst back to the individual human being, for all area geopolitics is ultimately contingent upon the specific wants and behaviors of the single human person, which are readily observable and broadly predictable.


For the moment, what we still stubbornly choose to recognize and emphasize in world politics is merely epiphenomenal: the size and presumed “order-of-battle” of enemy forces, for example. Whatever we decide to emphasize in such politics has remained largely a passive reflection of deeper truths, a flimsy shadow of what is actually happening underneath, amid the shifting strata of area policies and social dissolution.


Why not, for the future, look elsewhere, look underneath, directly?



Man pointing by sneakerdog. CC-BY-2.0 via Flickr.

To try and “fix” Islamic Middle Eastern chaos by imposing yet another contrived amalgam of military and political responses would once again miss the point. The core problem, our leaders, both Democrat and Republican, should finally understand, is not narrowly political or military. It is, rather, deeply “psychological,” and broadly “civilizational.” To be sure, it will be difficult to get our next president to make the needed shift in orientation, especially because what is genuinely necessary is less tangible and calculable, but there is no plausible American alternative to accepting a much greater tolerance of ambiguity in US strategic planning.


The “real world” of Middle Eastern chaos is ambiguous. There is no point in overlooking this staggering complexity, or in pretending that it need not affect our regional policies. At the same time, beneath this ambiguity and searing chaos lies a decipherable and longstanding corpus of individual human needs. Among these needs, none is more harshly compelling, or more authentically causal, than the unwavering human desire to belong and to live perpetually.


Picasso once reminded us that “art is a lie that lets us see the truth.” Further along this avant-garde line of thinking, Swiss sculptor Alberto Giacometti’s Man Pointing offers a potentially illuminating representation of pervasive human isolation and alienation, a troubling image that could nonetheless begin to lead us toward a far deeper understanding of genocide, war, and terrorism. Such an understanding could then produce more thoughtful and correspondingly more auspicious American foreign policies.


Normally, as Giacometti’s art obliquely hints, each individual person can feel empty and insignificant apart from membership in some sort of crowd. Sometimes, this presumptively sustaining crowd is the ‘State.’ Sometimes, it is the ‘Tribe.’ Sometimes, as with ISIS, or Hezbollah, or the Muslim Brotherhood, it is, at least residually, the ‘Faith.’ Sometimes, it is the self-proclaimed “Resistance Movement,” as in the fiendishly similar examples of Hamas, Fatah, Islamic Jihad, and still other relentlessly murderous terror groups.


Art is a lie that may help us to see the truth. “Reading” Giacometti’s emaciated figure, the outlines of a distinctly pragmatic conclusion may appear:  


Unless we humans can finally learn how to temper our overwhelming and nearly-ubiquitous desire to belong at all costs, our recurrent military and political schemes to remedy genocide, war, and terrorism will inevitably fail.


Without augmentation by far more basic human transformations – namely, changes that produce more expressly individualistic human beings – these time-dishonored strategies of national security, collective security (the United Nations), and collective defense (alliances) will remain ineffectual.


It is largely this craving for membership and, as its corollary, for belonging, that threatens to subvert individual moral responsibility and thereby to ignite monumental crime. The lethal consequence of such intersecting cravings, as humankind has witnessed from time immemorial, is a convulsive and sometimes orgasmic triumph of collective will. The most easily recognized twentieth-century case of such a grotesque triumph is, of course, Nazi Germany, an observation already perfectly obvious to anyone who has seen Leni Riefenstahl’s infamous 1935 film, Triumph of the Will.


 In the Middle East, geopolitics is merely a secondary reflection of something much more primary. This “something” is the unrelieved yearning of individuals for both belonging and immortality.

Not every human crowd or herd need be insidious or destructive; not even in the Middle East. Still, grievously ongoing crimes against humanity could never take place in the absence of such collectivities. Whenever individuals join together and form a crowd or herd, certain latently destructive dynamics of mob psychology are made available for explosive release. Significantly, this fateful combining of membership with destructiveness lowers each affected person’s ethical and intellectual level to a point where even crimes against humanity may become acceptable. In the case of such barbarous groups as ISIS, the rabidly murderous behavior is not merely agreeable to the membership. It is also deeply welcome, satisfying, and lascivious, a viscerally continuous source of unparalleled ecstasy.


On the surface, ongoing brutalities in the Islamic Middle East represent fragmenting struggles between assorted warring herds. These herds, in turn, are the product of certain critically underlying individual needs to belong. These needs are themselves derived from the most primary human want of all. This, our leaders must finally understand, is the generally unquenchable yearning for immortality.


Understood as pathology, the current chaos in the Islamic Middle East remains only a symptom. But, as an appropriately aesthetic start to more promising and enduring policy solutions, Giacometti’s Man Pointing may be taken as an imaginative signpost of what is typically most determinative in spawning war, terrorism, and genocide. Sooner or later, what is happening here and elsewhere will need to be “fixed” at the “molecular” level of conflict, that is, at the needful level of the individual human being.


Unless we finally begin to acknowledge the ubiquity and core importance of these twin human longings for membership and immortality, our foreign policies there will assuredly fail. It follows that, for the upcoming American presidential election, it is finally time for all candidates to “take Giacometti seriously.”


The sculptor’s figure already knows where he is pointing. We, too, should try to find out. The alternative is simply another endless cycle of war, terrorism, and genocide.


Featured image credit: Middle East. View on Aleppo by Игорь М. CC-BY-2.0 via Flickr


The post Understanding the Middle East chaos at its core: A struggle for belonging and immortality appeared first on OUPblog.


Published on June 11, 2016 03:30

Professional skills and problem solving: the undergraduate research movement

I was recently asked to comment on “who benefits from research with students, and particularly how do undergraduates who do research benefit?” Like many of us, I have a set of answers in my pocket that I often use when I speak to colleges and universities about engaging undergraduate students in research. However, the audience for this question was not the group of like-minded peers who already believe in research as a fundamentally important thing in higher education. Instead, my audience was a skeptical member of the business community who was visiting campus. I explained that undergraduate research not only prepares students for graduate school, but that it also develops the kinds of attributes she would look for in her potential employees.


I am often dismayed by a shortage of imagination about what undergraduate research can accomplish for students and how the experience can remain valuable long after that summer research project or that last poster session. Too often we think of the “next” destination for our research students as graduate school. We want them to succeed, and since we know and understand the path we ourselves took (graduate school), we accordingly help our students get into graduate programs every year. Make no mistake: sending our students to graduate school is a good thing that needs to continue. However, many of our very best students do not want to go to graduate school; for a range of reasons, they want to move into the professional sphere. As faculty mentors, how do we help these students translate their success with us in research into an explanation of how they will be successful in professional settings outside the higher education world we know so well? How do we “imagine”, and how do we help students “imagine”, these settings and what might be asked of them? How do we help them explain to employers that the research experience in and of itself aligns with the values of the range of organizations they want to work for?


The issue becomes more salient as the undergraduate research movement looks to expand the number of students participating by building undergraduate research into curricula. The movement needs to make clear the importance of embedding research in the curriculum. The notion that research is valuable, and that having undergraduates participate is valuable, may be evident to those of us in higher education, but it is not always evident to our students. Similarly, it is not always evident to the businesses that hire our students. In fact, the value of research is an idea that is often lost in the larger public conversation these days. Undoubtedly, we have to talk effectively to students and others about the “product” of research, i.e., the value of our research questions and their outcomes. However, we have to do more. We have to talk to students about the “process” of research, and about how learning to state and test hypotheses, to collect and interpret data, and to draw careful conclusions based on that data equips them with tangible skills they can offer the professional world.


Because the intersection of what the professional world seeks in new graduates and what undergraduate research emphasizes is not always readily apparent, we need to teach students how to be as explicit about their skill sets as they are about their research findings. Students’ work is often complex and not well understood by recruiters. It is important to help students explain their project in a way that is readily understood, but more importantly they need to discuss the skills they acquired during the process and how those skills would be valuable to a prospective employer. A good road map for honing this conversation comes from what the Association of American Colleges & Universities has learned through its work with Hart Research (Hart Research Associates 2013). Faculty and students can tie undergraduate research directly to the qualities employers declare they are looking for. The traits employers believe colleges and universities should emphasize with students are listed below, with the percentage of employers saying each should be emphasized more, less, or the same:



Critical thinking and analytical reasoning skills (82% more; 7% less; 11% the same)
The ability to analyze and solve complex problems (81% more; 6% less; 13% the same)
The ability to effectively communicate orally (80% more; 8% less; 12% the same)
The ability to effectively communicate in writing (80% more; 8% less; 12% the same)
The ability to apply knowledge and skills to real-world settings (78% more; 6% less; 16% the same)
The ability to locate, organize, and evaluate information from multiple sources (72% more; 9% less; 19% the same)
The ability to innovate and be creative (71% more; 9% less; 20% the same)
Teamwork skills and the ability to collaborate with others in diverse group settings (67% more; 11% less; 22% the same)

The link between these traits and undergraduate research is clear. Without exaggerating, most undergraduate research experiences give students authentic practice in refining a complex question, approaching that question analytically, and harmonizing difficult data (including textual documents as evidence) and conflicting information into a synthesis or conclusion. The project must be written up, and is often presented at a conference or meeting. Along the way, the project will have created new problems that the student needed to solve before completing it. Projects are often done in teams and require effective communication and interpersonal skills. Our responsibility as research mentors is to help our students articulate their work both as a distinct project and as a set of attributes they now have and can explain to prospective employers.


Featured image credit: Student panorama created with 40 Pictures, of the Audimax Lecture Theatre at Karlsruhe. By Vins120. CC-BY-SA-3.0 via Wikimedia Commons.


This article first appeared on the Epigeum Insights blog, Tuesday 7 June 2016.


The post Professional skills and problem solving: the undergraduate research movement appeared first on OUPblog.


Published on June 11, 2016 02:30

Prejudice you aren’t aware of (what to do about it)

Employment, education, healthcare, justice, housing. These are some of the central services in society because they help people live the best life they can. But it will come as no surprise to most people that access to these services, and treatment at their hands, differs greatly depending on whether you are a man or a woman, the way you are racialized, your sexuality, whether or not you have a disability, and so on. In the US, defendants on trial for the murder of a white person are more likely to be sentenced to death the more stereotypically black their facial features are perceived to be. Asian Americans are less likely to be recommended for certain cancer screening tests than white Americans with the same symptoms. In the UK, women make up half the population, but only 38% of permanent academics at its higher education institutions, and only 22% of the professoriate.


What causes these disparities? No doubt there are many factors that interact with each other. Explicit prejudice will be part of the story. Failing to value black lives as highly as white lives likely plays a role in the first example – note that, when the murder victim is black, there is no discrepancy in the sentencing along racialized lines. And the sorts of microaggressions and sexual harassment that feed a hostile and misogynistic academic climate are sure to contribute to the under-representation of women in UK academia. But here we’d like to focus on just one factor, namely, implicit or unconscious bias. Social science research over the past few decades has led psychologists to believe that we all carry with us unconscious associations based on stereotypes, and that these associations drive our behaviour in certain circumstances. You might, for instance, carry an implicit association between being a woman and being bad at mathematics, and that might influence whom you hire for a job that requires numeracy; or between being a young black man and being aggressive, and that might affect where you choose to live; or between being an immigrant and being a benefit fraudster, and that might affect how you vote in an election. These associations can come in different degrees of strength, and they need have no basis whatsoever in fact. They are often formed by internalizing the way that people from the groups in question are portrayed in the society you live in. When a young black person is killed in a police shooting, the media tends to choose a picture that portrays them as solitary or confrontational; they rarely choose the equally available photograph of the young man at his grandmother’s birthday party, or the young woman graduating from college. These discrepancies in the representation of certain groups in the media become fixed in our unconscious stereotypical associations, which we often call upon when we make a decision – whom to hire, whom to sit beside on the train, how to vote. What’s more, once in place, the stereotypes are reinforced because we are so susceptible to confirmation bias, a widespread psychological phenomenon that leads you to ignore evidence that tells against your favoured hypotheses, and to accord great weight to evidence that supports them.



Chess by phil1256. Public domain via Pixabay.

In one important study of implicit bias, researchers submitted hundreds of fictitious applications in response to a broad range of job advertisements in the Boston and Chicago areas. Some bore typically white names; others bore typically black names. The applications with the white names received 50% more callbacks than those with black names. In another study, trauma surgeons were presented with fictitious vignettes describing clinical scenarios accompanied by photographs of the black or white patients described. The surgeons were much more likely to assume a history of alcohol abuse in the black patients. In a third study, people were placed in a simulated scenario and asked to identify which of the characters in that scenario were armed. They were much more likely to misidentify an item as a gun when it was held by a black character than by a white character. In each of these three studies, and in the many, many others in this area, researchers believe that at least part of the difference in the way people are treated is due to implicit bias against certain groups. Potential employers associate being white with ability and diligence, and they associate being black with laziness and incompetence. In the shooter scenarios, people associate being black with an involvement in gun crime. The associations have no validity, of course; but that doesn’t weaken the biases to which they give rise.


I’ve been talking as if everybody harbours these implicit prejudices – that’s because they do. While people from the group that the bias works against will have slightly weaker biases, they will still have them. Women will have anti-women biases; and black people will have anti-black biases. And the same goes for people who are explicitly and publicly egalitarian. We publicly-committed feminists and advocates for racial justice are just as likely to have some degree of implicit bias towards women and people of colour as those who are not so committed. As philosophers Jennifer Saul and Michael Brownstein put it, we too are “part of the problem.”


So the phenomenon is extremely widespread. And this makes it particularly urgent that we find ways to reduce these biases, or at least mitigate their effects. One way to do this is to remove the food that nourishes them: balanced reporting in news outlets; more nuanced, less stereotypical, and less offensive portrayals of women, people of colour, and transgender people in television and film; diverse groupings in positions of power; and so on. These are all large-scale societal interventions that would help enormously. But at the individual level, there are also a number of effective strategies. Psychologists have found that your brain will call on your implicit associations, rather than your explicit (and hopefully less prejudiced) thought processes, when it is making a judgment in a rush, or under stress, or when there is little information to go on, or when it’s distracted thinking of something else. So you can try to avoid making important decisions about other people – who will be tasked with a management role, whom you will vote for, how you will treat a particular trauma victim in your clinic – when you are in such situations. Your bias towards someone from a particular social group can also be reduced by increasing your contact and interactions with people from that group, so long as the conditions of the contact are right: it must be informal and personal, on an equal footing, and in pursuit of a common shared goal; bias is not reduced in circumstances in which white participants occupy a management role while black participants play a subservient administrative role, for instance. Finally, you can reduce the strength of your bias by thinking about counter-stereotypical exemplars. These are people – like mathematicians who are women and political leaders who are African-American – who counter the prejudiced negative stereotype held about the group to which they belong. Considering such people before making a decision will mitigate the effects of your bias on that decision.


What is so unsettling when we learn about implicit biases is that they control our behaviour in ways we disavow; and they do so without our conscious consent. It is as if we learn of an inner demon hell-bent on sabotaging our best-laid egalitarian plans. But there are ways to quiet these monsters that lurk below the level of consciousness. And as research progresses and knowledge of these mitigation strategies increases, the effects of these demons will, we hope, diminish.


Featured image: Lego doll amphitheatre by eak_kkk. Public domain via Pixabay.


The post Prejudice you aren’t aware of (what to do about it) appeared first on OUPblog.


Published on June 11, 2016 01:30

June 10, 2016

Hamilton and the theatrical legacy of Leonard Bernstein

Lin-Manuel Miranda’s Hamilton: An American Musical is a runaway success on Broadway—enough so that just about everyone reading this post, regardless of personal demographics or geographic location, will likely have heard about it. They might also be listening obsessively to the original-cast CD. Perhaps they’ve even memorized it. Hamilton has already won a Pulitzer Prize for Drama, and it earned a record 16 Tony Award nominations, with high expectations for a sweep at the awards ceremony on Sunday, June 12th. The show explores the legacy of Alexander Hamilton, a Revolutionary War hero and first Secretary of the Treasury, through the language of hip hop, highlighting Hamilton’s roots in the West Indies and hailing him as an immigrant who made good. Brilliantly conceived, with exceptional virtuosity in singing, rapping, and dancing, the show also makes a political statement by casting actors of color. Doing so challenges traditional all-white narratives about the founding of the United States.


As a music scholar who writes about Leonard Bernstein, I found myself musing about historical connections between his work and that of Miranda. Bernstein died in 1990, as hip hop was gaining mainstream force and Miranda was a ten-year-old, so he lived in a different musical orbit. Yet Bernstein dedicated himself to articulating issues of social justice through his Broadway shows; he established a life-long focus on hiring performers of color; and he thrived on a powerful intermingling of dance, words, and music. Bernstein also loved to deliver high-intensity conversations through intricately crafted ensemble numbers.


Let’s explore links between these two theatrical talents, doing so through the lens of shows written by—or important to—Bernstein.



The Cradle Will Rock, with words and music by Marc Blitzstein. Bernstein first music-directed Blitzstein’s blistering agit-prop show about the plight of unionized workers in 1939, when Bernstein was a senior at Harvard and the show was brand new. As time passed, he returned to the work repeatedly, becoming one of its staunchest advocates. With an edgy blend of social commentary and wry twists on popular culture, The Cradle Will Rock was a crucial model for Bernstein’s own theatrical vision. Bernstein also aspired to Blitzstein’s dual role as lyricist and composer, a slot that Stephen Sondheim has occupied quite comfortably. Bernstein achieved that goal in a number of theater works, notably in his one-act opera Trouble in Tahiti. Miranda shares this lineage, writing activist theater and having a gift for both words and music.
On the Town, with a score by Bernstein, book and lyrics by Betty Comden and Adolph Green, and choreography by Jerome Robbins. This World War II musical marked the Broadway debut of its creative team—all twenty-somethings at the time. As a groundbreaker in mixed-race casting, On the Town’s initial production was mildly yet flagrantly multi-racial. The show included 6 African Americans out of a cast of 54, which looks like tokenism today. Within the politics of the 1940s, however, it was a significant step, disrupting the strictures of racial segregation. In the dance chorus, black male dancers held hands with white females, violating a taboo of the time, and black and white soldiers mingled on stage, offering an alternate vision to the racially segregated military of the day. Just as racially audacious, the show’s star was Sono Osato, a Japanese-American dancer whose father was interned during the war, like so many other Japanese nationals living in the U.S. On the Town was also dance-driven, and it yielded the kind of holistic vision of words, music, and movement that makes Hamilton so compelling.
West Side Story, with a score by Bernstein, book by Arthur Laurents, lyrics by Stephen Sondheim, choreography by Jerome Robbins. One of the most respected and beloved musicals of all time, West Side Story had a formative impact on Miranda. “I don’t know if there is a score that I know as well as I do that score,” Miranda declared in an interview with the actress Rita Moreno. Perhaps most striking was the show’s focus on Puerto Rican immigrants. Set on the streets of New York City in the contemporary world (1957), West Side Story dramatized urban teen violence bred of ethnic conflict; at the same time, it aimed to humanize immigrants. Another link with Hamilton appears in West Side Story’s glorious ensemble numbers, especially the famous “Tonight Quintet.” Hamilton, too, thrives on complex ensemble writing.
MASS: A Theatre Piece for Singers, Players, and Dancers, with a score and text by Bernstein, additional text and lyrics by Stephen Schwartz, choreography by Alvin Ailey. Debuted at the opening of the John F. Kennedy Center for the Performing Arts in Washington, D.C. in 1971, Bernstein’s Mass grappled with a crisis of faith, and it did so in part by brashly reveling in the sounds of contemporary popular music, including a rock band among its multiple instrumental ensembles. In other words, Mass embraced the soundscape of its day. For Hamilton, that realm is hip hop.
1600 Pennsylvania Avenue, with a score by Bernstein, book and lyrics by Alan Jay Lerner. A notorious flop that debuted during the U.S. Bicentennial, 1600 Pennsylvania Avenue only lasted for 7 performances—a sadly different fate from Hamilton. Yet both shows have race and a re-examination of American history at their core. 1600 Pennsylvania Avenue explored race relations in the White House during the nineteenth century, viewing the imbalance of power within the “people’s” mansion and highlighting the crucial role played by African Americans in the building’s social and cultural history. In Hamilton, the Founding Fathers are raced as black and Latino – yet another means of upending historical narratives that privilege white protagonists.

“Times Square Ballet,” photograph from the original production of On the Town in 1944. Peggy Clark Collection, Music Division, Library of Congress.

 


Within Bernstein’s legacy, West Side Story looms. Two years ago, when Miranda interviewed Rita Moreno about her role as Anita in the 1961 film of West Side Story, he articulated his deep relationship with Bernstein’s show:



The first time I saw West Side Story, I was in sixth grade. I will never forget it because I remember that I was cast as Bernardo in our sixth grade play and so my mother rented the movie so we could watch it together. . . . When “America” started and it was about whether to live in Puerto Rico, or live in the U.S. – as a kid who grew up here and was sent there every summer – I was like, “Holy sh*t! ‘West Side Story’ is about Puerto Ricans?!” It really blew my mind. . . .


I ended up directing West Side Story my senior year of high school, [and] I ended up working on the 2009 revival with Arthur Laurents and Stephen Sondheim and doing the Spanish translations for Arthur’s take on the revival. It’s been very instrumental to my life. . . . And Leonard Bernstein’s music is immortal. It still sounds different from every other Broadway score you’ll hear. The scope and the size of it really is incredible.  It’s incredibly ambitious writing.



“Incredibly ambitious writing”: of the many affinities linking Miranda to Bernstein, perhaps the most fundamental is a capacity to think big – to devise major creative visions that tackle conundrums in the American experience and to persuade prodigiously talented collaborators to help put those ideas on stage.


 


Featured image: publicity still of Daveed Diggs, Okieriete Onaodowan, Anthony Ramos, and Lin-Manuel Miranda, in a scene from Hamilton: An American Musical. Photo by Joan Marcus for The New Yorker, February 9th, 2015.


The post Hamilton and the theatrical legacy of Leonard Bernstein appeared first on OUPblog.


Published on June 10, 2016 04:30

The Mediterranean-style diet on heart disease and stroke

Coronary artery disease is the leading cause of death in many countries, but its prevalence has changed significantly during the last 50 years. Death rates from heart disease have fallen dramatically in western countries, but increased in many ‘developing’ countries. These large population-wide changes suggest environmental factors, including diet, are a major determinant of the risk of heart disease. A typical western dietary pattern has been widely considered to be unhealthy, because it contains more processed foods, which can increase obesity and diabetes, and salt, which can increase blood pressure. For many years, foods containing saturated fat have also been thought to increase the risk of heart disease. Of ‘healthy’ diets, most evidence favors a Mediterranean dietary pattern, which includes fruits and vegetables, whole grain foods, fish, little meat, modest alcohol, and olive oil.


Although diet is likely to be an important determinant of heart disease risk, and much research has been undertaken, there is continued controversy over which foods are beneficial and which are hazardous. One problem is that diet is complex and includes many foods, so the influence of individual foods is difficult to determine. Another is that risk factors for heart disease act over years, during which diet may change. Many studies have evaluated associations between diet and markers of heart disease risk, such as blood pressure, cholesterol levels, or measures of inflammation. However, diet could influence risk by other pathways, so relying on these surrogate measures could be misleading. A further problem is that an unhealthy diet may be associated with other adverse lifestyle ‘risk factors’ such as a sedentary lifestyle, smoking, and socio-economic deprivation, which could explain health problems attributed to a poor diet. The most reliable evidence comes from clinical trials in which participants are randomized to one diet or another and outcomes are followed over several years. However, randomized trials of diet are very hard to undertake, and few have been done.



Sandwich by Life-Of-Pix. CC0 Public Domain via Pixabay.

The European Heart Journal recently reported the results of a large study, which evaluated dietary patterns in over 15,000 patients with stable coronary heart disease from 39 countries. Subjects were participating in the ‘STABILITY’ trial, which was primarily designed to evaluate a novel medication, darapladib, which turned out not to influence cardiovascular risk. At the start of the study, a simple questionnaire asked participants how often they ate common food groups. Two dietary patterns were evaluated: a ‘Mediterranean’ pattern, characterized by greater consumption of whole grains, fruits, vegetables, legumes, fish, and alcohol, and less meat; and a ‘western’ pattern, characterized by greater consumption of refined grains, sweets and desserts, sugared drinks, and deep-fried foods. During nearly four years of follow-up, individuals who most closely followed a Mediterranean dietary pattern were about one third less likely to die, or to suffer a heart attack or stroke, after statistical adjustment for all other factors. In contrast, and to our surprise, there was no relationship between the western dietary pattern and these adverse outcomes. The benefits associated with a high ‘Mediterranean diet score’ were consistent across many countries, but the study could not evaluate the importance of individual foods. These observations suggest the healthy types of foods included in the Mediterranean diet score are likely to be beneficial. Foods in this dietary pattern can be part of widely varying diets around the world, many of which are far from a ‘traditional Mediterranean diet’.
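The ‘dietary pattern scores’ behind these results are, in essence, simple sums over questionnaire answers: foods characteristic of each pattern contribute to that pattern’s score. The short Python sketch below is a deliberately simplified, hypothetical illustration of the idea; the food groups are taken from the description above, but the weights and scoring are invented for illustration and are not the actual STABILITY trial instrument.

```python
# Toy dietary-pattern score (hypothetical scoring, NOT the STABILITY
# trial's actual instrument). Inputs are weekly servings per food group.

MEDITERRANEAN = ["whole_grains", "fruit", "vegetables",
                 "legumes", "fish", "alcohol"]  # the pattern also means *less* meat
WESTERN = ["refined_grains", "sweets_desserts",
           "sugared_drinks", "deep_fried_foods"]

def pattern_score(servings: dict, groups: list) -> float:
    """Sum weekly servings over the food groups defining one pattern."""
    return sum(servings.get(group, 0) for group in groups)

# A hypothetical participant's questionnaire answers (weekly servings):
participant = {"whole_grains": 10, "fruit": 14, "vegetables": 12,
               "legumes": 3, "fish": 2, "alcohol": 4,
               "refined_grains": 2, "sweets_desserts": 1}

med_score = pattern_score(participant, MEDITERRANEAN)
west_score = pattern_score(participant, WESTERN)
print(f"Mediterranean score: {med_score}, western score: {west_score}")
# The study then compared rates of death, heart attack, and stroke
# across groups of participants ranked by each score, after adjustment.
```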


Dietary advice has traditionally focused on avoiding ‘unhealthy’ processed foods, which are a major contributor to obesity. However, this study adds to growing evidence that less ‘healthy’ foods, though increasingly consumed globally, may not be the major cause of cardiovascular disease. It is possible that dietary advice emphasizing the importance of ‘healthy’ foods, including at least three servings of fruit and vegetables each day, may be more positively received, and more successful in lowering the risk of heart disease and stroke.


Featured image: Tomato by Aromaengle. CC0 Public Domain via Pixabay.

The post The Mediterranean-style diet on heart disease and stroke appeared first on OUPblog.


Published on June 10, 2016 02:30

Prodigy or savant: two sides of the same coin?

As adults, we remain fascinated by images of young children performing extraordinary feats, with platforms such as YouTube offering an unending wealth of mini Mozarts and baby Einsteins to feast our eyes and ears on, and providing perfect fodder for our daily Facebook feeds. We are filled with awe at the sight of such small individuals undertaking tasks that most adults only dream of. But what word should we use to describe these children? Two words in particular – ‘prodigy’ and ‘savant’ – are often used interchangeably; but what is the difference, and why does it matter?


The most common definition of a prodigy focuses on results or achievement: a child, typically under the age of ten, who can perform at an adult professional level in a highly demanding, culturally recognised field of endeavour. But unpicking this definition reveals its flaws. Why the cut-off at age ten? What does ‘adult professional level’ really mean? And would ‘culturally recognised field of endeavour’ favour classical music above other forms? The definition also fails to mention the process so fundamental to prodigious achievement. What really sets prodigies apart is the extraordinary pace at which they are able to progress in mastering their chosen field of expertise. So we think of Umi Garrett playing Liszt on the stage of Ellen to a rapt audience after only a few years of formal musical training.



A savant on the other hand is someone with savant syndrome, a rare condition in which people with developmental disabilities demonstrate remarkable talents that stand in stark contrast to their overall intellectual deficits. The psychiatrist Darold Treffert has elegantly labelled these talents as “islands of exceptional mental performance in a sea of disability”. Importantly, a savant is not necessarily a child, as the condition can be either congenital (present from birth and evident in early childhood), or acquired (later development after injury or disease to the central nervous system). Savant skills typically occur within the fields of music, art, calendar calculating, mathematics, and mechanical/visual-spatial areas, with musical savantism being the most common subtype. Here, we are more likely to picture a Hollywood version of Kim Peek, whose extraordinary memory was so famously portrayed by Dustin Hoffman in the movie Rain Man.


So the distinction seems pretty clear, right? Well, not quite. Recent research suggests that savants may have less severe intellectual deficits than previously believed. Daniel Tammet, a writer and autistic savant, has both high IQ and extraordinary savant skills; his abilities suggest that the traditional interpretation of prodigies as children with extremely high intelligence, and savants with extremely low intelligence, is more complicated than originally thought.


It seems then that the hallmark differential characteristic between prodigies and savants is not necessarily intellectual deficit, but rather the occurrence of a developmental disability, such as (but not always) Autism Spectrum Disorder. Individuals with a developmental disability who demonstrate skills that stand in contrast to their overall disabilities, regardless of the extremity of these skills, are classified as savants, and cannot be classified as prodigies, even if these skills are exhibited before the onset of adolescence. This means that when we’re looking at a prodigious young talent in performance, it may not be obvious at first whether the child is a prodigy or savant.


So if the distinction between these two groups of precocious individuals can be initially indiscernible, do they then simply represent two sides of the same coin? Again, maybe not. While the perceivable outcomes of both groups can often be identical, it seems that what differs is how the two groups attain their skills. For prodigies, it is the speed in which their skills develop that is their hallmark characteristic. The learning approaches of savants, on the other hand, can be classified as atypical, for not only can the early onset of savant skills become noticeable in the absence of any formal training, but their skill development may skip typical learning steps.


Prodigies and savants may therefore remain two exceptional and distinct groups of individuals. But the debate is far from closed, with academics divided as to whether a true distinction exists or not, or if prodigies and savants are simply one and the same phenomenon, occurring along continuums of giftedness, talent, intelligence, and autism. What is indisputable, however, is that further research into the remarkable abilities of these precocious children is needed, to not only offer us insights into exceptional ability, but to also broaden our understanding of musical learning and development in general.


Featured image credit: Quartet in C by WikimediaImages. CC0 Public Domain via Pixabay.


The post Prodigy or savant: two sides of the same coin? appeared first on OUPblog.


Published on June 10, 2016 01:30

Mapping the moral high ground on fossil fuels

When I was writing Isotopes: A Very Short Introduction, I began to wonder why it took us so long to appreciate the effect that burning fossil fuels is having on the Earth’s atmosphere. The so-called Suess effect in radiocarbon (14C) has been known for decades. Geological sources of carbon like coal and oil, which formed many millions of years ago, have long since lost their radiocarbon through radioactive decay – they contain 14C-free “dead” carbon. From the mid-19th century the radiocarbon activity of the atmosphere declined as dead carbon from fossil fuels was dug out of the ground and burnt, producing carbon dioxide (CO2) in the atmosphere. (The nuclear tests of the 1950s and 60s reversed this trend, but that’s another story.) Why did we not realise that such a massive shift would have dramatic consequences?


Isotopic records from ice cores drilled in Greenland and Antarctica indicate that the Earth’s temperature and atmospheric CO2 levels are closely linked. Natural oscillations between glacial (cold) and interglacial (warm) conditions have occurred for at least half a million years. During an ice age atmospheric CO2 is about 180 parts per million (ppm); at peak interglacial it rises to about 280ppm. Ours is a time close to peak interglacial, yet we have recently passed 400ppm: with CO2 already at the peak of the natural cycle, we have added another 120ppm. 120ppm, or 0.012% of the atmosphere, doesn’t sound like much, but CO2 is a potent greenhouse gas. Importantly, we did this in a hundred years or so, whereas the natural cycle takes 10,000 years or more. Climate change is real and we definitely need to control CO2 emissions.
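Both observations, the “dead” fossil carbon and the abnormal speed of the modern CO2 rise, can be checked with back-of-the-envelope arithmetic. The short Python sketch below is my own illustration, not part of the original post; the only input not quoted above is the standard radiocarbon half-life of about 5,730 years.

```python
# Illustrative arithmetic only (not from the original post).
# Assumes the standard radiocarbon half-life of ~5,730 years.

HALF_LIFE_14C = 5730.0  # years

def c14_fraction_remaining(age_years: float) -> float:
    """Fraction of the original 14C left after age_years of decay."""
    return 0.5 ** (age_years / HALF_LIFE_14C)

# Coal and oil formed many millions of years ago:
print(f"14C remaining after 1 Myr: {c14_fraction_remaining(1e6):.1e}")
# ~3e-53, i.e. effectively zero -- "dead" carbon, hence the Suess effect.

# Rate comparison using the ice-core figures quoted in the text:
natural_rate = (280 - 180) / 10_000  # ~100 ppm over at least 10,000 years
modern_rate = 120 / 100              # ~120 ppm over roughly a century
print(f"Natural rise: {natural_rate:.2f} ppm/yr")
print(f"Modern rise:  {modern_rate:.2f} ppm/yr "
      f"(~{modern_rate / natural_rate:.0f}x faster)")
```

On those numbers the modern increase is roughly a hundred times faster than the natural glacial cycle, which is precisely the point made above.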


Two years ago my university made the decision to become the first European university to divest its interests in fossil fuels, a decision that attracted lots of positive publicity. When the story broke I remember explaining to my daughter why I wasn’t particularly impressed by the moral high ground the university was claiming. She was bemused that I was so against a decision being lauded in the media. So, why wouldn’t someone so persuaded of the need to restrict CO2 emissions support a decision to divest from fossil fuels?


Recently, I visited Whitelee wind farm, Europe’s largest onshore wind farm – actually, I just realised how disingenuous the word ‘farm’ is in this context – it’s a power station! Whitelee is on Eaglesham Moor, a bleak place where a regular supply of wind is guaranteed. The site has 215 wind turbines, which together can generate a maximum of 539 million watts (539 megawatts, MW). The total UK wind power capacity is about 13 thousand million watts (13 gigawatts, GW).



Whitelee Wind Farm, by Iain Thompson. CC-BY-SA-2.0 via Wikimedia Commons.

13 GW is a little less than the electricity the UK uses at 3am. Energy experts talk about electricity in terms of ‘baseload’ and ‘dispatchable’ power. Baseload is the minimum needed to keep the grid working – about 17 GW. Dispatchable is the additional power needed to respond quickly to demand – at 6pm the UK needs roughly an extra 40 GW. The UK relies mainly on nuclear and gas to deliver baseload, but the vast majority of dispatchable power still comes from coal – about 25 GW at peak demand. The moral high ground might appear to involve fossil fuel divestment, but abandoning coal and gas would have a massive impact on security of electricity supply.
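As a sanity check on these figures, here is the arithmetic in a short Python sketch. It uses only the numbers quoted in this post; the derived values (per-turbine rating, shares of demand) are my own illustration, not the author’s calculations.

```python
# Back-of-the-envelope check on the UK electricity figures quoted above.

turbines = 215                 # Whitelee turbines
whitelee_mw = 539              # Whitelee maximum output, MW
uk_wind_gw = 13                # total UK wind capacity, GW
baseload_gw = 17               # minimum to keep the grid working, GW
extra_peak_gw = 40             # additional dispatchable power at 6pm, GW
coal_peak_gw = 25              # coal's contribution at peak demand, GW

print(f"Average turbine rating: {whitelee_mw / turbines:.1f} MW")    # ~2.5 MW

# Even at full output, all UK wind covers only part of overnight baseload:
print(f"Wind capacity vs baseload: {uk_wind_gw / baseload_gw:.0%}")  # ~76%

peak_demand_gw = baseload_gw + extra_peak_gw                         # ~57 GW
print(f"Peak demand: ~{peak_demand_gw} GW, of which coal supplies "
      f"~{coal_peak_gw} GW of the {extra_peak_gw} GW dispatchable requirement")
```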


It isn’t just electricity; heating makes up more than 50% of energy use, and 82% of UK households depend on gas for heating. About a quarter of UK households are in fuel poverty, defined as needing more than 10% of household income to keep warm. Homes without mains gas are more likely to be in fuel poverty because they rely on more expensive oil and electrical heating. Experts talk about the energy ‘trilemma’: how do we ensure (i) security of supply while maintaining (ii) environmental sustainability and (iii) affordability? I think there is a serious political problem in that the electoral cycle does not encourage the long-term planning and investment required to address the trilemma.


So, I simply pose the question – what is socially responsible? Saving the planet? Well, that usually means saving the people, and an alien might observe that the people are largely the problem. Earth did rather well without us for about 4.5 billion years, and my instinct is that it will manage pretty well when we are gone. Is unreliable electricity supply socially responsible? No, but it might become the norm if we removed fossil fuel power generation. Is it socially responsible to plunge people into fuel poverty? No, but the obvious corollary of the energy trilemma is more expensive energy. Is it socially responsible to expect other parts of the world not to aspire to western lifestyles? No, and it is worth noting that China’s famous growth in coal-fired electricity has seen 650 million people taken out of poverty, accompanied by an 80% decrease in female illiteracy and a 70% decrease in infant mortality.


So, what’s the answer? Well, I don’t think we are going back to living without heat, electricity, and transport any time soon. To my mind we need to use the same ingenuity that caused the ‘carbon crisis’ to engineer our way out. Improving the fuel efficiency and emissions standards of our homes and vehicles surely has a part to play. Making such technology affordable is also a key ingredient. But I think we also have to explore new technologies. Using thorium instead of uranium to generate nuclear power is one option and has the advantage of producing nuclear waste that is much shorter-lived. Returning carbon to the deep Earth for long-term storage in geological reservoirs is another viable technology. Local renewable technologies are attractive but might require an unprecedented (in the UK) level of community cooperation. None of this will be cheap and much of the expertise we will need resides in the current nuclear and fossil fuel industries. It might turn out that the moral high ground is actually to invest in those industries.


Featured image credit: ‘Windmills and renewable energy’, by makunin. CC0 public domain via Pixabay.


The post Mapping the moral high ground on fossil fuels appeared first on OUPblog.


Published on June 10, 2016 00:30

June 9, 2016

Is there a human right to the city? Rethinking the politics of rights

What gives you the right? We are familiar with rights claiming; it comes easily to our lips when we believe we are entitled to something – to respect, to our fair share. Rights are fighting words. We invoke them when we have been wronged, when a situation has become intolerable. Rights claims are a way of fighting for control.


What do we have a right to? In a sense, this is a question about what we should have control over. Conventional human rights to life and to freedom from torture and arbitrary arrest reflect our sense that we should have control over our bodies, for example. So, what should we have control over other than ourselves? This is actually the central question of nearly all rights controversies. Here I do not want to offer a general answer, as it is my considered view that there are no final answers, only an always ongoing contest over our rights.


Instead, I want to think about a specific right. Do we have a human right to the city? The locution is strange, I know. Yet, to claim we have a right to the city is to challenge the basic political and economic structure of contemporary urban life. The right to the city is a slogan used by urban political movements across the globe, which demands that each of us should have control over the cities in which we live.


What could ground such a right? Rights, it seems, must be justified by something of significance and with wide acceptance. Philosophers have argued for generations about what that something is: dignity, autonomy, wellbeing, or our membership in a privileged community. This logic of justification makes rights claiming into a question of who we are, and especially of what is valuable in us, ontologically speaking, that could justify our having control – of our bodies, over decisions that affect us, or even over the cities in which we live.


The right to the city, however, is something of a deviant rights claim. It is not made in the name of our membership in particular nation-states or ethnic groups. It is not based on an account of each of us as rational and autonomous individuals. It is not even rooted in our common capacity for suffering. The right to the city finds its justification in our experience of the contemporary city, and particularly the experience of the marginalised and oppressed.


To understand the right to the city requires confronting the myriad injustices of the contemporary city, which robs people of control over the most basic aspects of their lives. In the United Kingdom homelessness is reaching levels not seen since the Victorian era. In the United States, meanwhile, rents have risen to the point that there is no state in which low-income workers can afford to live without spending an excessive portion of their income on rent.



London Terraced Houses by Kathleen Tyler Conklin. CC-BY-2.0 via Flickr.

In London young mothers are forced to move hundreds of miles from their homes while their local council sells off social housing. In Rochester, New York, a heavily armed police unit forces an elderly woman out of her home in the middle of the night because of the fraudulent actions of her bank. In Athens migrants and asylum seekers are subjected to abuse from police and fascist political groups. In Washington DC young black men are stopped by undercover police officers and searched without cause; in nearby Baltimore, Freddie Gray was killed by police officers, an all too common tragedy. In Durban, shack dwellers are displaced to make way for new development projects, moved onto unsustainable land and into inadequate housing, and then targeted when they fight for their rights.


This brief summary of contemporary injustices points towards a global process of urbanisation that is profoundly anti-democratic and produces extreme inequality, maintained through state-sanctioned violence and the exploitation of vulnerable people, as well as of the natural environment. Even as more and more of us live in cities, we have almost no control over the processes that affect us so profoundly.


The right to the city, then, is not justified by some obscure and contested truth about human nature, but rather by the lived conditions of injustice. The right to the city is a claim for a more radically democratic urban politics, in which the denizens of a city are empowered to make decisions about common resources, to control and limit police and security forces, to distribute land and housing more equitably, and to build cities dedicated to supporting communities rather than generating profits. The right to the city is a challenge to the existing distribution of power; it is a demand for greater control.


This takes us back to my general reflection on rights: rights are a means of challenging and reconstructing the structures of social power, of redefining who has control of what in society. We will never discover a final or authoritative list of human rights, nor will we ever reach a lasting and complete consensus, despite the fervent efforts of many profound thinkers. Rather, human rights will always be contested, and will always be a tool of contestation. And this is as it should be, because at their best rights have a profound democratic potential: to increase the power each of us has over our lives, and to ensure greater equality by insisting that each of us counts in our collective life. That is a job never done.


Headline image credit: London, United Kingdom by NASA Goddard Space Flight Center. CC-BY-2.0 via Flickr.


The post Is there a human right to the city? Rethinking the politics of rights appeared first on OUPblog.



On the Singularity, emotions, and computer consciousness

The term ‘artificial intelligence’ was coined as long ago as 1956 to describe ‘the science and engineering of making intelligent machines’. The work that has happened in the subject since then has had enormous impact. Margaret Boden is a Research Professor of Cognitive Science at the University of Sussex, and one of the best known figures in the field of Artificial Intelligence. We put four key questions to her about this exciting area of research.


Could AI surpass human intelligence?

In principle, probably. That’s because there’s nothing magical about the mind/brain. It works according to (still largely unknown) scientific principles that could conceivably be simulated in computers. If AI could equal human intelligence, it could probably also surpass it.


Some people believe in ‘the Singularity’: a point at which AI will surpass human intelligence and ‘the robots will take over’. Some believers even predict that it will happen within a few decades. This notion is hugely controversial, and in my opinion, it’s nonsense. We are nowhere near simulating general human intelligence, which is what talk of the Singularity presupposes.


That doesn’t mean that we shouldn’t already be worrying about certain dangers posed by AI.


Could AI even equal human intelligence?

Yes, in principle. In practice, however, it probably won’t. The project is too difficult, and also too expensive.


Hugely increased computer power and data storage will certainly come. But they won’t be enough. We need powerful ideas, not just powerful hardware. Truly human-level AI would require a complete theoretical understanding of all aspects of human psychology.


Current AI can perform a host of tasks with an extraordinary level of success, often far beyond what unaided humans can do. But all of these tasks are highly specialist. In other words, today’s computer intelligence is very narrow. Or rather, there are lots of computer intelligences, each of which is very narrow.


A single, integrated intelligence, like that of human beings, was a prime goal of the AI pioneers of the 1950s and 1960s. They wanted to build systems that could use vision, language, learning, creativity, and motor control, all functioning across the board (i.e. with respect to many different sorts of problem), and all cooperating with each other when necessary.


We’re nowhere near that. Most (though not quite all) AI researchers have given up on that dream, turning instead to more specialized tasks.


Could a computer be “conscious”?

No-one knows, because the concept of consciousness isn’t well understood.


‘Functional’ consciousness covers various distinctions in information processing. For instance: attending to something versus ignoring it; being receptive to incoming stimuli versus being unaffected by them; thinking deliberately about something versus reacting unthinkingly; and so on. All of these could be modelled in AI.
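By way of illustration, here is a toy sketch of how just one of those distinctions, attending to a stimulus versus ignoring it, might be modelled. Everything in it (the salience scores, the threshold, the names) is invented for the example; it is not drawn from any actual machine-consciousness system.

```python
# A toy model of one functional-consciousness distinction:
# attending to a stimulus (deliberate processing) versus
# ignoring it (a fast, unthinking reaction). Purely illustrative;
# the threshold and salience values are arbitrary assumptions.

from dataclasses import dataclass

@dataclass
class Stimulus:
    label: str
    salience: float  # 0.0 (ignorable) .. 1.0 (demands attention)

ATTENTION_THRESHOLD = 0.6  # hypothetical cut-off for "attending"

def react(stimulus: Stimulus) -> str:
    # Unthinking, reflex-like response: cheap and immediate.
    return f"reflex response to {stimulus.label}"

def deliberate(stimulus: Stimulus) -> str:
    # Deliberate processing: a real system would involve planning,
    # memory retrieval, and so on.
    return f"considered response to {stimulus.label}"

def process(stimulus: Stimulus) -> str:
    if stimulus.salience >= ATTENTION_THRESHOLD:
        return deliberate(stimulus)   # attended to
    return react(stimulus)            # ignored by deliberation

for s in [Stimulus("ticking clock", 0.2), Stimulus("fire alarm", 0.95)]:
    print(process(s))
```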


Indeed, all these aspects of functional consciousness have already been modelled, up to a point. There are several interesting ‘machine consciousness’ systems. One example is LIDA—a complex project/program that’s based on a neuropsychological approach called Global Workspace Theory (GWT). LIDA is a ‘project’ in the sense that it is largely a verbally described plan for a computational system based on GWT. It’s also a ‘program’, in the sense that part of it has actually been implemented, and can be used for solving various sorts of problem.


For the record, the neuropsychologist who first formulated GWT had been inspired by ideas drawn from early AI, called ‘blackboard architectures’. This is one of many examples of mutual influence between AI and theoretical psychology and/or neuroscience.
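The mechanism shared by GWT and the blackboard architectures that inspired it, specialist processes competing to post content to a common workspace, with the winning content broadcast back to all of them, can be caricatured in a few lines. The sketch below is a deliberately minimal illustration of that competition-and-broadcast cycle under my own simplifying assumptions; it is not the LIDA program or any published implementation.

```python
# A caricature of the Global Workspace / blackboard idea:
# specialist processes post bids (content plus activation level);
# the strongest bid wins the workspace and is broadcast to every
# listener. Illustrative only -- not the LIDA architecture.

from typing import Callable

class GlobalWorkspace:
    def __init__(self) -> None:
        self.specialists: list[Callable[[str], tuple[str, float]]] = []
        self.listeners: list[Callable[[str], None]] = []

    def cycle(self, stimulus: str) -> str:
        # Each specialist proposes (content, activation); the most
        # activated proposal gains "access" to the workspace.
        bids = [spec(stimulus) for spec in self.specialists]
        winner, _ = max(bids, key=lambda bid: bid[1])
        for listen in self.listeners:   # broadcast to all processes
            listen(winner)
        return winner

ws = GlobalWorkspace()
ws.specialists = [
    lambda x: (f"vision: saw '{x}'", 0.4),
    lambda x: (f"threat-detector: '{x}' is dangerous!", 0.9),
]
ws.listeners = [lambda msg: print("broadcast:", msg)]
ws.cycle("tiger")
```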


But ‘phenomenal’ consciousness, or experience (the taste of sugar, the smell of perfume, the terror on seeing a tiger or a forest fire), is seemingly different. Hardly any philosophers (or psychologists) claim to understand what it is. And those few who do are believed by almost no-one else.


In short, this is a philosophical mystery, not just a scientific one. If we don’t even understand just what phenomenal consciousness is, nor how it’s possible for it to arise in human minds, we’re in no position to assert or to deny its possibility in computers.


Could AI machines have emotions?

Emotions usually involve conscious feelings. But they aren’t only feelings. In addition, they are information-processing mechanisms that have evolved in multi-motive creatures to schedule actions in the pursuit of various (and sometimes conflicting) goals.


In other words, emotions involve both phenomenal and functional consciousness.


And as phenomenal consciousness isn’t understood by philosophers, never mind scientists (see above), no-one knows whether an AI system could have emotions in that sense.


However, the functional aspect of emotions is, in principle, open to AI modelling.


Most AI models of emotion that exist today are theoretically shallow, because their builders don’t appreciate the functional complexities involved. Even those (few) researchers who do recognize this complexity can’t yet implement it fully in working systems.


The best example is a model of certain aspects of anxiety. A program called MINDER simulates a nursemaid looking after several babies, each of whom is an autonomous agent acting in largely unpredictable ways. She has to feed them, watch them, keep them safe from falling into ditches, and rescue them if they do. (A real nursemaid, of course, also has to clean them, cuddle them, speak to them, and so on.) But these goals can conflict: she only has two hands, and can’t be in two places at once. Some are urgent, some aren’t. Some are necessary, some can be ignored. Some can be reactivated after time passes, whereas some can’t. These aspects of the nursemaid’s anxiety-arousing situation are distinguished by the program, and her actions are scheduled accordingly.
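The kind of motive scheduling described here can be suggested in a short sketch. The code below is a simplified, hypothetical illustration of the idea, not the MINDER program itself: each goal carries an urgency, a flag for whether it is necessary or ignorable, and a flag for whether it can be reactivated after being deferred, and the agent simply acts on the most pressing surviving goal.

```python
# A simplified illustration of MINDER-style motive scheduling
# (not the actual program): goals differ in urgency, in whether
# they can be ignored, and in whether they recur, and the agent
# acts on the most pressing one first. All values are invented.

from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    urgency: float     # how soon it must be addressed
    necessary: bool    # can it be ignored altogether?
    reactivates: bool  # does it return after being deferred?

def schedule(goals: list[Goal]) -> list[Goal]:
    # Drop goals that are neither necessary nor likely to recur,
    # then order the rest: necessary goals first, most urgent first.
    live = [g for g in goals if g.necessary or g.reactivates]
    return sorted(live, key=lambda g: (not g.necessary, -g.urgency))

goals = [
    Goal("rescue baby from ditch", urgency=1.0, necessary=True,  reactivates=False),
    Goal("feed hungry baby",       urgency=0.6, necessary=True,  reactivates=True),
    Goal("tidy the nursery",       urgency=0.2, necessary=False, reactivates=False),
]

for g in schedule(goals):
    print("next:", g.name)
```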


Featured image credit: Tissue Network. CC0 public domain via Pixabay.


The post On the Singularity, emotions, and computer consciousness appeared first on OUPblog.


