Oxford University Press's Blog
September 10, 2016
Social work and suicide prevention, intervention, and postvention
Social workers regularly come into contact with those who are at risk of or exposed to suicide, through direct practice, as well as in family, group, and community roles. It would be reasonable to assume that social workers would have a well-established research base to inform practice with such vulnerable populations.
However, social work authors have been notably missing in the scholarly literature on suicide.
Recently, our research team set out to expand on Joe & Neidermeier’s (2006) review of social worker-authored literature in the field of suicide. What we found closely mirrored Joe & Neidermeier’s results, namely the scarcity of studies aimed at evaluating the effectiveness of interventions.
Approximately 33% of the articles included in our review were categorized as descriptive studies, meaning that they merely offered a description of the phenomenon of suicide. A majority of the articles were categorized as explanatory (57%), meaning that the study focused on explaining the relationship between variables or comparing groups. The explanatory and descriptive research published by social workers focused most often on young people, followed by psychotherapy, cultural/ethnic issues, bereavement, and attitudes toward suicide among professionals and the general public.
Only 10% of the articles were classified as control studies, meaning that the study examined a service intervention for suicide prevention, treatment, or postvention. As a result, there is a paucity of research evaluating the efficacy and utility of social work interventions for suicide prevention to guide direct practice.
Suicide rates have been rising for the last decade, and in the United States suicide is currently the 10th leading cause of death for adults and the 2nd leading cause of death for youth under the age of 24. These rates are similar across the globe in high- and middle-income countries, with some groups significantly more vulnerable. For example, in Australia, Aboriginal and Torres Strait Islander people are at higher risk, with twice the suicide rate of the non-Indigenous population. This is largely due to societal issues, including oppression and marginalization, that have affected these communities over long periods.
To reverse the trend, we need empirically supported interventions for those at risk of suicide and those suffering from the loss of a loved one to suicide, and to take into account the impact of suicide (attempt and death) on those who are exposed to others’ suicidal behaviors.
Social workers bring to the table an emphasis not only on the individual factors that may lead to suicide, but also the environmental influences that contribute to vulnerability.
Social workers work collaboratively in a variety of areas to meet the needs of clients and communities. Likewise, interdisciplinary efforts are needed in suicide research to tackle the challenge of suicide. Remaining committed to the core values of the social work profession while collaborating with other disciplines will be crucial. It is important that social workers contribute to the development, empirical testing, and dissemination of interventions, particularly interventions that are designed for marginalized and oppressed groups, as well as interventions that occur at the level of the environment, such as the family, peer group, or community. Interventions also need to address environmental risk factors that may predispose an individual to suicide.
Social workers should also lead the effort to research suicide among Indigenous and rural populations. In the US, Canada, and Australia, Indigenous suicide rates are higher than those of the general population. As the social work profession engages in efforts to right the wrongs of the past, we must contribute to the development of culturally sensitive interventions.
Featured image credit: “1800” by Nic Dayton. CC BY-SA 2.0 via Flickr.
The post Social work and suicide prevention, intervention, and postvention appeared first on OUPblog.

The impact of suicide: World Suicide Prevention Day and why suicide awareness matters
Each year over one million people worldwide die by suicide. In the United States, approximately 42,000 people die by suicide each year, with a suicide occurring every 12.3 minutes. It is the 10th leading cause of death overall, and the 2nd leading cause of death for youth under the age of 24. For World Suicide Prevention Day, we’d like to tell you why this matters to us and why it should matter to you.
A few things we’ve heard:
From a fellow parent at school: “I could not remember your email so did a quick google search and read through your profile. I wanted to say how much respect I have for the work you do regarding suicide prevention. My younger brother took his own life 2 ½ years ago and I know there are so many out there who have lost a loved one or know someone who has been affected by it.”
From a colleague: “I don’t think I told you before, but my brother took his life about 10 years ago.”
From a long-time family friend: “I wish I could attend the presentation you are giving. My mom killed herself, and I identify as a loss survivor.”
When someone dies by suicide, their family, loved ones, and communities are often forever changed. The long-held conventional wisdom is that suicide only really impacts close family members, and typically after a suicide, only those considered immediate kin are the recipients of sympathies and condolences. For a long time, this was thought to be only six people. It is #not6. Those who fall outside the circle of next-of-kin are frequently forgotten as grievers in the aftermath of a suicide death, but there is growing evidence that schools, workplaces, places of worship, and communities are also shaken by suicides.
We have developed a new continuum model which shows that a large portion of the population is exposed to suicide, meaning that they personally know someone who has died by suicide. In fact, our studies show that almost half of people report lifetime exposure to suicide: they personally know at least one person who has died by suicide. Of those exposed to suicide, some will experience a minor impact on their lives from the suicide, while further along the continuum of exposure others will go on to have short- or long-term bereavement, with the death impacting their lives in a devastating way.
Each suicide leaves behind as many as 130 people who report they directly knew the person. Of these, up to 25 people for each suicide are likely to experience a great deal of distress following the death and to need services to get through the intense emotions. In the US, this represents 4.7 million people annually who are exposed to suicide. There are approximately 20.65 million suicide loss survivors living in America who have lost someone in their immediate family to suicide (1 of every 15 Americans).
The impact of exposure to suicide deaths at the various points along the continuum is still largely unclear. What we do know is that suicide is a stigmatized death, and that many loss survivors struggle in the aftermath of the death. The stigma of suicide may mean that traditional sources of support and comfort are withdrawn from the loss survivor as those who would typically provide such support are uncomfortable or unsure of a proper response. Loss survivors may also experience self-imposed isolation, caused in part by a fear of judgment and negative reactions from others. There are so many people in our communities who have been directly affected by suicide but do not discuss it due to the shame and stigma individuals often feel about the suicide. The comments above come directly from people we’ve known in our extended personal networks, who perhaps only shared their loss experience because they knew the door to the conversation was open. Due to the unique nature of suicide bereavement, loss survivors are at an increased risk of depression, anxiety, posttraumatic stress disorder, and suicide.
We also know that suicide affects everyone – white, middle-aged males have the highest rates of suicide in the US, but suicide cuts across all demographic factors. While fear of negative reactions may prevent those exposed to suicide from sharing their experience of loss, the reality is that recognizing they aren’t alone in the loss, feeling connected to others, and learning that their grief reactions are normal are important elements of the healing journey for loss survivors.
Discussion about exposure needs to include attempt survivors, those who have survived their own suicide attempt. Estimates suggest that there are 25 suicide attempts for each suicide death, which translates to 1.1 million adults who attempt suicide each year. A previous attempt increases the risk of future attempts, though it is important to remember that hope and healing are possible. Because so many more people attempt suicide than die by suicide, an even larger share of the population is exposed to suicide attempts. Because suicide attempt survivors are still with us, living in our homes and communities, exposure to attempt survivors is important to suicide prevention and to helping ensure that further losses to suicide don’t happen. Our colleague Dese’Rae Stage is helping to show the wide variety of attempt survivors, and we are working with her to understand what we can learn from them.
What can you do? First, talk about suicide. Don’t be afraid to tell people that you knew or loved someone who attempted or died by suicide. Even if you don’t know someone personally, don’t be afraid to talk to people when you find out they have experienced a loss to suicide. Their grief is probably difficult and they might be used to getting hurtful answers or hiding their loss. Being open and honest about suicide helps people realize that this leading cause of death is something that affects so many of us. It is #not6.
Featured image credit: Worried Girl by Ryan McGuire. Public domain via Pixabay.
The post The impact of suicide: World Suicide Prevention Day and why suicide awareness matters appeared first on OUPblog.

Is Shakespeare racist?
The following is an extract from the General Introduction to the New Oxford Shakespeare, and looks at the way race is presented in Shakespeare’s work.
Just as there were no real women on Shakespeare’s stage, there were no Jews, Africans, Muslims, or Hispanics either. Even Harold Bloom, who praises Shakespeare as ‘the greatest Western poet’ in The Western Canon, and who rages against academic political correctness, regards The Merchant of Venice as antisemitic. In 2014 the satirist Jon Stewart responded to Shakespeare’s ‘stereotypically, grotesquely greedy Jewish money lender’ more bluntly: ‘Fuck you, Shakespeare! Fuck you!’ (The Daily Show, 2014).
Reactions to Shakespeare’s portrayal of black men have been just as visceral. In 1969 the African-American political activist H. Rap Brown recorded that, as early as high school, he ‘saw no sense in reading Shakespeare’. Why? Because ‘After I read Othello, it was obvious that Shakespeare was a racist.’ (Brown, H. Rap, Die, Nigger, Die! A Political Autobiography of Jamil Abdullah Al-Amin).
In 2001, a committee of teachers in an important South African province wanted to ban Shakespeare from schools because he ‘failed to promote the rejection of racism and sexism’. In 2015 a black activist called for ‘the racist William Shakespeare’ to be completely banned from schools in Zimbabwe.
In 2016, undergraduate English majors at Yale University petitioned to eliminate the monopoly of “white male poets” (including Shakespeare) in a compulsory introductory course. Hindu nationalists in India want to ban the teaching of Shakespeare, first imposed on the country by England’s oppressive colonial rule.
Yet against every one of these indictments, Shakespeare’s attorneys can summon a long list of witnesses for the defense.
Olaudah Equiano (1745-97), the former African slave who became an abolitionist and the first best-selling black Anglophone writer, quoted Shakespeare more than any other author.
William Wells Brown (1814-84), born into slavery in Kentucky, author of the first published novel and the first published drama by an African American, visited Stratford-upon-Avon and used quotations from Shakespeare to frame his play The Escape and several chapters of his books.
Frederick Douglass (1818-95), the escaped slave who became the most influential African-American of the nineteenth century, when asked in 1892 to name his favourite authors, listed Shakespeare first; in a reading at the Uniontown Shakespeare Club in December 1877 Douglass took the part of Shylock in The Merchant of Venice; a painting of Othello and Desdemona hung over the fireplace in his Cedar Hill home.
C. L. R. James (1901-89), the West Indian Marxist and anti-colonial activist, had a lifelong interest in Shakespeare, and argued that Shakespeare, among creative artists, was ‘the most political writer that Britain has ever seen’, and ‘no racist’.
Nelson Mandela (1918-2013) and other anti-apartheid activists imprisoned on Robben Island shared a Complete Works of Shakespeare; Mandela signed his name opposite ‘Cowards die many times before their deaths; The valiant never taste of death but once’ (Julius Caesar 2.32-3), and in his autobiography recalled the prisoners’ minimalist performances with ‘no stage, no scenery, no costumes’ but only ‘the text of the play’.

Do such testimonials prove that Shakespeare was not racist? Do they prove that the original performances of Othello, Titus Andronicus and The Merchant of Venice, by white male actors for white audiences, were a critique of early English prejudices about blacks, Jews, and Spaniards? No. But they do prove that outstanding members of persecuted groups have found Shakespeare useful to their own imaginative and political work, to the creation of their own identities, and to the project of transforming an imperfect world.
Even Shakespeare’s critics, from Greene/Chettle/Nashe to Anne Tyler, connect their own work to his.
Zadie Smith’s best-selling, prize-winning novel White Teeth contains a scene in which a student asks her teacher whether the ‘dark lady’ in Shakespeare’s Sonnets is ‘black’. Her teacher answers, ‘She’s not black in the modern sense’, because there weren’t any ‘Afro-Carri-bee-yans in England at that time’; or, if there were, such a ‘black’ woman would have been ‘a slave of some kind’.
This condescending pedagogical put-down has its intended effect: the student blushes with embarrassment, then retreats into indifference. ‘She had thought, just then, that she had seen something like a reflection,’ as though Shakespeare’s sonnets to and about a black woman might be addressing her own experience, but that apparent connection ‘was receding’.
The teacher here is not a sympathetic character, and she is wrong, historically: we now know that there were ‘black’ people in Tudor England, and especially in London. Most of them were inconspicuous, but none of them were slaves: indeed, in Shakespeare’s lifetime the Protestant English proudly distinguished themselves from the Spanish because, unlike their Catholic rivals, they did not enslave people, and slavery had no legal status or enforcement mechanisms (yet). None of Shakespeare’s indisputably black characters are slaves.
Smith’s own attitude to Shakespeare’s sonnets remains opaque in this scene. What is clear, however, is that the fictional student, involuntarily, and the real author, voluntarily, are engaging with, and in some way connected to, Shakespeare. In a 2013 essay on her novel NW, Smith reported that one of her inspirations for that novel was Measure for Measure, and in particular a performance she had seen when she was in school, in which Claudio was played by a black actor and his sister Isabella by a white one.
Shakespeare’s most politically incorrect plays – Taming, Othello, Merchant of Venice – have become some of his most popular, in theatres and classrooms, precisely because of the controversies surrounding them. Controversy just generates more interest, more dialogue, more connections.
The post Is Shakespeare racist? appeared first on OUPblog.

Is the mind just an accident of the universe?
The traditional view puts forward the idea that the vast majority of what there is in the universe is mindless. Panpsychism however claims that mental features are ubiquitous in the cosmos. In a recent opinion piece for “Scientific American” entitled “Is Consciousness Universal?” (2014), neuroscientist Christof Koch explains how his support of panpsychism is greeted by incredulous stares–in particular when asserting that panpsychism might be the perfect match for neurobiology (see also his piece for Wired in 2013):
“As a natural scientist, I find a version of panpsychism modified for the 21st century to be the single most elegant and parsimonious explanation for the universe I find myself in. … When I talk and write about panpsychism, I often encounter blank stares of incomprehension.” (Koch, 2014, n.p.)
Yet despite abundant skepticism, since the end of the 20th century panpsychism has seen nothing short of a renaissance in the philosophy of mind – a trend which is also beginning to be mirrored in the sciences: physicist Henry Stapp’s “A Mindful Universe” (2011) embraces a version of panpsychism heavily influenced by the works of Harvard mathematician and philosopher Alfred North Whitehead.
Panpsychism has a long, albeit unfortunately sometimes forgotten tradition in the history of philosophy. Philosophers including Giordano Bruno, Gottfried Wilhelm Leibniz, Teilhard de Chardin, and Alfred North Whitehead have embraced different forms of panpsychism, and indeed the presocratic Thales of Miletus claimed that “soul is interfused throughout the universe” (Aristotle, De Anima, 411a7).
In his seminal 1979 work “Mortal Questions,” NYU philosopher Thomas Nagel put forth the idea that both reductive materialism and mind-body dualism are unlikely to be successful solutions to the mind-body problem. Specifically, a reductive world-view leaves the mind lacking any purpose, while a dualist conception deprives the non-spatial Cartesian mind of any connection to spatial matter. Additionally, the idea of an emergent mind seems inexplicable, even miraculous; it merely puts a label on something that otherwise remains completely mysterious. Thus some version of panpsychism might be a viable alternative – and may even be the “last man standing.”
Yet it was not until David Chalmers’s groundbreaking “The Conscious Mind” (1996) that debates on panpsychism entered the philosophical mainstream. The field has grown rapidly ever since.

Panpsychism is the thesis that mental being is a ubiquitous and fundamental feature pervading the entire universe. It rests on two basic ideas:
(1) The genetic argument is based on the philosophical principle “ex nihilo, nihil fit”–nothing can bring about something which it does not already possess. If human consciousness came to be through a physical process of evolution, then physical matter must already contain some basic form of mental being. Versions of this argument can be found in both Thomas Nagel’s “Mortal Questions” (1979) as well as William James’s “The Principles of Psychology” (1890).
(2) The argument from intrinsic natures dates back to Leibniz. More recently it was Bertrand Russell who noted in his “Human Knowledge: Its Scope and Limits” (1948):
“The physical world is only known as regards certain abstract features of its space-time structure – features which, because of their abstractness, do not suffice to show whether the world is, or is not, different in intrinsic character from the world of mind.” (Russell 1948, 240)
Sir Arthur Eddington formulated a very intuitive version of the argument from intrinsic natures in his “Space, Time and Gravitation” (1920):
“Physics is the knowledge of structural form, and not knowledge of content. All through the physical world runs that unknown content, which must surely be the stuff of our consciousness.” (Eddington, 1920, 200).
Panpsychism is a surprisingly modern world-view. It might even be called a truly post-modern outlook on reality–mainly for two reasons:
On the one hand, panpsychism bridges the modern epistemological gap between the subject of experience and the experienced object, the latter of whose intrinsic nature is unknown to us. Panpsychists claim that we know the intrinsic nature of matter because we are familiar with it through our own consciousness. Freya Mathews argues in her “For the Love of Matter” (2003):
“… the materialist view of the world that is a corollary of dualism maroons the epistemic subject in the small if charmed circle of its own subjectivity, and that it is only the reanimation of matter itself that enables the subject to reconnect with reality. This ‘argument from realism’ constitutes my defense of panpsychism.” (Mathews, 2003, 44)
On the other hand, panpsychism paints a picture of reality that emphasizes a humane and caring relationship with nature due to its fundamental rejection of the Cartesian conception of nature as a mechanism to be exploited by mankind. For the panpsychist, we encounter in nature other entities of intrinsic value, rather than objects to be manipulated for our gain.
We’d like to end this post with an interview with David Chalmers discussing panpsychism at the Emergence and Panpsychism – International Conference on the Metaphysics of Consciousness held in Munich, Germany in 2011, which brought together almost all the major players in the current debate. You can watch interviews with attendees in our conference playlist.
The post Is the mind just an accident of the universe? appeared first on OUPblog.

September 9, 2016
What do we talk about when we talk about ‘religion’?
Let us start at the Vatican in Rome. St. Peter’s Basilica has a strict dress code: no skirts above the knee, no shorts, no bare shoulders, and you must wear shoes. At the entrance there are signs picturing these instructions. To some visitors this comes as something of a surprise. Becky Haskin, age 44, from Fort Worth, Texas, said: “The information we got was that the dress code only applied when the pope was there.” Blocked on her first attempt, she bought a pair of paper pants and a shawl. “It was worth it,” she commented. Other special places are marked in a similar way. When visiting the Vietnam Veterans Memorial in Washington, D.C., you will see signs like “no smoking, no food or drink, no bikes, no running.” Apparently, it cannot be taken for granted that visitors know how to behave in such spaces. Their special character is marked by prohibition signs.
The extraordinary character of such places is often characterized by saying that they are sacred or holy. But what exactly does it mean to call a particular space “sacred”? One of the most famous scholars in the history of the study of religion who has tried to answer this question is the French sociologist Emile Durkheim. In his classical work The Elementary Forms of Religious Life (1912), he wrote that all religions have a common feature:
They presuppose a classification of the real or ideal things that men conceive of into two classes […] that are widely designated by two distinct terms profane and sacred.
After explaining that not only gods and spirits, but also rocks, trees, houses, etc. can have a sacred character, he asked: how, then, are sacred things distinguished from profane things? Durkheim’s final answer is that the relation between the sacred and the profane must be defined by their heterogeneity, which is absolute. There is no other example of two categories of things as profoundly different from or as radically opposed to each other. Yet this does not mean that a thing cannot pass from one of these worlds to the other. Initiation rites are an example: the change of status accomplished by such rituals is radical. It is a fundamental transformation: after the initiation, the young man or woman gets a completely new status as a full member of the clan.
What strikes me most is that the distinction between the sacred and the profane is, first of all, a structural one and seems to lack any kind of content: there seem to be no special features that are characteristic of the sacred, or for that matter the religious. Even saying that the sacred is characterized by enormous power does not add much content. In line with this, scholars have stressed – and rightly so – that nothing is inherently sacred, and that “sacred” is best regarded as a linguistic or classificatory device. Something is called sacred, and thereby (if the speech act succeeds) becomes sacred. This, again, implies that the sacred can be contested. Did the consecration succeed? Is this really a sacred site, and for whom? And at what times (as the example of the woman from Texas shows)? No doubt, some elements – such as burning candles, inscriptions, and flowers – are frequently associated with ‘the sacred’ and are used to sacralize, but these are not in themselves things which make a spot, a person, or a deed sacred. Something becomes sacred by the elusive act of making it sacred.
On closer inspection, the absolute dichotomy is not completely convincing. In the older books on “primitive” and ancient religions, scholars were looking for particularly “strong” examples of religion, which would reveal the essence of religion. The sacred, however, is no absolute value, but a value that indicates specific situations. The fact that “sacred” is a relational notion explains why various levels or grades of sacredness may be possible. With the ongoing diffusion of established religions into more fluid types of religiosity, ‘the sacred’ has lost its strong contours as well.
The American historian R. Laurence Moore has argued that in the USA “sacred” and “secular” are mixed, and in his essay Touchdown Jesus he states: “religion is about something else.” Religion is always connected with attitudes and practices that we would not call “sacred.” Conservative forms of Christianity are also about family values, opposition to abortion, and the right to carry a gun. New Agers may define themselves by organic gardening or a vegetarian diet. And thus it becomes harder and harder to distinguish “sacred” and “profane.” “Religion” is “mixed up” with other areas such as law, politics, sport, and sexuality. I would even claim that the interrelationship between the religious and the non-religious is one of the most important themes in the future study of religion.
Featured image credit: “Faith” by Thomas Leuthard. CC BY 2.0 via Flickr.
The post What do we talk about when we talk about ‘religion’? appeared first on OUPblog.

Fifteen years after 9/11
Fifteen years after the devastation in lower Manhattan that is routinely referred to as 9/11, the site that was once Ground Zero is unrecognizable. The Twin Towers have been replaced by Michael Arad’s memorial Reflecting Absence, anchored by two voids in part of the space once filled by Minoru Yamasaki’s skyscrapers. Adjacent to it is the September 11 Memorial Museum by Davis Brody Bond, which abuts the busy downtown thoroughfare of Greenwich Street. Once again it is a bustling site, filled with tourists and those who work in the surrounding buildings and use it as a crosswalk; if anything it is busier than the open plaza that once stood between the towers and featured, among other sculptures, Fritz Koenig’s The Sphere (damaged in the attack and displayed for a time as a relic in Battery Park). Standing in the midst of this flurry, it is hard to remember the scene of the crime that wiped out so much of lower Manhattan. It has been effectively erased, sudden death replaced by teeming life. The memory lingers on, however, not prompted by the site that originally defined it, but by both the memorial and museum that in various ways reenact the experience.
Staring into Michael Arad’s voids, with the sound of the waterfalls that line their perimeters roaring in your ears, you confront a black empty hole. The effect is one that mimics the sound and experience prompted by the crashing buildings – over and over again. This is emphatically reinforced by the museum in many places. Visitors descend into the main space where the first experiential installation is a cacophony of sounds recorded on the day of the attacks. Further on you encounter a series of relics, building parts that survived and took on an iconic significance (the survivor staircase; the slurry wall; the last column removed from Ground Zero; the beam that became known as the survivor cross; a section of beams that Philippe de Montebello, then director of the Metropolitan Museum of Art, suggested would make a fitting memorial). Nearly everywhere there are reminders that one is standing in a place that many consider a literal cemetery, especially since actual remains of unidentified bodies are contained behind a wall fronted by a grid of blue squares. Spencer Finch’s Trying to Remember the Color of the Sky on That September Morning reproduces the various shades of blue that defined what seemed for a time a perfect New York late summer day.

To mark the fifteenth anniversary of the attacks the museum will mount its first temporary art exhibition, inaugurating a special gallery built for that purpose, thereby expanding its focus from the historical artifacts that have thus far defined it to contemporary art that responds to the attacks. “Rendering the Unthinkable: Artists Respond to 9/11” will open on 12 September 2016 and is, according to director Alice M. Greenwald, “a way to bring people back to the museum for a second time, and it’s a way to bring people in who might not choose to come otherwise.” The works by 13 artists will include Eric Fischl’s Tumbling Woman (2002; Whitney Museum of American Art), which was previously installed in Rockefeller Center’s indoor underground concourse around the time of the first anniversary of the attacks. Because it suggested images of falling bodies that had previously been largely suppressed in the media, it prompted cries for its removal that were successful; the sculpture was on display for just a week. Fischl’s comments initially appeared inconsistent. At one time he noted that he began working with this image a year earlier when he took photographs of a model tumbling around on a studio floor; he also said that he created the work in response to the death of a friend who had worked on the 106th floor of one of the towers, “by making a monument to what he called ‘the extremity of choice’ faced by the people who jumped.” Elsewhere he associated the pose of the work with the falling sensation that often occurs just before waking. Just recently, however, he declared that the work “was meant to represent those who fell or jumped from the towers in what he called ‘the clearest illustration of the level of horror’ that day, as well as his sense that the country had become less sure-footed after the attacks.” But he now sees an element of hope in the extended arm of the woman because he “‘had this fantasy that if this sculpture is out in public people will reach out and grab the hand’…in an attempt to connect and also maybe to slow the tumbling down.” In this wish to somehow halt the events of the day, however briefly, maybe even changing their outcome, he reminds us of the Tribute in Light, the temporary twin beams initially created by Julian La Verdiere and Paul Myoda that lit up the night sky first on the six-month anniversary of 9/11 and every anniversary thereafter. Evoking the missing towers, the beams, if closely observed, reveal the particles of dust that seem to be the basis of their composition, returning us again to the memory of so many and so much being cremated in an instant before cameras that carried these images to the far corners of the world. Reenactment, it seems, defines our memorial experience, but isn’t it time to refocus on lessons learned? Where do you think we should begin?
Featured image: 9/11 Memorial. Photo by Christopher Michel. CC BY 2.0 via Wikimedia Commons.
The post Fifteen years after 9/11 appeared first on OUPblog.

Africa-based scholars in academic publishing: Q&A with Celia Nyamweru
In an effort to address current discussions regarding Africa-based scholars in academic publishing, the editors of African Affairs reached out to Celia Nyamweru for input from her personal experiences. Celia Nyamweru spent 18 years teaching at Kenyatta University (KU) and another 18 years teaching at a US university with a strong undergraduate focus on Africa.
This post follows a previous post written by Ryan C. Briggs and Scott Weathers, regarding their recent research highlighting the effect of gender and location in African Politics scholarship.
What role might the ‘brain drain’ of some of the best African scholars to the US and elsewhere play in academic publishing in Africa?
It is hard to tell. This subject would very much benefit from a study of the career paths of individuals. From my own KU experience, two of our most promising historians (Tabitha Kanogo and Paul T. Zeleza) both moved to North America quite early in their careers and have had productive professional careers with lots of recognition and publication. On a personal level, my own move from KU to the USA in 1991 opened up great opportunities for conferences, participation at the committee level in the (US) African Studies Association, research funding, and eventually publications – which I doubt I would have had, had I continued in Kenya.
How do the daily pressures and tasks required of professors in African universities affect their ability to research, write, and publish in high ranking journals?
Promotion to administrative positions: One early challenge in the 1960s and 1970s was the quick promotion to administrative positions (i.e. Department Chair and Dean – and higher) that faced many of the young African university faculty as they took over from the expatriates who had been there earlier. This gave access to power and higher salaries, but took time away from teaching and research.
The temptation of paid consultancies: such offers can be very appealing in view of the relatively generous per diem and stipends – but it means that the research agenda/question is set by the employer and tends to be more of a pragmatic, problem-solving one that does not easily convert into the kind of scholarship that western academic journals require/prefer.
Heavy teaching loads: Especially for younger faculty, heavy teaching loads take time away from research. For example, in Kenya there has been a significant increase in the number of new colleges across the country. Junior faculty often travel from one campus to another to earn extra money teaching on a part-time basis. Grading loads can be extremely heavy as well. For example, my last class at KU had 510 students and the exam format was a two-hour written paper with three or four essay questions!
Local journals: In the 1970s, there were a number of locally-based scholarly journals in East Africa that could provide a first step on the publication ladder before the big international journals. Examples might be the Uganda Journal, the Kenyan Geographer, the East African Geographical Review, Tanganyika Notes and Records – and I am sure there are other examples. And now – how many of these journals are still published?
Political pressures: African scholars have often faced political pressure to avoid certain topics or go for safely ‘neutral’ topics. In Kenya this was probably worst during the 1980s under Moi’s regime. One Kenyan historian even authored a (positive) book on Moi’s ‘Nyayo Philosophy.’ Some scholars were detained and/or left the country, including the historian Maina wa Kinyatti.
From your experience, what are the challenges that African-based scholars face in publishing articles with highly ranked African studies journals?
Lack of access: African scholars often lack access to recently published articles and books – even with internet access I doubt that most African universities can afford the subscriptions to the expensive online journals, and books are even more difficult to obtain.
Social networks: Recent generations of African scholars may not have had the same opportunity to develop social and professional networks with scholars in Europe and North America. Many of the first generation of African scholars at African universities completed their higher degrees in Europe or North America before returning to Africa to teach. Later generations of African scholars are more likely to have done their higher degrees at African universities and may not have had the same interaction with those from outside the continent. This can lead to a gradual weakening of their contact with (a) ‘main-stream’ scholarship in Europe and North America and even (b) the English language as written and spoken in scholarly discourse. Even if the internet now provides access to both language and scholarship, it cannot fully replace the influence of individual mentors and networks.
What do you think can and should be done to support African-based scholars in publishing in top-ranked journals?
Funding: More support for academic research that is not NGO/consultancy directed – this means funding for the actual fieldwork/lab work and time to conduct research. Back in the 1970s, KU had research funds of its own that were administered through the Deans’ Committee, and all university faculty were free to apply through their department chairs. I am not aware of how well this system still works, or whether most research funding comes through big externally funded projects. I remember a recent conversation with an employee of the National Museums of Kenya who told me that there are virtually no research funds available from the Museums’ own resources – staff are supposed to set up relationships with foreign scholars in order to work jointly, with the funds coming from overseas. This obviously relates to the question of ‘power dynamics.’
Travel programs: Medium term visits to North American/European universities – such as the fellowships offered through the Cambridge University African Studies Institute in the early 2000s – for about 5 to 6 months, with living expenses covered, computer access, use of the university library system, etc. I personally benefited greatly from one of these in 2004/2005. Or even possibly shorter (two to three week) workshops where scholars can bring their work in progress for critique and advice from people experienced in the publishing and editing field. This could be combined with internet-based communication but I think the face-to-face contact, discussions with others, and then time to work on one’s own project would also be very beneficial.
Featured image credit: Jomo Kenyatta University Juja Campus Main Library by Stephenwanjau. CC-BY-SA-3.0 via Wikimedia Commons.
The post Africa-based scholars in academic publishing: Q&A with Celia Nyamweru appeared first on OUPblog.

Leibniz and Europe
At the turn of the seventeenth and eighteenth centuries, national states were on the rise. Versailles was constructed as a stage on which the Sun King, Louis XIV, acted out the pageant of absolute sovereignty, while his armies annexed neighbouring territories for the greater glory of France. At the death of Charles II of Spain in November 1700, the Spanish throne and its extensive possessions in Italy, the Low Countries, and the New World passed to his grandson, Philip, Duke of Anjou. To the east and south, the Ottoman Empire, which already controlled most of the Balkans, Greece, Turkey, the Middle East, and North Africa, once again threatened Vienna. To the north, Sweden consolidated its empire on the shores of the Baltic while the union of the crowns of England and Scotland in 1707 under Queen Anne established Great Britain as a single kingdom.
At the heart of Europe, by contrast, lay a hugely complex and fragmented political entity which resisted the ‘modernizing’ trend of national state formation, and preserved medieval arrangements conceived as rooted in antiquity: the Holy Roman Empire. After three decades of bloodshed retrospectively known as the Thirty Years War (1618-1648), the Empire had achieved a somewhat precarious equilibrium in which hundreds of semi-autonomous imperial estates co-existed under the loose authority of an emperor and a college of princes. Though disparaged as a multi-headed monster by many (including his countryman Samuel Pufendorf), the Holy Roman Empire remained for Leibniz a preferable alternative to national and absolutist states. In his mind, the Empire offered an ideal of shared sovereignty in which limited territorial autonomy could be combined with a central imperial authority, and the main Christian confessions could cohabit peacefully in a balanced, representative Reichstag. Alongside his more famous works on logic, metaphysics, and mathematics, Leibniz wrote innumerable memos and proposals advising rulers on how to strengthen and re-order the Empire into a stable, supra-national political structure which could protect and promote common interests while maintaining local self-determination in territories and imperial free cities. In short, Leibniz regarded political unity in diversity under a supra-national authority as a better path to peace, prosperity, and stability in Europe than the ascendancy of competing national states.
Leibniz’s political vision swam against the main current of European history for hundreds of years. Within a century of his death, the Holy Roman Empire disintegrated before the onslaught of an even more powerful and aggressive French ruler, Napoleon Bonaparte. A century later, the nationalistic aggression of the Second and Third German Reichs provoked two devastating world wars. In the midst of the second of these wars, two intellectuals imprisoned on the tiny Italian island of Ventotene by Mussolini’s Fascist state, Altiero Spinelli and Ernesto Rossi, drafted a manifesto calling For a Free and United Europe organised on federal principles. Last month, the Italian Prime Minister Matteo Renzi invited the German Chancellor Angela Merkel and French President François Hollande to Ventotene, where they placed a wreath at Spinelli’s tomb and reviewed the policy of the European Union in the aftermath of the Brexit referendum.

The political debates of Leibniz’s day – the period in which the nation state was emerging as the dominant form of political organisation in Europe – have a particular resonance this summer, in which the struggle between federal and national principles is being fought out in such momentous fashion. But for Leibniz, politics was part of a comprehensive system of thought, which involved logic and mathematics, epistemology and metaphysics, science and theology. There are few thinkers alive today whose political thought is rooted in an intellectual system of remotely comparable breadth. This is part of the reason why political thinking today is dominated by a tendency to reduce democratic politics to market economics, and human nature to the rational maximization of material self-interest. Perhaps the best reason for reading Leibniz this autumn is as a means of gaining broader historical and philosophical perspectives on the crises of our time.
Featured image credit: Gottfried Wilhelm Leibniz c1700 by Johann Friedrich Wentzel. Public domain via Wikimedia Commons.
The post Leibniz and Europe appeared first on OUPblog.

September 8, 2016
Rebuilding the Houses of Parliament: Victorian lessons learned
“What a chance for an architect!” Charles Barry exclaimed as he watched the old Palace of Westminster burning down in 1834. When he then went on to win the competition to design the new Houses of Parliament he thought it was the chance of a lifetime. Instead it turned into the most nightmarish building project of the nineteenth century. What ‘lessons learned’ might the brilliant classical architect draw up today based on his experiences?
Establish who is the client from the start
The source of many of Barry’s problems, and the biggest challenge he faced, was ambiguity over his accountability. The creation of a new Houses of Parliament was commissioned by the Board of the Office of Woods, the body in charge of royal palaces and public works in the 1830s and 1840s. This should have been Barry’s client throughout. Instead, during the rebuilding, Barry had to deal with others in government, in Parliament, and even among his contractors, who behaved as if they were the client and pressured him to treat them as such, leading to conflicting instructions from different stakeholders, ‘scope creep’, and acrimonious disputes that spilled over into the press.
Ensure you have strong governance that will last
The rebuilding, prolonged by the difficulties Barry faced, took so long and was so complex that the Office of Woods couldn’t cope. From an initial estimate of six years to complete the works, it actually took twenty-four years to finish – and a further ten years to complete what today would be called the ‘snagging’. In the late 1840s a Royal Commission was set up – a further competing client – to oversee the budget and approve changes to the plans. After three years it threw in the towel in despair, and the Office of Woods collapsed under the strain in 1851, to be replaced by a new government department, the Office of Works. Prime Ministers came and went, and in the end Barry was the only constant person throughout the rebuilding.
Allow for contingency in your project plan
The construction of the foundations over gravel beds and quicksand on the land side, and into the riverbed on the other was hugely difficult and time consuming. The first contractors were sacked for slow progress and then the second contractors, Grissell & Peto, well-known for their work on various railway projects, ended up provoking a major strike when their foreman cracked down on the stonemasons to make up for lost time. The changing requirements, suspensions of work caused by select committee inquiries, problems with the stonework, the extraordinary detail and manufacturing of the gothic finishes and furnishings, and last but not least the cracking of the Great Bell – not once but twice – all led to further delays.

Keep a tight hold on the budget
The finances got way out of control because of the extension of time required. Originally planned to cost £710,000, the building eventually came in at £2.4m. Quantity surveying – that is, building accountancy – was in its infancy at the time, and even though Barry used the best professionals available to estimate progress, there were arguments with the Office of Woods over when payments should be made to contractors, austerity cuts, and even disputes over the architect’s own salary.
Have a great media strategy
Charles Barry was a very savvy operator when it came to communications. He gave The Illustrated London News exclusive access to the Palace as it was constructed. Its artists were able to tour the building and make sketches for the paper – founded in 1842 – as each part of the building was opened, in return for favourable copy and spectacular publicity such as when the House of Lords chamber opened in 1847. Barry published his own plans of the Palace for the public in a bestselling volume, and when things got tough and he was criticised in the letter columns of the papers by disgruntled contractors, he was swift to rebut their claims in print.
Find a celebrity champion
Barry wisely cultivated a variety of prominent royals to promote the project, and also to keep his spirits up. The most famous was Prince Albert, who had been appointed as Chair of the Fine Arts Commission in 1841, responsible for all the painting and sculpture in the new Palace. Many royals from outside Britain also came to visit as the works progressed – the Tsar of Russia was overwhelmed by the building, calling it ‘a dream in stone’, and sent Charles Barry a gold and diamond snuffbox through the Russian ambassador afterwards as a thank-you present.
Get your experts in a row
Barry’s practice was to use the best people he possibly could to help him achieve his aim of creating the most important building in the country. Of these, AWN Pugin is the most famous – Barry’s collaborator on the gothic detailing of the exterior and the designer of most of the interiors and the furnishings. But there were many others, including James Walker, the engineer he employed to work out how the Palace could be built into the Thames; Michael Faraday, the scientist who was asked for advice on the air-conditioning; and William ‘Strata’ Smith, the geologist who helped to choose the stone. However, some experts proved very difficult to manage, particularly David Boswell Reid, ‘the Great Ventilator’, and Edmund Beckett Denison, the horologist who designed the clock mechanism. Their inability to get their own way with Barry on their personal schemes and inventions led to rows in the press, litigation, threats of libel, and yet more delay.
Take time out for yourself
Finally, Barry kept to a regular and balanced routine throughout his career. He rose before dawn and worked for four hours, before breakfast at eight. Occupied with business in his office at Westminster (now the Sports and Social Club of the Palace of Westminster) during the day, he liked nothing better than to return to his family in the evening and have dinner at six or seven. He then had a quick refreshing snooze, chatted with his wife and children, played the flute, or read until eight, had a cup of tea, and then carried on with work until midnight. This highly-effective regime sustained him in the first ten years of his involvement with the Palace. When things got tough he turned vegetarian to help with the stress. He also took occasional holidays to unwind, especially in mountain scenery, and Paris was a favourite city which he visited on several occasions.
Featured image credit: Houses of Parliament by tm. CC BY 2.0 via Wikimedia Commons.
The post Rebuilding the Houses of Parliament: Victorian lessons learned appeared first on OUPblog.

10 facts about the recorder
You might associate the recorder with memories of a second grade classroom and sounds vaguely resembling the tune of “Three Blind Mice” or “Mary Had a Little Lamb.” While the recorder has become a popular instrument in music education, it also has an extensive and interesting history. As our thoughts once again return to the school year, let us learn more about the structure and performance capabilities of the recorder with our ten facts:
1. Recorders come in a number of sizes. The four most commonly played today – descant, treble, tenor, and bass – roughly correspond to the four principal voice parts – soprano, alto, tenor, and bass.
2. The term ‘recorder’ was first used in reference to a musical instrument in the year 1388, when it was listed as a part of the household of the Earl of Derby (who later became King Henry IV).
3. In many European languages, the word for recorder was the same as the word for flute.
4. The details of the construction of a recorder have changed drastically throughout history. However, its principal characteristics – whistle mouthpiece, seven finger-holes, thumb hole – have always remained the same.
5. The earliest known version of a recorder is a 14th-century instrument found in Göttingen, Germany. It is 256 mm in length, and made from a single piece of plumwood. The design suits both left- and right-handed players thanks to widely spaced double holes for the bottom finger.
6. During the 16th century, recorders became a staple instrument of professional wind players and were possessions of many upper-class households and palaces in Europe. Some members of the upper class even tried their own hand at the recorder. It then became a popular amateur instrument among the middle class as well.
7. During the 17th century, or early Baroque period, recorders were constructed in three parts, called joints: the head, middle, and foot. The middle section had seven finger-holes while the foot had only one.
8. After 1750, the popularity of the recorder declined and it was not often found in musical repertoire. However, the turn of the 20th century brought a revival of the instrument in a variety of different musical styles ranging from avant-garde and theatrical to minimalist and microtonal.
9. Several attempts have been made to modernize the structure of the recorder. The ‘midified blockflute,’ created by Michael Barker, is one which seeks to combine the traditional recorder with synthesized sounds.
10. Recorders have been a valuable asset to music education since the Renaissance and Baroque periods.
Headline Image Credit: Recorder by Shunichi Kouroki. CC BY 2.0 via Flickr.
The post 10 facts about the recorder appeared first on OUPblog.

