Oxford University Press's Blog, page 765
September 8, 2014
10 reasons why it is good to be good
The first question of moral philosophy, going back to Plato, is “how ought I to live my life?”. Perhaps the second, following close on the heels of the first, can be taken to be “ought I to live morally or not?”, assuming that one can “get away with” being immoral. Another, more familiar way of phrasing this second question is “why be moral?”, where this is elliptical for something like, “will it be good for me and my life to be moral, or would I be better off being immoral, as long as I can get away with it?”.
Bringing together the ancient Greek conception of happiness with a modern conception of self-respect, we find that it is bad to be a bad person and, in fact, good to be a good person. Here are some reasons why:
(1) Because being bad is bad. Some have thought that being bad or immoral can be good for a person, especially when we can “get away with it”, but there are some good reasons for thinking this is false. The most important reason is that being bad or immoral is self-disrespecting and it is hard to imagine being happy without self-respect. Here’s one quick argument:
Being moral (or good) is necessary for having self-respect.
Self-respect is necessary for happiness.
____________________________________________
Therefore, being good is necessary for happiness.
Of course, a full defense of this syllogism would require more than can be given in a blog post, but hopefully, it isn’t too hard to see the ways in which lying, cheating, and stealing – or being immoral in general – are incompatible with having genuine self-respect. (Of course, cheaters may think they have self-respect, but do you really think Lance Armstrong was a man of self-respect, whatever he may have thought of himself?)
(2) Because it is the only way to have a chance at having self-respect. We can only have self-respect if we respect who we actually are; we can’t if we only respect some false image of ourselves. So, self-respect requires self-knowledge. And only people who can make just and fair self-assessments can have self-knowledge. And only just and fair people – good, moral people – can make just and fair self-assessments. (This is a very compacted version of a long argument.)
(3) Because being good lets you see what is truly of value in the world. Part of what being good requires is that good people know what is good in the world and what is not. Bad people have bad values, good people have good values. Having good values means valuing what deserves to be valued and not valuing what does not deserve to be valued.
(4) Because a recent study of West Point cadets reveals that cadets with mixed motivations – some of their motives selfish, instrumental, and career-oriented, others “intrinsic” and responsive to the value of the job itself – do not perform as well as cadets whose motivations are unmixed and purely intrinsic. (See “The Secret of Effective Motivation”)

(5) Because being good means taking good care of yourself. It doesn’t mean that you are the most important thing in the world, or that nothing is more important than you. But, in normal circumstances, it does give you permission to take better care of yourself and your loved ones than complete strangers.
(6) Because being good means that while you can be passionate, you can choose what you are passionate about; it means that you don’t let your emotions, desires, wants, and needs “get the better of you” and “make” you do things that you later regret. It gives you true grit.
(7) Because being good means that you will be courageous and brave, in the face of danger and pain and social rejection. It gives you the ability to speak truth to power and “fight the good fight”. It helps you assess risk, spot traps, and seize opportunities. It helps you be successful.
(8) Because being good means that you will be as wise as you can be when you are old and grey. Deep wisdom may not be open to everyone, since some simply might not have the intellectual wherewithal for it. (Think of someone with severe cognitive disabilities.) But we can all, of course, be as wise as it is possible for us to be. This won’t happen, however, by accident. Wise people have to be able to perspicuously see into the “heart of the matter”, and this won’t happen unless we care about the right things. And we won’t care about the right things unless we have good values, so being good will help make us be as wise as we can be.
(9) Because being good means that we are lovers of the good and, if we are lucky, it means that we will be loved by those who are themselves good. And being lovers of the good means that we become good at loving what is good, to the best of our ability. So, being good makes us become good lovers. And it is good to be a good lover, isn’t it? And good lovers who value what is good are more likely to be loved in return by people who also love the good. What could be better than being loved well by a good person who is your beloved?
(10) Because of 1-9 above, only good people can live truly happy lives. Only good people live the Good Life.
Headline image credit: Diogenes and Plato by Mattia Preti 1649. Capitoline Museums. Public domain via Wikimedia Commons










Nick Bostrom on artificial intelligence
From mechanical turks to science fiction novels, our mobile phones to The Terminator, we’ve long been fascinated by machine intelligence and its potential — both good and bad. We spoke to philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, about a number of pressing questions surrounding artificial intelligence and its potential impact on society.
Are we living with artificial intelligence today?
Mostly we have only specialized AIs – AIs that can play chess, or rank search engine results, or transcribe speech, or do logistics and inventory management, for example. Many of these systems achieve super-human performance on narrowly defined tasks, but they lack general intelligence.
There are also experimental systems that have fully general intelligence and learning ability, but they are so extremely slow and inefficient that they are useless for any practical purpose.
AI researchers sometimes complain that as soon as something actually works, it ceases to be called ‘AI’. Some of the techniques used in routine software and robotics applications were once exciting frontiers in artificial intelligence research.
What risk would the rise of a superintelligence pose?
It would pose existential risks – that is to say, it could threaten human extinction and the destruction of our long-term potential to realize a cosmically valuable future.
Would a superintelligent artificial intelligence be evil?
Hopefully it will not be! But it turns out that most final goals an artificial agent might have would result in the destruction of humanity and almost everything we value, if the agent were capable enough to fully achieve those goals. It’s not that most of these goals are evil in themselves, but that they would entail sub-goals that are incompatible with human survival.
For example, consider a superintelligent agent that wanted to maximize the number of paperclips in existence, and that was powerful enough to get its way. It might then want to eliminate humans to prevent us from switching it off (since that would reduce the number of paperclips that are built). It might also want to use the atoms in our bodies to build more paperclips.
Most possible final goals, it seems, would have similar implications to this example. So a big part of the challenge ahead is to identify a final goal that would truly be beneficial for humanity, and then to figure out a way to build the first superintelligence so that it has such an exceptional final goal. How to do this is not yet known (though we do now know that several superficially plausible approaches would not work, which is at least a little bit of progress).
How long have we got before a machine becomes superintelligent?
Nobody knows. In an opinion survey we did of AI experts, we found a median view that there was a 50% probability of human-level machine intelligence being developed by mid-century. But there is a great deal of uncertainty around that – it could happen much sooner, or much later. Instead of thinking in terms of some particular year, we need to be thinking in terms of probability distributed across a wide range of possible arrival dates.
So would this be like Terminator?
There is what I call a “good-story bias” that limits what kind of scenarios can be explored in novels and movies: only ones that are entertaining. This set may not overlap much with the group of scenarios that are probable.
For example, in a story, there usually have to be humanlike protagonists, a few of whom play a pivotal role, facing a series of increasingly difficult challenges, and the whole thing has to take enough time to allow interesting plot complications to unfold. Maybe there is a small team of humans, each with different skills, which has to overcome some interpersonal difficulties in order to collaborate to defeat an apparently invincible machine which nevertheless turns out to have one fatal flaw (probably related to some sort of emotional hang-up).
One kind of scenario that one would not see on the big screen is one in which nothing unusual happens until all of a sudden we are all dead and then the Earth is turned into a big computer that performs some esoteric computation for the next billion years. But something like that is far more likely than a platoon of square-jawed men fighting off a robot army with machine guns.

If machines became more powerful than humans, couldn’t we just end it by pulling the plug? Removing the batteries?
It is worth noting that even systems that have no independent will and no ability to plan can be hard for us to switch off. Where is the off-switch to the entire Internet?
A free-roaming superintelligent agent would presumably be able to anticipate that humans might attempt to switch it off and, if it didn’t want that to happen, take precautions to guard against that eventuality. In contrast to the plans made by AIs in Hollywood movies – plans actually thought up by humans and designed to maximize plot satisfaction – the plans created by a real superintelligence would very likely work. If the other Great Apes start to feel that we are encroaching on their territory, couldn’t they just bash our skulls in? Would they stand a much better chance if every human had a little off-switch at the back of our necks?
So should we stop building robots?
The concern that I focus on in the book has nothing in particular to do with robotics. It is not in the body that the danger lies, but in the mind that a future machine intelligence may possess. Where there is a superintelligent will, there can most likely be found a way. For instance, a superintelligence that initially lacks means to directly affect the physical world may be able to manipulate humans to do its bidding or to give it access to the means to develop its own technological infrastructure.
One might then ask whether we should stop building AIs. That question seems to me somewhat idle, since there is no prospect of us actually doing so. There are strong incentives to make incremental advances along many different pathways that eventually may contribute to machine intelligence – software engineering, neuroscience, statistics, hardware design, machine learning, and robotics – and these fields involve large numbers of people from all over the world.
To what extent have we already yielded control over our fate to technology?
The human species has never been in control of its destiny. Different groups of humans have been going about their business, pursuing their various and sometimes conflicting goals. The resulting trajectory of global technological and economic development has come about without much global coordination and long-term planning, and almost entirely without any concern for the ultimate fate of humanity.
Picture a school bus accelerating down a mountain road, full of quibbling and carousing kids. That is humanity. But if we look towards the front, we see that the driver’s seat is empty.
Featured image credit: Humanrobo. Photo by The Global Panorama, CC BY 2.0 via Flickr










Catching up with Matthew Humphrys
Katherine Marshall sat down with her law department colleague to discuss life in the Oxford office, what’s on the bookshelf, and becoming Chancellor of the Exchequer.
What is your typical day like at Oxford University Press?
I normally start by planning each day in relation to the week and months ahead, prioritizing what needs to be done. Then I run through emails. After, the day can vary quite a lot depending on what needs doing for the various titles in production. I try to tackle the more complicated or sensitive items in the morning, such as going through complicated e-proof corrections, resolving complex issues (for example in terms of typesetting layout or corrections), checking covers, collating copy-edited files or proof corrections, or speaking to authors about queries or issues.
Later in the day, I might turn my attention to things such as reviewing schedules, booking freelancers, chasing up late corrections or responses, e-book checking, passing on files to the typesetters, sending titles to the printers, writing briefs to freelancers and suppliers or letters to authors, quality checking typescript PDF files, or dealing with invoices. These are all typical things a production editor might do in a day, indeed most of this list would be things I would turn my attention to in any given day. Everything is dealt with digitally these days, so a production editor is really glued to the computer screen.

What was your first job in publishing?
As the production editor of the Philosophy Press (sadly now defunct). It was an unusual role in a very small publishing company which involved running the company administratively, and helping to produce The Philosophers’ Magazine (print and digital) and a couple of titles about philosophy.
What are you reading right now?
One of the several books I’m reading at the moment (for the third time) is The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb, which is about the impact of highly improbable events on life, particularly in terms of economics. One of the key subjects is the widespread lack of understanding within economics about risk and probability, particularly in terms of the fractal nature of economic data, which dictates that data cannot be predicted into the future from past events with any certainty. It covers a number of related psychological and epistemological subjects.
What’s the first thing you do when you get to work in the morning?
Have a cafetière of the strongest coffee I can find.
Open the book you’re currently reading and turn to page 75. Tell us the title of the book, and the third sentence on that page.
Cicero, “Discussions at Tusculum (V)” in On the Good Life (Penguin, 1971): “A man who lacks the absolute certainty that everything depends on himself and himself alone is in no condition to hold his head high and disdain whatever hazards the chances of human life may inflict.”
If you could trade places with any one person for a week, who would it be and why?
Probably the Chancellor of the Exchequer, as I strongly disagree with the dominant contemporary approach to economics, which involves so much platonifying of ideas and then thrusting them upon a world which they bear so little relation to. I would do everything I could to switch the outlook to a much more Keynesian approach which is about being compatible and adaptable to the way people behave and aims at a full employment equilibrium. I appreciate I wouldn’t be able to achieve much in one week!
If you were stranded on a desert island, what three items would you take with you?
A copy of Edgar Allan Poe’s poetry and prose, some good coffee, and my iPod.
What is the most important lesson you learned during your first year on the job?
How to be efficient. I thought I was before, but I really wasn’t.
If you didn’t work in publishing, what would you be doing?
I would probably still be working as a pipe organ builder, which was my role before I made the move to publishing. Now, I keep my hand in by tuning instruments in my holidays.










Biocultural community protocols and the future of conservation
On 17 July 2014, the Namibian, a local daily in Namibia, reported a rather momentous event: the development of a biocultural community protocol of the Khoe community of the Bwabwata National Park — the first of its kind in Namibia.
Around 6,700 Khoe people reside in Bwabwata National Park in Namibia’s West, and in the Kavango and Zambezi regions; they survive mainly as hunters and gatherers. The Khoe developed the protocol with assistance from the Namibian Ministry of Environment and Tourism and Natural Justice (an international collective of environmental and community lawyers). The protocol sought to articulate the Khoe’s values, priorities, and procedures for decision-making around their resources, as well as set out their rights and responsibilities under customary, state, and international law. The protocol would be used as the basis for engaging with external actors such as the government, companies, academics, and non-governmental organizations, who seek access to the Khoe lands, and traditional and genetic resources for research and development, commercialization, conservation, and other legal and policy frameworks.
To appreciate the momentousness of the Khoe protocol, it would be important to put it in the context of the larger law and policy debates around biodiversity conservation and community rights. The legal discourse around conservation of biodiversity during the colonial and post-independence period has been based on a ‘fines and fences’ approach. Lands and waters that had been historically stewarded by communities were fenced off and classified as national parks, wildlife sanctuaries, and other kinds of protected areas. Communities were dispossessed in the name of conservation and penalized for carrying on their traditional livelihoods and customary practices.
In the late 1960s, the excesses of the ‘fines and fences’ approach were ‘scientifically’ justified on the basis of the theory of the ‘tragedy of the commons’. The theory argued that where consequences regarding commonly held resources are borne by the community as a whole, individuals would maximize self-interest to the detriment of the community and the sustainability of the resources. The theory therefore proposed that long-term sustainability of common-pool resources is best ensured when such resources are privatized or state-controlled.
Extensive research since the 1990s on governance of the commons by political scientists and economists such as Elinor Ostrom and Arun Agrawal unequivocally established that state control and privatization of common pool resources are not necessarily the best solutions to ensure conservation, and in many cases are counter-productive. Contrary to the ‘tragedy of the commons’ assertion of the destruction of common pool resources due to mismanagement by communities, researchers working on the commons established that under certain conditions communities are best able to conserve ecosystems.
Recent research evaluating the effectiveness of protected areas under different kinds of management regimes traced forest change in three diverse landscapes: the Chitwan District of Nepal, the Mahananda Wildlife Sanctuary in West Bengal, India, and the Tadoba-Andhari Tiger Reserve in Maharashtra, India. The research found that a protectionist approach that excludes local communities is likely to fail without expensive government inputs. Conservation is also likely to fail in cases where outsiders or dominant insiders impose rules on the community for use of resources. However the research also proved that effective management of forest resources occurs when community members are genuinely involved in decision-making and in developing rules for the use of these resources.

Perhaps the most far-reaching legal instrument recognizing the role of indigenous peoples and local communities in conserving ecosystems is the Convention on Biological Diversity (CBD). The CBD entered into force in 1993 and currently has 193 states that are parties to it. The CBD advocates a ‘rights and incentives’ approach to conservation and sustainable use of biodiversity. This approach seeks to recognize certain rights over genetic resources and associated traditional knowledge while ensuring the fair and equitable sharing of benefits arising from the commercial and research utilization of such resources and knowledge.
While the Convention on Biological Diversity is explicit in Article 15.1, regarding the rights of states over genetic resources, Articles 8(j) and 10(c) of the CBD recognize the rights of communities to their knowledge, innovations, practices, and customary sustainable use of relevant biological resources. Through Articles 8(j) and 10(c), the CBD firmly lays the foundation for a discourse of stewardship affirming the rights of communities to local ecosystems and ways of life that nurture these ecosystems. They are based on the principle that biodiversity is best conserved when common pool resources are governed and managed by communities whose lifestyles are integrally intertwined with these resources.
The principles and the framework of the Convention on Biological Diversity have spawned a range of other legal instruments, all of which underscore the role of communities in conserving ecosystems and affirm community rights to common pool resources as a way to stem the alarming loss of biodiversity. These instruments include the Akwé: Kon Guidelines, the Addis Ababa Principles, the Tkarihwaié:ri Code of Ethical Conduct, the Programme of Work on Protected Areas (PoWPA), and the Nagoya Protocol on Access and Benefit Sharing. The preamble of the Nagoya Protocol notes ‘the interrelationship between genetic resources and traditional knowledge, their inseparable nature for indigenous and local communities, the importance of the traditional knowledge for the conservation of biological diversity and the sustainable use of its components, and for the sustainable livelihoods of these communities.’ The Nagoya Protocol in Articles 6 and 7 goes further than the CBD and explicitly recognizes the rights of communities to their genetic resources and associated traditional knowledge commons.
Rethinking Property and the Emergence of Biocultural Rights
The rights of communities in the swathe of legal instruments birthed by the Convention on Biological Diversity are rooted in the principle that effective conservation and sustainable use of ecosystems can only be ensured by recognizing the rights of those who manage and govern these ecosystems as common pool resources. These rights are increasingly referred to in law as ‘biocultural rights’ and are justified not on the basis of communities having a formal legal title to certain lands and waters, but on the basis of historical stewardship founded on the cultural practices and spiritual beliefs.
The emergence of biocultural rights forces a rethink of the conventional understanding of property as private property. Instead biocultural rights make a case for the right to commons by arguing that property need not be perceived purely as a thing that one has absolute rights over, but can also be viewed as a network of use and stewarding relationships amongst a number of rights holders. Within a rights discourse, biocultural rights can be contextualized as a subset of third-generation ‘group’ or ‘solidarity’ rights. The notion of stewardship is critical for a discourse of biocultural rights, for it provides the ethical content for these rights — whereby rights to land, culture, traditional knowledge, self-governance, etc. are informed by a set of values that are not anthropocentric but biocentric.
Realizing Biocultural Rights — Towards Biocultural Community Protocols
The steady recognition of biocultural rights in international environmental law has led to questions about how best to affirm these rights to steward common pool resources. The dilemma in law presents itself as: ‘when there are multiple stewards of common pool resources, how can decisions regarding these resources effectively take on board the diverse concerns and interests?’
This question became particularly relevant in the context of the international negotiations towards the Nagoya Protocol. State parties on many occasions argued that when it comes to community-managed genetic resources or traditional knowledge commons, it would be best for the state to make decisions regarding third-party access to such resources and knowledge, since communities are not homogeneous and do not have homogeneous interests. The private and research sectors also raised concerns about the high transaction costs of securing the consent of communities for access to their resources and knowledge, especially given the inability of companies or researchers to discern customary laws or decision-making structures.
It was in this context that the African Group of countries, supported by the indigenous peoples’ groups in the Nagoya Protocol negotiations, suggested biocultural community protocols (BCPs) as a solution. BCPs — or what later came to be known as community protocols in the Nagoya Protocol — are community-led instruments that promote participatory advocacy for the recognition of and support for ways of life that are based on the sustainable use of biodiversity, according to customary, national, and international laws and policies. The value and integrity of BCPs lie in the process that communities undertake to develop them, in what the protocols represent to the community, and in their future uses and effects.
Biocultural community protocols in essence begin with the end in mind, which is conservation and sustainable use of biodiversity. They then describe the way of life of the community, its customary laws, cultural and spiritual values, governance and decision-making structures, etc., all of which contribute to the stewarding of the ecosystem commons. The community then identifies its current challenges and lays claim to a range of rights in domestic and international law. In essence, the broad rights claim allows the community to determine for itself its way of life, which in turn ensures the continuation of their stewardship practices. The value of community protocols lies in their ability to act as the glue that holds together the total mosaic of a community life that is fragmented under different laws and policies, with the understanding that the conservation of Nature is a result of a holistic way of life.
The Nagoya Protocol in Article 12.1 requires parties to recognize biocultural community protocols or other community protocols as legal documents that assert community claims over their common pool resources and provide clear rules and conditions for access to community commons by third parties. Increasingly, communities such as the Khoe are developing BCPs as charters of biocultural rights asserting stewardship claims over community-managed commons in areas that extend beyond access and benefit sharing to potentially address situations of mining, carbon stocks, and ecosystem services.
While the Nagoya Protocol foregrounded biocultural community protocols as innovative legal tools for communities to assert stewardship claims over their resource and knowledge commons, communities are also advocating BCPs as effective safeguards in the context of REDD+ under the UN Framework Convention on Climate Change (UNFCCC). The cross-sectoral application of biocultural community protocols was bound to happen since the critical issue that underlies all the innovative financing mechanisms for conservation — be it REDD+, ABS or other kinds of payments for ecosystem services — is one of recognizing and incentivizing stewardship of ecosystems through safeguarding the biocultural rights of communities.
For communities such as the Khoe, biocultural community protocols make the critical link in law between conservation of ecosystem commons and the recognition of the biocultural rights of communities stewarding these commons. The immense value of BCPs lies in their ability to act as effective legal vehicles engendering the discourse of biocultural rights, thereby transforming the basis of property from ownership to stewardship.










What constitutes a “real” refugee?
Refugee identity is often shrouded in suspicion, speculation and rumour. Of course everyone wants to protect “real” refugees, but it often seems – upon reading the papers – that the real challenge is to find them among the interlopers: the “bogus asylum seekers”, the “queue jumpers”, the “illegals”.
Yet these distinctions and definitions shatter the moment we subject them to critical scrutiny. In Syria, no one would deny a terrible refugee crisis is unfolding. Western journalists report from camps in Jordan and Turkey documenting human misery and occasionally commenting on political manoeuvring, but never doubting the refugees’ veracity.
But once these same Syrians leave the overcrowded camps to cross the Mediterranean, a spell transforms these objects of pity into objects of fear. They are no longer “refugees”, but “illegal migrants” and “terrorists”. However, data on migrants rescued in the Mediterranean show that up to 80% of those intercepted by the Italian Navy are in fact deserving of asylum, not detention.
Other myths perpetuate suspicion and xenophobia. Every year in the UK, refugee charity and advocacy groups spend precious resources trying to counter tabloid images of a Britain “swamped” by itinerant swan-eaters and Islamic extremists. The truth – that Britain is home to just 1% of refugees while 86% are hosted in developing countries, including some of the poorest on earth, and that one-third of refugees in the UK hold University degrees – is simply less convenient for politicians pushing an anti-migration agenda.
We are increasingly skilled in crafting complacent fictions intended not so much to demonise refugees as to exculpate our own consciences. In Australia, for instance, ever-more restrictive asylum policies – which have seen all those arriving by boat transferred off-shore and, even when granted refugee status, refused the right to settle in Australia – have been presented by supporters as merely intended to prevent the nefarious practice of “queue-jumping”. In this universe, the border patrols become the guardians ensuring “fair” asylum hearings, while asylum-seekers are condemned for cheating the system.
That the system itself now contravenes international law is forgotten. Meanwhile, the Sri Lankan asylum-seeking mothers recently placed on suicide watch – threatening to kill themselves in the hope that their orphaned, Australian-born children might then be saved from detention – are judged guilty of “moral blackmail”.

Such stories foster complacency by encouraging an extraordinary degree of confidence in our ability to sort the deserving from the undeserving. The public remain convinced that “real” refugees wait in camps far beyond Europe’s borders, and that they do not take their fate into their own hands but wait to be rescued. But this “truth” too is hypocritical. It conveniently obscures the fact that the West will not resettle one-tenth of the refugees who have been identified by the United Nations High Commissioner for Refugees as in need of resettlement.
In fact, only one refugee in a hundred will ever be resettled from a camp to a third country in the West. In January 2014 the UK Government announced it would offer 500 additional refugee resettlement places for the “most vulnerable” refugees as a humanitarian gesture: but it’s better understood as political rationing.
Research shows us that undue self-congratulation when it comes to “helping” refugees is no new habit. Politicians are fond of remarking that Britain has a “long and proud” tradition of welcoming refugees, and NGOs and charities reiterate the same claim in the hope of grounding asylum in British cultural values.
But while the Huguenots found sanctuary in the seventeenth century, and Russia’s dissidents sought exile in the nineteenth, closer examination exposes the extent to which asylees’ ‘warm welcome’ has long rested upon the convictions of the few prepared to defy the popular prejudices of the many.
Poor migrants fleeing oppression have always been more feared than applauded in the UK. In 1905, the British Brothers’ League agitated for legislation to restrict (primarily Jewish) immigration from Eastern Europe because of populist fears that Britain was becoming ‘the dumping ground for the scum of Europe’. Similarly, the bravery of individual campaigners who fought to secure German Jews’ visas in the 1930s must be measured against the groundswell of public anti-semitism that resisted mass refugee admissions.

British MPs in 1938 were insistent that ‘it is impossible for us to absorb any large number of refugees here’, and as late as August 1938 the Daily Mail warned against large numbers of German Jews ‘flooding’ the country. In the US, polls showed that while 94% of Americans disapproved of Kristallnacht, 77% thought immigration quotas should not be raised to allow additional Jewish migration from Germany.
All this suggests that Western commitment after 1951 to uphold a new Refugee Convention should not be read as a marker of some innate Western generosity of spirit. Even in 1947, Britain was forcibly returning Soviet POWs to Stalin’s Russia. Many committed suicide en route rather than face the Gulags or execution. When in 1972 Idi Amin expelled Uganda’s Asians – many of whom were British citizens – the UK government tried desperately to persuade other Commonwealth countries to admit the refugees, before begrudgingly agreeing to act as a refuge of “last resort”. If forty years on the 40,000 Ugandan Asians who settled in the UK are often pointed to as a model refugee success story, this is not because of but in spite of the welcome they received.
Many refugee advocates and NGOs are nevertheless wary of picking apart the public belief that a “generous welcome” exists for “real” refugees. The public, after all, are much more likely to be flattered than chastised into donating much-needed funds to care for those left destitute – sometimes by the deliberate workings of the asylum system itself. But it is important to recognise the more complex and less complacent truths that researchers’ work reveals.
For if we scratch the surface of our asylum policies, beneath a shiny humanitarian veneer lies the most cynical kind of politics. Myth-making sustains false dichotomies between deserving “refugees” there and undeserving “illegal migrants” here – and conveniently lets us forget that both are fleeing the same wars in the same leaking boats.










September 7, 2014
Moving from protest to power
Now that the National Guard and the national media have left, Ferguson, Missouri, is faced with questions about how to heal the sharp power inequities that the tragic death of Michael Brown has made so visible. How can the majority-black protestors translate their protests into political power in a town that currently has a virtually all-white power structure?
Recent experiences demonstrate that moving from protest to power is no easy task. For 18 days in 2011, hundreds of thousands of protestors filled Tahrir Square in Egypt to bring down the government of Hosni Mubarak, but three years later, the Egyptian military is back in power. Hundreds of Occupy Wall Street protestors encamped in Zuccotti Park for 60 days in the fall of 2011, but few policies resulted that helped ameliorate the income inequality they protested. Both of these movements, and many others like them — from Gezi Park in Turkey to the Indignados in Spain — were able to draw hundreds or thousands of people to the streets in a moment of outrage, but lacked the infrastructure to harness that outrage into durable political change.
Protestors in Ferguson risk the same fizzle unless they can build — and maintain — a base of engaged activists and leaders who will persist even after the cameras leave. Transformation of entrenched power structures, like a military regime in Egypt or structures of inequality and state-sanctioned police force in the United States, happens only when there is a counterbalancing base of power. That counterbalancing base of power has to come from the people.
How do people, in these instances, become power? Research shows that building collective power among people depends on transforming people so that they develop their own capacity as leaders to act on injustices they face. Transforming protest into power, in other words, starts with transforming people.

So how are people transformed? Research shows that 79% of activists in the United States report becoming engaged through a civic organization. Every day, thousands of civic organizations across the country, from the NAACP to the Tea Party, work to transform people into activists to win the victories they want.
Yet many of these organizations are still unsure of the best way to build the kind of long-term activist base needed in Ferguson. Many organizations know how to craft messages or leverage big data to find people who will show up for a rally or one event. Few organizations know how to take the people who show up, and transform some of them into citizen leaders who will become the infrastructure that harnesses energy from a week of protest into real change.
I spent two years comparing organizations with strong records of ongoing activism to those with weaker records to try to understand what they do differently. I found that it comes down to their investment in building the motivation, knowledge, and skills of their members. Turning protest into power begins with creating opportunities for people like the residents of Ferguson to exercise their own leadership.
Consider Priscilla, a young organizer working in the rural South to engage people around shutting down coal. When she first started organizing, Priscilla spent all of her time finding people who would show up for town halls, public meetings, and press events. She devoted hours to writing catchy messages and scripts that would get people’s attention, and asked her volunteers, mostly older retirees, to read these routinized scripts into the voicemail of a long list of phone numbers.
After several months of this work, Priscilla was exhausted. She wanted something different. An experienced organizer told her to invest time in developing the leadership of a cadre of volunteers, instead of spending all her time trying to get people to show up to events. Others scoffed at this advice: volunteers don’t want to take on leadership, they said. They want to take action that is easy, makes them feel good, and doesn’t take any time.
“Occupy Wall Street” by Aaron Bauer. CC BY 2.0 via Flickr.
Priscilla decided to give it a try. She reached out to a group of likely volunteers to ask them to coffee. She began to get to know them as people. When some agreed to volunteer, she sat them down and explained the larger strategy behind the town hall meeting they were planning, instead of handing them a long list of phone numbers to call. Then, she asked the volunteers what piece of the planning they wanted to be responsible for.
Priscilla started spending her time training and supporting these volunteers in the tasks they’d chosen to oversee. With her help, these volunteers developed their own strategies for getting media for the event, identifying a program of speakers, and leveraging their own social networks to generate turnout. When the big day arrived, more people showed up than Priscilla would have been able to get on her own. More importantly, after the event was over, she also had a group of volunteer leaders exhilarated by their experience running a town hall and eager to do more.
Instead of just getting bodies to fill a room, Priscilla had begun the process of developing leaders. Instead of just coming to one rally, those leaders stayed with and built the campaign that eventually shut down the coal plant in their community.
There are talented organizers on the ground in Ferguson trying to do just what Priscilla did: give residents opportunities to develop the skills and motivation they need to make the change they want. Only by developing those kinds of leaders will organizations in Ferguson develop the infrastructure they need to turn the protest into real power for the residents who feel disconnected from it now.
When Alexis de Tocqueville observed America in the 1830s, he famously wrote that civic organizations are the backbone of our nation because they act as “schools of democracy,” teaching people how to work collectively with others to advance their interests. De Tocqueville is as right today as he was 174 years ago. We have always known that people power democracy. What protests from Occupy to the Arab Spring to Ferguson are teaching us is that democracy can also power people.
Headline image credit: “Occupy Wall Street” by Darwin Yamamoto. CC BY-NC-ND 2.0 via Flickr










Clerical celibacy
A set of related satirical poems, probably written in the early thirteenth century, described an imaginary church council of English priests reacting to the news that they must henceforth be celibate. In this fictional universe the council erupted in outrage as priest after priest stood to denounce the new papal policy. Not surprisingly, the protests of many focused on sex, with one speaker, for instance, indignantly protesting that virile English clerics should be able to sleep with women, not livestock. However, other protests were focused on family. Some speakers appealed to the desire for children, and others noted their attachment to their consorts, such as one who exclaimed: “This is a useless measure, frivolous and vain; he who does not love his companion is not sane!” The poems were created for comical effect, but a little over a century earlier English priests had in fact faced, for the first time, a nationwide, systematic attempt to enforce clerical celibacy. Undoubtedly a major part of the ensuing uproar was about sex, but in reality as in fiction it was also about family.
Rules demanding celibacy first appeared at church councils in the late Roman period but were only sporadically enforced in Western Europe through the early Middle Ages and never had more than a limited impact in what would become the Eastern Orthodox Church. In Anglo-Saxon England moralists sometimes preached against clerical marriage and both king and church occasionally issued prohibitions against it, but to little apparent effect. Indeed, one scribe erased a ban on clerical marriage from a manuscript and wrote instead, “it is right that a cleric (or priest) love a decent woman and bed her.” In the eleventh century, however, a reinvigorated papacy began a sustained drive to enforce clerical celibacy throughout Catholic Europe for clerics of the ranks of priest, deacon, or subdeacon. This effort provoked great controversy, but papal policy prevailed, and over the next couple of centuries increasingly made clerical celibacy the norm.
In England, it was Anselm, the second archbishop of Canterbury appointed after the Norman Conquest, who made the first attempt to systematically impose clerical celibacy in 1102. Anselm’s efforts created a huge challenge to the status quo, for many, perhaps most English priests were married in 1102 and the priesthood was often a hereditary profession. Indeed, Anselm and Pope Paschal II agreed not to attempt in the short term to enforce one part of the program of celibacy, the disbarment of sons of priests from the priesthood, because that would have decimated the ranks of the English clergy. Anselm, moreover, found himself trying to figure out how to allow priests to take care of their former wives, and priests who obediently separated from their wives were apparently sometimes threatened by their angry in-laws. Not surprisingly, Anselm’s efforts were deeply unpopular and faced widespread opposition.

Priests then and in subsequent generations (for Anselm’s efforts had only limited success in the short run) were often deeply attached to their families. A miracle story recorded after Thomas Becket’s death in 1170 describes a grieving priest getting confirmation from the recent martyr that his concubine, who had done good works before her death, had gone to heaven. Other miracle stories show priests and their companions lamenting the illness, misfortune, or death of a child and seeking miraculous aid. It took a long time to fully convince everyone that priestly families were ipso facto immoral. Even late in the twelfth century, the monastic writer John of Ford, in a saint’s life of the hermit Wulfric of Haselbury, could depict the family of a parish priest, Brictric, as perfectly pious, with Brictric’s wife making ecclesiastical vestments and his son and eventual successor as priest, Osbern, serving at mass as a minor cleric. John also depicted a former concubine of another priest as a saintly woman noted for her piety. Proponents of clerical celibacy had a difficult challenge not only in enforcing the rules but in convincing people that they ought to be enforced in the first place.
Inevitably, priests’ families suffered heavily from the drive for celibacy. The sons of priests lost the chance to routinely follow in their father’s professional footsteps, as most medieval men did. After priestly marriage was legally eliminated, sons and daughters both were automatically illegitimate, bringing severe legal disadvantages. However, it was the female companions of priests who suffered most. Partly this was because one of the key motives behind clerical celibacy was the belief that sexual contact with women polluted priests who then physically touched God by touching the sacrament as they performed the Eucharist. Moralists constantly preached that this was irreligious, even blasphemous, and disgusting. However, the female partners of priests also suffered because preachers constantly denigrated them as whores and used misogynistic stereotypes to try to convince priests that they should avoid taking partners. Thus preachers repeatedly attacked priests for wasting money on adorning their “whores” or for arising from having sex with their “whores” to go perform the Eucharist. It is hard to know the precise position of priests’ wives in the eleventh century but it is quite likely that most were perfectly respectable. Nonetheless, the attacks of reformers had a powerful impact. In 1137 King Stephen decided to do his part to encourage clerical celibacy, and raise money in the process, by rounding up clerical concubines and holding them in the Tower of London for ransom. Some of these were probably partners of canons of St Paul’s cathedral, who were rich and powerful men, but even so, while in the tower they were subject to physical mockery and abuse. Increasingly, it was impossible to be both the partner of a priest and a respectable member of society.
Many of the proponents of clerical celibacy were fiercely idealistic in their efforts to prevent what they saw as widespread pollution of the Eucharist, to remove the costs of families from the financial burdens of churches, to make the priesthood more distinctive from the laity, and simply to enforce church law. As the historian Christopher Brooke suggested nearly six decades ago, however, and as subsequent research has clearly demonstrated, one result of their efforts was a social revolution that resulted in broken homes and personal tragedies.
Headline image: 12th Century painters, from the Web Gallery of Art. Public domain via Wikimedia Commons.










Why study paradoxes?
Why should you study paradoxes? The easiest way to answer this question is with a story:
In 2002 I was attending a conference on self-reference in Copenhagen, Denmark. During one of the breaks I got a chance to chat with Raymond Smullyan, who is, amongst other things, an accomplished magician, a distinguished mathematical logician, and perhaps the most well-known popularizer of ‘Knight and Knave’ (K&K) puzzles.
K&K puzzles involve an imaginary island populated by two tribes: the Knights and the Knaves. Knights always tell the truth, and Knaves always lie (further, members of both tribes are forbidden to engage in activities that might lead to paradoxes or situations that break these rules). Other than their linguistic behavior, there is nothing that distinguishes Knights from Knaves.
Typically, K&K puzzles involve trying to answer questions based on assertions made by, or questions answered by, an inhabitant of the island. For example, a classic K&K puzzle involves meeting an islander at a fork in the road, where one path leads to riches and success and the other leads to pain and ruin. You are allowed to ask the islander one question, after which you must pick a path. Not knowing to which tribe the islander belongs, and hence whether she will lie or tell the truth, what question should you ask?
(Answer: You should ask “Which path would someone from the other tribe say was the one leading to riches and success?”, and then take the path not indicated by the islander).
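Purely as an illustration (this short sketch is mine, not part of the original post), the reasoning behind that answer can be checked exhaustively in a few lines of Python: whichever tribe the islander belongs to, and whichever path is the good one, the path she names is always the wrong one.

    from itertools import product

    PATHS = ("left", "right")

    def other(path):
        return "right" if path == "left" else "left"

    for good_path, is_knight in product(PATHS, (True, False)):
        # The answer a member of the *other* tribe would give to
        # "Which path leads to riches?" (Knaves lie, Knights do not).
        other_tribe_says = other(good_path) if is_knight else good_path
        # The islander reports that answer truthfully (Knight) or lies about it (Knave).
        reported = other_tribe_says if is_knight else other(other_tribe_says)
        # Taking the path NOT indicated always leads to riches.
        assert other(reported) == good_path
    print("The strategy works in all four cases.")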
Back to Copenhagen in 2002: Seizing my chance, I challenged Smullyan with the following K&K puzzle, of my own devising:
There is a nightclub on the island of Knights and Knaves, known as the Prime Club. The Prime Club has one strict rule: the number of occupants in the club must be a prime number at all times.

The Prime Club also has strict bouncers (who stand outside the doors and do not count as occupants) enforcing this rule. In addition, a strange tradition has become customary at the Prime Club: Every so often the occupants form a conga line, and sing a song. The first lyric of the song is:
“At least one of us in the club is a Knave.”
and is sung by the first person in the line. The second lyric of the song is:
“At least two of us in the club are Knaves.”
and is sung by the second person in the line. The third person (if there is one) sings:
“At least three of us in the club are Knaves.”
And so on down the line, until everyone has sung a verse.
One day you walk by the club, and hear the song being sung. How many people are in the club?
Smullyan’s immediate response to this puzzle was something like “That can’t be solved – there isn’t enough information”. But he then stood alone in the corner of the reception area for about five minutes, thinking, before returning to confidently (and correctly, of course) answer “Two!”
I won’t spoil things by giving away the solution – I’ll leave that mystery for interested readers to solve on their own. (Hint: if the song is sung with any other prime number of islanders in the club, a paradox results!) I will note that the song is equivalent to a more formal construction involving a list of sentences of the form:
At least one of sentences S1 – Sn is false.
At least two of sentences S1 – Sn are false.
————————————————
At least n of sentences S1 – Sn are false.
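Again purely as an illustration (the code and its function name are my own, not Smullyan’s or from the original post), a short brute-force search over Knight/Knave assignments makes the hint above easy to verify: for each club size, check every possible line-up against the rule that Knights sing true verses and Knaves sing false ones.

    from itertools import product

    def consistent_line_ups(n):
        # All Knight/Knave assignments (True = Knave) consistent with person k
        # (1-indexed) singing "At least k of us in the club are Knaves."
        solutions = []
        for tribes in product([False, True], repeat=n):
            knaves = sum(tribes)
            # The verse "at least k Knaves" is true iff knaves >= k; Knights must
            # sing true verses and Knaves false ones.
            if all((knaves >= k) != tribes[k - 1] for k in range(1, n + 1)):
                solutions.append(tribes)
        return solutions

    for n in (2, 3, 5, 7, 11):
        print(n, len(consistent_line_ups(n)))

Running it over the first few primes shows which club sizes admit a consistent line-up at all – and hence why the song pins down the answer.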
The point of this story isn’t to brag about having stumped a famous logician (even for a mere five minutes), although I admit that this episode (not only stumping Smullyan, but meeting him in the first place) is still one of the highlights of my academic career.

Instead, the story, and the puzzle at the center of it, illustrates the reasons why I find paradoxes so fascinating and worthy of serious intellectual effort. The standard story regarding why paradoxes are so important is that, although they are sometimes silly in-and-of-themselves, paradoxes indicate that there is something deeply flawed in our understanding of some basic philosophical notion (truth, in the case of the semantic paradoxes linked to K&K puzzles).
Another reason for their popularity is that they are a lot of fun. Both of these are really good reasons for thinking deeply about paradoxes. But neither is the real reason why I find them so fascinating. The real reason I find paradoxes so captivating is that they are much more mathematically complicated, and as a result much more mathematically interesting, than standard accounts (which typically equate paradoxes with the presence of some sort of circularity) might have you believe.
The Prime Club puzzle demonstrates that whether a particular collection of sentences is or is not paradoxical can depend on all sorts of surprising mathematical properties, such as whether there is an even or odd number of sentences in the collection, or whether the number of sentences in the collection is prime or composite, or all sorts of even weirder and more surprising conditions.
Other examples demonstrate that whether a construction (or, equivalently, a K&K story) is paradoxical can depend on whether the referential relation involved in the construction (i.e. the relation that holds between two sentences if one refers to the other) is symmetric, or is transitive.
The paradoxicality of still another type of construction, involving infinitely many sentences, depends on whether cofinitely many of the sentences each refer to cofinitely many of the other sentences in the construction (a set is cofinite if its complement is finite). And this only scratches the surface!
The more I think about and work on paradoxes, the more I marvel at how complicated the mathematical conditions for generating paradoxes are: it takes a lot more than the mere presence of circularity to generate a mathematical or semantic paradox, and stating exactly what is minimally required is still too difficult a question to answer precisely. And that’s why I work on paradoxes: their surprising mathematical complexity and mathematical beauty. Fortunately for me, there is still a lot of work that remains to be done, and a lot of complexity and beauty that remains to be discovered.










September 6, 2014
What good is photography?
We’re bombarded with images today as never before. Whether you’re an avid mealtime Instagrammer, snapchatting your risqué images, being photobombed by your pets, capturing appealing colour schemes for your Pinterest moodboard, or simply contributing to the 250,000 or so images added to Facebook every minute, chances are you have a camera about your person most of the time, and use it almost without thinking to document your day.
Images have great social currency online, keeping visitors on a page longer, and increasing the shareability of your content. The old adage that “a picture’s worth a thousand words” comes into its own in an environment where we’re all bombarded with more information than we can consume, where there’s a constant downward pressure on your wordcount, and where you need to be eye-catching and tell a story within 140 characters or fewer. Lives have been changed, public opinion shifted, history made by a single picture. Think of an iconic image, and odds are that many spring straight to your mind, from the powerful – Kim Phuc running from a napalm attack in Vietnam, ‘Tank Man’ facing down the military in Tiananmen Square – to the stage-managed – those construction workers lunching on a skyscraper beam above Manhattan or Doisneau’s ‘Kiss by the Hôtel de Ville’ – and many more.

Consider just the last few weeks: the violent protests following the death of Michael Brown in Ferguson, encapsulated in a single image of heavily armed policemen training their weapons on a lone man with his hands in the air; the images pouring out of Gaza, so at odds with the formal tweets of the IDF; or American photojournalist James Foley – a man who dedicated his life to ensuring such images streamed into our front rooms, into our news feeds, into our consciousness – kneeling next to the man who was about to become his killer. Wherever time and space are at a premium, wherever narrative matters, an image gets the story across in the most direct and powerful way.
Here in Oxford, a new international photography festival seeks to examine just these questions about the power and purpose of photography, opening up debate on the many issues that surround it in the current climate, and aiming to bring world-class work to a new audience and to raise awareness and appreciation of the form to a level long since enjoyed by painting, sculpture, and the other visual arts. On Sunday 14 September, colleges, museums, art galleries, and even a giant safe will welcome visitors into more than 20 free exhibitions showing the work of internationally renowned photographers, alongside a film programme mixing documentaries and feature films that have images and their use at heart, and a series of talks and panel discussions.
The exhibitions range widely, from powerful photojournalism such as Laura El-Tantawy’s images of a post-Mubarak Egypt, Robin Hammond’s work inside Mugabe’s Zimbabwe, and the Document Scotland collective’s recording of this truly decisive moment in Scottish history, to Yann Layma’s stunning macros of butterfly wings and Mark Laita’s vibrant images of brightly-coloured snakes; from Susanna Majuri’s elaborate photographic fictions, hovering somewhere between dream and reality, to the vibrant architectural images of Matthias Heiderich; and from Mariana Cook’s portrait series of those who risk their lives for justice to Paddy Summerfield’s moving documentation of the final years of his parents’ 60-year marriage. The UK debut of this year’s World Press Photo award features prominently, alongside French photographer Bernard Plossu’s first-ever British show, and a showcase of work from members of the Helsinki school, including the eminent Pentti Sammallahti and Arno Minkkinen.

The festival brings us shows documenting NGOs’ use of images in campaigns across the decades, and others looking at photos that trick us, whether deliberately or inadvertently; a moving exhibition on photography and healing; and one exploring how different artists use photography – digitally, printed on surfaces such as ceramics or metals, or using Victorian techniques. Still other exhibitions feature powerful portraits of Oxford’s famous buildings and their custodians, portraits of the descendants of some of the world’s most famous historical figures, and Vermeer-inspired portraits of female domesticity from Maisie Broadhead.
Meanwhile, the talks and debates include the BBC’s David Shukman on photography and climate change, celebrated landscape photographer Charlie Waite talking about the challenges and joys of landscape photography, and Bodley’s Librarian Richard Ovenden chairing a discussion on Henry Fox Talbot. Panels cover the role of photojournalism in the Northern Ireland peace process, the role of the critic in photography, images and the business world, and the merits and challenges of shooting photographic stories close to home rather than travelling to far-flung exotic locations.

The festival will draw to a close on Sunday 5 October, with ‘The Tim Hetherington Debate: What Good is Photography’, looking at the importance of photography in the twenty-first century, and a screening of Sebastian Junger’s Which Way is the Front Line from Here, a documentary about the photographer and filmmaker Tim Hetherington, killed in 2011 by mortar fire in Misrata, Libya, where he had been covering the civil war.
As festival founder and director Robin Laurance, himself an acclaimed photojournalist, concludes: “It’s time to celebrate the city’s links with the beginnings of an art form that has become ever-present in all our lives. We intend Oxford to be the place where photography is not only celebrated, but where it is debated, examined and challenged. Our aim is to open people’s minds as well as their eyes to photography.”
The post What good is photography? appeared first on OUPblog.

Catesby’s American Dream: religious persecution in Elizabethan England
Over the summer of 1582 a group of English Catholic gentlemen met to hammer out their plans for a colony in North America — not Roanoke Island, Sir Walter Raleigh’s settlement of 1585, but Norumbega in present-day New England.
The scheme was promoted by two knights of the realm, Sir George Peckham and Sir Thomas Gerard, and it attracted several wealthy backers, including a gentleman from the midlands called Sir William Catesby. In the list of articles drafted in June 1582, Catesby agreed to be an Associate. In return for putting up £100 and ten men for the first voyage (forty for the next), he was promised a seignory of 10,000 acres and election to one of “the chief offices in government”. Special privileges would be extended to “encourage women to go on the voyage” and according to Bernardino de Mendoza, the Spanish ambassador in London, the settlers would “live in those parts with freedom of conscience.”
Religious liberty was important for these English Catholics because they didn’t have it at home. The Mass was banned, their priests were outlawed and, since 1571, even the possession of personal devotional items, like rosaries, was considered suspect. In November 1581, Catesby was fined 1,000 marks (£666) and imprisoned in the Fleet for allegedly harboring the Jesuit missionary priest, Edmund Campion, who was executed in December.
Campion’s mission had been controversial. He had challenged the state to a public debate and he had told the English Catholics that those who had been obeying the law and attending official church services every week — perhaps crossing their fingers, or blocking their ears, or keeping their hats on, to show that they didn’t really believe in Protestantism — had been living in sin. Church papistry, as it was known pejoratively, was against the law of God. The English government responded by raising the fine for non-attendance from 12 pence to £20 a month. It was a crippling sum and it prompted Catesby and his friends to go in search of a promised land.
The American venture was undeniably risky — “wild people, wild beasts, unexperienced air, unprovided land” did not inspire investor confidence — but it had some momentum in the summer of 1582. Francis Walsingham, Elizabeth I’s secretary of state, was behind it, but the Spanish scuppered it. Ambassador Mendoza argued that the emigration would drain “the small remnant of good blood” from the “sick body” of England. He was also concerned for Spain’s interests in the New World. The English could not be allowed a foothold in the Americas. It mattered not a jot that they were Catholic, “they would immediately have their throats cut as happened to the French.” Mendoza conveyed this threat to the would-be settlers via their priests with the further warning that “they were imperilling their consciences by engaging in an enterprise prejudicial to His Holiness” the Pope.

So Sir William Catesby did not sail the seas or have a role in the plantation of what — had it succeeded — would have been the first English colony in North America. He remained in England and continued to strive for a peaceful solution. “Suffer us not to be the only outcasts and refuse of the world,” he and his friends begged Elizabeth I in 1585, just before an act was passed making it a capital offense to be, or even to harbor, a seminary priest in England. Three years later, as the Spanish Armada beat menacingly towards England’s shore, Sir William and other prominent Catholics were clapped up as suspected fifth columnists. In 1593 those Catholics who refused to go to church were forbidden by law from traveling beyond five miles of their homes without a license. And so it went on until William’s death in 1598.
Seven years later, in the reign of the next monarch James I (James VI of Scotland), William’s son Robert became what we would today call a terrorist. Frustrated, angry and “beside himself with mindless fanaticism,” he contrived to blow up the king and the House of Lords at the state opening of Parliament on 5 November 1605. “The nature of the disease,” he told his recruits, “required so sharp a remedy.” The plot was discovered and anti-popery became ever more entrenched in English culture. Only in 2013 was the constitution weeded of a clause that insisted that royal heirs who married Catholics were excluded from the line of succession.
Every 5 November, we British set off our fireworks and let our children foam with marshmallow, and we enjoy “bonfire night” as a bit of harmless fun, without really thinking about why the plotters sought their “sharp remedy” or, indeed, about the tragedy of the father’s failed American Dream, a dream for religious freedom that was twisted out of all recognition by the son.
Featured image: North East America, by Abraham Ortelius 1570. Public Domain via Wikimedia Commons.
The post Catesby’s American Dream: religious persecution in Elizabethan England appeared first on OUPblog.
