Oxford University Press's Blog
March 22, 2016
Transplanting India’s patent laws
Montesquieu said that “[Laws] should be so specific to the people for whom they are made, that it is a great coincidence if those of one nation can suit another”. Despite Montesquieu’s belief in the uniqueness of law, there have been numerous instances of nations adopting the laws of other nations. One such instance is the adoption of the ancient Code of Hammurabi (the Babylonian law code, dating from the 18th century B.C.) by Persia, Greece, and Rome. Subsequently, European civil codes modeled on Roman law were adopted by Peru, Egypt, and Japan. This phenomenon of the movement of a rule or system of law across countries has been aptly described as “legal transplant”, a term coined by the legal scholar W. A. J. (Alan) Watson in the 1970s.
Legal transplant may be defined as the “adoption into the national legal system by one state (the adopter country) of a rule originating in a foreign state (the originator country)”. In recent times, legal transplant has found its place within the wider concept of legal acculturation or diffusion of law.
Five causal mechanisms of legal transplantation have been identified – emulation, coercion, contractualization, regulatory competition, and socialization. Emulation (also referred to as “lesson-drawing”) suggests that legal transplantation takes place when law-makers strive to solve a problem by looking beyond national borders for solutions. This kind of emulation was observed during the Uruguay Round of negotiations when developing countries mimicked the actions of India and Brazil which had similar interests and concerns on development. This mimicry can be attributed to the lack of capacity and resources on the part of other countries to counter the proposals made in the Dunkel draft by the developed countries.
Recently, patent reforms in different parts of the world have shown an emerging trend towards the emulation of Indian patent law. Countries like China, South Africa, Botswana, and Brazil are now trying to amend their domestic patent laws based on India’s model. The Philippines was among the first countries to emulate India’s patent regime when it introduced a provision very similar to Section 3(d) of the Indian Patents Act 1970 on patenting new forms of known substances. In 2012, when India granted the first compulsory licence allowing Natco, a generic drug maker, to manufacture and sell the anti-cancer drug Nexavar, which was patented by Bayer Corporation, China soon followed suit by amending its domestic law to permit the government to issue compulsory licences to local pharmaceutical companies to manufacture patented drugs. South Africa and Botswana have also proposed amendments to their patent laws to incorporate the pre-grant opposition procedure, which has been used quite successfully to challenge a number of patents at the Indian Patent Office. Brazil has adopted the most provisions from the Indian patent law.
While many countries have been inspired to adopt India’s patent laws, a few, such as the US, have criticized India for its patent reforms; in its ‘Special 301’ report for 2013 (an annual review of global IP rights protection and enforcement conducted by the US Trade Representative), the US criticized India’s pre-grant opposition procedure and the grant of the Nexavar compulsory licence. The US has also entered into bilateral free trade agreements with its trading partners containing provisions that counter India’s patent norms by increasing the level of IP protection. These measures, taken by the US to counter the spread of India-style patent flexibilities, have been criticized by some countries as hindering their development.
India’s unique patent system has earned it the epithet of “the pharmacy of the developing world”; millions of people worldwide depend on affordable life-saving drugs produced in India. The compulsory licensing provisions under the Indian law have made it possible for generic pharmaceutical companies in India to manufacture and sell drugs (protected by patents owned by multinational companies), thereby enhancing access to medicines at an affordable price.
The recent legal transplantation of India’s patent laws can thus have some far-reaching implications in international law. The adoption of India’s patent laws by a large number of countries could lead to the emergence of a new norm in customary international law; it has been well-established that widespread practice of a norm could amount to “state practice” which is an essential element of customary international law.
State practice could then be helpful in interpreting international treaties such as the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS Agreement), which regulates the protection of intellectual property rights in the member states of the World Trade Organization. India has been lauded internationally for its ability to reform its patent laws in sync with its developmental needs by making use of the flexibilities within the TRIPS Agreement. The implication of legal transplantation for treaty interpretation of TRIPS is especially useful for developing countries, since the constructive ambiguities in the Agreement can serve as “wiggle room” that countries can use strategically to accommodate their domestic regulatory preferences and to promote access to medicines, as India did.
Featured image credit: Medication, by frolicsomepl. CC0 Public Domain via Pixabay.
The post Transplanting India’s patent laws appeared first on OUPblog.

March 21, 2016
Uterus transplants: challenges and potential
The birth of a healthy child in Sweden in October 2014 after a uterus transplant from a living donor marked the advent of a new technique to help women with absent or non-functional uteruses to bear genetic offspring. The Cleveland Clinic has now led American doctors into this space, performing the first US uterine transplant in February 2016 as part of an Institutional Review Board (IRB)-approved series of ten transplants using cadaveric donors. Dallas and Boston medical centers have also been approved for this program, and other programs are expected to follow as progress continues. An estimated 50,000 American women are potential clients.
The path to womb transplants, however, will not be easy. On 7 March, the Cleveland Clinic celebrated its transplant with a media announcement full of joy and celebration. Two days later, in a decidedly different key, the Clinic informed the world that the organ had been surgically removed because the recipient had “suddenly developed a serious complication.” One can only imagine the disappointment of the patient and medical team, who had smiled so happily in media coverage. Of course, early failure is not surprising with innovative surgery, and no doubt the Cleveland Clinic will proceed with other patients. The case is a reminder that the road to success is long, and initial steps should be closely monitored by IRBs, as is occurring in Cleveland, Sweden, and elsewhere.
As with other reproductive innovations, ethical, legal, and medical strands – in addition to safety and efficacy – interweave in this new option. Patients who opt for this procedure are not always women who simply want the pregnancy experience. In most of the world, women who lack a functioning uterus have no alternative because surrogacy is forbidden (Turkey, Saudi Arabia, and France), or a ban on paid surrogacy puts that option out of reach (UK, Sweden, and Spain). A subset of women in those countries may be willing to travel abroad to find a surrogate. Surrogacy tourist options, however, are now constricted, as India and Thailand have closed their borders to foreign surrogacy clients. Whether other countries (Nepal, Kazakhstan, and the Philippines) will fill the gap is still unknown. For many would-be tourists, a safe and effective uterus transplant may be an appealing alternative.
Transplant will also be an attractive option even where paid surrogacy is legally available, as it is in several US states. The Cleveland transplant patient, a 26-year-old woman from Texas, would, if she had chosen, have been able to enter into a legally binding surrogacy agreement with advance certification of parentage. Surrogacy, however, has its own problems, including outsourcing the burden of pregnancy to a paid stranger. In some countries, confinement in surrogate compounds and exploitation of poor women have occurred. While these dangers are less likely in the United States, a woman may still be uncomfortable with paying another woman, over whom she will have limited control during the pregnancy, to have her child. Uterus transplant could be an attractive substitute for those with moral or pragmatic objections to surrogacy.
Even safe and effective uterus transplant will be fraught. The question of the donor is a key inflection point. Living donors (who cannot be paid) are likely to be close relatives (the donor in the first Swedish case was the recipient’s mother) or friends. And donating will be a substantial sacrifice, usually entailing a nine-hour operation with many post-operative risks and burdens. Cadaveric organs are more easily retrieved, though retrieval adds another level of emotional complexity to a decedent’s family’s consent. The transplant operation for the recipient will also be onerous—the Cleveland case took nine hours even with top vascular surgeons doing the intricate reattachment of veins and arteries.
The recipient, who will have undergone an IVF cycle and produced at least six frozen embryos before the transplant, must wait a year after the transplant before embryo transfer, and perhaps another year for the birth of the child. Failure to implant or miscarriage will make it all for naught, as will rejection of the graft. If pregnancy occurs, the woman might feel movement of the fetus, but nerves from the uterus are not reconnected, so some of the gestational experience will be missing. Daily immunosuppression can cause chronic kidney disease or skin cancer within a year or two of transplant. The effects of immunosuppression on the children born after transplant are also important concerns. However, data from the offspring of kidney recipients suggest that those effects are tolerable. Finally, when a recipient’s child-bearing plans are complete, she will have to undergo a hysterectomy to remove the foreign uterus.
Women seeking a uterus should undergo an elaborate screening and consent process to assure that they will be clear-sighted about the risks, benefits, and alternatives (surrogacy, adoption, and childlessness). Although not life-saving, uterus transplants join the group of vascularized composite allografts (VCAs) that correct or ameliorate serious non-life-threatening conditions, such as hand, face, larynx, and penis transplants. True, some requests for vascularized composite allogenic transplants are more compelling than others. Yet having a child of one’s own (in both the genetic and gestational sense) could be an important human experience, and arguably as important as other VCAs.
Viewed more broadly, uterus transplant may be the first reproductive technology involving a third party in which the recipient bears most of the physical and psychological burden rather than shifting it to another woman, as occurs with gestational surrogacy and egg donation. At the present time, uterus transplant requires more research and fine-tuning. If uterus transplants achieve the medical status of other solid organ transplants, they may provide an opportunity for women who otherwise would be unable to have a child of their own. If only it were not so complicated and expensive.
Featured image credit: Doppler Ultrasound by beeki. CC0 public domain via Pixabay.
The post Uterus transplants: challenges and potential appeared first on OUPblog.

Can design thinking challenge the scientific method?
The scientific method has long reigned as the trusted way to test hypotheses so as to produce new knowledge. Shaped by the likes of Francis Bacon, Galileo Galilei, and Ronald A. Fisher, the idea of replicable controlled experiments with at least two treatments has dominated scientific research as a way of producing accepted truths about the world around us.
However, there is growing interest in design thinking, a research method which encourages practitioners to reformulate goals, question requirements, empathize with users, consider divergent solutions, and validate designs with real-world interventions. Design thinking promotes playful exploration and recognizes the profound influence that diverse contexts have on preferred solutions. Advocates believe that they are dealing with “wicked problems” situated in the real world, in which controlled experiments are of dubious value.
The design thinking community generally respects science, but resists pressures to be “scientized”, which they equate with relying on controlled laboratory experiments, reductionist approaches, traditional thinking, and toy problems. Similarly, many in the scientific community will grant that design thinking has benefits in coming up with better toothbrushes or attractive smartphones, but they see little relevance to research work that leads to discoveries.
The tension may be growing, since design thinking is on the rise as a business necessity and as part of university education. Institutions as diverse as the Royal College of Art, Goldsmiths at the University of London, Stanford University’s D-School, and the Singapore University of Technology and Design are leading a rapidly growing movement that is eagerly supported by business. Design thinking promoters see it as a new way of thinking about serious problems such as healthcare delivery, community safety, environmental preservation, and energy conservation.
The rising prominence of design thinking in public discourse is revealed by two graphs: a Google Books Ngram Viewer comparison of the terms (by Ben Shneiderman) and a chart of word frequencies since 1975 (by Ben Shneiderman; public domain, via NYT Chronicle).
Both sources appear to show that after 1975 “design” overtook “science” and “engineering” in prominence.
Scientists and engineers might dismiss this data and the idea that design thinking could challenge the scientific method. They believe that controlled experiments with statistical tests for significant differences are the “gold standard” for collecting evidence to support hypotheses, which add to the trusted body of knowledge. Furthermore, they believe that the cumulative body of knowledge provides the foundations for solving the serious problems of our time.
By contrast, design thinking activists question the validity of controlled experiments in dealing with complex socio-technical problems such as healthcare delivery. They question the value of medical research by carefully controlled clinical trials because of the restricted selection criteria for participants, the higher rates of compliance during trials, and the focus on a limited set of treatment possibilities. Flawed clinical trials have resulted in harm, as when a treatment is tested only on men but the results are then applied to women. Even respected members of the scientific community have made disturbing complaints about the scientific method, such as John Ioannidis’s now-famous 2005 paper “Why Most Published Research Findings Are False.”
Design thinking advocates do not promise truth, but they believe that valuable new ideas, services, and products can come from their methods. They are passionate about immersing themselves in problems, talking to real customers, patients, or students, considering a range of alternatives, and then testing carefully in realistic settings.
Of course, there is no need to choose between design thinking and the scientific method, when researchers can and should do both. The happy compromise may be to use design thinking methods at early stages to understand a problem, and then test some of the hypotheses with the scientific method. As solutions are fashioned they can be tested in the real world to gather data on what works and what doesn’t. Then more design thinking and more scientific research could provide still clearer insights and innovations.
Instead of seeing research as a single event, such as a controlled experiment, the British Design Council recommends the Double Diamond model which captures the idea of repeated cycles of divergent and then convergent thinking. In one formulation they describe a 4-step process: “Discover”, “Define”, “Develop”, and “Deliver.”
The spirited debates about which methods to use will continue, but as teachers we should ensure that our students are skilled with both the scientific method and design thinking. Similarly, as business leaders we should ensure that our employees are well trained in applying design thinking and the scientific method. When serious problems, such as healthcare delivery or environmental preservation, need solutions, we will all be better served when design thinking and the scientific method are combined.
Featured image credit: library books education literature by Foundry. Public domain via Pixabay.
The post Can design thinking challenge the scientific method? appeared first on OUPblog.

The principle of distinction in complex military operations
In the lead up to this year’s ASIL Annual Meeting, we asked some of our leading authors in international law to reflect on the most important new frontier in the area, what challenges it confronts, and how the field of international law is adapting to it. Rachel E. VanLandingham considers the principle of distinction in the context of complex military operations and its impact on international humanitarian law.
While exciting topics such as autonomous weapons and cyberwarfare may at first blush seem like the most “important new frontiers” in international humanitarian law, there is another more immediate and complex challenge confronting those engaged in current and looming wars, a challenge with a human face. Today, and unfortunately tomorrow, professional militaries find it increasingly difficult to determine who is friend and who is foe in the modern battle-space. Adherence to and implementation of the principle of distinction in the context of complex military operations against lawless opponents is the vitally important new frontier in international humanitarian law.
I’m not speaking about the not-so-new difficulties of fighting wars in urban terrain, an environment in which elementary schools sit next door to anti-aircraft batteries. Yes, that is a volatile situation that brings infantry men and women face-to-face with the human consequences of combat operations conducted in close proximity to civilians living their daily lives. I mean instead the difficulties created by engaging enemies whose fighters intentionally cloak themselves in the protective appearance of civilians, exploiting the hesitation to attack anyone with such an appearance to gain tactical and strategic advantages against their law-abiding opponents. The legal answer to these complex situations is defined by the law of armed conflict principle of distinction, which allows deliberate attack against such enemies once the presumption of civilian status is rebutted. But what may seem simple is not; quickly determining who falls within the scope of lawful attack is often a momentous task, especially in the pressure-filled situation of close-quarter combat. While challenging, conduct-based attack decisions are essentially the equivalent of a law enforcement use of force, dictated by a manifestation of hostile act or hostile intent by the opponent – and hence legally not a new frontier.
But lawful attacks on enemy belligerents are not limited to this type of conduct-based decision-making, even though this method reduces the risk of error (of mistaking civilians for the enemy). The law of armed conflict (LOAC, synonymous with international humanitarian law) allows opposing parties to target—to kill—enemy fighters based purely on their status as enemy fighters. That is, regardless of whether enemy belligerents happen to be sleeping, or otherwise not fighting at the moment, their status as members of an enemy belligerent group triggers the LOAC-permitted legal authority to attack. Only when the belligerent is rendered “hors de combat,” such as by surrendering or being incapacitated, is this authority terminated. On the micro level, this legal authority to target members of an opposing armed force necessarily means that those who seek to engage in such targeting must first make an objectively reasonable determination whether the individual qualifies as a lawful object of attack. Those engaged in targeting must ask and answer: is this individual a lawful target? And this is where things get really, really hard.
Membership determinations—what is known militarily as threat identification—are and will remain frustratingly difficult in most of today’s and tomorrow’s armed conflicts, in large measure because today’s enemies do not adhere to the law of war. They refuse to distinguish themselves from civilians, instead exploiting their anonymity in order to gain the tactical advantage of attack hesitation, and the strategic advantage that flows from mistakes that result in civilian casualties. In addition to their refusal to wear uniforms or otherwise distinguish themselves from civilians, the very nature of today’s enemy forces makes membership determinations of their ranks fraught with challenge. Instead of the hierarchical, organized armed group that the law of war primarily developed to regulate, today’s non-state armed group is often de-centralized and leverages technology such as the Internet to recruit, train, and execute operations. So is a member of al-Qaeda one who swears bayat (an oath of allegiance) but then only posts comments on social media, exhorting the group to defeat the enemy at all costs? If the answer is yes—and the extant law doesn’t clearly answer this question—then they are targetable under the law of armed conflict. What about the individual who travels with a group of known al-Qaeda members, cooking for them, driving them to various locations, acting as an armed bodyguard on occasion? Are they a “member” despite no formal oath of allegiance, at least one that isn’t discernible? Or what about the individual who provides legal advice to various known members of ISIS, and has sworn some type of allegiance to that group? Are they a “member” for targeting purposes per the law of armed conflict? (Assuming an on-going armed conflict against these groups, of course.)
This is where understanding how the law is operationalized is pivotal to understanding how law, facts, and process intersect to advance the ultimate goal of legal compliance and civilian risk mitigation. Of course, not all targeting decisions are made with the luxury of time and extensive process. But even in the time-sensitive context, the standard for compliance must be whether the judgment to attack was, under the circumstances, reasonable. Making such judgments against illicit enemies is the great challenge of the contemporary battle-space.
Featured image: U.S. Army Sgt. Robert V. Graham, of Fayetteville, N.C., is greeted by young villagers during a mission to assess an irrigation ditch-clearing project in the Beshood district of eastern Afghanistan’s Nangarhar province. Photo by Sgt. Albert Kelley, US Army. Public domain via Wikimedia Commons.
The post The principle of distinction in complex military operations appeared first on OUPblog.

March 20, 2016
Passion season / Bach season
The arrival of Lent and the anticipation of Holy Week on the Christian liturgical calendar bring with them what professional musicians call “passion season.” In a close parallel to “Messiah season” in December, singers and players hope to find work performing musical settings of the crucifixion narrative, to help audiences and congregations listen and worship and to help get themselves through the next few months’ rent.
Musical settings of the passion go back to the beginning of the Christian faith, starting with chanted recitations of the gospel narrative. Enhancement of these intoned liturgical presentations with more elaborate vocal music started in the fifteenth century, and by the eighteenth there were settings for voices and instruments adorned with extensive passages of poetic commentary and reflection designed to evoke listeners’ emotional responses to the story. The most famous works of this kind are, of course, the two surviving settings by J. S. Bach: the St. John Passion BWV 245 to an anonymous libretto (first version 1724), and the St. Matthew Passion BWV 244 to a text by Christian Friedrich Henrici (first version 1727).
Approaching their 300th anniversaries, these works might be the oldest pieces in the standard repertory not routinely found in the specialized “Early Music” bin of the digital record store—both landed instead squarely in the domain of mainstream classical music. After the definitive transformation of these compositions from functional liturgical pieces into artworks typically heard in concert halls, it is striking that in today’s world they should remain so closely tied to the season in which they were first heard three centuries ago.
Of course there are listeners for whom performances of the Bach passions remain closely connected to their faith and to religious commemoration, and many performances do take place in church buildings. But it is rare to find a liturgical performance of the work today that parallels Bach’s presentation of these pieces in the Good Friday vespers service in Leipzig, where they formed part of a liturgy of hymns, a long sermon, and the presentation of the passion narrative according to one of the gospels.
Bach’s passion settings fell out of liturgical use shortly after his death in 1750, largely because their many commentary movements (solo recitatives and arias and poetic choruses) became textually and theologically out of date. They re-entered the repertory three quarters of a century later with the legendary performance of a greatly abbreviated St. Matthew Passion in 1829 under the direction of Felix Mendelssohn, transformed into works of musical history, of moral (rather than theological) edification, and even of emerging German national identity (with Bach as a symbol of German-ness). And they were concert works heard in public performance, not part of a liturgy.
Yet those first revived performances took place during Lent and Holy Week, maintaining a close seasonal link despite the change in the music’s context, and this association has stuck. Almost every aspect of modern performances of these works is different from Bach’s own presentations (in vocal and instrumental forces, performing context, and listeners’ understanding of stylistic elements and their meanings), yet the seasonal tie is still valued. Perhaps against all expectation, the musical world still observes passion season.
And passion season is also Bach season. Other works are occasionally presented—Heinrich Schütz’s passion settings from time to time, recent (Bach-inspired!) pieces like Krzysztof Penderecki’s now and then, the odd work by G. P. Telemann, C. H. Graun, or C. P. E. Bach—but passion repertory is overwhelmingly Bach repertory. Even the efforts to expand it are often Bach-centric, most notably the quixotic attempts to “reconstruct” the lost St. Mark Passion that surface again and again despite the impossibility of the task.
Not even Bach was so limited. His working portfolio included his own settings, of course, but not even they were stable; every time he performed the St. John Passion he revised it, using substitute commentary movements and sometimes new poetic texts. These changes, particularly in the large framing choral movements, altered the theological tone and focus of the work and arguably made it a new piece. The St. Matthew Passion was more fixed, but a recently discovered printed text for a reperformance of Bach’s now-lost St. Mark Passion BWV 247 shows that he revised that work as well.
And Bach did not restrict himself to his own settings. On several occasions he performed a St. Mark passion he attributed to Reinhard Keiser, including once in a version that incorporated movements from a passion by Georg Friedrich Händel. And in the last few years scholars have identified a setting by Bach’s contemporary Gottfried Heinrich Stölzel—a work of a somewhat different textual and musical type—that Bach performed in the 1730s. In all there was more variety in the passion repertory heard under Bach in Leipzig than in our concert life today, to say nothing of the music composed and performed by other musicians of the time.
Passion season offers employment for musicians and inspiration for religious adherents, and it gives us, as listeners, the annual opportunity to discover new things in passion performances presented in a wide array of musical interpretations. But it is worth recognizing, at least, that passion season is Bach season, and that this is a distinctive feature of our modern musical life in the twenty-first century.
Featured image: “Leipzig, Neues Bachdenkmal an der Thomaskirche” by Andreas Praefcke. CC BY 3.0 via Wikimedia Commons.
The post Passion season / Bach season appeared first on OUPblog.

Exam preparation: More than just studying?
Do you know of a colleague who is extremely good at their job, yet cannot pass the professional exams required to ascend the career ladder? Or an exceptionally bright friend – who seems to fall apart during exam periods? Or do you yourself struggle when it comes to final assessments? I’m sure most of us are familiar with situations like this, as they are a very common occurrence. Failure to pass specialist exams in one’s field is not down to lack of intelligence or an inability to do the job. Rather, it is usually down to inadequate preparation for the examination.
“Studying” refers to learning the knowledge competencies required by an exam syllabus or training curriculum. By contrast, “exam preparation” means learning to present the required parts of that knowledge clearly, in the format demanded by each component of the exam – in such a way as to demonstrate competence and confidence. A large part of this involves practising each exam component in a simulated environment in order to perfect performance. Further, it encompasses all the non-technical life skills required to arrive at the point of readiness to sit an exam: prioritisation, motivation, focus, support, time management, and importantly, life management.
Surprisingly, exam preparation is seldom alluded to in training or practice, and is almost never taught – the focus being on acquisition of knowledge (i.e. traditional “studying”). I often refer to practice and preparation as covering all the things no one ever tells you, yet you are assumed to know, and indeed need to know, to pass the exam!
Specific exam preparation is underrated. There is an assumption that, by studying hard and having the knowledge, you can simply turn up and pass the exam. What’s more, there is a feeling that you deserve to do so if you have put in the hours beforehand.

To prepare for an exam, you must treat it as you would any other significant challenge in your life; by getting everything in place beforehand, you maximise your chance of success. Just as elite sportsmen and women show, winning is about more than the individual. Winning is the result of sustained efforts by a multi-disciplinary team. We, as academics, clinicians, or students, are no different. To succeed, you need to identify your team members and get them prepared. Decide how they each will help you and delegate roles and chores. Cleaning, cooking, and childcare should be allocated to partners, grannies or nannies, or hired external help. Paying for a cleaner each week, or to have your shopping delivered is a small cost when considered as part of the total cost of sitting (and failing!) professional exams. This is not wasted money on luxury services; this is just as much an essential part and cost of preparation as the textbooks you buy.
Next, identify the obstacles to success and plan how to deal with each of them. What will you say to those who demand your time? How can your team support you in this? What is necessary, and what is not – how can you prioritise your time to allow more for exam preparation? Can you take annual leave, study leave, unpaid leave? Can you swap nightshifts with a colleague? Can you stop aimlessly procrastinating online in your spare moments?
One of the main threats to success is maintaining motivation over a prolonged preparation period. It is important in the early days to establish good, regular study habits, so that there is no room for negotiation later on. It goes without saying that to maintain your campaign, you really need to want to pass, to want to succeed so much that you make the sacrifices required to do so.
Many students say, almost in desperation, they are 100% committed to the exam and all that it involves. On closer questioning, what they actually mean is they really want to pass but do not want to put in place the difficult measures or make the changes required to bring this about. This is often presented as “I have young kids,” “my wife works shifts,” or “I always watch football at the weekend.” These all sound entirely reasonable, and it comes down to your choice how you spend your time. Preparing for exams is much easier if you can be selfish and put yourself and your goal at the top of your list of priorities, for a few weeks and months.
Effective exam preparation is difficult in the short term, but passing exams efficiently, at the first sitting, will bring many rewards in the longer term. As Abraham Lincoln reputedly said, “Give me six hours to chop down a tree, and I will spend the first four sharpening the axe.”
The choice is yours but remember, nothing changes if nothing changes… and fortune favours the prepared.
Featured Image Credit: ‘Knowledge, Book, Library’ by DariuszSankowski. CC0 Public Domain, via Pixabay.
The post Exam preparation: More than just studying? appeared first on OUPblog.

The questionable logic of international economic sanctions
Whatever the international crisis – whether inter-state war (Russia-Ukraine), civil strife (Syria), nuclear proliferation (North Korea), gross violations of human rights (Israel), or violent non-state actors on the rampage (ISIS, al-Qaeda) – governments, pundits and NGOs always seem to formulate the same response: impose economic sanctions. In the mid-20th century, only five countries were targeted by sanctions; by 2000, 50 were. Once obscure and rarely used, sanctions are now central to international economic and security policy.
Troublingly, though, proponents of sanctions generally offer only the vaguest account of how they expect the imposition of economic pain to elicit political gain. Historically, these accounts invoke liberal ideas of statehood. Thus, target rulers are understood as rational utility maximisers: if sanctions impose costs exceeding the benefits of their objectionable policies, rulers will change course; if they don’t, the harmed population will rise up and force them to do so. This ‘naïve theory of sanctions’ was disproven as early as 1967 by Galtung’s study of Rhodesia – where the population rallied around the targeted regime. Nonetheless, it surfaced again in the 1990s, when Western ambassadors declared that sanctions should aim to harm the Iraqi population in order to coerce Saddam Hussein. Infamously, US Secretary of State Madeleine Albright said that 500,000 children’s deaths were ‘worth it’. But, of course, this humanitarian disaster did not unseat the Iraqi regime.
The resultant political backlash led policymakers to adopt so-called ‘smart’ or ‘targeted’ sanctions. These measures supposedly target those directly responsible for wrongdoing, avoiding ‘collateral damage’. They typically involve financial restrictions, travel bans and other inconveniences targeted at a few dozen to a few hundred individuals, companies or state entities.
While this sounds superficially sensible, its underpinning logic is actually no less ‘naïve’. It assumes that target states are driven entirely by the preferences of a small clique of individuals, such that tweaking their personal incentives will lead to policy change. This is nonsense. Despite media representations to the contrary, even highly authoritarian states are based on coalitions of social and political forces – which are often surprisingly broad, and shape what regimes can and cannot do. Saddam’s regime, for instance, was underpinned by a shifting coalition of Sunni tribes, urban middle and working classes, a ‘fat cat’ contractor class, smugglers, rural landlords and Kurdish collaborators. While frozen bank accounts or visa bans may irritate authoritarian leaders, they matter far less than keeping ruling coalitions satisfied and intact.
Targeted sanctions pose little threat to these coalitions because they are generally rather trivial and more easily evaded than comprehensive measures. For most targets, the costs are minor compared to the spoils of continued rule or the political issues at stake. Many targeted elites simply don’t have financial or other linkages to Western states – as in Myanmar, for instance, whose generals sourced all the personal services they wanted in Singapore and other Asian states. And of course, since they are targeted precisely because they are powerful, the targets are often able to evade restrictions, by using proxy companies or generating false passports, for instance. Elites are also well placed to displace costs onto the wider population, creating the very collateral damage that ‘smart’ sanctions are supposed to avoid. In Myanmar, for example, targeted ‘cronies’ of the military regime simply lobbied for more business concessions and other scams that imposed higher domestic prices.
Moreover, despite the aura of precision surrounding ‘targeted’ sanctions, targeting is often done extremely crudely. In Myanmar’s case, pages were simply torn from the local Yellow Pages and sanctions slapped on the business owners listed there – resulting in sanctions being imposed even on Western sympathisers. This has led to legal challenges in the EU. Frankly, smart sanctions are often pretty dumb.
Given all these concerns, it’s perhaps unsurprising that even their cheerleaders admit that sanctions – whether comprehensive or targeted – fail two-thirds of the time. Critics say their success rate is closer to five percent.
Clearly, what’s needed is a far more modest assessment of our very limited capacities to engineer social and political outcomes in other states – a lesson underscored by the failure of other forms of intervention, like statebuilding operations. If we are to continue using sanctions, we need far more serious assessments of target societies and far more sophisticated models of regime dynamics. We need to study the coalitions underpinning regimes and those supporting alternatives, and consider how sanctions will impact these different groups and the struggles between them. We need to tell plausible causal stories about how imposing economic pain is likely to lead to the concessions sought – about the mechanisms by which sanctions are supposed to ‘work’. And we need to monitor sanctions to see whether these mechanisms are being triggered, and amend them accordingly.
These basic tasks are, astonishingly, not undertaken by any state or international organisation currently imposing sanctions. Sanctions are thus being imposed on a wish and a prayer – in the vague hope that, somehow, they will translate into the outcomes sought. Given the real and often severe damage inflicted on target states and societies, that is highly irresponsible, frequently counter-productive and, for a policy often justified with appeals to morality, decidedly unethical.
Image credit: Fence Barbed Wire Clouds Gainesville B&W by cdsessums. CC BY-SA 2.0 via Flickr.
The post The questionable logic of international economic sanctions appeared first on OUPblog.

“The economics of happiness” – an extract from Happiness Explained
What is happiness and how can we promote it? These questions are central to human existence, and human flourishing now plays a central role in the assessment of national and global progress. Paul Anand shows why the traditional national income approach is limited as a measure of human wellbeing and demonstrates how the contributors to happiness, wellbeing, and quality of life can be measured and understood across the human life course. The following extract looks at the connection between income and wellbeing.
A significant strand of economics research into quality of life seeks to understand the relationship between income and life satisfaction, and thereby to address one of life’s ultimate questions—does money make us happy? The simple question does not always lead to straightforward answers, and we shall look at some of the relations between life satisfaction, income, age, employment, and the affluence of others that have featured in this field. A natural starting point can be found in the work of Richard Easterlin, who showed that throughout a decade of significant GDP growth, average levels of life satisfaction in the U.S. population had remained relatively flat. One can argue that, as income is unbounded and life satisfaction was reasonably close to the top of the measurement scale at the start of the period, the result was not that surprising, but it helps to raise questions about the reasons for pursuing income growth. If asked, most people would say they would be better off if their income were increased and yet—in terms of our experience of life—it seems it doesn’t actually push the needle over the long term.
The paradox has been challenged more recently in a survey of the ‘new stylized facts’ about income and happiness, in which three economists have suggested that life satisfaction is, in fact, positively associated with income and that this constitutes a refutation of the Easterlin paradox. Their observations include the facts that: richer people report higher life satisfaction than poorer ones; people in richer countries report higher life satisfaction than those in poorer countries; economic growth over time is related to rising life satisfaction; and there is no satiation point beyond which the relationship between income and wellbeing diminishes.
This body of evidence serves to make the point that connections between wellbeing outcomes and income inputs can be assessed in a variety of ways, that comparisons at one point in time take a form of their own, and of course that important positive connections between experience and income can be found. However, much if not all of this evidence is also consistent with the fact that experiential measures of life satisfaction are relative judgements. For example, when comparing countries, the relationship between income and life satisfaction flattens off after a country’s average income reaches about $15,000 (in 2005 prices)—not a huge sum for average households in high-income countries. More income beyond this will enable you to consume or save more, of course—but don’t expect to feel much better as a result.
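To make the arithmetic of that flattening concrete, here is a toy sketch assuming the log-linear relation commonly fitted in this literature (satisfaction rising with the logarithm of income); the functional form and the slope are illustrative assumptions, not estimates from the extract.

```python
import math

# Toy model (an assumption for illustration, not an estimate from the text):
# satisfaction = a + b * ln(income). Under a log relation, each *doubling*
# of income buys the same fixed increment, so the gain per extra dollar shrinks.
def life_satisfaction(income: float, a: float = 0.0, b: float = 1.0) -> float:
    return a + b * math.log(income)

base = life_satisfaction(15_000)
for income in (15_000, 30_000, 60_000, 120_000):
    gain = life_satisfaction(income) - base
    print(f"${income:>7,}: gain over $15,000 = {gain:.2f}")

# Moving from $15k to $30k registers the same gain (b * ln 2, about 0.69) as
# moving from $60k to $120k, though the latter costs four times as much extra income.
```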
Employment, Income, and Health
Job design and unemployment can have particularly significant impacts on experience. Using data on thousands of working-age adults in Germany, one early study found that the negative impact on life satisfaction of being unemployed was some three times that of being in bad health. There were considerable variations across the life course, however, with younger people being particularly affected; for at least some older people, the decision not to be employed reflects a positive choice to retire, so the study leaves open a question about the impact of forced unemployment. Nonetheless, the effect on experience when compared with health problems is rather surprising, and it provides a reason, additional to the obvious financial ones, for making sure that young people have access to decent work.
And indeed, a review by psychologists covering over a hundred empirical studies confirms that unemployed individuals have lower psychological and physical wellbeing and, therefore, provides evidence that, for a large proportion of the young, being without work is not a state freely chosen. The same study finds, in addition, that while the duration of unemployment has an impact on mental health, the level of unemployment benefits does not. Most significantly perhaps, for those who are unemployed, the centrality of work in their lives, their personal, social, and financial resources, time structure and coping strategies are more strongly related to experiential wellbeing than is human capital—at least as measured in terms of educational attainment.
Coping strategies can play an important role by helping people in their search for work and managing their emotional reactions to the lack of decent and full employment. Repeated but unsuccessful job search is inevitably discouraging and dealing with emotional responses could help people through difficult periods. There is evidence, for example, that people restructure daily life in response to spells of unemployment in very different ways: some are able to organize their time, keep a sense of purpose, carry on with their activities, and avoid excessive dwelling on the past while others find all of these things a challenge.
There is also some evidence that worklessness has an impact on future outcomes. Psychological wellbeing and physical wellbeing are not statistically associated with chances of returning to work (according to this review), suggesting that either these variables are poorly measured or that environmental factors are much more important predictors of the return to work. In addition, previous spells of unemployment have been shown to reduce life satisfaction, for both those in employment as well as those out of it. In other words, unemployment has the capacity to leave a psychological scar on those who experience it and, given the implications for future prospects, it would seem that the impact is well founded.
Featured image credit: happiness field nature by marcisim. Public domain via Pixabay.
The post “The economics of happiness” – an extract from Happiness Explained appeared first on OUPblog.

Original pronunciation: the state of the art in 2016
In 2004, Shakespeare’s Globe in London began a daring experiment. They decided to mount a production of a Shakespeare play in ‘original pronunciation’ (OP) – a reconstruction of the accents that would have been used on the London stage around the year 1600, part of a period known as Early Modern English. They chose Romeo and Juliet as their first production, but – uncertain about how the unfamiliar accent would be received by the audience – performances in OP took place for only one weekend. For the remainder of the run, the play was presented in Modern English. The poor actors had to learn the play twice.
The experiment was a resounding success. It turned out that all sorts of people were interested in original pronunciation – what it sounds like, how it affects actors’ performance, how historical phonologists reconstruct it (the ‘how do we know?’ question). At the talkback sessions following the performances, alongside British Shakespeareans there were early music enthusiasts, people involved in heritage sites, and visiting theatre buffs from abroad. They all had one thing in common: they wanted to get closer to the speech or song patterns that would have been around in the Jacobethan period. The Romeo performances had convinced them that this was possible, and they wanted a slice of the action. As did the Globe itself, of course. The following year the experiment was repeated. But instead of a tentative toe in the linguistic water, a new production of Troilus and Cressida was entirely devoted to OP.
Ten years on, it’s interesting to reflect on the way events subsequently ‘galloped apace’. In the Shakespeare world, the Globe went in other directions and the initiative moved to the United States. In 2006, OP extracts from Shakespeare were presented during the celebrations of the 400th anniversary of Jamestown. In 2007, OP readings took place in an off-Broadway venue in New York. In 2010, a full-scale OP production of A Midsummer Night’s Dream was put on at Kansas University, and this was followed up by a recording for radio and a DVD (now available commercially). In 2011, another university production, this time at the University of Nevada (Reno), mounted an OP production of Hamlet. My actor/director son Ben, who was becoming an expert in OP performance, was invited to be an artist in residence and to play the Dane.
I was the consultant on both these productions and saw the results. Thanks to solid and sustained periods of rehearsal by all concerned, the original pronunciation was phonetically excellent. It’s absolutely essential to devote time and expertise to ensure that the pronunciation is confidently and consistently produced. There have unfortunately been a few instances of companies jumping on the OP bandwagon, and rushing out a production, but without spending the time needed to develop consistency and to think through the various choices that need to be made (for OP allows several alternative pronunciations, just as English accents do today).
The number of works I know of that have been produced with appropriate attention paid to the OP has grown dramatically over the past three years; they include the Sonnets, Twelfth Night, As You Like It, Julius Caesar, The Merchant of Venice, Macbeth, and Pericles. At the same time, the number of resources has increased, so that more people are able to hear what OP sounds like, notably via the British Library CD, Shakespeare’s Original Pronunciation, an anthology of extracts curated by Ben in 2012. Ben’s reading of Sonnet 141 in OP for the best-selling app The Sonnets (2013) made the accent reach a wider audience than ever before. But the recording he and I made on OP at the Globe for the Open University in 2011 has had the widest reach yet. It went viral, with over two million hits to date.

In addition to the website created to accompany Pronouncing Shakespeare, there’s now a website dedicated to the whole subject of original pronunciation – going well beyond Shakespeare to include anyone exploring accents from any period of the history of English. That’s where you’ll find out about those who are using OP to produce fresh versions of Dowland, Byrd, and Purcell, or projects involving other authors.
Two of these other authors have had special attention – one earlier, and one later. I made a CD of William Tyndale’s Matthew Gospel for the British Library, in the OP of the early 16th century – a notable difference is that silent letters in words like know are pronounced, so we get gnashing of teeth with a mouth-watering onset. And Ben adopted the persona of John Donne for a recording of his 5 November 1623 sermon. It was selected by the curators of the Virtual St Paul’s project – an online recreation of how St Paul’s would have looked and sounded at the time, with the aim of answering the question of how it was possible for 2,000 or more people to hear Donne speak in the Cathedral grounds.
With all this going on, the Globe eventually took another bite of the apple, the opportunity being provided by the completion of the new indoor theatre at Shakespeare’s Globe, called the Sam Wanamaker Playhouse (named after the American actor whose vision it was to reconstruct the Globe). In July 2014 the OP story was told in a three-part series of events, using play-extracts, sonnets, and songs, and ending with a full reading of Macbeth by the Shakespeare ensemble of Ben’s company, Passion in Practice, the foremost developers of OP practice over the past few years. A year later, they performed Henry V in the Playhouse, for the anniversary of the Battle of Agincourt. In January 2015, there was a fascinating OP Pericles in Stockholm, accompanied by violinist Daniel Hope and the Trondheim Soloists, and this will be taken to New York in 2016. Other plays and playwrights are waiting in the wings for an OP production, and the demand for resources is urgent – which is why I compiled my Dictionary of Original Shakespearean Pronunciation for OUP. It will be useful for the works of Shakespeare’s contemporaries too.
It’s an exciting time. People often say that there’s nothing new to be learned about Shakespeare, given that he has been the subject of study for hundreds of years. Not so, when it comes to OP. Every time I explore a play in this way I discover something new – some previously unnoticed piece of wordplay, for example – or experience a fresh auditory impact from individual lines and interactions. With a play that has lots of rhymes that don’t work in Modern English (such as Dream), suddenly all the rhymes work. Audiences immediately notice the effect. When the Three Witches open Macbeth, they speak in rhyming couplets (as witches – and fairies – always do), but ‘Upon the heath’ doesn’t rhyme with ‘There to meet with Macbeth’ in Modern English. It does in OP. Heath was pronounced with a more open vowel.
An exciting time lies ahead. As of 2016, only a dozen plays have been explored in original pronunciation, and few places have yet had the chance to hear it in action. Over the next few years I’m expecting there to be many more occasions for audiences to experience an OP production, so that they can judge for themselves the dramatic and aesthetic impact of presenting a play or a poem in a way that is as close as possible to how it would have been performed 400 years ago.
The post Original pronunciation: the state of the art in 2016 appeared first on OUPblog.

Imagining zombies
Understanding the relationship between the mind and the body remains one of the most vexed problems in philosophy, cognitive science, and neuroscience. Throughout much of the last hundred years, physicalism has been the orthodox position in the philosophy of mind. Physicalist views share the characteristic attitude that mental phenomena — such as beliefs, desires, experiences and emotions — are either nothing but physical phenomena — brain states, say — or are in some important sense accounted for or made real by physical phenomena.
Physicalism has not reigned unchallenged, however. A number of arguments have been raised which promote dualism in its place — the view that fundamentally, the mind and body are separate, and mental phenomena can never be adequately characterised in terms of physical goings-on.
Perhaps the most prominent and widely discussed of these is the ‘Zombie Argument’, developed and defended by David Chalmers over the past twenty-five years or so — although the line of thought behind it goes back at least as far as Descartes.
Chalmers’ argument focusses on one particular aspect of mental phenomena – phenomenal experience or that-which-it-is-like to undergo a particular mental process or to be in a particular mental state, such as:
“… the felt quality of redness, the experience of dark and light, the quality of depth in a visual field … the sound of a clarinet, the smell of mothballs… bodily sensations from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion … the experience of a stream of conscious thought.” (Chalmers, 2010)

In motivating the argument, Chalmers asks us to consider creatures that he calls ‘zombies’ — not these ones! — which are physically identical to human beings but which lack all phenomenal experience. For all the similarity of a zombie’s behaviour to ours, when a zombie peers out into the gradually darkening red-hued sunset; inhales the musty smell of her closet whilst strains of her daughter’s clarinet practice come screeching through the wall; when she cries out wildly due to the touch of a red hot poker, or that of her lover, and so on … there is nothing that it is like to be her. In other words, none of this is accompanied by phenomenal experience.
Zombies may well inhabit zombie worlds; worlds that are complete physical duplicates of our own, but without any phenomenal experiences occurring there. In such a world, for example, your ‘zombie twin’ is currently sat reading about zombies, just as you are, but there is nothing it is like for your zombie twin to do so.
The key to the zombie arguments is the following line of thought: if zombies are possible then physicalism must be false. This is because if all the physical features of a human have been duplicated and there’s still something missing as far as mentality goes, then whatever’s missing can’t be physical: if it were, it wouldn’t be missing! If it can be argued that zombies are possible, then it looks like a good argument against physicalism is in the offing.
The simplest version of Chalmers’s argument runs as follows:
(P1): Zombies are conceivable
(P2): Whatever is conceivable is possible
(C): Zombies are possible
The argument is valid: if the premises (P1) and (P2) are both true, then the conclusion, (C), follows. So any response to the argument ought to target the truth of one of the premises. Typically, those responding to the argument have accepted the first premise, that we can conceive of zombies, but have questioned the second, arguing that whether or not we are able to imagine or conceive of something isn’t a good guide to whether or not that thing is possible.
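For readers who like to see that validity claim made fully explicit, here is a minimal sketch in Lean 4. It formalizes only the argument’s form (Thing, Conceivable, Possible, and zombie are bare placeholder names introduced for illustration), so it establishes that (C) follows from (P1) and (P2), not that either premise is true.

```lean
-- A sketch of the argument's *form* only. `Conceivable` and `Possible` are
-- uninterpreted predicates: nothing here analyses what conceivability or
-- possibility amount to; the point is just that (C) follows from (P1) and (P2).
example (Thing : Type) (Conceivable Possible : Thing → Prop)
    (zombie : Thing)
    (P1 : Conceivable zombie)                 -- (P1): zombies are conceivable
    (P2 : ∀ x, Conceivable x → Possible x) :  -- (P2): conceivable → possible
    Possible zombie :=                        -- (C):  zombies are possible
  P2 zombie P1
```

The one-line proof term, P2 zombie P1, is just universal instantiation followed by modus ponens, which is why all the philosophical action lies in the premises themselves.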
However, it’s far from clear whether (P1) is in fact true: that is, whether we can really conceive of zombies. To see why, it’s easiest to consider some related things we clearly can conceive of, but which don’t live up to the aspirations of (P1). For instance, we can form a picture of a human being in our heads, and say to ourselves ‘and it doesn’t have any phenomenal experience’. But this is far from conceiving of an exact physical duplicate of a human which lacks phenomenal experience.
Here’s an analogy: think of a mechanical clock, indeed an exact duplicate of a mechanical clock you’re acquainted with. Can you conceive of the duplicate’s hands running anti-clockwise, rather than clockwise, or not running at all? You certainly could form a mental picture of the clock and say ‘and the hands run backwards’. But it’s not clear you could maintain this picture under scrutiny without making some change to the clock, say by rearranging the gears, or changing the direction of the motion imparted by the motor.
Additional pressure can be put on the notion of conceivability when one realises that things which are conceivable individually aren’t always conceivable in combination. Think about the following mathematical case: Goldbach’s conjecture says that every even integer greater than 2 can be expressed as the sum of two primes. Whilst we know the conjecture holds up to very large numbers, it remains unproven: given all our evidence, it could be true or it could be false. So it seems that, individually, we can conceive of Goldbach’s conjecture being true or of it being false: but we can’t conceive of both together, of its being both true and false. The contradiction here is obvious.
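In passing, the claim that the conjecture ‘holds up to very large numbers’ is the kind of thing a machine can check for any finite range, which is precisely why such evidence leaves both truth and falsity conceivable. Here is a minimal sketch of such a check (the bound of 1,000 is arbitrary):

```python
# Illustrative only: verify Goldbach's conjecture for small even numbers.
# A finite check like this is evidence, not a proof; the bound is arbitrary.

def is_prime(n: int) -> bool:
    """Trial division; fine for the small numbers used here."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n: int):
    """Return a pair of primes (p, q) with p + q == n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 1001, 2):
    assert goldbach_witness(n) is not None, f"counterexample at {n}!"
print("Every even n in [4, 1000] is a sum of two primes.")
```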
What about the zombie case? Well, it’s clear we can conceive of the notion of ‘an exact duplicate of a human being’, and, separately, of the notion of ‘lacking phenomenal experience’. Conjoining the two doesn’t lead to an obvious contradiction of the sort we saw in the Goldbach case. But it is far from obvious, given our relative lack of knowledge of the relationship between the mind and the body, whether or not a contradiction lies waiting to be unearthed in the notion of a zombie. Without further information on the nature of the brain, of mentality, and of the sorts of features we take to typify each, assent to (P1) should be withheld.
Featured image credit: ‘Scary Landscape Reflections’, by Leon Fishman. CC BY 2.0 via Flickr.
The post Imagining zombies appeared first on OUPblog.
