Oxford University Press's Blog, page 281

February 6, 2018

The Islamic monuments of Spain: four centuries ago and today

Spanish historians and antiquarians in the sixteenth and seventeenth centuries revised the medieval reception of Islamic monuments in the Peninsula as architectural wonders and exotic trophies. They endeavoured to re-appropriate these hybrid architectures by integrating them into a more homogeneous cultural memory focused on Spain’s Roman and Christian past. These early modern interpretations of Spain’s Islamic monuments may have been pushed aside by the cultural and ideological power of the modern Orientalist myth. However, awareness of those early modern discourses is significant because they did not vanish overnight, but merely became superficially less visible.


These sixteenth- and seventeenth-century ideas re-emerge now and again, reconstituting the readings of Islamic architecture that arose in the wake of the taking of Granada. Once the conquest and conversion stories had been told, Aljama mosques had begun to be called cathedrals (iglesias mayores). The possibility of their being pre-Islamic foundations had started to be entertained. The endurance of these old Christianizing and antiquarian arguments is probably a consequence of the parallel survival of the agenda that gave birth to them almost 500 years ago.


For the past few decades, Córdoba Mosque has been an ideological micro-battlefield where popular perception of Spain’s multicultural Islamic past and the Christian essentialism of the bishops and clergy responsible for the building cross swords. Consciously or otherwise, the latter have echoed, word for word, the Christian appropriation arguments created in the Early Modern Period. The cathedral chapter, for instance, has fostered academic research into the primitive Visigothic church that may or may not have stood on the site of the mosque. In the meantime, tourist information on the monument issued by the Church today refers to the Islamic construction as an interruption in the temple’s Christian nature and history. This attempt to interfere with visitors’ interpretation of the building, when most of them are precisely seeking encounters with the Islamic past, is usually unsuccessful and many find it preposterous. It can be understood as an example of the staying power of sixteenth-century thinking.



Mihrab of the Mosque–Cathedral of Córdoba by Ruggero Poggianella. CC BY-SA 2.0 via Wikimedia Commons.

Seville’s Giralda, formerly the minaret of the Aljama Mosque and today the cathedral’s belfry, enjoys an iconic power that goes well beyond its original Islamic identity. This is a consequence of several centuries of narrative emphasis on the metonymic identification of the tower with its patron saints as the visible embodiment of the city. While the Christian genii loci incarnate the presumed eternal spirit of a city dominated by Catholic liturgy, the Islamic builders are but a small part of a mythical discourse in which the issue under debate is not religion. Since the nineteenth century, Seville’s romantic exoticism has never been essentially or exclusively Islamic.


Lastly, in Granada, the sixteenth century debate on the model of inter-confessional religious coexistence is today livelier than ever. Washington Irving’s legacy seems to have obliterated all possibility of escaping the city’s natural identification with Islam. Even as it benefits from this image, however, local identity has never abandoned the Christianization discourse. Granada holds its controversial celebration of the Christian conquest year after year, and the myth of the Sacromonte Lead Books still thrives in the twenty-first century because some of the population believes that both narratives are necessary to counter the dazzling power of Granada’s Islamic heritage. These strategies were designed many years ago because, like other buildings of al-Andalus, the Alhambra was an icon that made it impossible to ignore the nation’s hybrid past.


Generally speaking, Spain’s Islamic legacy is more alive today than ever as a cultural and tourist commodity for international consumption. It is also at the heart of the multiculturalism debate that arose in the wake of 11 September 2001. This background feeds, to this day, the need for ideological negotiation through the monuments that embody our memory of the past.


Featured image credit: Mosque–Cathedral of Córdoba by Alonso de Mendoza. CC BY-SA 3.0 via Wikimedia Commons.


The post The Islamic monuments of Spain: four centuries ago and today appeared first on OUPblog.


Published on February 06, 2018 04:30

What does my cancer gene mutation mean for my family?

For 15 years I have counseled patients about what it means to carry a mutation in a gene that can lead to a higher risk of developing cancer. Hundreds of times I have said, “A mutation was found.” Our patients have different mutations in different genes. They come from different parts of the world. They speak a variety of languages, and bring their cultural heritage and expectations to our sessions. Their initial reactions are different; for some it is shock and sadness and for others it is relief to have an explanation and a tool for prevention. Yet, the same question is asked by almost everyone: “What does this mean for my family?”


For most patients, it means their parents, brothers, sisters, and children all have at least a 50% chance of carrying the same gene mutation. The implications are significant. While inherited cancer conditions lead to a greatly elevated risk for certain cancers, often at ages younger than typically seen, there are steps that can be taken to reduce the risk of developing cancer or to detect it at earlier stages. However, to take advantage of these potentially life-saving interventions an individual needs to know if they carry the family mutation.


After I disclose genetic test results to a client, we discuss which family members to inform and how the patient will do so. I give them a sample letter to share with their family, along with copies of their result. Armed with these tools and a crash course in genetics, patients are then asked to share information with family members.


“While inherited cancer conditions lead to a greatly elevated risk for certain cancers, often at ages younger than typically seen, there are steps that can be taken to reduce the risk of developing cancer or to detect it at earlier stages.”

Simple, right? You know you carry a mutation so you go home and tell your family. But families are not simple and communicating is inherently complex—as we can all attest. Just imagine how and what you would tell your relatives. Who might be more interested in this information? What current family dynamics might make the conversation difficult?


I have seen the downstream effects of this communication played out in clinic, both with the scenario of early-stage cancer detected in someone known to be at high-risk who is undergoing the right screening at the right time, as well as the more devastating flipside of a late-stage diagnosis that might have been prevented had gene mutation information been shared.


Even after a mutation is identified in a family, many people remain unaware and more than half of at-risk relatives don’t undergo genetic testing. We asked 136 of our patients with gene mutations: What happened after they were given their genetic test results? Our work was based at two Los Angeles hospitals (University of Southern California Norris Comprehensive Cancer Center and Los Angeles County+USC Medical Center) and at Stanford University Cancer Institute in Palo Alto, CA. Ninety-six percent of our patients shared with us that they had told a family member about their genetic test results. Encouragingly, 30% said family members had undergone genetic testing, some within the first three months.


Past research has been almost exclusively conducted in ethnically homogeneous, predominantly Caucasian groups. Uniquely, our study reflects the diversity of California, with 40% of the patients identifying themselves as being of Hispanic/Latino ancestry, 10% as having Asian ancestry, and 41% identifying with a Caucasian, non-Hispanic background. In addition to ethnic diversity, more than a third of our patients spoke a language other than English, more than 40% had less than a high-school education, and almost half were born outside of the United States.


Broadly speaking, there were no major differences among these diverse groups when asked about whether they shared their genetic test information with their family. However, some subtle differences emerged. For example, we also asked if they encouraged their family to undergo genetic testing. Our patients of Asian ancestry were less likely than those of Caucasian ancestry to report having encouraged their family to undergo genetic testing. Yet, the same patients of Asian ancestry were eight times as likely to report that their family members had undergone genetic testing. Why? Maybe their relatives were more motivated to undergo genetic testing after being informed and less encouragement was needed. Or there could be cultural differences in how “encouraging family members” is defined or how acceptable it might be.


There can be great hope in genetic information, despite its psychological weight. The maximum benefit of genetic information requires it to move through families. We found that our diverse patients, to whom we had to say, “a cancer gene mutation was found,” did, in fact, communicate this genetic test result to their family. And many of their family members went on to get the right genetic test within a matter of months. However, culturally informed interventions are needed to support and facilitate communication for families, as well as to enhance the preventive impact of genetic information.


Featured image credit: Family picnic by John-Mark Kuznietsov. CC0 public domain via Unsplash.


The post What does my cancer gene mutation mean for my family? appeared first on OUPblog.


Published on February 06, 2018 03:30

February 5, 2018

Can you pick up the ‘core’ of ten languages in a year?

I previously wrote about how Scientific English is a specialized form of language used in formal presentations and publications. It is rich in ‘rare’, or extremely low-frequency, words and the collocations that define them (e.g. we ‘sequence a genome’ or refer to a ‘stretch of DNA’). Learning to comprehend the meaning of such formal language requires considerable exposure, and writing it well truly exercises one’s knowledge of the ‘long tail’ of vocabulary.


By contrast, ‘the’ is the most common word in English, and we use ‘a’ and ‘or’ in the above examples all the time. High- and low-frequency words can easily be identified by getting computers to do the heavy lifting of counting, and frequencies vary considerably. In the 560-million-word Corpus of Contemporary American English (COCA) (the list of the first 5,000 words is free) one can look up the rank of any word. In this corpus, for example, ‘attic’ is 7309, ‘unsparing’ is 47309, and ‘embryogenesis’ is 57309.


This considerable variation has many ramifications for language acquisition and use. A developmental biologist might require the word ‘embryogenesis’, but no one would think of it as the first word to learn in another language (that word is actually ‘thank you’). Of course, context defines utility, but in general ‘frequency’ is a hard and fast rule: one will encounter ‘cat’ long before ‘catnip’, ‘dog’ before ‘dogcatcher’, and ‘cat’ and ‘dog’ repeatedly before ‘armadillo’, ‘aardvark’, or ‘baleen whale’.


The nature of the word frequency curve offers good and bad news to language learners. On the positive side, analysis of the distribution of frequencies yields a surprising statistic: the 100 most common words in English make up fully 50% of spoken and written language. One can get a long way by learning the front of the curve.


The flip side is that the frequency curve drops off precipitously and the going gets harder. The ‘lexical’ words of a language (e.g. nouns and verbs that carry information) are numerous and fall off fast in use. The ‘diminishing returns’ statistic states that with the most common 1,000 words you’ll be able to read 75% of most texts, with 2,000 you’ll be at 85%, but a further 1,000 adds less than 2% more.
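
To make the arithmetic behind these coverage figures concrete, here is a minimal sketch of how cumulative coverage can be computed from a frequency-ranked word list. The file name and layout are assumptions for illustration; any list of words with counts sorted by descending frequency (for example, one derived from COCA) would work the same way.

```python
# Minimal sketch (illustrative only): cumulative text coverage of the
# top-N most frequent words. Assumes a hypothetical tab-separated file
# "word_counts.tsv" with one "word<TAB>count" pair per line, already
# sorted by descending frequency.

def cumulative_coverage(counts, cutoffs=(100, 1000, 2000, 3000, 5000)):
    """Share of all running tokens covered by the top-N ranked words."""
    total = sum(counts)
    shares = {}
    running = 0
    for rank, count in enumerate(counts, start=1):
        running += count
        if rank in cutoffs:
            shares[rank] = running / total
    return shares

if __name__ == "__main__":
    with open("word_counts.tsv", encoding="utf-8") as fh:
        counts = [int(line.rstrip("\n").split("\t")[1]) for line in fh]
    for rank, share in sorted(cumulative_coverage(counts).items()):
        print(f"top {rank:>5} words cover {share:.1%} of the corpus")
```

Run against a real frequency list, output in this form makes the diminishing returns described above visible at a glance: each additional thousand words buys far less coverage than the previous thousand did.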


Getting to native level means learning thousands of words that are hardly used. Authors of literature take full advantage of this fact to find exactly the right, colourful words, thus flavouring their creative works. This steep learning curve is what makes the mastery of scientific language by non-native speakers such an achievement.


Similar patterns of low- and high-frequency ‘words’ are found in DNA, as the rule of ‘a few common and many rare’ is a fundamental part of how nature works. An integral part of the study of DNA is the use of computers to mine such patterns. My field of research, bioinformatics, works at the intersections of computing, biology, math, statistics, and data management. This domain exploded into existence because of the advent of genomics and, given the wealth of data and speed of progress, adopted an ‘open source’ ethos: the collective view that the benefit of freely available and shared data and software is accelerated discovery.



Language by monika1607. CC0 public domain via Pixabay.

With such thoughts, I recently tackled Swedish while living in Sweden as a guest professor. I’d dabbled in other languages, including living in Japan for a year teaching English, so I was well-versed in failing to get language to stick. Just before moving, I spent a year self-studying Hindi. I was intrigued because it is one of the world’s ‘big five’ languages, a lingua franca in India, derives from the ancient language of Sanskrit, and is a well-spring of wisdom for both Greek and Latin.


Attempting Hindi ‘for fun’ opened my eyes to how the Internet has revolutionized access to native-speaking teachers all over the world and the wealth of online materials they produce. So I proceeded with my life-long language ‘experiment’, aiming to learn how to read basic Swedish as quickly as possible using free, online materials. I collected the Swedish words I saw most frequently around me in a ‘word log’ and trawled YouTube for productive listening videos to gain an overview of Swedish grammar. Within two months, with about a 1,500-word recognition vocabulary, I could read Easy Swedish News at 90%-100% comprehension.


Mostly, it offered a chance to think more deeply about how we learn language best. Working in a frequency-based way allowed a ‘memorization-free’ philosophy. I bulked up quickly by curating vocabulary lists, but I also consumed ‘real world’ materials (from ABBA songs in Swedish to Facebook ads). Picking my resources carefully meant I saw the same words over and over. I didn’t worry that I didn’t know a word until I’d interacted with it ten times, what I think of as the “10x” rule. I used the same method for Spanish, a language with a cognate pool more similar to English and even more online resources, in only two weeks. I can’t speak a word of these languages, but, satisfyingly, I feel I got a ‘flavour’ of each. I also feel just that bit more the ‘global citizen’.


My ‘linguistic tourist’ experience was overshadowed by the fact that so many teaching sources break the ‘frequency’ rule and few use it. I kept thinking how much faster I could learn with graded, interlinked resources. Ruminating on this, I most recently forayed into Estonian, a language more foreign (and therefore interesting) to English speakers than Hindi because it is outside the Indo-European language family. The experience struck home a blindingly obvious fact that we overlook to our peril: how much time is spent learning the English – it is always different, and knowing it is fundamental to learning the new language. This goes deeper than wishing for ‘uniform’ materials: there is a ‘true core’ to language, and we know scientifically that it is frequency-based. Plus, polyglots swear the trick to language is caring about the words you are learning in the first place.


What if the languages community agreed upon a shared ‘core’ hosted in the public domain and built resources around it? Could such an experiment one day support a crazy ambition to learn the ‘flavour’ of ten languages in a year? Let’s just say the first 1,000 words for reading?


A free and open “First Words” list could be built on in infinite ways, making it easy to learn memorization-free by the “10x” rule. My first wish would be a choice selection from the list, a ‘100 words for speaking’, engineered to cover sentence construction and grammar with a view to getting one speaking. Prioritizing the ‘5Ws and H’, the focus would be on beginner statements (“My name is…”), forming questions to support dialogues, greetings, a few power nouns and verbs such as ‘to be’ and ‘to have’, and key glue words (the, of, and, or, but, etc.). Even 20 such words are sufficient to cover pronunciation, the basics of sentence formation, and first grammar rules, and to support simple dialogue, as an over-simplified illustration of how a ‘language works’.


While dreaming, why stop at 1,000 words? At 2,000 words one is pretty much ‘fluent’ in daily conversation and at 5,000 can make good sense of a newspaper. There is no reason, in theory, it could not include the whole language (dictionaries) right up to the complexities of Scientific English. If you would like to collaborate, please contact me at unityinwriting(at)gmail(dot)com.


Featured image credit: Quote by Maialisa. CC0 public domain via Pixabay.


The post Can you pick up the ‘core’ of ten languages in a year? appeared first on OUPblog.


Published on February 05, 2018 04:30

Responding to the rise of extremist populism

The rise of extremist populism in recent years places liberal democracy, not to mention committed liberal democrats, in an awkward position. There has been an alarming rise in public support for such extremist movements, even in established liberal democratic states. In states such as Hungary, Poland, Turkey, and Venezuela, democratically elected governments are enacting illiberal and anti-democratic political goals and values into law and in some cases directly into their constitutions. Once in power, these movements have sought – among other things – to concentrate political power in the executive branch, subordinate the judiciary and the civil service to the executive, intimidate and disempower domestic opposition, undermine press freedoms, infringe on minority rights, limit freedoms of expression and assembly, control universities, and finally stoke xenophobic, anti-pluralist, anti-Semitic, and racist sentiments.


Viktor Orbán, Hungary’s Prime Minister, has characterized this development as “illiberal democracy.” Defenders of liberal democracy and theorists of populism, in turn, have responded by condemning this turn as simply undemocratic. The abuse of democratically obtained powers to dismantle liberal and democratic commitments is democratic suicide.


A sort of paradox sits at the heart of democratic suicide, however. If a majority or supermajority legally seeks to enact laws that undermine liberal constitutionalism and democracy, is it best characterized as undemocratic? What constitutional measures can be taken to prevent democratic suicide from happening, if any?


It is widely believed in the West today that democracy is the ultimate basis of political authority. If true, liberal democrats have little recourse when a majority or supermajority of the people legally amends liberal and democratic values out of the constitution – except to hope that voters will come around by the next election.


  Extremist populist movements seeking to dismantle liberal democratic states according to the ‘rules of the game’ are not a new phenomenon. 

Instead of outright denying the democratic nature of populist movements, I believe there is value in conceiving of them as instances of “illiberal democracy.” The legal rise to power of these extremist movements uncovers a tension within everyday notions of what democracy consists in, a tension at the heart of liberal democratic states. The tension is that, not only do democracy and liberalism have no necessary relationship to one another, they can even be inimically opposed. A democratic will expressed legally can pursue deeply illiberal legal goals. It can seek to enact laws overturning individual and minority protections as well as constitutional limitations on its power. To be sure, liberal constitutionalism and democracy combined tend to strengthen the values each promotes individually. And both are important values in their own right. But they are not the same.


By resisting the urge to subsume liberalism under democracy conceptually, politicians, jurists, and political scientists instead confront a dilemma. That dilemma is over whether democratic procedures or liberal values have higher constitutional authority. In other words, is the legally expressed will of the people, no matter what the content of that will expresses, the source of political legitimacy? Must liberal democrats remain true to their colors by accepting the outcome of democratic procedures, no matter how morally abhorrent they may be? Or, alternatively, do liberal constitutional values have higher authority? That is, must a democratic will – no matter how popular it may be – necessarily operate within a framework of basic liberal rights and other constitutional checks?


The first horn of this dilemma leads to the same conclusion as above: liberal democrats must accept the outcome of democratic procedures, no matter how undesirable it may be.


Deciding for the other horn of this dilemma avoids the paradox of the problem of democratic suicide. It results in constrained democracy and constitutional mechanisms that limit democratic legal change when directed at the fundamentals of liberal constitutionalism.


Constrained democracy originated with an unlikely source: the notorious jurist Carl Schmitt. In Weimar Germany, Schmitt insisted on separating liberal constitutionalism from democracy. In his state and constitutional theory, he argued that basic individual rights and the separation of powers alone provided a coherent and stable foundation for constitutional democratic states in the twentieth century. Moreover, he argued that basic liberty rights and other liberal constitutional values must be conceived of as more than constitutional laws, subject to the constitutional amendment procedure. They must be understood as amendment-proof constitutive commitments of the state. He argued that – if the Weimar state was to provide a lasting and stable public order – democracy, constitutional change, and the will of the people could only operate within a framework of inviolable constitutional commitments. In other words, democracy cannot be the ultimate basis of political authority. But, under the right circumstances, liberalism could be.


Carl Schmitt is a controversial figure because he joined the Nazi party once they came to power, incorporated vicious anti-Semitism into his writings, and legally defended early acts of the Nazi state. But committed liberal democrats today can reject Schmitt’s despicable actions without also throwing away the entirety of his state and constitutional theory. When paired with other policies intended to buttress liberal democratic states, like civic education and social welfare provisions, Schmitt offers a theoretical justification for constrained democracy and theorizes its core institutional mechanisms. He provides us with reasons why basic liberal constitutional commitments can be set beyond the reach of a democratic amendment procedure – including basic individual and minority protections as well as constitutionally guaranteed checks on an otherwise legally expressed democratic will. Schmitt’s thought has also been argued to have inspired the Eternity Clause (Article 79.3) of Germany’s current constitution, the Basic Law (Grundgesetz), which entrenches fundamental principles of human rights and human dignity beyond any legal change.


Extremist populist movements seeking to dismantle liberal democratic states according to the ‘rules of the game’ are not a new phenomenon. Their possibility is embedded in the nature of democracy itself. Recognizing this possibility, constrained democracy provides an invaluable theoretical framework to prevent such movements from legally subverting liberal democracy. Constrained democracy offers the best constitutional solution to the problem of democratic suicide.


Featured image: “Teilnehmer einer Anti-KOD-Demonstration in Bielsko-Biala” by Silar. CC BY-SA 4.0 via Wikimedia Commons.


The post Responding to the rise of extremist populism appeared first on OUPblog.


Published on February 05, 2018 00:30

February 4, 2018

Does nationalism cause war?

Nationalism is often blamed for the devastating wars of the modern period, but is this fair? Critics pinpoint four dangerous aspects of nationalism: its utopian ideology (originating in the late 18th century), its cult of the war dead, the mass character of its wars, and its encouragement of the break-up of states. I argue, however, that the case against nationalism is not proven.


A first charge is that the ideology of nationalism is itself a cause of war. Nationalism is a secular ‘religion’ that proclaims that the world is composed of unique and ancient nations which have exclusive homelands, and that the sacred duty of individuals is to defend the territory, independence, and identity of their nation. But critics say that nationalism is historical fantasy: such nations have not existed before the modern world, human populations more often than not have been intermingled, and attempts to separate them into exclusive territories generate conflict. Moreover, nationalists deny all existing agreements, including treaties between states, that are not based on the free will of peoples. Nationalism thus necessarily leads to war. An awkward fact for this argument is that the number of interstate wars in the era of nationalism (the 19th and 20th centuries) has fallen. Many nationalisms are pragmatic and conservative in character. In fact, warfare was more frequent in the period before modern nationalism, during the sixteenth and seventeenth centuries that were wracked by religious and imperial conflicts. The modern nation state system is arguably an effect of these wars.


A second accusation is that nationalism glorifies war through its cult of ‘fallen soldiers.’ The purpose of war commemorations, it is claimed, is to ensure a ready supply of recruits for future wars by appealing to the idealism as well as the ‘aggressive instincts’ of young males. Some have even argued that ‘regular blood sacrifice’ for the nation is necessary for social cohesion. To prevent social disunity, political leaders create external enemies, thereby diverting the violence inherent in human beings outwards rather than towards the powers that be.


But this is to simplify. While some politicians might evoke memories of ‘ancient hatreds’ in their drive for power, remembrance ceremonies of the world wars, at least in Western Europe, are rather ‘sites of mourning’ offering the lesson of ‘never again.’ The continued power of such ceremonies may be linked to the decline of traditional religions and the need to find a transcendent meaning for the tragedies of mass death. As George Mosse showed, nationalism took on the characteristics of a surrogate ‘religion’, appropriating the iconography and even the liturgies of Christian religion during the 19th century, to extol the fallen soldiers as national martyrs and make their military cemeteries places of pilgrimage. Nationalists promise a kind of immortality to all who die for the nation in ‘being remembered for ever.’



Flag europe switzerland by strecosa. Public domain via Pixabay.

A third argument is that nationalist wars, if less frequent than those of the early modern period, are more destructive, since nation states are based on a new contract between the state elites and the people. In return for citizenship, the masses agree to fight for the nation state. Before, wars were fought largely by military professionals and for limited objectives; now they are peoples’ wars conducted with unbridled passions. This began with the French Revolutionary period in the late 18th century, and by the twentieth century wars became total, involving all the population. In the mechanised conflicts of industrialised nation states during the First and Second World Wars, civilians were as central to the war effort as the military and became targets. At its extreme, war can slide into genocide.


This third position too is one-sided. As states have become more representative of the people (more national), there has been a major shift in public spending from guns to butter. Today, at least in Western Europe, the welfare state has replaced the warfare state. Moreover, liberal nationalists and their nation states have been prominent in constructing international laws and bodies designed to regulate the conduct of war. These include the Hague Peace Conferences, the Geneva Conventions, the League of Nations, and the United Nations, whose Charter narrowed the scope of legitimate war to that of self-defence. After the Second World War, interstate wars between the great powers have declined steeply, in part because of this new internationalism.


Admittedly, violence within states (including civil wars) has increased since 1945, much of which arises from secessionist movements of minorities which claim the right to national self-determination. This can lead to state breakdowns and the creation of ungoverned spaces that become havens for global terrorist movements.


A fourth criticism, then, is that the principle of self-determination can lead to the break-up of states. This accusation too is overdrawn. The principle of self-determination by itself should not lead to state breakdown. States can offer federal and consociational (power-sharing, as in Northern Ireland) arrangements through which different national communities can be reconciled. Much nationalist violence occurs in weak postcolonial states with multiple minorities that were rapidly constructed as empires dissolved after 1945. It is the absence of unifying national loyalties that is the problem. Furthermore, the international peace-keeping missions formed to restore order in conflict zones are led by coalitions of nation states, often working under United Nations mandates.


To conclude, there are many varieties of nationalism, some xenophobic, others liberal-democratic and internationalist. Even the former is not necessarily a cause of war: other factors are usually required, such as an external threat and a breakdown of the state. Indeed, where this occurs, the restoration of stability in the contemporary period is above all dependent on nation states acting in concert in the name of a rules-based international order.


Featured image credit: Graves war cemetery by MaartenB. Public Domain via Pixabay.


The post Does nationalism cause war? appeared first on OUPblog.


Published on February 04, 2018 03:30

Jacopo Galimberti on 1950s and 1960s art collectives in Western Europe

The phenomenon of collective art practice in the continental Western Europe of the late 1950s and of the 1960s is rarely discussed. In his book Individuals Against Individualism, Jacopo Galimberti takes a comparative perspective, engaging with a cultural history of art deeply concerned with political ideas and geopolitical conflicts. He focuses on artists and activists, and their attempts to depict and embody forms of egalitarianism opposing Eastern bloc authoritarianism as much as the Free World’s ethos.


Individuals Against Individualism examines the phenomenon of collective art practice in the continental Western Europe of the late 1950s and of the 1960s. What drew you to focus on this phenomenon and this period?


Over the past twenty years scholars and curators have concentrated on art collectives. The literature on this topic is engaging, but few publications have developed analytical tools to distinguish between collaborations that are sometimes informed by antithetical principles. The concept of “collective art practice” serves this purpose. When I pinned down this notion I began to discern the contours of a neglected cultural and artistic phenomenon. I came to realise that the worldwide protests of 1967-1969 marked only the beginning of a third phase in the temporality that I was exploring. The origin of collective art practice can be traced to 1956-1957, and is located far from the main centres of the western art world. This discovery was fascinating, because it pushed me to look at the political turmoil of 1956, which is an unusual year to begin an art-historical narrative. In hindsight, I believe my search for alternative timeframes owed a lot to the conjuncture that Europe was experiencing at the time of my research. The main ideas of the book were developed between 2009 and 2012, in the midst of a political, economic, and humanitarian crisis that has radically changed how we see the world and the Left.



I did my best to combine “theory” with social history.



How did you go about your research for this book? Did you come across anything that you found particularly interesting or surprising?


I wanted my research to be based on primary sources and to be truly transnational, so I had to become more transnational myself. I started learning German, for example. In terms of methodology, I did my best to combine “theory” with social history. Sarah Wilson had a profound impact on my approach, which involved dozens of interviews with artists. I was particularly surprised by what I have called “the myth of the invisible artist”, which is a counterpart to what Hans Belting has described as “the invisible masterpiece”. I discuss this in Chapter 4, but it is a leitmotif of the entire book.


In what ways did political ideas and geopolitical conflicts affect the cultural history of art in this period?


During the first two decades after World War II, art was often considered politics by other means. This was implemented through cultural diplomacy (for example, Malraux bringing the Mona Lisa to John F. Kennedy in 1963), blockbuster shows, and high-profile exhibitions such as Documenta and the biennials. But there were also less straightforward ways to exert “soft power”. In my book, I focused on the idea of the individual artist, which had unprecedented geopolitical overtones in the 1950s. The figure of the individual and individualist artist became part and parcel of the cultural confrontations of the Cold War, which is a misleading term as many proxy wars actually took place. Some leftist artists did not want to be treated as “useful idiots”, and put in place authorial policies that contradicted the mainstream views of the artist.


What was the role of ‘minor’ art centres in heightening interest in collaborative art practices?


In the 1960s, the dichotomic frameworks of the post-war years began to be challenged on a geopolitical level; think of the “Third World”, a term that then had no derogatory connotations (in fact, quite the opposite). The western art world saw a similar shift, with “minor” art centres like Düsseldorf, Milan, Nice, and Turin quickly developing a thriving art scene. “Minor” centres were the cradle of collective art practice; two Spanish cities, Cordoba and Valencia, proved crucial, but so were Munich, Padua, and Zagreb. This having been said, Paris and New York still catalysed artists and money.


How do you think Individuals Against Individualism paves the way for further research in this area?


The book addresses a myriad of themes that deserve further investigation, such as the cultural diplomacy of the Francoist regime and the geopolitical agenda of the Paris Biennale. The issue of “minor centres” is another engaging topic. I hope my research will contribute to the emancipation of the discipline from its discriminations, illuminating, for instance, how the “margins” were also in the heart of Europe and cleaved Western art canons. My research could also pave the way for studies of contemporary collectives. The work and theoretical discourse on authorship of Claire Fontaine (a duo) has conspicuous precedents in the 1960s. Some governments and cultural institutions still treat artists as “useful idiots”, but the goals have changed. It is no longer covert anti-communist campaigns that enlist their “personality” and “freedom”, but rather property developers and authoritarian states opening world-class museums. There is nothing nostalgic in my book, I hope; the 1960s are a toolbox to understand, and act in, the present time.


Featured image credit: I-ypszilon (Amás Emodi-Kiss, Kata György, Csaba Horváth and Tamás Papp), National Monument of the 1956 Hungarian Revolution and War of Independence, 2006, Budapest. Used with permission.


The post Jacopo Galimberti on 1950s and 1960s art collectives in Western Europe appeared first on OUPblog.


Published on February 04, 2018 02:30

The healthiest body mass index isn’t as simple as you think

The body mass index (BMI) is a crude but useful measure of how heavy someone is for their height. It consists of your weight in kilograms, divided by the square of your height in metres. Guidelines suggest that a BMI between 18.5 and 25 is healthy for most people. You are classed as overweight if it is 25-30 and obese if it is more than 30. You might think that establishing the healthiest BMI is simple. You take a large, representative sample of people and put them into groups according to their BMI. In each group you then measure some aspect of average health, such as the average lifespan. If you take this approach, which I’ll call the observed association, you find that the apparent ideal BMI is a little over 25. People classed as overweight actually live a little longer, on average, than those with a BMI in the recommended range. This has prompted numerous press articles advising people not to worry about being overweight, and some have accused scientists of deliberately misleading the public. But it’s a little bit more complicated than that.
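
As a concrete illustration of the definition and guideline ranges just described, here is a minimal sketch of the calculation; the weights and heights in the example are made up purely for demonstration.

```python
# Minimal sketch (illustrative only): BMI = weight in kg / (height in m)^2,
# classified against the guideline ranges quoted above.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(value: float) -> str:
    if value < 18.5:
        return "below the recommended range"
    if value < 25:
        return "recommended range (18.5-25)"
    if value < 30:
        return "overweight (25-30)"
    return "obese (over 30)"

if __name__ == "__main__":
    # Hypothetical examples: the same height at two different weights.
    for weight, height in [(70.0, 1.75), (95.0, 1.75)]:
        value = bmi(weight, height)
        print(f"{weight} kg at {height} m -> BMI {value:.1f}: {classify(value)}")
```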


When we advise people on what a healthy BMI is, what matters is the causal effect of BMI on health. The causal effect tells us if changes in BMI will bring about changes in health. The causal effect of BMI is not necessarily the same thing as the observed association between BMI and health. In other words, correlation is not (necessarily) causation. The reason for this is a process which statisticians call “confounding”. For example, the association between BMI and mortality may be confounded by smoking. Smoking causes a reduction in BMI (through appetite suppression). It also causes the premature death of many smokers. The result is that a lot of thinner people die young; not because they are thinner, but because they are smokers. Another important source of confounding is the early stages of disease; this is sometimes called reverse causation. Many diseases can cause weight loss, even in their early, undiagnosed stages. These same diseases, in time, can contribute to a person’s death. These people are not dying because they are thin; they are thin because they have a disease that will eventually kill them. If we interpret a confounded observed association between BMI and mortality as if it were a causal effect, we will get a false impression of the healthiest BMI. Disentangling the causal effect from the observed association is not a straightforward task.
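
A toy simulation can make the confounding mechanism described above concrete. In the sketch below every number is invented for illustration, and BMI is deliberately given no causal effect on death at all; the thinner group nonetheless shows higher observed mortality, simply because it contains more smokers.

```python
# Toy simulation (made-up numbers only) of smoking confounding the observed
# association between BMI and mortality. Smoking lowers BMI and raises the
# risk of death; BMI itself plays no causal role in this model.
import random

random.seed(0)
people = []
for _ in range(100_000):
    smoker = random.random() < 0.3
    bmi = random.gauss(27.0 - (3.0 if smoker else 0.0), 3.0)  # smoking lowers BMI
    p_death = 0.10 + (0.10 if smoker else 0.0)                # smoking raises mortality
    people.append((bmi, random.random() < p_death))

thin = [died for b, died in people if b < 25]
heavy = [died for b, died in people if b >= 25]
print(f"observed mortality, BMI < 25 : {sum(thin) / len(thin):.3f}")
print(f"observed mortality, BMI >= 25: {sum(heavy) / len(heavy):.3f}")
# The thinner group dies more often purely through confounding by smoking.
```

Because the true causal effect of BMI is zero by construction in this toy model, the gap between the two printed rates is entirely confounding.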


“When we advise people on what a healthy BMI is, what matters is the causal effect of BMI on health. The causal effect tells us if changes in BMI will bring about changes in health.”

There is no single statistical method that can completely eliminate confounding, leaving us with an unbiased, precise estimate of the causal effect of BMI on survival. The most common approach is to measure potential confounding factors and take them into account in a process known as statistical adjustment. However, this method is only as good as the measurement of the confounder. Many confounders will be poorly measured, or not even thought of. We can restrict the people we analyse so that they are all similar in terms of the suspected confounders—only analysing apparently healthy non-smokers, for example. But then our study sample no longer represents the population properly, and who knows what confounders may remain? To reduce the problem of reverse causation, we can exclude deaths which happen shortly after BMI measurement. But a person could lose weight through illness years before their eventual death; how far back do you go? Another statistical method is to use something related to a person’s BMI, but not affected by the confounding factors, in place of BMI in the analysis. Examples include the BMI of their offspring (which is still somewhat related to the confounding factors) or a gene affecting BMI (which provides only very imprecise evidence).


Each of these alternatives to the simple observed association has its own unique limitations and biases. However, when we compare them to the observed association, the interesting thing is that they all point towards the same conclusion. The causal effect of overweight is more harmful than suggested by the observed association, and the causal effect of low BMI is less harmful. The combined evidence of this “triangulation” approach suggests that the recommended BMI range of 18.5-25 is about right, and that being overweight is not good for you. We should beware of placing too much faith in simple observed associations, however much we might want to believe them.


Featured image credit: Weight by TeroVesalainen. CC0 public domain via Pixabay.


The post The healthiest body mass index isn’t as simple as you think appeared first on OUPblog.


Published on February 04, 2018 01:30

Philosopher of the month: George Berkeley [timeline]

This February, the OUP Philosophy team honours George Berkeley (1685-1753) as their Philosopher of the Month. An Irish-born philosopher, Berkeley is best known for his contention that the physical world is nothing but a compilation of ideas. This is represented by his famous aphorism esse est percipi (“to be is to be perceived”).


Born in Kilkenny, Berkeley studied at Trinity College in Dublin, graduating with his Master of Arts degree in 1707. In Berkeley’s early work, his focus was mostly on the natural world and mathematics; however, his move towards philosophy is marked with his first philosophical publication, An Essay towards a New Theory of Vision (1709). His philosophical work was somewhat controversial and his importance initially quite overlooked. Berkeley’s influence was scarcely recognized at Trinity College until W. A. Butler wrote about him in the Dublin University Magazine in 1836.


Although most scholarly work has focused on Berkeley’s idealism and immaterialism in A Treatise Concerning the Principles of Human Knowledge and Three Dialogues between Hylas and Philonous, his work was not solely limited to metaphysics. Berkeley’s works on vision have influenced discussions of visual perception from the 1700s to the present. He also wrote on ethics, natural law, mathematics, physics, economics, and monetary theory. At his death he was already recognised as one of Ireland’s leading men of letters but it wasn’t until A. A. Luce’s and David Berman’s twentieth-century scholarship that Berkeley’s true impact was fully appreciated.


For more on Berkeley’s life and work, browse our interactive timeline below:



Featured image credit: Kilkenny, Ireland. CC BY-SA 3.0 via Wikimedia Commons.


The post Philosopher of the month: George Berkeley [timeline] appeared first on OUPblog.


Published on February 04, 2018 00:30

February 3, 2018

Nuclear deterrence and conflict: the case of Israel

“Deliberate ambiguity” notwithstanding, Israel’s core nuclear posture has remained consistent. It asserts that the tiny country’s presumptive nuclear weapons can succeed only through calculated non-use, or via systematic deterrence. By definition, of course, this unchanging objective is based upon the expected rationality of all pertinent adversaries.


Significantly, from the standpoint of operational deterrence, national enemies of the Jewish State must be considered rational by Jerusalem/Tel-Aviv. This is the case even though such enemies could operate in alliance with other states, and/or as “hybridized” actors working together with specific terror groups. At some point, moreover, Israel’s nuclear enemies might need to also include certain sub-state adversaries that could act alone.


For the country’s nuclear deterrence posture to work long-term, prospective aggressor states will need to be told more rather than less about Israel’s nuclear targeting doctrine. To best prepare for all conceivable nuclear attack scenarios, Israel must plan for the measured replacement of “deliberate ambiguity” with certain apt levels of “disclosure.” In this connection, four principal scenarios should come immediately to mind. When examined properly and comprehensively, these coherent narratives could provide Israel with much-needed theoretical foundations for preventing a nuclear attack or a full-blown nuclear war.


1. Nuclear retaliation


Should an enemy state or alliance of enemy states ever launch a nuclear first-strike against Israel, Jerusalem would respond, assuredly, and to whatever extent possible, with a nuclear retaliatory strike. If enemy first-strikes were to involve other available forms of unconventional weapons, such as chemical or biological weapons of mass destruction (WMD), Israel might then still launch a nuclear reprisal. This grave decision would depend, in large measure, upon Jerusalem’s informed expectations of any follow-on enemy aggression, and on its associated calculations of comparative damage-limitation.


If Israel were to absorb a massive conventional attack, a nuclear retaliation could not automatically be ruled out, especially if: (a) the state aggressors were perceived to hold nuclear and/or other unconventional weapons in reserve; and/or (b) Israel’s leaders were to believe that non-nuclear retaliations could not prevent annihilation of the Jewish State. A nuclear retaliation by Israel could be ruled out only in those discernible circumstances where enemy state aggressions were clearly conventional, “typical” (that is, consistent with all previous instances of attack, in both degree and intent) and hard-target oriented (that is, directed towards Israeli weapons and related military infrastructures, rather than at its civilian populations).


  Israel must prepare systematically for all conceivable nuclear war scenarios.

2. Nuclear counter retaliation


Should Israel ever feel compelled to preempt enemy state aggression with conventional weapons, the target state(s)’ response would largely determine Jerusalem/Tel Aviv’s next moves. If this response were in any way nuclear, Israel would doubtlessly turn to some available form of nuclear counter retaliation. If this retaliation were to involve other non-nuclear weapons of mass destruction, Israel could also feel pressed to take the escalatory initiative. Again, this decision would depend upon Jerusalem/Tel Aviv’s judgments of enemy intent, and upon its corollary calculations of essential damage-limitation.


Should the enemy state response to Israel’s preemption be limited to hard-target conventional strikes, it is unlikely that the Jewish State would then move to any nuclear counter retaliations. If, however, the enemy conventional retaliation was “all-out” and directed toward Israeli civilian populations as well as to Israeli military targets, an Israeli nuclear counter retaliation could not immediately be excluded. Such a counter retaliation could be ruled out only if the enemy state’s conventional retaliation were identifiably proportionate to Israel’s preemption; confined to Israeli military targets; circumscribed by the legal limits of “military necessity;” and accompanied by certain explicit and verifiable assurances of non-escalatory intent.


3. Nuclear preemption


It is highly implausible that Israel would ever decide to launch a preemptive nuclear strike. Although circumstances could arise wherein such a strike would be both perfectly rational, and permissible under authoritative international law, it is unlikely that Israel would ever allow itself to reach such irremediably dire circumstances. Moreover, unless the nuclear weapons involved were usable in a fashion still consistent with longstanding laws of war, this most extreme form of preemption could represent an expressly egregious violation of international law.


Even if such consistency were possible, the psychological/political impact on the entire world community would be strongly negative and far-reaching. An Israeli nuclear preemption could be expected only: (a) where Israel’s pertinent state enemies had acquired nuclear and/or other weapons of mass destruction judged capable of annihilating the Jewish State; (b) where these enemies had made it clear that their intentions paralleled their genocidal capabilities; (c) where these enemies were believed ready to begin an operational “countdown to launch;” and (d) where Jerusalem/Tel Aviv believed that Israeli non-nuclear preemptions could not achieve the needed minimum levels of damage-limitation — that is, levels consistent with physical preservation of the Jewish State.


4. Nuclear war fighting


Should nuclear weapons ever be introduced into any actual conflict between Israel and its many enemies, either by Israel, or by a regional foe, nuclear war fighting, at one level or another, could ensue. This would hold true so long as: (a) enemy first-strikes would not destroy Israel’s second-strike nuclear capability; (b) enemy retaliations for an Israeli conventional preemption would not destroy the Jewish State’s nuclear counter retaliatory capability; (c) Israeli preemptive strikes involving nuclear weapons would not destroy enemy state second-strike nuclear capabilities; and (d) Israeli retaliation for conventional first-strikes would not destroy the enemy’s nuclear counter retaliatory capability.


This means that in order to satisfy its most indispensable survival imperatives, Israel must take appropriate steps to ensure the likelihood of (a) and (b) above, and the simultaneous unlikelihood of (c) and (d).


Without hesitation, Israel must prepare systematically for all conceivable nuclear war scenarios. To best ensure national survival, no other preparations could possibly be more important.


Featured image credit: Israel flag by Mabatel. Public domain via Pixabay.


The post Nuclear deterrence and conflict: the case of Israel appeared first on OUPblog.


Published on February 03, 2018 03:30

What are the critical brain networks for creativity?

The concept of creativity is imbued with two contradictory notions. The first notion usually considers that a creative production is the result of high-level control functions such as inhibition, mental manipulation, or planning. These functions are known to depend on the anterior part of the brain: the prefrontal cortex. The second notion says that creative ideas emerge from relaxing the constraints and inhibitions, and letting the mind wander freely and spontaneously. In this case, shutting down the control functions may facilitate creativity.


Previously, we had assessed creative abilities in healthy individuals by adapting a famous task developed by Mednick in the ’60s, based on the idea that creativity is the “forming of associative elements into new combinations, which meet specified requirements”. We asked participants to find a solution word related to three remote cue words or ideas presented to them, and found that more creative people had higher performance in this task than less creative people. In this task, participants are asked several questions such as: “Could you see a connection between the words “bridge”, “social”, and “to tie”, a word linked to all these words?”* To succeed, the participants need to connect and combine remote words or ideas, reflecting creative abilities, but no specific knowledge is required as we used frequent words.


To understand whether creativity depends on the prefrontal cortex, in the current study we explored patients with frontal damage. Twenty-nine patients with a single focal frontal lesion performed this same combination task, and we explored the impact of these lesions on performance. Based on brain MRI, we found that patients had poor performance when their lesion was located on a left lateral network that supports cognitive control, called the frontoparietal network. Within this network, the rostrolateral part of the prefrontal cortex was the critical node. In addition, further analyses revealed that when a lesion was located in another region, the medial prefrontal region, patients also had poor performance. These findings indicate that the left rostrolateral and the right medial prefrontal regions are both critical for creative abilities (Figure).



Figure: Critical regions for combining and generating remote associates. Top panel: Left rostrolateral prefrontal region; lower panel: right medial prefrontal region. Used with permission.

Based on Mednick’s theory that creative people have more flexible semantic associations, we hypothesized that, conversely, less creative people, including our patients, may lack flexibility in their semantic associations, and tend to consider strong associations rather than remote associations. Such a lack of flexibility would impact the combination task because it would constrain idea generation to the strong associates of each cue word preventing the generation of remote associates required to find the solution. Thus, in this study, we examined whether frontal patients had difficulties in generating remote associates that could explain their difficulties in the combination task.


To explore the ability to generate remote associates, we used a simple word association task. In this association task, the patients were asked to generate a word in response to a cue word according to two conditions. In the first condition, they were asked to give the first word that came to their mind (for instance, “black” in response to “white”). In the second condition, the distant condition, they were asked to give a word that is unusually associated with the cue word (for instance “marriage” in response to the word “white”). We estimated the patients’ ability to generate remote associates by measuring the unusualness vs commonness of their responses compared to normative data obtained from healthy participants.


We found that patients had poor performance when their lesion involved a medial network that supports spontaneous cognition (such as free mind wandering), called the default network. Within this network, the right medial prefrontal cortex appeared as the critical node. Patients with a lesion in this region produced more common or typical responses in both the first and the distant conditions, which is consistent with rigidity in semantic associations. Conversely, patients with a left rostrolateral lesion did not have any difficulties in generating remote associates. This second set of findings shows that the right medial prefrontal region, but not the left rostrolateral one, is crucial for generating remote associates.


Overall, different regions of the prefrontal cortex were found critical for creative abilities: patients with either a right medial lesion or a left rostrolateral lesion had low creative performance measured by the combination task. However, low creative performance can be explained by high constraints in semantic associations only in patients with a right medial lesion.


These findings suggest that the spontaneous and controlled aspects of creative thinking both depend on the prefrontal regions but involve distinct systems. One system includes the right medial prefrontal cortex within the default mode network, and appears to be essential for flexible semantic associations and idea generation. The other system includes the left rostrolateral cortex within a frontoparietal network, known for its role in cognitive control, relational integration, and multitasking. This region may be critical to integrate the retrieved associates of each of the cue words in the combination task.


These findings are consistent with recent functional connectivity studies that have shown the interaction between the lateral frontoparietal network and the medial default mode network during creative performance. Our data may help to better understand some of the processes computed by these networks, and they additionally demonstrate that there are critical nodes within these networks, especially in the rostral prefrontal region.


*The solution is the word “link”.


Featured image credit: Man Painting by JakeWilliamHeckey. CC0 public domain via Pixabay.


The post What are the critical brain networks for creativity? appeared first on OUPblog.


Published on February 03, 2018 02:30
