Oxford University Press's Blog
August 25, 2024
Some very unique redundancies

Long ago, a teacher circled the expression “very unique” in an essay I wrote in some lower grade or other. “REDUNDANT,” her marginal note said. If I had had the presence of mind to look up “unique” in the dictionary, I might have responded that it is sometimes defined as “unusual” and I might have pointed to its use by J. D. Salinger (“we were fairly unique, the sixty of us, …”).
I was too shocked by the red marks on my essay to do any of that. And I avoid “very unique” to this day and wrinkle my nose at any offers of a “complimentary gift.”
I still wonder about redundancy though. Recently I heard someone use the expression “kill them dead” (he was referring to weeds in his yard, thankfully). Had my teacher been around, she might have commented.
But “kill them dead” is a pretty common expression, and it seems to me that it is emphatic. It shades the meaning of “kill,” with all its metaphoric uses, in the direction of literalness and finality. If you kill the weeds dead, you are eradicating them.
And it’s not uncommon for repetition to be used emphatically as well: the common intensive “very, very” is very easy to spot as are clonings like “coffee coffee” (rather than decaf) or the ne plus ultra “writer’s writer.”
Some redundancies still jump out at us. If you write that someone “ascended up the staircase,” the preposition will sound like nails on a chalkboard and should be cut. But what about “The detective wrote down our names in his notebook”? There’s a whiff of redundancy: either “The detective wrote our names in his notebook” or “The detective wrote down our names” would do. But “write” does not imply “down” to the same degree that “ascend” implies “up,” so it is less likely to raise editorial hackles.
Other redundancies paint a picture or slow the pace of prose. If you shrug, it’s always with your shoulders, so saying “She shrugged her shoulders” is redundant. But sometimes just writing “She shrugged” is too little. Adding the shoulders can make for a more meaningful shrug. The same goes for “squint” versus “squint your eyes” or “kneel” versus “kneel down.” These are judgement calls.
And sometimes an overt redundancy demands that we interpret a sentence creatively. You may recall astronaut Neil Armstrong’s first words on the moon. In a slip of the tongue, he produced the redundant “That’s one small step for man—one giant leap for mankind.” But his earth-bound hearers interpreted the redundancy figuratively enough to imbue it with historic meaning.
Or consider George Orwell’s slogan from Animal Farm: “All animals are equal, but some are more equal than others.” If you try to read that porcine proclamation literally, you run into trouble. Instead we interpret “more equal” as a comment on hypocrisy. “More equal” is a bit like “very unique,” only more so.
I guess one way to look at all this is that some redundancies are more redundant than others.
Featured image by Scott Graham via Unsplash

August 22, 2024
Informers: secrets, truths, and dignity

Over 100,000 individuals acted as secret informers reporting to state security police in Czechoslovakia during the Communist years. The contents of all their reports were saved in extensive police files. Similar dynamics occurred throughout all of Eastern Europe.
The intricacies of informers, the mist of their secrets and the muck of their revelations, have even inspired novelists and songwriters. For example:
We’d like to know a little bit about you for our files
We’d like to help you learn to help yourself
Hide it in a hiding place where no one ever goes
Put it in your pantry with your cupcakes
Simon and Garfunkel, ‘Mrs. Robinson’ (1968), from the album Bookends

‘[O]ur only immortality is in the police files.’
Milan Kundera, The Book of Laughter and Forgetting (Aaron Asher tr, Faber & Faber 1996)

Why do people inform on others—including neighbors, family members, co-workers, friends, lovers—to the secret police in repressive societies? Once repression abates, and regimes democratize, how should law and political transition approach erstwhile informers?
Emotions are among the key drivers that move people to inform on others. Four emotions in particular should be noted:
- resentment (getting even and settling scores)
- desire (getting ahead and grabbing things)
- allegiance (to an ideology, to the state, to a vision)
- fear (of the state, of the police, of being exposed)

Informing is a tool of social navigation: in the words of Simon and Garfunkel, a way for people to help themselves. Informers, for the most part, are marginal, ordinary folks who are victimized by the state and, in turn, victimize others.
Informers are not limited to one place in time; they are not boxed into post-Communist Central and Eastern Europe. Indeed, they are everywhere, including very close to home. The only part of the United Kingdom to be occupied by the Nazis—the Channel Islands—was chock-full of informers and collaborators, along with resisters, during World War II, and all are featured in a local museum in Jersey. No state or social movement—no matter how virtuous or vile, how maudlin, Machiavellian, or magnificent—can operate without informers. As early as one month into the Russia-Ukraine war, thousands of Ukrainians faced prosecution for collaborating with, and supplying information to, the invading Russians. Indeed, armed with our iPhones, we are all recorders, informers, and cancellers now. We are all whistleblowers, for better or for worse.
What should one do with informers after the repressive regime falls? In post-Communist Czech Republic, informers were largely scapegoated and ostracized. They were purged from public offices and jobs. They were openly mocked. Politically, this was an easy task because, throughout history and across cultures, informers are largely seen as sniveling rats, moles, snitches, turncoats, and finks. These words used to describe informers reveal near-universal disdain.
What is more, the Czechoslovak Communist secret police files—a major data source—were opened to the public to peruse and review. These files are even being digitized. Their subjects thereby became immortalized, to draw from Kundera. As a result, the life-stories of informers became gossipy grist for the public mill. But so too did all the scurrilous and embarrassing details of what they reported about the lives of others—namely, individuals we call the informed-upons—such as affairs, outbursts, addictions, indiscretions, inanities, awkwardness, incompetence, petty crimes, dysfunctions, and health woes.
Transparency and clarity measures have been mainstreamed as part of transitional justice. The United Nations has declared March 24th as the international day for the right to the truth. That said, our findings cast some doubt upon the unadulterated nature of this embrace. We believe there may be cause to pause the pursuit of ‘the truth’ at all costs. The right to the truth can lead to grotesque privacy invasions. Ultimately, in the Czech Republic, the scapegoating of informers delivered comfort to many people otherwise complicit in Communist dictatorship while the opening of the files visited cruelties upon many others.
In 2025, collaborator archives in the Netherlands will go public after having been shuttered for 75 years. These archives contain the files of a special court, the Bijzondere Rechtspleging (BR). The BR was established after WWII to prosecute alleged Nazi collaborators. The BR investigated over 300,000 individuals; it tried 65,000. Some of those tried were executed, some imprisoned, and others stripped of their civil rights. The BR archive, however, implicates a much larger array of individuals, including persons whose investigations were interrupted, stopped, never started, and those who were falsely accused. Understandably, opening these archives to the public has triggered controversy, just as it did in the Czech Republic. One big difference in the Dutch case is that almost all the collaborators have passed away. But they have families, children, and grandchildren. The dead, moreover, can never explain, clarify, apologize, cry, or argue.
Informing—driven by basic human emotions we identify as resentment, desire, allegiance, and fear—sits uneasily with many transitional justice measures. Our work offers a new lens—rooted in dignity—through which to manage this controversy, alleviate this unease, and ensure that transitional justice is more ‘emotionally intelligent’, respects fairness, and does not succumb to politics.
Featured image by Grianghraf on Unsplash.

August 21, 2024
Coffee all over the world

An instructive essay on etymology need not always be devoted to a word going back to the hoariest antiquity. It can also deal with an “exotic” borrowing like coffee. Our readers will not find any revolutionary discoveries in this post, but they may be entertained to see how elusive the goal sometimes is, even in such a simple case.

Photo by Quang Nguyen Vinh via Pexels.
The entry coffee in the original OED is excellent, but some details need further discussion. James A. H. Murray, the dictionary’s first editor, doubted (even denied) the connection between coffee and the Abyssinian state Kaffa or Kâfa. Today’s scholarship disagrees with him. For example, in 1955, Paul Kretschmer, one of the best specialists in all things Greek, wrote (I am translating from German): “The name KAFFA is of special interest to us, because the word coffee [he of course said Kaffee] is derived from it. In Kaffa, however, coffee is called bûnô…. Kaffa is now looked upon as the native land of coffee. From there, it spread to Arabia, once believed to be the place where the drink originated. In Arabic, coffee is called bunn, which shows that the Arabs borrowed it from Africa. Other than that, the name of coffee in various languages poses many problems.” He paid some attention to hw in the middle of some names of coffee but did not go into detail. After all, the text in question was only a long footnote.
Below, I will return to hw, but equally intriguing is the stressed vowel in English coffee: why o? German, as we have seen, has Kaffee, and so does the familiar word café from French. In search of some suggestions, I turned to the discussion in Notes and Queries for 1909. It occurred twelve years after the publication of the OED volume with the letter C. Several knowledgeable people took part in that exchange. One of them was James Platt, Junior (1861-1910). This is what he wrote: “The exact sound of [short] a in Arabic and other Oriental languages is that of the English short u, as in cuff. This sound, so easy to us, is a great stumbling block to other nations. A learned German professor once confided to me, with tears in his eyes, that after years of study and long residence in England he was still utterly unable to distinguish between the words colour and collar. In fact, he pronounced them both with o, and most foreigners do the same.”
Platt added that Dutch koffie and kindred forms were the product of similar phonetic despair, while the French type (café) is more correct. And here I’ll risk adding a note that may be of some interest to specialists in the recent history of English sounds. Perhaps similar observations can be found in books earlier and later than Alexander John Ellis’s On Early English Pronunciation (1869-1889). Foreigners hardly have much trouble mastering today’s British o in hot, not, pot, or collar. But this vowel is an almost insurmountable barrier to some when they try to pronounce those words the American way (especially in its Midwestern variety). For example, Russian speakers in the United States are sure that Boston should be pronounced as Buston, and in their accent, both vowels in potluck sound the same.

Kofetarica by Ivana Kobilca. Public domain, via Wikimedia Commons.
I would like to suggest that at the time when Britishers got their first taste of coffee (the middle of the seventeenth century), which is roughly when the colonization of the New World began, the English short o sounded as it does in today’s American Midwest. Americans must have preserved this value. I also believe that even in Platt’s childhood, the British short u (as in putt) was different from what we hear today: it seems to have resembled Modern German ö, Scandinavian ø, and French œ. The name Ruskin was in Russia transliterated as Rëskin (here, rë is not unlike German rö). Lunch probably became lënch (quasi lönch; I knew only the form lench). In Russia, I taught several old women English, and they stubbornly insisted on pronouncing short English u as ö. It appears that the vowel of English coffee owes its existence to the old value of short o (as in pot) and perhaps short u (as in putt). Presumably, they were much closer to u (as in today’s cut) and German ö than they are today. To sum up: [o], as in cot, was perilously close to [a], as in cut, while this [a] gravitated toward [ö]. All this caused great confusion.
One also wonders what happened to the original hw in the word’s middle. Platt noted that Finnish has kahvi and Hungarian has kávé and ascribed the change of hw to v to Turkish intermediaries. But other participants in the discussion offered different explanations. According to V. Chattopádhyáya, everything depended on the place of stress: initial in English and variable in other languages. Some examples offered to substantiate this hypothesis are, to my mind, not fully convincing.
Still another participant in the discussion was Colonel W. F. Prideaux, a specialist in several Oriental languages. His short essays on etymology in Notes and Queries (32 in my bibliography; he wrote about other subjects too) deserve to be collected and reread. Even Wikipedia has no notion of their existence. Prideaux took issue with James Murray’s and Platt’s transliteration of the Arabic word as qahwah and of the Turkish word as kahveh: the Turkish form, he insisted, has the same initial consonant as the Arabic one. In his reconstruction, he did without the imaginary Turkish form kafvé and suggested that “on the coast of Arabia and in mercantile towns, the Persian pronunciation was in vogue, whilst in the interior… the Englishman reproduced the Arabic.”
Be that as it may, the multitude of forms recorded by travelers between 1573 and 1673 is both curious and confusing. Englishmen wrote coho, cohu, couha, coffao, copha, cowha, cowhe, and once even coffee (1609). The French, Italians, and a Portuguese Jew recorded chaube, caveah, cave, cavàh, chaoua, cahoa, and cahue. I am not sure I can easily refer all such differences in the transliteration of the unfamiliar word by English and continental travelers to the differences in the phonetic makeup of their native languages.

Anonymous draughtsman-lithographer, CC0, via Wikimedia Commons.
Professor Michiel de Vaan, the author of the most recent work (2008) known to me on the history of coffee, was not aware of the exchange discussed above (my bibliography of English etymology appeared only in 2010) and came to the conclusion that Italian, English, and Hungarian forms “are based on Turkish kahve in one way or another.” Prideaux, as we remember, preferred to do without “an imaginary Turkish intermediary” and ascribed the vowel o to the influence of hw. According to De Vaan, some other differences are allegedly due to the place of stress, as Chattopádhyáya thought. The existence of f in coffee and other similar forms remains partly unexplained. “It has been attributed to Armenian traders, who were responsible for part of the spread of coffee from Vienna across Europe.” Platt wrote: “I would point out that in Turkish there is a disposition to substitute f for h.” De Vaan, like all his predecessors, speaks about how European travelers reproduced the stressed vowel of the exotic new word. Perhaps my idea that the older value of British o is also a factor in the history of coffee deserves some attention. In any case, do not let linguistic problems interfere with your enjoyment of a cup of coffee, and remember that in Italy, only foreigners order cappuccino after midday.
Header photo by form from PxHere. Public domain.

August 17, 2024
A duty of care? Archaeologists, wicked problems, and the future

Archaeology needs to stay relevant. To do so, it will need to change, but that won’t be simple given how much needs to change, and how many of the things that need changing are systemic, firmly embedded both within disciplinary traditions and practice and within society. In many parts of the world, for example, archaeology remains deeply colonial. Many consider it to be exclusive and privileged, while others find it meaningless.
Let me focus on the last of these statements: that it is ‘meaningless’. My work confronts this opinion head on, aligning the study of archaeology with contemporary global challenges to not only demonstrate the subject’s relevance, but to proclaim its central position in discussions of planetary health and global security. Archaeologists have a long tradition of collaborating across disciplinary boundaries. However, to take that central position successfully and with credibility, we need to be even more concerted as well as creative in the professional relationships we form, and in the types of work that we do.
Many people still associate archaeology with the study of ancient human societies, investigated usually through excavation. This work remains vital in promoting new knowledge and insight, while giving time-depth to contemporary debates around, for example, human adaptation to a changing climate. However, archaeology has outgrown this traditional definition. Archaeology also views the world as it exists now and as it will exist in the future, making it a contemporary and future-oriented discipline that is vibrant, relevant, and necessary.
Archaeologists now view the contemporary world through the same lenses that they used to study the ancient past, lenses that provide both perspective and focus. In terms of perspective, they allow archaeologists to look critically at the evidence they uncover and to build interpretations of human behaviours from the traces people have left behind. For the contemporary world, archaeology can use this evidence to render the supposedly familiar unfamiliar, or to call into question things that we take for granted. These lenses also allow us to focus on specific topics, themes, or places, with the agility to close in on detail at a micro-scale or to pan out to encompass the bigger picture. Archaeologists (ideally working with scholars from other disciplines) can then relate these different scales of investigation to one another in ways that improve our understanding of global challenges such as climate change.
In my research I refer to ‘wicked problems’, a term created in the 1960s to describe those tough (and possibly, ultimately irresolvable) global challenges that threaten planetary health, human health, and security. As well as climate change, environmental pollution, health and wellbeing, social injustice, and conflict are examples of such problems, which are generally ill-formulated, where the information is confusing, where decision makers hold conflicting values, and where the ramifications in the whole system are incredibly complex. The adjective ‘wicked’ describes the evil quality of these problems, where proposed solutions often turn out to be worse than the symptoms.
Yet, I am optimistic about the novel ways that archaeology can contribute to tackling some of the world’s most wicked problems by adopting what psychologists have called a small-wins framework. Studying the past in ways that are creative, bold, and interdisciplinary can create significant ‘small wins’.
I referred earlier to the need for archaeology to change. While there are many examples of successful small wins that address wicked problems, such as York Archaeology’s current Archaeology on Prescription project or Rachael Kiddey’s Heritage and Homelessness work, many archaeologists do not see how their work aligns with wicked problems. Some may even question whether it should. I believe that all archaeological work has the potential to align with wicked problems through this small wins framework and, furthermore, that archaeologists have an obligation, a duty of care, to create opportunities for small wins. This isn’t necessarily the same as demonstrating ‘impact’, a term all archaeologists who apply for research funding will be familiar with. Duty of care is a responsibility, and arguably one that all citizens should take, acknowledging that small wins matter while being realistic about what they can achieve.
As teachers, archaeologists can ensure that students are prepared for a career in which duty of care is both encouraged and embraced. One example of this might be a familiarity amongst archaeologists with the language of policymakers, an understanding of how practice informs policy, and where and how archaeology can contribute to policy-making. As archaeologists we can also learn to work more effectively with communities to co-produce projects and facilitate community-led programmes, while finding new ways to promote the valuable collaborative work that we do, and its social relevance, to new audiences.
Of course, archaeologists cannot change the world on their own. But with this unique set of lenses at our disposal, and using the small wins framework, we can make a difference.
Featured image by Fateme Alaie via Unsplash.

August 16, 2024
Bringing decolonisation to law teaching: fulfilling the promise of legal pedagogy

I, like many others, came to the law school because I heard justice and freedom and peace in its name. For many, like me, the sojourn into the study of law is triggered by some event or situation. For me it was the Rwandan genocide of 1994. Beginning in April 1994, nearly a million people were brutally murdered in that country. Yet the international community was unwilling or unable to act, despite the fact that the killing was covered by international media. The hopelessness was overwhelming. And I wondered and hoped that the study of law would give me answers to how we stop endless suffering and devastation. This experience of coming to the law for hope is replicated across the world. For some, their triggering event is something happening far away to people they do not know and will never meet. For others, it is something more personal but no less earth-shattering. Something happening to them, a family member, or a friend—extreme poverty, domestic violence, alienation, police brutality, forced migration etc. Many people continue to come to the law school for answers. For me, like numerous others, the promises of the law school did not deliver exactly as expected. Especially for students who have experienced racism, students who are struggling to understand its persistence, students born into the shadow of empire, students for whom the imminence of environmental devastation is immediate and unyielding… decolonisation has provided some solace to their unfulfilled hopes.
What is decolonisation?
Decolonisation can be described as a collection of repudiatory and resistant responses to the multifaceted inauguration of colonial ways of thinking, being, and doing in the world—this inauguration is often dated to the fifteenth century. These colonial logics rely on unequal ways of thinking of the body, space, and time that have helped develop structures reliant on racism, classism, sexism, overexploitation, and xenophobia among others. As such, these systems of thought have helped produce, inter alia, racial injustice, extreme inequality, and environmental devastation, through the manufacture of race as a hierarchy of humanity, the kidnap and enslavement of African peoples, as well as the territorial commodification and occupation of land across the globe. Decolonisation describes a set of immediate and continuing responses developed by indigenous, racialised, and colonised peoples to resist these multifaceted methods of imperialism. These responses have come in different forms—independence demands, outright resistance, calls for sovereignty, the restoration of lost knowledges, etc. As such, we should understand that decolonisation is not one thing, but a set of context-dependent strategies, adopted by peoples resisting all forms of enduring colonisation—strategies specifically relevant to the particular ways in which colonial ideologies manifest themselves in those particular places. In other words, decolonisation, in practice, has often involved indigenous peoples, colonised peoples, racialised peoples, and their allies taking up the tools that they have, to resist the specific forms of oppression that they experience, in the places where they experience it, at the time they experience it. For them, decolonisation is a tool to make their futures possible, liveable, and flourishing.
Can we use this decolonisation in the Law School?
Decolonisation as I have described it here has had a long history—inside and outside the classroom. In our present context, the demand that #RhodesMustFall, which emerged at the University of Cape Town (UCT) in early 2015 and quickly spread across South Africa and beyond, found fertile ground with students and staff across the world grappling with the present manifestations of empire’s long shadow. Very often the fruits of this sprouted in law schools under the mandate “decolonise the law school.” These demands have also been taken up by many law teachers across the world as they seek to unpack the afterlives of colonialism in their work. For me, this has involved the design of a completely new unit, called “Law and Race.” In that unit, we use multidisciplinary methods to present a wide array of texts, music, films, histories, and knowledges to students to get them to reflect on how the history of colonialism has an impact on the nature of the law they study. In this unit, we consider various aspects of both the history of the British Empire and the role of law as a means of attaining justice, as well as being complicit in producing the situations from which justice is being sought. We also consider what the students’ role is in the world as people who will soon be in possession of a law degree. Threatened as we are by the dangers of racism, inequality, and environmental devastation, we unpack what they can do in response to these perils. I want my students to take a look at the history of law and the history of the world and to consider what this history means for how we understand the world and repair current harms. What does this look into the past and the present mean for the future? How do we as agents of law use decolonisation as a tool to make all our futures possible, liveable, and flourishing? My proposition is that we need to change the lens through which we understand the present, by looking to the past, so we can craft better futures for us all and for the earth upon which we at present just precariously survive. To survive at all, we need new ways of thinking, being and doing in the world—including in the classroom.
Where can decolonisation take us?
It is important to remember that decolonisation is not its own goal, but what we hope to achieve with it is. For ourselves and our students, decolonisation may provide us with the vocabulary and framework we need to develop tools to help us craft a discipline that will be able to rescue the planet from the perdition of racial injustice, extreme inequality, and environmental disaster. This challenge requires creativity, imagination, innovation, and courage. As such, I suggest that rather than asking formulaic questions like, “how do we decolonise the curriculum?”, we must ask more creative ones. For example: “What does it mean to dream of new anticolonial worlds from within the law school?” This prevents us from applying cosmetic changes to the curriculum with no real change to the structure and role of law schools or to the situations that bring our students to the law school. In this endeavour, we have a responsibility to use all the tools at our disposal to consider the ways in which our discipline can bring an end to the perils that continue to put our planet and all its inhabitants in jeopardy. This is a task that we can carry out now and hand over to our students—while we are still here. Survival is being threatened on a planetary scale through, among other things, the combined forces of global inequality, racial violence, and climate change. My hope is that our joint work on decolonisation and in innovative legal pedagogy will contribute to the fulfilment of those dreams.
Featured image by Giammarco Boscaro via Unsplash. Public Domain.

Six books to understand international affairs [reading list]

The world we live in is complex and ever-changing. This year India, Iran, the UK, and the US, to name a few countries, are facing pivotal elections, and many diplomatic relationships—in Eastern Europe, the Middle East, and beyond—are at a turning point.
Check out these titles to better understand the state of geopolitics and the movement of power in the world.
1. The Unfinished Quest
In the last two years, India has achieved two significant milestones. In 2022 it became the fifth largest economy in the world, surpassing its former colonial ruler, the United Kingdom, and in April of 2023, its population surpassed China’s, making it the single most populous country in the world with over 1.4 billion citizens. All the permanent members of the UN Security Council except China have openly acknowledged the need to include India among their ranks.

In The Unfinished Quest, T.V. Paul looks at the history and future of India’s great power bid. The book takes into consideration the entirety of India’s modern history, from its independence in 1947 and the election of Nehru onward. Paul examines the techniques India has adopted to develop both hard and soft power, which have made it such an integral regional player in the context of the US-China rivalry. And finally, the book examines India’s persistently low human development indices—the health, education, welfare, and lifespan of its people—which Paul argues are the nation’s final challenge to overcome to achieve true major power status.
2. Upstart

Thirty-five years ago, the idea that China would ever challenge the US economically, globally, and militarily was almost laughable. Today China produces almost 50% of the world’s major industrial goods and it is the world’s largest producer of ships, high-speed trains, robots, computers, cellphones, tunnels, bridges, and highways. Last year the US government estimated that the Chinese military likely possesses 500 operational nuclear warheads and is on track to have over 1,000 by 2030. How was China able to build power in such a short period of time, and what informed the strategies that Beijing pursued to accomplish this level of growth?
Upstart presents a model for viewing China’s growth that will be instantly recognizable to anyone who has even a passing acquaintance with the business world. It lays out China’s “upstart approach,” whereby China carefully and strategically chose areas of growth that wouldn’t trigger international backlash. It then pursued a gradual strategy of emulating the US where possible, exploiting known weaknesses in US development strategies, and pursuing entrepreneurial actions through innovation. This model for understanding China’s growth provides guidance on how the US can maintain its competitive edge in this new era of great power competition.
3. Lost Decade

Across the political spectrum, there is wide agreement that Asia should stand at the center of US foreign policy. But this worldview, first represented in the Obama Administration’s 2011 “Pivot to Asia,” marks a dramatic departure from the entire history of American grand strategy. More than a decade on, we now have the perspective to evaluate it in depth.
In Lost Decade, Robert Blackwill and Richard Fontaine—two eminent figures in American foreign policy—take the long view. Lost Decade argues that for more than a decade, the United States tried, and failed, to focus its foreign policy on Asia. The authors argue that while the Pivot to Asia embraced seemingly straightforward strategic logic, it raised more questions than it answered. The book reviews in detail the attempted Pivot and illustrates that across the last few presidencies, policymakers have felt an increasing urgency to put Asia first, even as they deal with a raft of issues and crises in multiple regions. How they attempted to resolve such dilemmas, and how they allocated diplomatic, military, and economic resources, tells us a good deal about the proper conduct of US foreign policy and grand strategy over the coming decades.
4. Wars of Ambition

The battle to dominate the Middle East regional order, from 2003 to the present, resulted in an empowered Iran and a deeply unsettled broader region in which nominally pro-Western states began to recalibrate their relations with Washington as they welcomed its key rivals: Russia and China.
Afshon Ostovar—like many—believes that the Middle East is at a critical juncture right now. We are all witnessing yet another pivotal moment in the region’s history. In Wars of Ambition, Ostovar presents a recent history of the region and the myriad parties jockeying for power since the dawn of the 21st century. The book provides a gripping narrative of the conflicting visions for the future, deftly weaving in the aims and efforts of Israel, Turkey, Saudi Arabia, Russia, and China, all in the larger context of the US’s declining influence and Iran’s rising power. It explores the evolution of the conflict between the US and Iran and shows how the ideological contest for the Middle East has become a microcosm of the larger geopolitical battle between those who support an American-led global order and those who stand in staunch opposition to it.
5. On Xi Jinping

Xi Jinping came to power in China in the spring of 2013, and in the ensuing 10 years we’ve seen a dramatic evolution in China’s stance towards the rest of the world and a corresponding shift in its domestic, economic, and foreign policies. China abolished presidential term limits in 2018, and Xi is poised to rule the country indefinitely, so a full understanding of his worldview and its long-term implications is critical for understanding our global future.
On Xi Jinping argues that Xi has adopted a more Marxist political and economic approach to government—a dramatic departure from the leaders who preceded him. Kevin Rudd—the former Prime Minister and Foreign Minister of Australia—walks through how Xi has taken the Chinese Communist Party further to the left, in a more statist direction. At the same time, however, Xi has taken Chinese nationalism to the far right, which has shaped the nation’s much more hard-edged approach to foreign policy.
6. Oceans Rise Empires Fall

It is the decisive decade for climate change action, yet great power competition is surging. Geo-economic rivalries and territorial conflicts over Ukraine and Taiwan appear more important than collective action against catastrophic climate change. Why do great powers seem to continuously favor competition and rivalry over transnational policies to address the greatest threat humanity has ever faced?
In Oceans Rise Empires Fall, Gerard Toal, one of the world’s leading scholars of geopolitics, identifies geopolitics as the culprit. Examining its meaning, history, and leading thinkers, he exposes the geo-ecological foundations of geopolitics and the struggles for living space that it expresses. Toal makes a startling argument about what the conflict between Russia and Ukraine means for the world’s hopes of arresting climate change. The book makes the powerful assertion that we will never be able to slow the catastrophic effects of global warming, because the competition for geopolitical power will always cause states to prioritize the access to carbon-based fuels required for economic growth and the security needed to compete with rival states.
Featured image by Vladislav Klapin via Unsplash

August 15, 2024
Overconfidence about sentience is everywhere—and it’s dangerous

Years before I wrote The Edge of Sentience, I remember looking at a crayfish in an aquarium and wondering: Does it feel like anything to be you? Do you have a subjective point of view on the world, as I do? Can you feel the joy of being alive? Can you suffer? Or are you more like a robot, a computer, a car, whirring with activity but with no feeling behind that activity? I am still not sure. None of us is in a position to be sure. There is no magic trick that will solve the problem of other minds.
Yet if I have no magic trick, and am self-aware enough to realize this, why have I written a book about the topic? Books about sentience or consciousness often promise marvels: you will be uncertain about the nature of sentience at the beginning, but worry not, for by the end a magnificent (if enormously speculative) theory will have answered all your questions. Reading these books, I feel like I’ve fallen for a bait-and-switch. Speculation is cheap and settles nothing: there are speculations on which crayfish are sentient and speculations on which they’re not.
The Edge of Sentience, rather than offering Houdini-like escapes from uncertainty, is all about how to make evidence-based decisions in the face of uncertainty. The trouble is that overconfidence about sentience is everywhere—and it’s dangerous. In researching the book, I encountered some shocking examples. Did you know that, until the 1980s, surgery on newborn babies was routinely performed without anaesthesia? Surgeons doubted newborns could feel pain, and they worried about the risks of using anaesthetics. But they were thinking about risk in a deeply flawed way. When researchers investigated the consequences of this practice, they discovered massive stress responses doing lasting developmental damage to the baby: operating with anaesthesia was far safer. A public outcry, together with the new evidence, changed clinical practice.
The case has a pattern of features that I’ve now seen many times: initial overconfidence about the absence of sentience, new evidence shaking that overconfidence, and a crucial role for the public in shattering the groupthink that sometimes grips cadres of experts. I’ve seen the same pattern with patients unresponsive after serious brain injury, often still described problematically as “vegetative”. Clinicians have long used diagnostic categories that starkly imply the absence of any sentience when, in reality, there is evidence that a fraction (and we don’t know the precise fraction) of these patients have residual conscious experiences. Overconfidence has, at times, led to horrific cases of patients presumed unconscious who were then able, later, to report that they had suffered terribly from routine procedures performed without any pain relief. Clinical practice, in the UK at least, has recently started to shift in the right direction.
We need to get serious about erring on the side of caution in all cases where sentience is a realistic possibility: those involving humans and those involving other animals. But it is not enough to just tell people to ‘err on the side of caution’ and leave it there. Almost any action, from outrageously costly precautions to the tiniest gesture, can be described as ‘erring on the side of caution’. We need ways of choosing among possible precautions: a precautionary framework. The crucial concept we need is proportionality: our precautions should be proportionate to the identified risks.
I do not think proportionality reduces to a cost-benefit calculation. It requires us to resolve deep value conflicts: conflicts that obstruct any attempt to quantify benefits and costs in an uncontroversial common currency. What sort of procedures can we use, in a democratic society, to assess proportionality? My proposals give a key role to citizens’ assemblies, which attempt to bring ordinary members of the public into the discussion in an informed way in order to reach recommendations that reflect our shared values.
Because I think these decisions should be made by democratic, inclusive processes—and not by any individual expert or group of experts—I think my own precautionary proposals about specific cases should be read as just that: proposals. They are not supposed to be the final word on any of these issues. I am not auditioning for the role of ‘sentience tsar’. But I have given a lot of thought to what actions are plausibly proportionate to the challenges we currently face, and I am publishing my proposals in the hope of provoking debates I see as urgently needed. If the book succeeds in stimulating discussion, I can dare to hope the discussion may lead to action. And I hope that, among those actions, will be steps to protect invertebrates like crayfish from the pain of being cooked alive—a particularly grotesque display of overconfidence.
Featured image by Jr Korpa via Unsplash. Public domain.

August 14, 2024
On querns and millstones

Have you ever seen a quern? If you have not, Wikipedia has an informative page about this apparatus. Yet there is a hitch about the definition of quern. For instance, Wikipedia discusses various quern-stones, and indeed, pictures of all kinds of stones appear in the article. But stones don’t do anything without being set in motion. That is why quern could, apparently, refer to both a millstone and a hand mill for grinding grain.
The word quern goes back to the hoariest antiquity. It was known in all the Old Germanic languages, including fourth-century Gothic. From Gothic a sizable part of the New Testament has come down to us, and in Mark IX: 42 (RV), the English text has the following: “…it is better for him that a millstone were hanged about his neck…” The same admonition appears in Matthew XVIII: 6 and Luke XVII: 2. The Gothic Bible was translated from Greek. The medieval Greek word for “millstone,” múlos onikós, means “donkey (mule)-mill.” Bishop Wulfila, the translator of the Gothic Bible, used the compound asilu qairnus (that is, asilu-kwairnus; ai has the value of English short e). Asilus is immediately recognizable from Modern German Esel “donkey.” Its origin will not interest us here. We should only note that qairnus ~ kwairnus is almost the same word as Modern English quern and that both Greek and Gothic (which translated the Greek compound bit by bit) needed an animal name. This fact returns us to the statement that stones have to be set in motion to be able to grind grain, and to my initial question: “Have you ever seen a quern?”

Image via rawpixel. Public domain.
A quern, used in some countries as late as the beginning of the twentieth century, was a construction that had to be rotated. The Internet provides illustrations from the MILLS ARCHIVE, and there, among others, I found an image of a woman rotating a handle manually and making the stones work. Edward Daniel Clarke wrote in his book Travels in the Holy Land (1817, 167-68): “Looking into the court yard [sic] belonging to the house, we beheld two women grinding at the mill, in a manner most forcibly illustrating the saying of our Saviour (Matt. XXIV, 41 [“Two women grinding at the mill…”; the same text can be found in Luke XVII: 35]). The two women seated on the ground opposite to each other held between them two round flat stones, such as are seen in Lapland, and such as in Scotland are called querns. In the centre of the upper stone was a cavity for pouring in the corn, and by the side of this an upright wooden handle for moving the stone. As the operation began, one of the women with her right hand pushed this handle to the woman opposite, who again sent it to her companion, thus communicating a rotary and very rapid motion to the upper stone, their left hands being all the time employed in supplying fresh corn as fast as the bran and flour escaped from the sides of the machine.” (Corn here of course means “grain.”) I found this quotation in The East Anglian…, vol. 1, 1861, p. 111.
But very often, an animal was tied to the platform and moved around and around. In pre-revolutionary Russia, such an animal was the horse, and the appliance was called kruporushka (stress on the second ru; “grain-grinder”). The Greek text refers to the mule. Wulfila substituted donkey for it. Medieval translators of the Bible were highly qualified men, fluent in Greek (medieval Greek) and Hebrew. They stayed in contact and discussed the best variants. Therefore, it is not surprising that the Old English verse has esel-cweorn, the same “calque” from Greek. (A calque is a word-for-word or morpheme-for-morpheme translation. It is also called loan translation.)

Image by Bhaskaranaidu via Wikimedia Commons. CC BY-SA 3.0
Not only is the primitive quern one of the most ancient inventions in the history of civilization; the old word is also nearly the same all over the place. The differences among the Germanic cognates are due only to phonetics (the same noun, but pronounced according to the norm of each individual dialect). Old English cweorn has been cited above. Compare Old Icelandic kvern, Old Frisian quern, and so forth. Our question is predictable: What is the origin of this word? Slavic (for example, Russian zhyornov), Baltic, Celtic, and perhaps Sanskrit use a word that seems to go back to some form like gwernā-.
At this point, the etymologist stops, unless the word under discussion is obviously sound-imitative or at least sound-symbolic. Did the stones go gwern-gwern (as it were), which suggested to people the word imitating the sound they made? Corn (that is, “grain”) and kernel, related to it, resemble quern (by chance?), and so does English churn “butter-making machine,” from Old English cyrin, a word of obscure origin, but not improbably related to kernel. I have found only one passing reference to the proximity between quern and churn. In the same volume of The East Anglian (see above), p. 112, R. C. Charnock compared them. Charnock was a good folklorist but the author of numerous fanciful etymologies. Yet in this case, he may have guessed well.
Long ago (in 1909), Francis A. Wood, a serious etymologist, suggested that gwer-, the Indo-European root of quern, meant “crush(ed)” and, by extension, “soft, mild.” Wood also cited Latin mola “millstone, grindstone.” Indeed, mill and mild are related. At first sight, the idea that the words designating a crushing machine and softness are related looks bizarre. Yet the logic underlying it is clear: first force, then submission! Old Icelandic kvern, a cognate of English quern, has been compared (and quite justifiably so) with Icelandic kvirr ~ kyrr “quiet, friendly.” Another traditional view connects the roots of quern and grave (English grave is of course a borrowing: compare Latin gravis). It would be tempting to have the same idea underlying the origin of both quern and mill. Wood’s suggestion, which has hardly ever been discussed, does not seem too bold. Yet the problem is not only to reconstruct an ancient root and guess what it meant but also (and mainly!) to understand why it meant what it did. To repeat: Was the root of quern “crush(ed)” sound-imitative? Perhaps, but we will never know for sure.

Image by Adrian Vieriu via Pexels. Public domain.
Here then is a short summary of the above notes. Quern is an extremely old word, going back to Indo-European antiquity. With the expected phonetic differences, it has been recorded from Scandinavia to India. The ancient root of this word seems to have referred to a forceful movement and to the “peaceful” result of using the force of the rotating millstone. The root (approximately gwer-) may have been sound-imitative. This hypothesis is not unlikely. When we deal with words like grumble or creak, the connection between the word and the thing is obvious. With quern, we need an intermediate link, which today we are unable to supply. However, if quern is related to churn, the sound-imitative origin of quern begins to look rather persuasive.
PS. 1. My thanks to our reader for the comment on the Persian word for “lips.” 2. Re the comment on whale. Whale might be a borrowing from Greek. As indicated in the post, this idea occurred to etymologists long ago. But both words may be related and go back to the same ancient source.
Featured image by Pawel Marynowski via Wikimedia Commons. CC BY-SA 3.0

Cooperation and the history of life: is natural selection a team sport?

Cooperation is in our nature, for good and for ill, but there is still a nagging doubt that something biological in us compels us to be selfish—our genes. This is the paradox: genes are inexorably driven by self-replication, and yet cooperation continually rears its head. Not only are humans fundamentally team players, but all of nature has been teaming up since the dawn of life four billion years ago. The rules of cooperation that we encounter in our daily lives are fundamentally the same as those that apply to how our cells cooperate within the body, how the parts of the cell came to work together, and how selfish genes cooperate to make social beings. Though simple, these rules play out in complicated and fascinating ways that illuminate everything from the profound to the trivial.
Cooperation is defined as a social behaviour by one individual that benefits another at a cost to the cooperator. It evolves when the benefits to the cooperator exceed the cost of cooperation—a situation that might seem rare, but in fact is very common. Benefits can accrue to a cooperator in two ways, either directly or indirectly, through relatives. Benefits to relatives, or kin selection, explain why, for example, long-tailed tits will help their neighbours raise a brood when their own has been lost. Neighbours tend to be relatives. The cells of a multicellular organism cooperate for essentially the same reason: all are relatives. We don’t usually think of the cells of the body as being relatives of one another, but that is what they are, albeit genetically identical ones.
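A compact way to state this condition, standard in social evolution theory though not spelled out in the post, is Hamilton’s rule; a minimal rendering in LaTeX notation:

\[ r\,b > c \]

where \(b\) is the fitness benefit to the recipient, \(c\) is the fitness cost to the cooperator, and \(r\) is the genetic relatedness between them. The long-tailed tits satisfy the rule because neighbours tend to be relatives: \(r\) is high enough for \(rb\) to outweigh \(c\), even when the helper gains nothing directly.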
Uncovering the benefits as well as the costs to cooperation is key to understanding both its evolution and the situations in which it breaks down. Cooperative behaviour is conditional on there being a net benefit and it can disappear when the advantage is lost. Even an intimate symbiosis like a mycorrhizal association between a plant and a fungus will dissolve if the plant can obtain nutrients normally supplied by the fungus more easily from elsewhere.
I like to think of groups of cooperators as teams, because this idea is familiar and captures the essence of how cooperation produces benefits for individuals. Calling the plant and fungus in a mycorrhizal association a team may sound like a metaphor, but it is more than that because the individual benefits of cooperating in a team are the same, whether the game is football or natural selection.
Teaming up can produce direct benefits to team members in two distinct ways: through force of numbers and through division of labour. A team of 11 will beat a team of 2. This has to be how social insects with thousands of workers evolved. But force of numbers alone is not enough. The highest score that a team of 11 goalies can expect is nil-nil. A division of labour among the team, placing players in different positions according to their skills and the overall strategy, wins matches. Likewise, social insects have one or a very limited number of queens with the exclusive role of reproduction. Other castes such as workers assume different tasks, depending on their phenotype, age, and the size of the colony.
Individuals stick with the team so long as their interests are aligned with those of other team members, but this can never be taken for granted. Since cooperation involves costs as well as benefits, there is always the possibility that some individuals will try to take the benefits for free—or in other words—cheat. Tumour cells are cheats. Mutation breaks the alignment of interests that normally exists among the cells in a body, allowing a cancer cell to escape the many mechanisms that normally limit cell proliferation and to multiply at the expense of the host. The most dangerous and successful tumours recruit the assistance of normal cell types, acquiring a blood supply. Such cells cross the line from cooperation to parasitism.
Cheats may be found wherever there is cooperation, but cooperation thrives, nonetheless. Its most spectacular successes occur when members of a symbiotic team start to reproduce as a team, uniting their reproductive fates within a new kind of individual. This is what happened when the ancestor of the eukaryotic cell teamed up with the bacterial ancestor of the mitochondrion. The union of the two ancestral cell types produced a new kind of cell and a major transition in evolution.
Metaphors can help explain a difficult concept, but they can also mislead because at some point even good metaphors fail when taken literally. ‘Selfish gene’ is exactly such a metaphor. It has illuminated the science of social evolution for half a century since it was coined by Richard Dawkins, but it has also misled people into underestimating the importance of cooperation. Instead, let us think of natural selection as a team sport: on every level, from genes and cells to social beings, the team structure of life exists.
Featured image by Getty Images; CreativeJourney/ Shutterstock.com; Wikimedia Commons (Used with Permission).

August 12, 2024
Effective ways to communicate research in a journal article

In this blog post, editors of OUP journals delve into the vital aspect of clear communication in a journal article. Anne Foster (Editor of Diplomatic History), Eduardo Franco (Editor-in-Chief of JNCI: Journal of the National Cancer Institute and JNCI Monographs), Howard Broman (Editor-in-Chief of ICES Journal of Marine Science), and Michael Schnoor (Editor-in-Chief of Journal of Leukocyte Biology) provide editorial recommendations on achieving clarity, avoiding common mistakes, and creating an effective structure.
Ensuring clear communication of research findings
AF: To ensure research findings are clearly communicated, you should be able to state the significance of those findings in one sentence—if you don’t have that simple, clear claim in your mind, you will not be able to communicate it.
MS: The most important thing is clear and concise language. It is also critical to have a logical flow of your story with clear transitions from one research question to the next.
EF: It is crucial to write with both experts and interested non-specialists in mind, valuing their diverse perspectives and insights.
Common mistakes that obscure authors’ arguments and data
AF: Many authors do a lovely job of contextualizing their work, acknowledging what other scholars have written about the topic, but then do not sufficiently distinguish what their work is adding to the conversation.
HB: Be succinct—eliminate repetition and superfluous material. Do not attempt to write a mini review. Do not overinterpret your results or extrapolate far beyond the limits of the study. Do not report the same data in the text, tables, and figures.
The importance of the introduction
AF: The introduction is absolutely critical. It needs to bring readers straight into your argument and contribution, as quickly as possible.
EF: The introduction is where you make a promise to the reader. It is like you saying, “I identified this problem and will solve it.” What comes next in the paper is how you kept that promise.
Structural pitfalls
EF: Remember, editors are your first audience; make sure your writing is clear and compelling because if the editor cannot understand your writing, chances are that s/he will reject your paper without sending it out for external peer review.
HB: Authors often misplace content across sections, placing material in the introduction that belongs in methods, results, or discussion, and interpretive phrases in results instead of discussion. Additionally, they redundantly present information in multiple sections.
Creating an effective structure
AF: I have one tip, which is more of a thinking and planning strategy. I write myself letters about what I think the argument is, what kinds of support it needs, how I will use the specific material I have to provide that support, how it fits together, etc.
EF: Effective writing comes from effective reading—try to appreciate good writing in the work of others as you read their papers. Do you like their writing? Do you like their strategy of advancing arguments? Are you suspicious of their methods, findings, or how they interpret them? Do you see yourself resisting? Examine your reactions. You should also write frequently. Effective writing is like a physical sport; you develop ‘muscle memory’ by hitting a golf ball or scoring a 3-pointer in basketball.
The importance of visualizing data and findings
MS: It is extremely important to present your data in clean and well-organized figures—they act as your business card. Also, understand and consider the page layout and page or column dimensions of your target journal and format your tables and figures accordingly.
EF: Be careful when cropping gels to assemble them in a figure. Make sure that image contrasts are preserved from the original blots. Image cleaning for the sake of readability can alter the meaning of results and eventually be flagged by readers as suspicious.
The power of editing
AF: Most of the time, our first draft is for ourselves. We write what we have been thinking about most, which means the article reflects our questions, our knowledge, and our interests. A round or two of editing and refining before submission to the journal is valuable.
HB: Editing does you a favour by minimizing the distractions, annoyances, and cosmetic flaws that a reviewer can criticize. Why give reviewers things to criticize when you can eliminate them by submitting a carefully prepared manuscript?
Editing mistakes to avoid
AF: Do not submit an article which is already at or above the word limit for articles in the journal. The review process rarely asks for cuts; usually, you will be asked to clarify or add material. If you are at the maximum word count in the initial submission, you will then have to cut something during the revision process.
EF: Wait 2-3 days and then reread your draft. You will be surprised to see how many passages in your great paper are too complicated and inscrutable even for you. And you wrote it!
Featured image by Charlotte May via Pexels.

