Oxford University Press's Blog
October 4, 2019
What American literature can teach us about human rights
The arrival of a new child destroys a household’s ordinary sense of time. At least, it did for us. When our first son was born last fall, two leading scholars had just published books that each, in their own way, describe how contemporary US fiction has been shaped by the dramatic rise of human rights in global politics since the 1970s. Despite their many differences, both these books—James Dawes’ The Novel of Human Rights (2018) and Crystal Parikh’s Writing Human Rights (2017)—argue that new kinds of stories will matter to the future of human rights. Over the course of the fall, their ideas became entangled in my own sleep-deprived mind with two other developments: first, the arrival of our first child in November; and, second, the ongoing fallout from the US government’s family separation policies at the US-Mexico border.
In the introduction to The Novel of Human Rights, Dawes writes, “In the United States, we are, for the first time, living in a popular culture of human rights.” I read Dawes’ introduction around the same time that I came across an op-ed written by Michael Bochenek, the senior counsel of the Children’s Rights Division at Human Rights Watch, titled “The US Detention of Children Is Only Getting Worse.” The op-ed was accompanied by a satellite image of a detention camp for migrant children in Tornillo, Texas. Dawes’ account of the rise of a new US literature of human rights was published on 12 September 2018. The satellite image was taken just a day later.
As I was writing about the new US literature of human rights, events soon overtook me as well. By January of this year, the children detained at Tornillo had all been transferred elsewhere. In April, Kirstjen Nielsen, the secretary of homeland security, resigned over disagreements with the Trump administration. There were rumors that family separation would soon resume. (A recent Congressional oversight report suggested that, in fact, hundreds of separations have occurred in the year since the courts ordered that the practice stop in summer of 2018.) In late August, the Trump administration unveiled a new policy that, barring a successful legal challenge, would override the so-called “Flores Settlement,” abolishing time limits on how long migrant children can be locked in detention centers.
All of which raises a troubling question for our field: Has the US literature of human rights arrived too late? Perhaps this is an unfair, or even vulgar, question. After all, what can any work of literature—or scholarship, for that matter—be expected to do in a period of escalating refugee crises? Parikh closed Writing Human Rights with a discussion of the UN Convention on the Rights of the Child and the surge of undocumented children arriving at the US-Mexico border in 2014. Both she and Dawes described how contemporary US novels often address the problem of the “lost child” by imagining new forms of kinship beyond the traditional family as well as beyond the nation-state. They also pointed out that the Convention on the Rights of the Child has been adopted by every country in the world except the United States. Elsewhere in her book, Parikh conceded, “I certainly do not want to overstate the transformative capacities of cultural politics in the face of immense forces of state power and social violence.”
But a turn toward the literary past might help us imagine a different kind of future, a counter-politics, for human rights. Lyndsey Stonebridge’s Placeless People: Writing, Rights, and Refugees, the subject of a recent online symposium, is an example of what this new literary history of human rights might look like. But, again, are we already too late?
“Humanities scholars are great at critique and history,” the literary scholar Stephanie DeGooyer recently tweeted, “going beyond to policy and ‘solution’ is a major disciplinary jump.” An editor of The Right to Have Rights and author of a recent essay about birthright citizenship, DeGooyer is another great example to follow. Not only do literary scholars need to continue working across disciplinary boundaries to reach new audiences, but we also need to learn how to think, write, and act across different scales of time. Academic research, much like the new US literature of human rights, moves slowly through the world. Meanwhile, as Parikh pointed out near the end of Writing Human Rights, “the vulnerable, desirous, unpredictable, demanding child arrives in her own right, on a future terrain of community that unfolds in its own time and space.” And their arrival is only the beginning.
Featured image credit: “View from a parking garage” by Abraham Osorio. CC0 via Unsplash.
The post What American literature can teach us about human rights appeared first on OUPblog.

October 2, 2019
Feeling fingers, part 2
Finger seems to be a transparent word, but this transparency is an illusion, for what is fing– (assuming that we understand what –er is)? Our story began last week (see the post for September 25, 2019), and I attempted to show that one of the two best-known etymologies of finger, namely, from the numeral five, is “less than fully convincing” (a common academic euphemism for “nearly unacceptable”). As a matter of fact, no one likes it, but, since dictionaries have to comment on the origin of each word they include and since the formula “of unknown (obscure, undiscovered, doubtful) origin” cannot satisfy the public, they prefer to say something, rather than remain silent. More about this subject below.
The second dubious but well-known etymology of finger connects it with the German verb fangen “to seize, catch.” English has lost this word, but fang contains the same root. The word was taken over from Scandinavian and meant “catch” (a noun). The other sense (as in White Fang) developed a few centuries later.
Here I should make a short digression. I often mention the name of Francis A. Wood. He was a member of an excellent group of scholars at the University of Chicago that cast a wide net: they explored the ways of naming objects all over the Indo-European world. The best remembered product of those activities is Carl Darling Buck’s A Dictionary of Selected Synonyms in the Principal Indo-European Languages: A Contribution to the History of Ideas, an invaluable reference book.

The group attracted numerous graduate students, all of whom wrote dissertations with similar titles, often published in the journal Modern Philology and later appearing in book form. The most famous student of Wood and Buck was Leonard Bloomfield. But here we are interested in William D. Baskett’s work Parts of the Body in the Later Germanic Dialects. He found six main synonyms for “finger” in German dialects, including at least one that refers to grasping, catching, and scraping (in Swabian).

In his all-encompassing 2000 survey, Ari Hoptman (see the reference in the previous post) did not miss this fact and again noted that the fangen connection does not recognize the finger as an independent unit, only as part of a set. “The act of grasping necessarily requires more than one finger, often the forefinger and the thumb; thus it is strange that one finger would still be deemed ‘a grasper’, considering all the other things the finger can do independently.” One more linguist whose name appears with some regularity in this blog is Wilhelm Oehl. His study of “primitive creation” produced many useful parallels (and a few fanciful ones). He was a supporter of the finger-fangen idea. The Swabian case is troublesome, but more details are needed for us to be able to evaluate it. At the moment, we know only the word and the gloss, but not the situation in which the word is or was used.
As a matter of curiosity, it should be mentioned that, even though the origin of hand is often said to be uncertain, its connection with the Gothic verb hinþan “to seize” looks probable. If, for the sake of argument, we accept the connection, this is where the grasper really is: the hand, not the finger! It should also be noted that fist probably has the ancient root meaning “five,” and again this fact makes perfect sense.
Of the other etymologies of finger I’ll mention only one, which goes back to the beginning of the seventeenth century: finger was compared with Latin fingere “to mold, sculpt; arrange; compose.” A clever comparison, but Germanic f corresponds to Latin p (as in father ~ pater), so that the words cannot be related, unless we resort to the idea of borrowing. However, Germanic fingr– looks like a native word; also, Latin fingere does not refer to fingers.

As a rule, it is useful to look at Buck’s “synonyms” outside Germanic, though in this case there is not much to be gained. Latin digitus is obscure and yields the sense “pointer” only if we turn it into dicitus and ally (equate) it with Greek dáktylos (or Latin dactylus, mentioned in the previous post), but we will again resist the temptation to do so. Perhaps more instructive is a look at Russian palets and its cognates elsewhere in Slavic. Originally, the word referred to the thumb; the finger was called perst. (Note: Engl. thimble has the same root as thumb, and the connection must have happened for good reason, but the Russian word for thimble, na-perst-ok, has the root of perst “finger”. Conclusion: don’t generalize).
Palets (that is, “thumb,” from a historical point of view) has been compared with Latin pollex “thumb,” Latin palma “the palm of the hand,” the verb feel (to us, the most interesting parallel of all), and a few other words in Latin and Greek. But since palets once meant “thumb,” its putative connections with feel, so tempting at first sight, will take us nowhere. The thumb is not a feeling finger, and all attempts to connect finger and pollex should probably be abandoned.

Where then are we at the end of our cursory look at the proposals? Fingers are not toes, even though all of them are “digits” (hardly a startling revelation). In any case, the etymons of finger and toe are different. Each of the five fingers of the hand is useful and necessary, and all of them are fiercely individual (the thumb and the forefinger especially so). Consequently, the derivation of finger from the word for “five” is unlikely. Fingers point and “feel”; they don’t catch, seize, or grasp. For this reason, the connection between finger and fang (or German fangen, with its related forms in Scandinavian) should also be given up as unrevealing. It appears that we are exactly where we were at the beginning, and the impression is correct.
Ari Hoptman suggested that finger is one of the many sound-imitative or sound symbolic words that designate movement. Feel, we may remember, is defined as “examine by touch.” Hoptman’s supportive material is not too impressive, because not a single word he cites looks too close to finger. Compare fumble, flitter ~ flatter, flicker, and so forth, including perhaps the rather opaque fritter. It may be that finger continues some verb for “fumble” or “palpate.” Touch, from French, is sound-imitative. It goes back to some verb like toccare: compare Russian tuk-tuk (u as in Engl. put) “knock-knock.”
The uncertainty of the proposed etymology could be predicted. If the perfect sought-for verb existed, it would have been brought to light long ago, and the etymology of finger would have been solved once and for all. But in the sound-symbolic area, even some much more secure solutions are usually looked upon with distrust (for no reason whatsoever, as far as I can judge). Thus, the etymology of the ignominious English F-word remains in limbo, despite the existence of a host of words all over the Germanic-speaking world of the same structure and meaning “to go back and forth” (not necessarily, and indeed not often, with sexual connotations). And yet Hoptman’s etymology is intrinsically more probable than all the previous ones, whose weakness is manifest, but whose venerable age allows dictionary makers to cite them again and again, hedge, and apologize for what they say.
Every time I open Google, it says: “I am feeling lucky.” The reason for its permanent elation is not disclosed. Perhaps I am supposed to respond in kind. My moods change depending on the situation, but now that you have read two posts on the origin of the word finger, you, along with Mr. Google, may, I hope, indeed feel lucky.
Feature image credit: Van Cliburn playing to children on a visit to Israel, by Moshe Pridan, 1962. Public domain, Israel National Photo Collection. Via Wikimedia Commons.
The post Feeling fingers, part 2 appeared first on OUPblog.

The important role of animals in refugee lives
Refugees are people who have been forcibly displaced across a border. What do animals have to do with them? A lot.
Companion animals, for example, are important to many people’s emotional wellbeing. “For people forced to flee,” the Norwegian Refugee Council recently noted, “a pet can be a vital source of comfort.” At the sterile and hyper-modern Azraq refugee camp in Jordan, Syrian refugees would pay a high price for caged birds (30-40 Jordanian dinars each, roughly €35-50). In Syria, many people keep a bird at home. At Azraq keeping a bird is one way of making a plastic and metal shelter into a home.
In many contexts, refugees also depend on animals for their livelihoods. Humanitarian assistance understandably focuses on supporting people, but if refugees’ working animals and livestock perish then their experience of displacement can worsen drastically: they lose the means to support themselves in exile, or to return home and rebuild their lives.

Ever since the earliest days of modern refugee camps, at the time of the First World War, agencies responsible for refugees have had to think about animals too. When the British army in the Middle East built a camp at Baquba near Baghdad in 1918 to house nearly 50,000 Armenian and Assyrian refugees, there were some animals that they needed to keep out. Fumigation procedures and netting were used against lice and mosquitos, carriers of typhus and malaria respectively. But refugee agencies also needed to let larger animals in. The people in the camp, especially the Assyrians, were accompanied by seven or eight thousand sheep and goats, and about six thousand larger animals like horses and cattle. Many of the people depended on their animals for their livelihood and for any prospect of permanent settlement, so they all needed to be accommodated and cared for.
There are similar examples around the world today, like the longstanding camps for Sahrawi refugees in southwestern Algeria. There, goats and camels are socially and economically significant animals: goat barns, enclosures often made of scrap metal, are a prominent part of the camps’ increasingly urban landscape, while camel butcheries are important shops.
This helps us understand why the office of the United Nations High Commissioner for Refugees now has programmes to support refugees’ animals. For example, in 2015, with funding from the IKEA Foundation, the organization assisted 6,000 Malian refugees in Burkina Faso and their 47,000 animals: helping animals helps the people too, in this case to earn a living (and achieve economic integration) through small-scale dairy farming. It also helps us understand how conflicts can arise between refugees and host communities when refugees’ animals compete for grazing or water with local animals, or damage local farmers’ crops.
Starting in 2011, nearly 125,000 people from Blue Nile state in Sudan fled from a government offensive into Maban County, South Sudan, with hundreds of thousands of animals (about half of which soon died, stressed by the journey). Peacefully managing their interactions with local residents and a community of nomadic herders who also regularly migrated through the county was a complex task for the South Sudanese government and humanitarian agencies. It required careful negotiations to allocate grazing zones as far as 60km from the camps where the refugees lived, schedule access to watering points, and agree a different route for the nomads’ migration.
In Bangladesh, meanwhile, the International Union for the Conservation of Nature has recently been involved in efforts to prevent conflict between refugees and wild animals. Nearly a million Rohingya refugees fleeing state persecution in Myanmar live in semi-formal camps near Cox’s Bazar that have grown up since 2017, but these camps block the migration routes of critically endangered Asian elephants. The International Union’s conflict mitigation programme includes building lookout towers, training Rohingya observers, and running an arts-based education project.

And this highlights a final issue. The elephants at Cox’s Bazar are endangered because of ecological pressures caused by humans. But increasingly, humans too are endangered—and displaced—by ecological pressures. In the late 2000s, a years-long drought in Syria, worsened by human-caused climate change, pushed over a million rural Syrians off the land. (The country’s total livestock fell by a third.) The political and economic pressure that this generated was a significant factor in the crisis that ignited into war in 2011, in turn displacing millions more people. Hundreds of thousands of them fled to Jordan, one of the most water-stressed countries in the world, where the increased extraction of groundwater has lowered the water table, drying out oases in the Jordanian desert. Wild birds migrating across the desert have to fly further to find water and rest, meaning fewer survive the journey. In the Syrian war, and in many other conflicts around the world, human and animal displacements are intersecting under environmental stress. If we want to understand human displacement and respond to it adequately, we need to be thinking about animals too.
Featured Image Credit: ‘Herd of sheep in mountains’ by Nima Hatami. CC0 public domain via Unsplash.
The post The important role of animals in refugee lives appeared first on OUPblog.

September 30, 2019
Banking regulation after Brexit
It is a truism that Brexit will have a significant impact on banks and the wider financial services industry. The loss of passports by UK firms has received some attention from the non-specialist media, and is relatively well-understood. However, the loss of passports, significant as it is, is just one of many issues. Others have received no or little coverage outside the industry. In this blog, we will touch upon some of them. To do so, we need to step back and consider the very legal nature of a bank.
A bank—or, indeed, any company—has no corporeal existence. It exists only as a matter of law: specifically, the law of the place in which it is incorporated. Likewise, a bank may fail (or its failure may be managed or averted) pursuant to, or as a consequence of, the insolvency or resolution (bank rescue) laws of such state.
But many banks are established in several states: specifically, where they operate through international branches. The legal status of banks established in several different states, and the laws governing their failure, are complex. The key issue is whether, and on what basis, the laws of a state in which a branch is established recognise, or override, the laws of the state in which the bank was originally incorporated. In this regard, we come across a fundamental difference in how the EU treats branches of EU banks and branches of third country banks.
Where a bank, incorporated in one European Union member state (the home state), operates from a branch in another (the host), the host state is obliged to recognise, and is unable fundamentally to interfere with, the insolvency and resolution laws of the home state. This principle does not apply to banks incorporated in a third country, with a branch establishment in an EU member state (and vice versa).
So, pre-Brexit, if a French bank with a branch in London entered a crisis, its resolution (or dissolution) would be dealt with under French law by the French (or Eurozone) authorities, without regard to, and without constraint or interference by, UK law. Post-Brexit, the failure of the UK branch of the French bank would be subject to both UK and French/EU law and regulation. The Bank of England would have powers over the branch, and the actions of the Eurozone’s resolution authority would not automatically be recognised within the UK. The reverse applies, of course, for the French branch of a UK bank.
All of which may seem of mere theoretical interest, only of relevance in the event of the (one would hope, rare) failure of a bank in the future, and therefore of very little consequence for the day-to-day business of UK/EU banks. However, the entire financial regulatory system is driven by what would happen in a crisis. Thus, many of the rules and regulatory policies that apply to banks throughout their day-to-day existence are shaped by what would happen in the event of their failure.
Rules on how branches must look after client assets (i.e. shares and bonds belonging to clients that are held in custody), and on how branches must record, monitor, and report money held on deposit (including money belonging to clients but deposited with other banks), are all governed by home state rules in the case of an EU branch of an EU bank, but by host state rules in the case of an EU branch of a third country bank or, post-Brexit, a UK branch of an EU bank. Thus the rule-set for UK branches of EU banks (and vice versa) will flip.
More fundamentally, the underlying political dynamic changes. In our example of a UK branch of the French bank, the French/EU authorities suddenly will become vulnerable to the action (or inaction) of the UK authorities (who, vice versa, will find themselves part-responsible for having to deal with a failure of the branch). This sets up a web of mutual dependence and vulnerability. Whilst the solution would lie in cooperation between EU and UK authorities, perfect and seamless cooperation between states can never be guaranteed. In practice, the result is the imposition of greater scrutiny and economic burdens, coming from both sides of the Channel. The purpose of this blog is not to advocate for the deregulation of banks: the mischief here is the imposition of multiple levels of superfluous, possibly conflicting, regulation, creating additional burdens and restrictions without accompanying prudential benefits.
So far, we have discussed only bank branches. In reality, banks are established as groups comprising many different bank subsidiaries, under the common ownership of a parent company. Just as for branches, different rules apply to the EU subsidiaries of third country holding companies than to the EU subsidiaries of EU holding companies. The various permutations (and thus the impact for EU subsidiaries of UK parents and vice versa) are so manifold that we cannot begin to examine them here: but they are of such significance that they can alter the entire shape, form, and manner in which bank groups are formed and operate, and create different kinds of tensions between the regulatory authorities on either side of the Channel.
We have barely skimmed the surface. Everything discussed already is only too familiar to bankers and their legal and regulatory advisers. But what are the broader messages?
The first message is that the technical consequences of Brexit go far deeper than the headlines, and perhaps deeper than anyone who does not have specialist knowledge of a particular sector would even contemplate. What other consequences might transpire for businesses in sectors that are less well-resourced and prepared than the banking industry?
The second is that the issues discussed in this blog illustrate one of the ironic consequences of Brexit: Brexit provides the means and motivation for greater interference by EU authorities over UK businesses. In an international economy, where one state takes back control, it is natural that states that have thus ceded control will react to their new position of vulnerability with suspicion, and may seek substitutes for their ceded control. Businesses will be caught in the midst of this struggle.
Featured image credit: MEPs to debate Brexit by European Parliament. CC-BY-4.0 via Flickr.
The post Banking regulation after Brexit appeared first on OUPblog.

September 28, 2019
How Congress surrenders its constitutional responsibilities
If there is a single overriding narrative about the current Congress, the institution America’s founders considered the first and most important branch of government, it is that partisan warfare has rendered it almost impossible for Republicans and Democrats to agree on anything, and especially on any question of significance.
This bleak assessment is generally correct, extending to the realms of economics (tariffs, spending priorities, and tax policy), security (border permeability and guns), and most other items that dominate the daily news. The partisan division extends, in fact, to almost all facets of American governance.
But not all. In fact, the bigger story, and the most impactful, is where the parties have come together with disastrous effect. Arthur Schlesinger’s famous critique of an emerging “imperial presidency” succinctly named a phenomenon that has grown exponentially over decades – the transformation of American government from one dominated by the peoples’ representatives in Congress (the branch to which most major government authority had been assigned) to an enterprise increasingly directed by an almost king-like presidency alternating between directing and ignoring an enfeebled national legislature.
It is generally assumed that this shift is the result of a ruthless presidential power grab in which chief executives brutally wrested decision-making authority from Capitol Hill. In fact, however, the growth of presidential power was the result of repeated instances of the voluntary abdication by Congress of its constitutional obligations. And this abandonment by Congress of its most fundamental duties was entirely bipartisan.
Recent political battles have focused attention on presidential actions regarding immigration and refugee policy and the imposition of tariffs on products imported from countries ranging from perceived adversaries (China) to long-standing allies (Canada). Members of Congress, the branch of government with constitutional authority in these policy realms, learned about these policy shifts the same way we all did: by reading about them in news bulletins. But the president was merely using powers the Congress had given him to use in the case of a national security emergency (with the president being the sole decider as to what constitutes an emergency).
To some extent – many observers would say a very large extent – the grievances that have fueled voter anger in industrial states are the result of international trade agreements that have proved harmful to many of America’s blue-collar workers. The Constitution gives Congress absolute authority over the nation’s engagement in international commerce, but, in response to presidential urging, Congress agreed to consider trade agreements under a fast-track procedure that limits the ability of voters’ representatives to amend the terms agreed to by the White House. Previously, members of Congress could insist on removing provisions that might negatively affect American workers, but that ability is now largely gone, willingly surrendered by Congress.
Perhaps the single most important feature of the United States Constitution is the placing in Congress of absolute authority to determine when and where and under what conditions the United States will go to war. In one of the worst decisions in American history, a bipartisan Congress passed a War Powers Act that allows a president to initiate military conflict without congressional approval, reserving the right of Congress to step in after the fact to order an end to the engagement. By that time, however, Americans may have died in combat and those who remain will require the means to survive. Wars, once started, are difficult to bring to a rapid close. The ability of American citizens to determine what they are willing to die for, what they are willing to send sons and daughters to die for, has been delegated to a single person.
It’s true that on most daily concerns, even very important concerns, partisanship has proven crippling to America’s ability to govern itself. But on the very shape of government, on where the most fundamental powers reside, there has been a surprisingly high amount of bipartisan agreement. And not for the better.
Featured image credit: “Our nation’s capital” by Louis Velazquez. CC0 via Unsplash.
The post How Congress surrenders its constitutional responsibilities appeared first on OUPblog.

September 27, 2019
How to talk to your political opponents
Imagine that you are having a heated political argument with a member of the “other” party over what the government should or should not do on various issues. You and your debate partner argue about what should be done about immigrants who want to come into the country. You argue about what should be done about the never-ending mass murder of people in schools, places of worship, and entertainment venues by killers using assault weapons. You argue about what should be done to improve employment and to improve the healthcare system. You argue about how to increase access to better schools and higher education. You and your partner care deeply about these issues, and the debate about how to solve them goes on for quite a while. While you argue, another person stands watching. When there is a pause in the debate, this person comes over with a bewildered look and asks, “Why are you arguing so hard?” You and your debate partner both answer that these are important issues that need solutions and you are arguing about what are the best solutions. The newcomer says, “Why waste your time talking about those issues? They don’t really matter. Hakuna matata. Worry-free is my philosophy.”
Imagine how you would feel about this newcomer. Imagine how you would feel about your debate partner. I know how I would feel. I would be very annoyed at the newcomer, and would be thinking something like “what a jerk!” And I would suddenly appreciate and feel a connection with my debate partner: “At least he knows what is important in the world. He cares about what matters. We don’t agree right now about what to do about these issues, but we agree that they deserve close attention. We should talk more to see if we might find some more common ground and can identify some policies that we both agree would make sense to institute.”
This is an example of something that is fundamental to what makes us human. It illustrates how we might move beyond the political bubbles that are driving us apart. What makes us human is our motivation to share with others how we see the world, share what we experience as being real about the world—the motivation for shared reality. And shared reality begins with sharing with others what is happening in the world that deserves our attention, what is important in the world: it all begins with shared relevance. When we have shared relevance with others, we feel more connected to them and trust them more. In the above example, you and your debate partner share which political issues are worthy of attention and deserve an effort to find solutions. You don’t yet agree about what are the best solutions, but you feel connected because you do agree that they are important issues. In contrast, the newcomer does not share the opinion that these issues are important. This distances you from the newcomer. Between the two, your debate partner has become your partner.
Creating shared relevance is not the end of the story. But it is a beginning. To move beyond political bubbles it is not necessary to expect that a discussion will quickly lead to shared beliefs or shared solutions. This can take time. But it does matter if you can establish that certain issues are important, deserve close attention, and need solutions. Such shared relevance is the beginning of creating trust…even without initial agreement in feelings and beliefs. A classic case of this is the therapist-client relationship. Consider the following example:
A client begins the session by saying, “My wife doesn’t love me, and I feel lonely.” His therapist says, “You believe that your wife doesn’t love you. And you feel lonely.” The therapist is paying close attention to what the client says and simply repeats it. This is called mirroring because the therapist simply reflects back what the client is saying. Note that the therapist does not need to agree with the client’s belief or the client’s feelings in response to that belief. What the therapist is doing is communicating to the client that his belief about his wife’s feelings about him and how he feels about that is highly relevant to the therapist—so relevant it is worth the therapist’s close attention and worth repeating. By communicating this shared relevance, a closer and more trusting relationship is built between the therapist and the client.
Political debates are not an interaction between a therapist and a client. I am not recommending mirroring in political debates. But the motivational principle is the same: to build a closer and more trusting relationship, it begins with establishing shared relevance. This is one path to start to move beyond political bubbles.
Featured Image credit: Image by rawpixel via Pixabay
The post How to talk to your political opponents appeared first on OUPblog.

The trouble with disease awareness campaigns
In October, pink ribbons promoting breast cancer awareness decorate everything from sneakers to buckets of fried chicken. In addition to breast cancer, October is simultaneously ADHD Awareness Month, AIDS Awareness Month, Down Syndrome Awareness Month, Rett Syndrome Awareness Month, and Selective Mutism Awareness Month. Campaigns to raise awareness about diseases have been a major feature of American public life for more than a hundred years, dating back to the early twentieth-century crusades against tuberculosis.
These campaigns have many goals—decreasing stigma, encouraging public funding and private donations for research, and encouraging healthy behaviors. Often, the hope is that awareness will save lives by encouraging screening, early diagnosis, and effective treatment. This seems so self-evidently true that it’s hard to imagine ever having too much awareness. But all medical interventions come with risks and side effects. For people with no symptoms and low risks, like those with slow-growing cancers, benign abnormalities, or mildly elevated blood pressure, the harms of treatment can outweigh the benefits. Overblown awareness campaigns can literally make us sicker by exposing us to risky, harmful, and unneeded medical care.
Disease nonprofits and pharmaceutical companies both have incentives to maximize awareness, even when this means overlooking scientific evidence of the risks. In the early days of the twentieth century, a tuberculosis campaigner explained that education campaigns should create “an enlightened public opinion in which everyone is frightened just enough to act sensibly, and not enough to act foolishly; just enough to insure necessary public appropriations and private donations.” Another leader explained that scientific uncertainties should be hidden from public view; campaigners should “adopt our creed and doctrines and present them to the laity as though they were unanimously adopted and almost spontaneously created. Our controversies of orthodoxy and faith should be reserved for the inner chambers of our scientific and professional conferences.” (Edward Devine, 1905, and George Palmer, 1915, quoted in Teller, The Tuberculosis Movement, 56–57.)
Public messages about cancer have been similarly shaped by the desire to maximize medical testing and public donations. From its birth in 1913, the organization now called the American Cancer Society sought to find a “middle ground between too much hope and too much horror” that would maximize donations (Thomas Debevoise, quoted in Ross, Crusade, 24–25.) In the 1990s and 2000s, federal warnings that routine mammograms for women under 50 caused more harm than good were greeted with a chorus of criticism from breast cancer advocates. Leading cancer organizations continue to recommend annual mammograms for women in their 40s. Similarly, when the US Preventive Services Task Force warned that indiscriminate prostate specific antigen testing does more harm than good, prostate cancer advocacy groups continued to recommend the screenings. Ironically, the people who are most harmed by screening—the ones who undergo difficult, disabling, and disfiguring treatments for cancers that would never have killed them—often believe that screening saved their lives, detecting cancer while it could still be cured.
Drugmakers and other healthcare corporations also sponsor awareness campaigns; they stand to profit when awareness encourages more people to be screened and treated. Pharmaceutical companies fund patients’ organizations that encourage people to seek diagnoses; mammogram machine manufacturers fund the American Cancer Society. The pharmaceutical company now known as AstraZeneca, which manufactures a blockbuster breast cancer drug, created Breast Cancer Awareness Month. It profits when awareness campaigns lead more women to be diagnosed early and spend more years taking Tamoxifen.
But although financial and corporate incentives push disease campaigners to overemphasize awareness, there are promising countertrends. Feminist and environmental breast cancer activists have often been harsh critics of the “screening orthodoxy” and have brought attention to environmental carcinogens. And rather than being crowded out by mainstream breast cancer campaigns, these activists can capitalize on them, as when Breast Cancer Action launched a campaign against carcinogenic chemicals during Breast Cancer Awareness Month. Mainstream disease advocacy organizations can also pull back from the orthodoxy of early detection. In 2002, the National Breast Cancer Coalition stopped recommending routine mammograms for younger women. The American Cancer Society’s new chief medical officer has shown an unprecedented willingness to question screening, saying that “the advantages to screening have been exaggerated” and “we are actually hurting people with overtreatment.” (Brawley, “Quotation of the Day”; Otis Brawley, quoted in Parker-Pope, “Plenty of Blame.”) Being aware of diseases need not only mean more and more screening and earlier diagnosis; we can also investigate ways to prevent disease and ensure equitable access to treatment.
Featured image credit: pink ribbon by marijana1 via Pixabay
The post The trouble with disease awareness campaigns appeared first on OUPblog.

September 26, 2019
Why supply is the secret to affordable housing
Housing has become unaffordable for all but the lucky few in many of the world’s great cities. Who can afford to live in New York or Paris? Yet housing prices can be kept in check. Some cities have succeeded in doing so, as we shall see. The secret is simple: housing supply, which can be stimulated or thwarted by public policy. Basic economics teaches us that rising demand (say for bread) in the face of inadequate supply (of bread) will inevitably lead to higher bread prices. Housing is no exception. The answer, then, if we wish to keep prices down, is to provide more housing. That indeed is the right answer, but it is easier said than done. For bread, the normal workings of the market (bakers making more when demand increases) will keep prices in check.
Housing, on the other hand, is rarely produced under normal market conditions. Housing, as well as the makeup of the neighborhoods we live in, is a highly sensitive issue that directly affects our well-being. Almost all nations have laws and regulations that govern what can be built where, which is why zoning ordinances exist. No one wants a waste dump next door or a high-rise blocking out the sun. How housing markets are regulated varies greatly across nations, reflecting national conditions and tastes. High housing prices in many rich-world cities, not least in the United States, are largely self-inflicted, the result of public policy.
The rationale behind many restrictions on housing supply is entirely understandable. Heritage protection is a common objective in Europe; Paris’s urban planning regulations prohibiting buildings above a certain height are a typical example. But such rules in essence freeze supply, making rising prices the inevitable outcome. The roots of America’s housing crisis, especially acute in West Coast cities, are more complex (geographical constraints aside) and also more difficult to remedy: the outcome of a tradition of urban governance and development that privileges local democracy and single-family homes. Local democracy is certainly laudable, but it has all too often resulted in local residents, via referendums or other means, blocking zoning changes that would have allowed denser residential construction. The parallel proliferation of low-density, single-family neighborhoods, which residents are loath to alter, is the outcome of over sixty years of car-oriented development, difficult to reverse. America’s path to affordable housing will not be easy, requiring changes in how American cities are governed.
However, affordable housing is not an unattainable goal. Vienna in Europe and Montreal in North America have systematically kept housing prices (both rental and owned) below those of comparably sized cities on their respective continents. The two cities have chosen different paths. Housing can be produced by the public sector or by private builders. Vienna chose to privilege the former. Following in the footsteps of 1920s Red Vienna, the city administration, with federal support, has consistently financed the construction of housing, often via non-profit builders’ associations; public housing accounts for some two thirds of the market, in turn keeping private sector prices in check. Montreal, on the other hand, has chosen to facilitate market entry by private contractors, keeping charges and planning constraints to a minimum while at the same time enforcing mid- and mixed-density zoning, resulting in a generally flexible (elastic) market in which triplexes, duplexes, and other mid-range housing are the rule. Montreal stands out in North America in not imposing impact charges on developers, with public infrastructure costs shared by all taxpayers instead, thus facilitating the entry of small players and ensuring a more competitive market.
The accounts above are necessarily brief (this is a blog, not an academic treatise). However, the lesson from Montreal and Vienna is clear. The road to affordable housing requires either sufficient public financing of housing construction to ensure continued supply or a regulatory environment that ensures a competitive, flexible, housing market. If neither is forthcoming, the outcome is predictable.
Image credit: Image by Pete Linforth from Pixabay.The post Why supply is the secret to affordable housing appeared first on OUPblog.

September 25, 2019
Feeling fingers
This will be a story of both protagonists mentioned in the title: the verb feel and the noun finger. However, it may be more profitable to begin with finger. In the year 2000, Ari Hoptman brought out an article on the origin of this word (NOWELE 36, 77-91). Although missed by the later dictionaries, it contains not only an exhaustive survey of everything ever said about the etymology of finger but also a reasonable conjecture, differing from those he had found in his sources, both published and unpublished. In what follows, I’ll depend heavily on his exposition, but a few introductory remarks are needed.
Finger is a Germanic word without cognates outside Germanic. As far as we can judge, Indo-European lacked a common name for “finger.” This circumstance complicates all solutions, but Germanic poses a special difficulty, and English can be used as an example. In addition to the word finger, English has thumb and toe. From a historical point of view, thumb means something like “a swollen one.” It is related to Engl. thousand (“a very large number,” from a historical viewpoint) and tumor, which has the same root in Latin (Engl. th corresponds to Latin t by Grimm’s Law, or by the First Consonant Shift). The word thumb cannot tell us anything about the mental process that resulted in coining the word finger.


Neither can toe. Outside Germanic, the opposition of finger versus toe is uncommon. Therefore, it has been suggested that at one time toe meant the same as finger, with the later differentiation of the synonyms. To bolster up this idea, toe has been compared with Latin digitus “finger” (the word known to English speakers from digit and digital, or, if you prefer, from prestidigitation “quick finger work”). But the phonetic correspondence between toe, from the much older form taih-, and the Latin word is not good: to make digitus match toe, it has to be derived from dikitus. Such tricks, played on words for the sole purpose of justifying a desired etymology, should be avoided.
The German for “toe” is Zehe, which bears some resemblance to the German verb zeigen “to show; point to.” Though this verb has secure related forms outside German, the connection between Zehe and zeigen is far from obvious. The idea of their being connected again stems from the idea that toes at one time meant the same as fingers and did the same work as those. However, it is safer to assume that finger and toe were from the start coined to designate different body parts and, whatever the initial idea underlying finger might be, the two words hardly ever functioned as synonyms. Let us also bear in mind that the hand has five fingers, and, if we discount the thumb, only one of them, namely the forefinger (or the index finger, or the lickpot, if, after being exposed to prestidigitation, you now need a hopelessly archaic noun) is “independently active.” Perhaps finger once referred to the index finger, and only later the name acquired a broader meaning (“finger in general”).
The opposition between finger and toe gave lexicographers no end of trouble. How do you define toe? Dictionaries find refuge in Latin. A toe, we are usually told, is a digit on the foot. Only a learner’s dictionary explains that toe is a finger (!) on the foot. And so it of course is! There is simply no way of explaining toe otherwise, however awkward this definition may sound. The origin of toe is not obscure. Fortunately, English has preserved the word mistletoe. This plant has deadly associations in Scandinavian myths, because the mistletoe was the weapon that killed the shining god Baldr. England offers a friendlier scenario: holly and mistletoe at Christmas and kissing under the mistletoe.

The second component of mistletoe (-toe) means “twig.” Its congeners are Gothic tains, Old Icelandic tá, and others. Toes, it appears, were described as “twigs” on the foot: the image looks credible. To be sure, fingers could also be understood as twigs on the hand, but there is no evidence for this approach. To sum up, it appears that neither thumb nor toe will furnish us with a clue for discovering the ancient impulse behind the creation of the word finger.

The oldest Germanic form of finger has been recorded in fourth-century Gothic. Following the rules of Greek orthography (the Gothic gospels were translated from Greek), it was spelled as figgrs but pronounced as fingrs. The story began with a form like fingraz. In Old Engl. finger, the second vowel (e) was inserted later. The origin of this word remains a puzzle. The conjectures are not too many, and the best known of them are old. Some historical linguists connect finger with the numeral five. In the past, five had the consonant n in the middle (German fünf still has it, and Gothic for “five” was fimf). However, the path from finf or fimf to fingr “does not run smooth.” To begin with, it is hard to understand where the suffix -r came from (it is indeed a suffix, not an ending: see fingraz, that is, fingr-az above). I’ll skip the suggestions about this -r (they are too vague and too uncertain), for more important is the semantic part of the reconstruction.
True enough, the hand has five fingers, but those are individualized entities. We have already noted the history of thumb and the synonyms for index finger. People constantly invent names for them, such as middle finger, ring finger, little finger, pinky, and the like. Remember nursery rhymes like “This little piggy went to market” (finger play). The whole point of such songs and games is that the fingers of the hand are treated individually, rather than as a “multitude.” (You may remember that the last little pig said: “Wee! Wee! I can’t find my way home” or “Wee, wee, wee all the way home.” That is why in The Tale of Little Pig Robinson by Beatrix Potter, when Robinson was abducted, he cried wee, wee like a little Frenchman.)
Even if we ignore the obnoxious suffix r, I believe that Hoptman was justified in saying that deriving finger from five is flawed, because the finger turns out to be one fifth of the hand. “The concept ‘five’ is perhaps not the most logical word to denote such an important instrument as the finger. One might expect that the finger would be named for what it can do alone, as well as what it can do together with the other fingers, and it would make far more sense to name a body part based on shape, function, or movement than on mathematics.” Yet dictionaries keep recycling the old etymology. They do of course supply their entries with a certain amount of hedging, but evading the main issue does not save an unconvincing approach.
So what is the best etymology of finger? And why is the little finger called pinkie? Wait until next week. Even more than one week may be needed for us to get to the end of the story. And don’t forget that feel has also been promised in the title.
Feature image credit: This little piggy by Thomas Berg. CC-BY-SA 2.0, via Flickr.
The post Feeling fingers appeared first on OUPblog.

Why love ends
Western culture has endlessly represented the ways in which love miraculously erupts in people’s lives: the mythical moment in which one knows someone is destined for us; the feverish waiting for a phone call or an email; the thrill that runs down our spine at the mere thought of him or her. To be in love is to become an adept of Plato, to see through a person an Idea, perfect and complete. Endless novels, poems, and movies teach us the art of becoming Plato’s disciples, loving the perfection manifested by the beloved. Yet a culture that has so much to say about love is far more silent on the no-less-mysterious moment when we avoid falling in love or fall out of love. This silence is all the more puzzling as the number of relationships that dissolve soon after their beginning, or at some point down along their emotional line, is staggering.
Perhaps our culture does not know how to represent or think about this because we live in and through stories and dramas, and “unloving” is not a plot with a clear structure. Some relationships fade or evaporate before or soon after they have properly started, while others end with a slow and incomprehensible death. And yet unloving means a great deal from a sociological perspective, as it is about the unmaking of social bonds, which is perhaps the central topic of sociological inquiry.
But in networked modernity, anomie—the breakdown of social relationships and social solidarity—does not primarily take the form of alienation or loneliness. On the contrary, the unmaking of bonds that are close and intimate is deeply connected to the increase of social networks and to a formidable economic machinery of advice-giving or help-giving: psychologists of all persuasions as well as talk-show hosts, pornography and sex toy industries, the self-help industry, shopping and consumer venues—all of these cater to the perpetual process of making and unmaking social bonds. If sociology has traditionally framed anomie as the result of isolation and the lack of proper membership in a community or religion, we are now faced with a more elusive property of social bonds in hyperconnective modernity: their volatility despite and through intense social networks, technology, and consumption.
Thus modern relationships have two properties: they are lived as free (freedom to choose a mate, to cultivate one’s sexuality, to choose one’s sex and gender, etc.) and they are framed by powerful institutions of consumer culture and technology. I dub these relationships “negative relationships.”
The period ranging from the sixteenth to the twentieth centuries can be seen as one that saw the generalization to all social groups of the cultivation of new forms of relationships—the love marriage, the disinterested friendship, the compassionate relationship to the stranger, and national solidarity, to name a few. All of these were novel social relations, novel institutions, and novel emotions all in one, and they all rested on choice, the capacity to act according to one’s desire and preference. Early emotional modernity was thus a modernity in which freedom (to choose) was institutionalized and people experienced their freedom in the refinement of the practice of choice, experienced through emotions. Bonds of friendship, romantic love, marriage, or divorce were self-contained, bounded social forms, containing clear emotions and names for these emotions, studied by sociology as definable and relatively stable empirical and phenomenological relationships.
In contrast, our contemporary hyperconnective modernity seems to be marked by the formation of quasi-proxy or negative bonds characterized by negative choice: the one-night stand, the hookup, the fling, the friends with benefits, casual sex, casual dating, cybersex, are some of the names of relationships defined as short-lived, with no or little involvement of the self, often devoid of emotions, containing a form of autotelic hedonism, with the sexual act as its main and only goal. In such networked modernity, the non-formation of bonds becomes a sociological phenomenon in itself. If early and high modernity were marked by the struggle for certain forms of sociability where love, friendship, sexuality would be free of moral and social strictures, in networked modernity emotional experience seems to evade the names of emotions and relations inherited from eras where relationships were more stable. Contemporary relationships end, break, fade, evaporate, and follow a dynamic of positive and negative choice, which intertwine bonds and non-bonds.
Negative relations are apparent in the conscious decision or non-conscious practices by many men and women not to enter stable bonds or have children and in the fact that single households have considerably increased in the last two decades.
A second way in which negative choice is made apparent is the rise in divorce rates. In the United States, for example, the rate more than doubled between 1960 and 1980. In 2014 it was more than 45 percent for people who married in the 1970s or in the 1980s, making divorce a likely occurrence in a large portion of the population.
Third, more people live in multiple relationships (of the polyamorous or other types), putting into question the centrality of monogamy and such attendant values as loyalty and long-term commitment. An increasing number of people enter and leave a larger number of relationships in a fluid way throughout their lives.
A fourth manifestation of non-choice is sologamy, the puzzling phenomenon of (mostly) women who choose to marry themselves, thereby declaring their self-love and affirming the worth of singlehood. Finally, negative choice is somehow implicated in what a commentator has called the loneliness epidemic: An estimated 42.6 million Americans over the age of 45 suffer from chronic loneliness, which significantly raises their risk for premature death, according to a study by the American Association of Retired Persons. One researcher called the loneliness epidemic “a greater health threat than obesity.”
The loneliness epidemic has another form: As Jean Twenge (a psychology professor at San Diego State University) has suggested, members of the iGen generation (the generation after the millennials) have fewer sex partners than members of the two preceding generations, making the lack of sexuality a new social phenomenon. This is explained by the cultural shift to negative choice, to the quick withdrawal from relationships, or to the fact that relationships themselves never get formed.
Featured image credit: “divorce” by ArmOrozco. CC0 via Pixabay.
The post Why love ends appeared first on OUPblog.

