Oxford University Press's Blog
May 17, 2021
The Senate’s unchanging rules

At his recent press conference, President Biden said that he came to the Senate 120 years ago. I knew exactly what he meant because I got there three years after him when I joined the Senate Historical Office in 1976, and it was a different world. Back then, all senators were men, as were all the clerks at the front desk. One party had held the majority for twenty years. Both parties were internally divided, straddling liberal and conservative wings. The most conservative senator was a Democrat, James Eastland of Mississippi. One of the most liberal senators was a Republican, Jacob Javits of New York. Most votes were bipartisan, with the liberals in each party joining to vote against the conservatives, and each side vying for the moderates. Straight party-line votes made headlines.
Today, twenty-five women serve in the Senate. The president of the Senate, secretary of the Senate, sergeant-at-arms, and parliamentarian are all women. The majority in the Senate has flipped nine times since then, or about every six years. The South seceded from the Democratic party, and liberals were expelled from the Republican party as RINOs (Republicans in name only). All this reshuffling made both parties internally cohesive, with very few senators left in the middle. Almost every vote now is along party lines. Bipartisan votes make headlines.
Clearly, a lot has changed. What hasn’t changed are the Senate’s rules. Unlike the House, which can write new rules at the beginning of each Congress, the Senate defines itself as a continuing body and is operating under the same rules that were in effect in the 1970s. Those rules fit an institution that engaged in a lot more wheeling and dealing and bipartisan coalition-building, but they have not been working as well in an era of polarized partisanship.
The Constitution authorizes the Senate to write its own rules. Paradoxically, the Senate created rules that make it exceedingly difficult to revise those rules. So, whenever it’s necessary to adjust its procedures, the Senate has set new precedents rather than writing new rules. If a majority of senators vote to overturn a ruling of the chair, they set a new precedent, which can completely contradict the written rules. The more polarized parties in the Senate have become, the more likely that change will come through strong-arm tactics by a frustrated majority.
That’s what happened with the nuclear option. Back in 1989 when Democrat George Mitchell was the majority leader, he accused the minority of mounting an increasing number of filibusters. The question for Senate historians was how to quantify filibusters. Senators don’t have to announce a filibuster, and there are multiple ways of conducting one. All that could be quantified were cloture motions, which were indeed increasing, as were failed cloture motions, which Senator Mitchell defined as filibusters.
Over the next two decades the two parties assailed each other on the issue. In 2005 Republican majority leader Bill Frist threatened a “nuclear option” against Democratic filibusters of President George W. Bush’s judicial nominations—potentially setting a precedent to reduce cloture to a simple majority. Democrats protested vigorously—especially their leader, Harry Reid—that this would end robust debate. In 2013, when Senator Reid was majority leader, he complained about persistent Republican filibusters against President Barack Obama’s nominations, and detonated the nuclear option. Republicans protested vigorously—especially their leader, Mitch McConnell. When Republicans returned to the majority, Senator McConnell not only kept the nuclear option but expanded it to include Supreme Court nominations. Democrats protested vigorously.
The two parties had begrudgingly reached a consensus to rationalize the process and end routine filibusters of nominations. Federal offices have to be filled, and the ideological divide between the two parties has grown so wide that neither side is likely to approve of the other party’s nominees. Despite their protests, they both accepted cloture by a simple majority. It has still been possible to derail a nominee if at least a few members of the majority find the candidate unfit, and also possible to slow nominations down. But for the most part we have to expect that nominees are going to reflect the ideology of the incumbent president, and that they are going to be confirmed if the president’s party holds the Senate majority.
Now the question is whether to reduce cloture for legislation. Democrats in the House are all in favor of it. House members from both parties have long claimed that the Senate’s rules made it a graveyard for their bills. President Biden has suggested that the Senate make filibusters tougher and force the dissenters to stand up and talk rather than quietly and effortlessly vote against cloture. Senate Democrats are mulling this over, but Senate Republicans are vigorously opposed. With the Senate divided 50-50, any change would be exceedingly difficult.
Based on the past, and on the continued polarization of politics, the more excessive obstructionism becomes, the more likely it is that an exasperated majority party will strong-arm the minority and set a precedent to limit or abolish filibusters. Senators in the opposition will protest vigorously, before embracing the change once they return to the majority. Tensions could well escalate until the Senate abandons the notion of being a continuing body and allows members to rewrite the rules at the start of each Congress. The Senate would be a very different place then, but then it already is.
Featured image by MIKE STOLL.

May 15, 2021
The Spanish Civil War: a nostalgia of hope

This summer will mark the 85th anniversary of the start of the Spanish Civil War, a brutal struggle that began with a military uprising against the democratic Second Republic and ended, three years later, in victory for the rebels under General Francisco Franco. The enduring fascination of that conflict, its ability to grip the global imagination, belies its geographical scale and is testament to the power of art. Pioneering photojournalists Robert Capa and Gerda Taro seared into the cultural memory frontline images of compelling intimacy and immediacy. Ernest Hemingway and George Orwell turned their experiences on the front lines into classics of English literature, For Whom the Bell Tolls and Homage to Catalonia. And in his grisaille rendering of the atrocity of Guernica, the exiled Picasso made Spain’s primal anguish an everlasting icon of the twentieth century.
Within Spain the civil war’s traumatic legacy is still raw, as witnessed by the decision of the present socialist-led government to exhume Franco’s remains from the Valley of the Fallen, a basilica built in part by convict labour and a place of pilgrimage for the far right. The causes of the uprising were many and complex, rooted in history as well as ideology. To non-Spaniards the struggle appeared more binary and portentous, acquiring mythological status as “the last great cause,” “the decisive thing of the century,” the battleground for the soul of Europe. It mobilized, in Ronald Blythe’s words, “a great modern Theban band” of foreign volunteers in whom the battle noises from Spain produced “spiritual reactions which would have shocked Wilfred Owen and pleased Rupert Brooke.” A high proportion of those volunteers were already political refugees, a fact which partly explains why the mythology of “the last great cause” is often underpinned by an Odyssean sense of nostos (homecoming), a vision of Ithaca, a nostalgia of hope. For, as Primo Levi once wrote, the search for home is “part of a far greater hope, that of an upright and just world, miraculously re-established on its natural foundations after an eternity of upheavals, of errors and massacres, after our long patient wait.”
Among hundreds of graffiti left by Republican inmates inside Fort San Cristóbal near Pamplona, there survives in the punishment cell an inscription (most unusually) in Latin—four lines from Ovid’s Tristia, the Roman poet’s elegiac epistles from exile—which translates:
When the saddest image of that night recurs,
which was for me the final moment in the city,
when I recall the night in which I left behind so much dear to me,
a tear now also glides from my eyes.

The likely author of the graffito was the classically named Virgilio Zorita Fajardo, originally from Medina del Campo, whose occupation was listed as “Jefe de Telégrafos” and who was 50 when he entered the prison, the same age as Ovid when he was banished to Tomis by the emperor Augustus. In the Tristia, Ovid declares his misfortunes to be greater than those of that archetypal nostalgic, Ulysses (Odysseus), who spent ten years returning from Troy to Ithaca and longed “only to catch sight of the smoke curling up from his own land.” Ovid never returned to Rome or his Penelope and died in exile, but he became his own Homer, recognizing the power of poetry to immortalize his sufferings and his yearning for home.
Virgilio had been transferred from Cáceres prison to San Cristóbal in 1939. In January that year, Luis Portillo, a young law don from Salamanca University, having remained loyal to the Republican government, was forced into exile in Britain. For two decades he was unable to set foot in Spain. According to his son Michael, he suffered always from a sense of impossible nostos, from “a tremendous nostalgia for what had been and what might have continued to be.” The tristia Luis composed in exile, many in sonnet form, were love letters to a lost Spain and especially to La Dorada, his beloved Salamanca, which was every bit as integral to his memory of who he was as Ithaca was to Odysseus. And like Odysseus—and Ovid—he knew the importance of keeping Ithaca in mind, of exercising the redemptive imagination through language; he knew that, in its potential to restore form and beauty to what is fragmented, poetry itself is a kind of homecoming. Among the poems in his published volume, Ruiseñor del destierro, are elegies for absent friends such as the murdered poet Lorca, for the conviviality of the plaza and the excitement of the corrida, and for a particular fig tree remembered from childhood which “to the spell of love each summer / offered honey” (“al conjuro de amor en cada estío / brindaba miel”). There is also what may be termed an inverted nostos, rather like Rupert Brooke’s famous sonnet, “The Soldier”, but commemorating a corner of Trafalgar Square that is forever Spain. “Mi Española en Londres” is an ekphrasis on Velázquez’s Venus in London’s National Gallery, which begins:
Siempre me espera aunque jamás suspira.
Yo la frecuento cuanto me es posible.
Yacente, no se yergue ni aun me mira.
Mas me ofrenda hermosura inmarcesible.
Always waiting for me though never heaving a sigh.
I visit her as often as I can.
Recumbent, she never raises herself nor even stares at me.
Yet she offers me unfading beauty.

Like Luis himself, Velázquez’s painting—his only extant female nude—is a survivor and an émigré. Transplanted to Yorkshire in the early 1800s before ending up in the National Gallery a century later, this languorous goddess had evaded censorship in Inquisitorial Spain, suffered a meat-cleaver attack in 1914 by suffragette Mary Richardson, and been exiled to a disused Welsh slate mine during the Blitz. Here, in Luis’s sonnet, art’s resilience and poetry’s capacity to recollect and revivify are what make nostos possible. Or, as Seamus Heaney expressed it, poetry “opts for the condition of overlife, and rebels at limit.”
Poetry as homecoming is also the tenet of a poem by Englishman John Fownes-Luttrell, who at 20 went to Spain as a volunteer to assist with the Refugee Children Aid Society. While stationed in the Catalonian town of Puigcerdà in November 1937, he wrote “Song of Dead Spanish Soldiers”, the final stanza of which envisages a spiritual nostos, a resurrection through verse:
We shall return indeed to our native land.
Although the shells from which we could not hide
Scattered our limbs upon the burning sand,
We shall return, we shall not die!
Whether or not the Spanish Civil War can legitimately be viewed as “a poet’s war,” as Stephen Spender pronounced, much of the poetry (and art) it generated was infused with an Odyssean nostalgia of hope and spoke to a homesickness inherent in the human condition. Heaney, who had been in Madrid when the Troubles broke out in Northern Ireland, was fascinated by Lorca’s notion of duende, the daemonic source of artistic inspiration, a sense of otherness and otherwhere that paradoxically always feels like home. Paraphrasing Lorca, he said “poetry requires an inner flamenco… it must be excited into life by something peremptory, some initial strum or throb that gets you started and drives you farther than you realized you could go.” Elsewhere he identified that initial strum or throb as “a homesickness.”
In a sonnet directly about the civil war, “Pilares de dolor y gloria”, Luis Portillo describes his exiled self as a “reliquary”, a vessel of memory and Ithacan longing. But, as he demonstrates, it is poetry, ignited by the throb of homesickness, that transmutes loss and longing into hope—the hope “triunfar con nuestra paz contra la guerra / y redimir la vida, de la muerte” (“to triumph with our peace against war /and to redeem life, from death”).
Feature image: Robert Capa, On the road from Barcelona to the French border, January 1939. Via Wikimedia Commons.

May 14, 2021
Why has Gaza frequently become a battlefield between Hamas and Israel?

During the past decade, the eyes of the world have often been directed toward Gaza. This tiny coastal enclave has received a huge amount of diplomatic attention and international media coverage. The plight of its nearly two million inhabitants has stirred an outpouring of humanitarian concern, generating worldwide protests against the Israeli blockade of Gaza.
Gaza has undoubtedly taken on much greater prominence in the Israeli-Palestinian conflict in recent years, surpassing the larger, more populous, and wealthier West Bank, historically the more important of the two territories. Under British rule (1917–1948), Gaza was a relatively quiet backwater, less embroiled in the growing Arab-Jewish conflict than other parts of Palestine. Then, under Egyptian rule (1949–1967), Gaza became a hotbed of Palestinian nationalism and a staging ground for guerrilla raids into Israel. But Gaza was not the locus of Palestinian national aspirations. Under Israeli rule (1967–2005), Gaza’s overcrowded, poverty-stricken refugee camps became places of stiff, sometimes violent, resistance to the Israeli occupation but the West Bank was of much greater interest to Israel because of its strategic value and historic and religious significance to Jews. Gaza, by contrast, had little, if any, strategic or ideological value for Israel. Indeed, Israeli Prime Minister Yitzhak Rabin famously remarked in 1992 that he wished Gaza “would just sink into the sea,” and many, if not most, Israelis probably felt the same way.
However, under Hamas rule since 2007, Gaza no longer has a peripheral status in the conflict between Israel and the Palestinians. Instead, it has become the most frequent flashpoint in the conflict and the epicenter of its deadliest violence, with periodic tit-for-tat skirmishes between Hamas and Israel. The West Bank, most of it still under Israeli military rule, has been comparatively calm in recent years. Attacks against Israel from Gaza, on the other hand, have spiked since Hamas took power. These attacks come in many different forms, including firing rockets and mortars into Israel; shooting at nearby Israeli soldiers and agricultural workers; and detonating improvised explosive devices along the fence separating Gaza from Israel.
To understand why Gaza has become a staging ground for so many Palestinian rocket and mortar attacks over the past dozen years, one must look at Hamas’s motives for firing them. This is seldom explained in Western or Israeli media coverage, which tends to portray Hamas’s violence against Israel as solely driven by a burning hatred of the Jewish state (and Jews as well, it is often claimed) and an insatiable desire to destroy it. But there is more to Hamas’s violence than homicidal hatred. Since forming its military wing in 1991, Hamas has strategically employed violence as a means to achieve both short-term and long-term objectives. According to Hamas’s 1988 founding charter, its ultimate goal is the complete “liberation” of Palestine (whether Hamas still remains committed to this in practice is now subject to some debate). Ideologically, Hamas believes that it is fighting a defensive jihad (holy war) or, in more secular terms, a war of national liberation against “Zionist aggression” and colonialism. As part of its “armed resistance,” Hamas uses terrorism against Israeli civilians to demoralize Israeli society and undermine its staying power over the long run. In this respect, Hamas’s use of rockets is just another tactic in the long war of attrition that it has been waging against Israel.
Hamas’s rocket attacks may be indiscriminate, but they are also calculated. As the primary goal of Hamas’s leadership has been to stay in power in Gaza, whatever the cost, they have used rocket attacks for a few different purposes. First and foremost, the attacks are used to pressure Israel to lift or at least ease its blockade of Gaza. Hamas itself claims that its rocket attacks are aimed at forcing Israel to end its blockade of Gaza, but the strategy is not quite as simple as that. Its rocket attacks have actually been intended to provoke Israeli military retaliation against Gaza, which Hamas has hoped will then draw more international attention to Gaza and lead to diplomatic pressure on Israel to make concessions or change its policies. Second, Hamas has used rocket attacks to retaliate against Israel when the Israel Defense Forces (IDF) assassinates Hamas leaders as it has done on a number of occasions, kills Hamas members, or takes other aggressive actions against the group. Third, along with trying to pressure and punish Israel, Hamas has used rocket attacks to prove that it is still committed to “armed resistance” against Israel.
Israel’s frequent and occasionally devastating use of force in Gaza has also been purposeful. Israel’s consistent, overriding objective has been to stop the rocket fire, and maintain or restore calm, without making any major concessions to Hamas. Since Israel cannot prevent militants from launching rockets from Gaza, nor intercept and shoot down every rocket, it has tried to reduce the number of rocket attacks from Gaza through a strategy of military deterrence. Whenever rockets or mortars are fired at Israel, the IDF immediately retaliates to inflict a punishment that will deter their future use. Most of the time, it carries out what it calls “precision” airstrikes that target sites where rockets were launched from and the militants who fired them (although civilians nearby are sometimes also killed or wounded). But since this is not always possible or effective, Israel also holds Hamas responsible for all rockets fired from Gaza, regardless of who launches them, and it retaliates against Hamas targets in Gaza.
When small-scale, tactical airstrikes prove to be insufficient in deterring rocket attacks, or when their deterrent effect wears off and rocket attacks resume or escalate, then the IDF responds by launching a large-scale offensive against Hamas aimed at degrading the group’s military capabilities. The purpose of these major IDF operations is not simply to destroy Hamas’s weapons and kill its fighters, but to inflict such severe damage that Hamas will be not only militarily weakened but also effectively deterred from launching rockets into Israel or allowing others to do so. However, since Israel fully expects Hamas to gradually rebuild its military capabilities and become more emboldened and less deterred over time, its military offensives only achieve a temporary lull in violence and buy time. When the quiet eventually ends and rocket attacks escalate again, the IDF engages in another major offensive against Hamas, and the cycle repeats itself. Two such offensives, “Operation Cast Lead” (December 27, 2008 to January 18, 2009) and “Operation Protective Edge” (July 8 to August 26, 2014), went on for weeks and involved an aerial assault and a ground invasion, resulting in unprecedented destruction and loss of life, primarily on the Palestinian side. In fact, the high numbers of casualties incurred during these two “rounds” of fighting between Israel and Hamas mean that they can be accurately described as wars.
In these two Gaza wars, Israel largely achieved its military objectives as it degraded Hamas’s military capabilities and deterred it from launching rockets for substantial periods of time. But during these periods of relative calm, Hamas has rearmed and grown stronger militarily. It has increased and upgraded its stockpile of rockets, built armed drones, recruited more fighters, dug more tunnels, and turned its militia into a well-organized, well-armed, and battle-hardened professional army. Despite the losses it incurred during Israel’s recurrent offensives against it, therefore, Hamas has become a more powerful adversary. And it remains firmly in control of Gaza, despite the deprivation and suffering of its population. The fact that Hamas still controls Gaza is actually an outcome that suits Israel. Israel wanted to deter Hamas and weaken it, but not weaken it so much that Hamas would be unable to maintain control over Gaza. Israel does not want to reoccupy and rule Gaza since this would be costly, dangerous, and domestically unpopular. So instead, ironically, Israel has reluctantly come to rely on Hamas to govern Gaza, provide some stability, and police the more radical militant groups operating there.
While the two wars in Gaza may well have served Israel’s short-term strategic interests, and arguably Hamas’s interests as well, Gaza’s civilian population paid a terrible price. In total, more than two thousand Palestinian civilians were killed in these wars (just nine Israeli civilians were killed), thousands more were wounded, and hundreds of thousands were psychologically traumatized (studies have shown that children in Gaza, who make up almost half its population, suffer from particularly high rates of post-traumatic stress disorder because of the wars they have lived through). Regardless of whose fault this is—human rights groups and the UN’s Human Rights Council have accused the IDF of committing war crimes in Gaza, while Israeli officials blame Hamas for launching rockets from densely populated urban areas, storing weapons in schools, and using civilians as human shields—in the eyes of many people around the world, Israel appears to be most culpable, if for no other reason than the hugely lopsided casualty ratios in these wars. International criticism, therefore, has focused more on Israel’s allegedly disproportionate use of force in its military offensives in Gaza than on Hamas’s indiscriminate rocket attacks against Israel. Consequently, Israel’s reputation around the world, especially in Europe, has been tarnished, rather than Hamas’s.
This blog post is an excerpt from The Israeli-Palestinian Conflict: What Everyone Needs to Know®.

May 13, 2021
The risks of privatization in the Medicaid and Medicare programs

Increasingly, two of the largest publicly supported healthcare programs, Medicaid and Medicare, are administered by for-profit insurance companies. The privatization of the Medicaid long-term care programs has been implemented largely through state managed care contracts with insurance companies to administer Medicaid LTC funds. Medicare privatization has been similarly implemented through contracts with insurance companies in the Medicare Advantage (MA) program. Privatization, without rigorous regulation, poses a threat to the original goals of these programs—to expand access to healthcare to low-income families, children, and those living with certain disabilities in the case of Medicaid, and to protect older people from devastating financial loss by providing them with effective healthcare coverage in the case of Medicare.
Since the 1980s, for-profit corporate insurance, managed care companies, and private equity firms have increasingly recognized the potential investment value of the health and long-term care (LTC) sectors. Stakeholders are often promised the efficiency and cost-effectiveness that purportedly come from a corporate management style. For-profits promise added value and lower costs as they pursue profits to be distributed among their leadership and shareholders. Non-profits are also incentivized to control costs; however, they differ in that they tend to reinvest funds into the organization for the benefit of the people they serve and are held accountable by the communities in which they operate. Private equity firms have no stake in the continuity of the company, beyond the initial investment.
The American LTC system is a mix of privately provided nursing homes, assisted living, and community-based support services that has not received much public attention except in Medicaid budget discussions. Forty states contract with large for-profit insurance companies to run all or some parts of their Medicaid LTC programs. With the projected doubling of the population aged 70 and older over the next 30 years, there is growing concern about the assumption, made by many policy makers, that managed LTC can reduce nursing home use while earning profits for shareholders.
The Health Maintenance Organizations (HMOs) have convinced state legislatures that they can reduce nursing home admissions by redirecting persons to community-based care and produce at least a 5% savings. But the home and community-based services are not program entitlements, and there is little reported data on the availability of services within the states. The states have focused primarily on containing LTC Medicaid spending, with little attention paid to the quality of services provided. Due to higher administrative costs and the need to generate shareholder dividends, HMO-administered LTC services could well end up costing state governments more, creating fiscal pressures that lead to rising unmet LTC needs. Florida is an example of how this has happened: its community waitlists have grown to 65,000 people for aging LTC services and 22,000 for persons with developmental disabilities.
Key to private equity’s investment growth is its short-term, high rate of return strategy of buying businesses, steering a rapid pace of change that is supposed to improve performance, and then selling them at a profit to investors. They have become major investors in health care, especially MA, and in Medicaid-funded long-term care over the past decade. A recent study shows that private equity buyouts of nursing homes have resulted in major reductions in quality of care, as measured by the federal five-star ratings and in hospital readmission rates, possibly because of reduced staffing.
The Medicare program is also increasingly being privatized with over one third of beneficiaries currently enrolled in MA plans. These plans are designed to attract consumers by offering traditional Medicare benefits plus additional benefits like prescription drugs, dental, vision, and gym memberships. MA enrollees, on average, have lower premiums and cost-sharing compared to traditional Medicare beneficiaries. Questions, however, remain about how MA plans are using the risk-adjusted per-enrollee capitated payments and quality bonuses to provide care.
The stated goals of the MA program were to use private plan alternatives to increase choice and reduce cost without diminishing the quality of care received. However, many enrollees find themselves in plans with narrow networks of providers and face inappropriate denials for care, raising questions about the program and its ability to provide quality care. MA plans were supposed to take on higher morbidity beneficiaries, yet a consistent finding is that MA enrollees are healthier than traditional Medicare beneficiaries. It’s unclear if private Medicare plans have achieved their goals largely due to a lack of encounter data and enforcement of plans’ contracts with the Centers for Medicare and Medicaid Services (CMS).
Why has privatization in America’s two largest publicly funded programs, Medicare and Medicaid, proceeded over the last several years in the absence of rigorous and consistent regulatory oversight? This should be a high priority concern for the Biden Administration and the leadership of CMS as they work to ensure that taxpayers and beneficiaries are well served by these essential programs.
Featured image via Getty Images.

Do you know how these words were coined? [Quiz]

What makes a new phrase stick, really stick, in general parlance? Author Ralph Keyes explores that question in his book The Hidden History of Coined Words while also providing entertaining explanations of some of English’s most nonplussing words and phrases. All sorts of situations beget new words—hoaxes, insults, and jokes have all created common words, while more than a few resulted from typos, mistranslations, and mishearings (bigly and buttonhole, for example), or from being taken entirely out of context (robotics). Neologizers (a Thomas Jefferson coinage) include not just scholars and writers but cartoonists, columnists, and children’s book authors. Keyes also tackles terms with contested coinages, addressing the controversy over the origins of gonzo, mojo, and booty call.
How many surprising coinage stories do you know? Take our quiz to find out!

May 12, 2021
Monthly gleanings for April 2021

I have received two letters: one asked me about the origin of dog, the other about bodkin. In the past, I have written about both words. As regards dog, see the post for 4 May 2016 (the beginning of a series). I have nothing new to say about this enigmatic animal name, but it might be useful to expand the post on bodkin (7 October 2015), which, by association, was followed by a post on body (14 October 2015).
Bodkin and its kin
There is no need to repeat in detail what I wrote five years ago, and I am returning to bodkin (revisiting it, as linguists like to say), because I wish to reinforce the idea that researchers have missed an important Slavic connection and not made enough of the symbolic origin of b-d and b-t words all over Europe. The Slavic verbs with the root bad- ~ bod- mean “to prick; sting; pierce” and, characteristically, “to butt.” Nouns like bodak (with stress on the second syllable), bodika, bod, and body are numerous and refer to stinging, pricking, and cutting objects, including thistle. My suggestion depended on the idea that the similarity between bodkin and the Slavic words may not be accidental.
The borrowing of bodkin directly from Slavic is out of the question. Yet in no other language group of Indo-European do we find such a multitude of bad- ~ bod- words referring to stabbing and cutting. Other than that, it appears that verbs like beat, butt, bat, batter, along with such monosyllables as put, pick, kick, cut, dig, and a few others of the same type are of symbolic (occasionally of sound-imitative) origin. They refer to strenuous efforts and may have emerged as spontaneous expressions, almost like interjections accompanying such actions.

It is instructive to look at their geography. Bat “cudgel” (an old word in English, from which we have the verb to bat) resembles (Old) French battre “to strike” and, unexpectedly, English beat and Old Slavic biti (the same meaning). To pick is, for all intents and purposes, the same verb as Dutch pikken, French piquer, Italian piccare, etc. Cut goes together with Icelandic kuta. Dig hardly owes anything to French diguer “prod, stab” but sounds like it. Kick has Scandinavian look-alikes, such as Icelandic kikna and keikja “to bend over backwards” (in the direct sense of the phrase!), among others. It is therefore not unthinkable that bod– in bodkin is part of the same multitude (indeed, multitude, rather than family). In such cases, I often refer to an image of many mushrooms on a stump: no roots but a family!
The Old Germanic word bad– meant “battle.” Note again how close bad– and bat– in battle are! (For my non-traditional ideas on the origin of the unrelated English adjective bad see the posts for 24 June and 8 July 2015). The word was common and is also extant in such ancient names as Baduarius (a Latinized spelling of course), Baduvila, and Baduhenna. In the earlier post on bodkin, I mentioned the Old Icelandic name Böðvarr, literally “battle + fight,” a name certainly worthy of a prospective hero. It is anybody’s guess whether bad- “battle” has the symbolic origin suggested for Slavic bad- ~ bod-. However, some of those Germanic names had close ancient Celtic counterparts, so that the root was not isolated.
To conclude: however obscure the origin of bodkin may be, it does not seem improbable that bod– in it refers to cutting. We have no way of deciding whether the fashion for bod– words reached Western Europe from the Slavs. Some names for swords and axes did reach Europe from the East.
The suffix –kin is Dutch, but its presence in English does not always testify to the word’s origin in the Low Countries. For example, the root of napkin is French, while the history of the whole is obscure. In pumpkin, –kin may be the product of a relatively late assimilation, while bumpkin appears to be pure Dutch (a little barrel; squat figure, an English coinage for a derogatory name of a Dutchman). In discussing bodkin, modern dictionaries cite similar Welsh and Gaelic forms and either look on them as the source of the English word or conclude that neither is close enough to be regarded as its etymon. When all is said and done, we may risk a guess that bodkin was coined from foreign elements on the native soil.
Bumpkin and pumpkin: they have nothing in common except for the suffix. (Image one by Elias Shariff Falla Mardini, image two by S. Hermann & F. Richter, via Pixabay.)
It appears that I cannot offer a secure etymology of bodkin, but I believe that boyd-, bid-, bad-, and bod– with various suffixes traveled all over Eurasia as the names of a small pointed instrument, and diminutive suffixes or suffixes of the agent noun clung to them easily. If this is true, such words functioned like so many other elements of artisans’ and handymen’s lingua franca. Compare the post on adz(e) (25 March 2020).
From such musings Constantinos Ragazas (cf. his comment on the previous post) concluded that I had given up “etymological algebra.” Nothing can be farther from the truth. Sound correspondences are the backbone of etymology and have served it well for more than two hundred years, but they don’t cover the entire terrain, and marginal phenomena are as useful to study as those which are governed by rules. I also believe that before mocking algebra, one should try to master it. By contrast, Alan Mighty’s list of more b-n words (another comment on the previous post) is suggestive. The danger is that in such cases one never knows where to stop. I probably needn’t say that my ideas on b-n words are close to those on b-d words. Iconic (symbolic and sound-imitative) formations exist in great numbers, but, while walking over that field, one should be aware of mines. Once again, I may refer to Wilhelm Oehl’s works. They are useful but should be treated with caution.
The family name Bodkin is usually derived from Baldwin, with the same diminutive suffix as above. I am not aware of any refutation of this etymology. (This is an answer to the question asked several years ago. Better late than never.)
And now the phrase to ride bodkin, which means “to occupy a seat on a coach, wedged in between two others.” The earliest citation in The Oxford English Dictionary (OED) goes back to 1638, to a time when it was still common, one would think, to make one’s quietus with a bare bodkin, and the OED does not seem to have any trouble with the origin of the metaphor. But it may be of interest to read what the contributors to Notes and Queries (a wonderful weekly at that time) had to say about the subject after the publication of the fascicle of the OED with bodkin. I’ll quote only two statements: “It has been suggested to me that a place in which to set a sword (or bodkin) used to exist in the old traveling coach or chariot between the two occupants of the ‘front seat’.” “The ‘sword case’ in old carriages was not a perpendicular socket, but a horizontal recess [!] in the upper part of the back, the full width of the carriage.” Finally, F. Adams, whose letters often appeared in the periodical, pointed out that in the earliest examples there is no connection of wedging in between two others.
Two British (regional) words
Stroil ~ sproil: sprawling indeed. (Photo by John Tann, CC 2.0.)
To lomp is a phonetic variant of to lump, and to the extent that it is still a colloquialism (in whatever sense), it reflects a dialectal pronunciation of the better-known form. The question was whether I know this word. I have seen it mentioned in dictionaries but never heard or used it.
The origin of stroil ~ stroyl “couch, or quitch grass” is unknown. Most words ending in the diphthong [oil] (boil, coil, foil, moil, oil, soil, toil) have reached English from French and sometimes display the variation of the rile ~ roil type. Therefore, stroil may go back to strile (as boil “inflamed tumor” goes back to bile), but strile has not been attested. On the other hand, stroil alternates with sproil, which is a variant of sprawl. (I found this information in Joseph Wright’s inestimable The English Dialect Dictionary.) Could the plant get its name because it sprawls? If so, then stroil ~ stroyl would simply be a variant of sproil. Mere guessing, as Walter W. Skeat used to say.
Feature image by Arwen Wood of the Portable Antiquities Scheme, on Wikimedia Commons (CC BY 2.0)

May 11, 2021
Fascination of Plants Day: interview with a plant scientist

Mitchell Cruzan is Professor of Evolutionary Biology at Portland State University. For Fascination of Plants Day on 18 May this year, he talked to us about his research and shared some fascinating insights into the evolved adaptations that distinguish plants from animals.
Your research looks at how plants differ from animals. Can you tell us what the key differences are?
There are two fundamental differences between plants and animals. The first is that plants are sedentary. Since all of the creatures that we share an affinity with have some form of mobility, not being able to move might seem like a limitation. In reality, it’s simply an outcome of their evolutionary history; plants don’t move around because they don’t need to. Early in the history of life on Earth, when only microbes were present, some lineages of bacteria acquired the ability to convert light energy into chemical energy, known as photoautotrophism. This was a completely novel mode of living; the new photoautotrophs just had to sit still and absorb light, carbon dioxide, and nutrients to make their own food. This changed everything; now, there was an abundance of food energy and oxygen for efficient metabolism. Photoautotrophs enabled the evolution and diversification of all life that followed and are responsible for the incredible biodiversity we find around us today.
“all of the chardonnay vines grown around the world are parts of a single massive clone”
The second major way that plants differ from animals is that they can keep growing into adulthood; they have indeterminate growth. They do this by regenerating themselves as they elongate their stems and produce new leaves and buds. Even for very old trees, the new growth at the tip of each stem looks as fresh and new as when they were seedlings. In contrast, animals stop growing when they become adults, and it’s downhill from there as senescence eventually leads to death. The regular regeneration of new tissue by perennial plants allows them to be effectively immortal, and death only comes with disease or destruction. Humans have taken advantage of this unique characteristic as many of the plants we rely on for food production, such as fruit trees and grape vines, are propagated with cuttings. Even though cuttings are grown as separate plants, they are still part of one individual plant that originated from a single seed. For example, all of the chardonnay vines grown around the world are parts of a single massive clone.
Since plants can’t move around, does that put them at a disadvantage as compared to animals in adapting to changing environments?
It might seem that way, but it turns out that plants may be able to adapt to variations in their environments more easily than animals. They do this in three ways. First, plants can change as they grow. This is known as phenotypic plasticity. Plants modify their leaves as they develop so they are suited to the current environment. For example, leaves developing in the shade are larger and thinner, and plants exposed to drought will produce smaller leaves that may be covered with hairs. Phenotypic plasticity is typically due to reversible epigenetic changes—the addition of proteins to the DNA molecule that control gene expression. Such epigenetic modifications can allow plants to adjust to changes in the environment over a matter of minutes as they grow new leaves and stems. Indeterminate growth and a high degree of plasticity allow plants to continue thriving while they remain sedentary and are exposed to seasonal and longer-term changes in their environments.
Second, plants can pre-condition their seedlings to improve their chances of survival. The epigenetic changes made during a plant’s development can be inherited by their seedlings. In animals, epigenetic changes are removed during gamete formation and have to be reformed during embryo development. The inheritance of epigenetic changes allows plants to prepare their offspring for the particular environments they are likely to encounter. For example, plants subjected to shade or drought stress will produce seedlings that have better survival under the same conditions. Epigenetic responses in plants have been conditioned during their evolution, so unlike random mutations in the DNA code, epigenetic changes are directed and predictable responses to the environment. Once the stress is removed, these modifications are erased until the same stress is encountered in a future generation.
Third, plants can evolve as they grow through clonal evolution. Mistakes in DNA replication—mutations—happen every time cells divide. In animals, heritable mutations only come from cell divisions in specialized cells known as the germ line that form sperm and eggs; mutations in the rest of the body are not passed on to offspring. The situation is quite different in plants, where one group of germ cells at the tip of each stem is responsible for generating the stem and leaves as well as flowers and their reproductive cells. Since plant germ cells undergo many thousands of divisions, the potential for accumulating heritable mutations is very high. Such a high mutational load would be a problem for plants, but clonal evolution during stem growth filters mutations as they arise. If one germ cell acquires a mutation that slows its growth, it will be replaced, and all the mutations it carries will be lost. Mutations that increase growth will be retained as the cells carrying them replace all the other germ cells. By the time flowers are produced, many of the deleterious mutations have been filtered out, while beneficial mutations remain to be passed on to offspring. Viticulturalists have taken advantage of this feature by selecting clonal lines within each grape variety that are characterized by unique sets of mutations that affect their flavor profile and their ability to grow in different climates and soil types.
How has the inability of plants to move around affected the way they reproduce?
“We may have fleshy-fruited plants to thank for our own origin, and for our ‘sweet tooth’”
Since plants cannot move to find mates or disperse their seeds, they rely on biotic vectors, such as bees, and abiotic vectors, such as wind, to accomplish these tasks for them. Plants have undergone selection to manipulate their dispersal vectors to their greatest advantage. For example, plants offer enough floral rewards to entice visits by bees and other pollinators, but not enough to satisfy them, so pollinators are forced to move among flowers and plants to gather enough nectar and pollen. This promotes the dispersal of their pollen and facilitates outbreeding. Many seeds are carried passively by the wind, or by attaching themselves to the fur of animals, but in other cases, plants offer food rewards in exchange for seed dispersal. For example, berries and other fleshy fruits offer a food reward to fruit-eating animals. The seeds in fleshy fruits can pass through the digestive tract without being damaged. As the animal moves around, seeds are dispersed some distance and deposited in new locations along with a bit of fertilizer. We may have fleshy-fruited plants to thank for our own origin, and for our “sweet tooth,” as the earliest primates were fruit eaters that first appeared during the Cretaceous.
Most plants produce both pollen and ovules, so the potential for inbreeding through self-fertilization, also known as selfing, is high. Many flowering plants avoid selfing by a mechanism that arose very much by accident during their origin. The ancestors of flowering plants were fern-like plants that had their ovules exposed on the surface of their leaves so they were susceptible to herbivory by beetles, which became common around the time of the origin of flowering plants in the Triassic. Selection favored the fusion of ovule-bearing leaves to enclose them in an ovary, which protected them from herbivores. One consequence of this new flower structure was the displacement of the location of pollen deposition—the stigma—away from the ovules. Consequently, pollen must grow some distance to fertilize the ovules. This enabled plants to evolve mechanisms to prevent self-fertilization and inbreeding, thereby improving seedling fitness.
This is just a sample of the many ways that plants differ from us. Their sedentary lifestyle, immortal growth, and their ability to change and evolve as they grow may make them seem more alien than any other organism we know. These unique characteristics of plants originated from their autotrophic mode of living and are the outcome of hundreds of millions of years of evolution. Starting with the earliest terrestrial algae, plants have diverged on their own evolutionary trajectories to occupy nearly every habitable environment and produce the amazing diversity of plant life that we enjoy today.
Explore the latest in botanical research from OUP’s growing portfolio of plant science journals.

May 10, 2021
The real crisis at the US border

Once again, we are exposed to daily doses of “border crisis” news complete with photos of Central American children attempting to cross the southern US border without their parents. Other media images depict adult women or men cradling children or holding their little hands as the parents anxiously wait to at least be able to present their cases for asylum to US officials. Headlines accompanying these images cite “soaring numbers” of children crossing at an all-time high. And they claim to explain this “surge,” discussing how this new border crisis may exceed the government’s capacity to care for them and ultimately thwart President Biden’s plans for immigration reform.
Calling the groups of immigrants arriving at the Southern border a crisis has become an easy shorthand with sensationalist overtones. It provokes reactions across the range of political opinions, as well as among government officials and civil society actors alike. But is there really a crisis at the border? Or is this crisis located elsewhere? And whose crisis is it?
First is the issue of whether the numbers we see today constitute an actual, statistically significant spike over previous years. A question about definitions used in collecting official statistics is important to keep in mind, as such definitions change over the years to the point where we might be comparing apples and oranges when we assess the number of immigrants arriving at the border from year to year. Although the approximately 15,000 encounters with unaccompanied minors that border authorities recorded in the first two months of 2021 seem much higher than the 12,758 apprehensions during the same period in 2019, a semantic difference may partially account for the uptick: whereas the Department of Homeland Security classified the figures in 2019 as “apprehensions,” the numbers in 2021 were of “encounters.” Because asylum seekers are now often sent back across the border without being detained, the same person might try to cross again in a short period and be re-encountered, meaning they appear more than once in the newly-defined statistic.
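The double-counting effect described above can be made concrete with a toy sketch (all names and numbers are invented for illustration, not drawn from the official statistics): counting every “encounter” inflates the total relative to counting unique individuals whenever the same person attempts to cross more than once.

```python
# Toy illustration (invented data): "encounters" count every crossing
# attempt, while an apprehension-style tally of unique individuals
# counts each person only once.
crossings = ["ana", "ben", "ana", "cruz", "ana", "ben", "dora"]

encounters = len(crossings)               # every attempt is counted
unique_individuals = len(set(crossings))  # each person counted once

print(encounters)          # 7
print(unique_individuals)  # 4
```

With three of ana’s attempts and two of ben’s in the tally, the encounter count is nearly double the count of distinct people, which is why comparing a year of “encounters” against a year of “apprehensions” risks overstating any increase.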
Even if we put aside the lack of clarity in these statistics, we must also contextualize the numbers and any apparent increase. Importantly, unaccompanied minors are still only a fraction of the total number of 180,000 “encounters” recorded in the first months of 2021. And the total number of migrants “encountered” is much smaller than that registered in 2000, when apprehensions were at the all-time high of over one million annually for several consecutive years. Panic over a border crisis and its consequences begins to dissipate when empirical evidence enters the discussion.
“Panic over a border crisis and its consequences begins to dissipate when empirical evidence enters the discussion.”
To the extent that there are migration-related crises, these are located elsewhere and have been created over decades by government policies in which the United States has played a critical role. Policies that have thoroughly militarized the US southern border through various “operations” date back to the 1950s but have expanded and multiplied from the 1980s and grown astronomically since the 1990s. Parallel approaches in Central America, instigated by the US government, have wholly militarized the countries where many immigrants originate, historically and contemporaneously. In recent years, US-sponsored militarization strategies have extended to cover the entire migration corridor through Mexico, making Mexico a “vertical border.” This tripartite militarization strategy—at origin, transit, and destination—has created some of the most violent country conditions in the world from which Central Americans flee today, harrowing journeys by land through Mexico, and one of the most heavily armed borders in the world that pushes migrants onto inhospitable desert terrain where many have perished. The multi-sited militarization approach constitutes a veritable crisis in need of overhauling: it has generated more violence and produced more deaths, disappearances, and suffering than most natural disasters in the Central American region.
Thus, we are left with images of a border crisis that do not hold up in the face of empirical evidence and with veritable crises that cause infinitely more harm yet go ignored by the media and policy makers. Migration crises are mostly constructed through policy, such as the Trump administration’s Title 42, which largely suspended the right to seek asylum in the United States, and which pushed asylum seekers to remain in Mexico indefinitely. Thus, real migration crises originate in policy decisions made in wealthy nations that historically have actively repelled, not welcomed, asylum seekers in need of protection.
Featured image by Greg Bulla

May 9, 2021
On SHAPE: a Q&A with Lucy Noakes, Eyal Poleg, Laura Wright & Mary Kelly

OUP have recently announced our support for the newly created SHAPE initiative—Social Sciences, Humanities, and the Arts for People and the Economy. To further understand the crucial role these subjects play in our everyday lives, we have put three questions to four British Academy SHAPE authors and editors—social and cultural historian Lucy Noakes, historian of objects and faith Eyal Poleg, historical sociolinguist Laura Wright, and Lecturer in Contemporary Art History Mary Kelly—on what SHAPE means to them, and to their research.
SHAPE subjects are well-named—they help us shape the world we live in and the future we’re building. What distinctive potential and skills do you think Arts and Humanities and Social Science disciplines bring to the lives of those learning them, as well as to society?
Lucy Noakes: I think that these disciplines, though they vary widely in approaches and methods used, all have one essential element in common: they help our students to learn how to be effective, engaged, and critical citizens. For example, the pernicious nature of “fake news” today, from the wilder extremes of QAnon fantasists to the advice circulating on social media suggesting that people can protect themselves from COVID-19 by inhaling steam or drinking hot water with lemon juice, can be harmful both to individuals and to wider societies. SHAPE students learn to be active and participatory readers and listeners. A student researching an essay topic will ask: who is arguing this? Why? What is their evidence? Where was it published? They also learn how to develop arguments based on evidence, not opinion—crucial skills in today’s world.
Eyal Poleg: Critical thinking and the ability to reflect on events, past and present, are vital for our existence as a dynamic and pluralistic society. Our students learn how to analyse sources, be they written accounts, artwork, mundane objects, or buildings. These skills are invaluable in becoming active and engaged citizens within modern society, especially in the face of empty rhetoric and fake news. Their ability to clearly communicate complex ideas is likewise instrumental in shaping the world we live in. History does not simply repeat itself, but, by learning about past societies, we gain a better understanding of the nature of our own, and of possible future directions.
Laura Wright: I’m a word-historian, so I’ll give a specific answer with regard to my discipline: looking at how people used language in the past holds a mirror up to who we are now. For example, the names Alice, Emma, Joan, John, Katherine, Margery, Peter, Richard, Robert, Thomas, and William entered English via the Anglo-Norman language and knocked out the Old English namestock of Beowulf, Cyneheard, Ealdraed, Frithuswith, Ohthere. So, if you are called Alice or John, you signal to the world at large that your parents were members of the Anglo-Norman family. But they might not have known it or thought of it that way: Alice or John might have just sounded suitable for a baby—traditional, not too outlandish. Society and its traditions shape us and the choices we make, and studying SHAPE subjects causes us to question those assumptions—and, in the case of historians, track them back to their source.
Mary Kelly: Students and scholars of the Arts, Humanities and Social Sciences open important questions about, for example, human difference and why people maintain certain belief systems over others. Students are encouraged to analyse, to be critical, to be diplomatic, to challenge when required, and to think creatively when locating solutions. The Arts, Humanities, and Social Sciences exist in the service of human development, always enhancing our quality of life.
“The Arts, Humanities, and Social Sciences exist in the service of human development, always enhancing our quality of life.”
For example, in 2016, University College Cork became the first Irish-based university to formally integrate modern and contemporary art from the Middle East and North Africa into its History of Art curricula. The teaching philosophy, which underpins the building of my courses, is to create an awareness among students about the current decentred world as well as our responsibility to equip students (potential future leaders) with robust cross-cultural competencies through innovative practices in teaching and learning. Our students are gaining valuable skills and insights which will galvanise them to engage with challenging conversations relating to human difference. Arts, Humanities, and Social Sciences disciplines are actively enhancing human diplomacy.
As a SHAPE researcher, how are your concerns and needs different from your colleagues in STEM (Science, Technology, Engineering, and Mathematics)?
Lucy Noakes: There is perhaps more in common between STEM and SHAPE subjects than we might first think. The key, and most important, similarity would be that we all work with evidence; it is just as important that a historian build their analysis based on the evidence available as an engineer or a biochemist, even though the outcomes might be very different. I would also argue that the overwhelming majority of academic research, across all subjects, is shaped by the historical context, concerns, needs, and values of the time and place in which we work. But perhaps the biggest difference is that in SHAPE we have more space for the development of arguments and perspectives—while 2 + 2 will always equal 4 in mathematics, historians’ analyses of a subject like the Second World War are endlessly varied and ever-changing. For me, this is a huge part of SHAPE’s appeal.
Eyal Poleg: STEM colleagues often pursue innovation, looking for ever more advanced technologies, for ways of improving our quality of life and of understanding the natural world. SHAPE disciplines, on the other hand, tend to be more reflective, taking into account past accomplishments, and thinking more clearly about why and how progress should be made. This being said, I do not think of our work in opposition. Much of my recent research has been in collaboration with scientists, employing cutting-edge technologies in the analysis of historical objects. The two perspectives complement one another, with SHAPE defining the historical questions and STEM providing new means of answering them. At best, such collaboration contributes to both disciplines, unearthing hitherto unknown information about historical objects and learning about the past, on the one hand, while finding new uses for innovative technologies, on the other.
Laura Wright: What I need is historical text, and I suppose STEM researchers don’t—but in terms of research questions, we’re probably not very different. As a historical linguist I study creative literary texts as well as other kinds, but so do clinicians and scientists concerned with the brain, because people spend a lot of time talking about imaginary states—what might happen, what could happen, as well as what does happen. Whatever humans do ends up expressed in language, one way or another, and much of my source material consists of historic STEM text—people inventing things, in particular. For example, the term pickled salmon was associated with the London poor in the 18th and early 19th centuries as it was what they ate, sold from street barrows. Then the tin can was invented in 1813, pickled salmon was replaced, and the poor turned to tins, with the term tinned salmon having connotations of “working-class” for a century or so.
Mary Kelly: SHAPE and STEM address major societal challenges, but in very different ways. In addition, SHAPE researchers’ empirical and analytical needs, as well as divergent and convergent thinking processes, differ greatly from those applied in STEM.
In order for us to truly maximise the impact of STEM ideas and technologies, public and private sectors must engage with the Arts, Humanities, and Social Sciences in order to understand how human groups and individuals are formed and how they behave, produce, evolve, and co-exist.
Right now, however, the most urgent need for the Arts, Humanities and Social Sciences is the need for fair and adequate financial resources for SHAPE research and development. SHAPE research is undervalued by many in the public and private sectors: this is clearly evident from the limited funding and support which the Arts, Humanities and Social Sciences receive from numerous funding bodies and in the education system.
SHAPE subjects are hugely diverse, but they do share a focus on understanding more about people and societies, and what it is to be human. How does your research go about investigating these concepts? How do you see your work contributing to and informing these broader discussions?
Lucy Noakes: My most recent work has been on death and grief in Second World War Britain, and on the insights that emotionally attuned approaches to the past can bring to our understanding of what it might be like to live through and navigate crises and changes that feel out of our control. I have been struck again and again this year by how much our experiences of fear, loss, and changes to our day-to-day lives have shaped my students’ and my own understandings of the lives of those who experienced total war. I also have a new awareness of the changes that the crisis of war helped to bring about in Britain, particularly the creation of the Welfare State at the war’s end. If only we listen, history has a lot to teach us not only about how societies manage crises, but about how we can use these moments of rupture to rethink our priorities and how we want to live.
Find out more about Lucy’s recently published title, Total War, edited alongside Claire Langhamer and Claudia Siebrecht.
Eyal Poleg: My earlier work has explored how people engaged with the Bible in the Middle Ages, demonstrating a reliance on mediated access, surprisingly similar to knowledge of people and events of the Bible among secular societies nowadays. More recently I have studied hundreds of manuscripts and early printed Bibles to trace continuity and change across three and a half centuries, reevaluating the impact of print and Reformation on English religion. This perspective enabled me to unearth the long and complex process of innovation and change. Some features familiar to us, such as chapter division, took centuries to implement, very gradually moving from the nascent universities, through nunneries and chapels, to be embraced by lay women and men. The parish Bible, an early modern innovation, was first met with confusion and uncertainty. Understanding the limits of innovation, and putting things we take for granted in new perspective, helps us better understand our own society, past, present and future.
Find out more about Eyal’s recently published title, A Material History of the Bible, England 1200-1553.
Laura Wright: Well, I research things we tend to take for granted, and one of these is house-names. Humans need shelter. Humans give things names. Numbering houses is modern—18th century—but house-names are old. The ubiquitous house-name “Sunnyside” started as a medieval Scottish legal term in dividing up farm land, and then became an 18th-century English house-name particularly used by Quakers—ceasing to be a legal term and becoming a cultural marker, insider-code for “a Quaker lives here.” Certain Quakers and Nonconformists became extremely rich and their Sunnysides were mansions, and the American author Washington Irving, visiting Sir Walter Scott at Abbotsford in the Borders, borrowed the name of a nearby farm called Sunnyside and named his highly influential New York mansion Sunnyside too. There’s more to the story, but who influences whom linguistically shows how culture spreads, and all humans are shaped by their culture. It’s good to be aware of one’s prejudices.
Find out more about Laura’s recently published title, Sunnyside.
Mary Kelly: My current research project looks beyond the purely European canon of historical Orientalist art objects to explore contemporary artistic responses from the Middle East and North Africa (MENA). I argue that this approach will further contextualise art objects as an important part of an ongoing, reciprocal socio-cultural dialogue between the global north and south. Specifically, I work in the space between 19th- and 20th-century European Orientalism and 21st-century responses to Orientalism from women artists located in various Middle Eastern and North African countries. Many historical and contemporary women artists from across the globe address the conflicting experiences of female identities and—through their art—they are “speaking back” to local, national, and international marginalising views which present stereotypical ideas of oppressed or powerless women. I engage with Transnational Feminism in my work because it is rooted in the local and translocal experiences of women, whose narratives then cross “borders” to create meaningful conversations and collaborations internationally. My work evokes themes such as Orientalism, gender, female agency, female oppression, religion, heritage, diaspora, and difference, all in the service of one conviction:
Art builds progressive and positive bridges between different people.
Find out more about Mary’s recently published title, Under the Skin, edited alongside Ceren Özpınar.

May 8, 2021
Dear fellow nurses

I am honored to bring Nurses Week greetings, especially in this year of unprecedented demands on nurses, much like those Nightingale experienced during the Crimean War. Imagine how Florence Nightingale felt when, for the first time, she surveyed the war hospital wards, filled with moaning soldiers on wobbly cots dying from the infections of their wounds. Even with her British reserve, it is likely that she thought as she surveyed this scene, “Blimey, what have I gotten myself into?” Nevertheless, faced with the challenge of a 42% mortality rate and a largely uncooperative medical staff, Nightingale dug in—cleaned up the wards, paid for needed supplies from her own pocket, maneuvered around the recalcitrant physicians, nursed and nourished the wounded, and reduced the soldiers’ mortality rate to 2%.
Think about the COVID-19 pandemic experiences of so many nurses, working in overcrowded units, with patients in hallways; in emergency rooms turning patients away; in nursing homes with spiking mortality rates; with patients dying alone; with inconsolable family members; with exhausted and anxious colleagues worrying about contagion, about protecting self and families. Like Nightingale, you likely sighed deeply and thought, “No way, is this what I signed up for?” And yet, you dug in—working extra hours, caring for critically ill patients experiencing a novel disease, video-phoning with patients’ loved ones, comforting the dying, reveling with those who pulled through, supporting anxious and spent colleagues. Nurses are heralded in the media as heroes—not only for their prowess at complex physical care but for their humanity in attending to patients and families isolated from each other. However, it is difficult for many nurses to celebrate this proclaimed heroism. As a critical care nurse recently confessed to me, “I don’t feel heroic, just tired and empty.”
Wars and pandemics end, and a new normal emerges. Nightingale returned to London in ill health but managed to turn her stressful Crimean experiences into educating and legitimizing nurses as paid professionals. She also focused on the importance of maintaining health to prevent illness and applied those principles in her own life post-Crimea. You may be heaving a sigh of relief as the pandemic winds down. You are fantasizing about “getting back to normal,” whatever “normal” means to you. However, your life as a practicing nurse is forever changed as a function of living through the crisis of the COVID-19 pandemic. You have experienced an unprecedented level of sustained stress, including secondary stress—that extra layer of stress caused by the pressures placed on professionals who care for others in need. It is the images of dying patients, inadequate equipment, overcrowded hospitals, distraught families, and beleaguered colleagues that linger on and have changed your perspective on the practice of nursing forever.
What now? Can you purposefully reflect on your pandemic experiences and answer the questions, “What have I learned about myself? How can I use this crisis to change my life and perspective, my health, and my nursing practice?” There is great value in taking stock post-trauma. Every crisis is an opportunity for personal growth, for a new resilience to emerge in ways that would not have been possible had the trauma or darkness not occurred in the first place.
These lines from Jane Hirshfield’s poem “Optimism” capture the essence of our choice going forward.
“More and more I have come to admire resilience.
Not the simple resistance of a pillow, whose foam
returns over and over to the same shape, but the sinuous
tenacity of a tree: finding the light newly blocked on one side,
it turns in another.”
What will it be for you? Pillow or tree? Happy Nurses Week!

