Oxford University Press's Blog
March 20, 2020
How work conditions shape healthcare
A few hours before he died, my patient, a 21-year-old man (a boy, really) who was undergoing treatment for a blood cancer, came to my ward from the emergency department, where he had presented with fever. His parents came with him. The emergency clinicians had begun the right protocol to address his fever. My duties, as his ward nurse, were to follow that protocol. He was doing well. He even tried to crack a joke. I laughed, though I was too old to get the reference. His joy, with his smiling parents by his side, was enough to make me happy. But then he started breathing heavily. His blood pressure dropped. I knew these as the signs of life-threatening infection. While running between his room and the rooms of my three other patients, I got the resident, whom some call the junior doctor, on the phone. After what seemed too long, she came, but shortly after she arrived, my patient lost consciousness. I grabbed a colleague on the floor and asked her to cover hanging the chemotherapy for my patient in another room so that I could sit with my unconscious, now dying patient and his parents. I knew this would burden my colleague. She had four very sick patients of her own; now she’d have five. My patient died about a half hour later. I sat with his sobbing parents for another half hour, but then my friend came to say that she had a patient in crisis. If I didn’t take back my patient who needed the chemotherapy, she was afraid she was going to be too busy and might make a mistake. “I don’t want to kill someone. There’s enough mourning on the ward tonight already,” she said. She needed me to pick up my full load.
Another night, I had a patient in the intensive care unit who had come to the hospital with a stroke. A neurologist had placed her on a mechanical ventilator to oxygenate her body as soon as she arrived at the hospital. But the stroke, an electroencephalogram had shown, had shut down her brain altogether. There was no electrical activity; she had no reflexes; and without the ventilator, she could not breathe. The clinical signs were clear: her brain essentially was dead. The neurologist who had placed her on the ventilator met with her family and said he needed them to consent to “take her off and let her die.” Her family was confused. Why would they “let her die”? The neurologist insisted on calling an ethics consult to force a withdrawal of the ventilator—without the family’s consent. Since I was in the ICU that night, I had only one other patient. But this other patient needed dialysis, which demanded a lot of my time. I didn’t have time to help the family process why the neurologist had placed their wife, mother, and sister on the ventilator without their consent and was now forcing their hand in disconnecting it. They sat in the room dazed and angry.
On both nights, I wanted to quit. And truth be told, I would have had I not needed my salary. I didn’t want to stop being a nurse. But I couldn’t deal with the emotions I felt. I felt trodden upon. Beat up, burned out. My body ached. I oozed stress hormones.
Some call what I felt “moral distress,” the emotions that arise from not being able to do what one thinks of as the right thing to do. It is right for nurses to care for bereaved parents and help a confused and angry family understand the clinical situation. I knew my nursing department had a “moral resilience” program to help with the feelings some call “moral distress.” But I didn’t want to deepen, sustain, or restore my integrity. My integrity was just fine.
The twin concepts of moral distress and moral resilience put a lot of burden on nurses – and other healthcare providers. But healthcare providers aren’t the problem. And both concepts, in one way or another, put the problem on our backs. It’s our problem to butt heads against institutional constraints so we can do our jobs or our problem to readjust our integrity amid these constraints. No; the problem is the constraints.
Healthcare systems, of whatever stripe, control the conditions of our work. The push to treat patients in the cheapest way possible, the hierarchy that lets one type of provider decide what to do and then makes families undo those decisions and undercuts other providers’ ability to help – whatever the situation, when nurses and other healthcare providers go home at the end of the shift beaten and broken, it’s because the moral conditions of work are off.
We can no longer use moral distress and moral resilience to mask the moral conditions of work. Safe work environments in which one can perform one’s core professional duties while being treated fairly and respectfully and in which all providers are valued as important members of the care team; environments in which one can take time to be human—time to eat, to put one’s feet up for a few minutes, and to process one’s own grief at the loss of a patient—these, among others, are the moral conditions of work. Let us reframe the discussion to focus on the moral conditions of work, not on concepts that make these conditions healthcare providers’ problem.
Featured Image Credit: “Stress Programmer” by andreas160578 via Pixabay.
The post How work conditions shape healthcare appeared first on OUPblog.

The mystery of the Elder Pliny’s skull
Has part of the body of the Elder Pliny, the most famous Roman victim of Vesuvius, been recovered? The story surrounding the relic is a source of continuing fascination.
When Vesuvius erupted in 79 C.E. the Elder Pliny was under 20 miles away. He was quite unaware that Vesuvius was a volcano, despite publishing Rome’s greatest encyclopedia, the Natural History, around two years earlier; he had not included this Campanian mountain alongside Etna in his review of the great volcanoes of the known world. The eruption now slowly unfolded before Pliny’s eyes from his vantage point at Misenum near the northern tip of the bay of Naples. As commander of the imperial fleet at Misenum, Pliny had responsibility for policing the waters of the bay. A rescue mission ensued, and a flotilla of navy ships was launched. The Elder Pliny offered his 17-year-old nephew, the Younger Pliny, the chance to join the fleet. The nephew declined. He preferred to stay home and finish his homework by making excerpts from the great Roman historian Livy. A wise decision, in retrospect. The Younger Pliny lived to record the death of the Elder in a pair of letters addressed to Rome’s other great historian, Tacitus.
Those documents survive today as part of Pliny’s own collection of Letters and are an invaluable source of knowledge about the 79 eruption. They include precise information on how the Elder Pliny died on the beach at Stabiae at the southern end of the bay of Naples. The strong onshore winds created by the violent eruption meant the Elder could sail quite easily into Stabiae on his rescue mission. But his ships could not fight their way out against the torrent of fast-moving air blowing onto land. Deadly surges of superheated gas thrown out by Vesuvius eventually reached Stabiae, only five miles south of Vesuvius. The Elder Pliny, who was probably asthmatic, soon succumbed to the surges. The next morning “his body was found whole and uninjured,” according to the Younger Pliny, “in the clothes he wore; its appearance was of one resting rather than dead.”

Leap forward 18 centuries. Around 1900 a local landowner named Gennaro Matrone is digging on his estates in the vicinity of ancient Stabiae. He uncovers over 70 skeletons in close proximity to one another. They are evidently the remains of victims of the ancient eruption, left to lie where they had perished. One corpse attracts particular attention. Wearing a gold ring, necklace and armbands, the body is found in a supine position that suggests sleep, its head supported by a pillar. A French diplomat, who has perhaps read the Younger Pliny’s letter, is on hand to suggest that the corpse may be that of the Elder Pliny. Matrone publishes the finding in a pamphlet, but receives only derision in response. One eminent archaeologist of the day pours scorn on the identification: a Roman admiral would hardly be dressed like “a ballet dancer.” Matrone is discouraged and the relic eventually ends up in respectable obscurity at the Museo dell’ Arte Sanitaria in Rome, where it is tagged simply “skull attributed to Pliny.”
A century later, the Italian military historian Flavio Russo suggested that the attribution should be settled by scientific investigation. Might an isotopic examination of the teeth reveal where the skull’s owner had spent his childhood? A campaign led by La Stampa eventually raised funds for a study of the relic at the hands of a scientific team fronted by popular historian Andrea Cionci. The results, released in January 2020 at the Museo dell’ Arte Sanitaria, were somewhat ambiguous. Isotopic examination of the lower jaw indicated a childhood consistent with the early life spent by the Elder Pliny in his north Italian hometown of Comum (modern Como). But the ancient owner of the lower jaw was in his 30s – whereas the Elder had died in his 50s. The skull had no upper jaw. Gennaro Matrone, confronted with an incomplete head in 1900, had evidently supplemented the “Pliny” skull with a mandible taken from a nearby corpse. What of the rest of the skull? Examination of the cranial sutures suggested an individual who had died in their 40s or 50s. That, at least, was consistent with the age at death of the Elder.
Is this the skull of the Elder Pliny? Almost certainly not. Quite apart from a debate whether the gold relics found with the corpse can or cannot represent the insignia appropriate to a Roman admiral (or simply point to a refugee fleeing nearby Pompeii with all his most precious worldly goods), one important question remains. If the Younger Pliny knew precisely where (and how) his uncle died, why was he content to leave the corpse unburied? The Roman horror of a body left to lie without funeral rites is well known. The letters written to Tacitus about the eruption attest a devotion by the Younger towards the Elder strong enough to keep him waiting at Misenum for news of his uncle long past the point where it was safe for the Younger to remain there.
Perhaps we have misunderstood the point of Pliny’s precision about the resting place and special pose of his uncle’s corpse? There existed a rumour – later spread by the Younger’s own protégé (of all people!) the Roman biographer Suetonius – that the Elder Pliny was ‘killed by one of his slaves whom he had begged … to hasten his death’. Clearly intending to deny this story, the Younger is very careful with his words: the Elder’s corpse was discovered “uninjured” with the appearance “of one resting.” No evidence here, Pliny implies, of a violent death at the sword of a slave. He is not trying to help us find or identify the body of the Elder.
Was the effort of Cionci’s scientific team wasted? Absolutely not. The investigation attests to our continuing desire to come as close as possible to the great figures of the Roman past. And if there is no skull, after all, for us to venerate, we can at least still undertake a pilgrimage. From Como near the foot of the Swiss Alps to the bay of Naples, Italy is filled with monuments and sites closely associated with the Younger Pliny. Ancient inscriptions by Pliny are held in museums in Como and Milan, a site for his country estate has been uncovered near San Giustino in Umbria, and the public buildings Pliny knew and worked in at Rome still stand – not to mention the vestiges of the naval station at Misenum and the spectacular remains of the villas in which the Elder almost certainly sheltered at ancient Stabiae, on Varano hill in modern Castellammare di Stabia.

Featured Image Credit: by _M_V_ on Unsplash
The post The mystery of the Elder Pliny’s skull appeared first on OUPblog.

March 19, 2020
How religion affects global pandemics
People sometimes see religion as an unwelcome infection affecting the secular politics of international relations. Such attitudes easily present themselves in consideration of terrorism and violence. Religion is seen to distort and hamper the healthy peaceful progress of secular politics, operating as an outside pathogen that inflames tensions and challenges already present in global affairs.
Religion is also said to play a role in the spread of novel coronavirus (COVID-19). Authorities are linking clusters of cases to religious congregations in China and Korea. Religious activities of Christians in Korea—attending worship sessions and outreach events multiple times a week—and their unwillingness to curb those activities may have led to large-scale spread of contagion. US pastors are also considering adjusting core rituals and acts of worship over Easter, Catholic priests in Jerusalem are advised to change how they give the sacrament, and Saudi Arabia was quick to suspend pilgrimages to Mecca and other holy sites from destinations where outbreaks are reported. The WHO and Iranian Health Ministry have linked the outbreak in Iran to pilgrims to Shi’a holy sites in Qom.
The Bahraini Crown Prince said recently that the virus “doesn’t discriminate” by religion, gender, or class, perhaps attempting to mitigate accounts that blame the Other, as happened with historical pandemics, such as HIV and Ebola. Belief systems influence whether people flee from pandemics or stay to support the sick, including whether to accept vaccinations. Christian Scientists believe that diseases are a state of mind, and have lobbied hard for religious exemptions to public requirements for vaccinations. At an ultra-Orthodox anti-vaccination symposium in Monsey, NY, Rabbi Hillel Handler suggested the measles outbreak was an anti-Hasidic conspiracy concocted by Mayor Bill de Blasio as a cover for diseases imported by Central American immigrants. Others at the event equated what they called “forced vaccination” with the Holocaust. (Importantly, most Rabbis support vaccinations.) Anti-vaccination trends are inflected with politics—in Pakistan and the Caucasus, following the death of Osama Bin Laden, poor handling of the vaccination programme and distrust in government resulted in a rise in polio, while opposition to the programme from pro-Taliban groups (fearing it a ploy to sterilize Muslims) led to attacks on medical facilities.
Yet while fear and the mysterious nature of a pandemic can cause hatred and division, responses can also have the opposite effect. The Great Influenza Pandemic of 1918–19 and yellow fever crises across numerous cities and regions in America and Europe unified communities through self-sacrificing volunteerism, healing previous social, political, religious, racial, and ethnic tensions and anxieties. Similarly, Pakistan recruited the late Maulana Sami ul Haq (the so-called “father of the Taliban” in Pakistan) in promoting polio vaccinations, resulting in an uptake in immunisation—but this lesson seems hard to learn. With Ebola and perhaps coronavirus, authorities have been slow to appreciate the vital role of religious organisations in supporting health services.
Religion is therefore a more complex phenomenon in global affairs than good or bad. There’s a lot we can adapt from the example of faith-based actors working to address public health concerns like the spread of fatal viruses. Religious identities also help shape modes of belonging in global politics, such as diaspora and transnational connections. It is not only in close-knit or closed religious communities that we find clusters of infection.
Transnationalism and religious diaspora politics is another theme which can be extended when we think about global health concerns. A discussion on Shi’a transnationalism shows how important it is to understand religious diaspora as key influencers in global and state politics. Religion is not only about religious practices or identities, but also about norms, values and beliefs; ideas about the global, and the religious, influence our responses. To help us better understand the role of religious norms and ideas in global affairs, we can consider the importance of political theology and reflect on Christian Feminist theorising on the politics of hope. This politics of hope gives us ideas about how cooperation between religious communities, and with governments and international agencies, might overcome fear and mistrust. With that we might find proactive engagement to counter the spread of coronavirus.
Featured Image Credit: “Cathedral Interior Religious With Benches Empty in Back” by Pixabay. CC0 public domain via Pexels.
The post How religion affects global pandemics appeared first on OUPblog.

How strategists are improving team decision-making processes
How companies and teams make decisions can be very challenging. Poor or ill-structured decision-making processes can make the organization less successful and create destructive conflicts in decision-making teams. But there are a few strategies companies can try that help organizations make big decisions in a better way.
People operate in complex and dynamic environments, making decisions with limited information under conditions of uncertainty and ambiguity. In contrast to routine decisions, strategic decisions are often made at top-management level and have an impact on the whole organization in terms of its future direction and the scope of its activities. Such important decisions have a significant impact on the organization’s resource allocation, requiring trade-offs between alternatives and the setting of organizational priorities. In addition, the strategic decision-making process rarely results in one clear best solution to the problem, yet once an organization makes a decision, it is difficult to reverse. This is because the implementation of a decision involves a significant resource commitment and change in organizational activities.
Although we have ways to assess risk through probability theory, dealing with strategic decision problems is much more difficult where the information we gather to support a decision and the actual decision outcomes are uncertain and ambiguous.
In complex decision situations, groups are better at solving problems than individuals acting alone. This may be because group members bring a variety of information, critical judgment, solution strategies, and a wide range of perspectives to the decision problem. However, groups can be subject to destructive conflict and cognitive biases that may hinder the quality of decision outcomes and group members’ decision acceptance.
The main group decision-making biases are risky shift, as groups tend to make riskier decisions than individuals alone, and groupthink, as group members strive for consensus and harmony in their decision-making. In addition, group conflict may arise when people with competing opinions and solution alternatives clash. Group decision-making, therefore, presents a managerial conundrum. On the one hand, the multiple perspectives of group members can add insight into the problem situation. On the other hand, group diversity can produce fragmentation, conflict, or groupthink.
Group member interaction in decision-making situations may produce two types of conflict: cognitive and affective conflict. Task-focused cognitive conflict arises when group members focus on a task or an issue and debate to come up with a creative solution. Cognitive conflict improves decision-making quality as the goal of the decision-makers is to find the best possible solution rather than win the argument. However, there is a danger that this beneficial cognitive conflict spills over into a dysfunctional, affective conflict. Instead of working constructively to achieve the best possible solution, the debate becomes personal and adversarial—a lose-win paradigm where one view prevails while others are discarded.
Affective conflict tends to be emotional and focuses on personal incompatibilities or disputes. These disputes can result from group members’ personal judgments that they are not fully able to explain to other group members. The more these personal judgments influence decisions, the more potential there is for decision-making group members to speculate and find reasons to distrust the motivations and hidden agendas of other group members. Hence, too much affective conflict may hinder overall group performance, as the decision will not be accepted by some decision-making group members regardless of the quality of the decision outcome.
Management consultants have developed several decision-making techniques to encourage critical interaction between decision-making group members. Devil’s advocacy is a decision-making technique where one or more people in the group work to point out all the flaws and risks of an option under consideration. Dialectical inquiry is a decision-making technique where the group divides into two camps: those advocating for an idea and those advocating against it. Both sides highlight the advantages of their assigned position and outline the disadvantages of the opposing idea.
Both dialectical inquiry and devil’s advocacy lead to higher quality assumptions and decision outcomes than a consensus approach in decision-making. However, consensus-seeking groups express more satisfaction and desire to continue to work with their groups, and indicate a greater level of decision acceptance, than those asked to apply dialectical inquiry and devil’s advocacy in their groups’ decision-making process. Hence, management scholars’ attention has turned to developing techniques that maintain moderate levels of cognitive conflict to ensure high quality decision outcomes, but simultaneously preserve group cohesion to prevent the emergence of affective conflict.
Causal mapping is a visual decision-making technique that is designed to help groups improve their strategic decision-making processes and promote cognitive conflict, while at the same time preventing the emergence of affective conflict. Visual tools are particularly useful in strategy work as decisions are often made collectively in group working situations.
In a causal map, ideas are causally linked to one another through the use of arrows and nodes. The arrows indicate how one idea or action leads to another. In effect, the maps are word-and-arrow diagrams where the arrows mean “might cause,” “might lead to,” or “might result in.” Causal mapping facilitates a visual articulation of a large number of ideas, actions, and their consequences.
An example of a fully developed causal map is depicted below. The map incorporates the collective understanding of a decision problem by a decision-making group.
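The word-and-arrow structure described above is, in effect, a directed graph: nodes are issues, actions, or outcomes, and each arrow reads “might lead to.” As a rough sketch of that structure (the node labels below are hypothetical, invented for illustration rather than taken from any real mapping session), a group map can be stored and queried for the downstream consequences of a proposed action:

```python
from collections import defaultdict

class CausalMap:
    """A minimal causal map: nodes joined by 'might lead to' arrows."""

    def __init__(self):
        self.links = defaultdict(list)  # node -> nodes it might lead to

    def add_link(self, cause, effect):
        """Record an arrow meaning `cause` might lead to `effect`."""
        self.links[cause].append(effect)

    def consequences(self, node, seen=None):
        """Collect every outcome reachable downstream of `node`."""
        seen = set() if seen is None else seen
        for effect in self.links[node]:
            if effect not in seen:
                seen.add(effect)
                self.consequences(effect, seen)
        return seen

# Hypothetical issues raised by a decision-making group:
m = CausalMap()
m.add_link("cut R&D budget", "short-term savings")
m.add_link("cut R&D budget", "slower product pipeline")
m.add_link("slower product pipeline", "loss of market share")

print(m.consequences("cut R&D budget"))
```

Tracing consequences in this way mirrors what a group does when reading a map: following chains of arrows from a proposed action to its possible outcomes, including ones no single member had connected on their own.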

Causal maps that have been created in groups bring together the thinking of many people, including conflicting views, subtly different slants on the same issues, and different perspectives held by individual people. Such causal maps provide simplified representations of the beliefs of the greater group.
Group causal mapping could be perceived as a form of brainstorming, but there is a distinction between group maps and a free-flowing brainstorming of ideas. Group mapping that is used for decision-making is focused on raising issues and concerns. These are usually activities or events that can either support or challenge the decision-making aspiration of the group. In contrast to eliciting “off-the-wall” ideas in group brainstorming sessions as the means of unleashing creativity, group mapping focuses on bringing together the group members’ current wisdom and experience, as well as issues surrounding the problem. Therefore, group mapping is a process of engaging in a dialogue to uncover the causality between the problem and a number of potential solution outcomes, represented as a visual object. The mapping process provides the means for the decision-making group to structure and merge differing perspectives, which should lead eventually to a shared understanding of the issue in a holistic manner.
There is increasing evidence that the causal mapping process prevents decision-making groups from talking over each other and going around in circles. It helps group members to speak and be heard. The mapping process produces a lot of ideas and can ultimately clarify the most suitable course of action. Furthermore, the shared and collective enactment of a group causal map can increase individual ownership, acceptance, and the feeling of fairness of the decision outcomes. This is because the map shows evidence that people have listened to everyone in the group.
Featured Image Credit: AbsolutVision, ‘Photo of bulb artwork’ via Unsplash
The post How strategists are improving team decision-making processes appeared first on OUPblog.

March 18, 2020
How emotions affect the stock market
Last year marked the 90th anniversary of Black Thursday, the October day in 1929 when stocks stopped gradually falling, as they had since the start of September, and started wildly crashing. All told, the Dow Jones dropped from 327 at the opening of trading on the morning of Tuesday, 22 October to 230 at the close of trading on Tuesday the 29th, a loss of around 30% of its value.
Before stocks could even bottom out, the debate about why they crashed had begun. It continues to this day. Read around in the literature long enough, and you find that arguments about the Great Crash of 1929 usually take one of three forms: (1) There was no bubble—but investors panicked. (2) There was a bubble—and when investors realized it, they panicked. (3) There was no bubble—and no panic. These arguments about the origins of the crash matter because they bear on policy. (For example, how much should the government try to regulate the market?) But they also matter because they lead us to believe, incorrectly, that “emotional” is a synonym for “irrational.” It is not, and the sooner we realize that, the better.
In the weeks and months after the crash, the economist Irving Fisher argued that stocks did not crash because they were inflated but because investors had panicked. Throughout the fall of 1929, Fisher had insisted that corporate earnings and the prospect of still greater corporate earnings had justified the growth in stock prices. True, the price-to-earnings ratio looked historically high, possibly even unprecedented. But Fisher thought that plowed-back earnings, mergers, scientific research and invention, industrial management, labor cooperation, the dividends of prohibition, and seven years of stable money more than accounted for that. In other words, this time really was different.
Why did the market crash, then? For Fisher, the market acquired its own momentum. It went down—for whatever reason—which persuaded investors that it would continue to go down. Trying to beat the fall, investors sold out, which, of course, drove prices down, which convinced still more investors that prices would fall, and they responded by selling out, on and on or, rather, down and down. Throughout all of this, however, the market—and its underlying economic soundness—had not changed. Rather, investors had. They panicked.
In his 1954 book The Great Crash, the economist John Kenneth Galbraith agreed that in 1929 investors had panicked. He departed from Fisher, though, in suggesting that investors had every reason to panic. For Galbraith, the 1929 stock market was grossly and obviously overvalued, the result of what he called “a speculative orgy.” That is, there was a bubble—and when investors realized it they all made for the exit door at once.
Galbraith’s account of the 1929 crash remains the conventional wisdom today, except among many economists, who eventually began to warm to what Fisher had argued. Much of the pro-Fisher scholarship drew from and contributed to the efficient market hypothesis, which arose in the 1960s and 1970s and held that financial markets always price assets, including stocks, according to their underlying value, and that prices only change with new information. By this light, there was no bubble in 1929. (A bubble, by the efficient market hypothesis, is all but impossible.) Nor was there necessarily a panic. Here these scholars broke with Fisher. According to them, by selling off stocks, investors did not overreact but, rather, simply responded to new information, whether a recession, which had begun in August of 1929, or to misguided Federal Reserve policy, which raised interest rates at precisely the wrong time.
We may never be able to declare a winner in this debate, but the strange thing about it is not how its disputants differ but what they take for granted. Although Fisher ultimately rejected accounts of the Great Crash, including his own, that focused exclusively on its emotional causes, he still had the habit of speaking of “the psychological short swing” to explain its gyrations. For Fisher, that is, emotion functioned as a heuristic, a way to make sense of the otherwise inexplicable. If the price of stocks exceeded their underlying value, that is because investors were “over-enthusiastic.” If, after the Crash, stock prices fell below what their underlying value or the fundamentals of the economy suggested they should be, which Fisher also believed, that is because investors “panicked.” If something, say, stock prices, did not make sense, if reality did not follow rational predictions, then something irrational and distorting must explain why. For Fisher, that something was emotion.
In this respect, Fisher and later economists speak the same language. For those who believe in bubbles, emotion manifests itself in “tulip mania” and “irrational exuberance,” driving up stock prices beyond where they rationally belong. Similarly, when the mania passes, investors “panic,” and the market comes crashing down. Meanwhile, those who believe in efficient markets struggle to show how the market is not tainted by emotion, how it accurately—and rationally—reflects the available information. For both the critics and defenders of efficient markets, that is, emotion is a synonym for irrationality. It undermines the rational and the efficient. For both, the question is not whether emotions are bad. They plainly are. The question is whether they are present or not.
Yet this way of viewing emotions offers a false binary. We say that when the price of stocks falls within certain price-to-earnings ratios, or accurately reflects prospects for future growth, the price must be rational, and investors therefore free of emotion. And when the price of stocks departs from these ratios, then the market—and investors—must be irrational.
That is not to say that emotions cannot distort markets, only that they do not exclusively distort markets and may be—indeed are—present when the market is neither too high nor too low. As more and more psychologists, philosophers, and neurologists have come to understand, emotions are not necessarily irrational. Rather, they are part and parcel of what we call the rational. In his groundbreaking book, Descartes’ Error, Antonio Damasio found that when the part of the brain that processes emotions gets damaged, the person who suffers such damage does not become hyperrational. Rather, they cannot function at all. Reason depends on emotion.
If so, then economists might want to stop speaking of emotions as though they only warp the market. Instead, they might want to begin thinking of emotions as suffusing the market, as they suffuse everything, even when the market appears to be rational and efficiently priced. They may even want to approach emotions as playing a crucial role in keeping the market within certain bounds—that is, rational.
Emotions are—and always will be—with us.
Featured Image Credit: Rick Tap via Unsplash
The post How emotions affect the stock market appeared first on OUPblog.

An etymological ax(e) to grind, followed by the story of the English word “adz(e)”
Part One
Wherever we look for the history of the names of instruments and tools, we confront a similar problem: the available material is either too copious or too scanty. Last week (March 11, 2020), we followed a hectic but inefficient hunt for the etymology of the word awl, and I promised a continuation: a post on adz (spelled as adze in British English). An adz is an ax (incidentally, in British English, ax is also spelled with an e: axe; Americans have succeeded in chopping off a superfluous ending, while in Britain it managed to stay), and a “disquisition” on axes should preface the main story. Nowadays, such disquisitions are usually called discourses.

At first glance, ax and adz look somewhat alike, and the question arises whether they are related. That is why, before attacking adz, something should be said about the etymology of ax and its synonyms. The oldest Germanic form of ax is known from Gothic, a language recorded in the fourth century CE. The Gothic for “ax” was aqizi, that is, akwizi. Its obvious cognates are Old Engl. æx ~ eax (eax is a phonetic variant of æx) and æces, Old Saxon akus (Modern Dutch aaks), Old High German ackus (Modern German Axt, -t is excrescent and does not belong to the root), Old Frisian axa, and Old Icelandic øx. The Old English and other forms were recorded half a millennium and even 900 years later than Gothic akwizi. Along the way, the word seems to have lost one syllable, though it is not improbable that the earliest West Germanic and the Old Norse forms were always slightly different from the Gothic one. (Old English and its neighbors were dialects of the same ancient language, and dialectal variation is a universal phenomenon.)
Although æx was the main Old English word for “ax,” it was not the only one. Taper-ax has also been recorded, a typical tautological compound: each component means the same (ax-ax; see the post for October 7, 2009). This was the name of a formidable weapon, borrowed from Old Norse, where the “tapar-øx” served its purpose very well: one constantly reads in the sagas how enemies’ skulls were cloven with its help all the way to the neck (grisly graphic details are usually provided). But the medieval Scandinavians did not coin this word: it migrated to them from the Slavs: compare Russian topor (stress on the second syllable), etc. Finnish tappara “ax” also goes back to Slavic. Even this is not the end of the journey, because the Slavs must have taken over that word from their eastern neighbors: its probable source was (or so it seems) Old Persian tabar “pickax.” Turkic täbär looks amazingly similar. Nor has the ancient pre-Indo-European word taba “stone; rock” from Asia Minor escaped the attention of language historians.

Are we dealing with an old migratory word (see the previous post on such words), traceable to the Stone Age? In the essay on “awl,” I noted that a study of the names of instruments and tools provides rare insights into the history of civilization. The wanderings of taba ~ tabar ~ täbär ~ topor ~ tapar ~ tappara all over Eurasia tell us a good deal about the spread of material culture but hardly enough about etymology. Even if at one time taba or tapa meant stone, we still do not know what the origin of this word is. Did our remote ancestors go tap-tap-tap with their pieces of rock?

The same question arises in connection with the word ax. To be sure, Greek axīne “ax” and Latin ascia (presumably, from acsia “adz”) make us think of the root of acute “sharp” (from Latin acūtus), but is this how the ax got its name? Gothic aqizi had three syllables. If its earlier form was approximately ak-wes-j-ō, with a suffix (wes) meaning “belonging to,” perhaps ax did mean “something sharp, a member of a class of sharp objects.” Long before the nouns of the Indo-European languages acquired grammatical genders, they were classified by their features: soft, warm, round, and so on. This system is well-known from some modern African languages. In Germanic, the word for “ax” (aqizi, and all the others) was feminine, and the same holds for Modern German Axt. This fact says nothing about the origin of ax. Why should axes be referred to as “she”?

A migratory word does not have to be a world-wide traveler. For example, French hache “ax” reached France in the twelfth century from its German-speaking neighbors. The etymon was some form like happja (Old High German happa ~ heppa, etc., the name of a sickle, rather than an ax; Modern German Hippe means “pruning knife”). This means that, when we look for the origin of a word like ax, we need not concentrate only on its native and foreign synonyms: a simple association will sometimes do quite well (both sickles and axes cut). Finally, by way of exception, the word we are investigating may be transparent. For instance, another French name meaning “ax” is cognée, that is, “fastened with a wedge.” At one time, this participle was used with a noun; later, it began to function as a self-sufficient name of the tool. The root is familiar to English speakers from the word coin. It meant “wedge” and “a die for stamping money.” Coign of vantage “a favorable position of observation,” revived by Walter Scott, and quoin (pronounced as coin!) “cornerstone” also remind us of the Latin word.
Armed with such an amount of partly superfluous information, we will approach the history of adz with due caution. This history is rather obscure, and it is too long to be used as a supplement to today’s post. I’ll say what I know about the etymology of adz next week. Here I’ll only mention one disturbing fact: adz does not seem to have any cognates in any language, which means that the word was coined in Old English. Such words are rather numerous, and nearly each of them is an etymological crux.
To be continued.
Featured Image Credit: Image by Klaus Hausmann from Pixabay.
The post An etymological ax(e) to grind, followed by the story of the English word “adz(e)” appeared first on OUPblog.

Governments should tackle air pollution by banning old cars
Air pollution continues to be a serious problem in many cities around the world in part because of a steady increase in car use. In an effort to contain such a trend and persuade drivers to give up their cars in favor of public transport, authorities increasingly rely on limits to car use. Some places have banned drivers from using their vehicles on certain days of the week. Good examples of these driving restrictions include Athens (where restrictions were introduced in 1982), Santiago (1986), Mexico City (1989), São Paulo (1996), Manila (1996), Bogotá (1998), Medellín (2005), Beijing (2008), and Tianjin (2008).
These restrictions may, however, create perverse incentives by encouraging people to purchase additional vehicles. This not only increases the number of vehicles but may also induce people to purchase higher-emitting vehicles. Mexico City's Hoy No Circula program, which in 1989 began preventing drivers from using their vehicles one day per week, confirms these fears: the ban did little to reduce air pollution.
But there is an aspect of driving restrictions that has received little attention yet can be found in some recent restriction programs: namely, vintage-specific restrictions, or more precisely, restrictions that differentiate cars by their pollution rates. In 1992, for example, Santiago reformed its restriction program to exempt all cars equipped with a catalytic converter (a device that transforms toxic pollutants into less toxic gases) from the existing restriction that prevented all drivers from using their cars one day per week. This exemption ended in March 2018 for all cars built before 2012. Mexico City has also introduced several reforms to its restriction program. In today’s restriction program, new vehicles are exempt for their first eight years.
Vintage-specific restrictions also appear in recent European programs. Authorities in Germany, for instance, have adopted low-emission zones in several cities since 2008. In 2016, the city of Paris banned any car built before 1997 from circulation within its limits on weekdays from 8 a.m. to 8 p.m. Recent diesel bans in Madrid, London, Paris, and Rome are another kind of vintage-specific restriction.
Of all the possible variations on a driving restriction policy, banning only older cars represents a radical departure from early designs. Because drivers can escape the restriction not by purchasing a second (and possibly older, more polluting) car but by switching to a cleaner car that faces lighter or no restrictions, vintage-specific restrictions have the potential to reduce air pollution significantly.
How do these vintage-specific restrictions work in practice? Existing research suggests that it's best for policymakers to design driving restrictions to work through the type of cars people purchase, never through how much they drive their cars.
By affecting purchasing decisions, a vintage-specific restriction can yield important welfare gains by moving the fleet composition toward cleaner cars. Emission rates vary widely across cars, which is why in most cities older cars are responsible for most of the pollution. A driving restriction that places a uniform restriction upon all cars regardless of their pollution rate (like the one-day-a-week driving ban) is sure to result in significant welfare losses. Such a uniform policy not only fails to remove old cars from the road; it also reduces their prices, extending their lives and dampening sales of new cars.
Featured Image Credit: Traffic Rush Hour by quinntheislander. Public Domain via Pixabay.
The post Governments should tackle air pollution by banning old cars appeared first on OUPblog.

March 17, 2020
Seven women who changed social work forever
We celebrate National Professional Social Work Month each March. The theme for Social Work Month in 2020 is Generations Strong. This is a great opportunity to look at the lives of pivotal figures in the history of social work and social welfare. The seven women discussed below made important contributions to people's lives and to social work as a profession. Their lives also reflect environmental assaults, such as socioeconomic disenfranchisement, racism, sexism, classism, and other toxic issues that continue to plague vulnerable communities. The biographical sketches that follow represent the coming of age of social work from an international perspective and show the positive impact of social workers, social justice advocates, and other helping professionals over the last century.
Mary Ellen Richmond (1861–1928) was an organizer, researcher, and administrator, sometimes referred to as the Mother of Social Work. Richmond fought for the standardization and professionalization of social work. She implored schools of social work to train social workers because of her concern about clients' frequent failure to respond to the services offered by the Friendly Visitors, a failure she attributed to the workers' lack of prerequisite knowledge, skill, and understanding of the problems confronting impoverished families and children. Her first publication, Friendly Visiting Among the Poor (1899), provided practical guidelines for working with such cases. She cared deeply about the needs of families and children and became their advocate on the national stage, lobbying for legislation to address housing, health, education, and labor. Overall, Richmond's work demonstrated the importance of education in the social work field while also recognizing the need to advocate for, and to design, programs and services that effectively meet the needs of a diverse, expanding population.

Grace Longwell Coyle (1892–1962) made a major contribution to the profession through her scholarly writing and speeches championing the integration of group work and casework. Coyle had an expansive vision of social work: in her view, casework, group work, and community organizing were based on a common factor, the conscious use of social relations. In addition to writing expansively on group work practice, Coyle was the first to develop a scientific approach to it, providing a systematic, organized series of steps to help ensure objectivity and consistency.

Insoo Kim Berg (1934–2007) was a gifted solution-focused brief therapy clinician who championed the approach that clients have within themselves their own answers for lasting change. Her emphasis on clients' strengths provided a new and exciting way of engaging with different client systems.

Shirley Chisholm (1924–2005) described herself as "unbought and unbossed." In 1964 she successfully ran for and was elected to the New York State Assembly. Chisholm was later elected to the US House of Representatives in 1968, serving from 1969 to 1983. In 1972 she became the first African American to make a bid for the Democratic Party's United States presidential nomination, gaining 10% of the total delegates. She was a vocal critic of the Vietnam War and of the US judicial system, specifically regarding police brutality, prison reform, gun control, and substance abuse policies. Chisholm was also an advocate for early childhood education and a proponent of the Head Start Program. She shattered gender and racial barriers as a social justice advocate and activist for poor and vulnerable populations.

Wilma Mankiller (1945–2010) was an activist, community organizer, and principal chief of the Cherokee Nation. She worked for the Cherokee Nation as a tribal planner and program developer, founding its community development department. She developed lasting sustainability projects such as rural water systems, and rehabilitated and revitalized the Cherokee Nation's education, health, and housing for approximately 300,000 tribal members. Mankiller brought the Cherokee Nation back to a traditional understanding of leadership in which women held positions of authority and power and were revered for their understanding and wisdom.

W. Gertrude Brown (1888–1930) was an organizer and advocate for racial, social, and environmental justice. Brown recognized and fought against environmental racism in the form of redlining and housing discrimination. Through her role as Head Resident of the Phyllis Wheatley Settlement House in Minneapolis, she used a revitalization project to fight blight in communities where people of color and marginalized groups lived, and she fought against discrimination in housing and public accommodation.

Irena Sendler (1910–2008) was a social worker in Warsaw, Poland, during its occupation by Nazi Germany and the Soviet Union. She was a senior administrator in the Warsaw Social Welfare Department, which operated canteens in every district of Warsaw. Between 1942 and 1943, Sendler saved 2,500 Jewish children from starvation, disease, and eventual death by smuggling them out of the Warsaw Ghetto and placing them with non-Jewish families. Though Sendler was arrested and subjected to extremely brutal treatment by the Germans, she was repeatedly helped and saved from imprisonment and death, once by a German officer who accepted a bribe from Żegota, a Polish underground group, and once by a Jewish woman.
The extraordinary services exemplified by these seven social workers and their allies serve as examples of what it means to uplift humanity, and to serve others. Their work demonstrates our connectedness as humans and the importance of standing for something larger than oneself.
Featured Image Credit: Mary Ellen Richmond, Irena Sendler, Shirley Chisholm, and Wilma Mankiller via Wikimedia Commons.
The post Seven women who changed social work forever appeared first on OUPblog.

Why cost-benefit analysis is flawed and how to improve it
Cost-benefit analysis is a key component of the US regulatory state. How it works and the role it plays in policymaking are not widely understood, however. Even the most substantive media outlets rarely discuss it. But cost-benefit analysis is a linchpin of the regulatory process. Its structure and role, and its flaws, should therefore be grist for an informed public conversation.
Any given proposed regulation would have various impacts on people’s well-being. Consider, for example, a proposed rule that will reduce air pollution—the sort of rule that the Environmental Protection Agency issues. This rule will have at least three different types of effects. Breathing polluted air causes various diseases. These diseases, first, may increase the risk of premature death. Thus the anti-pollution rule, if enacted, will reduce people’s fatality risks. The diseases, second, will be associated with impaired health quality (e.g., pain, reduced mobility) on top of the increased death risk they may produce. Thus the anti-pollution rule will improve health quality. Finally, implementing the rule will require material resources—for example, firms may need to install costly anti-pollution devices—and these resource costs will ultimately show up in reduced income, for firms’ shareholders, employees, and/or consumers.
Cost-benefit analysis measures all of the impacts of a proposed regulation on a monetary scale. We convert impacts into monetary equivalents by asking what people are willing to pay (if they are made better off) or willing to accept (if made worse off). Consider the anti-pollution rule. The monetary equivalent for a reduction in someone's fatality risk is what she is willing to pay for that reduction, as estimated from various sources of evidence (market prices of safety devices, surveys). The monetary equivalent for a health improvement is, again, what the person is willing to pay for that improvement. Finally, converting income losses into dollars is automatic: the monetary equivalent for a $100 loss of income is just −$100.
Cost-benefit analysis then says that a proposed regulation is worth enacting if the regulation's sum total of monetary equivalents is positive. In the case of the anti-pollution rule, this means that the rule's total benefits, as measured on a dollar scale, are larger than its total costs in lost income.
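As a toy illustration of that decision rule (every number below is invented, not drawn from any actual regulatory analysis), the aggregation can be sketched in a few lines:

```python
# Hypothetical monetary equivalents for three affected people, in dollars.
# Each tuple: (WTP for fatality-risk reduction, WTP for health improvement,
#              income change from compliance costs). All numbers invented.
people = [
    (300.0, 150.0, -200.0),
    (500.0, 100.0, -350.0),
    (250.0, 200.0, -400.0),
]

# The cost-benefit test: sum every monetary equivalent; enact if positive.
net_benefit = sum(wtp_risk + wtp_health + income_change
                  for wtp_risk, wtp_health, income_change in people)

print(net_benefit)       # 550.0: total benefits exceed lost income
print(net_benefit > 0)   # True: the rule passes the cost-benefit test
```

The key feature, as the next paragraphs explain, is that dollars of benefit and dollars of cost are treated identically no matter whose pocket they come from.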
For nearly the last forty years, all federal agencies in the Executive Branch have been required, by executive order, to employ cost-benefit analysis in considering proposed regulations, and to submit these regulations and the accompanying cost-benefit documents for review by a powerful oversight body within the Office of Management and Budget.
Cost-benefit analysis has an even wider role than this. In the US government, cost-benefit analysis is principally used as a tool for evaluating regulation (legal rules), but its methodology is applicable to any type of governmental policy. For example, we can assess a proposed infrastructure project by comparing the monetized benefits of the project (e.g., better transportation) to the monetized costs. Cost-benefit analysis is important to the policymaking process in a number of countries, not just the US, and it has given rise to a vast body of academic literature. But cost-benefit analysis is flawed. We can improve on it.
The problem is that money is an imperfect metric of well-being. Money has diminishing marginal well-being impact. The greater someone’s income, the smaller the effect of a given monetary change on that person’s well-being. For example, a $1,000 increase in the income of someone earning $10,000 a year makes a much bigger difference to her well-being than a $1,000 increase in the income of someone earning $100,000—in turn a bigger difference than a $1,000 increase in the income of someone earning $1 million.
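The diminishing-marginal-impact point can be made concrete with a logarithmic utility function, a standard concave form in economics (the functional form and the income levels here are illustrative assumptions, not claims about any particular person):

```python
import math

def log_utility(income):
    # A standard concave utility-of-income function: utility rises with
    # income, but at a decreasing rate.
    return math.log(income)

# Well-being gain from an extra $1,000 at three income levels.
gains = {income: log_utility(income + 1_000) - log_utility(income)
         for income in (10_000, 100_000, 1_000_000)}

for income, gain in gains.items():
    # The gain shrinks roughly tenfold each time income grows tenfold.
    print(income, round(gain, 5))
```

The same $1,000 produces a much larger utility gain at $10,000 of income than at $1 million, which is exactly the asymmetry cost-benefit analysis ignores.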
How does this infect cost-benefit analysis? If Casey and Dalia experience the very same well-being impact as the result of some policy, but Casey has more income than Dalia, Casey’s monetary equivalent for the policy will be larger than Dalia’s. Conversely, if Casey experiences a smaller well-being impact than Dalia, Casey’s monetary equivalent may still be larger than or equal to Dalia’s.
Government tries to circumvent this feature of cost-benefit analysis by using population-average monetary equivalents. Thus, for example, the Environmental Protection Agency converts fatality risk reduction into dollars by ascertaining what people, on average, are willing to pay for risk reduction. But this averaging can have perverse consequences. Consider the case in which a poor person not only receives various non-income benefits from a policy (risk reduction, health improvement, etc.), but also has to pay for those benefits in reduced income, so that, on balance, he is worse off from the policy. Cost-benefit analysis using population-average monetary equivalents might indicate the opposite.
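A tiny numerical sketch shows how this reversal can happen (all figures are hypothetical):

```python
# One poor individual affected by a rule, in dollars. Invented numbers.
own_wtp_for_benefits = 100.0   # what this person would actually pay for the benefits
population_avg_value = 200.0   # the population-average WTP the analysis substitutes
cost_borne = 150.0             # the income loss this person bears

true_net = own_wtp_for_benefits - cost_borne   # negative: he is worse off
cba_net = population_avg_value - cost_borne    # positive: the analysis says he gains

print(true_net, cba_net)   # -50.0 50.0
```

Because the average willingness to pay exceeds his own, the analysis credits him with a gain he never receives, while the cost he pays is counted at full value.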
It’s possible to improve upon cost-benefit analysis by using social welfare functions to evaluate governmental policy. The social-welfare-function framework is already used in some areas of economic scholarship, although not (yet) in governmental practice.
This methodology measures well-being with a utility scale rather than a dollar scale. A utility function is a mathematical device that reflects someone’s preferences. If Xavier prefers one bundle of goods to a second, and therefore is better off with the first bundle, his utility function will assign it a larger number.
Utility generally increases as income increases, but at a decreasing rate. Assume that Alice and Bob have the same preferences, and thus the same utility function. However, Alice is poorer than Bob. Increasing Alice’s income would translate into a larger utility change than growing Bob’s income by the same amount. Further, the utility approach to measuring well-being is better because it is based on people’s actual preferences.
The social-welfare-function framework is quite flexible in how it aggregates utilities across persons. The framework can simply sum up individual utilities. This is a “utilitarian” approach; Jeremy Bentham’s famous idea of utilitarianism is implemented, in the social-welfare-function methodology, via a straight summation of utility numbers. However, it is also possible to employ a “prioritarian” social welfare function, which accords greater weight to the utility of people at a lower utility level.
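The difference between the two aggregation rules can be sketched in a few lines. The square-root transform below is one arbitrary illustrative choice of strictly concave function, not the specific prioritarian weighting used in any particular analysis:

```python
import math

def utilitarian_swf(utilities):
    # Bentham-style: a straight sum of individual utility numbers.
    return sum(utilities)

def prioritarian_swf(utilities):
    # Apply a strictly concave transform (square root, an illustrative
    # assumption) before summing, so a one-util gain counts for more
    # when it goes to someone at a lower utility level.
    return sum(math.sqrt(u) for u in utilities)

# Baseline utilities [2, 10]; each policy adds one util to one person.
help_poor = [3.0, 10.0]   # the gain goes to the worse-off person
help_rich = [2.0, 11.0]   # the gain goes to the better-off person

print(utilitarian_swf(help_poor) == utilitarian_swf(help_rich))   # True: indifferent
print(prioritarian_swf(help_poor) > prioritarian_swf(help_rich))  # True: prefers the poor
```

The utilitarian sum is 13 either way, while the concave transform makes the gain to the worse-off person count for more.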
In a recent analysis, I compared cost-benefit analysis to the social-welfare-function approach using a model of risk regulation based upon actual US data. Individuals of different ages are modelled as facing “lotteries” over life-histories. A given life-history is some lifespan (for example, living to the age of 75), with a certain amount of income each year alive. The effect of a risk-regulation policy is to reduce individuals’ current fatality risk, and thereby increase their chances of life-histories with longer lifespans; but also to reduce the amount of income they earn if alive. For purposes of the social-welfare-function methodology, income is converted into “utility” with a logarithmic utility function—a very standard utility function in economic scholarship.
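A stripped-down version of that model's structure might look like the following. The logarithmic utility function follows the description above, but the survival probabilities, lifespans, and incomes are invented, not the calibrated US data:

```python
import math

def lifetime_utility(years_alive, annual_income):
    # Utility of one life-history: log utility of income, accumulated
    # over the years lived.
    return years_alive * math.log(annual_income)

def expected_utility(p_survive, long_life, short_life, annual_income):
    # A "lottery" over two life-histories: surviving the current risk
    # (long lifespan) versus dying prematurely (short lifespan).
    return (p_survive * lifetime_utility(long_life, annual_income)
            + (1 - p_survive) * lifetime_utility(short_life, annual_income))

# A hypothetical regulation raises survival probability but lowers income.
without_policy = expected_utility(0.90, 75, 40, 50_000)
with_policy = expected_utility(0.95, 75, 40, 49_000)

print(with_policy > without_policy)   # True: here the extra years outweigh the lost income
```

For these particular numbers the policy raises expected utility; the social-welfare-function approach then aggregates such individual expected utilities across the population.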
My analysis finds that the utilitarian social welfare function is somewhat biased towards the rich: reducing a rich person's fatality risk is accorded greater social value than reducing a poor person's fatality risk. Cost-benefit analysis, however, is shown to be much more biased toward the rich than utilitarianism. Moreover, the utilitarian bias can be avoided by shifting to a prioritarian social welfare function; here, I find a preference for allocating risk reduction to the poor. Cost-benefit analysis with population-average monetary equivalents avoids the problem of being biased towards the rich, but has a different problem: it accords the same value to risk reduction regardless of age. For example, risk reduction for a 20-year-old is assigned exactly the same value as risk reduction for a 60-year-old. By contrast, both utilitarian and prioritarian social welfare functions prefer to allocate risk reduction to the young. This seems more appropriate ethically, since someone who dies at age 20 can be expected to lose more years of life than someone who dies at age 60.
In short, we can improve on cost-benefit analysis, and the social-welfare-function framework shows how.
Featured Image Credit: Helloquence on Unsplash.
The post Why cost-benefit analysis is flawed and how to improve it appeared first on OUPblog.

March 16, 2020
Seven books on the fascinating human brain [reading list]
The human brain is often described as the most complex object in the known universe – we know so much, and yet so little, about the way it works. It's no wonder, then, that the study of the brain today encompasses an enormous range of topics, from the abstract understanding of consciousness to the microscopic exploration of billions of neurons. Brain Awareness Week takes place this year from March 16 to 22. To mark the occasion, we've put together a list of books that explore the matter between your ears.
Birth of Intelligence, by Daeyeol Lee
What is intelligence? Does a high level of biological intelligence require a complex brain? Can man-made machines be truly intelligent? In Birth of Intelligence, distinguished neuroscientist Daeyeol Lee tackles pressing issues that will be key to preparing for future society and its technology, including how the use of AI will impact our lives.

How (not) to train the brain, by Amir Raz and Sheida Rabipour
Can we improve cognitive performance through deliberate training? The short answer is yes, albeit with some caveats. In this book, the authors review data from hundreds of articles and provide an overarching account of the field, separating scientific evidence from publicity myth and guiding readers through how they should – and should not – train the brain.

The Evolutionary Road to Human Memory, by Elisabeth A. Murray, Steven P. Wise, Mary K. L. Baldwin, and Kim S. Graham
This book tells an intriguing story about how evolution shaped human memory. As our ancestors faced the problems and opportunities of their time, their brains developed new forms of memory that helped them gain an advantage in life. Sometime during human evolution, another new kind of memory emerged that ignited the human imagination and empowered every individual, day upon day, to add new pages to the story of a life.

Musical Illusions and Phantom Words, by Diana Deutsch
Why is perfect pitch so rare? Why do some people hallucinate music or speech? In this ground-breaking synthesis of art and science, Diana Deutsch, one of the world's leading experts on the psychology of music, shows how illusions of music and speech—many of which she herself discovered—have fundamentally altered thinking about the brain.

Sex, Lies, and Brain Scans, by Barbara J. Sahakian and Julia Gottwald
Sex, Lies, and Brain Scans takes readers beyond the media headlines. The authors consider what the technique of fMRI entails and the important ethical questions these techniques raise. Should individuals applying for jobs be screened for unconscious racial bias? How far will we allow neuroscience to go? It is time to make up our minds.

The Elephant in the Brain, by Kevin Simler and Robin Hanson
Our brains are designed not just to hunt and gather, but also to help us get ahead socially, often via deception and self-deception. But while we may be self-interested schemers, we benefit by pretending otherwise. The less we know about our own ugly motives, the better – and thus we don't like to talk or even think about the extent of our selfishness. This is "the elephant in the brain."

Your Brain on Food, Third Edition, by Gary L. Wenk
This essential book vividly demonstrates how a little knowledge about the foods and drugs we eat can teach us a lot about how our brain functions. The intersection between brain science, drugs, food, and our cultural and religious traditions is plainly illustrated in an entirely new light. Wenk tackles fundamental and fascinating questions, such as: are some foods better to eat after brain injury?
These books showcase some fascinating areas of brain research. But we’ve still got a long way to go before most of our questions about this intricate organ are answered. A comprehensive understanding of the brain is especially critical to combating public health issues, including rising rates of dementia and psychiatric disorders. As such, the continuing progress of brain research will be beneficial to us all.
Featured Image Credit: Owned by Oxford University Press.
The post Seven books on the fascinating human brain [reading list] appeared first on OUPblog.

