Oxford University Press's Blog

April 19, 2021

What can neuroscience tell us about the mind of a serial killer?

Serial killers—people who repeatedly murder others—provoke revulsion but also a certain amount of fascination among the general public. But what can modern psychology and neuroscience tell us about what might be going on inside the heads of such individuals?

Serial killers characteristically lack empathy for others and show an apparent absence of guilt about their actions. At the same time, many can be superficially charming, allowing them to lure potential victims into their web of destruction. One explanation for this seeming contradiction is that serial killers are individuals in whom two minds co-exist—one a rational self, able to successfully navigate the intricacies of acceptable social behaviour and even to charm and seduce, the other a far more sinister self, capable of the most unspeakable and violent acts against others. This view has been a powerful stimulus for fictional portrayals ranging from Dr Jekyll and Mr Hyde to Hitchcock’s Psycho and the more recent film Split. Yet there is little evidence that real-life serial killers suffer from dissociative identity disorder (DID), in which an individual has two or more personalities cohabiting in their mind, apparently unaware of each other.

Instead, DID is a condition associated with victims, rather than perpetrators, of abuse, who adopt multiple personalities as a way of coming to terms with the horrors they have encountered. Of course a perpetrator of abuse may also be a victim, and many serial killers were abused as children, but in general they appear not to be split personalities but rather people fully conscious of their acts. Despite this, there is surely a dichotomy in the minds of such individuals, perhaps best personified by US killer Ted Bundy, who was a “charming, handsome, successful individual [yet also] a sadist, necrophile, rapist, and murderer with zero remorse who took pride in his ability to successfully kill and evade capture.”

“a recent brain imaging study … showed that criminal psychopaths had decreased connectivity between … a brain region that processes negative stimuli and those that give rise to fearful reactions”

One puzzling aspect of serial killers’ minds is the fact that they appear to lack—or can override—the emotional responses that allow the rest of us to identify the pain and suffering of other humans as similar to our own and to empathise with that suffering. A possible explanation of this deficit was identified in a recent brain imaging study. This showed that criminal psychopaths had decreased connectivity between the amygdala—a brain region that processes negative stimuli and those that give rise to fearful reactions—and the prefrontal cortex, which interprets responses from the amygdala. When connectivity between these two regions is low, processing of negative stimuli in the amygdala does not translate into any strongly felt negative emotions. This may explain why criminal psychopaths do not feel guilty about their actions, or sad when their victims suffer.

Yet serial killers also seem to possess an enhanced emotional drive that leads to an urge to hurt and kill other human beings. This apparent contradiction in emotional responses still needs to be explained at a neurological level. At the same time, we should not ignore social influences as important factors in the development of such contradictory impulses. It seems possible that serial killers have somehow learned to view their victims as mere objects to be abused, or even as assemblies of unconnected parts. This might explain why some killers have sex with dead victims, or even turn their bodies into objects of utility or decoration, but it does not explain why they seem so driven to hurt and kill their victims. One explanation for the latter phenomenon is that many serial killers are insecure individuals who feel compelled to kill due to a morbid fear of rejection. In many cases, the fear of rejection seems to result from having been abandoned or abused by a parent. Such fear may compel a fledgling serial killer to want to eliminate any objects of their affections. They may come to believe that by destroying the person they desire, they can eliminate the possibility of being abandoned, humiliated, or otherwise hurt, as they were in childhood.

Serial killers also appear to lack a sense of social conscience. Through our parents, siblings, teachers, peers, and other individuals who influence us as we grow up, we learn to distinguish right from wrong. It is this that inhibits us from engaging in anti-social behaviour. Yet serial killers seem to feel they are exempt from the most important social sanction of all—not taking another person’s life. For instance, Richard Ramirez, named the “Night Stalker” by the media, claimed at his trial that “you don’t understand me. You are not expected to. You are not capable of it. I am beyond your experience. I am beyond good and evil … I don’t believe in the hypocritical, moralistic dogma of this so-called civilized society.” 

It remains far from clear why a few people react to abuse or trauma earlier in their lives by later becoming serial killers. But new insights into the psychological and neurological basis of their actions may one day help us to identify potential killers and dissuade them from committing such horrendous crimes.

Featured image via Pixabay


April 18, 2021

Corona and the crown: monarchy, religion, and disease from Victoria to Elizabeth

Queen Elizabeth II and the royal family have featured prominently in the British state’s response to the COVID-19 pandemic. In her 2020 Christmas broadcast, which ended with the Lewisham and Greenwich NHS Choir singing “Joy to the World,” the Queen evoked the “light of Christmas” in dark times and assured her people of her “thoughts and prayers.” She celebrated the heroism of “our frontline services,” connecting modern nurses to Florence Nightingale, but also to the Good Samaritan, who had cared for a wounded stranger in the gospel parable. The Queen’s 2021 Commonwealth Day address returned to the theme. Over footage of religious services, she spoke of the “spiritual sustenance” her listeners ordinarily derived from meeting together and praised the “selfless dedication to duty” of frontline workers around the Commonwealth. Her children and grandchildren have associated themselves with the medical and spiritual response to the pandemic. On the National Day of Reflection to mark the anniversary of its outbreak, Prince William lit a candle at the shrine of Edward the Confessor in Westminster Abbey, which was doing double duty as a vaccination centre. Commentators have found the Queen’s public statements, such as her promise that “we will meet again” or her invocation of the Tomb of the Unknown Warrior in the Abbey, reminiscent of wartime patriotism. Yet the expectation that the monarch should articulate a spiritual response to the threat of disease had deeper roots. It took its modern form with Queen Victoria, whose reign decisively transformed the relationship between religion, the sovereign, sickness, and health.

Victoria’s truculence was instrumental in modernizing the monarchy’s religious response to disease. When she came to the throne in 1837, a large and vocal body of evangelical Protestants interpreted outbreaks of epidemic disease or setbacks in battle as harbingers of God’s anger with the British. Their leaders pressed the monarch to order state days of national prayer and humiliation to appease divine wrath. Victoria disliked such requests. As a young woman she had listened to liberal clergymen who taught her that God did not arbitrarily meddle with his creation. In Prince Albert, she had married a German rationalist who believed that true piety involved the scientific investigation of the laws which governed the health and prosperity of individuals and societies. In the summer of 1854 for instance, Victoria was adamant that she would not order the Church of England to hold special prayers to end the cholera epidemic which was then raging in London. Such prayers were, she said, “not a sign of gratitude or confidence in the Almighty.” That attitude lasted until the very end of her life and reign in 1901. Although the influenza epidemic of 1892 killed her own grandson, the Duke of Clarence, Victoria nonetheless resisted the Archbishop of Canterbury’s call for special prayers to end it.

“Queen Victoria, Princess Helena and Princess Beatrice Knitting Quilts for the Royal Victoria Hospital, Netley.” By Alexander Melville (via Wikimedia Commons).

Instead of relying on God to intervene, Victoria identified religiosity with the battle against disease through the improvement of housing, sanitation, and medical care. She strengthened her family’s longstanding association with hospitals and patronised Florence Nightingale, who had established nursing as a spiritual vocation. Her much publicised visits to hospitals continued even during the prolonged seclusion which followed the death of her husband in December 1861. The banners which greeted Victoria as she went to Whitechapel to open a wing of the Royal London Hospital in 1876 show how the quest for health could symbolically unite a periodically unpopular monarch with her subjects. They read: “Welcome Victoria, the friend of the afflicted,” “I was sick & ye visited me (Matthew 25:36)” and “Welcome England’s pride, Queen Victoria.” The alliance between the court and the hospital played an important role in establishing Victoria as a friend to all the religions in her expanding empire, rather than just the governor of the Church of England or the defender of Protestantism. The Golden and Diamond Jubilee celebrations with which Victoria’s reign culminated not only celebrated her religious profile in such terms but found expression in the ecumenical promotion of nursing and other philanthropic causes. British Jews for instance were enthusiastic contributors to such schemes. In 1897, the Jewish Chronicle was delighted by the award of a Jubilee baronetcy to the imperial epidemiologist Waldemar Haffkine in recognition of his activities in India, marvelling that a “descendant of William the Conqueror” had recognised “a Russian Jew” for saving “the lives of helpless Hindoos and Mohammedans.”

Yet if the royal family benefited from the promotion of public health, then they reaped even greater dividends from their own mortality. As she aged, Victoria defined her reign as the successful endurance of common suffering rather than the exercise of extraordinary power. Hers was a “thorny crown” and a “heavy cross,” she wrote to a friend in 1886, reflecting on blows such as her early widowhood, her heir Albert Edward’s brush with fatal illness in 1871, and the deaths of her adult children Alice and Leopold. These crises in the life of the monarchy occasioned a flood of sermons and addresses from around the British world, as publics both Christian and non-Christian eagerly manifested sympathy with Victoria’s resigned grief. In a letter to the press on her grandson Clarence’s death, Victoria wrote that in facing the “inscrutable decrees of Providence,” she felt strengthened not just by God but by the “sympathy of millions.” Historians recognise that the monarchy survived Britain’s movement towards mass democracy not only by reluctantly surrendering its political prerogatives but also by showing that it could represent the nation. The vulnerability of Victoria’s family to disease was a visceral demonstration of its representativeness, confirming to a predominantly religious public that the Queen was united with them in sorrow, sacrifice, and resignation. Though kindled in a religious age, these emotional expectations on monarchs and leaders have lasted into a more secular time, as both the Queen’s broadcasts and the media’s recent mobilization to condole with her on the death of her husband Prince Philip have shown.

Feature image: “Queen Victoria’s First Visit to her Wounded Soldiers.” By Jerry Barrett (via Wikimedia Commons).


April 17, 2021

Hollywood on Hollywood: will the Academy “Mank” up for Citizen Kane snub?

It is no secret that movies about Hollywood come with built-in Oscar buzz. The trend is nearly as old as the Academy Awards themselves. MGM’s musical comedy Hollywood Revue started the tradition with a best picture nomination (or “Outstanding Picture” as it was then called) at the second annual Oscar ceremony in 1930, and A Star is Born and Sunset Boulevard followed suit in 1938 and 1951, respectively. More recently, The Artist (2011) and Birdman (2014) took home the award for best picture—and of course La La Land (2016) would have won if Warren Beatty had run the show. While sometimes sprinkled with cynicism, these films have tended to charm the members of the Academy because they romanticize the mystique of the movies.

Perhaps what is less common knowledge is that such self-promotion has been a part of the industry’s infrastructure from the very beginning. In response to growing public hostility against the film industry in the early 1920s, studio heads banded together to establish the Motion Picture Producers and Distributors of America (now known simply as the Motion Picture Association). One of the MPPDA’s by-laws focused on “diffusing accurate and reliable information with reference to the industry,” which in part led to a cluster of studio films depicting the industry as a wholesome place full of hardworking people.

Diffusing accurate and reliable information also entailed keeping a close eye on what the MPPDA considered misinformation about its business. Any publication that threatened to subvert the industry’s promotional front would be met with swift legal action. Similarly, throughout the 1930s, the MPPDA shut down a number of film projects that jeopardized the industry’s image, including the unproduced Queer People, The William Desmond Taylor Murder Case, and Hollywood Bandwagon. The Hollywood novel around this time was the only venue that could explore the dark side of the film culture.

“self-promotion has been a part of the industry’s infrastructure from the very beginning”

Out of this industry-wide effort emerged the Academy of Motion Picture Arts and Sciences, an organization that effectively fostered the MPPDA’s primary goal of “maintaining the highest possible moral and artistic standards in motion picture production.” Given the industry’s foundational insecurities, then, it is hardly surprising that films depicting Hollywood through a more critical lens—including Hollywood Shuffle (1987), Barton Fink (1991), The Player (1992), Ed Wood (1994), Mulholland Drive (2001), and more recently Hail, Caesar! (2016)—have been snubbed if not ignored entirely at the Oscars, perhaps with the exception of Sunset Boulevard. Honoring such films, especially with best picture nominations, would be antithetical to the very foundation of the Academy. And yet these snubbed films are often far more memorable than their Oscar-darling contemporaries. When was the last time anyone watched The Artist?

This year, we find ourselves returning to this age-old dilemma with David Fincher’s Mank, which has earned ten nominations, including best picture, director, actor, and supporting actress. On its surface, Mank appears to join this noir-ish vision of Hollywood. Emulating 1940s cinematography and sound design, Mank works to upend the nostalgia of classical Hollywood cinema, both in form and content. The film revisits the pre-production for Citizen Kane, commonly hailed as the best film ever to come out of the studio era, and sheds light on Herman Mankiewicz’s role in what is commonly thought of as Orson Welles’s masterpiece. Fincher shows us studio life through the eyes of one of its admirable misfits, along the way exposing the ruthlessness of Louis B. Mayer and the industry’s influence over California politics.

Still, as groundbreaking as Citizen Kane was for its time, Mank feels fairly safe by comparison. Making a movie about a subversive movie, it turns out, doesn’t make for a subversive movie. Despite its darker perspective on the studio system, Mank invites us to discover a lesser-known aspect of Kane and ultimately reminds us of its well-earned place in film history. Mank ends with a re-enactment of Citizen Kane’s Oscar win for best original screenplay, followed by a real audio clip of Welles’s press conference in Brazil shortly after. A journalist says to Welles, “Kane was nominated in nine categories, including best actor. Aren’t you disappointed it only won one Oscar?” to which Welles responds, “Well that, my good man, is Hollywood.” In this way, Mank practically dares the Academy to make up for Citizen Kane’s snub back in 1942. Of course, this feels like a lose-lose situation. If Mank wins, everyone will accuse the industry of patting itself on the back as usual. If it loses, some might accuse the Academy of passing up an invitation to correct one of its biggest mistakes.

We’ll see what happens come 25 April.

Feature image by Jake Blucker. Public domain via Unsplash


How well do you know your libraries? [Quiz]

Were you born to be a librarian? Are you a library fan? Or do you just like a bit of trivia? Whatever your reason, it’s time to prove to us how well you know your libraries with this short quiz.

Follow us on Twitter to stay up-to-date with the latest from OUP Libraries.

Think your library should be featured in our #LibraryOfTheWeek? Email us with their information or send us a message via Twitter.


April 16, 2021

Respecting property takes two

Parents don’t teach their children the word for “mine”; they learn it all on their own. Parents also don’t teach their two-year-olds the following rules of “mine”:

 

if I like it, it’s mine;
if I see it, it’s mine;
if it’s in my hand, it’s mine;
if it looks like mine, it’s mine;
if I can take it from you, it’s mine; and
if I had it a little while ago, it’s mine.

Yet, no parent in any age or community lets these untaught rules of “mine” stand unchecked. Every generation of parents teaches their children the rules of how not to acquire something. In the modern world we create television programs to assist with the uptake. We use such lessons to teach children when they can say, “This is mine,” and reciprocally, when someone else can say, “This is mine”—that is, when the child must say “That is yours.”

An episode of the British animated children’s show Bing illustrates how parents teach the general rules about what “This is mine” means. The eponymous Bing is a preschool bunny who learns how the world works from his guardian-babysitter, Flop. In the episode entitled “Not Yours,” Bing and Flop visit Padget’s corner shop to purchase groceries and a treat for dinner (carrot muffins, naturally). While Flop pays for the groceries, Bing wanders around the store and discovers a box of lollipops on the far wall. Lured by its sweetness, he picks one up, opens it, and takes a lick. When Flop calls Bing to leave, he discreetly slips the treasure into his pocket. Notice what the cartoon teaches in a simple fifty-second conversation when Bing takes out the lollipop and begins licking it on their way home:

Flop: Oh, what have you got there, Bing?
Bing: Mmm. A lollipop. It’s strawberry.
Flop: Where did you get that from?
Bing: It’s mine. I found it in the shop.
Flop: Ah, and did we pay for the lollipop?
Bing: Uh, no.
Flop: Oh, well… if we didn’t pay for it, I’m afraid it must still belong to Padget.
Bing: Oh, can we keep it?
Flop: Well no, Bing, it’s not yours.
Bing: Why not?
Flop: Well, if you take something without paying for it, that’s not right, is it? It’s called stealing.

Notice how naturally the show’s writers assume a child will claim a thing first-in-hand. Preschool viewers identify with Bing, and no one teaches them to claim things they find as “Mine!” They do that all on their own. What preschoolers need to be taught is that “finders, keepers” does not apply to things inside a grocery store. The lessons to be learned are that things like lollipops generally belong to someone else and that taking something because “I want it” is not the same thing as being able to say, “It’s mine.” We teach the lessons of mine via “not yours.”

“Property isn’t unilateral. It requires reciprocal relationships.”

A claim of “This is mine!” is not the end of property. If it were, then property would be as purely subjective as “I want this” is. Rather, property requires that people other than me also know the circumstances of when my claim of “Mine!” is indeed true. The singularity of property is evident. Only I can use the concept of “mine” to assert a first-person claim on something. If I can say about something, “This is mine,” then other people cannot say “This is mine” about the very same thing. Moreover, I can even say things like “Do not trespass” or “Leave it alone.” But what is less evident is that when I say things like that, I am relying on everyone else to use the concept of “yours” to respect my claim of “This is mine!” I am relying on them to say, “That is yours.”

Mine and thine are intertwined. If I want other people to say “That is yours” to me, and they are, in every respect, as good as me, then equality dictates that I must respect their claims of “This is mine” about other things. Property isn’t unilateral. It requires reciprocal relationships. Property requires me to respect other people’s claims of “This is mine” as much as it requires them to respect my claims of “This is mine!”.

Featured image: Calvin Hanson via Unsplash


April 15, 2021

Pivotal moments in US history: a timeline of the Saratoga campaign

In the summer and fall of 1777, after two years of indecisive fighting on both sides, the American War of Independence was at a stalemate. The British were determined to end the rebellion that year and devised what they believed to be a war-winning strategy: to send General John Burgoyne south to rout the Americans and take Albany. Less than four months later, however, a combination of Continental Army and militia forces, commanded by Major General Horatio Gates and inspired by the heroics of Benedict Arnold, changed the course of the war…




April 14, 2021

Monthly gleanings for March 2021

Some of the letters I received deserve detailed answers; hence the length of this post. I once wrote a series of essays on the origin of the word bad (24 June, 8 July, and 15 July 2015). Today, considering the responses to part three, I would have written a few things differently. Anyway, our reader noticed that I had mentioned the word evil and wondered whether I knew the origin of that adjective and of the word wicked.

Evil

This word has cognates elsewhere in West Germanic and in the fourth-century Gothic translation of the New Testament, but not in Old Norse, though some Scandinavian words sound similar and may be related. Dutch euvel, German übel, and Gothic ubils are three easily recognizable cognates of evil (Old English yfel). It is not clear whether ubil– (s is an ending) constitutes an entire root or whether we have ub– with the suffix -il. Those who preferred to treat ubil– as an indivisible morpheme cited a few similar-sounding Celtic words, but the status of such vague look-alikes is dubious, and the view of il as a suffix carries more conviction.

What then is the meaning of the root ub– if it ever existed? Perhaps it is the same as in the prepositions (adverbs) over, German über, and so forth. Then (whatever the function of the suffix –il, related to –il in Latin, as in agilis “agile,” fragilis “fragile,” etc.), evil referred to “going over the top, exceeding due limits,” something like English overweening “presumptuous, thinking too much of oneself” (ween “to think; suppose; expect”). This etymology is acceptable. Epithets deteriorate easily, and the distance from “tyrannical” to “evil” is not long.

Overweening, but not evil. (Image by Grant Ritchie.)

But there is another approach to evil. Middle English had the adjective evel, apparently, an old word, which, like its Middle High German cognate evel, meant “proud, haughty.”  The German word can still be detected in the noun Frevel “sin; crime,” corresponding to Old English frævel ~ frevel “cunning, sly; arrogant.” Those words would then have the prefix fr– and the root cognate with Old English afol “strength, might.” The semantic change looks the same in both cases (from “mighty” to “evil”).

I am not sure who offered the second hypothesis, but my earliest reference to it is in a 1911 paper by the once extremely active and distinguished American historical linguist Francis A. Wood. The Indo-European etymological dictionary by Walde-Pokorny accepted it, and so did Ferdinand Holthausen, the author of the only etymological dictionary of Old English, but he, like Walde-Pokorny, reproduced the hypothesis of evil from ev-il. It seems that frevel and evil may be covered by the same explanation. Elmar Seebold, the latest editor of Kluge’s German etymological dictionary, doubts the connection between evil and over ~ über but offers no discussion.

Wicked

A wicked wicca? (Image by Ray_Shrewsberry)

Although this adjective surfaced in texts only in the Middle English period, it must be old, because of its obvious ties with Old English wicca “wizard,” the masculine partner of wicce “witch.” The meaning of the root is not known. The idea that witch ~ wicked is related to a verb meaning to “turn away” (as Walter W. Skeat and several others thought) is not particularly attractive.

Both evil and wicked are hard words to etymologize. Yet ill is much harder, and I may devote a post to it sometime soon.

Sward

Another letter concerned the noun sward. See the posts for 27 May and 3 June 2020. I compared sword and sward. Our correspondent asked whether anyone had been interested in the origin of sward for its own sake. The most  detailed discussion can be found in the Norwegian etymological dictionary by Hjalmar Falk and Alf Torp (its German version Norwegisch-dänisches etymologisches Wörterbuch has the advantage that more people outside Scandinavia can read it and that in addition to some corrections, it has a German word index), pp. 1222 and 1561. In English, only a page in Robert A. Fowkes’s paper in the journal Language 21, 1945: 345-46, is known to me. Fowkes supported the idea that the root of sward meant “to cover.” None of those researchers compared sward and sword. My suggestions will be found in the post for June 3.

Hunt

In the post titled “An Etymologist is Not a Lonely Hunter” (12 February 2020), I discussed the origin of the verb hunt. A recent comment returned me to it. Can hunt be related to hound? The similarity is striking, and Friedrich Kluge, the author of the most important etymological dictionary of German (see evil, above), never gave up the idea of deriving hound (Old English hund) and hunt from the same root. But his posthumous editors removed this comparison. The problem is that neither word has a definitive etymology, and in historical linguistics, making one unknown entity support another never yields convincing results. Hound is especially obscure, even though its cognates exist all over the Indo-European world. The alleged common root meaning “to seize,” which fits hand, hardly fits hund “hound,” that is, “dog,” and the derivation of hunt from hund is out of the question.

Running with the hare and hunting with the hounds. (Image via Wikimedia Commons)

Greek thraúō “break” and English trash

They cannot go back to the same root. Trash is a very late word in English. The relation between t and th, if the words were cognate, would have been the reverse, and short a (ă) does not alternate with au by ablaut.

The Future in the Past

The reference is to the post for 17 March 2021. I have received two letters objecting to my interpretation and a comment (among others) referring to Thorleif Boman’s book Hebrew Thought Compared with Greek. I know the book and at one time read part of it in its German original but now looked through the relevant sections again. Boman discusses the way of thinking about time: spatial versus temporal. He also mentions the ambiguity of some of our expressions. Indeed, before may refer to what is in front of us (as in Byron’s: “I see before me the gladiator lie”) and to what is behind (as in Before Adam, the title of a novel by Jack London). But Boman did not touch on the meaning of the enigmatic Germanic word that could allegedly mean both “yesterday” and (rarely) “tomorrow.”

In fourth-century Gothic (see the post), the word gistra-dagis, a counterpart of English yester-day, was once used to gloss (translate) aúrion, apparently, “tomorrow.” Yet, at least from a folk-etymological point of view, the Greek word consists of the adverb (particle) aú “further; once again” and the noun ríon “mountain peak; headland, promontory.” To the Greeks of even the Homeric period it must have meant “tomorrow” only because it pointed to some faraway object. And this is probably how Wulfila understood it. Gothic gistra– corresponds letter by letter to Latin hesternus “belonging to yesterday.” The reference must have been to any “adjoining day.” Karl Brugmann (see his portrait in the post) explained the situation well but, to my mind, did not go far enough; the situation puzzled him. The occasional translation of Old Icelandic í gær as “tomorrow,” in addition to “yesterday,” is misleading: the reference is to some other day. Only Modern Scandinavian i går means “yesterday” in our sense of this adverb.

No Common Indo-European word for “tomorrow” existed, and those in individual languages were descriptive phrases (like the Greek one), referring to the “day to come.” Some were etymologically impenetrable (like Latin crās, which also meant “sometime in the future”). For “the future” the speakers of Old Germanic had no word. Dutch toekomst and German Zukunft refer to things “coming toward one,” and both were coined only in the Middle period. Old English tōcuman meant “to approach.” (Latin futūrum is the future participle of esse “to be,” thus, also “about to happen; approaching.”) Although, of course, those people knew that the future existed, in their perception of the world it merged with the present (hence their ability to prophesy and detect its visual marks), and that is why they had not developed the concept of the future as an abstract category. Apparently, one did not need a special adverb for “tomorrow.” “Tomorrow” continued “today,” since no fixed boundary separated them: “tomorrow” set in whenever one woke up (“the morrow” could begin at any moment). By contrast, yesterday (though also of course “another day”) belonged unambiguously to the past and was easy to define. However, when needed, the word for it could be pressed into meaning its opposite, like English be-fore (fore “in front”! Compare fore and aft).

Advanced Elegant English

♦“For millennia we have followed the rule to not speak ill of the dead.” ♦“…[he] urged Americans to not allow themselves grow “numb,” as…”  ♦“Business groups… called on Democrats to instead work with Republicans and industry groups….” ♦“We do not ask a woman to keep their child, only to let the child live.”

Nor do I ask English to stop changing, only to let it live.

Feature image by numerologysign.com (CC BY 2.0)


Taking stock of the future of work, mid-pandemic

This past month marked an anniversary like no other. On 11 March 2020, the World Health Organization declared COVID-19 a pandemic, and with it the normal life of eating out, commuting to work, and seeing grandparents came to a sudden halt. One year later, my new book about the intersection of psychology and the workplace was published. With wide-scale vaccinations on the rise, I thought it would be a good time to take stock of where we are and just how much has changed.

Even before the pandemic, major shifts in how employees think about and interact with their workplace were well underway. Routine tasks were increasingly handed over to automation, such that workers could concentrate on more complex or valuable activities. Employees were being shuffled into new environments guided by activity-based working, with neighborhoods and open-plan floor plans replacing traditional cubicles. Gig working was on the rise and self-employed side hustles were becoming the norm.

COVID-19 added fuel to the fire. During the pandemic’s peak, approximately 30% of employees were working from home, while internet use surged by up to 30% over pre-pandemic levels, as workers competed for bandwidth with their kids, who were now thrown into distance learning. Social collaboration tools, like Zoom and Teams, found the footing that they were searching for and quickly became the default means of working with peers and customers.

Even though employers made swift and effective adjustments, the pandemic unearthed social inequalities that were bubbling under the surface. Whereas over half of information workers in the US (e.g. consultants or managers) had the luxury of staying safe by working from home, only 1 in 20 service workers could do the same. These workers were tied to their physical workplace, making cars, cooking food, or cleaning hospitals, and were often people of color. They bore the brunt of the sudden halt to normal life, having both to put themselves at risk and to make do without the tools to work in any other way.

With sustainable progress towards an end to the pandemic in sight, employers have already made decisions that will have lasting effects on the workplace. Remote working has proven both feasible and effective for even the most skeptical companies, resulting in many employers encouraging hybrid working. Exactly what this means is still up in the air, even for Big Technology. Twitter and Spotify have vowed to make remote working indefinite, while Google requires employees who plan to spend more than 14 days at home annually to apply for special permission. Somewhere in between are Facebook, Salesforce, and Amazon who have all stated support for a hybrid model.

The offices that employees are returning to will also look and feel different. This past year has given employers the rare opportunity to reassess their real estate portfolio. For example, companies like Target have recently announced a reduction of their office footprint, anticipating lower utilization due to hybrid working. When they do come in, workers will be there to collaborate with peers, access tools and technology, or simply meet up as a team. What is increasingly off the menu is the solitary, heads-down type of work that was typically done in a cubicle.

“As we enter this new Future of Work, the same psychological tendencies that have always governed our work behavior will undergo a period of adjustment.”

To date, much of the shift to the Future of Work has been unidimensional, tackling only the physical workplace. What lags behind is a new set of behaviors and social norms to guide employees in how and when they should interact with their new workplace.

For example, video conferencing breaks down the separation between home and work life. Catching a glimpse of a curious four-year-old popping up on screen or spotting laundry that has yet to be put away creates unintended and possibly inappropriate intimacy. Meanwhile, choices about when to turn on the camera and which hours are truly off-limits have yet to fully work themselves out.

Just like the tangible office, effective remote working will be made possible by a range of behaviors and social norms that interplay with the motivations, needs, and desires of employees. Wearing a suit while on camera not only looks odd but also plays against our desire to be comfortable at home. There is a tension between looking professional and feeling comfortable, with the answer resting somewhere between formalwear and pajamas.

As we enter this new Future of Work, the same psychological tendencies that have always governed our work behavior will undergo a period of adjustment. Drivers like identity, reward, and obedience are all still in play, but will take on a distinct flavor as we migrate to a new way of working. Building rapport on Zoom is different from sitting down for a meal, just as observing job performance in person relies on a degree of constant visibility that will go missing with remote work. Following the pandemic, there is little doubt that the workplace will fundamentally change; the question is how and when the psychology of workers will catch up.

Read a free chapter on “Connection” from Punching the Clock, freely available until 13 May 2021.

 

Featured image by piranka


April 13, 2021

Rehabilitating the sacred side of Arthur Sullivan, Britain’s most performed composer

November 2018 saw the release of the first ever professional recording of Arthur Sullivan’s oratorio, The Light of the World, based on Biblical texts and focused on the life and teaching of Jesus. Reviewers focused as much on the piece as on the performance. The critical reaction to this work, which had been largely ignored and rarely performed for over 140 years, was extraordinary. Classical music magazines and websites hailed a revelatory discovery, with music of an engaging freshness and directness. What particularly impressed critics was the handling of the character of Jesus. Radically, Sullivan dispensed with the usual narrator and made Jesus a real character who interacted with others. There was general agreement among those reviewing the recording that this treatment of Jesus as a human figure with emotions gave The Light of the World a real spiritual depth contrasting with the pious sentimentality of most Victorian oratorios.

These tributes represent an overdue acknowledgment of the talents of a very different figure from the familiar master of the patter songs, rumpty-tumpty choruses and light, lilting waltzes of the Savoy operas. They are part of a welcome rehabilitation and appreciation of Sullivan’s sacred music and of the spiritual sensitivity as well as the artistic competence and daring innovation that underlay it. After more than a century of almost total neglect, this significant part of his overall output is coming to be appreciated not just in its own right but also in terms of its influence on Edward Elgar and other twentieth-century British composers.

The Savoy operas will always remain Sullivan’s most enduring legacy. Yet they are not what he wanted to be remembered for. In a newspaper interview during a visit to the United States in 1885, he said: “My sacred music is that on which I base my reputation as a composer. These works are the offspring of my liveliest fancy, the children of my greatest strength, the products of my most earnest thought and most incessant toil.” 

Sullivan was schooled in the Christian faith and the world of Anglican church music. As a young boy he attended the parish church where his father had responsibility for the music. Between the ages of 12 and 15 he sang as a chorister in the Chapel Royal, steeping himself in the Anglican choral tradition of hymns, anthems, and plainchant. While making his way as a composer and conductor, he served as organist in two London churches. His closest friends were church musicians and he developed a particularly intimate relationship with George Grove, one of the leading amateur Biblical scholars of the Victorian age.

Church music was Sullivan’s most abiding love. His first and last compositions were settings of a Biblical and a liturgical text, respectively. At the age of eight, he wrote an anthem setting the opening verses of Psalm 137, “By the waters of Babylon.” At the age of 58, when his strength was ebbing fast, he wrote a Te Deum to celebrate the end of the Boer War, devoting himself to this sacred piece to the detriment of an operetta, which remained unfinished at his death. In the fifty years between these two compositions, he produced a formidable corpus of sacred music encompassing two Biblically based oratorios, The Prodigal Son and The Light of the World; a sacred musical drama, The Martyr of Antioch; a sacred cantata, The Golden Legend; three Te Deums; twenty-six sacred part songs and ballads; nineteen anthems; and over sixty original hymn tunes, including ST GERTRUDE for “Onward, Christian Soldiers” and NOEL, to which “It came upon the midnight clear” is universally sung in Britain. Several of these religious works became as well-known and popular with his contemporaries as his comic operas. His sacred ballad “The Lost Chord” was the best-selling song of the last quarter of the nineteenth century. For more than two decades, The Golden Legend was the second most performed choral work in the United Kingdom after Handel’s Messiah.

The almost complete disappearance of Sullivan’s serious music, and particularly of his religious compositions, from the repertoire throughout the twentieth century was partly due to the general reaction against Victorian taste and values, but it was also the consequence of a sustained campaign by music critics and commentators. During his lifetime, Sullivan faced considerable criticism for devoting too much of his time and talent to lightweight theatrical pieces and not fulfilling his youthful potential as a composer of serious and sacred works on a par with Brahms. In the twentieth century, while his comic operas won many accolades and continued to be performed, criticism shifted to his sacred and church music, which was denigrated for being dull, affected, insincere, vulgarly populist, and over-sentimental.

Underlying these criticisms of Sullivan’s religious music was a widely shared view that he was not himself a person of any great faith or spiritual depth. This has been the consensus among his recent biographers, in marked contrast to the strong emphasis on his religious impulses and sensitivity in the early biographies written during his lifetime by those who knew him. Those who have contributed to the recent rehabilitation of his serious music have continued to downplay his religious commitment and his sacred works.

Sullivan was certainly no saint or ascetic, but rather a bon viveur who lived life to the full. He did not often write or talk about spiritual or theological matters. But there is clear evidence from his diaries and letters, which has been ignored or overlooked by his more recent biographers, that he had a consistent and simple Christian faith, which was shaped in a liberal Broad Church direction by his close friendship with George Grove. He felt that faith was best expressed in practical charity and in the exercise of generosity and forbearance. He emphasized the theological themes of forgiveness and assurance in the texts that he chose to set and the way that he set them. He remained loyal to the church in which he had been nurtured, and retained a lifelong affection for the Anglican choral tradition.


April 12, 2021

“We don’t like either side very much”: British attitudes to the American Civil War

One hundred and sixty years ago, on 12 April 1861, the Confederacy attacked Fort Sumter, a Union fort in Charleston harbour. The first shots of the Civil War had been fired.

British attitudes to that war baffled both participants at the time, and perhaps still do. Many considerations, often conflicting, fed into the perceptions of individuals and groups in Britain, making their attitudes to both the war and their own self-interest in it, which were not necessarily the same thing, complicated and sometimes contradictory.

Many people in Britain were passionate supporters of one side or the other. However, many others—and perhaps a majority—felt a strong distaste for both sides. In the case of the South, the overwhelming reason for this can be stated in one word: slavery. Britain had abolished slavery in most of its own colonies in 1834 and, in the intervening thirty years or so, had become a strongly abolitionist nation. Even many of the Southern partisans in Britain refused to endorse slavery.

Despite knowing this, Confederate leaders managed to convince themselves that Britain’s need for raw cotton would lead it to recognise Southern independence and intervene to break the Union blockade of the cotton ports. “Why, sir,” said a fellow diner to William Howard Russell of The Times at Charleston in April 1861, “we have only to shut off your supply of cotton for a few weeks and we can create a revolution in Great Britain. … No sir, we know that England must recognize us.” After the war, Jefferson Davis’s widow wrote that her husband had regarded foreign recognition as an assumed fact.

Without this assumption, it is possible that secession and the Civil War would never have happened. It was a disastrous misjudgement by the Confederacy, which simultaneously underestimated the depth of the British loathing for slavery and overestimated the reliance of the British economy on the cotton trade.

“implacable hostility in Britain towards slavery, and towards the South, did not translate immediately into widespread support for the North”

Yet this implacable hostility in Britain towards slavery, and towards the South, did not translate immediately into widespread support for the North. Here, the reasons were a great deal more varied and complicated. The starting point is to understand that, before the Civil War, Britain and America had not been allies or political friends, despite many close personal links between the two countries.

It was, after all, less than 50 years since the British army had set fire to the White House and the Capitol—not a long time in the national memory. In the decade before the Civil War, Britain and America could have gone to war on several occasions. In particular, Britain was angered by America’s attempt to gain an advantage at Britain’s expense during the Crimean War, fought only a few years earlier. The two nations were fast becoming rivals, certainly in the economic sphere and increasingly in the political sphere too. Neither did the British Establishment take kindly to the republican and anti-monarchist ethos of America, especially at a time when a revolutionary threat in Europe bubbled close to the surface.

So, when the Civil War started, there was little existing bedrock of friendship between the governments in London and Washington.

The North made a calamitous start to the war as far as obtaining British sympathy was concerned. It denied that slavery had anything to do with the conflict. Many in Britain shared the sentiment of Giuseppe Garibaldi that, if the war was not about slavery, it was merely an “intestine war” over territory and sovereignty, “like any civil war in which the world at large could have little interest or sympathy”.

Then the North enacted the Morrill tariff, a protectionist measure that substantially raised the duties on British imports. This alarmed even Northern supporters in Britain, almost all of whom were fervent supporters of free trade. The Union had a press, especially in New York, that was vitriolically anti-British. It had a Secretary of State, William Seward, who was hostile to Britain, and who was also suspected (with good reason) of having designs on Canada. The North suspended habeas corpus and introduced restrictions on civil liberties, which in Britain were seen as evidence of tyranny.

“though Britain’s lack of sympathy infuriated the North, it was the South that needed Britain’s outright support, while the North needed only her abstention”

On top of all this, in November 1861, a Federal naval captain, acting on his own initiative, intercepted the British mail packet SS Trent and removed two Confederate diplomats. Feverish support for this action in America, and outraged opposition to it in Britain, nearly led to war between the two countries.

With all this in mind, it is perhaps not surprising that many in Britain found themselves unwilling to support the North, whilst being unable to support the South. Both sides in America were aggrieved by this fact, and largely uncomprehending of it. But, much though Britain’s lack of sympathy infuriated the North, it was the South that needed Britain’s outright support, while the North—although it also wanted it—needed only her abstention. In that respect, the British response to the Civil War was more to the North’s benefit.

In time, attitudes in Britain towards the North softened, due principally to President Lincoln’s Emancipation Proclamation. The ending of slavery had now become a war aim. The South’s last chance of obtaining British recognition had gone forever.

Featured image: “Over the Way.” Punch Magazine, 16th November 1861.

