Oxford University Press's Blog, page 197

March 20, 2019

On sluts and slatterns

Here is another attack on sl-words, a continuation of the one celebrated last week (“The sl-morass”). There, I mentioned the fact that a word beginning with sl– tends to develop negative connotations. My example was Engl. slim, as opposed to German schlimm “bad.” Another example (one of many!) is slight. In the Old Germanic languages, including Gothic, which was recorded in the fourth century, the word meant “smooth,” perhaps “slippery.” (Incidentally, to slip is also a verb of questionable antecedents, and one can easily slip on a slope; slope, we are told, is a word of unknown origin.) Engl. slight retained its ancient meaning “smooth” in some dialects. Elsewhere, it means “small; insignificant,” but its German cognate schlecht yielded the senses “simple” and “bad.” Allegedly, under the influence of Dutch or Low [= northern] German the English verb to slight began to mean “to disdain, disparage.” Conclusion: beware of sl! You will end up neck-deep in slime (slush) or among sleazy individuals, to say nothing of sluts and slatterns.

Slattern turned up in texts only in the seventeenth century, but it seems to have been known long before the time of its first attestation, approximately when slut turned up. At first, both meant “a slovenly woman.” Slut deteriorated, like many other words for “an untidy female,” that is, from “draggletail” to “whore.” The literature on both words is rather extensive, and it is usually taken for granted that they are in some way cognate. Only Walter W. Skeat insisted that they were either unrelated or derived from different forms of the same verb.

At least as early as the 1870s, Engl. slattern and slut, along with their German and Dutch look-alikes, were traced to Celtic. This is an unpromising hypothesis, because the origin of the Celtic words is unknown and the circumstances under which they supposedly reached such broad areas have never been revealed. Quite a few English sl-words are of Scandinavian origin, and the Norse etymology of both slut and slattern appears in several dictionaries; it has a modern supporter, namely Dr. William Sayers. However, almost identical words were widely known in Germany (see below), where influence from Scandinavian speakers cannot be considered.

Look at this picture and the other two pictures posted below: they feature similar-looking but unrelated “entities.” Image credit: Mushroom by adege. Pixabay License via Pixabay. Bali Indonesia Children by Nico_Boersen. Pixabay License via Pixabay.

In this context, I cannot refrain from citing Russian shliukha “slovenly woman; prostitute.” This word is usually connected with a verb meaning “to gad about,” but is it not a member of the slut-schluite club? Didn’t some such words travel all over Europe? Those interested in analogous cases may consult the entries trull and traipse in my etymological dictionary. (I did not check the existence of schluite and know only German schlutte and schlotte, but in my experience, Mackay’s material, unlike his erratic hypotheses, is always reliable.)

The main facts that throw light on the history of slut come from Italy. Following the suggestion of the distinguished Romance scholar Ernst Gamillscheg, Vittoria Grazi collected a list of German and Italian regional nouns, adjectives, and verbs of the type that interests us. I’ll cite only a few Italian forms, obviously borrowed from German dialects. Their semantic range goes all the way from “prostitute” and “dirty” to “suicide”: sludre, slodra, sloter, slöder, and others beginning with sl-. It seems reasonable to suggest that such words were coined in the German-speaking area. They could not come to Italy from Scandinavia or Ireland, but from Germany they seem to have spread in several directions, flooded Dutch, and reached England. In English they are not native. Yet their origin is obscure. Some of them may be onomatopoeic and sound symbolic. The group sl– seems to imply the idea of filth in many parts of the Eurasian world.

Image credit: RCMP Boots by Nic Amaya. Public domain via Unsplash.

It is probably impractical to look for the exact form that gave rise to slut and slattern. The Oxford Dictionary of English Etymology took its cue from the OED and wrote (at slut): “Of unknown origin; contact with continental words similarly used and having the same cons[onant] structure sl..t. cannot be proved.” But how can anything be “proved” here? What arguments would be decisive to show the connection? Those words are related in the same vague way in which all such formations are. Therefore, the researchers who lumped them together were probably right. Sl-nouns, adjectives, and verbs rarely disclose their past. Look up sloven “rascal,” which we remember only thanks to slovenly, and you will read that this sixteenth-century word is perhaps based on Flemish sloef “dirty, squalid,” Dutch slof “negligent.” And so it goes. Dirty business indeed.

Image credit: Skating by LuigiTa7. Pixabay License via Pixabay.

The post On sluts and slatterns appeared first on OUPblog.

Published on March 20, 2019 05:30

Notable female microbiologists you’ve never heard of

Browsing through the most notable names in the history of microbiology, you could be fooled into thinking there were no female scientists working in ground-breaking fields such as antibiotic studies, bacteriology, or virology in the middle of the twentieth century. In fact, laboratories did employ women, though male scientists often thought of them as supplemental parts of teams working in a highly technical field, not contributing much in the way of impact. However, a closer look at the history of microbiology shows how women were actively involved in several important discoveries.

1930

Marjory Stephenson, born in 1885 near Cambridge, England, was instrumental in investigating bacterial metabolism, the biochemical reactions that occur within bacteria to allow them to live and reproduce. She realised that investigation of bacteria would help towards our understanding of cell biology in general. Her book Bacterial Metabolism, published in 1930, became essential reading for biologists and biochemists. Stephenson wasn’t just an excellent researcher, but also a leading educator, actively promoting training in microbiology and co-founding the Society for General Microbiology – eventually becoming its second president.

1943

When Alexander Fleming accidentally discovered penicillin, humanity hoped to eliminate many infectious diseases. Unfortunately, Fleming’s mould strain could not produce enough penicillin for mass production. It was a little-known American, Mary Hunt, who decided the future of antibiotic production. The Northern Regional Research Laboratory in Illinois hired Hunt as an expert in moulds, and the lab later collaborated with the University of Oxford to find the best penicillin producer. Hunt was known at local markets as the weird woman looking for mouldy produce, and eventually found the strain which led to mass penicillin production (Penicillium chrysogenum) on a cantaloupe melon in 1943. As a result, Mary Hunt became folk legend Mouldy Mary.

1944

While penicillin helped combat widespread infections like bacterial meningitis and pneumonia, it was ineffective against gram-negative bacteria and the bacterium that causes tuberculosis. In 1944, a team of scientists from Rutgers University discovered streptomycin, the first antibiotic active against tuberculosis that wasn’t toxic to humans. One author of the paper written about their discovery was Elizabeth Bugie. However, the male researchers didn’t include Bugie’s name on the patent for streptomycin, telling her it wasn’t important for her name to be listed, as one day she would “get married and have a family”. In her time at Rutgers, Bugie also researched other antimicrobial substances and her work formed the basis of early antibiotic studies. Bugie was not the only female microbiologist who contributed to the acceleration of antibiotic development at Rutgers. Elizabeth Horning, Doris Jones, Christine Reilly, Dorris Hutchinson, and Vivian Schatz also worked during the golden age of antibiotic discovery (1950–1970), when half of the antibiotics commonly used today were developed.

1950s

English parasitologist Ann Bishop entered Girton College at Cambridge University in 1922. She remained there for most of her working life despite early hardships – including being forced to sit on the first aid box at departmental tea. She was part of the Molteno Institute (eventually becoming its director), one of the first labs to study malaria and its treatment. Bishop predicted that malaria parasites would develop resistance after prolonged exposure to drugs, her work on cross-resistance of antimalarial drugs significantly improving malaria therapy in the 1950s.

1970s

It was difficult for researchers to identify and investigate viruses, due to their particularly small size, before the pioneering work of June Almeida. Born in Scotland in 1930, Almeida had no official academic qualifications, but became an experienced technician and electron microscopy expert. Her technique made it possible to obtain a much more detailed look at the structures of viruses, and resulted in the first visualisations of rubella and hepatitis A in the 1970s. Almeida was an innovator as well as an educator, with the unique ability to unlock seemingly complex problems with simple solutions. Her collaborations with other virologists led to the discovery of countless other viruses.

These examples are just a small glance at female involvement in the history of microbiology. A look at any university archive will lead to many more. Female microbiologists may not have had professor or doctor in front of their names, instead listed as laboratory assistants or technicians, but in many cases their skills were critical for numerous notable discoveries. It is worth reminding ourselves who they are and how they changed the world for good, without expecting to receive the recognition that was rightfully theirs.

Featured image credit: “British women working in chemical laboratory near Manchester, 1914” by University of British Columbia Library. Public Domain via Flickr.

The post Notable female microbiologists you’ve never heard of appeared first on OUPblog.

Published on March 20, 2019 02:30

March 19, 2019

Warning: music therapy comes with risks

Bob Marley sings, “One good thing about music—when it hits you, you feel no pain.” Although this may be the case for some people and in some circumstances, we dispute this statement as a global truth. After all, couldn’t any phenomenon commanding enough to alleviate human pain (ostensibly instantaneously) also harbor the potential to catalyze undesirable, even injurious, effects? And couldn’t this influence then logically extend to music employed within the context of a therapeutic process? As music therapist and Concordia University Associate Professor Dr. Laurel Young writes, “the ‘miraculous’ effects of music as featured in popular media along with the widely accepted notion that music is a ‘universal’ medium can lead to false generalizations and over-simplification of how music can and should be used in healthcare or other psychosocial contexts.” One possible manifestation of this oversimplification is to view music as a noninvasive and wholly positive cure-all, and thus disregard the potential risks associated with music engagement.

All therapeutic encounters carry possible risks. In fact, a certain level of risk-taking is necessary for any significant and enduring change to occur. And so, all therapists need a detailed understanding of relevant risks in order to discern the optimal risk-benefit ratio for the people who enlist their services.

The potential risks of music therapy vary according to the type of music experience, or method, employed. Depending on clients’ needs, they may be invited to improvise, perform, compose, or listen and respond to music. Music therapists must not only understand the inherent benefits of these various methods, but also the unique risks associated with each.

Improvising is making up music on the spot. Whereas improvising on percussion instruments in a group session may promote a sense of cohesion among adults seeking mental health treatment, for people whose connection with reality is tenuous, repetitive rhythmic sounds hold the potential to evoke psychotic reactions. And, while improvising with their voices, clients may experience various levels of emotionally-charged self-consciousness. Beyond embarrassment from using their voices and words in this expository way, the experience may evoke long-buried, unconscious memories and associations of an unpleasant or even traumatizing nature.

Image credit: “three persons playing sundry instruments at home” by Miriam Doerr Martin Frommherz. Royalty free via Shutterstock.

When clients engage in performing pre-composed music, such as a group of high school students with intellectual disabilities who learn and rehearse a piece on the tone chimes, they may feel a sense of pride in their musical accomplishments. However, they may also experience intense anxiety, dissatisfaction, and humiliation if the musical parts do not fit together as anticipated during the school assembly.

As an abused teen works with a therapist while composing an original song or instrumental piece, the energies brought to bear through musical tensions and resolutions, or the moods evoked by the sonic qualities of certain instruments, may be incongruous with the client’s lyrics. This may distort rather than clarify the client’s inner experiences, which could complicate the therapeutic process.

Finally, risks inherent to a music listening experience may include overstimulation and confusion. This would be a relevant consideration for a person who has sustained a brain injury or who has a neurologic disorder that impacts their ability to make meaning of sensory input. And, while a listener takes in music to guide their movement schemes, as when dancing for self-expression or the release of stress, there is always the risk of physical injury.

A primary responsibility of the music therapist is to intentionally design and facilitate all four methods and their innumerable variations to support clients as they face life’s trials, identify new options for acting, and build on personal and collective capacities toward enhanced health and wellbeing. Additionally, it is the therapist’s ethical duty to consider carefully the possible risks intrinsic to each type of music experience and to share this knowledge with the client as appropriate, with the ultimate aim of protecting their safety and promoting their agency in the therapeutic process.

Featured image credit: “a group of musical instruments including a guitar, drum, and keyboard” by Brian Goodman. Royalty free via Shutterstock.

The post Warning: music therapy comes with risks appeared first on OUPblog.

Published on March 19, 2019 02:30

March 18, 2019

Theranos and the cult of personality in science and tech

Elizabeth Holmes was a chemical engineering student who dropped out of Stanford to found Theranos: a Silicon Valley start-up company that, at one point, was valued at US$9 billion. Her plan was to be another Steve Jobs and, for a while, it looked like that would happen. She made the cover of magazines like Forbes, Fortune, and even Glamour, wearing black polo-neck shirts, and was touted as the next big thing. Former President Clinton was a fan. Former Secretary of State George Shultz was an investor and on the Theranos board, as were Henry Kissinger and James (Mad Dog) Mattis, who stepped down as Secretary of Defense last year.

Today, she is facing fraud and other criminal charges.

It’s a long, fascinating story. Essentially, her technology was supposed to revolutionise health care: automatically performing hundreds of blood tests on a couple of drops of blood in just a few minutes. In reality, it was no more carefully thought out than an undergraduate research project. She lied to her staff, lied to her investors, lied to her board, and lied to her company’s potential customers. Her company claimed it was using new technology to perform blood tests when, in fact, it was using the same equipment as every other lab (except Theranos got poorer results because they did not have enough blood to do the tests properly).

All this is detailed in the book Bad Blood by John Carreyrou, the Wall Street Journal investigative reporter who broke the story with the help of whistle-blowers.

What fascinates me about this saga is how the scam was able to go on for so long: more than 10 years. In that time there was an incredible rate of staff turnover (including one suicide), endless lawsuits, countless missed deadlines, and a seeming inability to finish any serious studies that demonstrated the technology worked. At one stage, Holmes was even asked to step down by members of her board but managed to talk them out of it.

The cult of personality that grew up around Elizabeth Holmes accounts for both her success and failure. By all accounts she is an incredibly magnetic person and – at least to begin with – most people found her an inspiring leader and visionary. This is what made it possible to raise millions of dollars, get such prominent people on her board, and attract a lot of incredibly talented people to work with and for her.

Steve Jobs, of course, was famous for a similar kind of charisma and bullying (also an issue at Theranos), so perhaps it’s not surprising that the ‘reality-distortion field’ associated with him was present here too. But, of course, Steve Jobs wasn’t alone. He had Steve Wozniak, the technical brains that made Apple possible. Jobs could play the visionary, safe in the knowledge that Woz (and others that followed) would look after the details.

Elizabeth didn’t have that: in fact, the cult of personality built up around her made it impossible for her Woz to emerge.


Where there is a Great Leader figure – in business, politics, tech, academia – who doesn’t have the personality or maturity to use their power wisely, there is often a predictable downward spiral. First, our hero starts to believe that their initial success comes from extraordinary talent, and that despite their lack of education or training or experience (or all of the above) they are the only ones capable of realising their vision. This strips away any humility they might have had. Even if they have a lot of great people around them, their belief in their own exceptionalism leads them to start overruling those who disagree based on instinct rather than expertise.

Experts don’t like being undervalued, so the naysayers start to leave and the yes-men take their places. They tell the boss what the boss wants to hear, true or not. This makes those left who are willing to speak out seem even more whiney in contrast, making the ‘great leader’ paranoid. Now, loyalty is the ultimate virtue: not to the original idea or mission of the company, but to the leader. Being a team player starts being equated with doing what you’re told. Because performance now comes second to obedience, mediocrity thrives. There is no voice for excellence. Ironically, of course, the people with the most integrity and genuine passion for the project are the most likely to leave, leaving the opportunists behind.

This paranoia also cultivates a culture of secrecy and centralisation, where the left hand is prevented from knowing what the right is doing for fear that they may conspire. People are discouraged from asking questions and start to hear the answer, “Because I said so.” This lack of transparency leads to poor communication, misunderstanding of problems, and inefficiency. Since staff have no agency, many feel absolved of personal responsibility. When asked to behave unethically, they feel they have no choice. This is a road not just to failure, but disaster.

We need to stop looking for the next great leader and focus on looking for great ideas. We will figure out which will succeed and which will fail not through being charmed, or going with our gut, or believing in the latest genius, but by asking intelligent and probing questions and teaming up with people who are working hard to find answers. Yes, it’s easier to sit back passively and ride on some great leader’s coat tails. If you do, however, don’t count on reaching the destination you had in mind.

Featured image credit: “828584” by Free-Photos. CC0 via Pixabay.

The post Theranos and the cult of personality in science and tech appeared first on OUPblog.

Published on March 18, 2019 05:30

The future of borders in the Middle East

The collapse of Arab regional order during the 2011 uprisings provided a chance to reconsider the Middle East’s famously misshapen states. Most rebels sought to control the central government, not to break away from it. Separatists, in contrast, unilaterally sought territorial autonomy or outright secession. They took advantage of the breakdown of security services to set up their own peripheral enclaves. In the last eight years, separatists have served as foot soldiers in the coalition that defeated ISIS. In Yemen, Libya, Syria, and Iraq, they were able to sustain local orders while national politics descended into chaos. Separatists offered alternative modes of governance. They controlled oil installations, ran irrigation networks, and provided protection to civilians that was “good enough” in the midst of brutal war. Beyond touting their service to global security, separatists invoked the Wilsonian principles of self-determination that had far-reaching impact on the region at the end of World War I. They traced their ancestry to national liberation movements that had been defeated or unjustly denied during the previous century and asked the international community to reinstate their lost sovereignty.

Kurdish nationalists lament the abrogation, at the Treaty of Lausanne, of plans for a Kurdish national homeland sketched in the earlier Treaty of Sèvres. Brief moments of Kurdish self-rule ended in brutal repression. After the 1990-91 Gulf War, Kurdish forces managed to expel Saddam Hussein’s troops from northern Iraq. Under the umbrella of the United States no-fly zone, the Kurdish leadership launched the Kurdistan Regional Government. Iraq’s 2005 constitution granted the regional government broad autonomy and legalized Kurdish security forces. Kurdish troops were indispensable in Iraq’s anti-ISIS campaign. Regional President Massoud Barzani hoped to parlay this contribution into territorial gains and even independence. In Syria, too, efforts to ensure Kurdish autonomy after World War I failed. In 2011, Syrian Kurds associated with the Democratic Union Party exploited the breakdown of government control. Although the party averred loyalty to Syria, it also proclaimed Rojava (Western Kurdistan) as an autonomous zone. The party’s troops were the largest contingent in the US-backed Syrian Democratic Forces, which fought against ISIS in Syria.


In Yemen, the Southern Resistance positions itself as the reincarnation of the southern Yemeni republic. As Yemen’s regime transition turned to civil war between the central government and Houthi rebels, Southern Resistance activists seized control in several cities. With President Abdrabbah Mansur Hadi’s “legitimate government” decamped to Riyadh, Southern Resistance militias collaborated with the United Arab Emirates to root out ISIS and al-Qaeda. In 2015 the Southern Resistance formally declared its independence. In Libya, separatist militias similarly rejected the feckless transitional government. With the country descending into civil war, they sought to restore the sovereignty of the short-lived emirate of Cyrenaica. Separatists eventually aligned with the renegade General Khalifa Haftar and joined the campaigns to oust Islamist militias from Benghazi and Derna.

Years of virtually unfettered autonomy have emboldened separatists’ demands, but the international community has always treated separatists warily. Even as the U.S. and other powers ally with separatists, the U.N. ritualistically affirms the norms of territorial integrity. But as regional wars begin to unwind, the separatist challenge has become more acute.

In 2017 the Iraqi Kurdish leadership went forward with a long-delayed referendum on independence. Ignoring warnings from allies like Iran, Turkey, and the U.S., Barzani declared that Kurds had earned the right to decide their fate. The results overwhelmingly favored independence. Baghdad and the international community refused to recognize the poll’s legitimacy. The Kurdish leadership fractured. Neighboring states blockaded the Kurdish territory. Iraqi troops forced Kurdish militias out of Kirkuk and other disputed areas. Although civil war was averted, tensions between the regional government and Baghdad remain high. The recently announced US withdrawal from Syria puts the Democratic Union Party in a precarious position. The party’s militias have no hope of standing up to the Turkish army, which opposes any Kurdish activity along its border. The party’s leaders have turned to Assad and Russia for support. The status of the Kurdish group in future constitutional negotiations is uncertain, setting up another potential round of conflict.

In Libya, separatist militias appear exhausted after years of fighting. Haftar, meanwhile, plots a march on Tripoli with the backing of Cairo, Abu Dhabi, and Moscow. The European peace initiatives for Libya so far ignore the concerns of the separatists, but any conciliation or constitutional reforms will have to reckon with their aspirations. The situation in southern Yemen is even more volatile. Upon the announcement of the December 2018 ceasefire, the Southern Resistance reiterated its intent to secede. Houthi leaders accused the Southern Resistance of trying to scuttle the peace. Still, Yemen’s functionally moribund government can do little to roll back separatist control.

Separatists stand athwart the standard approach of resolving civil wars by strengthening and empowering states. Separatists are loath to share power in states whose sovereignty they intrinsically dispute. Instead of integration, separatists will likely seek to block encroachment upon their hard-earned autonomy. Even if separatists are militarily defeated, their ambitions are recurrent, awaiting the next opportunity to regain power. In Yemen, Syria, Iraq, and Libya, then, they are poised as spoilers to a still-nascent peace.

Featured Image Credit: “060411-A-1067B-007” by Morning Calm Weekly Newspaper Installation Management Command. CC BY-NC-ND 2.0 via Flickr.

The post The future of borders in the Middle East appeared first on OUPblog.

Published on March 18, 2019 02:30

March 17, 2019

How boring was life in the British Empire?

Boredom is a pervasive problem. Teenagers suffer from it. Workers are afflicted by it. Psychologists research it. Academic conferences are devoted to it. There is even evidence that you can die of it. And while there are those who claim that boredom can foster creativity, many people would rather give themselves an electric shock than be bored.

The word itself was not used until the mid-nineteenth century (in Charles Dickens’ Bleak House). The feeling, however, saw increased expression beginning in the 1760s with the phrase “to be bored,” in the sense of being made weary by tedious conversation. This occurred, significantly, alongside the expansion and transformation of the British Empire after the Seven Years’ War.

For centuries, the British Empire has been portrayed as a place of adventure and excitement. Novels and films, from Robinson Crusoe to Lawrence of Arabia, romanticized the empire. Real-life heroes from Walter Raleigh to Cecil Rhodes were celebrated for their vision and valor.

Yet in 1896, after only one month in India, twenty-one-year-old Winston Churchill declared Britain’s largest and most important colony “dull and uninteresting.” Nor were his views unique. “The same sameness day after day,” complained George Hennessy, a lieutenant colonel serving in Kandahar in 1879 during the Second Afghan War.

Several decades earlier, John Henderson, who emigrated to New South Wales in 1838 at the age of nineteen, wrote that out in the Australian bush a man “eats and drinks, and sleeps; what then? He eats, and drinks, and sleeps again.” According to Henderson, every day was “a repetition of the one that went before.” He grumbled about “dull evenings” and cautioned that the settler’s life was “monotonous and toilsome.”

Imperial boredom affected women as well. Anna Jameson, a British writer who traveled to Toronto in 1836 to visit her husband, who had been appointed chief justice of Upper Canada, wrote about “The monotony of this, my most monotonous existence.” Similarly, Fanny Parkes, an enthusiastic sightseer who spent twenty-four years in India while her husband, an East India Company employee, was stationed in Allahabad, wrote about her “struggle against this lifeless life.”

Boredom was particularly a problem for imperial officials. “Dullness is the central characteristic of an Indian viceroy’s life,” remarked Lord Dufferin, who resigned a year before his term was up in 1888. “Routine,” noted Leonard Woolf in his diary for 10 November 1908, an entry he repeated each day for four straight days and numerous other times while serving a three-year appointment as a government agent in Ceylon.

There were many reasons why British men and women were bored in and by their empire. Some were situational, such as the grueling three-to-four-month voyage to India or Australia, the small size of British colonial communities, and the absence of familiar recreational activities. Communication problems were also legendary, especially in the pre-telegraph age when the time lag between letters sent and received meant months of waiting for news.

Image credit: The Viceroy of India, 6th Earl of Mayo (1822–1872), receives Sher Ali Khan (1825–1879), Amir of Afghanistan, at Ambala, India, 27th March 1869. Photo by Hulton Archive/Getty Images. Public domain via Wikimedia Commons.

Other factors had to do with the changing nature of the empire. The bureaucratization of imperial administration during the nineteenth century produced a marked increase in regulations and paperwork, leaving imperial officials with less influence, autonomy, and free time. Governors and administrators found themselves drowning in dispatches and frustrated by the monotony of what they thought would be a great adventure. There were also fewer opportunities to interact with indigenous people as the empire became increasingly ceremonial.

Soldiers were affected as well. Despite Britain’s many colonial wars, some regiments experienced lengthy gaps between periods of active duty. The Royal Lincolnshire, after serving in India from 1846-58, did not fight again until it was sent to Malaya in 1875-6, and then endured another long hiatus until it went to the Sudan in 1898. And the Cheshire Regiment, which fought in the Sind War of 1843, was essentially at rest for forty-five years afterwards until it saw action in Burma in the late 1880s. Soldiers stationed in the empire could go decades without participating in a single skirmish.

Military boredom was also a function of unmet expectations about personal happiness and professional fulfillment, as recruiting posters and popular accounts of past military endeavors implied that soldiers would spend their time fighting, not marching, drilling, or sitting around doing nothing. Isolation, which afflicted men in the more remote locations, contributed to feelings of boredom as well. So too did the hot temperatures and torrential rains that were the hallmarks of imperial service in India and which confined soldiers to their barracks for hours a day. Many soldiers complained about the “uniform sameness” of their daily routine, as one staff sergeant put it. He remarked that both young and old were afflicted by “languor and lassitude.”

Changes in the size of the British Empire also influenced imperial soldiering. By 1900 Britain was supervising, formally or informally, an empire on which the sun never set, and with it came massive administrative responsibilities as the predictability of policing replaced the thrill of conquest. Many soldiers who had enlisted with enthusiasm succumbed to “apathetic indifference” and a loss of motivation. Some resigned from military service simply because they were bored.

The boredom experienced by the British in the service of their empire has many parallels in the contemporary world, and not just in Britain. In 2016, British Armed Forces Minister Mike Penning, who joined the Grenadier Guards as a 16-year-old and later served in Kenya and Northern Ireland, confessed that he got “very bored” in the Army, a sentiment shared by soldiers on both sides of the Atlantic, including former US marine Anthony Swofford, who confessed in Jarhead (2003), his chronicle of the Gulf War, that his “despair” was from “boredom and loneliness.” Expats, too, have complained about getting bored with their lives overseas. And, it is clear that when businessmen are dispatched to faraway offices, the managers who send them need to be mindful that the excitement of a foreign posting can quickly dissipate, undermining morale and productivity.

Finally, there is the tragic story of the two Navy SEALs who died of a heroin overdose on board the Maersk Alabama, one of the largest cargo ships in the world, which they had been hired to protect from pirates. Boredom, the men had told their friends, was the real enemy at sea, and perhaps of empire-building more broadly.

Featured image: British Empire map, 1886, printed on linen. From the British Library. CC0 via Wikimedia Commons.

The post How boring was life in the British Empire? appeared first on OUPblog.

Published on March 17, 2019 05:30

The case for citizenship for US immigrants serving in the military

The United States has a long history of immigrant military service. Immigrants who serve in the armed forces during declared hostilities, including the period after 11 September 2001, are eligible for expedited naturalization. However, those who naturalized through military service since 24 November 2003 are vulnerable to potential revocation of their US citizenship. This presents unique and unacceptable risks for non-citizens who volunteer to serve in the United States military beyond the already grave dangers that all service members agree to bear.

Immigrant military personnel are in need of the protection of the United States beginning when they take an oath to “defend the Constitution of the United States against all enemies, foreign and domestic” and to “bear true faith and allegiance to the same” as part of the enlistment process. Despite the risks that come with swearing allegiance to the US and serving in its military, non-citizen enlisted personnel are not assured of US citizenship. Those who assume the risks of volunteering for foreign military service deserve protection by the state that is recruiting them.

Once immigrant military personnel pass security checks, swear allegiance to the United States as part of their oath of enlistment, and begin to serve in their adopted nation’s defense, they should receive unconditional naturalization for both their loyalty and service. Currently, conditional naturalization does not adequately protect immigrant military personnel who face possible revocation of citizenship in their country of origin if they naturalize elsewhere. Most naturalized citizens who lose their nationality of origin by naturalizing elsewhere are in minimal danger of losing their newly acquired US citizenship, unless they commit fraud during the process. Armed forces personnel who naturalize through one of the military naturalization statutes can also lose their citizenship if they receive an other than honorable discharge from the military for any reason during their first five years of service. Immigrant military personnel should not be prone to denaturalization after they have served in their adopted country’s defense. Nor should they be without protection from subsequent vulnerabilities to deportation and statelessness.

Immigrants who earn their naturalization through military service are at once exemplary and vulnerable citizens. They are commended by other citizens for their service but uniquely prone to denaturalization. Here, the danger comes from state actions that are overly responsive to fears about immigrant loyalties. In an all-volunteer army, everyone who enlists takes on risks and obligations not shared by their fellow citizens, who rely upon them for their nation’s defense. By asking the United States not to deport or denaturalize them, immigrant military personnel are invoking a basic moral claim to citizenship as reciprocity.

In the process, immigrant military personnel are invoking traditional questions of political obligation in new ways. They are not asking for a justification for why they should be required to serve their adopted country and potentially sacrifice their lives in its defense. This question is more relevant to states that still conscript their civilian residents to serve involuntarily in their militaries. In an all-volunteer military, both immigrant and citizen military personnel and veterans willingly agree to take on demanding civic obligations. The question here is not what they owe the state, but what the state and its citizens owe military personnel and veterans in return for their loyalty and service.

Given immigrant service members’ allegiance, service, and the risks they incur on behalf of their adopted country, the US government and its citizens bear primary moral responsibility for their welfare. Those who enlist in the US military as non-citizens may be required to go to war with their country of national origin. As a safeguard against any repercussions they may face from their country of origin, and as a reward for voluntary service that extends beyond what is required of civilian US citizens, immigrant military personnel need and deserve the protection of irrevocable US citizenship.

Featured image credit: POM/DLIFLC Memorial Day Ceremony 2017 by Presidio of Monterey. Public Domain via Flickr.

The post The case for citizenship for US immigrants serving in the military appeared first on OUPblog.

Published on March 17, 2019 02:30

March 16, 2019

Seven reasons why failure is impossible for feminists

In 1906, an 86-year-old woman greeted a room full of suffragists who were still fighting for the right to vote. Susan B. Anthony made her last public statement: “But with all the help with people like we have in this room, failure is impossible.” She died a month later, and it took until 1920 for women to finally be able to vote. In this era of a president who is proud to be a pussy-grabber, it is understandable for feminists to still get discouraged. Yes, lawmakers are trying to restrict a woman’s bodily autonomy through drastic anti-abortion laws. Yes, the stories of sexual assault/harassment are depressing because of their ubiquity, as seen in high-profile cases such as R. Kelly’s and the millions of MeToo posts. However, Susan B. Anthony’s defiant statement that “failure is impossible” still rings true. Here are seven reasons why her optimism is still spot-on.

1. The 2018 election results. Now 127 women serve in the House and Senate. In several states, women have also taken leadership roles. They are facing down the hostility of those who would rather insult them than listen to them. Fortunately, these women are following the dictum: You are either at the table or on the table.
2. The heritage of the suffragist movement, which was honored by the Congressional women in white. We are excited to celebrate the upcoming centennial of the Nineteenth Amendment in 2020. It took over a century for women to get the vote. Men in power did not graciously hand out the right to vote as a present; thousands of women had to fight for it. Alice Paul, for instance, went on hunger strikes and was force-fed with a metal contraption—thus earning the title “Iron Jawed Angel.”
3. Ruth Bader Ginsburg, the exemplar of a strong-willed woman. Ginsburg continues to inspire us. She has become known as the Notorious RBG for her long history of legal battles for gender equality and other progressive causes. A recent documentary about her life and career was nominated for an Academy Award, presenting an opportunity for more people to learn from her example.
4. The younger generation and their enlightened approach to equality and the nonbinary nature of gender. Traditional norms, which punish persons who cross the gender line, may become obsolete as we embrace our trans brothers and sisters. Other nonbinary persons are also coming out in public to be recognized for our shared humanity.
5. Men, both teens and adults, who are standing up to toxic masculinity. The “It’s on Us” campaign, for example, encourages males to step up if they see harassment or other wrongs. Gillette called out toxic masculinity in a much-discussed commercial. President Obama has recently spoken out about masculinity: “If you are very confident about your sexuality, you don’t have to have eight women around you twerking.”
6. Women of color, who are ensuring that the voices of all females are heard. Intersectional feminism is well represented by women such as Tarana Burke, who started the MeToo movement in 2006 for young girls (mostly of color) in Alabama. Native American women, once invisible in the mainstream media, are now gaining recognition through the election of two Native women to Congress. Deb Haaland and Sharice Davids are challenging the “Indian princess” stereotype of a submissive female—Davids even featured her martial arts in campaign ads.
7. Changing norms around respecting women in public spaces. Women in the “Hollaback” movement have confronted harassers in the U.S. while an international movement has grown in India and other countries. No longer should a woman be hesitant to walk down a sidewalk because of men telling her to smile or worse. No longer should a woman be called a “bitch” if she does not smile or accept the “compliments.” Society is also rethinking the derogatory language of slut-shaming and other destructive practices. Women are continuing to demand respect because, like the right to vote, men in power will not graciously hand it to them.

Failure is impossible. These are words not only of hope but of joy because the future has so much potential. Whether I talk to a young person who is excited about making a change or a veteran activist who has endless perseverance, I know that equality is not only possible but probable. We are honored to continue the fight of the suffragists.

Featured image credit: Neon light by Sarah McKellar. Free for public use via Unsplash

The post Seven reasons why failure is impossible for feminists appeared first on OUPblog.

Published on March 16, 2019 05:30

When a river is dammed, is it damned forever?

Since the dawn of advanced civilizations, humanity has sought to manage the flow of rivers. Protection from floods, water for drinking and irrigating crops, and extraction of resources like food and energy are among the most popular reasons for building dams. Early successes in controlling the flow of rivers for society’s benefit led to more construction, reaching the point that in Europe and North America today most prime locations for placing dams have been taken. Additionally, many large river systems in the developed world have at least one but often multiple dams on them. Estimates suggest that water in reservoirs has increased the world’s lake environments by nearly ten percent. In parts of the developing world, new dam construction is now booming as well, as emerging economies try to make up for lost time.

As the dam construction boom continued in Europe and North America, policymakers seldom considered the full ecological costs. This was in part because the environmental costs of harnessing rivers were poorly understood. We now know that a myriad of physical, biological, and water quality impacts are associated with placing a dam on a river. Dams alter riparian and downstream habitats, replace sections of river with impounded lakes, and disrupt the flows of sediment, organic materials, and the migration of aquatic organisms like fish. Altering natural flow regimes also disrupts complex interactions among geomorphic, fluvial, and biological processes that are critical to ecosystem function.

Now that the ecological costs of dam construction are better understood, research has led some policymakers to implement new technologies to minimize impacts, such as better situating dams on the landscape, changing flow regimes to accommodate downstream organisms, and creating fish passage structures that allow fish to swim upstream and downstream.

In addition to mitigating impacts of current and future dams, society is also dealing with some existing dams by removing them. This is particularly the case for old, hazardous dams or those no longer serving their original purpose. In the United States, more than 1,400 dams have been removed from rivers; likewise, there are many completed or planned dam removal projects in Europe. Over the past three decades, researchers have been studying the outcomes of about 9% of these removals, which leads us to better understand the question posed above: “When a river is dammed, is it damned forever?” In other words, can natural functions of river ecosystems be restored after dams are removed? If so, how long does it take? And can these ecological outcomes be predicted before dams are removed?

Research suggests that river ecosystems—when given the opportunity—can recover after dams come down. Importantly, the trajectory of ecological recovery can be predicted. Although ecological responses to each dam removal are different because of unique local and regional conditions, including the dam’s size, location in the river network, and the river’s history and surrounding land uses (such as agricultural and urban development), the physical and biological processes that govern how ecosystems respond to dam removal are similar. Recent empirical studies and established theories about how rivers work provide a framework for identifying these shared physical and biological processes, and in so doing, create a template for predicting how the ecology of a river responds to dam removal.

In a recent BioScience article, we constructed conceptual models that elucidate these shared physical and biological processes. These models can help resource managers, river restoration practitioners and other stakeholders identify the major factors likely to control ecological responses to future dam removals. By identifying the most important factors involved with ecosystem response, this approach should also help scientists predict ecological outcomes and identify which variables should be monitored. In turn, the knowledge gained from such monitoring can be used to further refine our conceptual understanding, and our ability to predict outcomes of future projects.

Conceptual models and a growing number of empirical studies suggest that rivers can indeed substantially restore functions lost from having been dammed. But the structure and function of the ecosystem may not be the same or even similar to what existed before dam emplacement. For instance, damming rivers causes changes in ecological communities, as native species adapted to flowing waters are replaced by nonnative species that thrive in reservoirs. Therefore, the ecological communities that assemble following dam removal may be different than those that existed before the dam was constructed. Managers and practitioners can use conceptual models to help stakeholders and community members understand the management context and potential range of ecological responses to dam removal and therefore the most likely future conditions, thereby informing expectations for ecological recovery.

Featured image credit: Glines Canyon Dam and Lake Mills post dam removal. CC0 via U.S. Geological Survey.

The post When a river is dammed, is it damned forever? appeared first on OUPblog.

Published on March 16, 2019 02:30

March 15, 2019

Beer before wine – can we avoid hangovers that way?

As St. Patrick’s Day approaches, many dread the incapacitating hangover of the day after – when the nausea hits you and you cannot do anything but lie in bed and every movement worsens your pounding headache. Wouldn’t it be helpful to have ways to lessen the burden of alcohol-induced hangover?

A hangover is a complex of symptoms following an evening of heavy drinking that includes thirstiness, fatigue, headache, nausea, and dizziness. Even though we are more than familiar with its symptoms, scientists still don’t fully understand all the mechanisms that lead to alcohol-induced hangover and medical remedies are yet to be found.

Some would advise not to drink at all, but on a day like St. Patrick’s Day, for many, that’s not a valid option. Others endorsing tactical drinking often rely on age-old wisdom such as “Beer before liquor, you’ve never been sicker – Liquor before beer, you’re in the clear!” or “Beer before wine, and you’ll feel fine – Wine before beer and you’ll feel queer!” The Irish would probably suggest just sticking with their tasty dry stout. But is there any evidence for the idea that the order in which you drink alcoholic beverages can affect how you feel the next day?

In a recent study, German researchers undertook a randomized clinical trial to test the effect of the order of beer and wine consumption on the next day’s hangover severity. The researchers enrolled 90 volunteers and matched them for gender, age, height, weight, drinking habits, and hangover frequency, before randomizing them into three groups – two study groups and one control group.

Participants in study group 1 enjoyed Carlsberg premium Pilsner up to a breath alcohol concentration of 0.05% (the legal driving limit in Germany) and were then switched to white wine until they reached a happy 0.11%. Study group 2’s regimen was the other way around. On average, each volunteer consumed three large beers plus almost a bottle of wine in each study group. A third group, the controls, drank either just beer or just wine.

Researchers assessed hangover severity the next morning when breath alcohol concentrations had returned to zero. After a clear-out phase of one week, volunteers returned for a second evening and switched their alcohol consumption regimen.

Interestingly, the team did not find any truth in the old folklore indicating that drinking beer before wine would lead to a milder hangover, undermining all efforts at tactical drinking. Women reported slightly worse hangovers compared to men, but neither age, sex, body weight, nor even drinking experience was a good predictor of how severe the hangover was the next morning.

On a side note, for the whiskey lovers – be warned. An earlier study compared bourbon with vodka and showed that the less-distilled bourbon significantly worsens hangover severity the next day. Since the study did not cover Irish single malt whiskey, that might be worth a try – in the spirit of this holiday.

For now, the authors conclude that only perceived drunkenness and the personal gut feeling—in its truest meaning—are good predictors of the hangover the following day. And they remind us that a hangover is an important body reaction, telling us that excessive drinking is unhealthy.

Featured image credit: South Rims Wine and Beer by Scottb211. CC BY 2.0 via Wikimedia Commons

The post Beer before wine – can we avoid hangovers that way? appeared first on OUPblog.

Published on March 15, 2019 05:30
