Oxford University Press's Blog, page 171

October 31, 2019

Seven things you didn’t know could kill you

Medicine has advanced so much over the years that it's hard to believe some diseases still exist or have no cure. Commonly known conditions such as cancer, stroke, and heart disease are scary enough, but there are plenty of other conditions that are potentially deadlier. From the Black Death to Stone Man Syndrome, here are some lesser known medical killers, many of which have no cure.

Commotio Cordis
Commotio Cordis refers to a severe cardiac arrhythmia, triggered by a blow to the chest directly over the heart. This can lead to sudden cardiac death. It occurs most often in organized sports, either in the form of a projectile (baseball, hockey) or in contact sports – for example, by taking a knee to the chest in football, getting punched in boxing, or getting kicked by a horse. Internationally, most cases occur in football. Commotio cordis accounts for around 20% of sudden deaths in US sports.
Necrotizing Fasciitis
Commonly known as the flesh-eating disease, necrotizing fasciitis is an infection that causes the death of soft tissue in the body. This serious disease spreads through the body rapidly, so treatment is needed urgently, but the clinical signs and symptoms are often nonspecific, such as fever, chills, and pain. Even with treatment, two in every five cases are fatal.
Amyloidosis
Amyloidosis is a rare disease caused by an abnormal build-up of a protein called amyloid. The build-up of this protein can make it difficult for organs and tissues to work. Without treatment, the disease is fatal within five years, usually due to heart or renal failure. A common treatment is chemotherapy, but even that is not a cure because the amyloid deposits cannot be removed directly.
Creutzfeldt-Jakob Disease
Creutzfeldt-Jakob disease is a rare and fatal disease that affects the brain and is usually characterized by rapidly progressive dementia. There is no cure for the disease. Treatment aims to make the affected person as comfortable as possible. Death occurs within a year for 80% of patients.
Fibrodysplasia Ossificans Progressiva
Fibrodysplasia Ossificans Progressiva, or Stone Man Syndrome, is an extremely rare human genetic disease, affecting one in every 2 million people. It is a disorder in which muscle tissue and connective tissue such as tendons and ligaments are gradually replaced by bone, forming bone outside the skeleton that restricts movement. This incurable disease causes most people to become immobilized by the third decade of life and to die before age 40.
African Trypanosomiasis
African Trypanosomiasis, also known as sleeping sickness, is an insect-borne parasitic disease transmitted to humans and animals by tsetse flies. The first case reports of the disease go back to the 14th century. Its impact in Africa has been enormous. Many areas were long rendered uninhabitable for people and livestock. During the early decades of the 20th century, millions may have died in Central Africa around Lake Victoria and in the Congo basin.
If caught early enough, the disease can be cured. In the late stages of the disease, however, treatment is difficult and dangerous; all of the drugs used are toxic and have many side effects, some potentially lethal.
Bubonic Plague
Few diseases have impacted human history and culture like the plague. It has been credited with the fall of great empires, inspired literature and art, and shaped the way we view disease. Plague persists today, although with few cases and infrequent outbreaks around the globe. In the United States, on average, only seven human cases are reported each year.
Bubonic plague, or the Black Death, is the most common form of plague, characterized by buboes (painful, swollen lymph nodes that are tender to the touch). If untreated, the bacteria quickly invade the bloodstream, and without antibiotics death can occur in 2-6 days.

As worrying as some of these diseases are, the chances of contracting one are very slim. As medicine continues to progress and awareness grows, perhaps we will eventually see advances in treatment or even the eradication of these diseases.

Featured image credit: “exit signage” by perry c via Unsplash.

The post Seven things you didn’t know could kill you appeared first on OUPblog.

Published on October 31, 2019 02:30

October 30, 2019

An etymological aid to hearing

As promised, I am continuing the series on senses. There have already been posts on feel and taste. To show how hard it may be to discover the origin of some of our most basic words, I have chosen the verb hear. Germanic is here uniform: all the languages of this group have predictable reflexes (continuations) of the ancient form hauzjan. Tracing the way from hauzjan to Engl. hear, German hören, Dutch hooren, and Icelandic heyra is not worth the trouble, because all the changes from the protoform are regular, that is, they obey some well-known phonetic rules. Gothic, recorded in the fourth century, had hausjan “to hear.” If hear means and has always meant “to perceive sound,” what impulses made people coin it? In one post after another, I celebrate the role of onomatopoeia. Here is a linguistic event in which we could expect reference to sound, noise, or at least vibrations, but, alas, neither hauzjan nor hausjan produces any vibrations worth mentioning.

He heard the whole world. Ludwig van Beethoven by Joseph Karl Stieler. Public domain via Wikimedia Commons.

Yet one easily notices something else: hear rhymes with ear, and the rhyme is ancient (the Gothic for “ear” was auso). Is it possible that haus- (-jan, the ending of the infinitive, need not bother us) is aus with some mysterious prefix? After all, if s-mobile exists, why not h-mobile? Such a solution would leave us with the question about the distant origin of aus– “ear,” but this question exists anyway, and it would be better to have one riddle rather than two. Since many learned people refuse to believe that hear and ear are so strikingly similar by chance, the hunt for the origin of h- began long ago.

S-mobile may be a nuisance, but it occurs in dozens of words, while the prefix h does not exist. To anticipate a tempting hypothesis, it should be said at once that h-dropping (which presupposes the loss of initial h and the addition of this sound where it does not belong, as in the hatmosphere of the hair and ‘Arry for Harry) did occur in Old Germanic but was rare (nothing to compare with perhaps the most famous feature of the Cockney dialect), and reference to this process in our case cannot even be considered.

In searching for the origin of ancient Germanic words, we try to find their cognates outside Germanic. When it comes to consonants, our guide is a set of correspondences, known as Grimm’s Law, or the First Consonant Shift. According to it, where Germanic has f, th, and h, Sanskrit, Greek, Latin, Celtic, and Slavic are expected to have p, t, and k. Is there a non-Germanic word for hear that begins with k? Sort of, as they say. The Greek for “I hear” is akoúō. Assuming that a- is, from a historical point of view, a prefix, koú looks like a semi-respectable partner of Germanic hau-s, especially if we remember Greek a-kous-iō “I listen attentively,” which has s in the root (and which we know from acoustics). But this bold dismemberment of the Greek words makes one feel uneasy, for, among other things, the “prefix a-” here is no less enigmatic than h in hauzjan. In this connection I wish to cite, not for the first time, an important law (I call it important, because it was I who formulated it): correct etymologies are rarely, if ever, stunningly complex.

Audition: It is all about hearing. The audition by Jacques Wely. Public domain via Wikimedia Commons.
With such ears one can afford to be stubborn. Image by Håkan Dahlström, CC BY 2.0.

There also is Latin audire (cf. Engl. audible and audition) and auscultare (recognizable from Engl. auscultation). Both mean “to listen” (not “to hear”!), and both contain au. Auscultare, rather obviously, consists of aus– and –cultare, with aus– being an exact counterpart of Gothic aus-o “ear,” rather than haus-, with its indestructible initial h.

So far, the path from hear to Greek and Latin has led us nowhere. But one notices Greek koéō “I observe,” with a few related Latin words, familiar to English speakers: consider cavēre “to beware,” as in caveat, caution, and kudos (the latter from epic Greek, meaning “glory,” literally “that which is heard of”).

Slavic too has a few related forms, and so do Old English and Old Icelandic, but, surprisingly, the Germanic cognates point to sight, rather than hearing! Engl. show and sheen belong here, both with old s before k in sk-, which became sh– (movable s, s-mobile of course!). This comparison is fairly suggestive, and, if it is correct, it teaches us an important lesson: when we search for the related forms of the words designating senses, we shouldn’t hunt for exact synonyms. In the recent discussion of feel and taste, we kept running into “touch,” and, if hear has any good cognates, they need not mean “hear”: “observe,” “perceive,” “be cognizant of” are good enough.

The root haus (from Indo-European kaus) might indeed mean something like “take note of,” but why this complex was endowed with such a meaning remains a puzzle, and we would still want to understand the reason for the incredible similarity between hear and ear. Is it due to chance? We’ll never know.

Sam Weller, perhaps the most famous h-dropper ever. Home School of American Literature, via the Internet Book Archive. No known copyright restrictions.

The word ear, contrary to the verb hear, has unquestionable cognates and, along with eye, foot, heart, nose, and nail, goes back to the oldest stage of Indo-European. Since hear has rather dubious relatives outside Germanic, while the ancestry of ear has been established beyond reasonable doubt, a union between hear and ear begins to look even less credible than before. To be sure, we can suppose that some ancient verb meaning “hear” was altered by popular usage to sound like the name of the organ of hearing, but, since this hypothesis cannot be proved, there is no point discussing it.

By way of conclusion, it should be added that it is not only a word like hear that need not have meant “to perceive sound” when it was coined. The names of physical defects may be equally non-specific. It is quite likely that the Greek cognate of deaf is tuphlós “blind.” The two words are connected in that they mean “confused, embarrassed, unable to react.” Their later development is also characteristic: in several Germanic languages, the verbs derived from the root of deaf mean “to be mad, crazy; to rage; to debilitate.” The change of dumb from “unable to speak” to “stupid” is known only too well, and that is why the older term deaf and dumb has yielded to deaf and mute.

Deaf, blind, and so forth may be amazingly obscure, because they were subject to taboo. People were afraid to pronounce some words for fear of inviting the objects, diseases, and wild animals designated by such dangerous words: for example, you will say deaf and become deaf! Sounds would be transposed or changed in them. Such garbled words defy the efforts of the most ingenious etymologists. It does not seem that see and hear, along with the names of the corresponding organs, fell victim to taboo. Yet, as follows from the story told above, their origins are also hidden better than one could wish for.

Feature image credit: In the ear hearing aid. CC0 1.0 Universal Public Domain, via Wikimedia Commons.

The post An etymological aid to hearing appeared first on OUPblog.

Published on October 30, 2019 05:35

How Brexit may have changed Parliament forever

During 2019, the Brexit process has radically changed the dynamics between the prime minister and the House of Commons. Normally the United Kingdom’s government, led by the prime minister and her Cabinet, provides leadership, and drives and implements policy while Parliament exercises control over the government by scrutinising its actions and holding it to account. This is a carefully balanced relationship, although a government with a strong majority can dominate decision making in the House of Commons.

For a government with the slimmest of working majorities, and now no majority at all, piloting Brexit through Parliament has until now proved unachievable. “Impasse” and “kicking the can down the road” have become the most overworked expressions of the Brexit process, but they aptly describe the deadlock, delays and divergent views that have characterised it. The Brexit timeline is well known but briefly, the United Kingdom began the withdrawal process (known as triggering Article 50) on 29 March 2017 and was set to leave the European Union on 29 March 2019, a date enshrined in law by the European Union (Withdrawal) Act 2018. In the interim, Theresa May, then prime minister, negotiated with the European Union a withdrawal agreement and a non-legally binding political declaration on the future relationship between the United Kingdom and the European Union.

The European Union repeatedly made it clear that the withdrawal agreement was not open for renegotiation, but MPs in the House of Commons overwhelmingly rejected it three times. The first rejection in January 2019 saw the government defeated by a historic 230 votes. Then over three days between 12 and 14 March, MPs not only rejected the withdrawal agreement again by 149 votes, but voted against leaving the European Union without a deal, and resoundingly supported an extension to the withdrawal date to 30 June. Criticising Parliament for avoiding making a choice and simply saying what it did not want, May was forced to request an extension to avoid the United Kingdom leaving the European Union without a deal, and the European Council offered a delay until 12 April (or 22 May if the withdrawal deal was approved).

Then, in an imaginative response which set a modern constitutional precedent, Parliament attempted to take over the driving seat and assert control over the Brexit process. Normally, the government controls the House of Commons timetable, but Sir Oliver Letwin MP proposed that MPs take control of Commons business on specified days through a mechanism known as business motions which can be used to change the timetable. David Lidington, then Minister for the Cabinet Office, said that the Letwin proposal “would overturn the balance between Parliament and the Government” but it was successful, enabling two things to happen: MPs would have the chance to vote on their preferred alternatives to the government’s withdrawal deal (known as indicative votes), and Yvette Cooper MP could introduce draft legislation seeking to prevent a no-deal exit on 12 April. A proactive, assertive House of Commons was stepping on to the government’s turf of steering policy and negotiation.

MPs aimed to find consensus, but when they voted on 27 March and 1 April, none of the options secured a majority. In any event, the votes were not legally binding, so they would not have committed the government to adopting the outcome. On 29 March, MPs rejected the government’s withdrawal agreement for a third time, and Theresa May said that she feared they were “reaching the limits of this process in this House.”

With the risk of a no-deal exit on 12 April increasing, the Commons passed Yvette Cooper’s legislative proposals by one vote (despite one MP’s misgivings about what he perceived as their “constitutional flaws”), and they subsequently came into force as the European Union (Withdrawal) Act 2019. Effectively, this enabled the Commons to require the prime minister to request an extension to avert a no-deal exit, although May actually requested an extension three days before the Act came into force. On 11 April, Mrs May and the European Council agreed a delay until 31 October.

As Theresa May argued before the Commons on 11 April 2019, this is not normal British politics. On 24 May, she announced her resignation, and in July she was succeeded as prime minister by Boris Johnson. The Brexit process has created a battle of wills between government and Parliament which distorts the delicate balance between them, and subsequent events have shown that over-assertiveness on either side risks damaging conflict.

Featured image credit: “The iconic british old red telephone box with the Big Ben at background in the center of London” by ZGPhotography. Royalty free via Shutterstock.

The post How Brexit may have changed Parliament forever appeared first on OUPblog.

Published on October 30, 2019 02:30

October 29, 2019

How to communicate with animals

More than ever, humans need to find new ways to connect to other animals. In the United States alone, over 150 million people have a pet. There are over 10,000 zoos worldwide, with each of the larger zoos having several thousand animals. These and millions of other animals rely on humans for their survival. It wasn’t always like this. When humans were hunter-gatherers, we were immersed in animal life and had deep respect for other animals who didn’t need us to survive.

This changed when humans began to domesticate other animals thousands of years ago. Humans are not going to become hunter-gatherers again. But does that mean we need to experience our relationship with other animals as our domesticating them, as our taking care of them? What we mostly have now are two kinds of parent-child relationships with animals. One kind is the domineering parent to the submissive child. To make wild animals safe, they are made to be submissive, like “breaking” a wild mustang. The other kind is the loving nurturant parent to the dependent child…even when they are adult animals. This is the most common relationship today.

Is there another way we can relate to animals today? Yes! We can work to be animal whisperers.  How can we do this? By experiencing shared reality with them in an adult-adult relationship. We have all heard about horse whisperers. The true horse whisperers signal to the horse that what matters to the horse also matters to them. Over time this builds trust. The horse and the whisperer become partners.

And it is not just horses. Many humans have dogs whom they love and care for. But does that make them dog whisperers? Dog trainers recognize as the highest level of human–dog relationship the adult–adult relationship. What is this? A dog and its human partner approach another dog. The dog wants to play with this other dog. But it first looks to check how their human partner is reacting to this other dog. Is my partner having a positive or a negative reaction to this other dog? If positive, then approach. If negative, then avoid. Importantly, the complementary situation for the human also occurs. Is my dog, my partner, having a positive or a negative reaction to this other dog? If positive, then approach. If negative, then avoid. This is an equal, adult–adult relationship where the human and the dog as partners learn from each other how to react to a third party. From the beginning, they know this third party matters to both of them. They have shared relevance.

This adult-adult relationship with another animal is not just about the animal learning to trust the human. It is about the human taking turns with their animal partner about who takes the teacher role and who takes the learner role. It is about humans trusting their animal partner as someone to learn from. And it is even more than this. In this way, humans can learn from their animal partner about what matters in the world, what in the world is worthy of attention. The humans should signal their respect for the animal, their gratitude that the animal has taught them something about the world that they did not know before. This is the kind of respect of and learning from animals that we had as hunter-gatherers. Simply put: We should say thank you!

How can this be done with a partner who does not have language or the knowledge that we have? Most of us have already had this experience with a partner who does not have language or the knowledge that we have…and they are called children. Most of us have experienced seeing the world in a new way with a very young child who directs our attention to something we overlooked or ignored that the child teaches us is worthy of our attention. We all know how magical that can be. And when we partner with a child to learn from that child what matters in the world, the child experiences that he or she is being respected by us, is someone we can learn from. It is no longer just a parent to child interaction. And we should thank them too.

Our animals can do the same for us if we treat them with respect and pay attention to what they know that we do not—creating a shared reality with them when they are the teachers and we are the learners. And we can initiate shared reality interactions by looking for and noticing what is grabbing their attention that we could learn about. Moreover, it is not just learning from pets. We can also learn from our billions of farm animals. Have you ever seen a cow jumping for joy in the fresh spring grass? It makes you appreciate fresh spring grass in a new way.

We all need to become animal whisperers. That will be great for the animals. It will be great for our relationships with them. And it will be great for our learning about the world in which we live.

Featured Image Credit: Image by Drew Hays via  Unsplash

The post How to communicate with animals appeared first on OUPblog.

Published on October 29, 2019 02:30

October 26, 2019

Brexit: when psychology and politics clash

With the recent publication of the UK Government’s Yellowhammer document outlining the financial disaster forecasted for Brexit, it would seem reasonable for people who voted to leave the European Union to change their opinions. Psychological research, however, suggests that once people commit to a decision, albeit a bad one, they are reluctant to change their minds.

Why do so many British voters still want to leave the EU? Brexit was supposed to be about sovereignty, but it is really about ownership and taking back control, as championed by the political populism that is sweeping across the Western world. Most people aren’t populists, but they can easily become so. One reason is uncertainty about the future, which makes people more inclined toward the far right. In their analysis of the current political environment, psychologists Karen Stenner and Jonathan Haidt concluded that a third of adults across Europe and the US were predisposed to authoritarianism, while 37% were non-authoritarian and 29% were neutral. However, when we feel we are under threat or perceive that our moral values are being eroded, we seek reassurance from leaders who articulate strong, resolute visions.

This hypothesis received support in a study of 140,000 voters, across sixty-nine countries over the past two decades, which revealed that those experiencing the greatest economic hardship voted for populist candidates unless they reported a strong personal sense of control. However, economics still does not explain why the populists also received the support of the predominantly rich, white males for whom hardship was not a primary concern. Ronald Inglehart, a political scientist, argues that, in addition to economic inequality, we are witnessing the effects of a “silent revolution” backlash brewing among the older generation, who see social changes in the younger generation as a threat to their traditional values.

In his analysis of the shifting political landscape, Inglehart discovered that the economic hardship account could not explain all the data he analysed from the demographics of voters for 268 political parties in 31 European countries. A silent revolution would explain why older members of society vote for populist politicians. Each generation wants to take back control of the values they hold most dear from the current generation who they believe are squandering them. Inglehart concludes that “these are the groups most likely to feel that they have become strangers from the predominant values in their own country, left behind by progressive tides of cultural change which they do not share.”

Voters are also unduly fearful of the future. In virtually all the key dimensions of human well-being, life is much better than it was only a few hundred years ago. And yet most of us think the world is going to hell in a handcart. This is a phenomenon known as declinism – the belief that the past was much better than the present. Most citizens in prosperous countries overwhelmingly believe that the world is getting worse. Declinism is a distorted perspective that plays into the hands of right-wing politicians who stoke the fires of nationalism and protectionism. The reasons for declinism are many, from various biases in human cognition (including rose-tinted nostalgia and the tendency to pay greater attention to future dangers, especially if you are already wealthy) to the fact that bad news gets more coverage than optimism. Social media amplify these concerns by providing a constant stream of unfiltered perspectives and biases.

Declinism explains why extreme actions and politicians seem warranted when you hold unreasonable fears for the future. In 2012, YouGov reported that most UK citizens they surveyed thought that since the Queen’s coronation in 1953 Britain had changed for the worse, with the greatest proportion endorsing this negative view among the over sixties.  However, when the pollsters asked whether the quality of life for the average person had improved, respondents overwhelmingly agreed that it had. People can objectively recognize better healthcare, better education, and a better quality of life but this does not translate into an appreciation that things are getting better overall. When asked whether the world was getting better just before the Brexit referendum, only 11% thought the future would be better, with 58 per cent saying that it was getting worse. Again, the older participants were, the more pessimistic their responses.

Ultimately, Brexit comes down to tribalism and ownership. The trouble with ownership is that we become irrational when we feel threatened by loss. Who we are is not simply our bodies and minds, but our property, our relationships, our jobs, our opinions and our beliefs. And if someone threatens to take any of them away, we fight to keep control even when it is not in our best interests.

Featured image credit: “EU United Kingdom” by Elionas2 via Pixabay.

The post Brexit: when psychology and politics clash appeared first on OUPblog.

Published on October 26, 2019 02:30

October 24, 2019

How birth shapes human existence

Many classic existentialists— Albert Camus, Simone de Beauvoir, Martin Heidegger—thought that we should confront our mortality, and that human existence is fundamentally shaped by the fact that we will die. But human beings do not only die. We are also born. Once we acknowledge that birth as well as death shapes human existence, existentialism starts to look different. The outlines of a natal existentialism appear.

Let’s look at this first in relation to Beauvoir, who writes in The Ethics of Ambiguity that “every living moment is a sliding towards death.” It is not simply that we are all ageing and so getting closer to death. We are constantly pursuing projects and creating values—for instance, I’m writing this blog piece, thereby giving value to the activity of writing philosophy. But, Beauvoir thinks, my projects risk being brought to nothing when I die. When I die, either my projects will be left unfinished or, if I had completed them—say, by finishing writing a book—I will no longer be there to invest this achievement with value and meaning. The book I labored over will end up a mere dusty tome languishing on a shelf. For Beauvoir, then, death threatens the meaningfulness of one’s existence. But we can combat this threat by sharing our projects and values with others, who can then take up and continue these projects even after we die.

However, things look different once we remember that we were all born. Having been born, I began life helplessly dependent on care from the adults around me, and very unformed and immature. I began straightaway to absorb the culture of the part of the world I was born into, partly by learning it from the adults I depended on.

So whenever I pursue a project, such as writing this essay, I do not take it up just out of nowhere. I have always-already been involved in particular projects which I have taken on from the others who have influenced me. For instance, I have always read voraciously, something I took over from my mother. Whenever I read, something of my relationship with my mother is kept alive.

In pursuing particular projects, then, we are always carrying forward meanings embraced by others before us. Since those people in turn acquired their projects from others who preceded them, we are always carrying forward webs of meaning that have come down along chains of predecessors. Thus, as beings that are born, we never create meaning and value from scratch as single individuals. We receive and inherit value and meaning from others. It is these webs of inherited meaning that death threatens—and the threat only matters because we are already attached to these meanings, by virtue of having been born.

Now let’s look at Camus’s novel The Outsider (L’Étranger). It opens starkly: “Today mother died. Or maybe yesterday, I don’t know. I got a telegram from the home. ‘Mother deceased. Funeral tomorrow. Deep sympathy’. That means nothing. It could have been yesterday.” The narrator, Meursault, goes on to kill an unnamed Arab whom he perceives to have been threatening him. Meursault stands trial and is sentenced to death. During the trial he is portrayed as a monster, a man devoid of normal human emotions, values and attachments.

Yet Camus sees Meursault as a hero of sorts. Meursault confronts what Camus regards as the truth about the world—that there is no inherent value in anything: value only ever exists insofar as human individuals create it by choosing to invest certain acts or things with value. For Camus, then, there is no inherent value in life or in being alive. Meursault recognises this: he sees that there is nothing inherently sad about his mother dying and nothing inherently wrong about his killing the Arab. These would be sad and wrong only if he chose so to regard them—and he does not.

But, contrary to Camus, there are values that Meursault has inherited by birth. As a working-class white Algerian, he is estranged from both the native Arab population and the colonial elite. He feels rootless, dislocated and alienated. This makes it possible for him to kill the Arab, from whom he is disconnected. Meursault’s moods and actions are actually motivated by the social position he inherited by birth.

Had Camus considered birth, then, he would have seen human existence differently. For him, the world is inherently meaningless, and we each face the daunting task of creating meaning single-handed. But in reality we always-already inhabit webs of meanings that have come down to us from birth.

Hidden within Beauvoir’s and Camus’s death-focused approaches, we start to see how birth also structures human existence. We receive and inherit horizons of meaning and value by birth, through the relationships and locations in society and culture into which birth places us. Meaning-reception comes before meaning-creation.

Featured image credit: Photo by Alex Hockett via Unsplash.

The post How birth shapes human existence appeared first on OUPblog.

Published on October 24, 2019 02:30

October 23, 2019

Our five senses: taste

Having discussed the origin of the verbs smell (The sense and essence of smell) and feel (Fingers feel, or feel free), I thought that it might be worthwhile to touch on the etymology of see, hear, and taste. Touch, ultimately of onomatopoeic origin, has been mentioned, though briefly, in one of the earlier posts. I’ll begin the projected series with taste.

Taste has an instructive history. It is a Romance word, which, in English, surfaced only in the thirteenth century, but at that time it meant “to examine by touch [again touch!], try, test.” The by now familiar sense “to have a particular flavor” does not antedate the fifteen hundreds. See and hear refer to rather easily described sensations. I see for “I understand” needs minimal explanation: if you see something, you see the light, as it were, and understand what you observe. Likewise, if you hear a signal, it registers in your brain. The taste of wormwood and honey poses no difficulties either, but having good taste in classical music does not presuppose putting anything in the mouth: sweet, sour, and bitter are irrelevant concepts here. This figurative usage goes back to French, and in French it seems to be from Italian. (Incidentally, Sweet and Sour has been the subject of two other recent posts.)

If you have a signal, you hear it. Image by PublicDomainPictures from Pixabay.

The etymology of taste is rather obscure. There must have been the Romance verb tastare, rhyming with its well-known synonym gustare (familiar from Engl. gusto and disgusting), but the details are lost, and I’ll let them be. More important is the fact that the English cognate of gustare is Engl. choose (Old Engl. cēosan, Gothic kiusan, and so on). It follows that the idea of tasting suggests not only touching but also choosing. That is why an etymologist investigating the origin of such words should cast the net rather widely.

You have good taste in classical music. Image by TravelCoffeeBook from Pixabay.

In today’s post, I will stay away from the Romance verbs and their congeners and look only at the Germanic word for “taste.” It has been preserved by all the West Germanic languages (Frisian, Dutch, and German), so once again, as happened to smell, by one branch of Germanic. This limited geographic distribution of words is a mystery, and we have to live with it. West Germanic presents a clear picture: the old root was smak; we find it in the Modern German noun Geschmack “taste” (ge– is a prefix) and the verb schmecken “to taste.”

Before I turn to these words, I must make a digression. English has two verbs smack. One is defined as “to taste; savor,” the other as “to open and close the lips noisily,” and it is the noise that will interest us here. Smack also means “to strike, spank” (compare the adverb: “I ran smack into the lamppost” and “The great oak is smack in the middle of the park”). Smack2 is obviously sound-imitative. There seems to be a near-consensus that, from the etymological viewpoint, smack1 and smack2 are different entities, and this is what I wrote in my post on smell (not because I thought so, but because I copied the opinion that looked like a recognized truth). But now that I have read all there is on the two homonyms, I am far from certain that the consensus should be accepted as final.

Suggestions that smack1 and smack2 are related probably go back to the publication of Francis A. Wood, an American etymologist whose name turns up in this blog with great regularity. His solutions should be viewed critically, but they are, to use the hackneyed phrase, invariably thought-provoking. He noted that, while tasting food, one often “smacks.” Perhaps this late verb was borrowed from Low German or Dutch, but it does not follow that the two homonyms are unrelated! Smack1 also has close Low German and Dutch congeners. Wood, I believe, was right.

Smell and taste are closely connected, and it is not surprising that a word may designate both. Dutch smaak, which is glossed as “taste” in modern dictionaries, in the past also meant “smell.” Similar examples occur in the German-speaking area, and the same symbiosis was recorded in Old Icelandic. We observe with some dismay that we have once again run into a sm-word. Some such words lack s-, but this is a familiar complication: the mysterious and ubiquitous s-mobile has been mentioned in this blog multiple times. My hackneyed example is Engl. steer “bull” versus the s-less Latin taurus (we even had a picture of the constellation Taurus not long ago). For variety’s sake and to remain true to the sm-subject, I may add Engl. smear versus Greek múron “ointment.”

This is the idea of the Russian chmok. Image by PublicDomainPictures from Pixabay.

Do we again encounter a sound-symbolic or a sound-imitative word like smell? Possibly so. No one doubts that smack “open and close the lips with a noise” is onomatopoeic. For comparison, I may cite a few Russian monosyllables of the same type: chmok (the closest analog of smack, with regard to sound and sense), shmyak “thud,” and shmyg (the reference is to a quick movement). In English, smash and smatter (mentioned in the earlier discussion of smell) belong here. Tracing them to remote Indo-European roots looks like an unproductive procedure. Yet it does not follow that such words cannot have cognates outside a narrow language subgroup. Lithuanian smaguriaî  “dainties, tidbits,” apparently, belongs here too, and, if we agree that “noise,” as in smack, is tangentially connected with eating, Lithuanian smôgti “to strike” may also join the list. After all, Old Engl. –smacian meant “to pat, caress,” not “taste.”

Perhaps the most curious word in this group is Gothic smakka “fig” (the fruit). Gothic, it will be remembered, has survived because, in the fourth century, parts of the New Testament were translated into that language. Long consonants are extremely rare in Gothic and are usually of expressive origin. Smakka has close parallels in Slavic: Russian smokva (stress on the first syllable), etc. It remains a matter of debate whether the Gothic word was a loanword from Slavic, whether the process of borrowing went in the opposite direction, or whether we are dealing with a migratory word, initially alien to both Gothic and Slavic. In any case, the smakka was the fruit from the fateful tree of knowledge, and the word for the fig tree occurs in an episode in Jesus’s life. It has been suggested that smakka is related to the root we find in German schmecken, so that the word meant “a very tasty fruit.” It is an attractive hypothesis. (What fruit really grew on that tree has been discussed many times but will not concern us here.)

There is one more smack in English. It denotes a sailing ship and may well be related to the verbs smack, described above, but its history may take us too far afield and will be left for some other occasion.

Here then is the summary of this post on smak, the Germanic word for “taste.” Arguably, smak designated the sensation people have when they put something in the mouth and the sound they make while eating (not a delicate sound, but we are present at an etymological, rather than a royal feast). The word is sound-imitating. I have no doubt that many points in this summary will invite discussion.

This is a royal, rather than an etymological, feast. Banquet du paon, unknown artist. Public domain via Wikimedia Commons.

Feature image credit: Nathaniel Under the Fig Tree by James Tissot. No known copyright restrictions. Via the Brooklyn Museum.

The post Our five senses: taste appeared first on OUPblog.

Published on October 23, 2019 05:35

October 22, 2019

A full century later, the 1919 World Series remains the most historic of all

What makes a World Series historic? It’s a given that fans of any particular team are going to remember the ones where their team triumphs. In San Francisco, the early 2010s will always be the time of the Giants and Madison Bumgarner. The mid-1970s are never going to be long ago in Cincinnati, where the boys of the Big Red Machine remain forever young.

There are certain years where the World Series transcends geography, though, and finds its way into the national psyche. This century’s long-awaited feel-good wins for the Cubs and Red Sox certainly were of national interest. The Yankees’ long string of wins in the 1940s, 1950s, and 1960s may run together for much of the world, but the sheer volume makes them singular and unique. The 1960 Pirates scoring an unlikely win over the Yankees on Bill Mazeroski’s bottom-of-the-ninth homer in game seven is eternal in Pittsburgh and beyond. In 1905 Christy Mathewson pitched three complete-game shutouts in a span of six days, a feat unlikely to be either eclipsed or forgotten.

But when it comes to historic World Series, 1919 will forever stand alone. This is the crooked World Series, the Black Sox World Series. The White Sox loss to the Reds after seven players on the team agreed to throw the series—in return for the promise of each getting somewhere between $5,000 and $20,000 from gamblers who stood to make many times that amount betting on the heavily favored White Sox to lose—stands as a timeless American morality play. It continues to play on not just our emotions, but our fundamental attachment to the game itself, our sense that baseball stands apart from all other sports, embodying a kind of pastoral purity. We remain bewildered as to why a genuine American icon like Shoeless Joe Jackson would undermine it—“it ain’t true, is it, Joe?” How could the White Sox betray “the faith of fifty million people,” as Nick Carraway put it in The Great Gatsby?

We search for answers: What really happened? Who double-crossed whom? Did gamblers double-cross the players by stiffing them on the promised payments? Did the players try to double-cross the gamblers by playing their best? And if they did—as those who would later speak to either the grand jury or reporters insisted they did—then did the White Sox just get themselves honestly beaten by a Reds team that pitched brilliantly and bunched all their hits into timely rallies?

We are overwhelmed by the criminal enormity of the scheme and the incompetent human scale of its execution. And where are the good guys? We search for someone to root for here. Players complicit in the fix may be the most sympathetic figures in the whole story. Their temptation is understandable: the bribes offered were two to four times their annual salaries, and the culture of the game had long tolerated exactly the sort of thing they were doing. Baseball owners had always been reluctant to discipline their miscreants. Rumors of game fixing abounded in early baseball, and while rumors were not good for business, rooting about and proving them to be true would have been decidedly worse.

And what of the reporters, who had all heard the rumors and had sources enough in the game to pursue them? Journalism, too, wore a mantle of shame in this episode. The few reporters who dared whisper of impropriety were shouted down by their brethren in the press box. “Anyone who knows anything about the game knows that a baseball game can’t be fixed,” went the cry in the editorials in The Sporting News and Baseball Magazine. Had they been inclined to coin a phrase, the critics would have decried the game-fixing stories as fake news.

The players were indicted on an array of arcane charges—there was no law against fixing baseball games—and quickly acquitted at trial, whereupon new baseball commissioner Kenesaw Mountain Landis banned them for life, “regardless of the verdict of juries.”

The baseball press wrote that story; with Landis’s steady hand on the tiller, the game had regained its purity and innocence. Future rumors were brushed aside easily. Keep moving along, nothing to see here.

But there has been plenty to see: the modern game came out of the sordidness of 1919. The fact that the picture has been fuzzy and difficult to bring into focus has had us looking all the harder, for the whole of a hundred years now.

The post A full century later, the 1919 World Series remains the most historic of all appeared first on OUPblog.

Published on October 22, 2019 05:30

How alternative employment contracts affect low wage workers

Contemporary labour markets are characterised by more atypical or alternative work arrangements. Some of these – like independent contractors – have emerged in the context of self-employment, while others – like zero hours contracts and temporary work – are evolutions of traditional employment contracts. Irrespective of its form, the increase in atypical work has led to discussions of a trade-off between the potentially desirable flexibility it offers and the emergence of low-wage, dead-end jobs, characterised by considerably fewer employment rights and less job security than conventional forms of employment.

The rise of atypical work has been a key feature of the labour market of the United Kingdom in recent years. One kind of alternative work arrangement that is increasingly common in the UK is the zero hours contract. Under such contracts, employers are not obliged to guarantee hours or times of work, and workers are – at least in principle – not obliged to accept work offered and are paid only for the work carried out. Almost a million people in the UK work like this today, compared to only two hundred thousand at the turn of the millennium. These contracts are especially prominent in low-wage sectors of the economy, such as hospitality and social care.

How has this worked out? It turns out there’s a stark dichotomy between workers who are satisfied with their flexible working arrangement and workers who instead would like to work more hours. While almost a third of workers on zero hours contracts do it because they enjoy the flexibility offered, a third work under such contracts because they can’t find jobs with a guaranteed number of hours. Another third work under such contracts in order to complement pay from other sources or earn money while studying. About half of zero hours contract workers say they’re satisfied with their job, but 45% would like more hours and a more regular pattern of hours.

In Europe, the rise of alternative work arrangements is often seen as an expression of the segmentation of the labour market into two sectors. The primary sector is characterised by secure employment contracts and the secondary sector is characterised by less stable and less protected jobs. More stringent labour market regulations in the primary market – such as the imposition of a higher minimum wage – can lead to greater use of more flexible contract types. Do higher minimum wage rates lead companies to shift the composition of their workforce towards more flexible jobs by forcing employees into alternative work arrangements?

The adult social care sector (nursing home workers, in US terms) is a good place to see how this plays out. In April 2016 the UK increased the national minimum wage for workers aged 25 and over to £7.20 per hour – a sizable 7.5% increase over the previous minimum wage rate. The minimum wage is very important for workers in this field: it is a very low-paying sector, and almost 50% of workers in it were affected by the introduction of the new wage.

The new national minimum wage had a strong positive impact on workers’ wages, with no detrimental effect on their employment opportunities. But firms also increased their use of zero hours contracts, in some cases quite dramatically. In particular, a domiciliary care worker (home care worker, in US terms) paid the minimum wage received a 7.5% increase in wage, but was also 6.1% more likely to be forced into working under a zero hours contract. It appears that firms try to deal with the wage cost shock from the minimum wage increase by employing contracts with substantial hour flexibility. This is likely to have occurred in other low-pay sectors of the UK labour market, such as hospitality and cleaning.

These results have an important bearing on policy making. The UK government has made a commitment to achieve a National Living Wage of 60% of the median wage by 2020. As a means of comparison, the median hourly wage in the UK was £14.40 in 2018. Under the 60% target, the minimum wage would have been £8.64 rather than £7.83. At the same time many policymakers and economists have expressed concerns about insecure working arrangements. Given their interaction with minimum wage policy, it is evident that trade-offs may surface unless legislation to regulate atypical work is introduced.
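For readers who want to see where the £8.64 figure comes from, here is a quick back-of-the-envelope check using only the figures quoted above (it assumes the 2018 median of £14.40 is the relevant baseline, not an official government calculation):

\[
0.60 \times \pounds 14.40 = \pounds 8.64
\]

That would have put the target roughly 10% above the £7.83 rate that actually applied in 2018.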

Featured Image Credit: Eggtimer by Storkman. Public Domain via  Pixabay .

The post How alternative employment contracts affect low wage workers appeared first on OUPblog.

Published on October 22, 2019 02:30

October 19, 2019

How Asia got richer

Two centuries ago, in 1820, Asia accounted for two-thirds of world population and more than one-half of world income. The subsequent decline of Asia was due largely to its integration with the world economy shaped by colonialism and driven by imperialism. By 1962, its share in world income had plummeted to 15%. Even in 1970, Asia was the poorest continent in the world. Its demographic and social indicators of development, among the worst anywhere, epitomized its underdevelopment. Gunnar Myrdal, who published his magnum opus Asian Drama in 1968, was deeply pessimistic about the continent’s development prospects.

In the half century since then, Asia has witnessed a profound transformation in terms of economic progress of nations and living conditions of people. By 2016, it accounted for 30% of world income, 40% of world manufacturing, and over one-third of world trade, while its income per capita converged towards the world average. This transformation was unequal across countries and between people. Even so, predicting it would have required an imagination run wild. Indeed, Asia’s economic transformation in this short time-span is almost unprecedented in history.

It is essential to recognize the diversity of Asia. There were marked differences between countries in geographical size, embedded histories, colonial legacies, nationalist movements, initial conditions, natural resource endowments, population size, income levels and political systems. The reliance on markets and degree of openness in economies varied greatly across countries and over time. The politics too ranged widely from authoritarian regimes or oligarchies to political democracies. So did ideologies, from communism to state capitalism and capitalism. Development outcomes differed across space and over time. There were different paths to development, because there were no universal solutions, magic wands, or silver bullets. Despite such diversity, there are common discernible patterns.

For Asian countries, political independence, which restored their economic autonomy and enabled them to pursue their national development objectives, made this transformation possible. Economic growth drove development. Growth rates of GDP and GDP per capita in Asia were stunning and far higher than elsewhere in the world. Rising investment and savings rates combined with the spread of education were the underlying factors. Growth was driven by rapid industrialization, often export-led, associated with structural changes in the composition of output and employment. It was supported by coordinated economic policies across sectors and over time.

Rising per capita incomes transformed social indicators of development, as literacy rates and life expectancy rose everywhere. Rapid growth led to massive reduction in absolute poverty. But the scale of absolute poverty that persists, despite unprecedented growth, is just as striking as the sharp reduction therein. The poverty reduction could have been much greater but for the rising inequality. Inequality between people within countries rose almost everywhere, and the gap between the richest and poorest countries in Asia remains awesome.

Governments performed a vital role, ranging from leader to catalyst or supporter, in the half-century economic transformation of Asia. Success at development in Asia was about managing this evolving relationship between states and markets, complements rather than substitutes, by finding the right balance in their respective roles that also changed over time.

Governments in South Korea, Taiwan, and Singapore coordinated policies across sectors over time in pursuit of national development objectives and became industrialized nations in just fifty years. China emulated these developmental states with much success, and Vietnam followed on the same path two decades later, as both countries have strong one-party communist governments that could coordinate and implement policies.

It is not possible to replicate these states elsewhere in Asia. But other countries did manage to evolve some institutional arrangements, even if less effective, that were conducive to industrialization and development. In some of these countries, the institutionalized checks-and-balances of political democracies were crucial to making governments more development-oriented and people-friendly.

Economic openness performed a critical supportive role in Asian development, wherever it has been in the form of strategic integration with, rather than passive insertion into, the world economy. While openness was necessary for successful industrialization, it was not sufficient. Openness facilitated industrialization only when combined with industrial policy. Clearly, success at industrialization in Asia was driven by sensible industrial policy that was implemented by effective governments.

The countries in Asia that modified, adapted, and contextualized their reform agenda, while calibrating the sequence of, and the speed at which, economic reforms were introduced, did well. They did not hesitate to use heterodox or unorthodox policies, or experiment and innovate, for their national development objectives.

The rise of Asia represents the beginnings of a shift in the balance of economic power in the world and some erosion in the political hegemony of the West. The future will be shaped partly by how Asia exploits the opportunities and meets the challenges and partly by how the difficult economic and political conjuncture in the world unfolds. Yet, it is plausible to suggest that by 2050, a century after the end of colonial rule, Asia will account for more than one-half of world income and will be home to more than one-half the people on earth. It will thus have an economic and political significance in the world that would have been difficult to imagine fifty years ago.

Featured image credit:  Photo by Valentino Funghi on Unsplash 

The post How Asia got richer appeared first on OUPblog.

Published on October 19, 2019 05:30
