Oxford University Press's Blog

December 17, 2019

How pictures can lie

On 9 August 1997, The Mirror printed an edited photo of Diana, Princess of Wales, and Dodi Fayed on its front page. The edited photo shows Diana and Fayed facing each other and about to kiss, although the unedited photo reveals that at that point Fayed was facing an entirely different direction. Did The Mirror lie to its readers?

There is a broad understanding of lie on which the answer must be yes. On this understanding, most insincere acts can count as a lie. For example, I would count as lying if I were to stand by the window, pack my bags and leave the house in order to give my neighbours the (mistaken) impression that I am going on a journey. If the act of packing my bags can count as a lie, then surely the same must be possible for the act of printing and distributing an edited picture.

However, on many occasions we use lie in a narrower sense that allows for a finer differentiation among insincere acts. On this narrow sense, I would be lying by telling my neighbours that I am going on a journey (without an intention of doing so), but I would not be lying by packing my bags. The act of packing my bags would be misleading, but it would not be a lie.

So, given a narrow sense of lie, did The Mirror lie to its readers? Here the answer may be less clear. If we trust the philosophical mainstream it should be no. Prominent philosophical accounts of the nature of lying require liars to say something they believe to be false. If saying requires the use of words and sentences, as seems plausible, then printing and distributing a picture can be insincere and misleading, but it cannot count as lying.

But is it really plausible to deny that The Mirror lied to its readers? In my view, a good case can be made to count the edited photo as a lie even on a narrow understanding of what it is to lie. For one thing, the act of printing and distributing the edited photo bears an important hallmark of lying: a lack of deniability. As an illustration of this hallmark, consider a case in which Anne asks Bert whether he’s going to the party on Saturday. Bert has to work late on Saturday, but is nonetheless planning on going after his shift has finished. However, he does not want Anne to know about his plans. Now, Bert could lie to Anne by saying “I’m not going,” or he could try to mislead her (without lying) by saying “I have to work late.” If he goes for the second option, he retains deniability. If Anne finds out that he was insincere and accuses him of lying, Bert can respond: “I never claimed that I wasn’t going to the party. I merely claimed that I had to work.” Such a denial may be pedantic, but it does seem to be true.

Let us return to the edited photo of Diana and Fayed. Did The Mirror retain deniability? Arguably it did not. By printing and distributing the edited photo, the newspaper claimed that Diana and Fayed were facing each other and about to kiss at the time the photo was taken. It would have been clearly false if the newspaper had denied such a claim in response to an accusation of lying.

A second reason to count the edited picture as a lie is that the distinction between lying and misleading naturally extends to communicative acts involving photos. Given the right circumstances, it is possible to mislead someone by presenting an unedited, accurate photo. For example, one could use an unedited photo of two people facing each other to create the misleading impression that they were talking to each other when the photo was taken. Such a use of a photo would be misleading, but arguably not a lie (again, on a narrow sense of lie). Moreover, it would present a clear contrast to the edited photo of Diana and Fayed. This leads to the following overall view: just as some linguistic utterances are lies and others are misleading, some communicative acts involving photos are lies and other such communicative acts are misleading. On this view, the edited photo of Diana and Fayed seems to belong in the category of photos that are lies.

If these two considerations are on the right track, The Mirror did lie to its readers, even on a narrow understanding of lie. This speaks against philosophical accounts of lying that require liars to say something they believe to be false. And it raises the following questions: What is it that makes a lie a lie? What do linguistic and pictorial lies have in common? And how does lying differ from misleading?

The notion of commitment can play a role in answering all three of these questions. Lying is a communicative act through which one (intentionally) commits oneself to something one believes to be false. This explains why lying is not tied to linguistic utterances: clearly, communicative agents can use photos to commit themselves to something they believe to be false. And it explains how lying differs from misleading: in contrast to lying, misleading is a non-committal communicative act – which is why misleaders retain deniability, and liars do not.

If all this is right, we can indeed learn something from the edited photo printed by The Mirror. Not about Diana and Fayed, but about the nature of lying.

Featured image: Photos, photography by brisch27. Public Domain via Pixabay.


December 16, 2019

Why recognizing the Anthropocene Age doesn’t matter

You’ve probably heard that we’re living in the Anthropocene, a new geological epoch in which human activity is the dominant geological process.

If you’ve been attentive to discussion surrounding the Anthropocene, you probably also know that the Anthropocene Working Group, a panel of scientists tasked to make a recommendation as to whether geologists should formally recognize the Anthropocene, voted just a few months ago to recommend recognizing the new epoch.

And if you’ve been really attentive, you’ll be aware that, despite widespread adoption of “Anthropocene” talk, there remains some serious opposition to officially recognizing the Anthropocene. Some researchers think such recognition would be premature. Others call attention to the fact that humans have been transforming the planet for thousands of years, and there is no sudden, recent, tipping point in this process. Yet others are skeptical that our impact on the planet is as geologically significant as we, in typical human hubris, think it is. And socially-oriented environmental activists have argued that it is misleading to conceptualize an “Anthropo”-cene when the environmental crisis isn’t driven by humankind in general, but by a subset of industrialized cultures.

Most of this opposition to the Anthropocene involves contestation of the relevant facts surrounding global change. But there’s another important angle as well: seeing the question of the Anthropocene as a question of science communication.

Official recognition of the Anthropocene, it’s often claimed, will help communicate the seriousness of the environmental crisis to the public.

This is wishful thinking.

Why would we think that a bunch of geologists adding another label to the geologic time scale would shift public opinion on issues like climate change?

Well, we might hope that people will be moved by the fact that a whole field of science has come to a consensus that human activity is radically altering the planet. Except that a whole field of science—climate science—has already come to that consensus, and that 97% consensus among climate scientists is well-publicized by activists, professional scientific organizations, and government agencies. Obviously, the public now accepts that consensus, right?

You know the answer: not in the United States, where the public is neither aware of the consensus nor in general agreement with it. Perhaps because of effective misinformation campaigns, repeated declaration of the consensus among climate scientists hasn’t had a radical effect on public acceptance of the facts of anthropogenic global warming. Given that fact, it seems foolish to think that a declaration by stratigraphers that we live in the Anthropocene will win over many members of the public to team Save the Planet. Such a declaration would be like declaring a consensus among climate scientists, except that much of the general public couldn’t even tell you what stratigraphy is, let alone why they should care what scientists who study it say.

This is not to say that communicating science to the public is pointless. On the contrary, good science communication is crucial to a functional modern society. But research on climate communication tells us the sort of science education that actually wins hearts and minds involves detailed causal understanding and compelling quantitative facts—not preachy labels.

Recognition of the Anthropocene, however, doesn’t involve transferring any sort of detailed understanding of the mechanisms of anthropogenic geological change to the public. Nor does communicating these persuasive facts require official recognition of a novel geological epoch. Given the independence of what good science communication involves and what the label “Anthropocene” communicates, the argument for recognizing the Anthropocene on the basis of its science communication benefits is unwarranted.

Moreover, to the extent that the motivation for recognizing the Anthropocene is political, it might be a bad idea to treat the new epoch as inevitable. To proclaim that we live in the Anthropocene could come across as defeatist, along the lines of the “climate cowards” who have given up hope of climate mitigation and think we should throw our energies into adaptation.

Sure, it looks like we’re radically altering the planet’s geology, but what if we manage to rein things in? What if we pull our heads out of the sand and actually limit global warming to a couple of degrees centigrade? What if we put a halt to the incipient Sixth Mass Extinction before it comes to maturity? It’s still within our power to make the Industrial and Atomic Ages a blip in the geological record, rather than a permanent, epochal transformation of the planet. If we think of the Anthropocene this way, as preventable, there is room for hope.

In other words, the Anthropocene works better as a threat than a promise. To motivate action we should present it as an avoidable future. If we inscribe it forever into the geological time scale, it becomes immutable, past prevention. But that’s the opposite of what we want to be communicating to the public if we want them to act.

Featured Image Credit: ‘Reflected Arctic Iceberg’ by Annie Spratt. CC0 via Unsplash.


December 13, 2019

Aunt Lydia could be the voice of conservative women

Fans of Margaret Atwood’s 1985 novel The Handmaid’s Tale proliferated as the novel’s scenarios came to resemble the realities of American women who were subject to more regulation and surveillance. In its 2019 sequel, The Testaments, Atwood gives us a different voice and a different tale, told largely from the perspective of someone who perpetuated the authoritarian regime instead of fighting it. This time, Atwood’s most influenced readers may be conservatives instead of feminists.

Atwood has remarked that it took her a long time to write a sequel because she couldn’t reproduce the voice of Offred, the narrator from The Handmaid’s Tale whose potential fecundity destines her to serve as a reproductive handmaid to a governing family in the ascendant Republic of Gilead. In the novel, Offred’s voice is riddled with a confused self-consciousness as it unfurls her story.  Now, in The Testaments, we have another puzzle to play with: How should we view Aunt Lydia, a villain of The Handmaid’s Tale, who emerges as the key political subversive in The Testaments?

Aunt Lydia, one of the aunts who led the indoctrination of prisoners in the re-education units, or Red Centers, of the Republic of Gilead, is both familiar and abhorrent to readers of The Handmaid’s Tale. She conducts classes that guide the “girls” into groupthink that reinforces victim-blaming as a natural impulse. When one “girl” confesses to being gang-raped, the inductees respond to a leading question, “who led them on?” – to which they chant in unison, “she did, she did, she did.”

The fictional aunts of The Handmaid’s Tale had their counterparts in the election of Donald Trump: they were, to a greater or lesser degree, the 53% of white women who voted for Trump and chanted “lock her up, lock her up” when he spoke of “crooked Hillary.”

In The Testaments, we have a new view of Aunt Lydia. Through Aunt Lydia’s secret diary, we learn from her point of view what it was like to abruptly become a second-class citizen after a coup, to be tortured into submission, and then to choose to help build the new patriarchal regime by torturing, exploiting, and re-educating other women in the new Republic of Gilead.

By the end of the novel, Aunt Lydia is a self-realized sell-out survivor who gets her retribution by tearing down what she herself has built. When she sees an opportunity to thwart the regime, she puts a collaborative rebellion into motion: her secret records – including evidence of Gilead’s political corruption – are smuggled out, and the Republic falls. She is heralded as a hero by a new generation.

This idea that the truth will out, say some, can only be read as utopic given our own current demonstrations of the meaninglessness of truth, facts, and multiple revelations of corruption.

But by mourning the loss of veracity’s authority we miss the novel’s crucial point, which directs us away from truth claims. For people who know what it means to have their full credibility eradicated and the reality of their lived experience utterly rejected, the radically fickle nature of truth, evidence, and facts has long been established. The hearings of Dr. Christine Blasey Ford and Ambassador Marie Yovanovitch are mere reminders. The Testaments is not a utopic story meant to comfort us at a time of misinformation and misery. It is a story about subversive women writing themselves into history and, consequently, changing the course of the world.

Who are the nonfiction counterparts of Aunt Lydia in the sequel to the novel that so presciently predicted the authoritarianism, or “maximalist” executive power, that fuels this presidency? One counterpart is Katie McHugh, the woman who denounced the far right after working as an editor for three years at Breitbart, and leaked more than 900 emails from Stephen Miller to the Southern Poverty Law Center. Those emails confirmed the extent to which Miller, a White House senior policy advisor in charge of immigration, embraced and promoted white nationalist views and implemented a policy of family separation at refugee resettlement facilities. McHugh’s about-face provided the undeniable links between the policies and the ideologies, the man and the organizations. Like Aunt Lydia, McHugh was in a place of privilege ruled by men, and had access to damning evidence about them. “Get out while you can,” she says.

Last month during a research trip to the Schlesinger Library for the History of Women in America, I walked the brick streets amid the wrought iron fences of Cambridge, recognizing them as the setting for The Handmaid’s Tale and The Testaments. Having traveled there from Kentucky, I found it funny to imagine the blue state of Massachusetts as a part of Gilead when my own home, and that of Mitch McConnell, seems more to fit the mold of Atwood’s dystopic vision in The Handmaid’s Tale: it outlaws reproductive options, subjugates and segregates by race, criminalizes difference, and reserves natural resources for only the richest. But that’s precisely why I’ll be teaching The Testaments in Kentucky. Perhaps my students, and their parents who may have voted for Trump, will see themselves in Lydia’s survival, grit, cunning, determination – and redemption.

Given the defeat of Republican Matt Bevin in the recent Kentucky gubernatorial election, we have reason to believe they will. President Trump stumped for Bevin the night before the election by rallying in Lexington, Kentucky. But his endorsement seemed to backfire. In a close, contested race, Andy Beshear won the governorship in Kentucky.  This can be seen as a good sign for Democrats. So, too, is the fact that in a significant about-face roughly 57% of Kentucky women voted for Beshear; in 2016, 54% of Kentucky women voted for Donald Trump.

Others can offer sophisticated analyses of exit polls and electoral projections. My interest is in the power of fiction to reflect and produce national subjects. Perhaps the power of The Testaments is its insight into how women might reassert themselves. If Aunt Lydia did it, and Katie McHugh did it, so too may white women voters rewrite the ending of The Donald’s tale and rewrite their own legacy as opposition to double standards, corruption, and authoritarianism. Perhaps the women who chanted “lock her up” will turn against the authoritarianism that has come to rein them in rather than set them free. In that case, Atwood’s scenarios will once again resemble American women’s realities and, this time, conservatives may see themselves in her fiction.

Featured Image courtesy of  Unsplash


Why scientists should be atheists

My friend and colleague George asked me, “Do you think a scientist can be an atheist?”  I replied, “Not only can a scientist be an atheist, he should be one.” I was teasing because I knew what response George wanted to hear and this was not it. Sure enough, he shook his head. The only logical position that a scientist can take, he said, is to be an agnostic because we can never know the answer to the question of whether God exists or not.

This is of course an old debate. To avoid the logical problem of proving nonexistence, some early atheists chose to define themselves differently. In the 19th century, the political activist and self-declared atheist Charles Bradlaugh said, “The atheist does not say ‘There is no God,’ but he says, ‘I know not what you mean by God… I do not deny God, because I cannot deny that of which I have no conception.’”

Bradlaugh’s contemporary Charles Darwin was reluctant to accept the label of atheist urged upon him by Edward Aveling, a well-known atheist of his time. Darwin’s stated reason for rejecting the label was that he saw it as too “aggressive.” Darwin and some of his contemporaries preferred to call themselves agnostics, a term coined by one of his supporters, T. H. Huxley, partly because that position seemed less likely to offend others in the Victorian social circles in which they moved. But there are other reasons to prefer the label of agnostic over atheist.

Very often this discussion devolves into debating dictionary definitions. For example, the second part of the Oxford English Dictionary definition of atheist as one who “disbelieves” in God is unproblematic. It is the first part of the definition, which says that an atheist is “One who denies the existence of a God,” that causes problems. It can be argued that this implies that the atheist is saying he or she is certain that there is no God. Since one cannot prove the non-existence of a god, few atheists would sign on to such a strong statement.

The OED definition of an agnostic—“One who holds that the existence of anything beyond and behind material phenomena is unknown and unknowable”—seems to support George’s position and appears to be a more logical one. Thus, by definition, all atheists become agnostics.

So how can I justify my statement to George that a scientist can, and perhaps should, be an atheist?

One argument is that for a scientist to accept the existence of a deity who has the ability to cause events that contradict the laws of science would be to open up a can of worms, since any phenomenon for which we do not yet have an explanation could then be ascribed to the actions of a supernatural power, shutting down further scientific investigation. As a result, scientists usually take a pragmatic approach, saying that the nature of scientific research requires one to eschew any supernatural explanations when doing science.

This approach can be described as methodological naturalism, but there exists a stronger formulation of naturalism that is referred to as philosophical naturalism, which is the statement that the material world governed by natural laws is all there is and no supernatural phenomena exist at all. This is what the strong form of atheism implies. Can this be justified since we can never prove the nonexistence of a deity, or indeed of many other supernatural phenomena?

One problem with using George’s standard is that we then have to leave open the possibility that anything the imagination can conjure up exists, such as zombies, vampires, unicorns, werewolves, etc. Most of us would flatly deny that such things exist, but these phantasms are not the only things that we confidently assert to not exist. There are plenty of examples of entities that were once firmly believed by scientists to exist but are now as firmly asserted to not exist. The aether and phlogiston are two such examples. How can scientists so confidently dismiss their existence now when they have not proved their nonexistence and indeed cannot do so on the grounds of logical impossibility? It is because scientists are using scientific logic.

The history of science suggests that entities are considered to not exist when two conditions are met: there is no preponderance of positive evidence for them and they cease to be necessary as explanatory concepts. The latter condition arises when a new theory is proposed that seems promising and does not require the existence of those entities. It was the theory of special relativity that did not require the existence of the aether, and the oxygen theory of combustion made phlogiston redundant. Once these new theories became part of the accepted paradigms of the scientific community, the aether and phlogiston were deemed to not exist.

Applying that same scientific logic to the existence of a deity or supernatural phenomena in general, we can frame the question in a less ambiguous way: Is there a preponderance of positive evidence for the existence of a deity? Is its existence a necessary explanatory concept? The answer given by atheists to both questions is in the negative. All positive evidence produced by believers is at best highly ambiguous and open to alternative explanations, and there is no fact that requires the postulation of a deity to explain it.

In short, God is not a necessary explanatory concept and can thus be firmly asserted to not exist until either of the above two conditions is met. Using scientific logic, we can be as sure of God’s nonexistence as we are of the nonexistence of the aether, phlogiston or werewolves!

Featured image credit: “Brick Cathedral” by Luca Baggio. CC0 via Unsplash.


December 11, 2019

Some of our basic verbs: “eat”

Whoever the Indo-Europeans were and wherever they lived several thousand years ago, by the time they began to write, they had produced a word for “eat” that sounded nearly the same all over the enormous territory they occupied. In Latin, Celtic, Slavic, Baltic, Greek, Sanskrit, and beyond, the verb for devouring food resembles Engl. eat. Eat, like be, belonged to a rare grammatical class. Only one example from Modern English can show how things stood. The first person singular of Engl. to be is am. Final m in it is an ancient ending. Compare Engl. (I) am and Russian (ya) yem “I eat.” (Engl. yummy and yum ~ yum-yum have nothing to do with it.) Such verbs were few. In Germanic, unlike what we observe in Slavic, the situation changed early (that is, with regard to the conjugation, eat joined the main group), but, since Modern English has lost all verbal endings except for –s (he/she eats), there is nothing for us to discuss.

Only the past tense of eat presents some interest to a modern speaker of English, because, whatever dictionaries may say, in educated British English, ate more often rhymes with let than with late, while in American English the situation is reversed. The historically justified variant is ate, homonymous with eight, and therefore it does not come as a surprise that the colonists took this old pronunciation to the New World. The origin of the pronunciation et for “ate” is not quite clear. There may have been the Middle English form ēt, which underwent shortening. Since both variants have been known in England for centuries, their coexistence in America need not surprise us either. Problems arise only when such variants become shibboleths, markers of a social class. Since all people prefer their own pronunciation and tend to look on it as correct, some disagreement is inevitable. But we are familiar with the same situation in many other cases. Sneaked, not snuck! Dived, not dove! As I said, not like I said. All such discussions are as inspiring as they are fruitless.

Back to the origin of the verb eat. The Old English infinitive was etan. Its perfect congeners are Latin edere (familiar to English speakers from the borrowed adjective edible), Greek édein, Lithuanian édmi, and so forth. The correspondence of t in English to d outside Germanic is due to the constantly invoked and indispensable First Consonant Shift. The word’s root was ed, in which e alternated with other vowels by ablaut. Naturally, we would like to know why the process of consuming food was called ed-, for such is always the main question of etymology. There seems to be only one fairly reliable clue. To discover it, we have to turn to the history of the word tooth.

The Old English form of this word was tōþ (þ had the value of th). As is clear from the sign ō, the vowel in tōþ was long, but it had not always been such! At one time, the word contained a short vowel followed by n. This becomes clear when we look at the cognates of tooth. The Latin for “tooth” is dens, from dents (compare the genitive dentis; remember Engl. dental and dentist). The German for “tooth” is Zahn, with z, pronounced as ts, from t (h designates the vowel’s length in the modern language). The ancient root was tand, going back to tanth. Finally, Gothic had tunþus “tooth.” (Gothic is a Germanic language recorded in the fourth century.) So we have dent– (Latin), tand– (the oldest German), and tunth– (Gothic).

The vowels, highlighted above, alternated by ablaut, as they do in Engl. beget ~ begat and sing ~ sang ~ sung. Ablaut has “grades,” with the Germanic e ~ a representing the normal grade (such is the technical term). Some time before Gothic was put to parchment, stress in tunþus fell on the second syllable, and there was no vowel between t and n, just tnþús, or rather tnþúz. Only later did u insert itself in that position, probably to make the whole easier to pronounce. This u exemplifies the so-called zero grade, for it appeared to fill a void (nothing, zero).

The oldest English also had n in the word for “tooth,” which sounded like tanth-, but lost the n. By way of compensation, the vowel was lengthened; hence, after a series of changes, the modern form tooth. In Indo-European, a vowel could also be long “by nature,” that is, not because it was lengthened as the result of some change. Then we witness the lengthened grade. For example, the perfect of Latin edo was ēdi (“I have eaten”). The upshot of this tiresome digression is that we have the root for “tooth” represented by two grades of ablaut: normal (e/a) and lengthened.

Three grades of ablaut: normal, zero, and lengthened. Image 1: by Giulia Marotta from Pixabay. Image 2: Ethnie Dong, CC BY-SA 3.0 via Wikimedia Commons. Image 3: by M W from Pixabay.

Now is the time to remember that once upon a time an elephantine mammal roamed the earth. Nineteenth-century zoologists coined the name mastodon for it from Greek mastós “breast” (some of our readers surely know what mastitis is) and odont– “tooth.” The root of odont is od-. And it has been suggested that this od– alternates with ed-, as in Latin edo “I eat,” by ablaut—of course by ablaut! If this comparison is correct, then the Indo-European verb designating eating and the word for “tooth” are related. It seems that eat once meant “to bite.” This reconstruction leaves us wondering how tooth (dent– or dant-, or dnt-) got its name, but that is another story, and today it need not bother us.

The rest is added for dessert. English has the verb fret “to irritate.” But once, that is, in Old English, it meant “to devour” and “to gnaw” (also figuratively!). German fressen, a cognate of fret, means “to eat” (said about animals) or “to gobble up.” It would have been difficult to guess its etymology without Gothic, which preserved the verb fra-itan “to give away for consumption.” We have itan “to eat” with a “destructive” prefix fra-. (The English cognate of this prefix is for-, as in forget and forgo, among others.) Gothic was recorded many centuries before English, German, and the other Germanic languages. Therefore, it often contains forms that have unmistakable cognates elsewhere, but in the later languages much was usually changed, partly by wear and tear.

English has two more words fret. One occurs chiefly in the form of the past participle (fretted) and means “adorned with carved or embossed work.” It surfaced in print in the fourteenth century and may be a borrowing from French. A third fret means “a ridge to regulate the pitch in some stringed instruments.” Although it has been around since the sixteenth century, next to nothing is known about its origin.

In a fret. Image by StockSnap from Pixabay

So let us rejoice that we can eat with a clear understanding of what we are doing and never be in a fret while enjoying our food (with the help of the teeth in several grades of decay and ablaut).

Featured image credit: Image by fauxels via Pexels.


December 9, 2019

The Oxford Place of the Year 2019 is…

After a close round of voting, the winner of our Place of the Year 2019 is the atmosphere!

While the global conversation around climate change has increased in recent years, 2019 set many records – this past summer tied for the hottest one on record in the northern hemisphere, continuing the trend of extreme weather set by deadly cold winter temperatures, heavy snowfalls, and catastrophic mudslides and typhoons worldwide. (In fact, the upswing of the term “climate emergency,” which was used 10,796% more frequently this year than in 2018, led to it being selected as Oxford’s Word of the Year 2019).

Climate change claimed its first Icelandic glacier as a victim, with researchers marking the event with a memorial plaque, and Arctic sea ice experienced its largest September decrease in 1,000 years. In an unprecedented loss, Greenland (roughly 80% of which is covered in ice) had two large ice-melts, culminating in a record-breaking loss of 58 billion tons of ice in one year—40 billion more tons than the average. In November, Greenland’s main airport, Kangerlussuaq Airport, reported that it will cease to operate civilian flights within five years because its runways are cracking as the permafrost melts below them. As a result, Greenland is building a new airport in a more stable location. All of these climate events are driven by the carbon dioxide being poured into the oceans and Earth’s atmosphere by human activities, from corporations’ carbon footprints to the deliberate burning of the Amazon in exchange for timber and livestock pastures.

In September, the focus moved from drastic weather and climate events to the policy realm when the Intergovernmental Panel on Climate Change released a landmark report finding that the effects of climate change are being felt much more severely, and sooner, than previously anticipated; hundred-year floods are projected to become a yearly occurrence by 2050 in many locations, and global sea levels may rise as much as three feet by 2100 (12% higher than the most recent 2013 estimate). Also in September, over 4 million people worldwide participated in a global climate strike, and the United Nations hosted a Climate Action Summit in New York City with the objective of limiting the rise of the global average temperature. Yet despite the international conversations and dire scientific warnings, 2019 is projected to be the year with the highest carbon emissions of all time, and while the fact that the ozone hole is the smallest it’s been since its discovery might sound like a silver lining, it’s actually being kept at its reduced size by the record heat in our atmosphere.

The runner-up for the Place of the Year was New Zealand, which entered the global conversation after a terrorist attack on two mosques was swiftly followed by governmental bans on assault weapons and increased gun control.


Why there is a moral duty to vote

In recent years, democracies around the world have witnessed the steady rise of anti-liberal, populist movements. In the face of this trend, some may think it apposite to question the power of elections to protect cherished democratic values. Among some (vocal) political scientists and philosophers today, it is common to hear concern about voter incompetence, which allegedly explains why democracy stands on shaky ground in many places.

Do we do well in thinking of voting as a likely threat to fair governance? No: voting is a vehicle for justice, not a paradoxical menace to democracy. Two questions arise: Do people have a duty to vote, and if so, do they have a duty to vote with care? We can see voting as an act of justice in the light of a Samaritan duty of aid towards society. We have a duty of conscience to vote with care, that is, with information and a sense of the common good, in order to help our fellow-citizens prevent injustice and ensure decently good governance. The latter can be achieved if voters manage to elect acceptably fair-minded governments and vote out corrupt or inept ones. Voting governments in and out is not all there is to justice, but voting is a basic democratic act because elections install governments. Governments, in turn, enact policies that can have an immense influence on people’s access to primary goods like security, peace, economic stability, education, healthcare, and others. In short, governments can foster or impede justice in ways that very few other entities can.

In particular, three of the most important assumptions that critics of a putative duty to vote make are that: 1) citizens’ political knowledge is almost impossible to improve, 2) voting cannot be a matter of duty because its individual costs are higher than its individual benefits, and 3) if we care about the common good, we can do other things besides participating in politics, many of which will be more effective than casting a vote in large elections. To these objections, I offer the following arguments:

First, citizens’ competence is not a fact of nature and it can be modified. Political ignorance and lack of political interest may spring from structural features of the political and economic systems, not from individual cognitive failures. Blaming the individual for her lack of sufficient political knowledge entails neglecting the distorting roles of elite-level, party-political, and economic factors that have a non-trivial effect on people’s incentives to know and to care about politics. Thus, focusing only on individual-level deficiencies neglects some promising approaches to improving voter competence and political knowledge.

Second, voting as an individual act is not disqualified from being a duty simply because its impact is negligible, nor is it pointless to the citizen. We may have a duty to vote so as to contribute to a larger collective activity that will be impactful in terms of justice—and valuable precisely because of its justice-promoting nature. In other words, we may have a duty of “common pursuit” to join forces with others, and vote, so that we can all together help society minimize suffering in the way that a good Samaritan would. This duty of common pursuit binds everyone—regardless of their capacity to make an individual impact—because nobody has a stronger claim to being exempted from it than anyone else (under general circumstances). All are required not to shirk their duty to cooperate with others in the search for good governance.

Lastly, even though voting with sufficient knowledge is not the only way to contribute to the common good, the fact that citizens choose governments by way of elections makes voting morally special. Governments have tremendous potential to affect the lives of millions in a way that few other entities can; if we ought to act as good Samaritans and help the common good, partaking in the mechanism that elects governments is essential.

Even though voting is surely not the only way to affect government, it seems to be the only way to choose it, which makes it morally distinctive as a way to influence the quality of governance. This last argument does not negate the possibility that governments act wrongly by being agents of injustice and evil. My arguments operate under the assumption that the duty to vote with care is morally stringent only in a context of decent governmental responsiveness, and not in contexts of abuse or steep governmental ineptitude, where voting would be dangerous or elections would be pointless.

Elections give us the opportunity to act as good Samaritans and minimize suffering and injustice affecting our fellow-citizens. Because being a good Samaritan does not require heroism from us, my arguments for the duty to vote do not prescribe a self-sacrificial duty to be politically engaged all the time. This should not be a humanly impossible requirement, even if it is difficult for some at the moment.

Featured image credit: Photo by Parker Johnson via Unsplash.


December 7, 2019

Why we don’t understand what a space race means

Fifty years after the first moon landing, a quantum leap is underway in space as a domain of human activity. Although most people are unaware of it, the global space economy has rapidly grown to almost US$400 billion in size, and will likely more than triple by 2040 (compared to 2017). The space economy is now made up of hundreds of actors, from space agencies to private companies to start-ups. Over 70 countries have space programs and 14 have launch capabilities. These developments have involved intense cooperation across borders, in both the public and private sectors.

Despite this, at the opening address of the 70th International Astronautical Congress in Washington DC in October of this year, US Vice President Mike Pence delivered an unambiguous “America first” speech in front of the world’s largest, annual gathering of international space agency leaders, scientists, engineers, private companies, and start-ups. The rest of the week, NASA Administrator Jim Bridenstine tried to undo the damage, repeatedly emphasizing that none of NASA’s future goals in space – from a permanent presence on the moon to a manned mission to Mars – would be possible without international cooperation.

This high-profile gathering of the world’s leading space professionals occurred in the context of well-publicized plans to launch a US Space Force as a sixth branch of the US military. A heavily militarized narrative is palpable in American military and policy circles when it comes to space, and it is serving to ratchet up geopolitical tensions. US officials talk about the Space Force in terms of “allies” versus “adversaries,” “the next battlefield,” and have even publicly promoted the slogan, “Always the predator, never the prey,” to justify its creation.

The US Space Force is not the only worrying development in what is increasingly being characterized as Space Race 2.0. India recently destroyed a satellite in an anti-satellite test – something China proved it could do 12 years ago. China and Russia already have space forces, and India and France are considering plans of their own. Why are governments so focused on weaponizing space, while space agencies, scientists, and private companies are entirely committed to peaceful exploration, science, and industry?

A close examination of the development of space exploration since the 1920s shows that history has been on the side of the non-state actors, who have convinced states to pursue space’s peaceful uses, despite efforts to weaponize it. A major justification offered by those who profess that space is the next battlefield is the insistence that space has always been about military competition – a classic security dilemma – starting with the 1960s Space Race. However, this is a serious misunderstanding of history.

Even during the height of the Space Race, there were strong efforts on the part of Americans and Soviets to cooperate. While many assume that Sputnik – the first human-made satellite to orbit the Earth – sparked an immediate panic on the part of Americans, in fact its 1957 launch occurred during the International Geophysical Year, meaning that the goal itself was formally a collective, international endeavor and some of the science needed to achieve it was being widely shared across countries. Multiple surveys at the time showed that there was little immediate concern over the Soviet achievement. Contrary to the perception of cut-throat competition, the US and USSR were already talking about the next steps in cooperation at exactly the same time as Sputnik.

Of course, political expediency and the media eventually succeeded in stoking popular panic, but space experts spearheaded serious efforts at international cooperation. Communication between NASA and the Soviet Academy became regular, in what was informally known as the “NASA-Soviet Academy channel,” and the two countries moved forward with cooperation on a number of satellite projects, earth mapping, sharing of space medicine, and planning on how to work together on exploration of the lunar surface and Mars or Venus.

President John F. Kennedy eventually became so convinced of the need to cooperate that he delivered a 1963 speech at the United Nations inviting the Russians to work with the US on a joint moon landing. Soviet Premier Nikita Khrushchev accepted the idea in principle, and the leaders of their two space agencies signed an agreement to that effect. Although it wasn’t to be in the 1960s, this burgeoning space diplomacy paved the way for more intense cooperation in the following decades – Spacelab, Apollo-Soyuz, Shuttle-Mir, and finally the International Space Station. Since the 1960s, NASA has had around 4,000 international projects with many other countries.

Misperceptions and caricatures about the original Space Race arguably impact decisions that are made today about a potential Space Race 2.0, and have especially fed into the adversarial rhetoric surrounding the creation of a US Space Force. It is important to remember that despite the Cold War, the superpowers treated space as a categorically different domain of activity – one that could feed the human drive to explore and serve as a reminder of our common humanity. To this day, space is not weaponized. Once again, we have the opportunity to avoid a self-fulfilling security dilemma.

Featured Image: “Live Coverage of SpaceX CRS-18 Launch to the International Space Station”, via NASA.


December 4, 2019

Etymology gleanings for November 2019

Amateur etymologists and a golden key

I agree: no voice should be silenced, but it does not follow that every voice deserves equal respect. I called the previous two posts “Etymology and Delusion” and deliberately did not emphasize such words as madness, lunacy, and derangement, for perfectly normal people can also be deluded. In etymology, the line separating amateurs from professionals is in most cases easy to draw. Amateurs tend to discover a single key to all problems. They begin by refuting what they call dogma, concentrate on some one factor that, in their opinion, holds out great promise, and treat with disdain the professionals who cling to their petty, porous hypotheses and fail to see the forest for the trees.

The dogma may indeed be wrong, but this circumstance does not make the opposing view correct. Moreover, the weakness of the scholarly consensus is its main strength. By contrast, revolutionary counter-theories are not falsifiable, and that circumstance dooms them. In the posts, I cited a few daring proposals. All English idioms, we were told, go back to some lost Low Saxon dialect. Or the source of all English words is Irish (Arabic, Hebrew, Latin—you name it). According to a brilliant theory, all words of all languages derive from the roots sal, ben, yon, rosh. Conversely, the underlying concept of all words was said to be “earth.” Indeed, why not?

Since all such hypotheses appear to be equally convincing, they cancel one another out. An etymology worthy of consideration seldom goes far enough. An amateur has no knowledge of the mountain of articles and books devoted to the history of every word, be it Boche or kibosh, and sees no use in studying them, for in his mind their fallacy is a given. Details don’t bother him. (Excuse my pronoun: I am not aware of any woman among the characters I have discussed.) Similar situations also occur in other spheres of knowledge. For instance, many people insist that Shakespeare is not the author of the plays ascribed to him. Various candidates have been proposed, and we face the familiar situation: all hypotheses are equally persuasive. Last week, I suggested to those who have not read Chekhov’s Ward No. 6 to read it. Now I would like to advise them to turn to The Tale of Captain Kopeikin in Chapter 10 of Gogol’s Dead Souls; its background is characteristic. Yet it is only fair to admit that dogma may get hold of a serious researcher. A pet theory becomes an obsession, and the scholar turns into a monomaniac and charlatan. But, as a general rule, amateurs know the truth, while specialists seek it.

Guinea pigs are often suppressed. Photo by Jay Reed, CC by-sa 2.0, via Flickr.

Before the discovery of the comparative method (at the beginning of the nineteenth century), all etymologists were amateurs. While groping in the dark, they occasionally stumbled on convincing solutions, but the lack of method tended to produce monomaniacs. The famous Horne Tooke derived all words from past participles. It is a joy to see how watertight his etymologies sounded and how many people admired them. My advice to amateurs: first learn the subject, then write about it. Incidentally, in giving this advice, I have no concrete targets in view. Nor do I expect that anyone will follow my recommendation.

Early metaphors

Yes, indeed, Old English had kennings, that is, compounds that needed decipherment, such as bone-house “body.” There were also phrases like the road of the whales “sea, ocean.” I deliberately stayed away from the mindboggling kennings in Old Icelandic skaldic poetry, but, if we look at simple kennings in that language, such as the field of necklaces “woman,” we discover that they have the same structure as the road of the whales (periphrastic collocations). Yet Old Icelandic was far ahead of Old English and Old/Middle High German, for it had rather numerous idioms like our let off steam, pull the strings, bury the hatchet, and to leave something on the back burner. The image behind such phrases was clear to the speakers.

My beloved is (like) a rose. Le printemps by Emile Vernon. Public domain via Wikimedia Commons.

In principle, every idiom must have had a similar history. The difference between older and later Germanic is that the number of even transparent idioms (forgetting about Old Icelandic) was extremely small. By contrast, today every European language has hundreds of obscure phrases. No speaker of Modern English understands why something happens before you can say Jack Robinson, why it rains cats and dogs, why we pay through the nose, kick the bucket, go the whole hog, and call a spade a spade. The original motivation has been effaced. (In a way, the same can be said about words: apparently, when a word is coined, its origin is clear to the coiner.) Usually our picturesque phrases do not antedate the Renaissance. And the same holds for metaphorical thinking as a whole: in the post-Classical languages, metaphors appeared relatively late. Similes meet us at every step, but, apparently, it was hard to bridge the gulf between my beloved is like a rose (simile) and my beloved is a rose (metaphor).

A hog on ice

In the post on lie doggo, I cited the American idiom as independent as a hog on ice as a piece of confirming evidence. Since there was a question about the idiom, I may return briefly to the subject. Charles Earle Funk spent years exploring the reality behind this phrase. His first book (1948) bears the title A Hog on Ice and Other Curious Expressions. Funk sought the advice of many people, but no one knew the origin of the odd simile. However, it turned out that a hog on a smooth icy surface cannot move about in a normal manner. His feet will slide out from under him, and the legs will spread, or they will be drawn under him. So much for the poor critter’s independence. The Irish or English origin of the idiom was suggested and abandoned, because of the rarity of ice in those parts and because pig, rather than hog, was the usual British word. Despite such arguments, the idea of a Scottish origin appeared to be feasible.

Keep hogs away from ice. Land girl with pig from the UK National Archives UK via Flickr.

In the Scottish game of curling, “when a player does not give his stone sufficient impetus to cause it to slide beyond a certain distance, that stone, when it comes to rest, is called ‘hog’.” Funk risked the suggestion that sometime during the early centuries of the game (perhaps in the 1500s) someone, on seeing a heavy stone without enough momentum to carry it to its destination, likened it (partly frozen into the ice) to a hog, because of its unwieldiness. If so, Scottish immigrants brought the image and the idiom to the New World, and it stayed there. The first to suggest such an etymology was The Century Dictionary. Whether the question has been solved is anyone’s guess, but the reference to curling sounds realistic, while real hogs have probably nothing to do with our story.

Two words

Doggone

No, doggone is not an echo of the phrase lie doggo. This curse (it means “damn!”) may be an American coinage. The Scottish variant is dagone! “deuce take it.” Doggone surfaced in the 19th century, and its predecessor was dog on it. The OED calls the origin of the phrase obscure. Perhaps I may add that the sacrilegious word play god/dog has been known for centuries. In the Middle Ages, one could be executed for discussing the similarity in public.

Aqueduct

An aqueduct works equally well, regardless of how we spell the word. Roman aqueduct near Tarragona by Cruccone. CC by-sa 3.0 via Wikimedia Commons.

Why is it not spelled aquaeduct? Quite naturally, the Latin form was aquæductus “water conduit,” in which aquæ, that is, aquae, was the genitive of aqua. Italian slightly “vulgarized” the first element (the word is acquidotto). French changed the form to aqueduct and later to aqueduc (e in the middle, and no final consonant).  English seems to have borrowed the word from French.

Please send comments and questions for the last “gleanings” of the year!

Featured image credit: Image by Gerhard Gellinger from Pixabay.
