Oxford University Press's Blog

March 16, 2020

Why Iran’s dependence on China puts it at risk

The depth of ties between China and Iran was revealed dramatically in late February 2020, when news broke that some of Tehran’s most senior officials had contracted the coronavirus. By early March, one of Iran’s vice presidents, the deputy health minister, and 23 members of parliament were reported ill. A member of the 45-person Expediency Council charged with advising Supreme Leader Ali Khamenei died, and even the head of Iran’s coronavirus task force was sick. After China, Iran has more confirmed deaths from the virus than any other nation.

Yet this episode is only the latest indication of intensified relations between China and Iran. Over the past two decades, their links have deepened along strategic, commercial, technological, and political dimensions.

Leaders in Tehran and Beijing often invoke their 2,000 years of peaceful civilizational ties. The two nations were undeniably connected by the famous caravan routes of the Silk Road in ancient times, but their strategic relationship really dates to the 1980s, when China opportunistically sold arms to both sides in the Iran-Iraq War. China’s military support to Iran has waxed and waned over the subsequent decades, but it is clear that critical advances in Iran’s missile programs owe much to Chinese technologies, and the US intelligence community remains convinced that Chinese networks continue to provide sensitive defense materials to Iran in spite of international sanctions. Over the past several years, Beijing and Tehran have expanded their military partnership to include a series of joint exercises and a military cooperation agreement signed in 2016. As China’s defense industry matures and overtakes Russia’s, Iran has every incentive to continue its investments in Chinese arms.

For Iran’s clerical regime, commercial ties to China have assumed near-existential importance over the past two decades. Iran’s oil sales to China rose from $2.5 billion in 2001 to more than $27 billion in 2014. Over that same period, Iran’s imports from China grew at a similar pace, and China came to account for 50% of Iran’s total trade. When Iran was hit with Obama-era nuclear sanctions, China remained an economic lifeline through barter arrangements, financial workarounds like the China National Petroleum Corporation-affiliated Bank of Kunlun, and over $5 billion in infrastructure investments through the 2010s.

In the Trump era, Washington slapped Iran with unilateral sanctions that sent Europeans and other international investors scurrying. Yet China again sought ways to keep open its commercial ties with Iran. As evidence of these sanction-busting efforts, in September 2019 Washington leveled sanctions on subsidiaries of the giant Chinese shipping company, COSCO, for trafficking in Iranian oil. In January 2020, the Trump administration accused China of doing hundreds of millions of dollars in business with Iran’s national oil company over the prior year.

In sum, whereas Iran’s revolutionary founder, Ayatollah Khomeini, famously declared during the Cold War that his country should choose “neither East nor West,” today’s hardline leaders in Tehran perceive China as a necessary—even natural—partner in opposition to the United States. The ruling regime thus remains desperate for Beijing’s support.

That said, Tehran’s tight embrace of China comes with serious potential downsides. However essential Iran’s commercial and economic ties to China may be, they are also broadly unpopular. During the Obama-era sanctions, Iranians called Chinese barter trade a “junk for oil” scheme, believing that Chinese businesses were unfairly exploiting Iran’s plight.

In addition, Chinese investments in Iran play to the advantage of Iran’s entrenched establishment and prop up the most dysfunctional features of its economy, including the dominance of the Revolutionary Guard Corps and the well-connected religious foundations, or bonyads. Private companies—from textile manufacturers to walnut farmers—have suffered from Chinese competition. As a consequence, many of Iran’s pragmatists, reformists, and opposition figures tend to see China as a partner of necessity rather than choice.

As the Iranian regime has taken ever more repressive measures against its opponents, it has relied on Chinese tools to do so. During the “Green Revolution” of 2009, anti-regime protesters were heard chanting “Death to China,” believing that the regime’s anti-riot gear and surveillance technologies had been provided by Beijing. Today, the techniques and tools used by the regime to tighten its control over national cyberspace clearly bear the imprint of Chinese telecommunications companies, including Huawei and ZTE.

Facing further pressure from the United States, Iran’s leaders may have no choice but to make themselves even more dependent on Beijing, especially when it comes to commerce and tools of political repression. But such moves are also fraught with risk; they are not widely popular among Iranians, threaten the long-term health of the Iranian economy, and force Tehran into a vulnerable junior role in its strategic partnership with an increasingly powerful China.

For a regime that so jealously guards its national autonomy and independence, such outcomes may prove even more dangerous than the immediate threat posed by the coronavirus.

Featured Image Credit: hanging signage lot by Alana Harris via Unsplash

What is the place of human beings in the world?

Philosophers disagree on what philosophy is supposed to do, but one popular candidate for what is part of the philosophical project is to try to understand the place of human beings in the world. What is our significance in the world as whole? What place do human beings have in the universe and in all of reality? These questions are not merely about how we are different from other creatures, or whether we are special in the sense that we are the best or worst at something. We might be the best at music in all of the universe, but that by itself does not make us special in the right way. The largest volcano is special in that it is the largest one, and that makes it special in some sense. But the universe might not care about large volcanoes, so to speak, just as it might not care about music. If we are truly special in and central to the world then we must be so in some way other than being simply best or worst at something. But it is not easy to say more precisely in what sense we must be special in order for this to be of philosophical interest.

Let us call the question we are after when we ask about our place in the world the Big Question. How to state the Big Question more precisely and explicitly is not clear, but we can approach the Big Question by considering the two main responses to it that seem to answer it as it is intended.

The first of these answers is the broadly naturalistic answer, which holds that we are not special. We are merely slightly more complex creatures that arose out of a fortunate accident, one of merely local significance, here on planet Earth. But the world as a whole, the universe, or, more philosophically, all of reality, is largely indifferent to us. We are at best a fortunate bonus to reality, but not a central part of it.

The second main answer to the Big Question holds that we are special, since we have a special relationship with God. We are central to the world as a whole, since we are part of the reason why there is a material world in the first place: God created it for us, or at least with us in mind. If this were so then we would be truly special in the world as a whole and have a significance that other creatures wouldn’t have.

These two answers to the Big Question are well known and fiercely debated. And this debate naturally is in part about whether there is a God who has such a close connection to us.

But besides these two answers there is also a third answer to the Big Question. This answer holds that we are special, not because we have a close relationship to God, but more directly, because there is an intimate connection between our minds and reality itself. Our minds and the world as a whole are, somehow, connected, and this link gives us a central significance in the world. Such an answer has been defended from time to time in the history of philosophy, where it is generally labeled as idealism. This idealist answer to the Big Question is nowadays considered mostly of historical significance, as a view that was defended in the distant past, but which is a complete non-starter in contemporary philosophy. Nonetheless, there is reason to think that this idealist position is indeed correct. And if so, then our significance in the world would come not from a connection to God, but from a direct and intimate connection between our minds and reality itself. Such a view must surely seem absurd. After all, human beings have only been in contact with a small part of the universe, spatially and temporally, so how could we be central for all of it? Let’s consider one way this could be.

Ludwig Wittgenstein famously declared that reality is the totality of facts, not the totality of things. Whether he was right is, of course, another story, but at least this points to an important difference in how we might think of the world, or reality: either as all of the objects or things, or as all of the facts or truths. And those are different: The cup is a thing, but that the cup is full is a fact. The cup figures in the fact that the cup is full, but the cup itself is different from that fact.

The question remains whether facts themselves are also things. Is the fact that the cup is full a thing, although maybe a different kind of thing than the cup? If facts are things, then when we talk about all the things the facts are included in this. If facts are not things, then talking about all the facts is very different from talking about all the things. What then are we doing when we talk about facts, in particular all the facts? This is a substantial question about our own language and our own speech, one that concerns philosophy as well as linguistics. Talk about all the things is most naturally understood as making a claim about a domain that is just there, waiting to be talked about. But if facts are not things, then this picture does not directly carry over to talk about all the facts. Instead of talking about a domain of things we might simply generalize over the individual instances when we talk about all the facts: the fact that the cup is full, the fact that snow is white, etc. Claiming that all facts are such and such would then come down to claiming that each instance is such and such. No domain of things would be relied upon in doing this. And these instances are each expressed in our own language, and so each fact is one that can be represented in our language.
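
One way to picture the contrast in logical notation (a rough sketch of my own; the formalization is not part of the original argument):

    % Objectual reading: "all facts" quantifies over a domain D of fact-entities
    \forall x \, (x \in D \rightarrow \Phi(x))

    % Schematic reading: "all facts" generalizes over the instances themselves
    \Phi(\text{that the cup is full}), \quad \Phi(\text{that snow is white}), \quad \ldots

On the schematic reading, no domain of fact-things is relied upon: the generalization reaches exactly as far as what the sentences of our language can express, which is the harmony drawn out below.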

If facts are not things and this picture of what we do when we talk about facts is on the right track, then there is a kind of a harmony between reality as the totality of facts and what can be represented in our language. All facts are representable by us, since talk about all facts, including what I say in this sentence, generalizes over the instances in our own language. Thus it is ruled out that there are facts that cannot be represented in our language, at least in principle. And so what can be represented by our human minds or languages and reality as the totality of facts are connected in an intimate way. Any facts that could obtain must be representable by us. And this leads to the kind of idealist third answer to the Big Question alluded to at the beginning. We are central in the world as a whole, not because of our connection to God, but because of a direct connection between our minds and reality itself.

We are not the center of the universe in any spatial sense, nor does our centrality rely on a connection to God. We are central because there is a harmony between reality as the totality of facts and what our minds can represent in thought or language. And if this is indeed so, then we truly have a special place in the world.

Featured Image Credit: ‘Woman standing on rock in front of mountain during daytime’ by Lucas Wesney via Unsplash.

March 15, 2020

How air pollution may lead to Alzheimer’s disease

Air pollution harms billions of people worldwide. Pollutants are produced by all types of combustion, including motor vehicles, power plants, residential wood burning, and forest fires, so they are found everywhere. One of the most dangerous of these pollutants, fine particulate matter, consists of particles 20 to 30 times smaller than the width of a human hair. Their tiny size allows them to be easily inhaled into the body, causing a number of adverse health effects. Over the past few decades, it has become widely recognized that outdoor air pollution is detrimental to respiratory and cardiovascular health, but recently scientists have come to acknowledge the damage it may cause to the brain as well.

A growing body of scientific evidence from around the world has shown that there may also be a link between fine particulate matter, cognitive performance, and Alzheimer’s disease and related dementias. The mechanisms by which fine particulate matter may lead to Alzheimer’s disease are unclear. Since animal studies have shown synaptic damage and neuronal loss resulting from exposure to ambient particles, could fine particulate matter be causing structural changes to the human brain, eventually leading to memory decline and Alzheimer’s disease?

Recently, researchers used data from 998 women from the Women’s Health Initiative who were aged 73 to 87 and had up to two brain scans five years apart. Researchers scored women’s brain scans on the basis of their similarity to patterns of gray matter atrophy, using a machine learning tool that had been trained to learn these patterns via brain scans of people with Alzheimer’s disease. Higher scores indicated that a woman’s brain structure was more similar to someone with Alzheimer’s disease. They also collected women’s addresses and built a mathematical model that allowed them to estimate the daily outdoor fine particulate matter levels where these women lived throughout the study period. When researchers combined all of the above information, they found a significant association between higher exposure to fine particulate matter, physical brain changes, and memory problems, even before any symptoms of Alzheimer’s disease became apparent. More precisely, fine particulate matter was associated with gray matter atrophy in brain areas where Alzheimer’s disease neuropathologies are thought to first emerge, and those changes were connected to declines in episodic memory performance. These findings could not be explained by age, race, geographic region, income, education, employment status, smoking, alcohol use, physical activity, or cardiovascular risk factors.
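
The scoring step can be pictured with a toy sketch (hypothetical code: this is not the study’s actual tool, and the data, model choice, and names are invented for illustration). The idea is to train a classifier to separate scans of Alzheimer’s patients from scans of healthy controls, then read its continuous output on a new scan as an Alzheimer’s-pattern similarity score:

    # Toy sketch of an "AD-pattern similarity" score; all data are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_voxels = 500                              # flattened gray-matter map (toy size)
    ad = rng.normal(-0.5, 1.0, (40, n_voxels))  # simulated scans of Alzheimer's patients
    hc = rng.normal(0.0, 1.0, (40, n_voxels))   # simulated healthy-control scans
    X = np.vstack([ad, hc])
    y = np.array([1] * 40 + [0] * 40)           # 1 = Alzheimer's disease

    model = LogisticRegression(max_iter=1000).fit(X, y)  # learn the atrophy pattern

    new_scans = rng.normal(-0.2, 1.0, (5, n_voxels))     # e.g., study participants
    scores = model.decision_function(new_scans)          # higher = more AD-like
    print(np.round(scores, 2))

In the study itself the tool was trained on real brain images, and each woman’s score was then related to her estimated fine particulate exposure; the sketch shows only the scoring idea.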

It appears that fine particulate matter is associated with increased risk of Alzheimer’s disease and related outcomes. This identifies one way air pollution may be linked to memory decline.

While some risk factors for Alzheimer’s disease are non-modifiable (e.g., age and genetics), several risk factors, such as environmental exposures, may be modifiable through lifestyle changes. Since there are currently no treatments for Alzheimer’s disease, it is becoming increasingly important to identify modifiable risk factors of the disease. Air pollution appears to be one such risk.

Featured Image Credit: Ralf Vetterle via Pixabay

March 14, 2020

What good writers do

In his novel Love in the Time of Cholera, Gabriel García Márquez writes:

Florentino Ariza moved through every post during thirty years of dedication and tenacity in the face of every trial. He fulfilled all his duties with admirable skill … but he never won the honour he most desired, which was to write one, just one, acceptable business letter.

Plenty of authors in the humdrum world of business writing seem to share Señor Ariza’s difficulty. Standard letters and emails are sometimes decently written, which suggests the presence of editorial oversight. But one-off responses to customers’ questions and complaints rarely make sense, are full of grammatical and spelling mistakes, and use robotic jargon in which authors “reach out” while “touching base”; “address challenges” instead of “tackle problems”; and use “going forwards” whenever they want to talk about the future. Semiliterate responses from organizations great and small have become normal.

Take this muddled and meaningless letter signed by the chief executive of a leading eye hospital, responding to a complaint about a cancelled appointment:

Your appointment for the 26th January had to be cancelled because there were no clinics running that day. It is normal practice for access to all clinics for this day being denied, however this was not possible on this occasion due to the outreach clinics still running.

And this email from the John Lewis store is gibberish:

Further to our conversation this morning regards the estimates for blinds I have sent for the Make of Roman blinds I have spoken to my Manager regarding your comments.

In my experience, such examples are usually written by native English speakers with British-sounding names and, if I phone them, British accents—so we’re not talking about non-native speakers.

A common mistake is not knowing that a full stop or semicolon is usually the correct punctuation at a sentence boundary, something you’d hope authors would have learnt by the age of eleven. The error occurs six times in this email from Hoover/Candy, which was responding to a complaint that they’d reneged on a call centre’s promise to replace a broken appliance free of charge or at a large discount. I’ve put an [X] where all the full stops should be, but ignored other errors including loose for lose.

Thank you for your feedback[X], please note the contact centre staff are not aware that we factor a discount based on the age of the current appliance on a sliding scale[X] the newer the appliance the bigger the discount etc.

The appliance in question is over three years old[X] the discount applied is considered to be good[X] however, when you add VAT and carriage this is what takes the amount over the recommended RRP on our website.

We would not like to loose your mother as a customer[X] hence, on this occasion we are willing to add further discretionary discount based on the offer of model PU71/PU01001 current RRP of £89.99[X] the new price to you will be £64.99.

Hoover/Candy are not minnows. No doubt they have a training budget. Why don’t they spend some of it on improving staff writing skills?

Good writers exist somewhere in most organizations. When responding to complaints, they tend to do the same things and make it look deceptively easy:

- They assess the problem and decide what they want to achieve.
- They plan, which gives them a clear structure. In longer letters, they use subheadings.
- They use simple sentences (average 15–20 words), everyday language, and good punctuation. They know enough about modern English that they’ll begin sentences with But and So when needed. They also know that nobody’s worried about split infinitives any more.
- They put themselves in the reader’s shoes, and they deal with all the relevant problems the customer has raised.
- They re-read what they’ve written for sense and logical flow.

Companies can get it right if they try. Last year, after months of dismal ineptitude over a cooker repair, a senior official at Aga Rangemaster finally realized what had gone wrong. Suddenly, there emerged an email with coherent English, proper punctuation, some empathy, and a settlement worth several hundred pounds:

After reviewing the details of the case I can appreciate the frustration you have felt in trying to get the appliance repaired. I would like to reiterate my apologies on behalf of AGA Rangemaster for the service you have received. As a gesture of goodwill to try and restore your faith in AGA Rangemaster and demonstrate that we can improve, we would like to offer you two free years on the AGA Care Plan.

This kind of quality matters, because a company’s reputation rides out with every letter and email it sends. Writing well is hard work and often people need training to do it. There should be a pride in writing well, even about workaday things. Clear, well-expressed documents help to build and restore customer confidence. Ultimately, they pay off.

Featured Image Credit: Startup Stock Photos via Pixabay

March 13, 2020

Lessons for the coronavirus from the 1899 Honolulu plague

Public health officials all over the United States—indeed globally—are trying to decide how to deal with the world’s coronavirus pandemic. They know the coronavirus originated in China, and they know they can identify it with certainty. But they do not know what might kill it, and they have no cure for anyone who contracts it. Importantly, they are under intense economic and political pressure to suppress the pandemic, despite the diversity, suspicions, and limited trust of the populations affected.

Few people realize, however, that this is not the first time US health officials have faced a pandemic crisis under almost exactly the same circumstances. In 1899, the world’s third great wave of bubonic plague reached Honolulu. Neither of the world’s first two waves of Black Death had made landfall in the islands, so the situation was both novel and terrifying. Like officials today, health officers in Hawaii knew the disease had come from China, and they could identify it with certainty under microscopes. But they did not know how to kill it, they did not know either how or how rapidly it spread, and they did not know how to cure it. And they too were under intense economic and political pressure; all trade with the islands was suspended under international protocols, and their own local government had panicked.

The city’s health officials began by implementing quarantines around the district where the first victims died, but like today those quarantines failed to contain the disease. Since the early victims were Chinese, ugly cries arose for the destruction of all Asian neighborhoods on the pretext that they seemed to be breeding grounds for plague; blaming victims and increased hostility toward minorities had been hallmarks of health-related panics since ancient times. Honolulu’s health officials bravely rejected those demands, but with the economy at a standstill and pressure mounting from Washington, they dared not do nothing.

With no proven options available, they decided—on the first day of the twentieth century—to try burning any building where a victim had died. They hoped that would eliminate the plague’s strongholds. On January 20, 1900, an unexpected shift in the prevailing wind transformed one of those targeted burns into an uncontrollable firestorm, which completely destroyed one-fifth of Honolulu in a matter of hours. The thousands of people who had lived in the incinerated district—almost all of whom were Chinese, Japanese, or Hawaiian—not only lost everything, but found themselves forced into ad hoc quarantine camps under armed guard for weeks. Mismanaged compensation schemes worsened their suffering for years to come. By succumbing to the pressure for action, those health officials had inadvertently precipitated what remains the worst civic catastrophe in Hawaiian history and one of the worst disasters ever initiated in the name of public health by American medical officials anywhere.

While catastrophes on the scale of the great Honolulu fire are unlikely to recur, the Hawaiian experience suggests that contemporary policymakers need to weigh their options carefully. They face remarkably similar circumstances and they too will continue to be under intense economic and political pressure to act. Officials need to be reasonably certain that their actions will do more good than harm, in the short term as well as the long term. They need to consider in advance what could go wrong as the result of any given policy—however safe and well-intended it seems—and take steps to guard against those possibilities. And they need to pay special attention to those who are least influential and most vulnerable; they were the ones who suffered disproportionately a hundred and twenty years ago, and they are likely to be the ones disproportionately affected today.

Featured image: Honolulu Chinatown Fire of 1900, public domain via Wikimedia.

Learning microbiology through comics

What do most people know about microbes? We know that they are tiny creatures that can attack us, cause illness, and kill us. Recent outbreaks such as measles and the Wuhan coronavirus are discussed heavily in the media. Since we understand microbes mainly through the mass media, it is not surprising that antibiotic abuse is widespread.

However, scientists are also to blame; we are failing to communicate science to society with accurate and rigorous facts, in simple words, so that everybody can understand, engage, and act accordingly. An excellent example of this failure is vaccine hesitancy, listed by the World Health Organization as one of the ten threats to global health.

Fortunately, in the last few years there has been increasing concern among the scientific community about science education, and microbiology is no exception. Scientific journals are taking up microbiology education in thematic issues, editorial comments, and consensus statements. There are even new journals specifically dedicated to microbiology education.

Why is microbiology so important? We live in a microbial world, yet gaining awareness of this unseen world is not easy. Microbes are hidden in plain sight. We live most of our life with no proof of their ubiquitous existence. However, microbes matter to our lives and to every ecosystem. They were the first organisms on Earth and will definitely be the last ones. Thanks to them, all life forms we know were possible and the list of awesome things they can do is infinite.

The only tool we have to bring microbes closer to society and to change misconceptions is microbiology education. Teachers need to introduce the discipline to children at early ages so kids can appreciate these amazing organisms and comprehend their importance to all forms of life, to climate, food, health, and much more. But teaching microbiology isn’t easy. How confident can a teacher be talking about invisible organisms with such a bad reputation? We need to explore resources in order to make educators feel confident and passionate about the discipline so they can share it, and we need to spark children’s interest. Comics are one great resource.

This two-page comic depicts antibiotic resistance acquisition among bacteria, introducing the concept of horizontal gene transfer in simple words. Taken from “Bacteria: the tiniest story ever told,” created by the author’s group.

Comics are entertaining, visual, and easy to follow at the reader’s own pace. They are an excellent resource for explaining difficult concepts and things that are beyond human perception, such as microbes. They seem to be the perfect tool for children to gain awareness of the invisible world of microbes, to comprehend their complexity and beauty, and to understand that we cohabitate this planet not only with all the organisms that we can see, but also with those hidden in plain sight. In addition, who doesn’t like comics?

There are some great comic strips and comic books dedicated to introducing microbiology concepts and themes. Subjects such as the human microbiota and how we relate to and depend on it, vaccines and how they work, and diseases like measles and Ebola all feature in microbiology comics that can be found online.

It is important to generate more resources for microbiology education. We need to evaluate the effectiveness of comics in knowledge acquisition and explore other tools that might do the job. Microbiologists and educators should make it clear to policymakers that if we invest in microbiology education, the outcome will be an informed society that understands the importance of using antibiotics responsibly, helping us avoid one of the biggest threats to global health today. People would no longer question vaccination to protect those who are most vulnerable and exposed. Finally, people would understand that protecting biodiversity is a necessity.

We need to talk more about microbes, because microbiology can be comic, but the outcome of a society with no microbiology literacy is not.

Featured Image Credit: ‘Virus-1913183_1920’ by Monoar. Public domain via Pixabay.

March 12, 2020

Why we like a good robot story

We have been telling stories about machines with minds for almost three thousand years. In the Iliad, written around 800 BCE, Homer describes the oldest known AI: “golden handmaidens” created by Hephaestus, the disabled god of metalworking. They “seemed like living maidens” with “intelligence… voice and vigour”, and “bustled about supporting their master.” In the Odyssey, Homer also gave us the first autonomous vehicles — the self-sailing ships that take Odysseus home to Ithaca, navigating “by thought,” along with biomimetic robots — a pair of silver and gold watchdogs which guard a palace not with teeth and claws but with their “intelligent minds.”

Such stories have been told continually ever since. They come in a wide range of forms: myths and legends, apocryphal tales, film and fiction, and serious-minded speculations about the future. Today more than ever, intelligent machines are staples of blockbusters and bestsellers, from Star Wars and Westworld to Ian McEwan’s Machines Like Me. Currently, we might call such machines “AI” — artificial intelligence — a term coined in 1955. But they have had many other names, all of which have different nuances, including “automaton” (since antiquity), “android” (1728), “robot” (1921), and “cyborg” (1960). It seems that thinking machines are, as Claude Lévi-Strauss once said of animals, good to think with.

Why? A few scholars have tried to explain this fascination with humanoid machines. The first was Ernst Jentsch, who claimed in his 1906 essay “On the Psychology of the Uncanny” that “in storytelling, one of the most reliable artistic devices for producing uncanny effects easily is to leave the reader in uncertainty as to whether he has a human person or rather an automaton before him.” He illustrated this with a brief mention of the 1816 short story “The Sandman” by E.T.A. Hoffmann, which features a young man Nathanael, enchanted by the beautiful Olimpia, who lives in the house opposite with her father. She is an excellent dancer, but does not speak beyond saying “Ah-ah!” Nonetheless, Nathanael is so distraught when he discovers the object of his rapture is in fact an automaton, constructed by her “father,” that he commits suicide.

In his essay “The Uncanny” a few years later, Sigmund Freud further developed this idea — and in so doing made that term a staple of literary analysis. But his analysis of “The Sandman” actually diverts attention away from Olimpia’s artificial nature, and towards something quite different: “the theme of the [mythical] ‘Sand-Man’ who tears out children’s eyes.” The notion of the uncanny has nonetheless remained central to thinking about our reaction to human-like machines to this day: the Japanese roboticist Masahiro Mori famously coined the term “uncanny valley” to describe the unsettling effect of being confronted with a machine that is almost, but not quite, human.

However, Minsoo Kang argued in his book Sublime Dreams of Living Machines (2011) that Freud is wrong to only focus on the fears associated with automata, such as being deceived by a machine into thinking that it is human as in the case of “The Sandman.” Throughout history people have also linked hopes and positive feelings to automata. For example, E.R. Truitt writes in her book Medieval Robots (2015) about the chateau of Hesdin in medieval France, where automata pulled pranks on unsuspecting visitors; or Star Wars fans might think of the comic relief provided by the robots C-3PO and R2-D2.

Kang’s own view is that the humanoid machine is fascinating because it is “the ultimate categorical anomaly. Its very nature is a series of contradictions, and its purpose is to flaunt its own insoluble paradox.” The uncanny is one aspect of this, in challenging the categories of the real and the unreal. But it is not the whole story, as such machines also challenge our divisions between, for example, the living and the dead, or creatures and objects.

Kang’s analysis is surely right — but again, not the whole story. Writer and critic Victoria Nelson added another piece of the puzzle, suggesting that repressed religious beliefs motivate many of the stories we tell about humanoid machines. This too seems right. Classicist Adrienne Mayor describes mythical tales of intelligent machines as “ancient thought experiments” in the potential of technology to transform the human condition. This too seems like an important function, and we argue in the book that stories — in particular, the last hundred years of science fiction — offer the most nuanced explorations available of life with AI.

So one explanation for why humanoid robots are so good to think with is that they can fulfil so many functions, and be by turns unsettling, funny, or thought-provoking. To this we want to add another piece: in a way, they can be what we want them to be, unbounded by what would count as “realistic” for a human. In this, they can fulfil narrative roles akin to those of gods or demons, embodying archetypes and superlatives: the ruthless unstoppable killer, the perfect lover, or the cerebral, ultra-rational calculating machine. They allow us to explore extremes — which is one reason why robot stories tend towards the utopian or dystopian. Stories about such machines are therefore always really stories about ourselves: parodying, probing, or problematizing our notions of what it means to be human.

Featured Image Credit: Pete Linforth via Pixabay

March 11, 2020

Some of our tools: “awl”

The names of weapons, tools, and all kinds of appurtenances provide a rare insight into the history of civilization. Soldiers and journeymen travel from land to land, and the names of their instruments, whether murderous or peaceful, become so-called migratory words (Wanderwörter, as they are called in German: words errant, as it were). I have dealt with such words in the posts on bodkin (October 7, 2015) and ajar (August 22, 2012). For a long time, I have been meaning to write a short essay on adz(e), a word discussed in my etymological dictionary. As a rule, I prefer not to replicate the material of the dictionary in this blog, but there the exposition is technical, so next week I’ll probably retell the story in more popular terms.

Before going on, I would like to note that not only awl but the very word tool ends in -l. This -l is a remnant of several once productive suffixes. We detect it, for example, in bridle, girdle, saddle, satchel, and needle. Tool has it, because its root meant “to produce, prepare.” Many Romance words also end in -l—there, a diminutive suffix (so in satchel “little sack” and the like). The Latin for “awl” is sūbula. Its sū- is akin to Engl. sew-; apparently, the word meant an instrument for sewing. Russian shilo (that is, shi-l-o) is a close analog of sūbula (shi-t’ “to sew”). Deceptively or for good reason, awl, too, ends in -l. In any case, this l might help it to survive. All the rest is enveloped in obscurity.

A cobbler and his tool. Image credit: Ancient Egyptian cobblers at work via Wikimedia.

The Old English for “awl” was æl. It occurred only as a translation of or gloss on Latin sūbula. The word figures in the biblical texts in descriptions of torture: for example, people’s ears are said to be pierced with an æl. The word continued into Middle English, but for phonetic reasons the form awl, homonymous with all, cannot be its reflex: the modern word would have been pronounced as ale. Awl goes back to Scandinavian al-r, a cognate of æl. This is a common situation: an English word competed with its Danish relative and lookalike and was ousted by it. See the map: the Danes ruled over two-thirds of mediaeval England.

This is the part of England where the native population interacted with the invaders (new settlers). Image Credit: OpenDemocracy via Flickr.

A similar instrument was called prēon(e) in Old English. It meant “pin; brooch,” and dialectal preen still means “pin” or “pincers for removing clothes pegs” (unrelated to the verbs preen and prune), but German Pfriem(en) and Dutch priem designate the same instrument as awl. Their origin is as obscure as that of awl. A word cognate with æl was known elsewhere in West Germanic. Old High German ala became Modern German Ahle (h in it is only a graphic sign of vowel length). I always pass by the niceties of pronunciation irrelevant to the word’s origin, but in this case, the length of the vowel in æl is of some importance, because dictionaries give partly misleading information on this subject and because the etymology of awl depends on our knowledge of the value of æ in æl and of a in Old High German ala.

The spelling of the attested forms provides no information on how those æ and a were pronounced, because in Middle English and Middle High German they would have been lengthened anyway. Therefore, when our most authoritative sources (solid dictionaries and papers by distinguished German and Finnish scholars) reconstruct the ancient Germanic and Indo-European form with a long vowel (for example, ērō), this information should be taken with a grain of salt. Awl has close counterparts in Baltic, Finnish, and Sanskrit (a typical situation when one deals with the names of instruments and tools), and in those languages the root vowel is indeed long. Yet when a word travels from one part of the world to another, its pronunciation is liable to change. Perhaps Old Engl. æl and Old High German ala did have long æ and long a, but perhaps not. Migratory words are just vagabonds wearing similar clothes, and Old Engl. æl, Lithuanian ýla, Finnish ora “thorn,” and Sanskrit ārā “awl” need not go back to an Indo-European root. Some speakers of an extinct language that invented a thorn-like instrument may have taught the inhabitants of India how to use it, and the tool, along with its name, began to move west.

The form ala enjoyed obvious popularity, because it also existed with an additional instrumental suffix. Alongside Old High German ala, in later northern German texts elsene and elsen were recorded (hence Modern Dutch els). The suffix is familiar from the German noun Sense “scythe.” It appears that one instrumental suffix (l) in ala was not enough, or perhaps -l was not understood as a meaningful element. This detail would not have been worthy of mention if some Germanic form like alasno had not migrated to the Romance-speaking world: hence Spanish alesna, French alène, and Italian lesina.

Of course, not the word but the tool and the people who wielded it “migrated.” However, we cannot ascertain the epicenter of its spread in the enormous territory between India and the Baltic Sea. It seems more reasonable to reconstruct the starting point in the East, but what was so special about that first awl, and who carried its fame to the remotest borders of Europe? We have no answer, and that is why the etymology of awl remains “unknown.” Awls are used for piercing small holes in leather, wood, etc., and can be bent- or straight-pointed. Hence the distinction between bradawls and sewing awls. It has also been suggested that some ancient awls were used as weapons. Is this what made them so well-known over most of Eurasia?

The new settlers were not particular about phonetic niceties. Image Credit: British Library via Flickr.

A curious episode unites the history of awl and auger, another piercing instrument. Auger is what remains of the once long compound nafogār, from nave (as in the name of the hub of a wheel) and gar “spear” (as in garfish and others). A nauger became an auger, because of misdivision (the technical term for it is metanalysis). By contrast, an awl has often been attested in dialects as a nawl. Phonetics played a decisive role in the name of another boring instrument, namely, wimble. The well-known alternation of French gu and English w (as in Guillaume versus William) produced gymble. A diminutive suffix turned it into gimlet, remembered mainly from the phrase eyes like gimlets. Awl, bodkin, preen, auger, wimble ~ gimlet—I don’t find the story boring.

Only a postscript remains to be added to this essay. Alongside æl, Old English had āwel “flesh hook,” this time definitely with a long vowel, which developed because of the loss of h after short a in the original combination. Its root is akin to ac– in acute. Consequently, the flesh hook was simply “a sharp instrument.” Yet the similarity with the word for “awl” is almost uncanny. For some time, historical linguists believed that they were dealing with the same word; yet James A. H. Murray, the OED’s first editor, sensed the difficulty. In 1905, a special article dealt with this small problem, but it took the authors of even dependable manuals and dictionaries quite some time to represent facts in their true light. According to a Russian saying, one cannot conceal an awl in a sack: the truth will come out. And it did.

Featured Image Credit: Dominique grassigli via Wikimedia

Let people change their minds

Everyone does it. Some people do it several times a day. Others, weekly, monthly, or even just a few times in their lives. We would be suspicious, and rightly so, of someone who claimed never to have done it. Some have even become famous for doing it. Making a public show of it can make or break a career. But how often is too often?

I’m talking about changing one’s mind. As philosophers, we are often surprised when one of us reverses position on an issue. Hilary Putnam, for example, was famous for changing his mind, and this fact is mentioned in nearly every article about him. But why is this really so remarkable? Indeed, isn’t the really surprising thing that we’re not changing our minds more often? Think about it: We spend our professional lives discussing and assessing arguments against our position. We have lots of intelligent colleagues. Indeed, I think many philosophers are not only as intelligent as I am, but more intelligent. Of this group, some have given more thought to the issues than I have. As a result, there are very strong arguments out there against my position. And this isn’t just because I happen to have a weak view; the same can be said for virtually any interesting philosophical position (if no one disagrees with your view, you should worry—it’s not a good sign). Given this, and given that our profession is predicated on the view that arguments should and do affect our opinions, the question remains. Why is it so noteworthy when philosophers change their minds?

Maybe changing one’s mind frequently is a sign of taking positions too quickly, without sufficiently attending to, or waiting for, evidence. If I find myself changing my mind about an issue, it is a sign that I made up my mind prematurely to begin with—I should have waited to hear more arguments on the issue.

Maybe. But not necessarily. One could end up endlessly flipping and flopping precisely because one takes belief formation very seriously. While some changes of heart might happen because of overly hasty belief formation, or beliefs that are motivated by self-interest, it’s also possible that a willingness to take countervailing evidence and arguments extremely seriously leads one to change one’s mind often.

Yet another possibility is that mind-changing doesn’t actually reflect change in beliefs; instead, philosophers who change their minds are simply taking up arguments without believing them, perhaps for opportunistic reasons—getting more publications, say. In this case, changing one’s mind is really changing one’s position, since one’s mind was never fully made up to begin with.

Belief in one’s work might have instrumental value—perhaps one will defend it more ardently if one is a true believer, perhaps one will produce more or better arguments—but it’s not a requirement or duty. We’re more like lawyers than priests. Our job is not to profess views out of faith, but to offer the best possible defense of the positions we choose to represent. If that’s right, philosophers who change their mind often are not deficient in belief, and don’t lack sufficient conviction, because no conviction is required. A good argument is a good argument, regardless of whether or not its author sees it as such. If this is right, then it’s no surprise when philosophers change their mind; we ought to expect and even reward it.

Still, one might worry about relaxing the requirement that philosophers believe what they write. If people aren’t required to stand behind their arguments, we seem to be opening the door to a kind of cavalier attitude towards argument, a kind of just-putting-it-out-there casualness that leads people to make arguments for the sake of eliciting a reaction. And while eliciting critique can be productive, it can also be distracting and a drain on our epistemic resources. Responding to arguments takes time, and for this reason alone, lowering the bar so that one needn’t believe an argument in order to make it seems to open the door to problematic trolling—making claims or taking positions just to elicit a reaction.

The solution here lies in distinguishing two things: our attitudes towards specific arguments and positions, and our attitude towards the broader project we’re engaged in when we do philosophy. I’ve been arguing that philosophers don’t need to believe in their arguments in order to make them. But what they do need to believe in is the project of philosophical inquiry itself. A philosopher might offer up her argument in the absence of conviction but in the hopes of furthering the philosophical discussion around it. This is very different from someone who offers up a controversial claim in order to stir the pot of internet discourse, or enrage his opponents. While belief in one’s position can be laudable, it’s not the only laudable motive for doing philosophy. One can aim at truth even while reserving judgment on whether one has hit it this time.

Featured Image Credit: Person Using Laptop by NordWood Themes via Unsplash.

March 10, 2020

Scientific facts are not 100% certain. So what?

Science affects everyone. Generally, people want to trust what scientists tell them and they support science. Nevertheless, groups, such as climate-change deniers, tobacco industry employees, and others, find fertile ground for their obfuscatory messages in the public’s lack of understanding of science. While the entrenched economic, political, or social interests that feed the various controversies are beyond our control, we scientists could make a difference by clarifying what we’re doing.

I think that people are confused by what scientists say. Indeed, on the face of it, scientists often talk in riddles, not to say nonsense. Consider that any thoughtful scientist will acknowledge that no scientific fact can be established as being 100% certifiably true. Then, a minute later, that same person will turn around and solemnly declare that anthropogenic (man-made) global climate change is unquestionably a fact. It’s well underway and an increasing danger to us all. No wonder the public doesn’t know what to believe.

Why don’t scientists do a better job of communicating with non-scientists? I suspect that many of us just don’t have good answers to the questions about the nature of science that we’re sometimes asked. Our education is packed with narrowly focused required courses. Our mentors are harried and results-oriented, with little time for, or even frank antipathy to, larger philosophical topics. When I surveyed hundreds of members of biological societies, I was surprised to find that 68% of us had almost no formal training in the scientific method or scientific thinking and reasoning. As a group, we devote even less attention to topics that range beyond our areas of expertise.

Let’s look at the paradox alluded to above: how can we square the tenet that scientific facts cannot be established to be true, with the reality that scientists frequently behave as if their facts were true?

One solution comes from realizing that science is not a unified endeavor. In particular, there is a gulf between the ultimate goals of basic (or pure) and applied science. Basic science seeks knowledge for its own sake, and it is uncompromising: nothing less than a complete explanation of every aspect of nature—everything, everywhere, and for all time—will satisfy it. An unattainable objective, obviously, but that’s the way it is. And it is no more or less foolhardy than pursuing perfection in other aspects of life, as many artists, athletes, mathematicians, etc., normally do. The context of basic science is where we have to keep the dictum of “no 100% true facts” firmly in mind.

What about our attitudes towards climate change? First off, this is an applied science problem, and we can identify applied science problems without knowing all of the details that basic science is seeking. Applied science pursues practicable outcomes, not an abstract ideal of all-encompassing certainty. It relies on the best information currently available and it accepts that the information must be incomplete at some level. There is nothing slipshod or objectionable about depending on incomplete knowledge; we do it all the time.

Take a homely example: the slipperiness of ice. Today, in 2020, science cannot give a complete explanation of why ice is slippery. Early hypotheses about a micro-layer of melted water caused by, e.g., the pressure and friction of a skate blade, were falsified by the finding that ice at temperatures close to absolute zero, where liquid water cannot exist, is still slippery. A full accounting will probably be found deep within a quantum-mechanical framework. Meanwhile, basic physics acknowledges ignorance on this point and keeps on investigating. Applied science can’t and needn’t wait for the answer. If slippery ice is a problem, then put sand or salt on it, mount snow tires on vehicles, melt it, chip it away, or avoid it altogether.

The mere fact that we do not comprehend a problem or its solution in the minutest detail does not preclude sensible action. We don’t need a finished theory of slippery to prevent slipping.

When the sowers of doubt claim that we can’t do anything because not all the data are in, they’re half-correct. All of the basic science data are not in, that’s true. But then all of the basic science data never will be in. It’s wrong to imply that, therefore, applied science is stymied. It’s not. We need to know only whether feasible steps would alleviate a problem and then—guided by the best information we have—take them. For decades, we’ve known that preventative actions can reduce the dangers of climate change, tobacco-smoking, etc. We’ve done little, in part because we’ve been misled by self-interested economic forces that want us to believe that any action must wait for total understanding. Appreciation of the complex nature of science can keep us from being fooled again.

Featured Image Credit: Jacqueline Godany via Unsplash
